Maximilian K. Noppel

Karlsruhe Institute of Technology – Ph.D. Student

noppel@kit.edu

KIT, bldg. 50.34 room 163

Am Fasanengarten 5

76131 Karlsruhe

I am a doctoral researcher in the Intelligent System Security (IntelliSec) group of Christian Wressnegger. After my B.Sc. in Computer Science, I worked for three years as a software engineer and software architect for embedded multiprocessor devices before returning to university. In 2020, I received my M.Sc. in Computer Science from the Karlsruhe Institute of Technology (KIT). My studies concentrated on IT security, cryptography, anonymity and privacy, and algorithm engineering.

As a doctoral researcher, I focus on the vulnerabilities of eXplainable Artificial Intelligence (XAI) in adversarial environments. XAI methods augment the predictions of an ML model with an additional output: the explanation. This larger output surface multiplies the number of possible adversarial goals: an adversary may fool the prediction, the explanation, or both simultaneously. Note that with the term ‘fooling,’ I capture diverse incentives, e.g., showing a target explanation or injecting a backdoor. I research these attacks across varying threat models, explanation methods, model architectures, and application domains. My research highlights the necessity of robustness guarantees for XAI, which I hope to be able to provide at some point.

In my spare time, I founded the hackerspace vspace.one e.V. in 2016, along with several other clubs, e.g., to promote local musicians. I love open-source software and open-hardware projects, from little Arduino builds to my homebrew relay CPU and mechanical keyboards. In addition, I work on mechanical projects using CNC mills and 3D printers, and I organize events like code golfing, lightning talks, hackathons, hacker jeopardy parties, and crypto parties. I am also an active ham radio operator with the call sign DC0MX; you can find me in the university’s ham radio group DF0UK. If you are interested in sports, find me as a trainer for underwater rugby in the SSC Karlsruhe and KIT university teams.

news [more]

Oct 15, 2024 We have uploaded our preprint titled Generalized Adversarial Code-Suggestions: Exploiting Contexts of LLM-based Code-Completion to arXiv. In the paper, we investigate whether code models can be tricked into suggesting vulnerable source code. Interestingly, as our study shows, this malicious effect can be achieved without adding vulnerable code snippets to the training data. Our attacks therefore bypass static analysis and other defensive techniques that aim to ensure the security of coding assistants. In our research, we found that none of the evaluated defenses can prevent our attack effectively, except for one: Fine-Pruning is effective, but it requires a trusted clean data set, which is the problem in the first place.
Sep 24, 2024 I just presented our extended abstract A Brief Systematization of Explanation-Aware Attacks at the 47th German Conference on Artificial Intelligence 2024 in Würzburg, Germany. In the paper, we summarize our S&P 2024 Systematization of Knowledge paper in three pages, covering its three important dimensions: the capabilities of the adversary, the scopes of the attack, and the attack types. The paper serves as a basic introduction to the problem of explanation-aware attacks.
Jun 27, 2024 Our extended abstract A Brief Systematization of Explanation-Aware Attacks has been accepted to the 47th German Conference on Artificial Intelligence 2024 in Würzburg, Germany. I’m looking forward to interesting discussions at the conference.

selected publications [more]

  1. SoK: Explainable Machine Learning in Adversarial Environments
    In Proc. of the IEEE Symposium on Security and Privacy (S&P), 2024
  2. Disguising Attacks with Explanation-Aware Backdoors
    In Proc. of the IEEE Symposium on Security and Privacy (S&P), 2023
  3. Plausible Deniability for Anonymous Communication
    In Proc. of the Workshop on Privacy in the Electronic Society (WPES), 2021