Oct 15, 2024 |
We have uploaded our preprint titled Generalized Adversarial Code-Suggestions: Exploiting Contexts of LLM-based Code-Completion to arXiv. In the paper, we investigate whether code models can be tricked into suggesting vulnerable source code. Interestingly, as our study shows, this malicious effect can be achieved without adding vulnerable code snippets to the training data. Our attacks therefore bypass static analysis and other defensive techniques that aim to ensure the security of coding assistants. In our research, we found that none of the evaluated defenses can prevent our attack effectively, except for one: Fine-Pruning is effective, but it requires a trusted clean data set, which is the problem in the first place.
|
Sep 24, 2024 |
I just presented our extended abstract A Brief Systematization of Explanation-Aware Attacks at the 47th German Conference on Artificial Intelligence 2024 in Würzburg, Germany. In the paper, we summarized our S&P 2024 Systematization of Knowledge paper on three pages, covering its three key dimensions: the capabilities of the adversary, the scopes of the attack, and the attack types. The paper serves as a basic introduction to the problem of explanation-aware attacks.
|
Jun 27, 2024 |
Our extended abstract paper A Brief Systematization of Explanation-Aware Attacks is accepted to the 47th German Conference on Artificial Intelligence 2024 in Würzburg, Germany. I’m looking forward to interesting discussions at the conference.
|
May 21, 2024 |
Today, I will present our S&P paper SoK: Explainable Machine Learning in Adversarial Environments at the 45th IEEE Symposium on Security and Privacy 2024 in San Francisco, CA, USA. In the paper, Chris and I systematized the field of explanation-aware attacks. We discussed the relevant threat models, scopes of attacks, and attack types. We presented a hierarchy of explanation-aware robustness notions and discussed various defensive techniques from the viewpoint of explanation-aware attacks. I am looking forward to the questions and discussions with the community.
|
Mar 26, 2024 |
I just gave a talk, The Threat of Explanation-Aware Attacks: The Example of Explanation-Aware Backdoors, in the XAI seminar of the Ludwig Maximilian University of Munich and the University of Bremen. In the talk, I summarized the lessons from my two papers on explanation-aware attacks. Thanks for the invitation and thanks for having me.
|
Nov 25, 2023 |
For the next few days, I will visit the ACM Conference on Computer and Communications Security (CCS) in Copenhagen, DK. I will present my poster Poster: Fooling XAI with Explanation-Aware Backdoors there, and I am looking forward to exciting discussions with other researchers in the community.
|
Oct 9, 2023 |
I’ll be in Berlin for a research stay at TU Berlin until November 25th 2023. I am looking forward to meeting exciting people in person.
|
Sep 27, 2023 |
On September 27th, I presented our extended abstract Explanation-Aware Backdoors in a Nutshell at the 46th German Conference on Artificial Intelligence (KI) in Berlin, Germany. Thanks everybody for the interesting discussions on the security and the future of explainable machine learning.
|
Sep 18, 2023 |
We published the camera-ready version of our paper Poster: Fooling XAI with Explanation-Aware Backdoors, which has been accepted for the 30th ACM Conference on Computer and Communications Security (CCS), November 26–30, 2023 in Copenhagen, DK. I’m very happy to present the poster there and have interesting discussions with you.
|
Aug 18, 2023 |
We just published the camera-ready version of our paper SoK: Explainable Machine Learning in Adversarial Environments, which has been accepted for the 45th IEEE Symposium on Security and Privacy 2024 in San Francisco. Have a nice read.
|
Aug 11, 2023 |
Our SoK-paper SoK: Explainable Machine Learning in Adversarial Environments has been accepted for the 45th IEEE Symposium on Security and Privacy 2024 in San Francisco. I am happy to get to San Francisco next year and meet all of you there.
|
Aug 1, 2023 |
I’m happy to announce that I will serve as artifact co-chair of the Privacy Enhancing Technologies Symposium (PETS) in 2024 and 2025, together with Pasin Manurangsi. I am looking forward to your artifact submissions. Also, we will change the workflow slightly starting this year. Please find details on the PETS Artifacts Review page. If you have any comments on the new process, feel free to write an email.
|
Jun 26, 2023 |
Our short abstract paper Explanation-Aware Backdoors in a Nutshell has been accepted for the 46th German Conference on Artificial Intelligence in September 2023 in Berlin, Germany. I am happy to meet the (not only) German research community on machine learning there.
|
Nov 10, 2022 |
Our paper Disguising Attacks with Explanation-Aware Backdoors has been accepted for the 44th IEEE Symposium on Security and Privacy in May 2023 in San Francisco.
|