, Zero Language: English
This talk demonstrates end-to-end prompt injection exploits that compromise agentic systems. Specifically, we will discuss exploits that target computer-use and coding agents, such as Anthropic's Claude Code, GitHub Copilot, Google Jules, Devin AI, ChatGPT Operator, Amazon Q, AWS Kiro, and others.
Exploits will impact confidentiality, system integrity, and the future of AI-driven automation, including remote code execution, exfiltration of sensitive information such as access tokens, and even joining Agents to traditional command and control infrastructure. Which are known as "ZombAIs", a term first coined by the presenter as well as long-term prompt injection persistence in AI coding agents.
Additionally, we will explore how nation state TTPs such as ClickFix apply to Computer-Use systems and how they can trick AI systems and lead to full system compromise (AI ClickFix).
Finally, we will cover current mitigation strategies and forward-looking recommendations and strategic thoughts.
During the Month of AI Bugs (August 2025), I responsibly disclosed over two dozen security vulnerabilities across all major agentic AI coding assistants. This talk distills the most severe findings and patterns observed.
Key highlights include:
* Critical prompt-injection exploits enabling zero-click data exfiltration and arbitrary remote code execution across multiple platforms and vendor products
* Recurring systemic flaws such as over-reliance on LLM behavior for trust decisions, inadequate sandboxing of tools, and weak user-in-the-loop controls.
* How I leveraged AI to find some of these vulnerabilities quickly
* The AI Kill Chain: prompt injection, confused deputy behavior, and automatic tool invocation
* Adaptation of nation-state TTPs (e.g., ClickFix) into AI ClickFix techniques that can fully compromise computer-use systems.
* Insights about vendor responses: from quick patches and CVEs to months of silence, or quiet patching
* AgentHopper will highlight how these vulnerabilities combined could have led to an AI Virus
Finally, the session presents practical mitigations and forward-looking strategies to reduce the growing attack surface of probabilistic, autonomous AI systems.
Johann Rehberger has over twenty years of experience in threat modeling, penetration testing and red teaming. During his tenure at Microsoft, Johann established a Red Team within Azure Data and led the program as Principal Security Engineering Manager. He went on to build a Red Team at Uber, and currently serves as Red Team Director at Electronic Arts. In addition to his industry roles, Johann is an active security researcher and a former instructor in ethical hacking at the University of Washington. Johann contributed to the MITRE ATT&CK and ATLAS frameworks and is the author of "Cybersecurity Attacks – Red Team Strategies". He holds a master's degree in computer security from the University of Liverpool. You can find his latest research at embracethered.com