29.12.2024, Stage HUFF
Language: English
Machine learning systems are becoming increasingly important in critical applications, but their robustness against adversarial inputs remains a pressing concern. This talk explores how small, strategically crafted perturbations can lead to catastrophic failures in ML systems. These vulnerabilities can be exploited in both digital and physical scenarios: from misleading autonomous vehicles to bypassing facial recognition, the implications are profound. I will examine the attack process, common types of adversarial attacks, the role of biases in data collection and learning processes, and tools such as the Adversarial Robustness Toolbox (ART) to counter these threats.
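To give a concrete feel for the kind of attack the talk covers, here is a minimal sketch using ART's FastGradientMethod (the Fast Gradient Sign Method, one of the classic evasion attacks) against a toy PyTorch classifier. The two-layer model, the 28x28 input shape, and the random placeholder inputs are assumptions for illustration only, not material from the talk.

```python
# Minimal evasion-attack sketch with the Adversarial Robustness Toolbox (ART).
# The toy model and random inputs stand in for a real classifier and dataset.
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Placeholder classifier: flattens a 28x28 grayscale image, predicts 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

# Wrap the PyTorch model so ART can compute input gradients through it.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# FGSM: a single gradient-sign step, perturbing each pixel by at most eps.
attack = FastGradientMethod(estimator=classifier, eps=0.1)

x = np.random.rand(4, 1, 28, 28).astype(np.float32)  # placeholder inputs
x_adv = attack.generate(x=x)  # perturbed copies meant to flip predictions

print("max per-pixel perturbation:", np.abs(x_adv - x).max())  # bounded by eps
```

The `eps` parameter caps the per-pixel change, which is why adversarial examples can remain visually indistinguishable from the originals while still changing the model's output.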
Hi, I’m Dennis Eisermann, a research assistant at the Institute of Distributed Systems at Ulm University. My work focuses on making deep neural networks more understandable and reliable. I’m really into figuring out how to explain their behavior, so these systems aren’t just black boxes. I also look at ways to defend against adversarial attacks and work on quantifying trust in these models to ensure they’re reliable enough for real-world use.