AI needs no introduction as one of the most overhyped technical fields in the last decade. The subsequent hysteria around building AI-based systems has also made them a tasty target for folks looking to cause major mischief. However, most of the popular proposed attacks specifically targeting AI systems focus on the algorithm rather than the system in which the algorithm is deployed. We’ll begin by talking about why this threat model doesn’t hold up in realistic scenarios, using facial detection and self-driving cars as primary examples. We will also learn how to more effectively red-team AI systems by considering the data processing pipeline as the primary target.
Ariel Herbert-Voss is a PhD student at Harvard University, where she specializes in adversarial machine learning, cybersecurity, mathematical optimization, and dumb internet memes. She is an affiliate researcher at the MIT Media Lab and at the Vector Institute for Artificial Intelligence. She is a co-founder and co-organizer of the DEF CON AI Village, and loves all things to do with malicious uses and abuses of AI. Twitter: @adversariel