JARVIS never saw it coming: Hacking machine learning (ML) in speech, text and face recognition - and frankly, everywhere else

Exploits, Backdoors, and Hacks: words we do not commonly hear when speaking of Machine Learning (ML). In this talk, I will present the relatively new field of hacking and manipulating machine learning systems, and the potential these techniques hold for active offensive research.

The study of Adversarial ML allows us to turn the very techniques these algorithms rely on against them: finding weak points and exploiting them in order to achieve:

  • Unexpected consequences (why did it decide this rifle is a banana?),
  • Data leakage (how did they know Joe has diabetes?),
  • Memory corruption and other exploitation techniques (boom! RCE),
  • Influence over the output (input: virus, output: safe!), as seen in [DEF CON 25 – Hyrum Anderson – Evading next-gen AV using AI](https://www.youtube.com/watch?v=FGCle6T0Jpc).

In other words, while ML is great at identifying and classifying patterns, an attacker can take advantage of this very capability to take control of the system.
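To give a flavor of the first attack class, here is a minimal sketch (my own illustration, not code from the talk) of the Fast Gradient Sign Method (FGSM) of Goodfellow et al., the textbook way to craft such misclassifications. It assumes a PyTorch image classifier; `model`, `x` (a batch of images scaled to [0, 1]), and `label` are placeholders the reader must supply.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, label, epsilon=0.03):
    """Fast Gradient Sign Method: nudge every pixel by +/- epsilon in
    the direction that increases the classifier's loss, pushing a
    correctly classified input toward a wrong class."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # One signed-gradient step, clamped back to the valid pixel range.
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
```

Against an undefended classifier, an epsilon of a few percent of the pixel range is usually imperceptible to a human, yet often enough to flip the prediction.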

This talk is an extension of research done by many people, including presenters at DEF CON, CCC, and other venues – a live demo will be shown on stage!

Garbage In, RCE Out 🙂
