This presentation sifts through the carnage of history, offering an unvarnished look at spectacular machine learning failures to help predict what catastrophes may lie ahead if we don't step in. You've probably heard about the Tesla Autopilot that killed a man...
Humans are great at failing. We fail all the time. Some might even say intelligence is so hard-won and infrequent that we should dump as much data as possible into our "machines" and have them fail even faster *on our behalf*, at lower cost, to free us. What could possibly go wrong?
Studying past failures is meant to keep us from repeating them. Yet it turns out that when we focus our machines narrowly and ignore safety controls and the values behind them, we simply repeat avoidable disasters instead of innovating faster. They say hindsight is 20/20, but you have to wonder whether even our best machines need corrective lenses. By the end of this presentation, you may find yourself thinking how easily we could have saved a Tesla owner's life.