Blade Runner was supposed to be science fiction. And yet here we are today, with bots running loose beyond their intended expiration dates and companies trying to hire security people to terminate them. It is 2019, and we already have several well-documented cases of software flaws in automation systems causing human fatalities. Emergent human safety risks are no joke, and we are fast approaching an industry where bots are capable of pivoting and transforming to perpetuate themselves (availability) with little to no accountability to the human aspiration of not being killed (let alone confidentiality and integrity).
Are you ready to discuss very real and discrete risks to global survival, to help leaders see what they're missing, and to make a terminal change to a bot's existence? Perhaps you are interested in building a framework that keeps bot development pointed in the right direction (creating benefits) and makes AI less prone to becoming a hazard to everyone around it? Welcome to 2019, where we are tempted to reply "you got the wrong guy, pal" to an unexpected tap on the shoulder ...before we end up on some random roof in a rainstorm with a robot trying to kill us all.