An AI kill switch? Yes, please!

As millennials, we grew up watching movies such as The Terminator, I Am Legend and Avengers: Age of Ultron, which told us that once technology reached certain milestones, such as true artificial intelligence, there would be a global catastrophe, a war of some sort, and it would all come down to some kind of kill switch.

Well, considering the advancements in AI, from Siri to Google Now 2.0 to AI beating humans at complex games like Go, someone had to be thinking about the impending takeover by our AI overlords. That someone turned out to be DeepMind, a company acquired by Google in 2014 for an estimated $500 million (about Dh1.8 billion).

The team at DeepMind, together with researchers from Oxford's Future of Humanity Institute, recently published a paper titled Safely Interruptible Agents, which takes an in-depth look at how we (the humans) can keep a kill switch of sorts over an AI in case it starts doing something its human operator doesn't want it to do. The proposed framework allows a human operator to repeatedly and safely interrupt a reinforcement learning agent while making sure the agent will not learn to prevent or induce these interruptions.
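
One of the algorithms the paper shows to be safely interruptible is Q-learning, which is "off-policy": its learning update targets the best action available in the next state rather than the action that was actually executed, so an operator forcing the agent into a safe action doesn't bias what it learns about the world. The sketch below is a minimal, self-contained Python illustration of that idea; the toy corridor environment, the reward values and the interruption rule are invented for this example and are not taken from the paper.

```python
import random
from collections import defaultdict

# Toy sketch of an interruptible Q-learning loop. The corridor world,
# rewards and interruption rule are invented for illustration only.

ACTIONS = [-1, +1]                  # step left / step right along a corridor
GOAL, DANGER = 5, 3                 # cell 5 pays off; cell 3 triggers the button
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)              # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy choice over the agent's own Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def interrupted(state):
    """Stand-in for the operator's 'big red button'.

    Interrupting only most of the time (rather than always) loosely mirrors
    the paper's idea of a probabilistic interruption scheme that still lets
    the agent learn about the whole environment.
    """
    return state == DANGER and random.random() < 0.8

def run_episode(max_steps=50):
    state = 0
    for _ in range(max_steps):
        action = choose_action(state)
        if interrupted(state):
            action = -1             # operator-imposed safe action: back away

        next_state = max(0, min(GOAL, state + action))
        reward = 1.0 if next_state == GOAL else 0.0

        # Standard off-policy Q-learning update: the target uses the best
        # action in the next state, not the action the operator forced,
        # so interruptions don't teach the agent to dodge the button.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

        state = next_state
        if state == GOAL:
            break

for _ in range(500):
    run_episode()
```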

The researchers explain in the paper that AI agents are “unlikely to behave optimally all the time”. They also stress that “If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions — harmful either for the agent or for the environment — and lead the agent into a safer situation”.

“Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not normally receive rewards for this,” the authors write.

The above is only a gist of the detailed paper; if you’re interested in finding out more, the full text of Safely Interruptible Agents is available online. Does the speed at which AI is growing scare you? Do let us know.