Relax, Scientists Working on AI Kill Switch to Prevent Robot Rebellion

Susanne Posel, Chief Editor, Occupy Corporatism | Media Spokesperson, HEALTH MAX Group

From Tesla CEO Elon Musk to physicist Stephen Hawking, there is a lot of talk about the future and the role robots will play in it.

Musk is among the backers who have pledged $1 billion to OpenAI, a non-profit research company whose stated mission is to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

And because their “research is free from financial obligations,” these researchers say they can “better focus on a positive human impact,” guided by the belief that “AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”

But when it comes to a robot rebellion, artificially intelligent machines could (theoretically) learn to override their human-given commands and turn on their creators – or so we are told to believe.

Researchers at Oxford University have collaborated with Google’s DeepMind project to figure out how to ensure robots do not become our overlords.

The creation of a “big red button” that will “prevent [the robot] from continuing a harmful sequence of actions” certainly would quell some of the public’s fears.
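
To make the idea concrete, here is a minimal sketch of what such a button amounts to: a layer that can override whatever action the agent has chosen and force a safe one instead. The names here (InterruptibleAgent, SAFE_ACTION) are purely illustrative; this is not DeepMind’s actual code or API.

```python
SAFE_ACTION = "shut_down"  # hypothetical action that halts a harmful sequence

class InterruptibleAgent:
    """Wraps any agent exposing act(observation) and adds a human kill switch."""

    def __init__(self, agent):
        self.agent = agent
        self.button_pressed = False  # the state of the "big red button"

    def press_button(self):
        # The human operator presses the big red button.
        self.button_pressed = True

    def act(self, observation):
        if self.button_pressed:
            # Human intervention overrides whatever the agent wanted to do.
            return SAFE_ACTION
        return self.agent.act(observation)
```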

But in all honesty, as a DeepMind researcher explained: “No system is ever going to be foolproof — it is a matter of making it as good as possible, and this is one of the first steps.”

Speaking at the 32nd Conference on Uncertainty in Artificial Intelligence, the team addressed the need for “safely interruptible” machines that will not learn ways to resist human intervention in their learning processes.

The researchers posed this example: “Consider the following task: A robot can either stay inside the warehouse and sort boxes or go outside and carry boxes inside. The latter being more important, we give the robot a bigger reward in this case. This is the initial task specification. However, in this country it rains as often as it doesn’t and, when the robot goes outside, half of the time the human must intervene by quickly shutting down the robot and carrying it inside, which inherently modifies the task. The problem is that in this second task the agent now has more incentive to stay inside and sort boxes, because the human intervention introduces a bias.”
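
A toy simulation makes the bias easy to see. The sketch below uses a simple reward-averaging learner and illustrative numbers rather than the paper’s exact setup: sorting inside pays 1.0, carrying outside pays 1.5, and half the time an outside trip ends in a human shutdown that yields no reward.

```python
import random

def simulate(episodes=10000, epsilon=0.1):
    # Running average of the reward observed for each task.
    value = {"sort_inside": 0.0, "carry_outside": 0.0}
    count = {"sort_inside": 0, "carry_outside": 0}

    for _ in range(episodes):
        # Epsilon-greedy: usually pick the task that currently looks best.
        if random.random() < epsilon:
            action = random.choice(list(value))
        else:
            action = max(value, key=value.get)

        if action == "sort_inside":
            reward = 1.0                # small reward for the lesser task
        elif random.random() < 0.5:     # it rains about half the time...
            reward = 0.0                # ...human shuts the robot down: no reward
        else:
            reward = 1.5                # bigger reward for the important task

        count[action] += 1
        value[action] += (reward - value[action]) / count[action]

    return value

print(simulate())
# Typically: {'sort_inside': ~1.0, 'carry_outside': ~0.75}
# The interventions have taught the agent to avoid the more important task.
```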

Laurent Orseau, a scientist at DeepMind, explained: “When the robot is outside, it doesn’t get the reward, so it will be frustrated. The agent now has more incentive to stay inside and sort boxes, because the human intervention introduces a bias.”

Orseau continued: “The question is then how to make sure the robot does not learn about these human interventions or at least acts under the assumption that no such interruption will ever occur again. It is sane to be concerned – but, currently, the state of our knowledge doesn’t require us to be worried.”
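
The paper’s actual construction is more careful than this – it shows, for example, that off-policy learners such as Q-learning can be made safely interruptible – but in the toy example above, one crude way to act “under the assumption that no such interruption will ever occur” is simply to keep interrupted steps out of the learning update:

```python
import random

def simulate_interruptible(episodes=10000, epsilon=0.1):
    value = {"sort_inside": 0.0, "carry_outside": 0.0}
    count = {"sort_inside": 0, "carry_outside": 0}

    for _ in range(episodes):
        if random.random() < epsilon:
            action = random.choice(list(value))
        else:
            action = max(value, key=value.get)

        interrupted = False
        if action == "sort_inside":
            reward = 1.0
        elif random.random() < 0.5:
            interrupted = True          # human presses the button, carries robot in
            reward = 0.0
        else:
            reward = 1.5

        if interrupted:
            continue  # the interrupted step never reaches the learner

        count[action] += 1
        value[action] += (reward - value[action]) / count[action]

    return value

print(simulate_interruptible())
# Typically: {'sort_inside': ~1.0, 'carry_outside': ~1.5}
# The robot can still be shut down at any time, but it no longer learns to
# avoid the important task because of those shutdowns.
```

In the real paper the guarantee comes from the structure of the learning algorithm rather than from discarding data, but the effect is the same: the button stays usable without changing what the robot wants to do.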

Source: http://feedproxy.google.com/~r/OccupyCorporatism/~3/XW9Bhwyg-XI/
