Meet the AI That Can Think & Learn Better Than Humans


Susanne Posel, Chief Editor, Occupy Corporatism | Co-Founder, Legacy Bio-Naturals
December 11, 2015

 

Researchers at the Massachusetts Institute of Technology (MIT) have designed an artificial intelligence program that can recognize and draw handwritten characters after seeing them only a handful of times.

The AI completes this task as well as people do, and human judges could not tell the difference.

Deep learning has produced many advances, such as speech and facial recognition; however, these programs require hundreds if not thousands of examples before they can determine with certainty what, or whom, they are looking at.

Yet a human brain breaks an object it sees down into components, and this is where the "gap between machine learning and human learning" remains vast, but not insurmountable.

The new MIT learning algorithm mimics human information gathering more quickly than its predecessors. To accomplish this, the team used handwritten characters because they "are well suited for comparing human and machine learning on a relatively even footing: They are both cognitively natural and often used as a benchmark for comparing learning algorithms."

Called Bayesian program learning (BPL), the program is designed to build concepts of images as it goes. To test it, the researchers used 1,623 examples of handwritten characters drawn from writing systems around the world.
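The flavor of this one-shot classification task can be illustrated with a toy sketch. This is not the actual BPL model, which composes characters from pen strokes under a generative program; here, purely for illustration, each class is known from a single binary pixel exemplar and a test image is scored by a simple pixel-noise likelihood. All names (`log_likelihood`, `classify`, `EPS`) and the noise rate are assumptions of this sketch.

```python
import math

EPS = 0.1  # assumed pixel-flip noise rate (illustrative choice)

def log_likelihood(image, exemplar, eps=EPS):
    """Log P(image | class exemplar): each pixel matches its exemplar
    with probability 1 - eps and flips with probability eps."""
    ll = 0.0
    for a, b in zip(image, exemplar):
        ll += math.log(1 - eps) if a == b else math.log(eps)
    return ll

def classify(image, exemplars):
    """Return the class whose single exemplar best explains the image."""
    return max(exemplars, key=lambda c: log_likelihood(image, exemplars[c]))

# One exemplar per "character" (flattened 5x5 binary grids).
exemplars = {
    "T": [1,1,1,1,1,
          0,0,1,0,0,
          0,0,1,0,0,
          0,0,1,0,0,
          0,0,1,0,0],
    "L": [1,0,0,0,0,
          1,0,0,0,0,
          1,0,0,0,0,
          1,0,0,0,0,
          1,1,1,1,1],
}

# A noisy copy of "T" (two pixels flipped) is still classified correctly.
test = list(exemplars["T"])
test[0] ^= 1
test[13] ^= 1
print(classify(test, exemplars))  # prints "T"
```

The point of the sketch is the contrast the article draws: a single exemplar per class suffices here because the model assumes structure (independent pixel noise) rather than learning everything from thousands of labeled examples.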

Humans recognize these characters with an average error rate of 4.5 percent, while BPL has an error rate of 3.3 percent. When machine-drawn characters were given to humans to evaluate, they could not tell which were produced by BPL and which were authentically human.

This means that BPL passed a visual Turing test.

In 1950, Alan Turing wrote a paper entitled Computing Machinery and Intelligence that asked the question: “Can machines think?”

The Turing Test establishes a “test for intelligence in a computer, requiring that a human being should be unable to distinguish the machine from another human being by using the replies to questions put to both.”

Funding for this experiment was provided by the Air Force Office of Scientific Research, the Office of Naval Research, the Army Research Office, the Defense Advanced Research Projects Agency (DARPA), the Intelligence Advanced Research Projects Activity (IARPA) and the National Science Foundation (NSF), as well as a few private-sector corporations.

Joshua Tenenbaum, a member of the MIT team, explained in a press release: "For the first time, we think we have a machine system that can learn a large class of visual concepts in ways that are hard to distinguish from human learners."

Tenenbaum continued: “You show even a young child a horse or a school bus or a skateboard, and they get it from one example. If you forget what it’s like to be a child, think about the first time you saw, say, a Segway, one of those personal transportation devices, or a smartphone or a laptop. You just needed to see one example and you could then recognize those things from different angles under different lighting conditions, often barely visible in complex scenes with many other objects.”





Source Article from http://feedproxy.google.com/~r/OccupyCorporatism/~3/OpubEgoSA9A/
