Emergent Chip Vastly Accelerates Deep Neural Networks








January 11th, 2016

Via: The Next Platform:

Across nine different deep neural network benchmarking suites, EIE performed inference operations between 13X and 189X faster (depending on the benchmark) than regular CPU and GPU implementations, even without compression. More striking still is the power envelope: as the benchmarks show, energy efficiency improves by 3,000X compared with a GPU and 24,000X compared with a CPU.

As Han describes it, this is the first accelerator for sparse and weight-sharing neural networks. “Operating directly on compressed networks enables large neural network models to fit in SRAM for the first time, which results in far better energy savings compared to accessing them from external DRAM. EIE saves 65.16 percent of energy by avoiding weight reference and arithmetic for the 70 percent of activations that are zero in a typical deep learning application.”
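The principle behind those savings can be illustrated in software. The sketch below is not EIE's actual implementation (EIE does this in dedicated hardware with compressed weight storage); it is a minimal, assumed example of a sparse matrix-vector product that stores only nonzero weights and skips zero activations entirely, which is the source of the skipped "weight reference and arithmetic" the quote describes.

```python
import numpy as np

def sparse_matvec(weights_by_col, activations, n_rows):
    """Multiply a sparse weight matrix by an activation vector.

    weights_by_col: dict mapping column index j -> list of (row i, weight w);
                    only nonzero weights are stored (hypothetical layout,
                    loosely modeled on a compressed-sparse-column format).
    activations:    1-D array of input activations.
    n_rows:         number of output neurons.
    """
    out = np.zeros(n_rows)
    for j, a in enumerate(activations):
        if a == 0.0:
            # Skip zero activations entirely: no weight fetch, no multiply.
            # In typical deep nets ~70% of activations are zero (post-ReLU).
            continue
        for i, w in weights_by_col.get(j, []):
            out[i] += w * a  # work is done only for nonzero weight/activation pairs
    return out

# Example: a 3x4 weight matrix with most entries zero
W = {0: [(0, 2.0)], 2: [(1, -1.0), (2, 3.0)]}
x = np.array([1.0, 0.0, 2.0, 0.0])
y = sparse_matvec(W, x, n_rows=3)  # computes [2.0, -2.0, 6.0]
```

Here half the input activations are zero, so half the columns are never touched; in hardware, that translates directly into memory accesses and arithmetic operations that never happen, which is where the energy savings come from.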


































Source Article from http://www.cryptogon.com/?p=47984
