IBM creates ‘super’ chip for neural networks

IBM is aiming at creating vast neural networks with a chip implementing one million neurons and 256 million programmable synapses.

IBM neuron chip

It is the biggest chip the firm has ever designed: 5.4bn transistors and a two-dimensional mesh network of 4096 ‘neuro-synaptic’ cores – which Samsung has built on a 28nm process.

Operating power consumption is a frugal 70mW, and specific power consumption is 46bn synaptic operations/s/W.

According to a paper in Science, neural network-based multi-object detection and classification on 400x240 pixel video at 30frame/s consumes 63mW.
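The quoted figures can be sanity-checked with a quick back-of-the-envelope calculation. Only the 70mW operating power and the 46bn synaptic operations/s/W come from the article; the rest is arithmetic:

```python
# Rough throughput check from the two figures quoted in the article.
power_w = 0.070          # operating power: 70 mW
sops_per_watt = 46e9     # 46bn synaptic operations per second per watt

total_sops = power_w * sops_per_watt
print(f"{total_sops:.2e} synaptic operations per second")  # ~3.2e9
```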

“The chip is the culmination of almost a decade of research and development, including the initial single core hardware prototype in 2011 and software ecosystem with a new programming language and chip simulator in 2013,” said IBM, which intends to sell the chip commercially for “distributed sensor and supercomputing applications. This technology could transform science, technology, business, government, and society by enabling vision, audition, and multi-sensory [see below] applications.”

This is the second-generation design.

Each of the 4096 neuro-synaptic core modules has memory, computation, and communication, and operates in an event-driven, parallel, and fault-tolerant fashion.
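A toy spiking network gives a flavour of what "event-driven" means here: a neuron does work only when a spike event arrives, rather than being clocked every cycle. This is a generic integrate-and-fire sketch, not IBM's actual neuron circuit or programming model; the thresholds and weights are illustrative assumptions.

```python
from collections import deque

class Neuron:
    """Toy integrate-and-fire neuron (illustrative, not IBM's model)."""
    def __init__(self, threshold=1.0):
        self.potential = 0.0
        self.threshold = threshold
        self.synapses = []              # outgoing (target, weight) pairs

    def receive(self, weight):
        """Integrate an incoming spike; return True if this neuron fires."""
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0        # reset after firing
            return True
        return False

def run(initial_events):
    """Event-driven simulation: computation happens only when spikes arrive."""
    events = deque(initial_events)      # each event is (neuron, weight)
    spikes = 0
    while events:
        neuron, weight = events.popleft()
        if neuron.receive(weight):
            spikes += 1
            events.extend(neuron.synapses)  # propagate to downstream neurons
    return spikes

a, b = Neuron(), Neuron()
a.synapses.append((b, 0.6))
# Two input spikes: a fires twice; b's two 0.6 inputs sum past its
# threshold, so b fires once -- three spikes in total.
print(run([(a, 1.0), (a, 1.0)]))        # prints 3
```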

For more processing, multiple chips can be tiled “seamlessly”, said IBM, and to prove it, the firm has tiled 16 chips to get 16 million programmable neurons and 4bn programmable synapses.
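The tiled totals follow directly from the per-chip figures given earlier (one million neurons and 256 million synapses per chip):

```python
chips = 16
neurons_per_chip = 1_000_000
synapses_per_chip = 256_000_000

print(chips * neurons_per_chip)     # 16000000  (16 million neurons)
print(chips * synapses_per_chip)    # 4096000000 (~4bn synapses)
```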

“We foresee generations of information technology systems, that complement today’s von Neumann machines, powered by an evolving ecosystem of systems, software, and services,” said IBM chief scientist Dr Dharmendra Modha. “These brain-inspired chips could transform mobility, via sensory and intelligent applications that can fit in the palm of your hand but without the need for Wi-Fi.”

DARPA, the US Defense Advanced Research Projects Agency, has funded the project since 2008 with approximately $53m.

Collaborators include Cornell Tech and iniLabs.

The event-driven circuit elements use an asynchronous design methodology developed at Cornell Tech and refined with IBM since 2008.

It is the hybrid asynchronous-synchronous circuitry, combined with Samsung’s low-leakage process and the novel architecture, that has kept power density down to 20mW/cm2.

IBM is also pulling together an ecosystem that includes a chip simulator and supports design through development, debugging, and deployment, plus it has put together a teaching curriculum for universities, companies and its own employees.


Producing a coherent model of reality from ambiguous and contradictory real-world information sources – multi-sensor data fusion – is far from easy. Identifying and locating military targets from multiple radars and infra-red imagers is an example.
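A minimal illustration of the fusion problem is inverse-variance weighting, the textbook way to combine two noisy estimates of the same quantity into a single, lower-variance estimate. This is a standard technique, not anything specific to IBM's chip; the sensor values and variances below are made up.

```python
# Fuse two noisy readings of the same quantity, weighting each
# inversely to its variance (the minimum-variance linear combination).
def fuse(x1, var1, x2, var2):
    w1, w2 = 1.0 / var1, 1.0 / var2
    estimate = (w1 * x1 + w2 * x2) / (w1 + w2)
    variance = 1.0 / (w1 + w2)          # fused variance is below both inputs
    return estimate, variance

# Hypothetical: radar reads 100.0 (variance 4.0), infra-red reads 104.0 (variance 1.0)
est, var = fuse(100.0, 4.0, 104.0, 1.0)
print(round(est, 2), round(var, 2))     # 103.2 0.8 -- pulled toward the better sensor
```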

With the chip, IBM envisions multi-sensor processing in mobile devices – bucking the trend towards cloud-based remote processing by bringing computation back to the sensors that gather data.

It also sees real-time multimedia cloud services accelerated by neural processing, and supercomputers with more than one hundred trillion synapses.

This work is part of a programme called SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics).
