ISSCC: MIT neural network chip for phones

At the International Solid State Circuits Conference (ISSCC) in San Francisco this week, MIT researchers presented a chip designed to implement neural networks. Claimed to be 10 times as efficient as a mobile graphics processing unit (GPU), it could enable mobile devices to run artificial-intelligence algorithms locally.

MIT neural network chip

Neural nets are sometimes branded ‘deep learning’.

“Deep learning is useful for many applications, such as object recognition, speech, face detection,” says MIT researcher Vivienne Sze whose group developed the chip. “Right now, the networks are pretty complex and are mostly run on high-power GPUs. You can imagine that if you can bring that functionality to your phone or embedded devices, you could still operate even if you don’t have a Wi-Fi connection. You might also want to process locally for privacy reasons. Processing it on your phone also avoids any transmission latency, so that you can react much faster for certain applications.”

Dubbed ‘Eyeriss’, the chip implements convolutional neural nets, where many nodes in each layer process the same data in different ways. “The networks can thus swell to enormous proportions,” said MIT. “Although they outperform more conventional algorithms on many visual-processing tasks, they require much greater computational resources.”
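To see why the networks swell, consider what one convolutional layer does: every filter scans the same input. A minimal Python/NumPy sketch of the idea follows; the image size and filter count are illustrative, not Eyeriss parameters.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide one filter over the image, producing one output per patch."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.random.rand(8, 8)                     # one shared input
filters = [np.random.rand(3, 3) for _ in range(16)]
maps = [conv2d(image, k) for k in filters]       # same data, 16 ways
print(len(maps), maps[0].shape)                  # 16 (6, 6)
```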

Data enters and is divided among the nodes in the bottom layer. Each node manipulates the data it receives and passes the results on to nodes in the next layer, which manipulate the data they receive and pass on the results, and so on. The result emerges from the final layer.
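In code, that layer-by-layer flow is a chain of simple transformations. Here is a minimal sketch, assuming fully connected layers with arbitrary sizes and a standard ReLU nonlinearity (none of which are taken from the Eyeriss design):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass data through each layer in turn; each layer transforms
    what it receives and hands the result to the next."""
    for weights, bias in layers:
        x = relu(x @ weights + bias)
    return x

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 8)), np.zeros(8)),   # bottom layer
          (rng.standard_normal((8, 3)), np.zeros(3))]   # final layer
print(forward(rng.standard_normal(4), layers))          # network output
```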

A process called ‘training’ decides what each node does: the network finds correlations between raw data and labels applied to it by human annotators.
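As a toy illustration of that correlation-finding, the sketch below runs gradient descent on a single linear node until its weights recover the labelling rule. The data, learning rate, and loss are all invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
inputs = rng.standard_normal((100, 3))        # 'raw data'
labels = inputs @ np.array([2.0, -1.0, 0.5])  # annotators' labels

w = np.zeros(3)
for _ in range(200):
    error = inputs @ w - labels                  # how wrong the node is
    w -= 0.1 * (inputs.T @ error) / len(labels)  # nudge weights to fit
print(w)  # converges towards [2.0, -1.0, 0.5]
```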

“With a chip like the one developed by the MIT researchers, a trained network could simply be exported to a mobile device,” said MIT.

Eyeriss has 168 cores, roughly as many as a mobile GPU, and supports architectures such as AlexNet as well as networks built with the Caffe deep-learning framework.

“The key to Eyeriss’s efficiency is to minimise the frequency with which cores need to exchange data with distant memory banks, an operation that consumes time and energy,” said MIT. “Whereas many of the cores in a GPU share a single, large memory bank, each of the Eyeriss cores has its own memory. Moreover, the chip has a circuit that compresses data before sending it to individual cores.”
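MIT does not say here which compression scheme the circuit uses; the published Eyeriss work describes run-length coding of the zero runs that are common in neural-network activations. A toy version of that idea (not the chip's actual encoder):

```python
def rle_zeros(values):
    """Run-length encode zeros: emit (zero_run_length, value) pairs so
    a long stretch of zeros costs one token instead of many."""
    encoded, run = [], 0
    for v in values:
        if v == 0:
            run += 1
        else:
            encoded.append((run, v))
            run = 0
    if run:
        encoded.append((run, None))   # trailing zeros, no payload
    return encoded

acts = [0, 0, 0, 5, 0, 7, 0, 0, 0, 0]
print(rle_zeros(acts))   # [(3, 5), (1, 7), (4, None)]
```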

Each core is also able to communicate directly with its immediate neighbours, so that if they need to share data, they don’t have to route it through main memory. According to the university, this is essential in a convolutional neural network, in which so many nodes are processing the same data.
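Neighbouring convolution windows overlap, which is why direct core-to-core links pay off. The sketch below partitions input rows across cores with a one-row ‘halo’ borrowed from each neighbour; the core count and tile sizes are invented for the example:

```python
def split_with_halo(rows, n_cores, halo=1):
    """Give each core its slice of rows plus 'halo' rows from its
    neighbours, so overlapping convolution windows can be computed
    without another trip to main memory."""
    per = len(rows) // n_cores
    tiles = []
    for c in range(n_cores):
        lo = max(0, c * per - halo)
        hi = min(len(rows), (c + 1) * per + halo)
        tiles.append(rows[lo:hi])
    return tiles

# Adjacent tiles share their edge rows:
print(split_with_halo(list(range(12)), n_cores=3))
```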

There is also custom circuitry that allocates tasks across cores. In its local memory, a core needs to store the data manipulated by the nodes it is simulating, as well as data describing the nodes themselves.

The allocation circuit can be reconfigured for different types of networks, automatically distributing both types of data across cores in a way that maximises the amount of work that each of them can do before fetching more data from main memory.
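As a rough model of that trade-off, a core's local memory must hold the resident node data (the weights) plus as many input rows as will fit; the leftover capacity sets how much work it can do per fetch. All numbers below are hypothetical, not Eyeriss specifications:

```python
def rows_per_fetch(sram_bytes, kernel_bytes, row_bytes):
    """How many input rows fit in local memory alongside the weights?
    Bigger tiles mean fewer trips to main memory."""
    budget = sram_bytes - kernel_bytes      # weights stay resident
    return max(0, budget // row_bytes)

SRAM = 512            # bytes of local memory per core (made up)
KERNEL = 3 * 3 * 2    # one 3x3 filter, 2 bytes per weight
ROW = 32 * 2          # one 32-pixel input row, 2 bytes per pixel
print(rows_per_fetch(SRAM, KERNEL, ROW))   # rows processed per fetch
```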

At ISSCC, the MIT researchers used Eyeriss for an image-recognition task, in what they claimed was the first time a state-of-the-art neural network had been demonstrated on a custom chip.

Applications are also expected in battery-powered autonomous robots, and networked devices which make local decisions – entrusting only their conclusions, rather than raw personal data, to the internet. “The idea is that vehicles, appliances, civil-engineering structures, manufacturing equipment, and even livestock would have sensors that report information directly to networked servers, aiding with maintenance and task coordination,” said MIT.

The work was supported by military funding body DARPA.

