It noted that: “Deep learning is a rapidly evolving topic, and the computational complexity of typical deep neural networks impedes their execution on resource‑scarce mobile or wearable devices.
“Last year, several innovative solutions were introduced to enhance throughput and improve energy efficiency, mostly focusing on the efficiency of convolutional neural networks,” it said.
“The current state-of-the-art still faces two significant challenges: a need to improve energy efficiency for ultra‑low power applications; and finding solutions for efficient execution of fully connected non‑convolutional networks. To improve energy efficiency, there is a trend towards reduced-precision networks, with binary networks as the extreme case – recently, the first binary neural‑network accelerator has appeared.”
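The efficiency appeal of the binary networks mentioned above comes from replacing multiply-accumulates with bitwise operations. As a rough sketch (our own illustration, not taken from any ISSCC paper), a dot product between two ±1-valued vectors packed into machine words reduces to an XNOR followed by a popcount:

```python
# Illustrative sketch of the XNOR-popcount trick that binary neural-network
# accelerators exploit. Function and variable names are ours, purely for
# illustration.

def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two n-element ±1 vectors packed as integers
    (bit 1 stands for +1, bit 0 for -1)."""
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ b_bits) & mask   # 1 wherever the signs agree
    matches = bin(xnor).count("1")     # popcount of agreements
    return 2 * matches - n             # agreements minus disagreements

# a = (+1, -1, +1, +1) packed as 0b1011; b = (+1, +1, -1, +1) packed as 0b1101
print(binary_dot(0b1011, 0b1101, 4))  # -> 0 (two agreements, two disagreements)
```

In hardware, the XNOR and popcount cost a tiny fraction of the area and energy of full-precision multipliers, which is why binarisation is the extreme point of the reduced-precision trend.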
ISSCC 2018 pushes peak efficiency to several tens of Top/s/W (tera-operations per second per watt) in digital accelerators, and beyond a hundred Top/s/W in a mixed-signal implementation.
Several papers treat the energy efficiency of fully‑connected‑network acceleration. In such networks the bottleneck is memory loads and stores, so innovative solutions include fabricating a 3D stack of processing and memory dies, as well as smart-memory interfaces that enhance data re-use.
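A back-of-the-envelope comparison (our own illustrative numbers, not from any ISSCC paper) shows why fully connected layers are memory-bound while convolutions are not: an FC weight is fetched from memory but used for only one multiply-accumulate per inference, whereas a convolution kernel is re-used at every output position.

```python
# Rough arithmetic-intensity sketch: operations performed per byte of weight
# traffic. Layer sizes below are hypothetical examples.

def fc_intensity(n_in, n_out, bytes_per_weight=1):
    ops = 2 * n_in * n_out                      # one MAC = 2 ops per weight
    traffic = n_in * n_out * bytes_per_weight   # each weight loaded exactly once
    return ops / traffic

def conv_intensity(k, out_h, out_w, bytes_per_weight=1):
    ops = 2 * k * k * out_h * out_w             # kernel applied at every output pixel
    traffic = k * k * bytes_per_weight          # kernel loaded once, then re-used
    return ops / traffic

print(fc_intensity(4096, 4096))     # -> 2.0 ops per byte: memory-bound
print(conv_intensity(3, 224, 224))  # -> 100352.0 ops per byte: compute-bound
```

With only a couple of operations per byte fetched, an FC accelerator is starved by memory bandwidth long before its arithmetic units saturate, which is exactly what die stacking and smart-memory interfaces attack.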
In the Quest chip revealed at ISSCC, Japanese researchers stacked eight dies to put 96Mbyte of RAM within easy reach of a deep neural network, using inductive wireless die-to-die coupling.
These innovations bring deep neural networks within reach of battery-operated devices.
The IEEE’s annual International Solid-State Circuits Conference is the place where the world’s companies and universities gather to show off their chip-based circuit developments, and where attending engineers get a first glimpse of the state-of-the‑art in digital, analogue, power and RF design techniques.