Ultra-low power CMOS design explained
Researchers at the Massachusetts Institute of Technology (MIT) have developed a feedback-control scheme that iteratively tunes the CMOS operating voltage to minimise dissipation.
Energy consumption in CMOS drops quadratically as the supply voltage is brought below the threshold voltage. However, according to MIT, leakage increases exponentially at the same time.
This means that for any given circuit workload and temperature, there is a particular supply voltage that trades off capacitive switching losses against leakage in a way that minimises power consumption.
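The trade-off can be illustrated with a toy energy model. All device parameters below are hypothetical, chosen only to make the dip visible; they are not the MIT chip's figures. Switching energy falls quadratically with supply voltage, while in the sub-threshold region each operation takes exponentially longer, so leakage energy per operation climbs as the voltage drops:

```python
import math

# Illustrative numbers only -- not the parameters of the MIT circuit.
C_EFF = 1e-12       # effective switched capacitance per operation (F)
I_LEAK = 3e-7       # leakage current scale (A)
V_TH = 0.35         # threshold voltage (V)
N_VT = 1.5 * 0.026  # sub-threshold slope: ideality factor x thermal voltage (V)
T_0 = 1e-6          # delay scale (s)

def delay(v):
    # Below threshold, gate delay grows exponentially as Vdd falls.
    return T_0 * math.exp((V_TH - v) / N_VT)

def energy_per_op(v):
    # Switching energy falls quadratically with Vdd, while leakage
    # energy (leakage power x ever-longer cycle time) rises as Vdd falls.
    return C_EFF * v * v + I_LEAK * v * delay(v)

# Sweep the converter's 250-700mV range for the minimum-energy point.
best_v = min((mv / 1000 for mv in range(250, 701)), key=energy_per_op)
```

With these made-up parameters the sweep finds an interior minimum, around 0.4V, rather than at either end of the range. Where the real optimum sits depends on workload and temperature, which is why the MIT scheme closes a feedback loop around it rather than fixing the voltage at design time.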
The example CMOS ‘load’ in the 65nm MIT circuit, fabricated by TI, is a hardware 7-tap FIR filter, whose power supply comes from an on-chip DC-DC converter capable of delivering 250-700mV at 1-100µW with over 80 per cent efficiency.
The loop consists of an energy sensor and a controller that moves the supply voltage slightly – via the DC-DC converter – to see what effect it has on energy consumption. In this way the controller can push the supply voltage in the improving-energy direction until it settles at the bottom of the power dip.
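The loop described above is, in essence, a perturb-and-observe minimum tracker. A minimal hill-climbing sketch of the idea (the function names, step size, and limits here are my own, not MIT's controller):

```python
def track_minimum(measure_energy, v_start=0.5, step=0.005,
                  v_min=0.25, v_max=0.7, iters=200):
    """Nudge the supply voltage in the direction that lowers measured
    energy; reverse the step whenever a nudge makes things worse."""
    v = v_start
    e = measure_energy(v)
    direction = -1  # try lowering the supply first
    for _ in range(iters):
        # Stay within the DC-DC converter's output range.
        v_new = min(max(v + direction * step, v_min), v_max)
        e_new = measure_energy(v_new)
        if e_new < e:
            v, e = v_new, e_new     # still descending: keep going
        else:
            direction = -direction  # passed the dip: turn around
    return v

# Demo on a toy energy curve whose dip sits at 0.4V (illustrative only).
v_opt = track_minimum(lambda v: (v - 0.4) ** 2)
```

In the chip, the measurement would come from the on-chip energy sensor and the nudge from the DC-DC converter; a loop of this shape settles at the bottom of the power dip and keeps re-tracking it as workload and temperature move the dip around.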
Changing the filter from seven taps to one cuts power by 25 per cent if the supply voltage is held at the 7-tap optimum, whereas letting the feedback loop re-tune the voltage cuts it by over 40 per cent.
With extra leakage – emulated as a 1µA constant load on the circuit – power at fixed voltage would almost triple, but the loop pulls this down to an increase of only 30 per cent.
As temperature rises from 0 to 85°C, the loop saves around 50 per cent of the power compared with constant-voltage operation, MIT claimed.
The technique places no burden on the controlled ‘load’ and consumes a tiny fraction of the power it saves.