The ML (machine learning) processor is described by ARM as ‘specifically designed for inference at the edge’.
It is said to deliver 4.6 TOPS, with an efficiency of 3 TOPS/W, for mobile devices and smart IP cameras.
The second processor, called OD (object detection), is described as ‘the most efficient way to detect people and objects on mobile and embedded platforms’. It continuously scans every frame to provide a list of detected objects, along with their location within the scene.
It detects objects in real time, running at HD resolution and 60fps (with no dropped frames, says ARM). Object sizes from 50 × 60 pixels up to full screen are handled, with ‘virtually unlimited’ objects per frame.
The ML processor, to be available mid-year, is an IP block for accelerating neural network inference. It aims to optimise memory management of the data flows involved in executing ML workloads.
These workloads exhibit high data reusability, and minimising the data traffic into and out of the processor is key to reaching high performance and high efficiency.
The OD processor, available this quarter, is a vision processor optimised for object detection – an approach ARM sees as more efficient than running detection neural networks on the ML IP.
ARM sees ML and OD being used together with OD identifying areas for finer-granularity processing by ML.
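The pairing ARM describes – OD proposing regions, ML then processing those regions at finer granularity – amounts to a two-stage cascade. The sketch below illustrates the idea in plain Python; every name and the toy classifier are illustrative assumptions, not ARM's API, and only the 50 × 60-pixel minimum object size comes from the announcement.

```python
# Illustrative two-stage cascade: an OD-like stage proposes regions,
# an ML-like stage runs finer-grained inference on each proposal.
# All identifiers here are hypothetical; this is not ARM's API.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Region:
    x: int  # top-left x, pixels
    y: int  # top-left y, pixels
    w: int  # width
    h: int  # height

# ARM quotes a minimum detectable object size of 50 x 60 pixels.
MIN_W, MIN_H = 50, 60

def od_stage(candidates: List[Region]) -> List[Region]:
    """Stand-in for the OD processor: keep regions large enough to detect."""
    return [r for r in candidates if r.w >= MIN_W and r.h >= MIN_H]

def ml_stage(regions: List[Region],
             classify: Callable[[Region], str]) -> List[str]:
    """Stand-in for the ML processor: per-region inference on OD's output."""
    return [classify(r) for r in regions]

# Toy classifier based on aspect ratio, purely for demonstration.
def toy_classify(r: Region) -> str:
    return "person" if r.h > r.w else "object"

frame = [
    Region(0, 0, 40, 40),        # below minimum size: dropped by OD
    Region(100, 80, 60, 120),    # tall region
    Region(300, 200, 200, 100),  # wide region
]

labels = ml_stage(od_stage(frame), toy_classify)
print(labels)  # -> ['person', 'object']
```

The point of the split is that the cheap OD stage touches every frame, while the expensive ML stage only touches the handful of regions OD surfaces – which is also why minimising data movement through the ML processor matters.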
Both IPs come under ARM’s AI programme, dubbed Project Trillium.