The particular block is a ‘physically unclonable function’ (PUF) – a block with logic inputs and outputs that can be mass produced, with each individual item always booting up with the same numerical characteristics, which are statistically unrelated to the numerical characteristics of any other individual.
Extremely hard to design, and frequently based on randomly-variable silicon characteristics – such as the fine differences in gate threshold voltage between pairs of transistors – PUFs are used to beef up security for activities such as IoT device firmware updates.
The University of California Santa Barbara has created a PUF based on an analogue memory array with metal oxide memristor elements.
“The memristor is an electrical resistance switch that can remember its state of resistance based on its history of applied voltage and current,” according to the University. “Not only can memristors change their outputs in response to their histories, but each memristor, due to the physical structure of its material, also is unique in its response to applied voltage and current. Therefore, a circuit made of memristors results in a black box with outputs extremely difficult to predict based on the inputs.”
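The principle can be illustrated with a toy model. Below is a minimal sketch, assuming a highly simplified picture in which each crossbar cell's conductance varies randomly with fabrication (the entropy source), a challenge is a pattern of row voltages, and response bits come from thresholding the column currents. The function names and parameters are illustrative, not the team's actual circuit.

```python
import numpy as np

def make_crossbar_puf(rows=10, cols=10, seed=None):
    """Model a 10 x 10 memristive crossbar: each cell's conductance
    varies randomly from device to device, which is the entropy source.
    (Toy model only -- conductances are in arbitrary units.)"""
    rng = np.random.default_rng(seed)
    return rng.normal(loc=1.0, scale=0.2, size=(rows, cols))

def respond(conductance, challenge):
    """Apply a challenge (row-voltage pattern) and threshold each
    column current against the median to get a response bit-vector.
    Thresholding at the median gives 50% uniformity by construction."""
    currents = challenge @ conductance  # Kirchhoff current summation per column
    return (currents > np.median(currents)).astype(int)

# Two 'chips' built from the same design answer the same challenge differently.
chip_a = make_crossbar_puf(seed=1)
chip_b = make_crossbar_puf(seed=2)
challenge = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1], dtype=float)
print(respond(chip_a, challenge))
print(respond(chip_b, challenge))
```

The same chip always returns the same bits for a given challenge, while a second chip, statistically unrelated, returns its own pattern.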
The proof-of-concept device has two vertically-integrated 10 x 10 memristive crossbar circuits, and demonstrates near-ideal 50% average uniformity and diffuseness, as well as a bit error rate of around 1.5%, according to ‘Hardware-intrinsic security primitives enabled by analogue state and nonlinear conductance variations in integrated memristors’, published in Nature Electronics.
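For context, the metrics quoted above are straightforward to compute from response bit-vectors. The sketch below, using simulated data rather than anything from the paper, shows what "50% uniformity" and a "1.5% bit error rate" mean in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

def uniformity(response):
    """Fraction of 1-bits in a response; ideal is 0.5 (no bias)."""
    return response.mean()

def bit_error_rate(reference, rereads):
    """Average fraction of bits that flip between the enrolled
    reference response and later noisy re-evaluations."""
    return np.mean([np.mean(r != reference) for r in rereads])

# Simulated 128-bit response, plus re-reads with ~1.5% flip probability
# (illustrative figures, echoing the ballpark reported for the device).
reference = rng.integers(0, 2, size=128)
rereads = [np.where(rng.random(128) < 0.015, 1 - reference, reference)
           for _ in range(100)]
print(round(uniformity(reference), 2))
print(round(bit_error_rate(reference, rereads), 3))
```

A low bit error rate matters because every flipped bit must be absorbed by error-correction before the response can serve as a stable key or identity.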
The memristor device is fast, low-power and already suited to both secure device identity and encryption, said the University. “If we scale it a little bit further, it’s going to be hardware which could be, in many metrics, the state-of-the-art,” added Santa Barbara professor Dmitri Strukov.
Whether the device’s characteristics drift over time is currently being investigated, and the team is developing ‘strong’ security paths with large memristor circuits for highly-classified data – for military use, for example – and ‘weak’ paths for situations where attackers are unlikely to spend hours or days hacking to get in – consumer electronics, for example.
Hacking by artificial intelligence
PUFs need to offer increasing input-to-output complexity in the face of machine learning-enabled hacking, in which artificial intelligence technology is trained to learn and model inputs and outputs, then predict the next sequence based on its model. According to the university, with machine learning an attacker doesn’t even need to know what exactly is occurring, as the computer is trained on a series of inputs and outputs of the system.
“For instance, if you have two million outputs and the attacker sees 10,000 or 20,000 of these outputs, he can, based on that, train a model that can copy the system afterwards,” said Santa Barbara researcher Hussein Nili. To defeat this, the numerical relationship between inputs and outputs, although deterministic, has to appear to be random.
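The modelling attack described can be demonstrated on a deliberately weak target. The sketch below attacks a toy PUF built on the standard additive-delay (arbiter-style) model, which is linear in a parity transform of the challenge – not the memristor device, whose nonlinear conductance variations are precisely what is meant to resist this. All names and parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n_stages = 32

# Toy linearly-modelable PUF: response = sign(w . phi(challenge)),
# the classic additive-delay model of an arbiter PUF.
w_true = rng.normal(size=n_stages + 1)

def features(challenges):
    """Parity-transform features used in arbiter-PUF modelling:
    phi_i = product over j >= i of (1 - 2*c_j), plus a bias term."""
    phi = np.cumprod(1 - 2 * challenges[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((len(challenges), 1))])

def puf(challenges):
    return (features(challenges) @ w_true > 0).astype(int)

# The attacker observes a modest subset of challenge-response pairs...
C_train = rng.integers(0, 2, size=(5000, n_stages))
y_train = puf(C_train)

# ...and fits a logistic-regression model by batch gradient ascent.
X = features(C_train)
w = np.zeros(X.shape[1])
for _ in range(1000):
    p = 1 / (1 + np.exp(-X @ w))           # predicted probabilities
    w += 0.1 * X.T @ (y_train - p) / len(y_train)

# The learned model now predicts responses to challenges it never saw.
C_test = rng.integers(0, 2, size=(2000, n_stages))
accuracy = np.mean((features(C_test) @ w > 0).astype(int) == puf(C_test))
print(f"prediction accuracy on unseen challenges: {accuracy:.2f}")
```

The attacker never needs to understand the device physics: a few thousand observed pairs are enough to clone a linearly-modelable PUF, which is why the input-output relationship must appear random even though it is deterministic.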