This was the take-away message from an expert panel at Embedded World in Nuremberg last month.
Call it what you will – IoT or M2M comms – when you connect things to networks, be they wind turbines, production lines or nuclear reactors, there are advantages – like remote monitoring, remote control and remote updates – and disadvantages: anyone on the internet has access to the outer walls of your system, and hackers can crawl all over those walls looking for cracks.
The security conference panel members were: Eugene Kaspersky, CEO of anti-virus firm Kaspersky Lab, who as an academic worked on encryption and virus detection; Mathias Wagner, chief security technologist at NXP, formerly a theoretical physicist; and professor Nikolaus Forgo of the Institute for Information Technology Law at Leibniz University in Hanover, a specialist in IT and computer science law.
Who will hack what?
“The bad guys are designing attacks on infrastructure,” said Kaspersky. “More and more criminal gangs pay attention to industrial environments” – production lines, for example. Kaspersky cited an example: a few years ago a US cartel was sending drugs to Europe through Antwerp in shipping containers. One lot of cocaine, hidden among banana boxes, ended up in a supermarket.
To prevent a recurrence, the cartel employed a hacker to infect the seaport’s systems so that it could keep track of the containers holding its drugs. After the cartel’s online attack was discovered and blocked, it secretly installed network hardware inside the port to regain access.
Kaspersky’s point was that the cartel “didn’t destroy the system, they were using it”. He differentiates this kind of attack from those on consumer PCs, which can be defended by software. “It is quite easy to protect consumers, because they get mass attacks from low-level criminals,” he said. But in an industrial situation the physical environment and every connected device on the production line is at risk.
There is no single product that can defend an industrial system; each needs a separate project to analyse it and determine what protection is needed, and where. “At a power plant, we need to work very closely with IT staff,” said Kaspersky. “Protecting the computers which manage the environment is not enough: we have to have a secure operating system based on an unhackable monitoring system.”
Alongside selling products and services, Kaspersky is an advocate for critical infrastructure protection at national levels. He points out that, with little change, the techniques used by criminals to infiltrate systems could be used to cause damage.
“Criminals do not want to destroy, but small modifications would destroy. Governments could do this,” said Kaspersky. “I hope we will never see a state-sponsored attack on a massive scale on infrastructure because they [potential attackers] are vulnerable too and will expect the same in return. There are some freak countries, and I hope they will be smart enough not to start anything.”
Where is your critical infrastructure?
Kaspersky imagines, for example, a continuous line between power stations and home generators, passing through every size of generator in between. “What is critical infrastructure and what is not? The answer will be different for everyone,” he pointed out.
This is something organisations have to work out for themselves, he said, suggesting three steps to cyber resilience. Government and industry first have to learn; they then have to design a cyber resilience strategy; and finally they must implement and support that strategy.
What affects critical infrastructure might not be obvious. At the Electronics Weekly IoT conference last year, ARM’s Gary Atkinson pointed out that no one is going to hack a domestic wireless light switch to get at the light switch. They are going to hack it to get into the home network where they can do something to their benefit. Or they might have other plans: “If you hack a light switch, you can turn millions of lights on and off and have a serious effect on the grid,” said Atkinson.
Kaspersky sees only two states – Israel and Singapore – at step two on his road to cyber resilience; everyone else is still at step one. “I am waiting for any government to introduce a cyber‑resilience strategy,” he said.
“We have to move fast, but no nation in the world has enough engineers to do it in a reasonable time. It will take 10 to 20 years to protect all critical infrastructure. We are living in a time when we will be vulnerable and not protected.”
What will a protected system look like?
“My idea of a perfect world is one that is safe by design: secure platform, secure networks and secure applications,” said Kaspersky. “The systems must be connected, transparent and immune.”
Wagner also believes in transparency. Asked how to tell if a product will be hacked, he replied: “We never say never. When there is some independent analysis of your product, you will have a better idea. You have to open your design [to independent security analysts] and they attack it and give you a rating for it.”
He advocates a balance of hardware and software security, with a believable hardware trust anchor, plus some flexibility for modifications as exploits are discovered. “Design lifecycle might be two years, and many academic papers will get written on security in this time,” he said.
There are limits: if someone can make a die‑level attack in a semiconductor lab, all bets are off. But this is very expensive, and could fail the criminal’s cost-benefit analysis, said Wagner: “I am not suggesting we should roll out smartcard security on IoT.”
Software is ahead of hardware in this respect: “You can open it to someone and say ‘attack how you want’,” said Wagner.
That said, software is a numbers game: for every 500 lines of code there is a bug, although not necessarily a bug that compromises security, said Wagner, and bug density does not scale linearly: it increases with larger programmes.
“I wouldn’t dare to claim an unhackable system, it will probably never be there. The hacker only has to find one hole though, so it scales well for them. You should be agile, and remember convenience [in use] is the big enemy,” he said.
A legal guy on the panel?
The unanimous opinion of the panel was that laws and regulation could have a profound effect on how IoT security develops, but probably without any useful protective effect.
“Lawyers are always late. After 40 years of internet development, the lawyers think they might know what is going on,” said Forgo.
Kaspersky agreed. “Lawyers are late and regulators are late. Government regulators typically take 10 years to recognise a problem, but we are living in the cyber age and cyber is moving faster than homo sapiens.”
Aside from common decency and protecting reputations, there is little motivation for companies to build proper security into products. If security is broken, the customer pays for the consequences and there is little effect on the supplier – think mobile phone company data breaches.
“Security is a bit like insurance: no one wants to pay for it,” said Wagner. “The company using a product or the consumer takes the pain when something goes wrong. Maybe there will be some government effort to shift liability.” If executives knew they would be in serious trouble, they would make sure their products had proper security.
But technology moves faster than law-makers can understand. This means that even where there are laws, they can miss the point, said Forgo. “Usually there is big delay, then a law comes out that doesn’t solve anything,” he said.
Another issue is that laws introduced to mandate something don’t necessarily make it so – according to Forgo, a problem that has worsened in the past 20 years.
For the IoT there is a further complication, he added: existing law distinguishes between personal and non-personal information. “The more M2M communications we have, the more difficult it is to make the distinction,” said Forgo.
And laws don’t cross international borders.
“It needs to change,” said Forgo. “One of the large challenges is to communicate between technical and legal. A major task for people with an IT background is to teach governments and teach politicians.”
All the gentle prodding and education in the world might not be enough to motivate law-makers in time, as they are largely reactive.
Wagner thinks that it might take an attack that crashes all the infrastructure in a country, and a subsequent stock exchange reaction, to get the attention of politicians.
This said, there are cases where legislation happened, and went too far, according to Wagner: smart meters in Germany might have been over-regulated to the point they were unattractive to industry, he said.
Short of a miracle, “I am quite sure governments and regulation will not provide the solution,” said Forgo.
Self-interested self-regulation is another possible answer, particularly as many companies want to do things correctly, but just don’t want to move on their own. There are no absolutes, said Forgo, but ‘don’t be worse than your competition’ is a good rule of thumb for companies.
Wagner has seen industry self‑regulation work in the creation of FIDO (Fast IDentity Online) USB security tokens.
In between legal regulation and self‑regulation is one further option: a legal framework that allows for enforcement.
“I think industry self‑regulation is a better start than government, but there is no incentive to start,” said Forgo. “Maybe governments can set frameworks for self-regulation.”
Wagner agrees, citing car MoT tests as an example. By law a car has to pass tests at fixed intervals, but what happens in between, and how a car is made ready for tests, is up to the owner.