Whether it is robots taking jobs or data-gathering and analysis techniques intruding on our civil liberties, technology now has implications for society on a scale not seen before.
And interestingly, it is the UK government which has identified this aspect of technology as an opportunity to play to our strengths and take a lead on the world stage.
Last month, we had the prime minister putting UK specialists in the vanguard for dealing with global threats to cyber security.
A report by the House of Lords Select Committee on Artificial Intelligence (AI), snappily titled AI in the UK: Ready, Willing and Able?, had little doubt that the UK has a role to play in shaping the social as well as the technical aspects of AI.
It based this assumption on the UK’s growing community of AI companies, supported by academic research, and the idea that this could tap into the country’s ethical and financial strengths.
The committee has not lost its enthusiasm for AI on ethical or even social grounds, but it is attempting to remind the technology community of the ethical consequences of AI. And it paints a chilling picture.
“Autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence,” it warns.
The motivation for this is not purely altruistic. The government believes the wider adoption of AI will one day boost the economy.
But it recognises that there is work to do if the public at large is to trust the technology and understand the benefits of using it.
There is also a risk that the technology could be misused, and so society must be in a position to challenge this.
The problem is, how is that going to happen in a technological and financial environment that seems all too eager to adopt AI?
Will there be room for those difficult questions about the data rights or privacy of individuals, families or communities?
And this goes to the heart of government responsibility.
There have been impressive advances in AI for healthcare, which the NHS should capitalise on.
The House of Lords Select Committee wants all these issues to be addressed by technologists and the wider business community.
It is not certain that existing liability law will be sufficient if AI systems malfunction or cause harm to users, and the Lords have called for clarification.
But apart from suggesting that the Law Commission should investigate this issue in particular, or that the ethical aspects of AI should be taught in schools, there are few concrete proposals for making a technology like AI more user-friendly.
To trust AI, society will need to be reassured that its lawful use does not threaten our civil liberties – and to be informed when AI is being used to make significant or sensitive decisions.
Consultant editor Richard Wilson writes a regular column for Electronics Weekly