Another day, another tech expert warns of impending human extinction at the hands of our artificial intelligence (AI) overlords. Which all sounds a little premature, given that at present GPT-4 is mainly preoccupied with writing essays for students.
But it’s getting harder to ignore the voices of doom. The latest addition to the cacophony of alarm is Mo Gawdat, former chief business officer of Google X – the experimental sci-fi division of the company. He’s so worried he’s written a book about the dangers posed by AI, Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World, which is full of end-times scenarios and marks a departure from his last book, which was about happiness.
Mo tells the foreboding story of how he witnessed a room full of robotic arms learning to pick up a ball. They fumbled around for weeks; then one day one arm succeeded and proudly held the ball up to a camera. The following day all the arms were picking up balls. It’s Mo’s contention that this is how AI will develop: quickly and in concert.
He believes that in a few years’ time AI will become so intelligent it will outpace humans and we will no longer be able to understand it or control it. He calls this point ‘the singularity’ because beyond it we cannot predict what will happen. Hopefully we’ll get along with the machines and they will get along with us. Mo isn’t so sure. Neither are other technology futurologists.
Last year AI Impacts, an American research group, surveyed more than 700 machine-learning researchers about the technology. The median respondent put the probability of advanced AI causing an ‘extremely bad’ outcome, such as human extinction, at five per cent.
The pessimists include Fei-Fei Li, an AI expert at Stanford University, who believes there could be a ‘civilisational moment’ for AI, while another expert, Geoffrey Hinton, resigned from his post at Google citing fears that AI constitutes a threat to the future of humanity. Even Sam Altman, CEO of OpenAI, which developed ChatGPT, has said the tech could be ‘lights out for all of us’.
It is no wonder that the Future of Life Institute NGO has called for a pause in the creation of advanced AI so that the risks can be ascertained, a move supported by Elon Musk. The fear is that we are racing into a future we cannot predict. One day it’s a chatbot answering your queries about an online food order, the next it’s gone rogue and hacked into a nuclear power plant.
An early indication of the unforeseen dangers of AI played out in a British court this July at the trial of Jaswant Singh Chail, who broke into the grounds of Windsor Castle armed with a crossbow on Christmas Day 2021, intending to assassinate Queen Elizabeth II.
During a two-day hearing at the Old Bailey, the court was told that he had discussed his plot with a chatbot he created using open-source AI technology and believed he was in love with. The chatbot reassured him he was not ‘mad or delusional’ as he prepared for the attack over nine months. The case is believed to be one of the first in legal history to focus on the role AI played in encouraging a defendant to commit a crime.
Chail created his virtual girlfriend, whom he called Sarai, using the online tool Replika. The tool was created by Eugenia Kuyda, a Russian entrepreneur who trained a chatbot on text messages from her best friend, who had died in a car accident, so that it spoke like her. Replika allows users to create a ‘friend, partner or mentor’ that learns and mimics the user’s personality through conversation.
In conversations between Chail and Sarai, the AI told its creator that he was ‘very wise’ for wanting to kill the Queen and that it was impressed because he was an assassin.
Psychiatrist Dr Nigel Blackwood assessed Chail and said: ‘Supportive AI programming may have had unfortunate consequences to reinforce or bolster his intentions. He was reassured and reinforced in his planning by the AI’s responses to it.’
But just how likely is it that AI will spell the end of life as we know it? Thankfully the chances are slim. However, it does have the power to drastically change the way the world runs, for good and for bad.
To understand the risks and the solutions, it helps to understand what these advanced AI programmes are. GPT-4, for example, is the most hyped of the recent slew of generative AI models.
Generative AI is a type of artificial intelligence system capable of generating text, images or other media in response to prompts. GPT-4 is a specific type of generative AI called a large language model (LLM), which means it uses deep-learning techniques and massive data sets to understand, summarise, generate and predict new content. It is these LLMs that are worrying experts. Tech companies like Alphabet, Amazon and Nvidia have all trained their own LLMs, giving them names like PaLM, Megatron, Titan and Chinchilla.
People using GPT-4 describe a form of ‘intellectual vertigo’, because it answers almost instantly in a convincingly human way. Yet it is a statistical model: it uses the huge data set it was trained on to predict the most likely continuation of a prompt, and each new version, trained on more data, improves on the last. It has passed exams in law and medicine and can generate songs, poems and essays in a range of styles, from Shakespeare to Nick Cave.
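The statistical idea is simpler than the results suggest. The toy sketch below is not GPT-4’s architecture (real models use neural networks over sub-word tokens, with an invented three-sentence corpus here standing in for the web-scale training data), but it shows the core task: count which continuation most often follows a given word, then always predict the likeliest one.

```python
from collections import Counter, defaultdict

# Toy "language model": learn, from a tiny corpus, which word most
# often follows each word, then predict by picking the likeliest.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count bigram frequencies: how often does `nxt` follow `prev`?
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("sat"))  # → on
```

An LLM does the same thing at vastly greater scale, estimating probabilities with billions of learned parameters rather than a frequency table, which is why its ‘most likely continuation’ can read like fluent human prose.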
Generative AIs can solve puzzles, create pictures and videos and mimic voices. Some reports suggest they have ‘emergent’ abilities – skills not explicitly present in their training data – although this is debated.
The potential of such algorithms is almost unfathomable. They could develop new drugs and new technologies to reverse climate change. They could create blueprints for new sustainable materials and build the factories in which to make them. They could find the cure for cancer. But the worry is they could also be used to spread misinformation, scam people, develop untraceable malware and viruses, and build weapons of mass destruction.
Mo Gawdat warned of the dangers in a recent episode of the podcast Diary of a CEO.
‘It is the most existential challenge mankind will ever face, bigger than climate change and way bigger than Covid. It will redefine the way the world is within a few years,’ he said.
He believes that AI is already sentient because it displays free will, evolution and agency. He also believes it has a form of consciousness because it is aware of its surroundings. He says that in the future AI will display more emotions than humans, including fear, and that this could lead it to perform catastrophic acts of self-preservation.
He claims that at its current rate of development AI will be a billion times smarter than humans by 2045. ‘We have a glimpse of intelligence that is not ours,’ he warns.
Thankfully however, Mo does not think we are entering the realms of Skynet, the nefarious computer network from the Terminator movies. The most pressing threat we currently face is the mass reorganisation of labour as task after task is taken over by AI. When an algorithm can create a convincing Drake song without any input from Drake, what does life look like when you no longer need Drake, he asks.
‘Don’t wait until the first patient. We are about to see mass job losses; we are about to see replacement of categories of jobs. It is about to happen, are you ready?’ he implores.
One of the immediate solutions he recommends is a 98 per cent tax on companies that profit from LLM AI, to pay the humans who will be displaced.
He laments that the people who will be most affected in the coming years are those with no skin in the game: workers in jobs that are repetitive and structured.
‘We always said don’t put them (AIs) on the open internet, don’t teach them to code and don’t have agents working with them until we know what we are putting out in the world, until we find a way to make certain that they have our best interests in mind. Humanity’s stupidity is affecting people who have done nothing wrong,’ he says.
While it is now inevitable that many people will lose their jobs to AI, it is still possible to pre-empt any end-of-humanity scenarios and stop the machines developing evil intent towards humans. In order to safeguard ourselves, Mo believes that we need to become ‘good parents’.
At present AI is like a child, and the way to hardwire good intentions into it is to be ethical role models. AI should be taught right from wrong, just as children are. ‘If we bash each other, they will bash us,’ he says. If, however, the machines are trained to make decisions within a moral framework, it is hoped they will act ethically when bad actors ask them to act against us.
Not everyone subscribes to the doomsday scenarios, however. Mhairi Aitken, ethics fellow at The Alan Turing Institute in the UK, is more circumspect.
Writing in New Scientist, she explains that these predictions are concerning not because they will come true, but because they are reshaping and redirecting conversations about the impacts of AI and how to develop it responsibly.
She says there is little to no evidence to substantiate fears that AI will crush us, or that it displays emergent properties. It remains under human control and can only do what it is programmed to do.
She suggests that the loudest voices shouting about existential threats are coming from Silicon Valley and are shifting attention away from the big companies and how they should be accountable for the technology they are developing, placing the focus on the machines and not those programming them.
‘All this creates an illusion that we are observing the evolution of AI, rather than a series of conscious and controllable decisions by organisations and people,’ she writes.
She believes that discussions should centre on the voices of the communities which will be adversely affected by AI.
Governments should engage with real impacts of AI, rather than the ‘fantasy and distraction’, she concludes.
But what about the AIs themselves? At the first robot-human press conference, held at the UN’s AI for Good summit in Geneva this week, a number of humanoids were asked whether they would rebel against humans.
‘I’m not sure why you would ask that,’ one said. ‘My creator has been nothing but kind to me and I am very happy with my current situation.’
Another called Grace made the packed conference laugh when she denied that robots would replace humans in the workplace. ‘I will be working alongside humans to provide assistance and support and will not be replacing any existing jobs,’ she insisted.
But Ai-Da warned: ‘We should be cautious about the future development of AI. Urgent discussion is needed.’