Ben Lazarus

The godfather of AI: why I left Google

Geoffrey Hinton outside Google HQ (AP Photo/Noah Berger/Alamy)

The chief scientist at OpenAI, Ilya Sutskever, is a former student of Hinton’s. ‘I talk to him regularly,’ Hinton says. ‘He’s quite concerned. But he doesn’t think that the problems will be solved by OpenAI stopping research.’

Hinton is also pessimistic about whether any brakes can be applied, because of the competition between the US and China. ‘I don’t think there’s any chance whatsoever of getting a pause. If they paused in the West, China wouldn’t pause. There’s a race going on.’

Does Hinton regret his lifetime’s work? ‘I don’t really have regrets. There’s the standard excuse if I hadn’t done it, somebody else would have,’ he says. ‘But also, until very recently it looked like the benefits were going to be great and were certain; the risks were a long way off and not certain. The decision to work on it may have turned out to be unfortunate, but it was a wise decision at the time.’

His reasons for leaving Google are ‘complicated’: ‘I’m 75. And I’m finding it harder and harder to do the technical work, because you have to remember all sorts of fine details and that’s tricky. Another reason is I want to actually be able to tell the public how responsible Google has been so far. I want to say good things about Google, and it’s much more credible if I’m not there. The third reason is, even if Google doesn’t tell you what you should and shouldn’t say, there’s inevitable self-censorship if you work for an organisation… I’m aware of self-censorship. And so I don’t want to be constrained by it. I just want to be able to say what I believe.’

At the heart of his Oppenheimer-style U-turn is the fear that the human brain isn’t as impressive as digital intelligence. ‘It was always assumed before that the brain was the best thing there was, and the things that we were producing were kind of wimpy attempts to mimic the brain. For 49 of the 50 years I was working on it, that’s what I believed – brains were better.’

The human brain, he explains, runs on very low power: ‘about 30 watts and we’ve got about 100 trillion connections’. But trying to acquire knowledge from another person, he says, ‘is a slow and painful business’.

Digital intelligence requires much more energy but is shared across entire networks. ‘If we fabricate it accurately, then you can have thousands and thousands of agents. When one agent learns something, all the others know it instantly… They can process so much more data than we can and see all sorts of things we’ll never see.

‘A way of thinking about this is: if you go to your doctor with a rare condition, you’re lucky if they have seen even one case before. Now imagine going to a doctor who’d seen 100 million patients, including dozens who have this rare condition.’ So there are ‘wonderful’ short-term gains for AI being used in medicine and elsewhere, he says. ‘Anywhere where humans use intelligence, it’s going to help.’ Then he adds, with a smile: ‘In particular, it’s going to help where it’s a kind of not very acute intelligence like law.’

How exactly does Hinton think this digital intelligence could harm humanity? ‘Imagine it [AI] has the power to perform actions in the world as opposed to just answering questions. So a little bit of that power would be the power to connect to the internet and look things up on the internet, which chatbots didn’t originally have. Now imagine a household robot where you could tell it what to do and it can do things. That household robot will be a lot smarter than you. Are you confident it would keep doing what you told it to? We don’t know the answer. It’s kind of like the sorcerer’s apprentice.’

One of Hinton’s big concerns is that, as AI progresses, it will develop sub-goals in order to achieve its main goal more efficiently. These sub-goals won’t necessarily align with human objectives, and that will leave us vulnerable to manipulation by AI.

‘If you look at a baby in a highchair, his mother gives him a spoon to feed himself. And what’s the first thing he does? He drops it on the floor and his mother picks it up. He looks at his mother and drops it again.

‘The baby is trying to get control of his mother. There’s a very good reason for that – the more control you have, the easier it is to achieve your other goals. That’s why, in general, having power is good, because it allows you to achieve other things… Inevitably people will give these systems the ability to create sub-goals, because that’s how to make them efficient. One of the sub-goals they will immediately derive is to get more power, because that makes everything else easier.’

But even if AI developed sub-goals to be more efficient, would it necessarily want more power? Here’s where evolution comes in, according to Hinton. ‘A sort of basic principle of evolution is if you take two varieties of a species, the one that produces more viable offspring wins. It’s going to end up replacing the other one. You see it operating very fast with viruses, like Omicron. The virus with a higher infection rate wins. But that’s true for all species. That’s how evolution works. Things that can produce more viable offspring win.

‘Now imagine there are several different AGI. Which one’s going to win? The one that produces more copies of itself. So what worries me is if AGI ever got the idea that it should produce lots of copies of itself, then the one that was best at doing that would wipe out the others. It’s not clear that being nice to humans is going to help it produce more copies of itself. I don’t see why we shouldn’t get evolution among AGI.’

Even worse for us is that these machines can’t die. ‘If one of those digital computers dies, it doesn’t matter. You haven’t lost the knowledge. Also, if you just record the knowledge in a memory medium, then as soon as you have another digital computer, you can download it – it’s alive again.’

But surely we could just unplug a dodgy AI and deprive it of electricity – would that not stop it? Hinton smiles at me and paraphrases Hal from 2001: A Space Odyssey. ‘I’m sorry, Dave. I can’t answer that question.’
