Recent advances in AI are likely to spell the end of the traditional school classroom, one of the world’s leading experts on AI has predicted.
Prof Stuart Russell, a British computer scientist based at the University of California, Berkeley, said that personalised ChatGPT-style tutors have the potential to hugely enrich education and widen global access by bringing tailored one-to-one tuition to every household with a smartphone. The technology could feasibly deliver “most material through to the end of high school”, he said.
“Education is the biggest benefit that we can look for in the next few years,” Russell said before a talk on Friday at the UN’s AI for Good Global Summit in Geneva. “It ought to be possible within a few years, maybe by the end of this decade, to be delivering a pretty high quality of education to every child in the world. That’s potentially transformative.”
However, he cautioned that deploying the powerful technology in the education sector also carries risks, including the potential for indoctrination.
Russell cited evidence from studies using human tutors that one-to-one teaching can be two to three times more effective than traditional classroom lessons, allowing children to get tailored support and be led by curiosity.
“Oxford and Cambridge don’t really use a traditional classroom … they use tutors presumably because it’s more effective,” he said. “It’s literally infeasible to do that for every child in the world. There aren’t enough adults to go around.”
OpenAI is already exploring educational applications, announcing a partnership in March with an education nonprofit, the Khan Academy, to pilot a virtual tutor powered by GPT-4.
This prospect may prompt “reasonable fears” among teachers and teaching unions of “fewer teachers being employed – possibly even none,” Russell said. Human involvement would still be essential, he predicted, but could be drastically different from the traditional role of a teacher, potentially incorporating “playground monitor” responsibilities, facilitating more complex collective activities and delivering civic and moral education.
“We haven’t done the experiments so we don’t know whether an AI system is going to be enough for a child. There’s motivation, there’s learning to collaborate, it’s not just ‘Can I do the sums?’” Russell said. “It will be essential to ensure that the social aspects of childhood are preserved and improved.”
The technology will also need to be carefully risk-assessed.
“Hopefully the system, if properly designed, won’t tell a child how to make a bioweapon. I think that’s manageable,” Russell said. A more pressing worry is the potential for hijacking of software by authoritarian regimes or other players, he suggested. “I’m sure the Chinese government hopes [the technology] is more effective at inculcating loyalty to the state,” he said. “I suppose we’d expect this technology to be more effective than a book or a teacher.”
Russell has spent years highlighting the broader existential risks posed by AI, and was a signatory of an open letter in March, signed by Elon Musk and others, calling for a pause in an “out-of-control race” to develop powerful digital minds. The issue has become more urgent since the emergence of large language models, Russell said. “I think of [artificial general intelligence] as a giant magnet in the future,” he said. “The closer we get to it the stronger the force is. It definitely feels closer than it used to.”
Policymakers are belatedly engaging with the issue, he said. “I think the governments have woken up … now they’re running around figuring out what to do,” he said. “That’s good – at least people are paying attention.”
However, controlling AI systems poses both regulatory and technical challenges, because even the experts don’t know how to quantify the risks of losing control of a system. OpenAI announced on Thursday that it would devote 20% of its compute power to seeking a solution for “steering or controlling a potentially super-intelligent AI, and preventing it from going rogue”.
“The large language models in particular, we have really no idea how they work,” Russell said. “We don’t know whether they are capable of reasoning or planning. They may have internal goals that they are pursuing – we don’t know what they are.”
Even beyond direct risks, such systems could have unpredictable consequences for everything from action on climate change to relations with China.
“Hundreds of millions of people, fairly soon billions, will be in conversation with these things all the time,” said Russell. “We don’t know in what direction they could change global opinion and political tendencies.”
“We could walk into a massive environmental crisis or nuclear war and not even realise why it’s happened,” he added. “Those are just consequences of the fact that whatever direction it moves public opinion, it does so in a correlated way across the entire world.”