Apple’s famed cofounder Steve Wozniak has made no secret of his anxieties around emerging A.I. technologies.
The multimillionaire—a college dropout who established Apple with Steve Jobs in the 1970s—has spoken out about the potential of artificial intelligence to be misused in a string of recent interviews.
In March, he cosigned an open letter with Elon Musk and more than 1,000 others calling for a six-month pause on the development of powerful A.I. tools.
In an interview with the BBC on Monday, Wozniak—known in the tech world by the nickname “Woz”—reiterated his concerns that the technology could be hijacked and used for malicious purposes if it falls into the wrong hands.
He argued that A.I. is now so intelligent that it will make it easier for “bad actors” to trick others about who they are.
“A human really has to take the responsibility for what is generated by A.I.,” Wozniak said.
He conceded that “we can’t stop the technology,” but said regulation was needed to hold Big Tech accountable for what its artificial intelligence tools can do.
These companies, he told the BBC, “feel [as if] they can kind of get away with anything.”
However, Wozniak suggested that even if regulators intervened, they were unlikely to take the right steps to keep the development of artificial intelligence under control.
“I think the forces that drive for money usually win out, which is sort of sad,” he said.
A.I. is ‘dangerous’ and a ‘nightmare’
With billions being invested in the development of cutting-edge A.I. technology, many are speculating about how it will disrupt our day-to-day lives—leading to predictions of jobs being lost to machines, calls for greater A.I. governance, and forecasts that the world will soon see the dawn of a new A.I. era.
Tech giants Microsoft, Google, and Baidu are among those ramping up their efforts to launch advanced chatbots after OpenAI’s ChatGPT took the world by storm.
The rapid development of highly capable A.I. chatbots, however, has set alarm bells ringing across the tech world, with many other industry insiders joining Wozniak in voicing concern about the technology.
Elon Musk has been outspoken about his fears when it comes to advanced A.I. models, labeling them “more dangerous” than cars or rockets and warning that the technology has the potential to destroy humanity.
Geoffrey Hinton, the so-called Godfather of A.I., has also spoken out about the dangerous potential of artificial intelligence in recent weeks, warning of a “nightmare scenario” in which the tech could soon start to seek power.
Discussions around the tech and its potential to cause harm have also made their way to Washington.
Last week, the CEOs of Google, Microsoft, and OpenAI were summoned to the White House, where they were told to ensure they were protecting the public from the dangers posed by artificial intelligence.
During the meeting, members of the Biden administration said that while A.I. innovation could benefit society, the tech created risks to safety, security, human and civil rights, privacy, jobs, and democratic values.
Meanwhile, in an op-ed for the New York Times last week, Federal Trade Commission chair Lina Khan argued that A.I. must be regulated.
‘Responsible’ A.I. development
For their part, tech bosses have insisted that their A.I. products are being developed responsibly.
OpenAI boss Sam Altman told reporters after last week’s White House meeting that tech executives and government officials were “surprisingly on the same page on what needs to happen.”
Microsoft has a slew of principles in place as part of its commitment to “the advancement of A.I. driven by ethical principles that put people first,” as does Google parent company Alphabet, which insists the technology is “creating new opportunities to improve the lives of people around the world.”
Tim Cook, the CEO of Apple, told investors last week that the tech giant would “continue weaving” A.I. into its products on “a very thoughtful basis.”