Hinton, 75, said in a recent media interview that rapidly advancing artificial intelligence could become smarter than humans within "five years." If that happens, he added, AI could evolve to a point where humans can no longer control it.
Hinton noted that one way AI systems might escape human control is by writing their own computer code, something he said we need to take seriously.
Hinton's work is considered foundational to contemporary artificial intelligence systems. In 1986, he co-authored the seminal paper "Learning Representations by Back-propagating Errors," a milestone in the development of the neural networks that underlie today's artificial intelligence techniques. In 2018, he was awarded the Turing Award in recognition of his research breakthroughs.
In April, after 10 years at Google, Hinton stepped down from his position as vice president and engineering fellow so he could speak freely about the risks posed by artificial intelligence. After his departure, Hinton said he regretted his role in developing the technology.
In particular, he stressed that humans, including the leading scientists like himself who build AI systems, still don't fully understand how the technology works and evolves.
Many AI researchers freely admit to the same lack of understanding. In April, Google CEO Sundar Pichai described it as artificial intelligence's "black box" problem.
As Hinton describes it, scientists design learning algorithms for AI systems to extract information from datasets, such as the contents of the Internet. "When this learning algorithm interacts with data, it produces complex neural networks that are good at doing certain things," he said. "But to be honest, we don't really understand exactly how they do it."
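Hinton's point is easy to illustrate in miniature. The sketch below is a toy, not anyone's real system: the network size, dataset, and learning rate are all invented for illustration. It trains a tiny neural network by backpropagation, the technique from the 1986 paper, on a four-row synthetic dataset. The loop solves the task, but the numbers it leaves behind in the weight matrices say almost nothing readable about how.

```python
# Toy illustration only: a 2-4-1 network trained by backpropagation on XOR.
# It learns the task, yet its learned weights are uninterpretable by eye.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "dataset": the XOR of two binary inputs.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Random initial weights and biases for a tiny two-layer network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: data flows through the network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error gradient layer by layer
    # and nudge every weight downhill (learning rate 0.5).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]: the task is solved
print(W1, W2, sep="\n")      # ...but the weights themselves explain nothing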
Pichai and other AI experts don't appear to share Hinton's concern about machines spiraling out of control. Yann LeCun, a fellow Turing Award winner, has called warnings that AI could replace humans "ridiculous," arguing that humans can always block any technology that becomes too dangerous.
But Hinton believes the worst-case scenario is not a certainty, and that industries such as healthcare are already benefiting greatly from AI. At the same time, he warned, AI-generated misinformation, fake photos and fake videos are circulating widely online. He called for more research into understanding AI, for governments to impose regulations to control the technology, and for a global ban on AI-powered military robots.
Last month, American tech leaders gathered on Capitol Hill for an artificial intelligence summit hosted by Senate Majority Leader Chuck Schumer to discuss the future regulation of the technology. Although the tech giants have called on the US Congress to pass legislation strengthening oversight of artificial intelligence, they remain divided on what that regulation should look like.
Hinton argued that, whether guardrails for artificial intelligence are implemented voluntarily by technology companies or mandated by the US federal government, the relevant regulatory measures need to be introduced as soon as possible.
He believes that humanity may be at "some kind of tipping point" and that there is huge uncertainty about what happens next. He added that tech and government leaders must decide "whether to develop these things further and how to protect themselves if they do."