Neural Network Pioneer Geoffrey Hinton Sounds The Alarm On AI
One of the world's foremost AI experts, and a pioneer of the neural network concept, now says part of him "regrets his life's work." Dr. Geoffrey Hinton, who laid much of the groundwork for current AI models, has left his position at Google so he can speak freely about the dangers artificial intelligence poses. Speaking to The New York Times, Dr. Hinton said: "It is hard to see how you can prevent the bad actors from using it for bad things."
Many of the 75-year-old's fears center on how quickly AI development is accelerating and how large corporations are racing to push out ever more advanced models. He points to his own time at Google, describing the company as a "proper steward" of AI development at first. However, he highlights Google's rushed release of Bard, in response to Microsoft's launch of Bing Chat, as a sign that the focus has shifted.
His theories on the dangers of AI largely echo those of his contemporaries. He shares the view that AI could upend the job market and lead to mass unemployment, and he also believes a malevolent AI could cause direct harm. He further warns of the dangers posed by people and corporations with access to AI. On top of everything else, he worries AI may be impossible to regulate. Elon Musk has previously claimed AI could be more dangerous than nuclear weapons, but Dr. Hinton points out that nukes have been effectively regulated because their development can be detected, while individuals, countries, and corporations could easily develop AI models in secret.
Hinton isn't the only tech expert to speak out
Hinton isn't the only tech expert who has issued a stark warning about the future dangers posed by AI. Earlier this year, numerous experts, including Elon Musk and Steve Wozniak, signed a letter calling for a six-month freeze on AI experimentation. Dr. Hinton says he would have signed the letter as well, but refrained because he didn't want to criticize companies like Google while still working for one. The proposed pause would halt the development of any model more powerful than GPT-4 and is intended to give AI developers and experts time to come up with safety protocols. Governments would also be expected to draft and pass legislation during that time placing strict boundaries on AI.
The dangers posed by AI are numerous, according to the experts behind the letter. Those concerned point to how rapidly corporations and smaller businesses have adopted ChatGPT and similar programs. They believe GPT-4 and its successors could put millions of people out of work and trigger a global crisis that way. Then there is the belief that creating something far smarter and quicker than even the brightest humans is probably a bad idea. The main goal of the letter is to ensure future AIs are "accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal." Despite the warnings, the pause hasn't been implemented, and AI research continues.