Elon Musk, Steve Wozniak Join AI Experts In Pushing To 'Pause Giant AI Experiments'

The race to build a better AI product is white hot, but some of the biggest names in technology and academic research are now demanding a pause-and-reflect moment. SpaceX chief Elon Musk, Apple co-founder Steve Wozniak, CEOs of prominent AI labs, academics, and scientists have signed an open letter titled "Pause Giant AI Experiments: An Open Letter," which calls on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."


"Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?" asks the letter. OpenAI chief Sam Altman recently said that ChatGPT is going to eliminate a lot of current jobs. Altman didn't sign the letter, nor did Meta's chief AI scientist Yann LeCun, who disagreed with the whole premise. Interestingly, there is not a single signatory from OpenAI, which can be credited with starting the mass AI frenzy with popular tools like ChatGPT and Dall-E image generator.

Notably, the open letter cites OpenAI's own recent statement that, at some point, it may be important to seek independent review before training future systems. The core objective of the call is to ensure that further AI development happens only once stakeholders are confident that the effects of these systems will be positive and their risks manageable.


Should we risk loss of control of our civilization?

The letter sets GPT-4 as the threshold for AI systems, which isn't surprising. GPT-4, aside from being faster and smarter than its predecessors, is also multimodal, and its pace of adoption by well-known consumer-facing brands has been unprecedented. According to the open letter, the temporary pause on developing AI more powerful than GPT-4 should be public and verifiable. If such an industry-wide pause can't be enacted quickly, the backup recommendation is that governments should step in and institute a moratorium.


It further states that during the pause, AI labs and independent experts should come together to develop shared AI safety protocols with complete transparency and independent audits. Simultaneously, governments must set regulatory guidelines, establish laws to decide liability for AI-caused harm, and address the political and economic disruptions that uncontrolled AI development could bring. The ultimate goal is to create AI models that are "accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal."

Musk, one of the most notable signatories and a donor and advisor to the Future of Life Institute, which published the letter, has a complicated history with AI and OpenAI. The Tesla chief was one of OpenAI's early backers, and according to a Semafor report, he wanted to buy the company but walked away after being told he couldn't run it.
