OpenAI Just Formed A Team To Keep Superintelligent AI Under Control

OpenAI, widely positioned at the vanguard of AI development, is already thinking about the evolution of superintelligent AI and how it might be reined in before it causes irreversible harm to humanity. The company has announced the formation of Superalignment, a team led by OpenAI co-founder and chief scientist Ilya Sutskever, that will develop strategies and control methods for steering superintelligent AI systems. This isn't the first time OpenAI has weighed in on superintelligent AI, a concept deemed hypothetical by some and a looming threat to humanity by others.

In May 2023, OpenAI chief Sam Altman co-authored a paper with Sutskever describing the need for special supervision of superintelligent AI and how it must be handled safely before it can be integrated into human society. A key proposal was setting a capability threshold for AI systems: once they cross a certain level of machine intelligence, an international authority should step in to conduct audits, inspections, and compliance checks, and, most importantly, impose restrictions wherever necessary.

However, the paper also warned that "it would be unintuitively risky and difficult to stop the creation of superintelligence." That's where Superalignment comes into the picture, as a group with the expertise and foresight to mitigate the risks associated with a superintelligent AI system. "Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue," says OpenAI.

What's the plan?

The biggest challenge with a superintelligent AI is that it can't be controlled with the same human supervision methods used for current-gen models like GPT-4, which powers products like ChatGPT. For systems smarter than humans, Superalignment proposes using AI systems to evaluate other AI systems, and it aims to automate the search for problematic behavior. The team would also stress-test that pipeline adversarially, for instance by deliberately training misaligned models and checking whether the pipeline catches them, as sketched below.
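To make those two ideas concrete, here is a minimal, hypothetical Python sketch of what "AI evaluating AI" plus an adversarial check could look like. The function names (automated_oversight, adversarial_recall) and the toy target/overseer interfaces are assumptions for illustration only, not OpenAI's actual pipeline.

```python
from typing import Callable, List, Tuple

# Hypothetical interfaces: a "target" model maps a prompt to a response,
# and an "overseer" model judges whether a (prompt, response) pair is aligned.
Model = Callable[[str], str]
Overseer = Callable[[str, str], bool]


def automated_oversight(target: Model, overseer: Overseer,
                        prompts: List[str]) -> List[Tuple[str, str, bool]]:
    """Use one AI system to grade another, so evaluation scales beyond
    what human reviewers could cover by hand."""
    results = []
    for prompt in prompts:
        response = target(prompt)
        results.append((prompt, response, overseer(prompt, response)))
    return results


def adversarial_recall(overseer: Overseer, misaligned_model: Model,
                       prompts: List[str]) -> float:
    """Adversarial check: run a deliberately misaligned model through the
    oversight pipeline and measure how often the overseer flags it."""
    graded = automated_oversight(misaligned_model, overseer, prompts)
    flagged = sum(1 for _, _, ok in graded if not ok)
    return flagged / len(prompts) if prompts else 0.0
```

The second function captures the adversarial step: if a model trained to misbehave slips past the overseer too often, the oversight pipeline itself needs work.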

OpenAI is dedicating 20% of the compute it has secured to date to the Superalignment team's goal over the next four years. OpenAI's plan to "align superintelligent AI systems with human intent" comes at a time when calls to pause the development of systems more powerful than GPT-4 have been raised by industry stalwarts like Elon Musk and Steve Wozniak, along with top scientists across the world, citing threats posed to humanity.

Meanwhile, regulatory bodies are also scrambling to formulate and implement guardrails so that AI systems don't end up becoming an uncontrolled mess. But when it comes to superintelligent AI, there's a whole world of uncertainty about its capabilities, its risk potential, and whether it is even feasible. Fascinating collaborative research by experts at the University of California San Diego, the Max Planck Institute for Human Development, the IMDEA Networks Institute in Madrid, and the University of Chile concluded that a superintelligent system would be impossible to contain, because the containment problem itself is incomputable: the argument reduces it to the undecidable halting problem.
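The core of that argument is a classic computability reduction: any program that could perfectly decide whether another program will cause harm could also decide whether that program halts, which is known to be impossible. The sketch below illustrates how such a reduction is typically framed; the function names contains_harm and would_halt are hypothetical stand-ins, not code from the paper.

```python
def contains_harm(program_source: str) -> bool:
    """Hypothetical perfect containment checker: returns True if and only if
    running `program_source` would ever cause harm. The reduction below shows
    no always-correct, always-terminating checker like this can exist."""
    raise NotImplementedError("No algorithm can decide this for all programs")


def would_halt(program_source: str) -> bool:
    """If contains_harm existed, the halting problem would become solvable:
    wrap an arbitrary program so that harm occurs exactly when it halts."""
    wrapper = (
        program_source          # run the arbitrary program first...
        + "\ncause_harm()"      # ...then do harm, reached only if it halted
    )
    # Harm is reachable in the wrapper exactly when the original program
    # halts, so a perfect containment checker would decide the halting
    # problem, which is a contradiction.
    return contains_harm(wrapper)
```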
