How Close Are We To AI Superintelligence? The 3 Types Of AI, Explained
OpenAI's Sam Altman said in a blog post last year that superintelligent AI could be achievable in the very near future. Other tech gurus agree that artificial superintelligence is coming soon, and many of them seem to think it's going to be great. The chairman of SoftBank and AI super-venture Stargate, Masayoshi Son, said during SoftBank's 2024 general meeting of shareholders that artificial superintelligence could surpass human brainpower by 10,000 times by 2035. At the Stargate announcement at the White House in January 2025, Son speculated that it would "solve the issues that mankind would never ever have thought that we could solve."
But what exactly is artificial superintelligence, and should we be excited or fearful about it? Superintelligent AI would be smarter than humans – smarter than we can even imagine given the limitations of our own intelligence. However, we are still several steps away from getting there. AI is categorized into three types: artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial superintelligence (ASI). At the moment, all AI — from chatbots to self-driving cars — is narrow AI. Before superintelligence is possible, technology needs to reach the next step: artificial general intelligence, that is, AI whose intellect matches a human's.
And while Altman and Son think that AGI and ASI are both imminent and desirable, other experts are skeptical about both the timeframes and the benefits. Among them is Brent Smolinski, AI Leader at Kearney, who wrote in a LinkedIn article, "It is still likely that we may never achieve superintelligence."
Artificial narrow intelligence: What we currently have
Every type of artificial intelligence that you've used so far has been narrow AI. As impressive as the most recent large language models and image generators are, each is still narrowly focused on one specific task, limited to the domain it was designed for. ChatGPT can't drive a car. Tesla's Autopilot can't compose music. AlphaFold can predict protein structures but can't carry on a conversation.
While these AI models excel at their particular functions, they lack the genuine understanding and versatility that humans have. OpenAI's o3 may appear to think, but it doesn't actually analyze or reason the way we do. What narrow AI can do is follow instructions very quickly and generate outputs based on statistical correlations learned during training. A famous early example is Google DeepMind's AlphaGo, which mastered the game of Go and defeated the world champion using moves the champion said he wouldn't have thought of, but AlphaGo couldn't do anything else. Similarly, while you can talk to Gemini Live, Google's conversational AI assistant, as though it were a real person, it is still operating within narrow constraints.
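To make "outputs from statistical correlations" concrete, here is a deliberately tiny, purely illustrative Python sketch. It is a toy bigram model, not how ChatGPT or Gemini actually work; the training sentences and every number in it are made up for illustration.

```python
# Toy illustration only: a tiny bigram "language model" that generates text
# purely from statistical correlations in its training sentences. It has no
# understanding of what the words mean; real LLMs are vastly larger, but the
# basic idea of predicting a likely continuation from learned statistics is
# the same.

import random
from collections import defaultdict

training_text = (
    "narrow ai excels at one task . narrow ai cannot drive a car . "
    "narrow ai cannot compose music ."
)

# Count which word tends to follow which in the training text.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# Generate text by repeatedly sampling a statistically likely next word.
word = "narrow"
output = [word]
for _ in range(8):
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))
```

The point of the toy is that the output can look superficially fluent while the program understands nothing and can do nothing outside the one job it was built for.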
Artificial general intelligence: The next AI frontier
Artificial general intelligence (AGI) is broadly defined as AI that thinks as well as a human. AGI is still hypothetical, and opinions vary about how close we are to achieving it. Elon Musk said in a livestream on X in April 2024 that we'll have AGI "within two years." Dario Amodei, the CEO of Anthropic, made a similar prediction in his October 2024 essay "Machines of Loving Grace," writing, "I think it could come as early as 2026."
Others are more skeptical, giving longer timelines or doubting whether it can happen at all. Mustafa Suleyman, CEO of Microsoft AI, said during an interview with Nilay Patel on The Verge's Decoder podcast, "The uncertainty around this is so high that any categorical declarations just feel sort of ungrounded to me and over the top." Gary Marcus, professor of neural science at New York University, wrote in a recent Fortune article, "Most of the money has been invested on the premise that one of these companies would reach artificial general intelligence, which seems (at least in the near term) increasingly unlikely."
One of the problems with predicting how soon it will happen is that there is disagreement about what AGI actually is and how we'll know when we've achieved it. On its website, OpenAI defines AGI as "a highly autonomous system that outperforms humans at most economically valuable work." That falls short of other definitions of AGI, which include the ability to solve problems with the same flexibility as a human mind, to learn autonomously, and even to be self-aware. While the idea of self-aware AI sounds like science fiction, it still falls within the AGI category. Humans, after all, achieve self-awareness without being superintelligent. Once we've achieved AGI, progression to artificial superintelligence could be extremely quick.
Will AI build the next AI?
Humans might not need to build AI superintelligence. AGI could do it for us. Sam Altman says in an essay posted on his personal site, "AI systems are going to get so good that they help us make better next-generation systems." Leopold Aschenbrenner, author of "Situational Awareness," believes we could go from AGI to ASI "within just a year." He writes, "AI progress won't stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress into 1 year. We would rapidly go from human-level to vastly superhuman AI systems."
This is known as recursive self-improvement: AI builds the next, improved AI, which in turn builds newer and more improved versions of itself. With breakthroughs arriving far faster than humans could manage on their own, the result could be an "intelligence explosion." The phrase was coined by mathematician I. J. Good in his 1965 essay "Speculations Concerning the First Ultraintelligent Machine." He wrote, "an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus, the first ultraintelligent machine is the last invention that man need ever make."
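As a back-of-the-envelope illustration of why that compounding matters, here is a purely hypothetical Python sketch. The starting capability and the 30% gain per generation are arbitrary assumptions chosen for the arithmetic, not figures from Good, Aschenbrenner, or anyone quoted here.

```python
# Toy model of recursive self-improvement: each AI generation designs a
# slightly better successor, so the gains compound rather than add.
# All numbers here are arbitrary illustrative assumptions.

capability = 1.0            # 1.0 = roughly "human-level" on some arbitrary scale
gain_per_generation = 0.3   # assumed 30% improvement each redesign cycle

for generation in range(1, 11):
    capability *= 1 + gain_per_generation
    print(f"Generation {generation:2d}: {capability:.1f}x the starting level")

# Ten compounding steps of 30% land near 13.8x the starting point, whereas
# ten steps of simple addition would only reach 4x. That compounding growth
# is what the "intelligence explosion" argument hinges on.
```

Whether real AI research would actually compound this way is, of course, exactly what the experts quoted above disagree about.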
Artificial superintelligence: Smarter than we can imagine
AGI already sounds like the stuff of science fiction, so what can we expect from superintelligence? ASI would surpass human capabilities and think in ways that we aren't capable of. Nick Bostrom, author of "Superintelligence," defines it as AI that can "greatly outperform the best current human minds." It will think better than us, which is a difficult thing to get our heads around due to, as Bostrom puts it, "our lack of experience with any variations of intelligence quality above the upper end of the present human distribution."
Probably the best example of a fictional artificial superintelligence is Deep Thought from "The Hitchhiker's Guide to the Galaxy," which, when tasked with finding the answer to life, the universe, and everything, proclaimed that it was 42. This illustrates a potential problem with ASI: it might give us answers that our puny little human intelligence is unable to verify. It would be like us trying to explain quadratic equations to a gerbil.
We may not be able to understand its thought processes, but Sam Altman predicts in a post to his blog that we will certainly reap the benefits of superintelligent AI: "Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity." This might work in the same way that a cat can appreciate the benefits of an electric blanket without needing to understand the electrical resistance, heat transfer, and thermal regulation that make it work. Terrence J. Sejnowski, author of "ChatGPT and the Future of AI," wrote an article for Built In in which he hypothesized that "In best-case scenarios, superintelligence could be enormously helpful in advancing our health and wealth while preventing catastrophes created by humans."
How dangerous could artificial superintelligence become?
The best-case scenario is that ASI will use its mighty intelligence to cure cancer, solve global challenges like climate change, and generally help us humans out. However, there is another possible outcome. Roman V. Yampolskiy, professor of computer science at the University of Louisville, says in his book "AI: Unexplainable, Unpredictable, Uncontrollable" that AI has "the potential to cause an existential catastrophe." Similarly, Dan Hendrycks, head of the Center for AI Safety, wrote in his paper "Natural Selection Favors AIs over Humans" that superintelligence "could lead to humanity losing control of its future."
What can be done to safeguard against such a future? An open letter at AItreaty.org calls for "a pause on the advancement of AI capabilities," saying that "Half of AI researchers estimate more than a 10% chance that AI could lead to human extinction or a similarly catastrophic curtailment of humanity's potential." The Future of Life Institute has also published an open letter called "Pause Giant AI Experiments," saying "We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." It has over 30,000 signatories. However, there has been no pause in AI development; OpenAI, for its part, states that "Building safe and beneficial AGI is our mission."
There is no consensus on what AI superintelligence is or how soon – if ever – it will become a reality. There is still time for tech companies to change course. They might even consider whether we need AI superintelligence at all. As Professor Yampolskiy said in a Q&A with the University of Louisville, "We can get most of the benefits we want from narrow AI, systems designed for specific tasks: develop a drug, drive a car. They don't have to be smarter than the smartest of us combined."