OpenAI chief Sam Altman has declared that humanity has crossed into the era of artificial superintelligence—and there’s no turning back.
“We are past the event horizon; the takeoff has started,” Altman states. “Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.”
The lack of visible signs – robots aren’t yet wandering our high streets, disease remains unconquered – masks what Altman characterises as a profound transformation already underway. Behind closed doors at tech firms like his own, systems are emerging that can outmatch general human intellect.
“In some big sense, ChatGPT is already more powerful than any human who has ever lived,” Altman claims, noting that “hundreds of millions of people rely on it every day and for increasingly important tasks.”
This casual observation hints at a troubling reality: such systems already wield enormous influence, with even minor flaws potentially causing widespread harm when multiplied across their vast user base.
The road to superintelligence
Altman outlines a timeline towards superintelligence that might leave many readers checking their calendars.
This year, he writes, has already seen “the arrival of agents that can do real cognitive work,” fundamentally transforming software development. Next year could bring “systems that can figure out novel insights”, meaning AI that generates original discoveries rather than merely processing existing knowledge. By 2027, we might see “robots that can do tasks in the real world.”
Each prediction seems to leap beyond the previous one in capability, drawing a line that points unmistakably toward superintelligence—systems whose intellectual capacity vastly outstrips human potential across most domains.
“We do not know how far beyond human-level intelligence we can go, but we are about to find out,” Altman states.
This progression has sparked fierce debate among experts, with some arguing these capabilities remain decades away. Yet Altman’s timeline suggests OpenAI has internal evidence, not yet public, for this accelerated path.
A feedback loop that changes everything
What makes current AI development uniquely concerning is what Altman calls a “larval version of recursive self-improvement”—the ability of today’s AI to help researchers build tomorrow’s more capable systems.
“Advanced AI is interesting for many reasons, but perhaps nothing is quite as significant as the fact that we can use it to do faster AI research,” he explains. “If we can do a decade’s worth of research in a year, or a month, then the rate of progress will obviously be quite different.”
This acceleration compounds as multiple feedback loops intersect. Economic value drives infrastructure development, which enables more powerful systems, which generate more economic value. Meanwhile, the creation of physical robots capable of manufacturing more robots could create another explosive cycle of growth.
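To make that compounding concrete, consider a minimal toy model (our illustration; the speed-up factor and time horizon are assumptions, not figures from Altman’s essay) in which each calendar year of AI-assisted work multiplies the effective speed of research for the next:

```python
# Toy model of compounding AI research acceleration.
# Illustrative only: the yearly speed-up and horizon are assumptions,
# not figures from Altman's essay.

def total_research_years(calendar_years: int, yearly_speedup: float) -> float:
    """Research-years completed when each year's output multiplies
    the effective research speed for the following year."""
    speed = 1.0   # research-years completed per calendar year
    total = 0.0
    for _ in range(calendar_years):
        total += speed
        speed *= yearly_speedup  # feedback loop: faster AI -> faster AI research
    return total

# A 10x yearly speed-up ("a decade's worth of research in a year") compounds
# to over a billion research-years within ten calendar years.
print(total_research_years(10, 10.0))  # ≈ 1.1e9
```

The specific numbers matter less than the shape of the curve: even a modest yearly multiplier produces the kind of runaway growth Altman is describing.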
“The rate of new wonders being achieved will be immense,” Altman predicts. “It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonisation the next year.”
Such statements would sound like hyperbole from almost anyone else. Coming from the man overseeing some of the most advanced AI systems on the planet, they demand at least some consideration.
Living alongside superintelligence
Despite the scale of the upheaval he predicts, Altman believes many aspects of human life will retain their familiar contours. People will still form meaningful relationships, create art, and enjoy simple pleasures.
But beneath these constants, society faces profound disruption. “Whole classes of jobs” will disappear—potentially at a pace that outstrips our ability to create new roles or retrain workers. The silver lining, according to Altman, is that “the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before.”
For those struggling to imagine this future, Altman offers a thought experiment: “A subsistence farmer from a thousand years ago would look at what many of us do and say we have fake jobs, and think that we are just playing games to entertain ourselves since we have plenty of food and unimaginable luxuries.”
Our descendants may view our most prestigious professions with similar bemusement.
The alignment problem
Amid these predictions, Altman identifies a challenge that keeps AI safety researchers awake at night: ensuring superintelligent systems remain aligned with human values and intentions.
Altman stresses the need to solve “the alignment problem, meaning that we can robustly guarantee that we get AI systems to learn and act towards what we collectively really want over the long-term”. He contrasts this with social media algorithms that maximise engagement by exploiting psychological vulnerabilities.
This isn’t merely a technical issue but an existential one. If superintelligence emerges without robust alignment, the consequences could be devastating. Yet defining “what we collectively really want” will be almost impossible in a diverse global society with competing values and interests.
“The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better,” Altman urges.
OpenAI is building a global brain
Altman has repeatedly characterised what OpenAI is building as “a brain for the world.”
This isn’t meant metaphorically. OpenAI and its competitors are creating cognitive systems intended to integrate into every aspect of human civilisation—systems that, by Altman’s own admission, will exceed human capabilities across domains.
“Intelligence too cheap to meter is well within grasp,” Altman states, suggesting that superintelligent capabilities will eventually become as ubiquitous and affordable as electricity.
For those dismissing such claims as science fiction, Altman offers a reminder that merely a few years ago, today’s AI capabilities seemed equally implausible: “If we told you back in 2020 we were going to be where we are today, it probably sounded more crazy than our current predictions about 2030.”
As the AI industry continues its march toward superintelligence, Altman’s closing wish – “May we scale smoothly, exponentially, and uneventfully through superintelligence” – sounds less like a prediction and more like a prayer.
While timelines may (and will) be disputed, the OpenAI chief makes clear that the race toward superintelligence isn’t coming; it’s already underway. Humanity must now grapple with what that means.