OpenAI Shifts Attention to Superintelligence in 2025

News Room

OpenAI has announced that its primary focus for the coming year will be on developing “superintelligence,” according to a blog post from CEO Sam Altman. He describes superintelligence as AI with greater-than-human capabilities.

While OpenAI’s current suite of products has a vast array of capabilities, Altman said that superintelligence would enable users to do “anything else.” He highlighted accelerating scientific discovery as the primary example, which he believes will lead to the betterment of society.

“This sounds like science fiction right now, and somewhat crazy to even talk about it. That’s alright—we’ve been there before and we’re OK with being there again,” he wrote.

The change of direction has been spurred by Altman’s confidence that his company now knows “how to build AGI as we have traditionally understood it.” AGI, or artificial general intelligence, is typically defined as a system that matches human capabilities, whereas superintelligence exceeds them.

SEE: OpenAI’s Sora: Everything You Need to Know

Altman has eyed superintelligence for years — but concerns exist

OpenAI has referred to superintelligence for several years when discussing the risks of AI systems and the challenge of aligning them with human values. In July 2023, OpenAI announced it was hiring researchers to work on containing superintelligent AI.

The team would reportedly devote 20% of OpenAI’s total computing power to training what it called a human-level automated alignment researcher, intended to keep future AI products in line. Concerns around superintelligent AI stem from the possibility that such a system could prove impossible to control and may not share human values.

“We need scientific and technical breakthroughs to steer and control AI systems much smarter than us,” wrote OpenAI Head of Alignment Jan Leike and co-founder and Chief Scientist Ilya Sutskever in a blog post at the time.

SEE: OpenAI and Anthropic Sign Deals With U.S. AI Safety Institute

But four months after creating the team, another company post revealed they “still [did] not know how to reliably steer and control superhuman AI systems” and didn’t have a way of “preventing [a superintelligent AI] from going rogue.”

In May 2024, OpenAI’s superintelligence safety team was disbanded, and several senior personnel, including Jan Leike and the team’s co-lead Ilya Sutskever, left over the concern that “safety culture and processes have taken a backseat to shiny products.” The team’s work was absorbed by OpenAI’s other research efforts, according to Wired.

Despite this, Altman highlighted the importance of safety to OpenAI in his blog post. “We continue to believe that the best way to make an AI system safe is by iteratively and gradually releasing it into the world, giving society time to adapt and co-evolve with the technology, learning from experience, and continuing to make the technology safer,” he wrote.

“We believe in the importance of being world leaders on safety and alignment research, and in guiding that research with feedback from real world applications.”

The path to superintelligence may still be years away

There is disagreement about how long it will take for superintelligence to be achieved. OpenAI’s November 2023 blog post said it could develop within a decade, but nearly a year later, Altman said it could be “a few thousand days away.”

However, Brent Smolinski, IBM VP and global head of Technology and Data Strategy, called this “totally exaggerated” in a company post from September 2024. “I don’t think we’re even in the right zip code for getting to superintelligence,” he said.

AI still requires far more data than humans to learn a new capability, is limited in the scope of its capabilities, and does not possess consciousness or self-awareness, which Smolinski views as a key indicator of superintelligence.

He also claimed that quantum computing could be the only way to unlock AI that surpasses human intelligence. At the start of the decade, IBM predicted that quantum computing would begin to solve real business problems before 2030.

SEE: Breakthrough in Quantum Cloud Computing Ensures its Security and Privacy

Altman predicts AI agents will join the workforce in 2025

AI agents are semi-autonomous generative AI systems that can chain together tasks or interact with applications to carry out instructions or make decisions in an unstructured environment. For example, Salesforce uses AI agents to call sales leads.
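To make that pattern concrete, below is a minimal sketch of the loop such an agent runs: a model proposes an action, the agent executes it against a tool or application, and the result is fed back until the model declares the task done. The `call_model` stub, the tool names, and the text format for decisions are hypothetical placeholders for illustration, not any specific vendor’s API.

```python
# Minimal sketch of an agentic loop. `call_model` is a hypothetical
# stand-in for a real LLM API call; the tools are toy examples.
from typing import Callable

def lookup_lead(name: str) -> str:
    # Toy tool: pretend to fetch a sales lead from a CRM.
    return f"Lead {name}: phone +1-555-0100, status: uncontacted"

def log_call(note: str) -> str:
    # Toy tool: pretend to record the outcome of a call.
    return f"Logged: {note}"

TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_lead": lookup_lead,
    "log_call": log_call,
}

def call_model(history: list[str]) -> str:
    # Hypothetical model stub. A real agent would send `history` to an
    # LLM and parse its reply; this scripts two tool calls, then stops.
    step = sum(1 for line in history if line.startswith("observation:"))
    if step == 0:
        return "tool: lookup_lead | arg: Acme Corp"
    if step == 1:
        return "tool: log_call | arg: called Acme Corp, left voicemail"
    return "done: lead contacted and call logged"

def run_agent(task: str, max_steps: int = 5) -> str:
    # The agent loop: ask the model for an action, run the chosen tool,
    # feed the observation back, and repeat until the model says "done".
    history = [f"task: {task}"]
    for _ in range(max_steps):
        decision = call_model(history)
        if decision.startswith("done:"):
            return decision
        name_part, arg_part = decision.split("|")
        name = name_part.split(":", 1)[1].strip()
        arg = arg_part.split(":", 1)[1].strip()
        history.append(f"observation: {TOOLS[name](arg)}")
    return "stopped: step limit reached"

print(run_agent("follow up with the Acme Corp sales lead"))
```

The key design point is the feedback loop: unlike a single chat completion, the agent’s next action depends on the observations returned by its earlier tool calls, which is what lets it operate in an unstructured environment.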

TechRepublic predicted at the end of 2024 that the use of AI agents would surge in 2025. Altman echoed this in his blog post, saying “we may see the first AI agents ‘join the workforce’ and materially change the output of companies.”

SEE: IBM: Enterprise IT Facing Imminent AI Agent Revolution

According to a Gartner research paper, software development will be the first industry in which agents dominate. “Existing AI coding assistants gain maturity, and AI agents provide the next set of incremental benefits,” the authors wrote.

By 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, according to the Gartner paper. A fifth of online store interactions and at least 15% of day-to-day work decisions will be conducted by agents by that year.

“We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word,” Altman wrote. “We’re pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important.”
