OpenAI is facing diminishing returns with its latest AI model while navigating the pressures of recent investments.
According to The Information, OpenAI's next AI model, codenamed Orion, is delivering smaller performance gains than its predecessors.
In employee testing, Orion reportedly achieved the performance level of GPT-4 after completing just 20% of its training. However, the transition from GPT-4 to the anticipated GPT-5 is said to exhibit smaller quality improvements than the leap from GPT-3 to GPT-4.
"Some researchers at the company believe Orion isn't reliably better than its predecessor in handling certain tasks," the report states. "Orion performs better at language tasks but may not outperform previous models at tasks such as coding, according to an OpenAI employee."
Early stages of AI training usually yield the most significant improvements, while subsequent phases typically result in smaller performance gains. Consequently, the remaining 80% of training is unlikely to deliver advancements on par with previous generational improvements.
This situation with its latest AI model emerges at a pivotal time for OpenAI, following a recent funding round that saw the company raise $6.6 billion. With this financial backing comes increased expectations from investors, as well as technical challenges that complicate traditional scaling methodologies in AI development.
If these early versions do not meet expectations, OpenAI's future fundraising prospects may not attract the same level of interest.
The limitations highlighted in the report underline a significant challenge confronting the entire AI industry: the diminishing availability of high-quality training data and the necessity to maintain relevance in an increasingly competitive field.
According to a paper (PDF) published in June, AI firms will deplete the pool of publicly available human-generated text data between 2026 and 2032. The Information notes that developers have "largely squeezed as much out of" the data that has enabled the rapid AI advancements we've seen in recent years.
To address these challenges, OpenAI is fundamentally rethinking its AI development strategy.
"In response to the recent challenge to training-based scaling laws posed by slowing GPT improvements, the industry appears to be shifting its effort to improving models after their initial training, potentially yielding a different type of scaling law," explains The Information.
As OpenAI navigates these challenges, the company must balance innovation with practical application and investor expectations. However, the ongoing exodus of leading figures from the company won't help matters.