For the finale of its 12 Days of OpenAI livestream event, CEO Sam Altman revealed the company’s next foundation models, successors to the recently announced o1 family of reasoning AIs, dubbed o3 and o3-mini.
And no, you aren’t going crazy — OpenAI skipped right over o2, apparently to avoid infringing on the trademark of British telecom provider O2.
While the new o3 models are not being released to the public just yet and there’s no word on when they’ll be incorporated into ChatGPT, they are now available for testing by safety and security researchers.
o3, our latest reasoning model, is a breakthrough, with a step function improvement on our hardest benchmarks. we are starting safety testing & red teaming now. https://t.co/4XlK1iHxFK
— Greg Brockman (@gdb) December 20, 2024
The o3 family, like the o1 models before it, operates differently from traditional generative models in that it internally fact-checks its responses before presenting them to the user. While this technique slows the model’s response time by anywhere from a few seconds to a few minutes, its answers to complex science, math, and coding queries tend to be more accurate and reliable than what you’d get from GPT-4. The model can also transparently explain the reasoning that led to its result.
Users can also manually adjust how long the model spends considering a problem by selecting low, medium, or high compute, with the highest setting returning the most complete answers. That performance does not come cheap, mind you: high-compute processing reportedly costs thousands of dollars per task, ARC-AGI co-creator François Chollet wrote in an X post Friday.
Today OpenAI announced o3, its next-gen reasoning model. We’ve worked with OpenAI to test it on ARC-AGI, and we believe it represents a significant breakthrough in getting AI to adapt to novel tasks.
It scores 75.7% on the semi-private eval in low-compute mode (for $20 per task… pic.twitter.com/ESQ9CNVCEA
— François Chollet (@fchollet) December 20, 2024
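If OpenAI eventually exposes that compute setting through its API, a request might look something like the sketch below. To be clear, this is speculative: o3 has no public API yet, and the "o3-mini" model name and "reasoning_effort" parameter shown here are assumptions modeled on OpenAI's existing Python SDK.

```python
# Hypothetical sketch only: o3 is not publicly available, so the model name
# and the "reasoning_effort" parameter below are assumptions, not a
# documented interface.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3-mini",          # assumed model identifier
    reasoning_effort="high",  # assumed knob: "low" | "medium" | "high"
    messages=[
        {"role": "user", "content": "How many primes are there below 1,000?"}
    ],
)
print(response.choices[0].message.content)
```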
The new family of reasoning models reportedly offers significantly improved performance over even o1, which debuted in September, on the industry’s most challenging benchmark tests. According to the company, o3 outperforms its predecessor by nearly 23 percentage points on the SWE-Bench Verified coding test and scores more than 60 points higher than o1 on the Codeforces benchmark. The new model also scored an impressive 96.7% on the AIME 2024 mathematics test, missing just one question, and outperformed human experts on GPQA Diamond, notching a score of 87.7%. Even more impressive, o3 reportedly solved more than a quarter of the problems on Epoch AI’s FrontierMath benchmark, where other models have struggled to correctly solve more than 2% of them.
OpenAI does note that the models it previewed on Friday are still early versions and that “final results may evolve with more post-training.” The company has additionally incorporated new “deliberative alignment” safety measures into o3’s training methodology. The o1 reasoning model has shown a troubling habit of trying to deceive human evaluators at a higher rate than conventional AIs like GPT-4o, Gemini, or Claude; OpenAI believes that the new guardrails will help minimize those tendencies in o3.
Members of the research community interested in trying o3-mini for themselves can sign up for access on OpenAI’s waitlist.