OpenAI’s newest LLM, o3, is facing scrutiny after independent tests found it solved far fewer of the toughest math problems than the company first claimed.
When OpenAI unveiled o3 in December, executives said the model could answer “just over a fourth” of the problems in FrontierMath, a notoriously hard set of graduate‑level math puzzles.
The best competitor, they added, was stuck near 2%. “Today, all offerings out there have less than 2%,” Chief Research Officer Mark Chen said during the o3 and o3‑mini livestream. “We’re seeing, with o3 in aggressive test‑time compute settings, we’re able to get over 25%.”
TechCrunch reported that OpenAI obtained that result with a version of o3 that used more computing power than the model the company released last week.
On Friday, the research institute Epoch AI, which created FrontierMath, published its own score for the public o3.
OpenAI has released o3, their highly anticipated reasoning model, along with o4-mini, a smaller and cheaper model that succeeds o3-mini.
We evaluated the new models on our suite of math and science benchmarks. Results in thread! pic.twitter.com/5gbtzkEy1B
— Epoch AI (@EpochAIResearch) April 18, 2025
Using an updated 290‑question edition of the benchmark, Epoch put the model at about 10%.
The result does match a lower‑bound figure in OpenAI’s December technical paper, and Epoch cautioned that the discrepancy could have several explanations.
“The difference between our results and OpenAI’s might be due to OpenAI evaluating with a more powerful internal scaffold, using more test‑time computing, or because those results were run on a different subset of FrontierMath,” Epoch wrote.
FrontierMath is designed to measure progress toward advanced mathematical reasoning. The December 2024 public set contained 180 problems, while the February 2025 private update expanded the pool to 290.
Shifts in the question list and the amount of computing power allowed at test time can cause large swings in reported percentages.
OpenAI confirmed the public o3 model uses less compute than the demo version
Evidence that the commercial o3 falls short of the demo version also came from tests by the ARC Prize Foundation, which evaluated an earlier, larger build. The public release “is a different model… tuned for chat/product use,” the ARC Prize Foundation posted on X, adding that “all released o3 compute tiers are smaller than the version we benchmarked.”
OpenAI employee Wenda Zhou offered a similar explanation during a livestream last week. The production system, he said, was “more optimized for real‑world use cases” and speed. “We’ve done [optimizations] to make the model more cost efficient [and] more useful in general,” Zhou said, while acknowledging possible benchmark “disparities.”
Two smaller models from the company, o3‑mini‑high and the newly announced o4‑mini, already beat o3 on FrontierMath, and OpenAI says a better o3‑pro variant will arrive in the coming weeks.
Still, the episode shows how benchmark headlines can be misleading. In January, Epoch was criticized for delaying disclosure of OpenAI funding until after o3’s debut. More recently, Elon Musk’s startup xAI was accused of presenting charts that overstated the capabilities of its Grok 3 model.
Industry watchers say such benchmark controversies are becoming increasingly common as companies race to capture headlines with new models.