Epoch AI Launches FrontierMath AI Benchmark to Test Capabilities of AI Models

Epoch AI, a California-based research institute, launched a new artificial intelligence (AI) benchmark last week. Dubbed FrontierMath, the benchmark tests large language models (LLMs) on their reasoning and mathematical problem-solving capabilities. The firm argues that existing math benchmarks are of limited use due to factors such as data contamination and the near-perfect scores AI models achieve on them. Epoch AI claims that even the leading LLMs have scored less than two percent on the new benchmark.

Epoch AI Launches FrontierMath Benchmark

In a post on X (formerly known as Twitter), the AI firm explained that it collaborated with more than 60 mathematicians to create hundreds of original, unpublished math problems. Epoch AI claims these questions would take even expert mathematicians hours to solve. The company cited the limitations of existing benchmarks such as GSM8K and MATH, on which AI models generally score highly, as the reason for developing the new benchmark.

The company claimed that the high scores achieved by LLMs are largely due to data contamination: the questions had, in some form, already made their way into the models' training data, allowing the models to solve them easily.

FrontierMath addresses this by including new problems that have not been published anywhere, mitigating the risk of data contamination. The benchmark also spans a wide range of questions, including computationally intensive problems in number theory, real analysis, and algebraic geometry, as well as topics such as Zermelo–Fraenkel set theory. The AI firm says all the questions are “guess-proof”, meaning they cannot be solved accidentally without strong reasoning.

Epoch AI highlighted that benchmarks meant to measure AI aptitude should be built around creative problem-solving, where the model has to sustain reasoning over multiple steps. Notably, many industry veterans believe that existing benchmarks are insufficient to accurately measure how advanced an AI model is.

Responding to the new benchmark in a post, Noam Brown, an OpenAI researcher who worked on the company’s o1 model, welcomed it, saying, “I love seeing a new eval with such low pass rates for frontier models.”
