In Brief Last month, I wrote about Mercor’s new benchmark measuring AI agents’ capabilities on professional tasks like law and corporate analysis.

At the time, the scores were pretty dismal, with every major lab scoring under 25%, so we concluded lawyers were safe from AI displacement, at least for now.

But AI capabilities can change a lot in a couple of weeks.

This week’s release of Anthropic’s Opus 4.6 shook up the leaderboards, with Anthropic’s new model scoring just shy of 30% in one-shot trials and an average of 45% when given a few more cracks at the problem. Notably, the release included a bunch of new agentic features, including “agent swarms,” which may have helped with this kind of multistep problem-solving.

Regardless, the score is a huge jump from the previous state-of-the-art, and a sign that progress on foundation models isn’t slowing down.

Mercor CEO Brendan Foody, who was particularly impressed, said, “Jumping from 18.8% in a few months is insane.”

[Image: The APEX-Agents Leaderboard. Image Credits: Mercor (screenshot)]

Thirty percent is still a long way from 100%, so it’s not like lawyers need to be worried about getting replaced by machines next week.

But they should be a lot less confident than they were last month!
