Qodo raises $70M for code verification as AI coding scales
Maor Ventures, Phoenix Venture Partners, S Ventures, Square Peg, Susa Ventures, TLV Partners, Vine Ventures, Peter Welinder (OpenAI), and Clara Shih (Meta) also joined the round. Qodo aims to serve as a trust layer for AI-generated code as enterprises accelerate adoption of tools like OpenClaw and Claude Code.
While most AI review tools focus on what changed, Qodo focuses on how code changes affect entire systems, factoring in organizational standards, historical context, and risk tolerance to help companies manage AI-generated code more confidently.
At Mellanox, where he worked on automating hardware verification using machine learning, he realized that “generating systems and verifying systems require very different approaches (different tools, different thinking).” Later, at Alibaba’s Damo Academy, he saw AI evolve toward systems capable of reasoning over human language.
By 2021-2022, just ahead of GPT-3.5, it became clear to him that AI would generate a large share of the world’s content — especially code — reinforcing his view that code generation and verification would require fundamentally different systems.
“Code generation companies are largely built around LLMs. But for code quality and governance, LLMs alone aren’t enough,” Friedman said.
“It depends on organizational standards, past decisions, and tribal knowledge. An LLM can’t fully understand that context. It’s like taking a great engineer from one company and asking them to review code at another — they lack the internal context.” Companies such as OpenAI and Anthropic are helping shape the broader AI narrative, including in adjacent areas like code review, but they are largely focused on building features rather than end-to-end solutions, Friedman explained.
Qodo is leaning into performance to stand out in a crowded market.
The startup recently ranked No. 1 on Martian’s Code Review Bench, scoring 64.3% — more than 10 points ahead of the next competitor and 25 points ahead of Claude Code Review. The benchmark highlights its ability to catch tricky logic bugs and cross-file issues without overwhelming developers with noise.
In the past month, it has launched Qodo 2.0, a multi-agent code review system now leading current benchmarks, and introduced tools that learn each organization’s definition of code quality. The company is already working with major enterprises such as Nvidia, Walmart, Red Hat, Intuit, and Texas Instruments, as well as high-growth firms like Monday.

“Every year has had a defining moment — from Copilot to ChatGPT to full task automation,” Friedman said. That’s what Qodo is built for.