A new version of OpenAI’s Codex is powered by a new dedicated chip
On Thursday, OpenAI announced the release of a lightweight version of its agentic coding tool Codex, whose latest model the company launched earlier this month.
To power that inference, OpenAI has brought in a dedicated chip from its hardware partner Cerebras, marking a new level of integration in the company’s physical infrastructure.
OpenAI now calls the new model, Spark, the “first milestone” in that relationship.
The WSE-3 is Cerebras’ third-generation wafer-scale megachip, packing 4 trillion transistors.
Spark is currently available as a research preview for ChatGPT Pro users in the Codex app.
In a tweet ahead of the announcement, CEO Sam Altman seemed to hint at the new model. “We have a special thing launching to Codex users on the Pro plan later today,” Altman tweeted. “It sparks joy for me.” In its official statement, OpenAI emphasized that Spark is designed for the lowest possible latency on Codex.
The company added that Cerebras’ chips excel at assisting “workflows that demand extremely low latency.”
Cerebras has previously announced its intention to pursue an IPO.
“What excites us most about GPT-5.3-Codex-Spark is partnering with OpenAI and the developer community to discover what fast inference makes possible — new interaction patterns, new use cases, and a fundamentally different model experience,” Sean Lie, CTO and co-founder of Cerebras, said in a statement.