Codex-Spark (GPT-5.3-Codex-Spark) Research Preview: 1000+ Tokens/Second Coding Model
OpenAI
OpenAI released GPT-5.3-Codex-Spark as a research preview for ChatGPT Pro users in the Codex app, CLI, and VS Code extension. The model is optimized to exceed 1000 tokens per second with a 128k context window, enabling real-time interruption and redirection while the model is generating. API access is rolling out to a small set of design partners.
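The interrupt-and-redirect workflow can be sketched generically. This is a minimal illustration of consuming a fast token stream and cancelling mid-generation; all names (`fake_token_stream`, `stream_until_interrupted`) are hypothetical stand-ins, not the Codex or OpenAI API.

```python
import threading
import time

def fake_token_stream():
    """Stand-in for a model's streaming output (hypothetical; not the Codex API)."""
    for token in ["def ", "add", "(a, ", "b):", "\n    ", "return ", "a + b", "\n"]:
        time.sleep(0.001)  # at 1000+ tok/s, each token arrives in roughly a millisecond
        yield token

def stream_until_interrupted(stream, stop_event):
    """Consume tokens until stop_event is set, then return what arrived so far."""
    received = []
    for token in stream:
        if stop_event.is_set():
            break  # user interrupted mid-generation to redirect the model
        received.append(token)
    return "".join(received)

# Simulate a user interrupting shortly after generation starts.
stop = threading.Event()
threading.Timer(0.003, stop.set).start()
partial = stream_until_interrupted(fake_token_stream(), stop)

# With no interruption, the full generation is collected.
complete = stream_until_interrupted(fake_token_stream(), threading.Event())
```

At 1000+ tokens per second the gap between tokens is short enough that an interruption lands mid-thought rather than after a long wait, which is what makes the redirect loop feel conversational.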
Why it matters
The dramatic speed increase over standard Codex throughput makes true real-time pair programming viable: developers can interrupt, steer, and rapidly iterate without waiting for a generation to complete.
Importance: 4/5
OpenAI frontier coding model — 1000+ t/s enables real-time pair programming, research preview for Pro users