Tweet analysis: GPT-5.3-Codex shows improved token efficiency, better steerability, and live updates during tasks. Sentiment: 57.11% supportive, 17.51% critical.
GPT-5.3-Codex is here!
* Best coding performance (57% SWE-Bench Pro, 76% TerminalBench 2.0, 64% OSWorld)
* Mid-task steerability and live updates during tasks
* Faster: less than half the tokens of 5.2-Codex for the same tasks, and >25% faster per token
* Good computer use
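A rough back-of-envelope reading of the two efficiency claims: if a task needs half as many tokens and each token is generated 25% faster, the gains compound. This sketch assumes the two figures are independent and multiplicative, which the announcement does not state; the numbers are illustrative, not measurements.

```python
# Hedged estimate: combine the "half the tokens" and ">25% faster per token"
# claims into an implied end-to-end speedup. Illustrative only.

def effective_speedup(token_ratio: float, per_token_speedup: float) -> float:
    """End-to-end speedup when a task emits `token_ratio` as many tokens
    and each token is generated `per_token_speedup` times faster."""
    return (1.0 / token_ratio) * per_token_speedup

# "Less than half the tokens" -> token_ratio < 0.5; ">25% faster" -> 1.25x
speedup = effective_speedup(token_ratio=0.5, per_token_speedup=1.25)
print(f"Implied end-to-end speedup vs 5.2-Codex: at least {speedup:.2f}x")
```

Under those assumptions the headline figures would imply tasks completing at least 2.5x faster overall, which is why replies focus as much on cost and latency as on benchmark scores.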
Real-time analysis of public opinion and engagement
What the community is saying — both sides
Supporters are hailing it as the new go-to model for coding thanks to the launch timing and immediate performance wins.
Mid-task steerability is a standout; builders say being able to course-correct live turns agents from expensive autocompletes into practical teammates.
Faster per-token generation and using less than half the tokens translate into dramatically lower costs and smoother interactive workflows.
The OSWorld score and “good computer use” promise agents that can not only write code but also verify and operate systems.
The community is treating Opus 4.6 vs Codex 5.3 as a healthy rivalry; many celebrate the competition as accelerating real-world progress and tooling.
The biggest open question is availability: when will the API and IDE plugins arrive? Integration, pricing, rate limits, and SDK access are top asks before full adoption.
Skeptics want stress tests — people want to see these gains hold up on messy, large codebases, long refactors, and edge-case workflows before calling it production-ready.
Some argue the real shift is toward agentic workflows and agent orchestration rather than pure typing.
Builders are already mobilizing: promises to benchmark, hack in VSCode/Cursor, and redeploy workflows at hackathons signal fast adoption enthusiasm.
The next phase will be intense real-world testing, API/IDE rollouts, and measuring stability under scale to turn these impressive numbers into reliable developer infrastructure.
Many replies voice fury that GPT-4o is being sidelined, accusing OpenAI of ignoring its user base and breaking trust; hashtags like #keep4o and calls for refunds or boycotts appear repeatedly.
A large group mourns the loss of a model they describe as a companion, arguing that emotional resonance and everyday conversational usefulness matter more than raw speed or coding metrics.
Users call out the focus on SWE-Bench and speed, insisting those numbers hide real problems: hallucinations, flaky imports, and failures in messy, real-world flows.
Developers report Codex 5.3 producing broken imports, nonexistent APIs, and weak JavaScript/React support in practice, saying it sometimes regresses compared with older versions.
Many say they’re switching to Anthropic’s Claude Opus (4.5/4.6) or other tools, praising Opus for fewer “confident wrong” outputs and better coding reliability.
Repliers want clear communication, community updates, and options (keep 4o alive, legacy tiers, or open-source paths) instead of surprise removals and marketing noise.
There’s a prevalent theme that OpenAI is optimizing for corporate productivity and cost-cutting rather than user care, which some describe as turning companions into “tools” or “obedient” assistants.
Protests, hashtags, cancellation threats, and campaigns to preserve 4o are common; people say this won’t fade without concrete answers or concessions.
Replies include heartfelt stories (grief support, creative work) and multilingual posts, showing the decision’s impact across cultures and user types.
Alongside serious criticism there’s a stream of mockery, roasts, and jokes framing the exchange as a “rap beef” or stage spectacle, underscoring how public perception is shifting.
Most popular replies, ranked by engagement
@sama Anthropic drops a new Claude model… OpenAI 15 minutes later: https://t.co/5ZK7iGIPT4
@sama Both Opus and Codex on the same day 😭 https://t.co/YMR3reTfD8
@sama What the hell https://t.co/3JD3xrKJTg
@sama Marry Codex if you want, let us normal humans #keep4o
@sama Opus 4.6: *exists* OpenAI: "Hold my benchmarks" Competition is beautiful.
@sama Two great LLMs launched in the last two hours. What a time to be alive https://t.co/7VQZkwBt8G