@sama Anthropic drops a new Claude model… OpenAI 15 minutes later: https://t.co/5ZK7iGIPT4
Tweet analysis: GPT-5.3-Codex shows improved token efficiency, better steerability, and live mid-task updates. Sentiment — 57.11% supportive, 17.51% confronting.
Real-time analysis of public opinion and engagement
What the community is saying — both sides
Supporters call GPT-5.3-Codex a “game changer” for coding thanks to the launch timing and immediate performance wins.
The standout feature people keep repeating is mid-task steerability; builders say being able to course-correct live turns agents from expensive autocompletes into practical teammates.
Reports of ~25% faster per-token generation and less-than-half token usage translate into dramatically lower costs and smoother interactive workflows.
Agentic competence matters — progress on terminal/OS interaction and “good computer use” promises agents that can not only write code but also verify and operate systems.
Users frame Opus 4.6 vs Codex 5.3 as a healthy rivalry, and many celebrate the competition as accelerating real-world progress and tooling.
When will the API/IDE plugins arrive? Integration, pricing, rate limits, and SDK access are the top asks before full adoption.
Skeptics want stress tests — people want to see these gains hold up on messy, large codebases, long refactors, and edge-case workflows before calling it production-ready.
Some worry about entry-level roles shifting toward architectural oversight and agent orchestration rather than pure typing.
Builders are already mobilizing — promises to benchmark it, hack on it in VSCode/Cursor, and redeploy workflows at hackathons signal strong adoption enthusiasm.
What comes next: intense real-world testing, API/IDE rollouts, and stability measurements at scale to turn these impressive numbers into reliable developer infrastructure.
Anger and betrayal — Many replies voice fury that GPT-4o is being sidelined, accusing OpenAI of ignoring its user base and breaking trust; hashtags like #keep4o and calls for refunds or boycotts appear repeatedly.
Human connection over benchmarks — A large group mourns the loss of a model they describe as a companion, arguing that emotional resonance and everyday conversational usefulness matter more than raw speed or coding metrics.
Reliability concerns persist: hallucinations, flaky imports, and failures in messy, real-world flows.
Some report GPT-5.3 producing broken imports, nonexistent APIs, and weak JavaScript/React support in practice, saying it sometimes regresses compared with older versions.
Several repliers say they are switching to Opus or other tools, praising Opus for fewer “confident wrong” outputs and better coding reliability.
Demand for transparency and choice — Repliers want clear communication, community updates, and options (keep 4o alive, legacy tiers, or open-source paths) instead of surprise removals and marketing noise.
Critique of product priorities — A prevailing theme is that OpenAI is optimizing for corporate productivity and cost-cutting rather than user care, which some describe as turning companions into “tools” or “obedient” assistants.
Organized dissent — Protests, hashtags, cancellation threats, and campaigns to preserve 4o are common; people say this won’t fade without concrete answers or concessions.
Emotional and global testimony — Replies include heartfelt stories (grief support, creative work) and multilingual posts, showing the decision’s impact across cultures and user types.
Sarcasm and memes — Alongside serious criticism there’s a stream of mockery, roasts, and jokes framing the exchange as a “rap beef” or stage spectacle, underscoring how public perception is shifting.
Most popular replies, ranked by engagement
@sama Anthropic drops a new Claude model… OpenAI 15 minutes later: https://t.co/5ZK7iGIPT4
@sama Both Opus and Codex on the same day 😭 https://t.co/YMR3reTfD8
@sama What the hell https://t.co/3JD3xrKJTg
@sama Marry Codex if you want, let us normal humans #keep4o
@sama Opus 4.6: *exists* OpenAI: "Hold my benchmarks" Competition is beautiful.
@sama Two great LLMs launched in the last two hours. What a time to be alive https://t.co/7VQZkwBt8G