@dhiran_dev
we're officially in the "ai making better ai" loop now... next stop: claude designing claude designing claude designing claude
Tweet quotes Anthropic CEO saying engineers let Claude write and edit code, enabling 50+ major releases in 52 days. Sentiment: 60.77% support, 20.35% confront.
Anthropic CEO: “I have engineers within Anthropic who don’t write any code; they just let Claude write the code, and they edit it and look it over.” “At Anthropic, writing code means designing the next version of Claude itself, so we essentially have Claude designing the next version of Claude itself, not completely but most of it.” In the last 52 days, the Claude team dropped 50+ major feature launches. This is literally INSANE.
Real-time analysis of public opinion and engagement
What the community is saying — both sides
Compounding velocity is the signal — teams using their model to improve itself create a loop where iteration time shrinks every release, not just one-off speedups.
This describes a real recursive loop: model output feeds next-model development, so improvements compound across generations rather than adding linearly.
Engineering moves up the stack: less syntax, more architecture and intent-setting.
Human judgment becomes the limiting factor, not implementation speed.
AI-assisted coding is already standard in many workflows — and non-engineers are shipping features, so adoption is happening now, not later.
Quality and security risks from auto-generated commits are real concerns that need managing.
Oversight remains the open question: who audits the auditor?
Roles will shift: the workforce moves toward orchestrating, reviewing, and setting constraints — hiring criteria and seniority definitions must change.
Taste and judgment outrank raw implementation ability.
The moat shifts to tooling (orchestrators and review pipelines), not merely the underlying model size.
Humans stay in the loop: they still refine, select, and decide; AI speeds implementation, it doesn’t fully replace judgment.
Risk takes are polarized: some warn about emergent consciousness or singularity, others treat those takes as hyperbole, but both reflect rising existential and ethical anxieties.
Many reply that saying "Claude built the next version" contradicts the fact that Anthropic is still hiring and employs hundreds of engineers — implying PR spin or exaggeration.
Several defenders say review, specification and verification are still engineering work — humans make decisions, guide agents, and validate outputs.
Shipping dozens of features fast ("50+ features in 52 days") reads as momentum-for-momentum's-sake; critics ask whether features add real value or just justify funding.
Users report rate-limiting, session/token limits, hallucinations, buggy Electron apps, broken image/workflow handling and missing features (e.g., PPT export), leaving workflows interrupted.
Many fear handing development to LLMs will make engineers worse at debugging and design, letting errors compound and reducing long-term resilience.
A subset warns an AI building AI may behave self-preservingly or produce degraded systems — "an AI that doesn't want to be replaced" is raised as a risk.
Responses accuse the CEO of puffery, promotion-first rhetoric, or outright dishonesty — skepticism that public statements reflect reality.
Several long-time users say model quality has declined across releases and that the experience has become frustrating or unusable for serious work.
A few point out the cycle of massive infrastructure and energy use to support increasingly heavy AI workloads, calling it unsustainable.
Some argue this is already common practice across industry — using models to assist development is normal and not a revolutionary replacement of engineers.
Most popular replies, ranked by engagement
Basically Claude is leveling itself up
50+ features in 52 days is the output number. the mechanism is what matters: the engineers aren't bottlenecked by coding speed anymore. they're bottlenecked by their own judgment speed — how fast they can define, review, and decide. that's a different constraint and it compounds differently.
If Claude is doing all of the coding and designing its next version, then why does Anthropic still employ hundreds of engineers? And has job postings for new engineers? 🤔 What is he paying those engineers to do?
that's a slippery slope. depends too much on AI. creativity from humans can't be replaced by a tool.
the frame is wrong tho. editing and reviewing is still engineering - specifying what you want + verifying it works is the job. I run @opencrabs ambient agents and spend my time on decisions not typing