@AnthropicAI
Our run-rate revenue has surpassed $30 billion, up from $9 billion at the end of 2025, as demand for Claude continues to accelerate. This partnership gives us the compute to keep pace. Read more: https://t.co/XgSjL0And7
Tweet analysis: Google & Broadcom TPUs for Claude — 51.03% supportive, 22.60% confronting. A breakdown of reactions and what they imply for AI infrastructure.
We've signed an agreement with Google and Broadcom for multiple gigawatts of next-generation TPU capacity, coming online starting in 2027, to train and serve frontier Claude models.
Real-time analysis of public opinion and engagement
What the community is saying — both sides
Many replies treat this as proof that the AI race has shifted from model design to raw infrastructure — "data centers are the new factories" and securing gigawatts locks a competitive lead.
Committing multi-gigawatt TPU capacity starting 2027 is read as a deliberate bet that scaling laws will keep paying off — the jump to a $30B run-rate is cited as validation so far.
Commenters note Anthropic is running a portfolio approach (TPU + Trainium + NVIDIA) to avoid single-vendor choke points and keep unit economics sane.
The deal is seen as bullish for Google and Broadcom ($GOOGL, $AVGO), potentially undercutting Nvidia’s leverage ("the Nvidia tax") and reshuffling winners in the supply chain.
Multiple replies emphasize that the true bottleneck may be the grid, substations and power procurement — this is "utility-scale compute" that requires real-world infrastructure buildout.
Locking multi-year compute reservations is viewed as industry crystallization: access to silicon becomes an admission price, concentrating advantage among a few chip designers and cloud providers.
Users expect higher throughput, fewer rate limits and faster model release cadence — many replies hope the capacity will translate into more powerful, less throttled Claude features.
Many replies treat “gigawatts” as a practical red flag — concerns about huge power bills, where the electricity will come from, and the idea that AI is becoming energy infrastructure, not just software.
People argue this is a power play, not a product race — firms are reserving future compute capacity, creating barriers that favor a few deep‑pocketed players.
Several commenters point to the oddity of relying on an investor, cloud provider, and competitor (Google) for massive capacity — framed as a potential conflict of interest and a risky dependence.
Repeated complaints about drastic token consumption, 429 errors, and tight quotas — many say usability is crippled and users are cancelling or threatening to leave.
Critics note Claude’s recent flakiness, missing or crippled features (OpenClaw), and canned responses — scaling compute won’t fix a product that users find unreliable or dumb.
Comments range from “overcharging” to predictions of bankruptcy; some highlight that the advertised scale may just mean a bigger bill with diminishing consumer value.
Users suggest alternatives (AMD, Cerebras), point to rivals running well on midrange phones/GPU, and urge focus on efficient architectures instead of brute‑force TPUs.
Threads contain angry, conspiratorial, and abusive replies — accusations of ties to militarized tech, applause for competitors, and vindictive calls from banned or disgruntled users.
A mix of jokey mockery (“buy candles,” “toaster’s got more personality”) and genuine AGI/Skynet worries — some see the scale announcement as either laughable or frightening.
Most popular replies, ranked by engagement
TPU’s are truly incredible
my GOD we are about to see another post like this again
where u gonna get the power?
Google is backing open-source Gemma while also expanding Anthropic’s access to TPUs and Google Cloud. That level of platform leverage is remarkable.
Please work on the drastic drain of tokens, its very important we are just wasting the tokens coz of your bs