A user praises Claude Opus 4.7 rate-limit fixes: 27% session usage and 23% weekly usage after an extended coding session. Sentiment analysis shows 43.06% supportive, 31.94% opposed.
Claude Code rate limits are way better now. I'm at 27% of my session limit and I've been vibe coding all morning with Claude Opus 4.7. 23% of my weekly limit used. A week ago I was hitting limits in 2 hours. Anthropic actually fixed the rate limits. https://t.co/Y3Lx2iF6eS
Real-time analysis of public opinion and engagement
What the community is saying — both sides
multiple users report Opus 4.7 now lets them run long coding sessions (multi-hour, large refactors) without hitting the previous hard wall — the model appears to consume the usage quota more slowly.
rate-limit relief stopped disruptive context resets, letting developers maintain momentum through full features instead of timing sessions or restarting.
some users say upgrading (e.g., the $100 plan) removed limits entirely for them, so real-world experience still depends on subscription level.
several replies point to compute-class routing/priority queueing and a change in how weekly vs session caps are enforced as the likely cause of improvement.
some attribute the fix to competitive pressure (OpenAI vs Anthropic) or Anthropic’s pragmatic leadership and shipping-focused approach rather than marketing.
opinions differ on xHigh/High/Medium for 4.7 — some use xHigh for everything and are fine, others recommend tuning per project to balance cost and consistency.
community tips include watching a live session-percentage indicator (~27% cited as a comfortable sweet spot) and avoiding the >70–80% range, where users report constraints and “quiet rules” start to kick in.
a few users notice that usage varies with off-peak times, and that other tools (e.g., Codex) can become the limiting factor even when Opus limits improve.
many want an official note or confirmation from Anthropic explaining exactly what changed and whether the improvement will hold.
several replies express firm preference for Claude/Anthropic now, saying they won’t be switching back.
Users say trust is gone, accusing the Claude team of gaslighting, bait-and-switch tactics, and intentionally degrading the service to push paid plans.
Many report they’ve switched to Codex (or are actively doing so), calling it a clear alternative after frustration with Claude.
Multiple users say Opus 4.7 burns through usage far faster than before — “burnt through sessions,” “4.7 is trash” — with tokens disappearing much more quickly than on 4.6.
Several replies expect this improvement to be short-lived — “temporarily increased,” “they’ll cut the limits again,” and “this will last a week.”
Some report differences by time or account — rate limits “vary by busy times,” reset days changed, and what helps one user doesn’t help another.
Users running overnight or large batches say the fixes didn’t help — “overnight batches still hit limits” or the situation is now worse for heavy jobs.
A few see this as part of an industry pattern — AI firms are under financial pressure and may eventually raise prices or simplify models, forcing dependence before extracting more revenue.
Most popular replies, ranked by engagement
@vinrambone: They had to stop the bleed. Sentiment was limits were bad and users were switching to codex. Give it a month and I bet they cut the limits again.
Tomorrow you'll probably share a post that will cause your members to cancel. Because you keep sharing very unstable and inconsistent data.
Well, not on my end, sadly... mine has gotten worse...
yeah I've noticed this too... limits seem much more generous, even when using only opus 4.7. did you change anything specific in your workflow to make this happen?
Good because it was crazy hitting the limit with one prompt
the rate limit was the silent killer for vibe coding adoption. if it actually opens up, a lot of people who bailed will try again.