News: 0178411418

  Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life (Terry Pratchett, Jingo)

Anthropic Tightens Usage Limits For Claude Code - Without Telling Users (techcrunch.com)

(Friday July 18, 2025 @05:50PM (msmash) from the tough-luck dept.)


An anonymous reader shares a report:

> Since Monday morning, Claude Code users have been [1]hit with unexpectedly restrictive usage limits. The problems, many of which have been aired on Claude Code's GitHub page, seem to be concentrated among heavy users of the service, many of whom are on the $200-a-month Max plan.

>

> Users are only told "Claude usage limit reached," and given a time (typically within a matter of hours) when the limit will reset. But with no explicit announcement of a change in limits, many users have concluded that their subscription has been downgraded or that their usage is being inaccurately tracked.

>

> "Your tracking of usage limits has changed and is no longer accurate," one user complained. "There is no way in the 30 minutes of a few requests I have hit the 900 messages." When reached for comment, an Anthropic representative confirmed the issues but declined to elaborate further.



[1] https://techcrunch.com/2025/07/17/anthropic-tightens-usage-limits-for-claude-code-without-telling-users/



Re: does not scale (Score:2)

by dknj ( 441802 )

Yet.

Wait until every computing device has the equivalent of an A6000.

Re: (Score:2)

by usedtobestine ( 7476084 )

Yes it does, but it turns out it costs much more than they charged their users. I'm not sure anyone will continue using it if they raise their prices to match their costs... $80,000/month per user should cover it.

Re: (Score:2)

by allo ( 1728082 )

Costs are going down rapidly.

[1]https://epoch.ai/data-insights... [epoch.ai]

> For instance, the price to achieve GPT-4’s performance on a set of PhD-level science questions fell by 40x per year. The rate of decline varies dramatically depending on the performance milestone, ranging from 9x to 900x per year. The fastest price drops in that range have occurred in the past year, so it’s less clear that those will persist.

[1] https://epoch.ai/data-insights/llm-inference-price-trends
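The decline rates quoted above are exponential, which makes their effect easy to underestimate. A minimal sketch of the arithmetic, where the 9x/40x/900x annual factors come from the quote and the $10 per million tokens starting price is a made-up illustration, not a real price:

```python
def price_after(start_price: float, annual_factor: float, years: float) -> float:
    """Price after `years` of exponential decline by `annual_factor` per year."""
    return start_price / (annual_factor ** years)

start = 10.0  # hypothetical $ per million tokens
for factor in (9, 40, 900):
    # Two years of compounding at each quoted decline rate.
    print(f"{factor}x/year: after 2 years -> ${price_after(start, factor, 2):.6f}/M tokens")
```

At the mid-range 40x/year factor, a $10 price falls to $0.25 after one year and well under a cent after two.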

I noticed that since last month (Score:2)

by backslashdot ( 95548 )

I've been getting hit with that a lot since at least last month. And usually it's because it is correcting its own bugs in code it spit out. Worse is when it truncates output and leaves fragments of code.

Almost sounds like telco/cable (Score:2)

by stabiesoft ( 733417 )

"Unlimited plans". Unlimited until you hit a threshold and then throttled.

Re: (Score:2)

by ihavesaxwithcollies ( 10441708 )

Don't forget the energy recovery fee, resort fee, parking fee, regulation recovery fee, router leasing fee, modem leasing fee, local broadcast fee, regional sports network fee, service fee, order processing fee, delivery fee, service charge and don't forget the forced gratuity.

It's almost like unfettered capitalism is bad for the consumer.

Using AI to determine usage (Score:2)

by drinkypoo ( 153816 )

Really? Who knows. But we do know that LLMs can't count.

Under stress (Score:3)

by AlanObject ( 3603453 )

I was using Claude the other night and things were going fine -- until they weren't. It was responding to prompts but just stopped making the requested changes to code. Then it turned obstinate and started making bad suggestions.

It looks more to me that the GPU resources are under stress and starting to just drop stuff that they would ordinarily process. It doesn't surprise me that they are trying to throttle their workload.

Re: (Score:2)

by smooth wombat ( 796938 )

It looks more to me that the GPU resources are under stress and starting to just drop stuff that they would ordinarily process.

Easy solution. Buy more Nvidia cards. Or AMD. Either one as I own stock in both companies.

Re: (Score:2)

by allo ( 1728082 )

Long context kills coherence. Start a new chat from time to time. Even if a model advertises something like a 128k context, it usually starts to lose its smarts at around 24k-32k tokens. Claude is probably a bit better, but you still get the best results if your chat log is short.
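One way to act on that advice without starting over entirely is to trim the conversation to a budget before each request. A minimal sketch, using a character count as a crude stand-in for token counting (the message format with `role`/`content` keys is assumed for illustration):

```python
def trim_history(messages, max_chars=8000, keep_system=True):
    """Keep the system prompt (if any) plus the most recent messages
    that fit within a rough character budget."""
    system = [m for m in messages if m["role"] == "system"] if keep_system else []
    rest = [m for m in messages if m["role"] != "system"]

    kept, used = [], sum(len(m["content"]) for m in system)
    # Walk backwards from the newest message, keeping what fits.
    for m in reversed(rest):
        if used + len(m["content"]) > max_chars:
            break
        kept.append(m)
        used += len(m["content"])
    return system + list(reversed(kept))
```

A real client would count tokens with the provider's tokenizer instead of characters, but the shape of the fix is the same: drop the oldest turns, keep the system prompt and the recent tail.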

GitHub Copilot did the same last month (Score:2)

by CubicleZombie ( 2590497 )

Paid accounts, 300 requests per month limit for advanced models. Unlimited for GPT-4.1, but that model doesn't work for anything beyond simple tasks.

The first hit is always free.

Anthropic Tightens (Score:2)

by PPH ( 736903 )

Didn't you folks negotiate a safe word in advance?

It appears that after his death, Albert Einstein found himself
working as the doorkeeper at the Pearly Gates. One slow day, he
found that he had time to chat with the new entrants. To the first one
he asked, "What's your IQ?" The new arrival replied, "190". They
discussed Einstein's theory of relativity for hours. When the second
new arrival came, Einstein once again inquired as to the newcomer's
IQ. The answer this time came "120". To which Einstein replied, "Tell
me, how did the Cubs do this year?" and they proceeded to talk for half
an hour or so. To the final arrival, Einstein once again posed the
question, "What's your IQ?". Upon receiving the answer "70",
Einstein smiled and replied, "Got a minute to tell me about VMS 4.0?"