News: 0176988177

  Give a man a fire and he's warm for a day, but set fire to him and he's warm for the rest of his life (Terry Pratchett, Jingo)

Anthropic Launches Its Own $200 Monthly Plan (techcrunch.com)

(Wednesday April 09, 2025 @05:00PM (msmash) from the sad-news-for-wallet dept.)


Anthropic has unveiled a new premium tier for its AI chatbot Claude, targeting power users willing to [1]pay up to $200 monthly for broader usage. The "Max" subscription [2]comes in two variants: a $100/month tier with 5x higher rate limits than Claude Pro, and a $200/month option boasting 20x higher limits -- directly competing with OpenAI's ChatGPT Pro tier.

Unlike OpenAI, Anthropic still lacks an unlimited usage plan. Product lead Scott White didn't rule out even pricier subscriptions in the future, [3]telling TechCrunch, "We'll always keep a number of exploratory options available to us." The launch coincides with growing demand for Anthropic's Claude 3.7 Sonnet, the company's first reasoning model, which employs additional computing power to handle complex queries more reliably.



[1] https://techcrunch.com/2025/04/09/anthropic-rolls-out-a-200-per-month-claude-subscription/

[2] https://www.anthropic.com/news/max-plan

[3] https://techcrunch.com/2025/04/09/anthropic-rolls-out-a-200-per-month-claude-subscription/



Without an unlimited plan (Score:3)

by allo ( 1728082 )

Unlimited plans are bullshit for anything with unlimited costs for unlimited usage. In the best case they still rate limit; in the worst case they cancel your account under some "fair usage" definition that is never spelled out, because if they told you the limits it wouldn't be unlimited.

Re: Without an unlimited plan (Score:2)

by Big Hairy Gorilla ( 9839972 )

"Unlimited*"

* is never unlimited, read the terms of service.

You can't offer unlimited or you'll be raped by automated bots leveraging your cheap service.

Hmmm. (Score:3)

by jd ( 1658 )

I had Claude and ChatGPT work with each other on a little engineering project, each finding the limitations in the design the other hadn't spotted.

It was actually good fun. Cost me a bit to get all the technical info they both wanted, but I now have a design both insist is absolutely robust, absolutely perfect. But they also both tell me that it's too big to properly process.

Yes, AI itself designed a project the very same AI cannot actually understand.

I now have a very large file, which cost me a fair bit of money to produce, that I'm quite convinced is useless -- yet no AI examining even part of it can find fault with it.

Beyond the fun aspect of having AI defeat itself, the project illustrates three things:

1. If AI can't handle a toy specification, it's never going to be able to handle any complex problem. This means that the "pro" editions are not all that "pro". The processing windows are clearly too small for real problems if my little effort is too big.

2. Anything either AI got right is, by virtue of how I worked on the problem, something the other AI got wrong. Of course, AI doesn't "understand", it's only looking at word patterns, but it shows that the reasoning capacity simply isn't there, regardless of whether the knowledgebase is.

3. I've now got quite a nice benchmark for AI systems. I can ignore any AI that can't cope. If it hasn't got the capacity to handle a trivial problem because the complexity is too high, it won't manage any real problem any better.

Is this going to be like Colab did? (Score:2)

by balaam's ass ( 678743 )

I know they say higher rate limits, but I can't help worrying this will be like what Google did with Colab Pro, where you have to pay more to keep the capabilities you were already using because they nerf the tier you were on.
