Technology · Apr 1, 2026 · 6 min read

Are Claude's usage limits too strict? Here's why tokens never seem to be enough

Over time, Claude has improved impressively, to the point that Anthropic's AI is now considered one of the best on the market. This is especially true for programming, where Claude Code is arguably the benchmark for writing code, but it also applies in other areas, notably Claude Cowork, a powerful assistant that has now entered Microsoft Copilot's ecosystem alongside OpenAI's models, which remain the default in the Microsoft world.

However, there is a problem that many users complain about: the usage limits. They affect both the Pro plan and the Max plan, the most expensive tier, which costs 90 euros per month. The company is racing to remedy the situation, but more and more users are becoming frustrated.

Bug or Business Choice? Why Do Claude's Tokens Never Seem Enough?

Usage limits are a constant in artificial intelligence systems. In most cases it takes very little to hit the caps of the free plans, which forces users to wait hours before they can ask the AI new questions. The problem is that this also happens to paying users, and in this respect Claude is the AI whose plan allowance is exhausted most easily.

This is a problem recognized by Anthropic itself. "We are aware that many people are reaching the usage limits in Claude Code much faster than expected. We are actively analyzing the problem and will share further information as soon as we have an update," stated a company spokesperson on Reddit on March 31, adding a few hours later that "it is the top priority for the team and we know this is blocking many of you. We will share more information as soon as we have it."

Some developers report hitting the limit in just a few hours, according to The Register, even on the Max plan, the most expensive one. The problem is not new and, at least in some cases, may be caused by a bug that prevents the system from accessing the cache, consequently consuming far more tokens than necessary, up to 10 or 20 times more. But even if this bug were fixed, the situation would hardly change drastically: the impression is that the company has chosen a rather conservative approach to user access to computing resources.
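To see how a cache failure alone could produce a 10–20x blowup, here is a back-of-the-envelope sketch. All numbers are made up for illustration; the point is only the mechanism: in a long agentic session the same large prompt prefix (system prompt, project files) is resent on every turn, and if it cannot be served from cache it is reprocessed from scratch each time.

```python
# Illustrative sketch (hypothetical numbers): token consumption with and
# without a working prompt cache in a multi-turn agentic session.

CONTEXT_TOKENS = 50_000      # assumed shared prefix (system prompt + files)
NEW_TOKENS_PER_TURN = 1_000  # assumed fresh input added each turn
TURNS = 20                   # assumed length of the session

def total_input_tokens(cache_works: bool) -> int:
    """Total input tokens processed across all turns of the session."""
    if cache_works:
        # The prefix is processed once; later turns mostly pay for new tokens.
        return CONTEXT_TOKENS + TURNS * NEW_TOKENS_PER_TURN
    # Cache miss: the full prefix is reprocessed on every single turn.
    return TURNS * (CONTEXT_TOKENS + NEW_TOKENS_PER_TURN)

with_cache = total_input_tokens(True)       # 70_000 tokens
without_cache = total_input_tokens(False)   # 1_020_000 tokens
print(f"multiplier: {without_cache / with_cache:.1f}x")  # ~14.6x
```

With these assumed figures the broken-cache session consumes roughly 14–15 times more input tokens, squarely in the 10–20x range reported for the bug.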

Certainly, one could switch to a pay-as-you-go mode via the API, paying only for the tokens used and avoiding any usage limitations, but costs in this case tend to rise rapidly, as enthusiasts experimenting with OpenClaw know well, risking very high bills for the AI.
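A quick sketch makes the order of magnitude clear. The prices below are placeholders, not Anthropic's actual rates (always check the provider's current price list), but typical frontier-model API pricing is in this ballpark, billed per million input and output tokens.

```python
# Rough pay-as-you-go cost estimator. Prices are hypothetical placeholders,
# NOT a real price list; substitute the provider's current rates.

PRICE_PER_MTOK_INPUT = 3.0    # assumed $/million input tokens
PRICE_PER_MTOK_OUTPUT = 15.0  # assumed $/million output tokens

def api_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimated API cost in dollars for a given workload."""
    return (input_tokens / 1e6) * PRICE_PER_MTOK_INPUT \
         + (output_tokens / 1e6) * PRICE_PER_MTOK_OUTPUT

# A heavy day of agentic coding can plausibly burn tens of millions of
# input tokens once the context is resent on every turn.
daily = api_cost_usd(input_tokens=30_000_000, output_tokens=1_000_000)
monthly = daily * 22  # assumed working days per month
print(f"daily: ${daily:.2f}, monthly: ${monthly:.2f}")
```

Under these assumptions a single heavy user would run up roughly $105 a day, over $2,000 a month, which is why a flat 90-euro subscription with hard caps and unmetered API billing lead to such different experiences.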

Anthropic has also tried to accommodate its users by doubling the number of tokens available outside peak hours (working hours), but the situation has not improved as hoped.

Greater Transparency on Token Consumption

One critical factor is that Anthropic does not explicitly state how many tokens the various subscription plans include, remaining quite vague. Greater transparency would certainly help users understand which prompts and automations consume the most resources. In our opinion, however, this might not be enough, and at most it would push users to cut back on the AI, limiting it to the cases most useful for productivity. That is the opposite of what is expected from applications like Claude Cowork, which is meant to act as an assistant following us through the creation of documents and complex processes.

The most obvious solution would be to increase the number of tokens available to users, but understandably companies must also strike a balance that preserves profit margins. AI is very expensive, and to date no company has found an effective business model for profiting from LLMs. Specifically, Anthropic lost $6 billion in 2024 and $4 billion in 2025, and does not expect to be profitable before 2028. OpenAI is not doing much better, and expects a loss of $14 billion in 2026.

It is worth noting that most revenue comes from API calls, not from subscriptions, which also helps explain why in certain cases it is so easy to hit the AI's usage limits.

What to Expect for the Future?

AI is a disruptive technology that has literally transformed the concept of personal productivity. The statistics we have repeatedly reported on Edge9 point to massive adoption by individuals, even in corporate contexts. Where AI still struggles to deliver on its potential is in transforming workflows. Whatever the estimates of the productivity gains from its adoption, as we have said many times, only a few companies manage to capture these advantages, while the vast majority (over 90%) have invested in pilot projects that never turned into innovations transferable to the whole company.

The reasons are many, starting with data, which is often scattered across information silos: for AI to work well, it needs to be fed with business data, but first it is necessary to understand where that data lives, how it is organized, and how to break down the silos. Only companies with this level of digital maturity reap the greatest benefits from AI, and they are few. A second problem is resistance to change: not everyone embraces this technology willingly. Denis Cassinerio, Senior Director & General Manager South EMEA of Acronis, confirmed this to us recently, saying that in certain cases it is necessary to "actively push" employees to use AI, particularly developers with more years of experience who, perhaps for generational reasons, show the greatest resistance to change.

We see this in our own small way too. The author also trains companies on digital and artificial-intelligence topics, and the gap between those who enthusiastically embrace the technology and those who are indifferent is enormous. Indeed, in some cases outright skepticism is evident, fueled in part by a certain media narrative (and by players in the sector itself, including Anthropic).

How can one encourage a person to use technology that, according to the less optimistic, will replace most white-collar job functions?

If change management is a problem in itself, with AI the situation becomes even more complex, as it is difficult to convince people to train a tool that will then replace them.

Returning to Claude's original problem: anyone who finds the usage limits frustrating always has alternatives to choose from, perhaps returning to OpenAI while holding their nose over the company's agreement with the Pentagon, which has drawn plenty of criticism and prompted many to switch to Claude. However, if the industry giants cannot quickly find a way to monetize their enormous investments in R&D, AI training, inference, and new data centers, one of two scenarios awaits: increasingly severe limits on the number of prompts, or rapidly rising token prices, particularly on the more advanced models.