Technology · Apr 10, 2026 · 3 min read

Does Anthropic Want to Build Chips by Itself? What Changes After the Jump to $30 Billion

While Anthropic's revenue figures have been widely reported in recent days (a run rate above $30 billion, roughly tripled in under four months since the end of 2025), the real news of the last few hours is purely technical and infrastructural: Dario Amodei's startup is considering designing proprietary AI chips. Reuters reports this, noting that the project is still at a preliminary stage and that the company has neither formed a dedicated team nor settled on a design.

Behind this development lies a strategic question: should Anthropic push for proprietary silicon, or keep buying accelerators from external suppliers, as it has done so far? At the company's current scale, the cost of compute has become an industrial lever that warrants a different analysis than it did six months ago. Developing an advanced AI chip costs on the order of $500 million, including specialized engineering and production validation: a significant sum for a company that is not yet profitable, but far more manageable once the run rate reaches the levels Anthropic disclosed in recent days.

Already a Mix of Different Architectures
Claude already runs on heterogeneous hardware. Anthropic uses Google's TPUs (developed with Broadcom), Amazon's custom chips, and Nvidia GPUs, assigning each workload to the most suitable platform: the optimal chip for training is not automatically the best for inference or for specific enterprise workloads. In this sense, the startup would not be starting from scratch on hardware diversification; it would be weighing the next logical step, controlling the design directly rather than merely choosing among third-party options.

This prospect fits into the multi-year agreement announced a few days ago with Google and Broadcom, which will guarantee Anthropic about 3.5 gigawatts of TPU-based capacity starting in 2027, roughly three times what the company was using at the beginning of 2026. The expansion is tied to business results and is part of an overall commitment of $50 billion in US computing infrastructure announced in November 2025. A proprietary chip would not replace this network of partnerships overnight, but it would gradually reduce dependence on standardized components whose costs and roadmaps lie beyond direct control.

If confirmed, the move would not be an isolated one. Broadcom is already OpenAI's partner in designing custom accelerators and reportedly has a fifth XPU client not yet made public: the dedicated AI silicon market is rapidly consolidating around a few suppliers capable of managing projects of this complexity. Meta and OpenAI have already taken similar paths, and the dominance of Nvidia's general-purpose GPUs is starting to be complemented by tailored solutions for players whose volumes justify the development cost.

With the revenue trajectory described in the recent news, Anthropic has reached a size at which the hypothesis is at least worth considering. Whether it ultimately builds its own chip or stops at the analysis stage, the fact that the option is seriously on the table is itself a sign of how the infrastructure strategy of large AI companies is evolving.