Anthropic announced a major expansion of its compute partnership with Google Cloud and Broadcom on April 6-7, 2026. The deal covers roughly 3.5 gigawatts of TPU capacity, with the bulk coming online from 2027 onward. Most of the new compute sits in the United States.
Three things moved with this announcement:
- Scale of commitment. 3.5GW is a frontier-lab-sized compute order. Google-built TPUs are supplied through Broadcom; Google Cloud runs the services layer.
- Revenue disclosure. Anthropic's run-rate revenue has passed $30B, up from roughly $9B at the end of 2025. More than 1,000 business customers now each spend $1M+ annualised, doubling from 500+ in under two months.
- Infrastructure multi-sourcing. Anthropic now runs on AWS Trainium (the Amazon deal), Google TPUs (this deal), and Nvidia GPUs. No single-supplier dependency remains for its frontier training runs.
Why it matters:
The binding constraint on Anthropic, OpenAI, and Google DeepMind is no longer research talent or algorithmic insight. It is access to cheap power and chips. Anthropic’s multi-year TPU commit locks in supply at a moment when every frontier lab is competing for the same scarce resource.
Implications:
- For builders: Claude pricing is more likely to hold steady or drop than to spike, given Anthropic's secured supply. Opus 4.7 pricing held at Opus 4.6's $5 input / $25 output rate, one immediate signal.
- For competitors: OpenAI signed a similar multi-year compute deal with Microsoft and a separate Oracle deal in 2025. The pattern is consistent: every top lab is stacking multi-hyperscaler compute contracts.
- For Broadcom: Custom silicon partnerships with Google and Anthropic are now a second growth engine alongside its networking business.
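The builder implication above is ultimately an arithmetic one: stable per-token rates mean predictable per-call costs. A minimal sketch of that calculation, assuming the quoted $5 / $25 rates are per million tokens (the announcement does not state the unit, so that is an assumption here):

```python
def claude_call_cost(input_tokens: int, output_tokens: int,
                     input_rate: float = 5.0, output_rate: float = 25.0) -> float:
    """Estimate USD cost of one API call.

    Rates are assumed to be USD per million tokens; the $5 input /
    $25 output defaults mirror the Opus 4.6/4.7 figures cited above.
    """
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Example: 10k input tokens and 2k output tokens
# 10_000 * 5 / 1e6 + 2_000 * 25 / 1e6 = 0.05 + 0.05 = 0.10 USD
cost = claude_call_cost(10_000, 2_000)
print(f"${cost:.2f}")  # prints "$0.10"
```

Because the rates held constant across the Opus 4.6 to 4.7 transition, a budget built on this formula carries over unchanged between model versions.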
Sources
- Expanding our use of Google Cloud TPUs and Services (Anthropic)
- Anthropic expands partnership with Google and Broadcom (Anthropic)
- Anthropic ups compute deal with Google and Broadcom (TechCrunch)
- Broadcom agrees to expanded chip deals with Google, Anthropic (CNBC)
- Anthropic reveals $30bn run rate (The Register)