Anthropic Moving Toward AI Chips for Claude—Is Nvidia Still a Buy in 2026?
Anthropic is exploring custom AI chips to manage compute capacity and costs for its Claude models, mirroring Amazon's potential move to sell internal chips. This trend suggests major AI buyers are seeking alternatives to NVIDIA, potentially impacting its long-term market share despite overall AI demand growth. While NVIDIA's innovation continues with new platforms like Vera Rubin, custom ASICs are projected to increase significantly in AI server shipments. This competitive pressure poses a strategic risk, though NVIDIA's established ecosystem may retain a hybrid market where custom silicon complements its solutions for advanced workloads.

TradingKey - Anthropic is reportedly considering designing its own AI chips to ease capacity constraints and gain more control over the cost and timing of compute for Claude. The news comes as Amazon is reportedly exploring selling more of its internal chips to outside customers.
Both situations point in the same direction: major AI buyers are looking beyond NVIDIA (NVDA) for supply and for cost optimization. If this trend continues, it could erode NVIDIA's share of the AI infrastructure market over the long term, even as demand for AI computing expands.
Who Is Anthropic? What Is Claude?
Anthropic is an AI research company focused on building helpful, trustworthy, and safe systems. Its main product is Claude, a family of large language models used by individuals for chat, coding, and search, and by businesses to run workflows. Claude's reported rapid revenue growth has left Anthropic needing more predictable access to compute. Today it uses a mix of chips, including AWS Trainium, Google TPUs, and NVIDIA GPUs, and it may also pursue custom silicon to reduce reliance on outside suppliers and better match the needs of its models.
Why Anthropic Could Matter in AI Chips
Anthropic's exploration of custom chips is still at an early stage, with no final designs completed and no dedicated team assigned. Industry experts estimate that developing an advanced AI chip can cost close to $500 million. Given Anthropic's size and growth trajectory, however, there is reason to believe its plans have merit.
The company also recently signed long-term agreements with Google for AI processors and with Broadcom (AVGO) for significant compute capacity, starting at approximately 3.5 gigawatts when the new hardware enters production in 2027. In effect, Anthropic is taking a two-pronged approach: building out an external supply chain while evaluating, or eventually creating, in-house silicon to keep all its options open.
If an in-house accelerator can deliver compute cheaply enough to offset its development cost, Anthropic would gain bargaining power with vendors and could realize a lower total cost of ownership over time.
NVIDIA Faces Challenges From Anthropic and Others
Anthropic is not alone. Amazon already offers Trainium and Inferentia chips through AWS and may push them into the broader market, which could make it a much more significant semiconductor supplier. Alphabet continues to advance its TPUs and has a long history of building hardware optimized for its AI workloads. Meta is developing its own accelerators to reduce third-party GPU dependency for inference workloads today and eventually for training as well. OpenAI is reportedly investigating custom silicon, too. Meanwhile, AMD is competing more aggressively with NVIDIA in both training and inference, and Broadcom has expanded its role as a custom-silicon partner for hyperscalers.
These strategies align with a broader market trend: TrendForce expects ASIC-based AI servers to grow from about 27.8% of overall AI server shipments in 2026 to almost 40% by 2030. As more compute moves to application-specific chips, NVIDIA may find it harder to hold its leading share of the accelerator market, even though overall AI-related spending should remain strong.
What Can Investors Expect of NVIDIA in 2026?
While more competitors are moving into NVIDIA's territory, its consistent pace of innovation remains a strong counterbalance to that added competitive pressure. NVIDIA's Hopper-based H100 GPU transformed AI training economics; its Blackwell-based GB300 delivers even larger performance leaps, up to 10x or more in select configurations.
In addition, NVIDIA will begin commercial shipments of the new Vera Rubin platform, containing Rubin GPUs, Vera CPUs, and upgraded networking, in the second half of this year. It is designed to let developers train models with up to 75% fewer GPUs and to cut inference token costs by nearly 90%. Tokens are the units of text, symbols, or images an AI system consumes and generates when handling a request. Sharply lower token costs should drive broader adoption and better margins for AI providers, which in turn sustains demand for high-end accelerators.
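To make the scale of those percentage claims concrete, here is a minimal sketch using hypothetical baseline figures (the GPU count and cost per million tokens are illustrative assumptions, not NVIDIA-published numbers; only the 75% and 90% reductions come from the claims above):

```python
# Rough illustration of the claimed Vera Rubin efficiency gains.
# Baseline figures are hypothetical; reduction percentages are from the text.

def apply_reduction(value: float, pct: float) -> float:
    """Return value after a pct (0-100) percentage reduction."""
    return value * (1 - pct / 100)

baseline_gpus = 1000   # hypothetical GPUs to train a given model today
baseline_cost = 10.0   # hypothetical dollars per million inference tokens

gpus_needed = apply_reduction(baseline_gpus, 75)  # "up to 75% fewer GPUs"
token_cost = apply_reduction(baseline_cost, 90)   # "nearly 90% lower token cost"

print(f"GPUs needed: {gpus_needed:.0f}")   # 250
print(f"$ per 1M tokens: {token_cost:.2f}")  # 1.00
```

Even with made-up baselines, the arithmetic shows why such reductions matter: a 90% cut in token cost is a 10x improvement in inference economics.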
NVIDIA posted record financial results for its fiscal year ending January 25, 2026: $215.9 billion in revenue and $4.77 in earnings per share. At a P/E ratio of 36.10, well below its 10-year average of 61.60, the stock trades at a significant discount to its historical valuation. Wall Street expects NVIDIA's FY2027 EPS to be about $8.29, implying a forward P/E of approximately 21.30. If the stock were to return to its historical P/E, the share price could rise roughly 189%; that is by no means guaranteed, but it leaves ample room for upside from continued innovation and EPS growth.
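The 189% figure follows directly from the multiples cited above. A minimal check, using only the article's stated numbers (forward EPS, forward P/E, and the 10-year average P/E):

```python
# Valuation math from the article's stated figures.
forward_eps = 8.29    # Wall Street FY2027 EPS estimate
forward_pe = 21.30    # forward P/E as stated
hist_avg_pe = 61.60   # 10-year average P/E

price_today = forward_eps * forward_pe    # price implied by the forward multiple
price_target = forward_eps * hist_avg_pe  # price if the stock re-rated to its average
upside = price_target / price_today - 1   # equals hist_avg_pe / forward_pe - 1

print(f"Implied upside: {upside:.1%}")  # 189.2%
```

Note the EPS cancels out: the upside is simply the ratio of the historical multiple to the forward multiple, minus one.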
Risks for NVIDIA in 2026–2030
A key strategic risk for NVIDIA is that hyperscalers and the world's largest AI labs keep shifting workloads to custom ASICs that are cheaper and more power-efficient than its GPUs.
If custom ASICs do account for 40% of those workloads by 2030, NVIDIA would likely hold a structurally smaller share of a larger total market than it does today. Even so, its tight integration of GPUs, CPUs, and networking, together with an established developer base and mature software stack, creates switching friction that makes it hard for customers to change everything at once.
For many companies, NVIDIA remains the go-to vendor for cutting-edge model training and flexible inference. The most likely outcome of this transition is a hybrid market: custom silicon for stable, large-volume workloads, and NVIDIA's solutions for frontier models, fast-moving research, and general-purpose use.
Anthropic's next move will be an important directional signal. If it proceeds with its own chips for Claude, it may save money over the long term while still buying chips from a variety of suppliers. If it stops, that will suggest the bar for scaling custom silicon, spanning engineering talent, up-front capital, yield and packaging, and software tooling, remains extremely high.
Either way, Anthropic's pursuit of custom silicon underscores a lesson the rest of the AI industry increasingly understands: leading AI labs want control over their own compute destiny, and that shift could reshape procurement across the industry.