TradingKey - As large language models (LLMs) shift from training to inference, the demand for AI infrastructure is undergoing a major transformation. Real-time applications require faster, more efficient, and more affordable computing — and that’s creating a booming market for custom-built chips known as ASICs (Application-Specific Integrated Circuits).
Tech giants and cloud platforms are actively looking for alternatives to Nvidia’s expensive general-purpose GPUs, and that’s where ASICs come in. Broadcom and Marvell are two of the biggest beneficiaries of this custom chip wave.
Broadcom has developed strong capabilities in both AI accelerators (XPUs) and high-speed networking chips — two essential components for large-scale AI deployments.
Its flagship Ethernet switch products, Tomahawk and Jericho, are tailor-made for AI data centers. With port speeds of up to 800Gbps and ultra-low latency, these chips enable high-efficiency communication between AI processors. Broadcom’s networking solutions are already widely used in the data centers of Amazon, Google, Meta, Microsoft, Alibaba, and other hyperscalers.
Starting in 2025, Broadcom will also become a key supplier of custom AI chips for OpenAI. Analysts believe the company’s client list may extend to other major players like Arm, Apple, Elon Musk’s AI efforts, and one yet-to-be-named hyperscale customer.
Broadcom is scheduled to report its FYQ3 2025 earnings after the market closes on September 4. According to FactSet, analysts expect revenue of $15.83 billion, up 22%, and EPS of $1.66, up 34%.
By contrast, Marvell’s latest earnings missed the mark, which dragged down its stock and put more pressure on Broadcom to deliver.
Even Nvidia’s blockbuster results may have raised the bar for everyone. In today’s market, simply hitting estimates may not be enough to move the stock. Investors are watching closely for upside guidance or evidence of accelerating AI momentum.
As AI inference becomes a bigger part of the workload, ASICs are standing out as a more efficient answer than general-purpose GPUs. They’re built to run the matrix operations that power LLMs, without the overhead a general-purpose GPU carries.
A recent Citi report estimates that an AI ASIC costs about $5,000, compared to $20,000–$30,000 for Nvidia’s H100 GPU. On power consumption, ASICs also come out ahead: the H100 draws up to 700W, while a custom ASIC handling a similar task can use 30% less power.
For companies managing massive AI infrastructure, that adds up — potentially cutting their total cost of ownership (TCO) by as much as 75%. That’s a serious selling point.
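To make that concrete, here is a rough back-of-envelope sketch in Python. The unit prices and power figures follow the Citi estimates cited above; the electricity rate, utilization, and four-year amortization window are illustrative assumptions, not reported numbers.

```python
# Rough TCO sketch: hardware price plus energy over an assumed 4-year window.
# Unit prices and power draw follow the Citi figures above; the $0.10/kWh
# electricity rate and 80% utilization are illustrative assumptions.

HOURS_PER_YEAR = 24 * 365

def tco(unit_price_usd: float, power_watts: float, years: int = 4,
        utilization: float = 0.8, usd_per_kwh: float = 0.10) -> float:
    """Hardware cost plus electricity cost over the amortization window."""
    energy_kwh = power_watts / 1000 * HOURS_PER_YEAR * years * utilization
    return unit_price_usd + energy_kwh * usd_per_kwh

h100 = tco(unit_price_usd=25_000, power_watts=700)        # midpoint of $20k–$30k
asic = tco(unit_price_usd=5_000, power_watts=700 * 0.70)  # ~30% less power

print(f"H100-class GPU: ${h100:,.0f}")
print(f"Custom ASIC:    ${asic:,.0f}")
print(f"Savings:        {1 - asic / h100:.0%}")
```

Under those assumptions the ASIC works out roughly three-quarters cheaper to own, consistent with the figure cited above; the exact number naturally shifts with electricity prices, utilization, and the amortization period.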
Broadcom expects the global AI ASIC market to grow to between $60 billion and $90 billion by 2027. It’s aiming for a 60% compound annual growth rate (CAGR) in AI-related revenue — faster than Nvidia’s current projection of 50%.
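For a sense of what that target implies, here is a minimal sketch that compounds a purely hypothetical $20 billion AI revenue base (not a reported figure) at 60% versus 50% per year:

```python
# Compounding a hypothetical $20B AI revenue base at 60% vs. 50% CAGR.
# The starting base is illustrative only; the growth rates are the ones cited above.
base = 20.0  # $ billions, hypothetical
for year in (1, 2, 3):
    at_60 = base * 1.60 ** year
    at_50 = base * 1.50 ** year
    print(f"Year {year}: 60% CAGR -> ${at_60:.1f}B | 50% CAGR -> ${at_50:.1f}B")
```

After three years, the ten-point difference in growth rate compounds into roughly a 20% gap in revenue, whatever the starting base.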
Still, much of Broadcom’s current AI chip business is driven by XPUs — which carry lower margins than other parts of the portfolio. As a result, short-term profitability could be under some pressure. The company expects its gross margin to decline by about 1.3 percentage points in fiscal 2025. But as designs improve and volumes scale, Broadcom believes margins will recover over time.
Nvidia remains dominant in the AI software ecosystem thanks to its CUDA stack and developer network. But Broadcom is catching up on the hardware side.
In fact, reports suggest Nvidia has quietly formed an internal ASIC design team — a sign it’s taking the custom chip trend more seriously.
It reflects what the industry calls Makimoto’s Wave — a 10-year cycle between general-purpose and custom hardware. About a decade ago, AlexNet kicked off the GPU-led AI boom. Now, we may be shifting back toward specialization, and Broadcom is a frontrunner in that domain.
To be clear, this isn't about one chip winning over the other — ASICs and GPUs will likely coexist across different workloads. But in terms of cost, performance, and energy efficiency, specialized chips are having their moment right now.
Broadcom currently trades at a forward P/E of around 40x. Not cheap — but still below Nvidia’s 50–60x multiple, which gives Broadcom a relative edge for value-focused investors in the AI space. Since its VMware integration, Broadcom has been generating strong free cash flow — hitting $6.41 billion in Q2 FY2025, up 44% YoY.
According to Seeking Alpha, Broadcom’s forward multiple is expected to trend lower from 2025 through 2028 as earnings catch up, signaling a clearer path toward fair valuation.
As Melius Research puts it: "use any weakness as a buying opportunity since there is such a shortage of this type of leadership outside Nvidia in AI."