Q1 FY2026 Earnings Call
AVGO · Preprocessing Report
2026-03-04
Quality: 100%
Turns: 50
Speakers: 16
Sections: 5
Exchanges: 11
Claims: 386

Entities by group 39

semiconductor supplier 1
Broadcom (company)
company executives 2
Hock Tan (person), Kirsten Spears (person)
ai accelerators 5
XPU (product), TPU (product), AI XPU (product), MTIA (product), Ironwood TPU (product)
sell-side analysts 8
Stacy Rasgon (person), Blayne Curtis (person), Harlan Sur (person), Joshua Buchalter (person), Ross Seymore (person), C.J. Muse (person), Ben Reitzes (person), James Schneider (person)
ai compute customers 6
OpenAI (company), Anthropic (company), Meta (company), NVIDIA (company), Google (company), Groq (company)
language models 1
LLM (technology)
gpu accelerators 1
GPU (product)
analysts 3
Vivek Arya (person), Timothy Arcuri (person), Thomas O'Malley (person)
signal serialization 1
SerDes (technology)
networking standards 1
Ethernet (technology)
network switches 2
Tomahawk 6 (product), Tomahawk 7 (product)
application-specific chips 1
ASIC (product)
advanced packaging materials 1
T-glass (technology)
interconnect 1
direct attach copper (technology)
investment bank 1
Goldman Sachs (company)
signal processors 1
DSP (product)
disaggregated ai architecture 1
CPX (technology)
Ungrouped 2
Charlie Kawwas (person), VMware (company)
REPORTING 79, PROJECTING 36, POSITIONING 121, EXPLANATORY 42, ANALYST 69

Topics 76

xpu×38, ai×35, revenue×26, network×18, accelerator×13, customer×13, chip×13, supply×12, semiconductor×10, gross margin×9, silicon×7, ethernet×7, serdes×6, tooling×6, ebitda×5, switch×5, vmware×5, software×5, asic×5, custom silicon×5

Themes 233

ai×35, scale-up×8, margin×6, growth×5, demand×5, infrastructure software×5, customer-owned×5, revenue growth×4, revenue mix×4, custom silicon×4, capacity estimate×4, back-end protocol×4, guidance×3, semiconductor×3, scale-out networking×3, gross×3, productization×3, customer ramp×2, llm performance×2, direct attached copper×2, strategic positioning×2, consolidated×2, spending×2, revenue guidance×2, clarification×2, outlook×2, market share×2, high-volume ramp×2, high-volume yields×2, 1.6 terabit×2, architecture evolution×2, customer variation×2, early procurement×2, supply visibility×2, anthropic project chips and racks×2, customer diversification×2, direct attach copper×2, earnings×2, training and inference use×2, market trend×2, customer confidence×2, record quarterly total revenue×1, record adjusted consolidated earnings×1, operating leverage×1, deployment phase×1, custom business×1, custom momentum×1, google demand×1, future demand×1, anthropic compute ramp×1, product franchise expansion×1, meta roadmap×1, meta shipment×1, capacity scaling×1, new account win×1, openai deployment×1, compute capacity×1, time to market and yields×1, compute infrastructure scale-up×1, share of ai revenue×1, share gain×1, scale-out lead×1, cost and power advantage×1, into 2027×1, revenue outlook×1, supply chain support×1, q1 flat y/y×1, enterprise networking broadband and server storage×1, wireless seasonal decline×1, non-ai semiconductor guidance×1, contract value×1, infrastructure growth×1, cloud foundation×1, abstraction layer×1, q1 fy2026 detail×1, record growth×1, record quarterly result×1, year-over-year improvement×1, adjusted profitability×1, detailed review×1, results×1, operating expenses×1, expense ratio×1, segment transition×1, operating margin×1, cash generation×1, free cash flow×1, days on hand×1, cash payout×1, common stock×1, dividends and repurchases×1, balance sheet×1, capital return×1, non-gaap guidance×1, vs networking×1, allocation to product lines×1, revenue overhang×1, cloud spending growth×1, gains outlook×1, hyperscaler timing×1, hyperscaler outlook×1, concentration×1, enterprise adoption×1, customer pipeline×1, training demand×1, inference-driven demand×1, custom accelerator design×1, post-announcement acceleration×1, full-year outlook×1, xpu switch and dsp revenue×1, silicon content outlook×1, hyperscaler design efforts×1, relative performance×1, design complexity×1, competitive lead×1, self-sufficiency in ai hardware×1, training and inference workloads×1, technology requirements×1, best-in-class silicon engineering×1, advanced interconnect and packaging×1, design experience×1, llm competition×1, nvidia competition×1, generation improvement×1, partner technology×1, technology leadership×1, lab validation×1, commentary×1, differentiation×1, scale-out and scale-up×1, networking synergy×1, networking speed×1, market position×1, bandwidth demand×1, capacity demand×1, customer strength×1, next-generation launch×1, prefill and decode disaggregation×1, gpu mix shift×1, disaggregated workloads×1, disaggregation confusion×1, general-purpose limits×1, workload specialization×1, mixture-of-experts support in silicon×1, inference workload limitation×1, custom ai workloads×1, gpu comparison×1, training stage specialization×1, llm workload tuning×1, customization across customers×1, roadmap across customers×1, shipping impact×1, shipment impact×1, company concern×1, steady performance×1, pressure from ai products×1, manufacturing achievement×1, overall impact×1, impact guidance×1, analyst perspective×1, company framing×1, analyst comparison×1, capacity expansion×1, 2028 visibility×1, 2028 timeline×1, sharp growth×1, supply constraint×1, supply locked up×1, remaining components×1, component planning×1, confidence×1, issue×1, customer count×1, strategic engagements×1, 2- to 4-year expectations×1, multiyear commitments×1, through 2028 or beyond×1, customer fragmentation across channels×1, strategic play×1, llm and inference roadmap×1, gpu and cloud optionality×1, projected roadmap×1, opportunistic optionality×1, mix visibility×1, anthropic versus chips×1, cpo strategy×1, llm and ai data centers×1, larger clusters×1, xpu connectivity×1, low latency and power×1, optical technology×1, rack and cluster scaling×1, xpu and gpu connections×1, speed upgrade×1, copper interconnect×1, competitive positioning×1, adoption timing×1, cloud networking standard×1, ethernet enablement×1, custom engagements×1, inference applications×1, performance and cost advantage versus gpus×1, custom projects×1, inference silicon efficiency×1, lower cost and power×1, inference adoption×1, interchangeability across workloads×1, inference and training use×1, customer development cadence×1, model training intelligence×1, state-of-the-art development×1, capacity and chips×1, customer visibility×1, customer guidance×1, visibility guidance×1, openai capacity×1, openai deal×1, deployment ramp×1, deployment plan×1, progression among leading players×1, developer competition across use cases×1, training and inference productization×1, silicon validation by customers×1, team size×1, product roadmap×1, multi-year planning×1, llm deployment×1, model monetization×1, customer strategy×1, cloud training×1, gpu alternative×1, long-term roadmap×1, sustainable play×1

Key Metrics 54

revenue×43, gross margin×13, gigawatts×7, demand×5, compute capacity×4, operating expenses×4, supply visibility×4, adjusted ebitda margin×3, revenue growth×3, operating margin×3, bandwidth×3, dollars per gigawatt×3, customers×3, adjusted ebitda×2, shipments×2, growth×2, capital expenditures×2, market share×2, return on investment×2, performance×2, switch capacity×2, content per gigawatt×2, customer count×2, data rate×2, gigawatt×2, operating leverage×1, deployment×1, power×1, bookings×1, r&d expense×1, operating income×1, free cash flow×1, inventory×1, days inventory on hand×1, dividends×1, share repurchases×1, capital returned×1, diluted share count×1, cash×1, share repurchase program×1, tax rate×1, clusters×1, complexity×1, share×1, lead×1, serdes speed×1, yield×1, cost×1, demand expectations×1, customer spending×1, capacity×1, latency×1, visibility×1, headcount×1

Entities 691

Broadcom×309, Hock Tan×129, XPU×46, Kirsten Spears×32, Charlie Kawwas×13, Stacy Rasgon×12, LLM×10, Blayne Curtis×10, GPU×10, Harlan Sur×9, TPU×8, SerDes×8, OpenAI×7, Joshua Buchalter×7, Anthropic×6, VMware×6, Ethernet×6, Meta×5, Ross Seymore×5, C.J. Muse×5, Vivek Arya×5, AI XPU×4, ASIC×4, Ben Reitzes×4, MTIA×3, NVIDIA×3, Timothy Arcuri×3, T-glass×3, direct attach copper×3, Google×2, Tomahawk 6×2, Tomahawk 7×2, Thomas O'Malley×2, James Schneider×2, Goldman Sachs×2, Ironwood TPU×1, DSP×1, CPX×1, Groq×1

Business Segments 283

Semiconductor Solutions×265, Infrastructure Software×18

Sectors 296

semiconductors×206, artificial intelligence×43, cloud computing×26, software×11, data center×5, optical networking×2, broadband services×1, data storage×1, wireless telecommunications services×1

Metadata Distributions

Sentiment
positive 123, negative 11, neutral 213
Temporality
backward 63, forward 57, current 227
Certainty
definitive 76, confident 103, moderate 112, tentative 52, speculative 4
Magnitude
major 71, moderate 148, minor 128
Direction
improvement 37, decline 6, flat 2, mixed 5, none 297
Time Horizon
immediate 58, near_term 104, medium_term 36, long_term 26, unspecified 123
Verifiability
quantitative 129, event 18, qualitative 200
Analyst Intent
probing 21, challenging 3, confirming 9, seeking_detail 30, seeking_guidance 6

Speakers

Executives
CK: Charlie Kawwas, executive
HT: Hock Tan, CEO
KS: Kirsten Spears, CFO
Analysts
BR: Ben Reitzes, analyst
BC: Blayne Curtis, analyst
CM: C.J. Muse, analyst
HS: Harlan Sur, analyst
JS: James Schneider, analyst
JB: Joshua Buchalter, analyst
RS: Ross Seymore, analyst
SR: Stacy Rasgon, analyst
TO: Thomas O'Malley, analyst
TA: Timothy Arcuri, analyst
VA: Vivek Arya, analyst
Other
JY: Ji Yoo, IR
OP: Operator, operator

Sections

Type | Label | Speaker
preamble | Preamble | Ji Yoo
prepared_remarks | Prepared Remarks | Hock Tan, Kirsten Spears
qa_session | Q&A Session |
closing_remarks | Closing Remarks | Ji Yoo
operator_signoff | Operator Sign-off | Operator

Q&A Exchanges 11

# | Analyst | Firm | Turns
1 | Blayne Curtis (BC) | Jefferies | 3
2 | Harlan Sur (HS) | JPMorgan | 4
3 | Ross Seymore (RS) | Deutsche Bank | 3
4 | C.J. Muse (CM) | Cantor Fitzgerald | 5
5 | Timothy Arcuri (TA) | UBS | 4
6 | Stacy Rasgon (SR) | Bernstein | 3
7 | Ben Reitzes (BR) | Melius Research | 6
8 | Vivek Arya (VA) | Bank of America Securities | 5
9 | Thomas O'Malley (TO) | Barclays | 4
10 | James Schneider (JS) | Goldman Sachs | 3
11 | Joshua Buchalter (JB) | TD Cowen | 3

Claim Taxonomy 347

REPORTING 79
  result: Financial outcome for a completed period (48)
  metric: Non-financial quantitative fact (11)
  operational: Discrete completed event (20)
PROJECTING 36
  guidance: Quantitative expectation with number + time (23)
  commitment: Promise with binary verifiable outcome (7)
  target: Long-term aspirational quantitative goal (6)
POSITIONING 121
  strategy: Priority, direction, or initiative (95)
  competitive: Company's position or advantages (10)
  opportunity: Market condition framed as growth driver (7)
  risk: Headwind, constraint, or uncertainty (9)
EXPLANATORY 42
  attribution: Why a specific outcome happened (3)
  context: Non-company macro/industry fact (39)
FRAMING 0
  thesis: Falsifiable belief about how the world works (0)
ANALYST 69
  question: Interrogative seeking information (42)
  observation: Restates a fact or data point (13)
  concern: Flags a risk or challenge (1)
  estimate: Analyst's own projection or calculation (10)
  sentiment: Opinion, praise, or critique (3)

Transcript

Preamble
Operator (operator)
Welcome to Broadcom Inc.'s First Quarter Fiscal Year 2026 Financial Results Conference Call. At this time, for opening remarks and introductions, I would like to turn the call over to Ji Yoo, Head of Investor Relations of Broadcom Inc.
Ji Yoo (IR), Broadcom Inc.
Thank you, operator, and good afternoon, everyone. Joining me on today's call are Hock Tan, President and CEO; Kirsten Spears, Chief Financial Officer; Charlie Kawwas, President, Semiconductor Solutions Group; and Ram Velaga, President, Infrastructure Software Group. Broadcom distributed a press release and financial tables after the market closed, describing our financial performance for the first quarter fiscal year 2026. If you did not receive a copy, you may obtain the information from the Investor section of Broadcom's website at broadcom.com. This conference call is being webcast live, and an audio replay of the call can be accessed for 1 year through the Investors section of Broadcom's website. During the prepared comments, Hock and Kirsten will be providing details of our first quarter fiscal year 2026 results, guidance for our second quarter of fiscal year 2026 as well as commentary regarding the business environment. We'll take questions after the end of our prepared comments.
Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call. In addition to U.S. GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today's press release. Comments made during today's call will primarily refer to our non-GAAP financial results. I will now turn the call over to Hock.
Prepared Remarks
Hock Tan (CEO), Broadcom Inc.
Thank you, Ji, and thank you, everyone, for joining us today. In our fiscal Q1 2026, total revenue reached a record $19.3 billion, and that's up 29% year-on-year and exceeding our guidance on the back of better-than-expected growth in AI semiconductors. This top line strength translated into exceptional profitability with Q1 consolidated adjusted EBITDA hitting a record $13.1 billion, which is 68% of revenue. These figures demonstrate that our scale continues to drive significant operating leverage. Now we expect this momentum to accelerate as our custom AI XPUs hit their next phase of deployment among our 5 customers. So looking ahead to next quarter Q2 '26, we're guiding for consolidated revenue of approximately $22 billion, which represents 47% year-on-year growth.
Let me now give you more color on our semiconductor business. In Q1, revenue was a record $12.5 billion as year-on-year growth accelerated to 52%. This robust growth was driven by AI semiconductor revenue, which grew 106% year-on-year to $8.4 billion, way above our outlook. In Q2, this momentum accelerates, and we expect semiconductor revenue to be $14.8 billion, up 76% year-on-year. Driving this is AI revenue growth, which will accelerate very sharply to 140% year-on-year to $10.7 billion. Now our custom accelerator business grew 140% year-on-year in Q1. This momentum continues in Q2. The ramp of custom AI accelerators across all our 5 customers is progressing very well.
For Google, we continue our trajectory of growth in '26 with strong demand for the seventh-generation Ironwood TPU. In 2027 and beyond, we expect to see even stronger demand from next generations of TPU. For Anthropic, we are off to a very good start in 2026 for 1 gigawatt of TPU compute. And for '27, this demand is expected to surge in excess of 3 gigawatts of compute. Our XPU franchise, I should add, extends beyond TPUs. Now contrary to recent analyst reports, Meta's custom accelerator MTIA road map is alive and well. We're shipping now. And in fact, for the next-generation XPUs, we will scale to multiple gigawatts in '27 and beyond. Rounding off with customers 4 and 5, we see strong shipments this year, which we expect to more than double in 2027. We also now have a sixth customer. We expect OpenAI to deploy in volume their first-generation XPU in 2027 at over 1 gigawatt of compute capacity. Let me take a second to emphasize that our collaboration with these 6 customers to develop AI XPUs is deep, strategic and multiyear. We bring to each of these partnerships unmatched technology in SerDes, silicon design, process technology, advanced packaging and networking to enable each of these customers to achieve optimal performance for their differentiated LLM workloads. We have the track record to deliver these XPUs in high volumes at an accelerated time to market with very high yields. And beyond technology, we provide multiyear supply agreements as our customers scale up deployment of their compute infrastructure. Our ability to assure supply in these times of constrained capacity in leading-edge wafers, in high-bandwidth memory and substrates ensures the durability of our partnerships, and we have fully secured capacity of these components for '26 through '28.
Consistent now with the strong outlook for our XPUs, demand for AI networking is accelerating. Q1 AI networking revenue grew 60% year-on-year and represented 1/3 of total AI revenue. In Q2, we project AI networking to accelerate a lot more and grow to 40% of total AI revenue. We are clearly gaining share in networking. Let me explain.
In scale-out, our first-to-market Tomahawk 6 switch at 100 terabit per second as well as our 200G SerDes are capturing demand from hyperscalers, whether they use XPUs or GPUs this year. This lead will extend in '27 with our next-generation Tomahawk 7 featuring double the performance. Meanwhile, in scale-up, as cluster sizes and our customers expand, we are uniquely positioned to enable these customers to stay on direct attached copper through our 200G SerDes. As we next step up to 400G SerDes in 2028, our XPU customers will likely continue to stay on direct attached copper. And this is a huge advantage, as the alternative of going to optical is more expensive and requires significantly more power. Reflecting the foregoing factors, our visibility into 2027 has dramatically improved. Today, in fact, we have line of sight to achieve AI revenue from chips, just chips, in excess of $100 billion in 2027. We have also secured the supply chain required to achieve this.
Now turning to non-AI semiconductors. Q1 revenue of $4.1 billion was flat year-on-year, in line with guidance. Enterprise networking, broadband, server storage revenues were up year-on-year, offset by a seasonal decline in wireless. In Q2, we forecast non-AI semiconductor revenue to be approximately $4.1 billion, up 4% from a year ago.
Let me now talk about our Infrastructure Software segment. Q1 Infrastructure Software revenue of $6.8 billion was in line with our guidance, up 1% year-on-year. For Q2, we forecast Infrastructure Software revenue to be approximately $7.2 billion, up 9% year-on-year. VMware revenue grew 13% year-on-year. Bookings continue to be strong, and total contract value booked in Q1 exceeded $9.2 billion, sustaining ARR, which is annual recurring revenue, growth of 19% year-on-year. Let me reinforce that this growth in our Infrastructure Software business reflects our focus and investments in foundational infrastructure, and our Infrastructure Software is not disrupted by AI.
In fact, VMware Cloud Foundation, VCF, is the essential software layer in data centers integrating CPUs, GPUs, storage and networking into a common high-performance private cloud environment. As the permanent abstraction layer between AI software and the physical silicon, VCF cannot be disintermediated or replaced. It allows enterprises, in fact, to scale complex generative AI workloads effectively with agility that hardware alone cannot provide. We are confident that the growth in generative and agentic AI will create the need for more VMware, not less. So in summary, let me put it all together. For Q2 2026, we expect consolidated revenue growth to accelerate to 47% year-on-year and reach approximately $22 billion, and we expect adjusted EBITDA to be approximately 68% of revenue. So with that, let me turn the call over to Kirsten.
Kirsten Spears (CFO), Broadcom Inc.
Thank you, Hock. Let me now provide additional detail on our Q1 financial performance.
Consolidated revenue was a record $19.3 billion for the quarter, up 29% from a year ago. Gross margin was 77% of revenue in the quarter. Consolidated operating expenses were $2 billion, of which $1.5 billion was R&D. Q1 operating income was a record $12.8 billion, up 31% from a year ago. Operating margin increased 50 basis points year-over-year to 66.4% on favorable operating leverage. Adjusted EBITDA of $13.1 billion or 68% of revenue was above our guidance of 67%. Now let's go into detail for our 2 segments.
Starting with semiconductors. Revenue for our Semiconductor Solutions segment was a record $12.5 billion, with growth accelerating to 52% year-on-year, driven by AI. Semiconductor revenue represented 65% of total revenue in the quarter. Gross margin for our Semiconductor Solutions segment was up 30 basis points year-on-year to approximately 68%. Operating expenses of $1.1 billion reflected increased investment in R&D for leading-edge AI semiconductors and represented 8% of revenue. Semiconductor operating margin of 60% was up 260 basis points year-on-year, reflecting strong operating leverage.
Now moving on to Infrastructure Software. Revenue for Infrastructure Software of $6.8 billion was up 1% year-on-year and represented 35% of revenue. Gross margin for Infrastructure Software was 93% in the quarter and operating expenses were $979 million in the quarter. Q1 software operating margin was up 190 basis points year-on-year to 78%.
Moving on to cash flow. Free cash flow in the quarter was $8 billion and represented 41% of revenue. We spent $250 million on capital expenditures. We ended the first quarter with inventory of $3 billion as we continue to secure components to support strong AI demand. Our days of inventory on hand were 68 days in Q1 compared to 58 days in Q4 in anticipation of accelerating AI semiconductor growth. Turning to capital allocation.
In Q1, we paid stockholders $3.1 billion of cash dividends based on a quarterly common stock cash dividend of $0.65 per share. During the quarter, we repurchased $7.8 billion or approximately 23 million shares of common stock. In total, in Q1, we returned $10.9 billion to shareholders through dividends and share repurchases. In Q2, we expect the non-GAAP diluted share count to be approximately 4.94 billion shares, excluding the impact of potential share repurchases. We ended the first quarter with $14.2 billion of cash. Today, we are announcing our Board of Directors has authorized an additional $10 billion for our share repurchase program effective through the end of calendar year 2026. Now moving on to guidance.
Our guidance for Q2 is for consolidated revenue of $22 billion, up 47% year-on-year. We forecast semiconductor revenue of approximately $14.8 billion, up 76% year-on-year. Within this, we expect Q2 AI semiconductor revenue of $10.7 billion, up approximately 140% year-on-year. We expect Infrastructure Software revenue of approximately $7.2 billion, up 9% year-on-year.
For your modeling purposes, we expect consolidated gross margin to be flat sequentially at 77%. We expect Q2 adjusted EBITDA to be approximately 68%. We expect the non-GAAP tax rate for Q2 in fiscal year 2026 to be approximately 16.5% due to the impact of the global minimum tax and the geographic mix of income compared to that of fiscal year '25. That concludes my prepared remarks. Operator, please open up the call for questions.
Q&A Session
Q&A 1/11
Operator (operator)
[Operator Instructions] And our first question will come from the line of Blayne Curtis with Jefferies.
Blayne Curtis (analyst), Jefferies
Just a clarification and a question. Just clarification, Hock, on the greater than $100 billion. I think you said AI chips. I just want to make sure you're clarifying the difference between the ASICs and networking, and didn't know how rack revenue fits in there. And then the question: I think the biggest overhang on the group here is that AI grew roughly double in the quarter, which I think is about what cloud CapEx is growing this year.
I'm just kind of curious your perspective, I think given the outlook that you have for '27, you should be a share gainer. I'm just kind of curious your perspective in terms of the pessimism that investors kind of think of that the hyperscalers need to get a return on investment in this year or next year or if not, the year after. I'm just kind of curious your perspective, how you factor that into your outlook.
Hock Tan (CEO), Broadcom Inc.
Well, what we see — what we have seen over the last few months and continue to see even more is — and it's really not so much talking about hyperscalers. Our customers, Blayne, are limited to those few players out there. And some of them are hyperscalers, some of them are not hyperscalers, but they all have one thing in common, which is to create LLMs, productize them and generate platforms, be it for enterprise consumption in code assistance or agentic AI, or be it for consumer subscription that we know about. Whatever it is, it's that few prospects, many of whom are customers now, who are creating this, whether it's generative AI or agentic AI, but creating a platform. That's our customer. And with respect to each of those guys, we are seeing stronger and stronger demand for compute capacity for training, which is something they do need constantly. But what is very, very interesting and surprising too to us is very much for inference, in order to productize the latest LLMs they create and monetize them. And that inference is driving a substantial amount of compute capacity, which is great for us because all these players, these 5, 6 customers of ours, are on the path to creating their own custom accelerators. And beyond that, their own design architecture of networking clusters of those custom accelerators.
So I think we're going to see demand keeps picking up as we have heard announcements in the past 6 months. Now to clarify your first part, Blayne, when I say we forecast, we have a line of sight that our revenue in '27 will be significantly in excess of $100 billion. I'm focusing on the fact that these are pretty much all based on chips, whether they are XPUs, whether they are switch chips, DSPs, these are silicon content we're talking about.
Q&A 2/11
Operator (operator)
One moment for our next question, and that will come from the line of Harlan Sur with JPMorgan.
Harlan Sur (analyst), JPMorgan
Congratulations to the team on the strong results. Hock, there's been a lot of noise around CSPs and hyperscalers embarking on their own internal XPU, TPU design efforts, right? We call it COT, or customer-owned tooling. This is not a new dynamic with ASICs, right? I think the Broadcom team has been through this COT competitive dynamic before over the 30 years, right, that you've been a leader in the ASIC industry.
And very few of these COT initiatives have ever been successful. Now on AI, some of these COT initiatives are coming to the market now, but it looks like they're at least 2x less performant than your current generation solutions, 2x less complex in terms of chip design complexity, packaging complexity, IP. So maybe just a quick 2-part question. Hock, one for you is, given your visibility into next year, do you see these COT science projects taking any meaningful TPU, XPU share from Broadcom? And then maybe the second quick question for either you or Charlie is, given that Broadcom's TPU, XPU programs from a performance complexity IP perspective are 12 to 18 months ahead of any of these COT programs, how does the Broadcom team widen this gap further?
Hock Tan (CEO), Broadcom Inc.
Well, that's a great question. And it fits into why I purposely took the time in my opening remarks to say that when any hyperscaler or LLM developer tries to become entirely self-sufficient in creating what you call a customer-owned tooling, or COT, model, they face tremendous challenges. One is technology: the technology to create the silicon chips, particularly the XPUs, that they need to do the computing, optimized to train and run inference on the workloads of the LLMs they produce. That technology comes in from different dimensions. You need the best silicon design team around. You need really cutting-edge SerDes, very advanced packaging and, just as much, you need to understand how to network clusters of them together. We've been doing this for more than 20 years in silicon. And in this particular space today in generative AI, if you're trying, as an LLM player, to do your own chip, you cannot afford to have a chip that is just good enough.
You need the best chips that are around because you're competing against other LLM players. And most of all, you're also competing against NVIDIA, who is by no means letting down their guard. They are producing better and better chips with every passing generation. So you, as an LLM player trying to establish your platform in the world, have to create chips that are competitive with, if not better than, not just NVIDIA's, but those of all the other platform players that you're competing against.
And for that, you really need, in our belief (and we see that firsthand), a partner in silicon with the best technology, IP and execution around. And very modestly, I would say we are by far way out there. And we will not see competition in COT for many years to come. It will come eventually, but we're still a long way off because the race, as we see it, continues.
And one thing I add in there that is particularly unique to us, when you create a silicon, you really have to get it up and running in high volume in production very quickly, time to market. We are very, very experienced in doing that. Anybody can design a chip in a lab that works well. Can you produce 100,000 of those chips quickly at yields that you can afford? And we don't see too many players in the world that can do that. Charlie?
Charlie Kawwas (executive), Broadcom Inc.
I think you covered it very well, Hock.
Q&A 3/11
Operator (operator)
One moment for our next question, and that will come from the line of Ross Seymore with Deutsche Bank.
Ross Seymore (analyst), Deutsche Bank
Hock, in your script, you leaned a little bit more into the networking differentiation than you have in the past. So I guess kind of a short-term and a longer-term question. The short term is, what's driving that up to 40% of the AI revenues? And the longer-term question is, is that going — that percentage mix in that $100 billion plus, is that changing now?
What sort of leadership do you expect to maintain in that business, whether it's scale-out or scale-up? And is your leadership position there helping on your XPU side as you can optimize across both the compute and the networking sides?
Hock Tan (CEO), Broadcom Inc.
Well, let's address the first part of that fairly complex question first, Ross. Yes, in networking, especially with the new generation of GPUs and XPUs that are coming out there, we're running at 200 gigabit SerDes out there in terms of bandwidth. And with the Tomahawk 6 that we introduced over 6 months ago (or in fact, closer to 9 months ago), we're the only one out there. And our customers and the hyperscalers want to run with the best networking and the most bandwidth out there for their clusters. So we are seeing huge demand for this, the only 100 terabit per second switch out there. So that's driving a lot of demand. And couple that with running bandwidth on scale-out optical transceivers at 1.6 terabit. We are again the only player out there doing a DSP at 1.6 terabit. That combination is driving, I would say, the growth of our networking components even faster than our XPUs are growing, which is already pretty remarkable.
So that's what you're seeing. But at some point, I would think these things will settle down, though we're not slowing down the pace because, as I said, next year in '27, we'll launch the next-generation Tomahawk 7, 2x the performance, and we'll probably be by far the first out there, and that will continue to sustain that momentum. But at the end of the day, to answer your question, yes, I expect that as a composition of our total AI revenue in any quarter, AI networking components will range between probably 33% and 40%.
#180
strategy#181
competitive#182
context#183
strategy#184
strategy#185
operational#186
opportunity#187
metric#188
strategy#189
commitment#190
guidance#191
Q&A 4/11
OP
Operatoroperator
One moment for our next question, and that will come from the line of C.J. Muse with Cantor Fitzgerald.
CM
C.J. MuseanalystCantor Fitzgerald
I'm curious, how are you thinking about the move to disaggregate prefill and decode from the GPU ecosystem and the impact to custom silicon demand? Are you seeing any potential changes in sort of the relative mix between GPUs and customer silicon?
question#192
question#193
HT
Hock TanCEOBroadcom Inc.
I'm not sure I fully understand your question, C.J. Could you clarify what you mean by disaggregate?
strategy#194
CM
C.J. MuseanalystCantor Fitzgerald
Sure. Pushing off workloads to CPX for prefill and working off a Groq for decode and having that disaggregated kind of world. And does that put any pressure in terms of the demand for custom versus going with a full GPU stack?
#195
observation#196
question#197
HT
Hock TanCEOBroadcom Inc.
Okay. I get what you mean; that word disaggregation kind of threw me off. In a way, what you're really asking is how the architecture of the AI accelerator, be it GPU or XPU, is evolving as workloads start to evolve. And that's what we are seeing very much in particular. The one-size-fits-all of a general-purpose GPU gets you only so far. It can still keep going, because you can still run different workloads. Take mixture of experts: you want to run mixture of experts with [ sparse costs ] to be very effective, you hear the term, but a GPU is designed for dense matrix multiplication. So you do it with software kernels, but it's not as effective as if you hardcode it in silicon and make those XPUs purposely designed to be much more performant for mixture-of-experts workloads, say. The same applies for inference. And what that drives down to is you start to see designs of XPUs become much more customized for the particular workloads of particular LLM customers of ours. And the design starts to depart from the traditional standard GPU design, which is why, as we always indicated before, XPUs will eventually be more the choice, simply because they allow the flexibility of making designs that work with particular workloads — one for training, even, and one for inference. And as you say, one perhaps would be better at prefill and one better at post-training or reinforcement learning or test-time scaling. You can tweak your TPUs — XPUs, sorry, Freudian slip — towards the particular kind of workload and LLM that you want.
And we're seeing that. We're seeing that road map in all our 5 customers.
#198
context#199
context#200
context#201
context#202
context#203
risk#204
risk#205
strategy#206
competitive#207
strategy#208
strategy#209
strategy#210
strategy#211
Q&A 5/11
OP
Operatoroperator
One moment for our next question, and that will come from the line of Timothy Arcuri with UBS.
TA
Timothy ArcurianalystUBS
I had just a question on sort of the puts and takes on gross margin as you begin to ship these racks. I mean, obviously, it's going to pull the blended margin down, but I'm wondering if there's any guardrails you can give us on this.
It seems like the racks are maybe 45%, 50% gross margin. So I guess, should we think about that pulling gross margin down like 500 basis points roughly as these racks begin to ship? And I guess part of that, Hock, is there some like floor to the gross margin below which you wouldn't be willing to do more racks?
question#212
question#213
observation#214
question#215
question#216
HT
Hock TanCEOBroadcom Inc.
Hate to tell you that you must be a bit hallucinating. Our gross margin is solidly at the number Kirsten reported. It will not be affected by more and more AI products going out. We have gotten our yields, we've gotten our cost to the point where the model we have in AI will be fairly consistent with the models we have in the rest of the semiconductor business. Kirsten?
result#217
result#218
guidance#219
operational#220
result#221
#222
KS
Kirsten SpearsCFOBroadcom Inc.
I would agree with that. I think on further study relative to even comments that I did make last quarter, the impact relative to our overall mix is actually not going to be substantial at all. So I wouldn't worry about it.
#223
strategy#224
guidance#225
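The analyst's dilution worry in this exchange is simple revenue-weighted arithmetic. A minimal sketch follows; every number in it (the base margin, the rack margin, the rack revenue share) is a hypothetical illustration of the mechanics, not a company figure, and management's answer is that the actual mix impact is not substantial:

```python
def blended_gm(base_gm: float, rack_gm: float, rack_share: float) -> float:
    """Revenue-weighted gross margin when rack_share of revenue
    carries rack_gm and the remainder carries base_gm."""
    return base_gm * (1.0 - rack_share) + rack_gm * rack_share

# Hypothetical: 10% of revenue at the analyst's ~47.5% rack margin
# against an assumed ~77% base margin.
gm = blended_gm(0.77, 0.475, 0.10)
print(round((0.77 - gm) * 10_000))  # dilution in basis points
```

Under these illustrative inputs the dilution is roughly 300 basis points, which shows why the analyst's 500-basis-point guardrail question depends entirely on the assumed rack share of revenue.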
Q&A 6/11
OP
Operatoroperator
One moment for our next question, and that will come from the line of Stacy Rasgon with Bernstein.
SR
Stacy RasgonanalystBernstein
I don't know if this is for Hock or Kirsten, but I wanted to dig in a little more to this substantially more than $100 billion next year. I'm trying to just count up the gigawatts. I counted, I don't know, 8 or 9, you have 3 from Anthropic, 1 from OpenAI, so that's 4. You said Meta was multiple, so at least 2. That gets me to 6. Google, I figure, should be bigger than Meta, so like at least 3, that's 9 and then you got a few others.
I had thought that your content per gigawatt was sort of, call it, in a $20 billion per gigawatt range. I guess what I'm asking is, is my math around the gigawatts you plan to ship in '27 correct? And how do I think about your content per gigawatt as that ships? Maybe it will be "substantially" more than $100 billion.
question#226
estimate#227
estimate#228
estimate#229
estimate#230
estimate#231
estimate#232
question#233
question#234
estimate#235
HT
Hock TanCEOBroadcom Inc.
Stacy, you have a very interesting perspective and I got to admire you for that.
But you're right, you can look at it in gigawatts, which is the right way to look at it instead of dollars, because that's how we sell our chips. You have to realize that depending on the LLM customer — our 6 customers now, sorry, not 5, 6 — the chip dollars per gigawatt varies, sometimes quite dramatically. It does vary. But you're right, it's not far from the dollars you're talking about. And if you look at it by gigawatt in '27, we are seeing it getting close to 10 gigawatts.
context#236
strategy#237
metric#238
metric#239
metric#240
target#241
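Stacy's back-of-envelope above can be written out explicitly. Every number below is the analyst's estimate or assumption from the exchange, not company guidance, and as Hock notes, dollars per gigawatt varies by customer, sometimes dramatically:

```python
# Analyst's per-customer gigawatt guesses for '27 shipments.
gw_estimates = {"Anthropic": 3, "OpenAI": 1, "Meta": 2, "Google": 3}
total_gw = sum(gw_estimates.values())  # 9, before "a few others";
                                       # Hock: "close to 10 gigawatts"

# Implied chip dollars at a few hypothetical content-per-gigawatt rates,
# since the actual rate varies by customer.
for per_gw_usd_b in (10, 15, 20):
    print(f"{total_gw} GW x ${per_gw_usd_b}B/GW = ${total_gw * per_gw_usd_b}B")
```

At the analyst's ~$20 billion-per-gigawatt assumption, even 9 gigawatts lands well above $100 billion, which is the sense in which the math supports "substantially more."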
Q&A 7/11
OP
Operatoroperator
And our next question that will come from the line of Ben Reitzes with Melius Research.
BR
Ben ReitzesanalystMelius Research
Hock, great to be speaking with you.
Wanted to ask you about your commentary about supply visibility on those 4 major components through 2028. A, how'd you do it? This is probably the — you're the first one to kind of go out through the '28 time frame. And secondly, after this astounding growth in 2027 for your AI business, do you have enough visibility to grow quite a bit in 2028 based on the supply that you see and that kind of commentary?
#242
question#243
question#244
observation#245
question#246
HT
Hock TanCEOBroadcom Inc.
The best answer is, yes, you're right. We anticipated this sharp accelerated growth — now, nobody could anticipate the rate of growth it's showing, but we anticipated a large part of it, I guess, for longer than 6 months. We were early in being able to lock up T-glass. It's the infamous T-glass you all heard about. We were very early. We've locked up substrates. We have worked with our good partners on the rest of the stuff we talked about. And so the answer to your question is, it's somewhat anticipating early, and the fact that we have very good partners out there in these key components. What else can I say except that, yes. Charlie, you want to add anything?
strategy#247
strategy#248
operational#249
risk#250
operational#251
operational#252
operational#253
strategy#254
strategy#255
#256
CK
Charlie KawwasexecutiveBroadcom Inc.
Yes, just maybe a couple of quick ones.
I think you covered that piece really well. I think then the other piece that's really important, as Hock said, we build custom silicon for 6 customers. We have very deep strategic multiyear engagement with them.
They share with us because of this custom capability exactly what they anticipate at least over the next 2 to 3 years, sometimes 4 years. And so because of that, that's exactly why we went and secured all the elements Hock talked about.
And when we secure this, it requires investments with these partners, sometimes developing not just more capacity but the right technology and capacity for that. So we have to go secure it for multiple years and we're probably — you're right, we're probably the first one to secure that up to '28 or beyond.
#257
strategy#258
metric#259
strategy#260
metric#261
operational#262
strategy#263
operational#264
BR
Ben ReitzesanalystMelius Research
And can you grow in '28 with what you see in supply? Sorry to sneak that in.
question#265
#266
CK
Charlie KawwasexecutiveBroadcom Inc.
Yes.
strategy#267
Q&A 8/11
OP
Operatoroperator
Our next question that will come from the line of Vivek Arya with Bank of America Securities.
VA
Vivek AryaanalystBank of America Securities
Hock, I just wanted to first clarify the Anthropic project you're doing, the $20 billion or so for 1 gigawatt this year, how much of that is chips and how much of that is kind of racks? I just wanted to understand when you say $100 billion in chips, is there a distinction between chips versus your rack scale projects because just that project is supposed to triple next year?
And then my question is, your AI business is transitioning from kind of one large customer that was where you had kind of exclusive partnership to now multiple customers who are using multiple suppliers. So how do you get the visibility and the confidence about how your share will progress at these multiple customers? Because it's a very kind of fragmented engagement that they have across a whole range of cloud service providers and so on. So what are you doing to ensure that you have solid visibility and the right market share at this fragmented set of customers who are using multiple suppliers?
question#268
question#269
observation#270
question#271
observation#272
question#273
HT
Hock TanCEOBroadcom Inc.
Vivek, you have to understand one thing. First, as Charlie put it very nicely, we only have very few customers — to be precise, 6. For the volume we are driving, the revenue we're driving, we only have just 6. Prior to that, even fewer. And number two, you also have to understand the dollars each of them spends and the criticality of the nature of what they're embarking on. And that's why I threw out this term: Meta has MTIA, that's their custom accelerator program. To them, as to every one of my customers in this space, it's a strategic play. It's not optionality. To them — long term, short term, medium term — it is strategic, extremely strategic. They don't stop, and they are very clear, each of them, on where they want to position this custom silicon within the trajectory of LLM development and the trajectory of how they develop inference for productizing those LLMs. That part, we have very clear visibility into. Anything else on GPUs, using neoclouds, using cloud business — these are all transactional and optionality. So you point out very correctly, it seems very confusing. Trust me, not for us, nor for those customers we have.
They're very strategic. They're very targeted and they know exactly what they're building up and how much capacity they want to build up each year. And the only thing they think about is, can you do it faster? Otherwise, it's very strategic and targeted on a projected road map.
Anything else you see in the mix is pure, I call it, opportunistic for these guys, the optionality. So it's very clear.
metric#274
metric#275
attribution#276
context#277
opportunity#278
strategy#279
strategy#280
strategy#281
strategy#282
risk#283
context#284
strategy#285
strategy#286
strategy#287
strategy#288
strategy#289
strategy#290
strategy#291
VA
Vivek AryaanalystBank of America Securities
And on the clarification, Hock, Anthropic racks versus chips.
question#292
HT
Hock TanCEOBroadcom Inc.
I'd rather not answer that, but we're okay. As Kirsten said, we're good on our dollars and margin.
strategy#293
result#294
Q&A 9/11
OP
Operatoroperator
Our next question that will come from the line of Tom O'Malley with Barclays.
TO
Thomas O'MalleyanalystBarclays
I have one for Hock and one for Charlie.
So Hock, I know you're very specific and particular about what you put in the preamble, and you noted that customers are staying at direct attached copper through 400 gig SerDes. Is there any reason you're pointing that out in particular, especially as a leading pioneer in CPO? And then on Charlie's side, as you're adding more customers here, I would imagine customers that design ASICs with you are going to use scale-up Ethernet. Maybe talk about scale-up protocols and how you see Ethernet developing there as well.
#295
observation#296
question#297
estimate#298
question#299
HT
Hock TanCEOBroadcom Inc.
Okay. No — I'm just highlighting the fact that on networking, our technology very uniquely positions us to help our customers, and more than our customers, even customers using general-purpose GPUs, not just XPUs. Which is that if you are trying to create LLMs and creating your own AI data centers — designing them, architecting them — you truly want larger and larger domains or clusters, and you really want to connect XPUs to XPUs directly where you can. And the best way to do that is to use direct attach copper. That's the lowest latency, lowest power and lowest cost. So you want to keep doing that, especially in scale-up, as long as possible. In scale-out, we're past that. We use optical. That's fine. But I'm talking about scaling up in a rack, in a cluster domain.
You really want to use direct attach copper as long as you can. And based on the technology that Broadcom has, especially on connecting XPU to XPU or even GPU to GPU, we can do it with copper, and we can push the envelope from 100G to 200G to even 400G. We have SerDes now running 400G that can drive distance on a rack to run copper. All I'm trying to say is you don't need to go running into some bright shiny object called CPO, even as we are the lead in CPO. CPO will come in its time — not this year, maybe not next year, but in its time. Charlie?
#300
strategy#301
strategy#302
competitive#303
competitive#304
strategy#305
context#306
strategy#307
#308
strategy#309
strategy#310
strategy#311
metric#312
metric#313
competitive#314
strategy#315
#316
CK
Charlie KawwasexecutiveBroadcom Inc.
Yes. No. Well said, Hock. And on the question of Ethernet: with the debut of the cloud, Ethernet became the de facto standard in every cloud for the last 2 decades. If you look at the debut of the back-end networks, as Hock articulated, there was, 2 years ago, a big fight about what protocol should be used to achieve the latency and the scale necessary on scale-out.
And the industry at the time, 24 months ago, was not clear. We were clear. We were very clear actually about what the answer should be. And again, because of the deep engagements with our partners, they made it very clear to all of us and the industry, GPU or XPU, that Ethernet is the scale-out of choice, checkmark.
Today, everyone is talking about scaling out with Ethernet. Now when it comes to scale-up — yes, exactly like what happened 3, 4 years ago, the question now on scale-up is, what's the right answer for this? And what we're hearing consistently and what we're seeing is the right answer is Ethernet. And as you know, last year, we announced with multiple hyperscalers and many of our peers in the semiconductor industry that Ethernet scale-up is the right choice. That's what we believe will happen. Time will tell, but for a lot of the XPU designs we're doing, we're being asked to scale up through Ethernet, and we're happy to enable that.
#317
#318
context#319
context#320
operational#321
context#322
strategy#323
strategy#324
strategy#325
context#326
strategy#327
strategy#328
operational#329
commitment#330
strategy#331
Q&A 10/11
OP
Operatoroperator
And our next question that will come from the line of Jim Schneider with Goldman Sachs.
JS
James SchneideranalystGoldman Sachs
Hock, it was helpful to hear you discuss the progress of your other full custom XPU engagements outside of TPUs. As we look into next year, is it fair to assume that those are mostly targeting inference applications or not? And then could you maybe qualitatively speak to the performance or cost advantages relative to GPUs that are giving those customers the ability to forecast at such a large scale?
observation#332
question#333
question#334
HT
Hock TanCEOBroadcom Inc.
Thanks. Most of our customers begin with inference simply because that tends to be the easiest path to start on — not necessarily for any reason other than the fact that, when you do inference, it's less compute. But then the question is, do you need these general-purpose, massive dense-matrix-multiplication GPUs when you can do the job more efficiently and effectively with custom inference silicon — XPUs that do it better, or just as well, at much cheaper cost and lower power? And that's what we find these customers starting with. But they are now in training, and many of our XPUs are used both in training as well as inference. And by the way, they are interchangeable. Just as a GPU can be used not just for training, which it is perhaps better suited to, but also for inference, what we're seeing is our XPUs are used for both. And we are seeing that going on, but we're also seeing, very rapidly, for those customers who are much more mature in the progression I talked about — in their journey towards complete XPU — that they will start to develop 2 chips each year simultaneously, one for training and one for inference, to be specialized.
Why? Because what we're seeing very clearly for these LLM players is, you do the training to achieve a higher level of intelligence, of smarts, for your LLM. So great, you get yourself a great LLM, state-of-the-art or more. Now you've got to productize it, which means inference. Well, you can't just wait until your model is the best and then decide to do your inference productization — it'll take you a year at least to productize, at which time somebody else is going to create an LLM better than yours. So there's a leap of faith here: when you do training to create the next level of super intelligence in your LLM, you have to be investing simultaneously in inference, both in terms of the chip and the capacity. So our visibility is really coming out better and better as we find those 6 customers getting more mature in their progression towards better and better LLMs.
So yes, it's — that is the trend we are seeing. It's not happening to all our 6 customers yet, but we are seeing a majority of them headed in that way right now.
#335
context#336
competitive#337
competitive#338
opportunity#339
operational#340
strategy#341
strategy#342
strategy#343
context#344
#345
context#346
strategy#347
context#348
risk#349
context#350
strategy#351
context#352
context#353
Q&A 11/11
OP
Operatoroperator
One moment for our next question, and that will come from the line of Joshua Buchalter with TD Cowen.
JB
Joshua BuchalteranalystTD Cowen
Congrats on the results. Appreciate all the details on the expectations for deployments at specific customers. I was hoping you could just maybe reflect on how visibility has changed over the last 1 to 2 quarters that gave you the confidence to give us more details. And then on a specific one, you mentioned greater than 1 gigawatt for OpenAI in 2027. With that deal being for 10 gigawatts through 2029, that implies a pretty sharp inflection, I guess, in 2028. Is that the right way to think about it? And was that sort of always the plan?
sentiment#354
sentiment#355
question#356
observation#357
estimate#358
question#359
question#360
HT
Hock TanCEOBroadcom Inc.
Yes. Well, as you've all seen and we all know, in this generative AI race that we are in now — and I shouldn't use the word race, let's call it a progression among the few players we see here — I mean, it's a competition. Each is trying to create an LLM better than the other and more tailored for specific purposes, be they enterprise, be they consumer, be they search. And all of that requires not just training, which is important to keep improving your LLM models, but inference for productization and monetization of your LLMs. And probably — call it the fact that we've been engaged with some of them now for more than a couple of years — we're getting better and better visibility as they have more and more confidence that the XPUs they are working on with us are achieving what they're getting at. As they get a sense that this XPU silicon, with the software, with the algorithms they need, is what they need — and it works, and it gets better and better — we get more visibility, as Charlie put it perfectly. Because at the end of the day, we only have 6 guys to work on.
And these 6 guys all, as I said, look at XPUs and AI in a very strategic manner. They don't think one generation at a time. They think multiple generations, multiple years. And in spite of all the hubris and noise out there on what's available, they think very long term about how they deploy the XPUs they develop with us, how they deploy them in achieving the better and better LLMs they want to create. And more than that, how they deploy them in monetizing.
So we are part of their strategic road map. We are not just in the optionality of, oh, shall I use a GPU, shall I use it in the cloud because I need to train for 6 months? No, this is more than that. The investment these guys are making is long term, and it's great to be part of that long-term road map as opposed to a transactional road map.
And the noise, as I answered in an earlier question, is that there's a lot of noise that mixes up short-term transactions with what is long-term strategic positioning of our business and our product. And to sum it all up, I think our business in XPUs is a strategic, sustainable play for all the 6 customers we have today.
#361
context#362
context#363
context#364
strategy#365
strategy#366
attribution#367
metric#368
strategy#369
strategy#370
strategy#371
context#372
context#373
context#374
strategy#375
strategy#376
strategy#377
strategy#378
strategy#379
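The implied ramp in Joshua's question can be spelled out in one line of arithmetic. Both inputs are the analyst's framing (a ~10-gigawatt OpenAI deal through 2029, of which greater than 1 gigawatt ships in 2027), not figures management confirmed:

```python
# Analyst's implied-ramp arithmetic, using his stated assumptions.
deal_total_gw = 10   # "10 gigawatts through 2029"
gw_2027 = 1          # "greater than 1 gigawatt" in 2027 (floor)
remaining = deal_total_gw - gw_2027
print(remaining)     # gigawatts left for 2028-2029 -> the "sharp inflection"
```

At most roughly 9 of the 10 gigawatts would land in 2028-2029 under this reading, which is what makes the analyst call 2028 a sharp inflection.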
Closing Remarks
OP
Operatoroperator
That is all the time we have for Q&A today. I would now like to turn the call back over to Ji Yoo for any closing remarks.
JY
Ji YooirBroadcom Inc.
Thank you, Sherry.
Broadcom currently plans to report its earnings for the second quarter of fiscal year 2026 after the close of market on Wednesday, June 3, 2026. A public webcast of Broadcom's earnings conference call will follow at 2:00 p.m. Pacific. That will conclude our earnings call today. Thank you all for joining. Sherry, you may end the call.
#380
#381
#382
#383
#384
#385
#386
Operator Sign-off
OP
Operatoroperator
This concludes today's program. Thank you all for participating. You may now disconnect.