Q3 FY2025 Earnings Call
AVGO · Preprocessing Report
2025-09-04
Quality
100%
59
Turns
17
Speakers
5
Sections
13
Exchanges
540
Claims
Quality issues

Entities by group 50

company executives 2
Hock TanpersonTom O'Malleyperson
sell-side analysts 11
Kirsten SpearspersonVivek AryapersonBen ReitzespersonStacy RasgonpersonRoss SeymorepersonJoe MoorepersonChristopher RollandpersonCarl AckermanpersonJim SchneiderpersonCJ MusepersonJoshua Buchhalterperson
AI accelerators 2
XPUproductTPUproduct
sell-side research 1
Bernstein Researchcompany
interconnect standards 2
UA linktechnologyPCIetechnology
investment banks 3
Bank of AmericacompanyGoldman SachscompanyJPMorgancompany
networking switch silicon 1
Tomahawkproduct
AI models 1
generative AItechnology
AI infrastructure 1
compute clusterstechnology
network access 1
PONtechnology
AI networking switch architectures 1
XGETtechnology
administration tools 1
PowerShelltechnology
virtualization platforms 1
vSpheretechnology
high-speed signaling 1
SerDestechnology
Ungrouped 21
BroadcomcompanyVMwarecompanyEthernettechnologyHarlan SurpersonJericho 4productHarsh KumarpersonGPUtechnologyAItechnologyNVLinktechnologyJericho 3productLLMtechnologyHyperscalerscompanyhyperscale customersotherDOCSIS 4technologyNVIDIAcompanyAT&TcompanyVerizoncompanyBCFtechnologyTCOotherAltrosproductInfiniBandtechnology
REPORTING 91PROJECTING 36POSITIONING 147EXPLANATORY 49ANALYST 110

Topics 89

xpu×39customer×36ethernet×33revenue×22networking×18semiconductor×15vcf×14ai×12backlog×11switch×11chip×11broadband×9order×8growth×8network×7software×7gross margin×7business×6prospect×6private cloud×6

Themes 264

ai×37non-ai×20llm×8growth×7guidance×7deployment×7enterprise×6competition×6revenue growth×5operating×5vcf adoption×5share gains×4scaling×4additional prospects×4demand×3revenue mix×3gross×3adjusted×3customer concentration×3customer selection×3large accounts×3longer-tail opportunity×3open source×3hyperscaler adoption×3customer optionality×3record quarterly growth×2customer pipeline×2computer×2latency reduction×2forecast×2flat sequentially×2infrastructure software×2vmware×2expense level×2q4 revenue×2v-shaped recovery×2recovery×2improving trends×2custom mix×2fiscal 2026 outlook×2breakdown×2capacity×2customer count×2qualification×2multi-site clustering×2hyperscale×2vcf stack adoption×2customer adoption×2value creation×2commoditization×2outlook×2custom ai accelerator×2delivery timing×2timing×2market leadership×2sub-250 nanoseconds×2customer share vs gpu×2investor presentation×2record level×1mix contribution×1customer demand×1production orders×1qualified customer×1scale-up challenge×1compute node capacity×1network capacity×1product launch×1network flattening×1across data centers×1jericho 3 deployment×1jericho 4 launch×1bandwidth capacity×1generative ai clusters×1scale up and out×1sequential growth×1end market improvement×1segment discussion×1bookings×1version 9.0 milestone×1engineering milestone×1cloud foundation release×1cloud foundation platform×1public cloud alternative×1strength driving guidance×1margin guidance×1record consolidated performance×1record consolidated profitability×1segment growth×1r&d spending×1free×1days sales outstanding×1cash and inventory build×1days inventory on hand×1cash and debt position×1fixed-rate debt terms×1floating-rate×1cash payout×1diluted×1consolidated guidance×1semiconductor guidance×1mix impact×1xpu discussion×1backlog-driven growth×1future shipments×1immediate demand×1strong execution×1investor focus×1broadband server storage enterprise×1docsis 4 upgrades×1pon upgrades×1upgrades and spending acceleration×1cyclical upturn 
magnitude×1year-over-year growth×1wireless and server storage seasonality×1consistent growth×1cyclical uptrend×1source of hope×1downturn impact×1lead times×1prior trends×1year over year growth×1fiscal 2025 rate×1year on year×1fiscal 2025 comparison×1acceleration×1fiscal 2026 acceleration×1fiscal 2026 improvement×1networking versus custom×1customer expansion×1customer gains×1share of pool×1share decline into 2026×1verification×1composition×1duration×1company scale×1steady growth×1business growth×1semiconductor mix×1production addition×1remaining pipeline×1long-term trend×1momentum outlook×1questions on ai prospects×1million-units goal×1current customers×1customer selectivity×1broader basis×1custom-chip prospects×1market segmentation×1market classification×1workload definition×1market segment×1market coverage×1market concentration×1training clusters×1customer qualification×1identified customers and prospects×1concentration×1strong operating performance×1product commentary×1scale-across architecture×1product discussion×1market development×1inferencing revenue uplift×1company framing×1compute within rack×1spanning racks×1gpu count target×1power constraint×1data center footprint×1power availability×1land availability×1100-kilometer multi-site approach×1multi-site xpus and gpus×1single-cluster operation across locations×1telecom network use×1hyperscaler shipments×1hyperscaler product line×1product maturity×1technology requirements×1vcf stack conversion×1semiconductor benefits from vmware adoption×1on-premises deployment×1vcf expansion×1vcf deployment×1vcf strategy×1second phase strategy×1license conversion to vcf×1services and ai expansion×1services and ai opportunity×1data center commoditization×1investment costs×1total cost of ownership×1wireless and xpu mix×1target margin×1drivers of decline×1wireless growth×1wireless outlook×1seasonal quarter×1wireless and tpu×1software×1mix categorization×1time frame×1segmentation×1scale-up versus ua link and pcie 
solutions×1lower-latency product significance×1scale-up opportunity in ai networking×1disaggregated from ai accelerators×1separate from ai accelerators×1separate from xpu×1for xpu customers×1ethernet interfaces×1customer enablement×1networking protocol×1support sourcing×1vendor substitution×1gpu systems adoption×1scale-out networking architecture×1hyperscaler scale-out networking×1disaggregated scaling across gpus×1ethernet connectivity×1vendor competition×1product displacement×1surprised×1appreciated×1management view×1obvious×1well proven×1hyperscaler preference×1hyperscaler focus×1protocols×1usage×1reliability×1networking concern×1latency-driven adoption×1ethernet switches×1latency optimization×1performance comparison×1latency improvement completed×1improvement×1long-term experience×1new protocol development×1open standard competition×1ethernet compatibility×1interface standardization×1customer choice×1ongoing competition×1competitive investment and innovation×1model development×1on silicon×1design development×1development investment×1metrics×1market share gains vs gpu×1adoption cycle×1product evolution×1custom design iterations×1rising usage with newer generations×1higher utilization across customers×1greater share of compute footprint×1gaining share over time×1quarterly report×1conference call×1

Key Metrics 52

revenue×47backlog×12gross margin×10orders×9customer count×7customers×7revenue growth×6share×6adjusted ebitda×5switch latency×5terabit per second×4growth×4production×4adoption rate×4bookings×3operating expenses×3operating margin×3demand×2compute nodes×2free cash flow×2debt×2mix×2percentage×2gpu count×2capital deployment×2market share×2usage×2earnings×2revenue share×1bandwidth×1latency×1throughput×1contract value×1research and development expense×1operating income×1capital expenditures×1days sales outstanding×1inventory×1days of inventory on hand×1cash and gross principal debt×1dividend×1share count×1tax rate×1spending×1lead time×1revenue mix×1units×1count×1xpu count×1connections×1total cost of ownership×1margin×1

Entities 865

Broadcom×379Hock Tan×188VMware×38Ethernet×35XPU×33Kirsten Spears×32Vivek Arya×19Harlan Sur×14Ben Reitzes×12Jericho 4×10Harsh Kumar×10GPU×8AI×8Stacy Rasgon×6Bernstein Research×6Ross Seymore×5Joe Moore×5Christopher Rolland×5Carl Ackerman×4NVLink×3Jericho 3×3Jim Schneider×3CJ Muse×3TPU×3Joshua Buchhalter×3Tomahawk×2LLM×2UA link×2PCIe×2Hyperscalers×2hyperscale customers×1generative AI×1compute clusters×1DOCSIS 4×1PON×1Bank of America×1NVIDIA×1XGET×1Tom O'Malley×1PowerShell×1AT&T×1Verizon×1vSphere×1BCF×1TCO×1Altros×1InfiniBand×1SerDes×1Goldman Sachs×1JPMorgan×1

Business Segments 336

Semiconductor Solutions×275Infrastructure Software×61

Sectors 326

semiconductor×167artificial intelligence×46cloud computing×43networking equipment×17data center×17wireless communications×9broadband×8enterprise software×6virtualization×4enterprise networking×3server storage×3streaming media×1cable×1cybersecurity×1

Regions 7

US×2China×1Asia×1San Francisco×1London×1Pacific×1

Metadata Distributions

Sentiment
positive 101negative 33neutral 299
Temporality
backward 75forward 64current 294
Certainty
definitive 95confident 129moderate 141tentative 66speculative 2
Magnitude
major 76moderate 195minor 162
Direction
improvement 47decline 16flat 5mixed 8none 357
Time Horizon
immediate 78near_term 147medium_term 39long_term 10unspecified 159
Verifiability
quantitative 135event 22qualitative 276
Analyst Intent
probing 21challenging 3confirming 25seeking_detail 51seeking_guidance 10

Speakers

Executives
HTHock TanCEOKSKirsten SpearsCFO
Analysts
BRBen ReitzesanalystCMCJ MuseanalystCACarl AckermananalystCRChristopher RollandanalystHSHarlan SuranalystHKHarsh KumaranalystJSJim SchneideranalystJMJoe MooreanalystJBJoshua BuchhalteranalystRSRoss SeymoreanalystSRStacy RasgonanalystTOTom O'MalleyanalystVAVivek Aryaanalyst
Other
JYJi YooirOPOperatoroperator

Sections

TypeLabelSpeaker
preamblePreambleJi Yoo
prepared_remarksPrepared RemarksHock Tan, Kirsten Spears
qa_sessionQ&A Session
closing_remarksClosing RemarksJi Yoo, Harsh Kumar
operator_signoffOperator Sign-offOperator

Q&A Exchanges 13

#AnalystFirmTurns
1
RSRoss Seymore
Deutsche Bank3
2
HSHarlan Sur
JPMorgan5
3
VAVivek Arya
Bank of America2
4
SRStacy Rasgon
Bernstein Research6
5
BRBen Reitzes
Melius6
6
JSJim Schneider
Goldman Sachs3
7
TOTom O'Malley
Barclays3
8
CACarl Ackerman
BNP Paribas4
9
CMCJ Muse
Cantor Fitzgerald5
10
JMJoe Moore
Morgan Stanley5
11
JBJoshua Buchhalter
TD Cowen4
12
CRChristopher Rolland
Susquehanna4
13
HKHarsh Kumar
Piper Sandler2

Claim Taxonomy 433

REPORTING91
resultFinancial outcome for a completed period51
metricNon-financial quantitative fact23
operationalDiscrete completed event17
PROJECTING36
guidanceQuantitative expectation with number + time22
commitmentPromise with binary verifiable outcome11
targetLong-term aspirational quantitative goal3
POSITIONING147
strategyPriority, direction, or initiative110
competitiveCompany's position or advantages7
opportunityMarket condition framed as growth driver7
riskHeadwind, constraint, or uncertainty23
EXPLANATORY49
attributionWhy a specific outcome happened2
contextNon-company macro/industry fact47
FRAMING0
thesisFalsifiable belief about how the world works0
ANALYST110
questionInterrogative seeking information41
observationRestates a fact or data point51
concernFlags a risk or challenge2
estimateAnalyst's own projection or calculation13
sentimentOpinion, praise, or critique3

Transcript

Preamble
JY
Ji YooirBroadcom
Welcome to Broadcom Inc.'s third quarter fiscal year 2025 financial results conference call. At this time, for opening remarks and introductions, I would like to turn the call over to Ji Yoo, Head of Investor Relations of Broadcom Inc. Please go ahead. Thank you, Sherry, and good afternoon, everyone. Joining me on today's call are Hock Tan, President and CEO, Kirsten Spears, Chief Financial Officer, and Charlie Kawwas, President, Semiconductor Solutions Group. Broadcom distributed a press release and financial tables after the market closed describing our financial performance for the third quarter of fiscal year 2025. If you did not receive a copy, you may obtain the information from the investors section of Broadcom's website at broadcom.com. This conference call is being webcast live, and an audio replay of the call can be accessed for one year through the Investors section of Broadcom's website. During the prepared comments, Hock and Kirsten will be providing details of our third quarter fiscal year 2025 results, guidance for our fourth quarter of fiscal year 2025, as well as commentary regarding the business environment. We will take questions after the end of our prepared comments.
Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call. In addition to U.S. GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today's press release. Comments made during today's call will primarily refer to our non-GAAP financial results. I will now turn the call over to Hock.
Prepared Remarks
HT
Hock TanCEOBroadcom
Thank you, Ji. And thank you everyone for joining us today.
In our fiscal Q3 2025, total revenue was a record $16 billion, up 22% year on year. Revenue growth was driven by better than expected strength in AI semiconductors and by continued growth in VMware. Q3 consolidated adjusted EBITDA was a record $10.7 billion, up 30% year on year. Now, looking beyond what we are reporting this quarter: with robust demand from AI, bookings were extremely strong, and our current consolidated backlog for the company hit a record $110 billion.
Q3 semiconductor revenue was $9.2 billion, as year on year growth accelerated to 26%. This accelerated growth was driven by AI semiconductor revenue of $5.2 billion, which was up 63% year on year, extending the trajectory of robust growth to 10 consecutive quarters. Now let me give you more color on our XPU business, which accelerated to 65% of our AI revenue this quarter.
Demand for custom AI accelerators from our three customers continued to grow as each of them journeys at their own pace towards compute self-sufficiency. And progressively, we continue to gain share with these customers. Now, further to these three customers, as we have previously mentioned, we have been working with other prospects on their own AI accelerators.
Last quarter, one of these prospects released production orders to Broadcom, and we have accordingly characterized them as a qualified customer for XPUs. This customer has, in fact, secured over $10 billion of orders for AI racks based on our XPUs. Reflecting this, we now expect the outlook for fiscal 2026 AI revenue to improve significantly from what we had indicated last quarter.
Turning to AI networking, demand continued to be strong, because networking is becoming critical as LLMs continue to evolve in intelligence and compute clusters have to grow bigger. The network is the computer. And our customers are facing challenges as they scale to clusters beyond 100,000 compute nodes. For instance, scale-up, which we all know about, is a difficult challenge when you are trying to create substantial bandwidth to share memory directly across multiple GPUs or XPUs. Today's AI rack scales up a mere 72 GPUs at 28.8 terabit per second bandwidth using proprietary NVLink. On the other hand, earlier this year we launched Tomahawk 5, an open Ethernet switch, which can scale up 512 compute nodes for customers using XPUs. Moving on to scaling out across racks: the current architecture, using 51.2 terabit per second switches, requires three tiers of networking switches. In June, we launched Tomahawk 6, our Ethernet-based 102.4 terabit per second switch, which flattens the network to two tiers, resulting in lower latency and much less power. And when you scale to clusters beyond a single data center footprint, you now need to scale computing across data centers. Over the past two years, we have deployed our Jericho 3 Ethernet router with hyperscale customers to do just this. And today, we have launched our next generation Jericho 4 Ethernet fabric router, with 51.2 terabit per second capacity, deep buffering, and intelligent congestion control, to handle clusters beyond 200,000 compute nodes crossing multiple data centers. We know the biggest challenge to deploying larger clusters of compute for generative AI will be in networking. And the Ethernet networking Broadcom has developed over the past twenty years is entirely applicable to the challenges of scale-up, scale-out, and scale-across in generative AI. Turning to our forecast: as I mentioned earlier, we continue to make steady progress in growing our AI revenue. For Q4 2025, we forecast AI semiconductor revenue to be approximately $6.2 billion, up 66% year on year.
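The tier-flattening claim can be sanity-checked with standard folded-Clos capacity arithmetic. This is an illustrative sketch, not from the call, and the per-port speed and port counts below are assumptions chosen to make the arithmetic concrete:

```python
def clos_hosts(radix: int, tiers: int) -> int:
    # Folded-Clos capacity: each switch splits its radix between the
    # tier below and the tier above, so a 2-tier leaf-spine of k-port
    # switches reaches k^2/2 hosts and a 3-tier fat tree k^3/4.
    return radix ** tiers // 2 ** (tiers - 1)

# Assumed 200G per port: a 51.2 Tb/s switch exposes 256 ports and a
# 102.4 Tb/s switch 512 ports (port counts are illustrative).
assert clos_hosts(256, 2) == 32_768     # 2 tiers at 51.2 Tb/s
assert clos_hosts(512, 2) == 131_072    # 2 tiers at 102.4 Tb/s
assert clos_hosts(256, 3) == 4_194_304  # 3 tiers at 51.2 Tb/s

# A ~100,000-node cluster that needs three tiers of 51.2 Tb/s switches
# fits within two tiers of 102.4 Tb/s switches, removing one hop.
```

Because two-tier capacity grows with the square of switch radix, doubling per-switch bandwidth is what lets the same cluster size drop from three tiers to two, which is the latency and power argument made above.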
Now, turning to non-AI semiconductors, demand continues to be slow to recover.
In Q3, revenue of $4 billion was flat sequentially. While broadband showed strong sequential growth, enterprise networking and server storage were down sequentially. Wireless and industrial were flat quarter on quarter, as we expected. In contrast, in Q4, driven by seasonality, we forecast non-AI semiconductor revenue to grow low double digits sequentially to approximately $4.6 billion. Broadband, server storage, and wireless are expected to improve, while enterprise networking remains down quarter on quarter. Now let me talk about our infrastructure software segment.
Q3 Infrastructure Software revenue of $6.8 billion was up 17% year on year, above our outlook of $6.7 billion, as bookings continued to be strong during the quarter. We booked, in fact, total contract value of over $8.4 billion during Q3. But here's the one I'm most excited about. After two years of engineering development by over 5,000 developers, we delivered on a promise we made when we acquired VMware: we released VMware Cloud Foundation version 9.0, a fully integrated cloud platform which can be deployed by enterprise customers on-prem or carried to the cloud. It enables enterprises to run any application workload, including AI workloads, on virtual machines and on modern containers. This provides a real alternative to the public cloud.
In Q4, we expect Infrastructure Software revenue to be approximately $6.7 billion, up 15% year on year. In summary, continued strength in AI and VMware would drive our guidance for Q4 consolidated revenue of approximately $17.4 billion, up 24% year on year. And we expect Q4 adjusted EBITDA to be 67% of revenue. With that, let me turn the call over to Kirsten.
#1
#2
result#3
attribution#4
result#5
commitment#6
metric#7
result#8
result#9
metric#10
result#11
opportunity#12
strategy#13
strategy#14
operational#15
strategy#16
result#17
guidance#18
strategy#19
metric#20
context#21
context#22
metric#23
risk#24
metric#25
#26
operational#27
metric#28
strategy#29
metric#30
operational#31
strategy#32
strategy#33
context#34
operational#35
commitment#36
metric#37
context#38
strategy#39
strategy#40
guidance#41
strategy#42
strategy#43
result#44
result#45
result#46
result#47
result#48
guidance#49
opportunity#50
risk#51
commitment#52
result#53
result#54
result#55
operational#56
operational#57
strategy#58
operational#59
operational#60
strategy#61
competitive#62
KS
Kirsten SpearsCFOBroadcom
Thank you, Hock.
Let me now provide additional detail on our Q3 financial performance. Consolidated revenue was a record $16 billion for the quarter, up 22% from a year ago. Gross margin was 78.4% of revenue in the quarter, better than we originally guided, on higher software revenues and product mix within semiconductors. Consolidated operating expenses were $2 billion, of which $1.5 billion was research and development. Q3 operating income was a record $10.5 billion, up 32% from a year ago. On a sequential basis, even as gross margin was down 100 basis points on revenue mix, operating margin increased 20 basis points sequentially to 65.5% on operating leverage. Adjusted EBITDA of $10.7 billion, or 67% of revenue, was above our guidance of 66%.
This figure excludes $142 million of depreciation. Now, a review of the P&L for our two segments, starting with semiconductors.
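Before the segment review, the consolidated figures above can be cross-checked with a quick arithmetic sketch. This is illustrative only, using the rounded numbers as stated; the small residuals reflect rounding in the reported figures:

```python
# Q3 FY2025 consolidated P&L cross-check, in $B, rounded as reported.
revenue = 16.0
gross_margin = 0.784   # 78.4% of revenue
opex = 2.0             # of which $1.5B was R&D

gross_profit = revenue * gross_margin          # ~12.54
operating_income = gross_profit - opex         # ~10.54 vs. reported 10.5
operating_margin = operating_income / revenue  # ~65.9% vs. reported 65.5%

assert abs(operating_income - 10.5) < 0.1
assert abs(operating_margin - 0.655) < 0.005
```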
Revenue for our Semiconductor Solutions segment was $9.2 billion, with growth accelerating to 26% year on year, driven by AI. Semiconductor revenue represented 57% of total revenue in the quarter. Gross margin for our Semiconductor Solutions segment was approximately 67%, down 30 basis points year on year on product mix. Operating expenses increased 9% year on year to $951 million on increased investment in R&D for leading-edge AI semiconductors. Semiconductor operating margin of 57% was up 130 basis points year on year and flat sequentially.
Now moving on to infrastructure software. Revenue for infrastructure software of $6.8 billion was up 17% year on year and represented 43% of revenue. Gross margin for infrastructure software was 93% in the quarter, compared to 90% a year ago. Operating expenses were $1.1 billion in the quarter, resulting in infrastructure software operating margin of approximately 77%. This compares to operating margin of 67% a year ago, reflecting the completion of the integration of VMware.
Moving on to cash flow. Free cash flow in the quarter was $7 billion and represented 44% of revenue. We spent $142 million on capital expenditures. Days sales outstanding were 37 days in the third quarter, compared to 32 days a year ago. We ended the third quarter with inventory of $2.2 billion, up 8% sequentially, in anticipation of revenue growth next quarter. Our days of inventory on hand were 66 days in Q3, compared to 69 days in Q2, as we continue to remain disciplined in how we manage inventory across the ecosystem. We ended the third quarter with $10.7 billion of cash and $66.3 billion of gross principal debt.
The weighted average coupon rate and years to maturity of our $65.8 billion in fixed rate debt are 3.9% and 6.9 years, respectively. The weighted average interest rate and years to maturity of our $500 million of floating rate debt are 4.7% and 0.2 years, respectively. Turning to capital allocation. In Q3, we paid stockholders $2.8 billion of cash dividends, based on a quarterly common stock cash dividend of $0.59 per share. We expect the non-GAAP diluted share count in Q4 to be approximately 4.97 billion shares, excluding the potential impact of any share repurchases.
Now moving to guidance. Our guidance for Q4 is for consolidated revenue of $17.4 billion, up 24% year on year. We forecast semiconductor revenue of approximately $10.7 billion, up 30% year on year. Within this, we expect Q4 AI semiconductor revenue of $6.2 billion, up 66% year on year. We expect infrastructure software revenue of approximately $6.7 billion, up 15% year on year. For your modeling purposes, we expect Q4 consolidated gross margin to be down approximately 70 basis points sequentially, primarily reflecting a higher mix of XPUs and also wireless revenue. As a reminder, consolidated gross margins through the year will be impacted by the revenue mix of infrastructure software and product mix within semiconductors. We expect Q4 adjusted EBITDA to be 67% of revenue. We expect the non-GAAP tax rate for Q4 and fiscal year 2025 to remain at 14%. I will now pass the call back to Hock for some more exciting news.
guidance#63
guidance#64
attribution#65
guidance#66
#67
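The Q4 guidance in Kirsten's remarks can be cross-checked the same way. A sketch on the stated rounded figures, with the year-ago quarters back-calculated from the stated growth rates:

```python
# Q4 FY2025 guidance cross-check, in $B, as stated above.
semis, software, consolidated = 10.7, 6.7, 17.4

# Segment guidance sums to the consolidated guide.
assert abs(semis + software - consolidated) < 0.05

# Implied year-ago quarters from the stated growth rates:
# semis up 30%, software up 15%, consolidated up 24% y/y.
prior = semis / 1.30 + software / 1.15  # ~8.23 + ~5.83 = ~14.06
assert abs(prior - consolidated / 1.24) < 0.1  # ~14.03, consistent
```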
Q&A Session
Q&A 1/13
OP
Operatoroperator
Thank you. As a reminder, that is star one one. Due to time constraints, we ask that you please limit yourself to one question. And our first question will come from the line of Ross Seymore with Deutsche Bank. Your line is open.
RS
Ross SeymoreanalystDeutsche Bank
Hi, guys. Thanks for letting me ask a question. Hock, thank you for sticking around for a few more years. So I just wanted to talk about the AI business and, specifically, the XPUs. When you say you're gonna grow significantly faster than what you had thought a quarter ago, what's changed? Is it just the impressive prospect moving to a customer definition, so that $10 billion backlog that you mentioned? Or is it stronger demand across the existing three customers? Any detail on that would be helpful.
#68
#69
result#70
result#71
result#72
result#73
result#74
result#75
result#76
result#77
HT
Hock TanCEOBroadcom
I think it's both, Ross. But to a large extent, it's the fourth customer that we now add on to our roster, which we will ship to pretty strongly in 2026, I should say. So it's a combination of increasing volumes from the existing three customers, which we move through very progressively and steadily, and the addition of a fourth customer with immediate and fairly substantial demand. That really changes our thinking of what '26 is starting to look like. Thank you.
result#78
#79
#80
result#81
result#82
result#83
result#84
Q&A 2/13
OP
Operatoroperator
One moment for our next question. That will come from the line of Harlan Sur with JPMorgan. Your line is open.
HS
Harlan SuranalystJPMorgan
Hi. Good afternoon. Congratulations on a well executed quarter and strong free cash flow. I know everybody's gonna ask a lot of questions on AI, Hock, so I'm gonna ask about the non-AI semiconductors. If I look at your guidance for Q4, it looks like the non-AI semi business is gonna be down about 7% to 8% year over year in fiscal 2025 if you hit the midpoint of the Q4 guidance. The good news is that the negative year over year trends have been improving through the year; in fact, I think you guys are gonna be positive year over year in the fourth quarter. You've characterized it as relatively close to the cyclical bottom, relatively slow to recover.
However, we have seen some green shoots of positivity, right? Broadband, server storage, enterprise networking. You're still driving the DOCSIS 4 upgrade in broadband cable, you've got next gen PON upgrades in China and the US in front of you, and enterprise spending on network upgrades is accelerating. So near term, from the cyclical bottom, how should we think about the magnitude of the cyclical upturn? And given your thirty to forty week lead times, are you seeing continued order improvements in the non-AI segment, which would point to continued cyclical recovery into next fiscal year?
#113
#114
#115
observation#116
question#117
observation#118
question#119
question#120
strategy#121
operational#122
opportunity#123
strategy#124
strategy#125
target#126
HT
Hock TanCEOBroadcom
Well, you know, if you take a look at that non-AI segment, I mean, you're right.
From a year on year Q4 guidance standpoint, we are actually up, as you say, slightly, a couple, one or 2%, from a year ago. And it's not much really to shout about at this point. The big issue is the puts and takes. And the bottom line to all this is, other than the seasonality that we perceive if you look at it short term, not year on year but sequentially, we see it in things like wireless, and we even start to see some seasonality in server storage these days.
It kind of all washes out so far. The only consistent trend we've seen over the last three quarters that is moving up strongly is broadband. Nothing else, if you look at it from a cyclical point of view, seems to be able to sustain an uptrend so far. I don't think it's getting worse; as a whole, they are not getting worse, as you pointed out, Harlan. But they are not showing a v-shaped recovery as a whole, which we would like to see and expect to see in cyclical semiconductor cycles. The only thing that gives us some hope is broadband at this point.
And it is recovering very strongly. But then, it was the business that was most impacted in the sharp downturn of '24 and early '25. So again, one takes that with a grain of salt.
But the best answer for you is that non-AI semiconductors are kind of slow to recover, as I said, and Q4 year on year is up maybe low single digits. That is the best way to describe it at this point. So I'm expecting to see more of a u-shaped recovery in non-AI. And perhaps by mid '26 or late '26, we'll start to see some meaningful recovery. But as of right now, it's not clear.
#127
HS
Harlan SuranalystJPMorgan
Mhmm. Are you starting to see that in your order trends, in your order book, just because your lead times are, like, forty weeks, right?
HT
Hock TanCEOBroadcom
We are. But we've been tricked before. But we are. The bookings are up, and they are up year on year in excess of 20%. Nothing like what AI bookings look like, but 23% is still pretty good, right?
Q&A 3/13
OP
Operatoroperator
Thank you. One moment for our next question. That will come from the line of Vivek Arya with Bank of America. Your line is open.
VA
Vivek AryaanalystBank of America
Thanks for taking my question, and best wishes for the next part of your tenure. My question is on, you know, if you could help us quantify what is the new fiscal 2026 AI guidance. Because I think on the last call you mentioned '26 could grow at the 60% growth rate. So what is the updated number? Is it, you know, 60% plus the $10 billion that you mentioned?
And sort of related to that, do you expect the custom versus networking mix to stay broadly what it has been this past year, or evolve more towards custom? So any quantification on this, you know, networking versus custom mix would be very helpful.
HT
Hock TanCEOBroadcom
Fiscal 2026. Okay. Let's answer the first part first. If I could be so bold as to suggest to you: last quarter, when I said, hey, the trend of growth of '26 will mirror that of '25, which is 50%, 60% year on year, that's really all I said. I didn't quote a band. Of course, it comes out 50%, 60% because that's what '25 is. Put another way, looking at what I'm saying, which is perhaps more accurate, we're seeing the growth rate accelerate, as opposed to just remaining steady at that 50%, 60%. We are expecting and seeing 2026 accelerate more than the growth rate we see in '25. And I know you would love me to throw a number at you, but you know what? We're not supposed to be giving you a forecast for '26, but the best way to describe it is that it will be a fairly material improvement. And the networking versus custom? Ah, good point. Thanks for reminding me. As we see it, a big part of this driver of growth will be XPUs. And the reason, repeating what I said in my remarks, comes from the fact that we continue to gain share at our three original customers. They are on their journey, and with each passing generation they go more to XPUs. So we are gaining share from these three. We now have the benefit of an additional, I would just say fourth and very significant, customer. And that combination will mean more XPUs. And as we create more and more XPUs among the four guys, we get the networking with these four guys, but the mix of networking from outside these four guys will now be diluted, a smaller share. So I expect the networking percentage of the pool to actually be a declining percentage going into '26.
#128
#129
sentiment#130
observation#131
observation#132
observation#133
observation#134
observation#135
observation#136
#137
observation#138
observation#139
observation#140
observation#141
question#142
question#143
strategy#144
guidance#145
guidance#146
result#147
strategy#148
strategy#149
context#150
strategy#151
competitive#152
risk#153
strategy#154
risk#155
strategy#156
competitive#157
strategy#158
risk#159
risk#160
risk#161
Q&A 4/13
OP
Operatoroperator
Thank you, And one moment for our next question. And that will come from the line of Stacy Rasgon with Bernstein Research. Your line is open.
SR
Stacy RasgonanalystBernstein Research
Hi, guys. Thanks for taking my question.
I was wondering if you could help me parse out this $110 billion backlog. Did I hear that number right? Could you give us some color on the makeup of it? Like, how far out does that go, and how much of that $110 billion is AI versus non-AI versus software?
Hock Tan (CEO, Broadcom)
Well, I guess, Stacy, we generally don't break out backlog. I've just given a total number to give you a sense of how strong the business is as a whole for the company. And in terms of growth, it's largely driven by AI. Software continues to add on a steady basis, and non-AI, as I indicated, has grown double digits, but nothing compared to AI, which has grown very strongly. To give you a sense, perhaps fully 50% of it, at least, is semiconductors.
Stacy Rasgon (Analyst, Bernstein Research)
Okay. And is it fair to say that of that semiconductor piece, it's going to be much more AI than non-AI?
Hock Tan (CEO, Broadcom)
Right.
Stacy Rasgon (Analyst, Bernstein Research)
Yeah. Got it. That's helpful. Thank you.
Q&A 5/13
Operator
One moment for our next question. And that will come from the line of Ben Reitzes with Melius. Your line is open.
Ben Reitzes (Analyst, Melius)
Hey, guys. Thanks a lot. I appreciate it. Hock, congrats on being able to guide AI revenue growth well above 60% for next year. So I wanted to be a little greedy and ask you about maybe '27 and the other three customers or so. How is the dialogue going beyond these four customers? In the past, you've talked about having seven. Now we've added a fourth to production, and then there were three.
Are you hearing from others? And how's the trend going with the other three, maybe beyond '26 into '27 and beyond? How do you think that momentum is going to shape up? Thanks so much.
Hock Tan (CEO, Broadcom)
Ben, you are definitely greedy and definitely overthinking this for me. Thank you. But, you know, that's asking for a qualitative judgment, and frankly, I don't want to give that. I'm not comfortable giving that, because sometimes we stumble into production in time frames that are surprisingly unexpected. Equally, it could get delayed. So I'd rather not give you any more color on prospects than to tell you these prospects are real prospects, and we continue to be very closely engaged toward developing each of their own XPUs, with every intent of going into substantial production, like the four custom customers we have today.
Ben Reitzes (Analyst, Melius)
Yeah. You still think that million-unit goal for these seven, though, is still intact?
Hock Tan (CEO, Broadcom)
For the... oh, for the three, I said. Now they are four. For the prospects, no comment; I'm not positioned to judge on that. But for our, now, three, four customers, yes.
Ben Reitzes (Analyst, Melius)
Alright. Thanks a lot. Congrats.
Q&A 6/13
Operator
One moment for our next question. And that will come from the line of Jim Schneider with Goldman Sachs. Your line is open.
Jim Schneider (Analyst, Goldman Sachs)
Good afternoon. Thanks for taking my question.
Hock, I was wondering if you could give us a little bit more color, not necessarily on the prospects you still have in the pipeline, but on how you view the universe of additional prospects beyond the seven customers and prospects you've already identified. Do you still see there being additional prospects that would be worthy of a custom chip? And I know you've been relatively circumspect in terms of the number of customers that are out there and the volume they can provide, and selective in terms of the opportunities you're interested in. So maybe frame for us the additional prospects as you see them beyond the seven. Thank you.
Hock Tan (CEO, Broadcom)
That's a very good question. Let me answer it on a fairly broad basis. As I said before, and perhaps to repeat a bit more: we look at this market in two broad segments. One is simply the parties, the customers, who develop their own LLMs. The rest of the market I consider collectively lumped as enterprise. That is, markets that will run AI workloads for enterprise, whether it's on-prem or GPU/XPU-as-a-service. The enterprise.
We don't address that market, to be honest. That's because it's a hard market for us to address, and we're not set up to address it. We instead address this LLM market, and as I said many times before, it's a very narrow market.
A few players driving frontier models on a very accelerated trend toward superintelligence, to plagiarize the term of someone else, but you know what I mean. These are players who need to invest a lot, initially, in my view, on training: training ever larger clusters of ever more capable accelerators. But also, because they have to be accountable to shareholders, accountable to creating cash flows that can sustain their path, they start to also invest in inference in a massive way to monetize their models. These are the players we work with, players who individually spend a lot of money on a lot of compute capacity. It's just that there are only so few of them. And as I have indicated, we've identified seven, four of which are now our customers. Three continue to be prospects we engage with.
And we're very picky, I should say, and careful about who qualifies under that.
And as I indicated, they have, or are building, a platform, and are investing very much in leading LLM models. We have seven, and I think that's about it. We may see one more, perhaps, as a prospect. But again, we are very thoughtful and careful about even making that qualification. Right now, for sure, we have seven, and for now, that's pretty much what we have.
Q&A 7/13
Operator
Thank you. One moment for our next question. And that will come from the line of Tom O'Malley with Barclays. Your line is open.
Tom O'Malley (Analyst, Barclays)
Congrats on the really good results. I wanted to ask on the Jericho 4 commentary. NVIDIA talked about the XGET switch and is now talking about scale-across. You're talking about Jericho 4. It sounds like this market is really starting to develop. Maybe you could talk about when you see material uplift in revenue there, and why it's important to start thinking about those types of switches as we move more toward inferencing. Thank you, Hock.
Hock Tan (CEO, Broadcom)
Great. Well, thank you for picking that up. Yes, scale-across is a new term now, right? There's scale-up, which is computing within the rack. Scale-out is across racks, but within the data center. But now you get to clusters that are, and I'm not 100% sure where the cutoff is, but say above 100,000 GPUs or XPUs. There, in many cases, because of limitations of power, you don't do one single data center footprint, a single site with over 100,000 of those XPUs in it.
Power may not be easily available. Land may not be. It's cumbersome. So the outcome is that most of our customers, we see, now create multiple data center sites close at hand, not far away, within a range of 100 kilometers or so. They then put homogeneous XPUs or GPUs in these multiple locations, three or four, and network across them so that they behave, in fact, like a single cluster. That's the coolest part. And that technology, which because of the distance requires deep buffering and very intelligent congestion control, is technology that has existed for many years in the likes of the telcos, AT&T and Verizon, doing network routing. Except this is for somewhat trickier workloads, but it's the same. And we've been shipping that to a couple of hyperscalers over the last two years as Jericho 3. As the scale of these clusters and the bandwidth required for AI training expands, we have now launched Jericho 4, at 51 terabits per second, to handle more bandwidth, but it's the same technology we have tested and proven for the last ten, twenty years.
Nothing new. We don't need to create something new for that. It runs on Ethernet. Very proven, very stable, and as I said, for the last two years under Jericho 3, which runs 256 connections to compute nodes. We've been selling it to a couple of our hyperscale customers.
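The deep-buffering requirement described here can be illustrated with a back-of-envelope bandwidth-delay-product calculation. The per-kilometer fiber delay and the distance are illustrative assumptions; the only figure taken from the call is the roughly 51 Tb/s Jericho 4 link rate. This is a sketch, not a description of the actual chip's buffer design.

```python
# Sketch: why links spanning ~100 km between data center sites need deep buffers.
# A lossless link must be able to absorb all the data "in flight" on the wire,
# which is the bandwidth-delay product (link rate x round-trip time).

def bandwidth_delay_product_bytes(link_bps: float, rtt_s: float) -> float:
    """Bytes in flight that the receiving buffer may need to absorb."""
    return link_bps * rtt_s / 8

# Assumption: ~5 microseconds of propagation delay per km of fiber,
# so a 100 km inter-site hop has ~1 ms round-trip time.
rtt = 2 * 100 * 5e-6  # seconds, round trip over 100 km

# Aggregate rate of ~51.2 Tb/s (the "51 terabits per second" from the call).
bdp = bandwidth_delay_product_bytes(51.2e12, rtt)
print(f"~{bdp / 1e9:.1f} GB of in-flight data at full rate")
```

At full aggregate rate this comes out to several gigabytes of potential in-flight data, which is why deep-buffer routing silicon (rather than shallow-buffer data center switches) is the natural fit for the inter-site hop.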
Q&A 8/13
Operator
One moment for our next question. And that will come from the line of Carl Ackerman with BNP Paribas. Your line is open.
Carl Ackerman (Analyst, BNP Paribas)
Hock, have you completely converted your top 10,000 accounts from vSphere to the entire VMware Cloud Foundation virtualization stack? I ask because I think last quarter, 87% of accounts had adopted that, and that's certainly a marked increase versus the less than 10% of those customers who bought the entire suite before the deal.
And I guess, as you address that, what interest level are you seeing from the longer tail of enterprise customers adopting VCF? And are you seeing tangible cross-selling benefits for your merchant semiconductor storage and networking business as those customers adopt VMware? Thank you.
Hock Tan (CEO, Broadcom)
Okay. To answer the first part of your question: yeah, pretty much, well over 90% have bought VCF. Now, I am careful about the choice of words. Because we have sold them on it, and they bought licenses to deploy, it doesn't mean they are fully deployed. Here comes the other part of our work, which is to take these 10,000 customers, or a big chunk of them, who have bought the vision of a private cloud on-prem, and work with them to enable them to deploy and operate it successfully on their own infrastructure. That's the hard work over the next two years that we see happening. And as we do it, we see expansion across their IT footprint on VCF private cloud, running within their data centers. That's the key part of it, and we see that continuing. That's the second phase of my VMware story. The first phase was to convince these people to convert from perpetual licenses to subscription, and in so doing purchase VCF. The second phase now is to make that purchase of VCF create the value they look for in private cloud, on their premises, in their IT data centers. That's what's happening.
And that will sustain for quite a while, because on top of that, we will start selling advanced services: security, disaster recovery, even running AI workloads on it. All that is very exciting. Your second question is whether that enables me to sell more hardware. No. It's quite independent.
In fact, as they virtualize their data centers, we consciously accept the fact that we are commoditizing the underlying hardware in the data center: commoditizing servers, commoditizing storage, commoditizing even networking. And that's fine. By so commoditizing, we're actually reducing the cost of investment in hardware in data centers for enterprises.
Now, beyond the largest 10,000, are we seeing a lot of success? We're seeing some. But again, there are two reasons why we do not expect it to be necessarily as successful. One is that the value, the TCO as they call it, that comes from it will be much less. But the more important thing is the skill set needed, not just to deploy, which you can get services and ourselves to help with, but to keep operating it, might not be something they can take on.
And we shall see. This is an area where we're still learning, and it will be interesting to see. VMware has 300,000 customers. We see the top 10,000 as being people for whom it makes a lot of sense, and who derive a lot of value, in deploying private cloud using VCF. We are now looking at whether the next 20, 30,000 midsized companies see it the same way. Stay tuned. I'll let you know.
Carl Ackerman (Analyst, BNP Paribas)
Very clear. Thank you.
Q&A 9/13
Operator
One moment for our next question. And that will come from the line of CJ Muse with Cantor Fitzgerald. Your line is open.
CJ Muse (Analyst, Cantor Fitzgerald)
Yes, good afternoon. Thanks for taking the question. I was hoping to focus on gross margins. I understand the guide is down 70 bps, particularly with software lower sequentially and greater contributions from wireless and XPU. But to hit that 77.7%, I either have to model semiconductor margins flat, which I would think would be lower, or software gross margins at 95%, up 200 bps. So can you help me better understand the moving parts there, to allow only a 70 bps drop?
Kirsten Spears (CFO, Broadcom)
Yeah. I mean, the TPUs will be going up along with wireless, as I said on the call. And our software revenue will be coming up just a bit as well.
CJ Muse (Analyst, Cantor Fitzgerald)
You mean XPUs? XPUs. Yes.
Kirsten Spears (CFO, Broadcom)
And one moment. Q4 is typically our heaviest quarter of the year for wireless, right? So you have wireless and XPUs, with generally lower margins, right? And then our software revenue coming up.
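The blended-margin arithmetic behind the analyst's question can be sketched as a weighted average. The segment revenues and margins below are purely hypothetical placeholders (chosen only so the output lands near the 77.7% figure discussed), not numbers from the call.

```python
# Sketch of back-of-envelope blended gross margin math.
# segments maps a name to (revenue, gross_margin); all figures hypothetical.

def blended_margin(segments: dict[str, tuple[float, float]]) -> float:
    """Revenue-weighted average gross margin across segments."""
    total_rev = sum(rev for rev, _ in segments.values())
    total_gross_profit = sum(rev * gm for rev, gm in segments.values())
    return total_gross_profit / total_rev

mix = {
    "semiconductors": (11.0, 0.68),  # assumed $B revenue, assumed margin
    "software": (7.0, 0.93),         # assumed $B revenue, assumed margin
}
print(f"blended: {blended_margin(mix):.1%}")  # -> blended: 77.7%
```

The exercise shows why a shift toward lower-margin hardware (wireless, XPUs) drags the consolidated figure down even when each segment's own margin is unchanged.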
Q&A 10/13
Operator
And one moment for our next question. And that will come from the line of Joe Moore with Morgan Stanley. Your line is open.
Joe Moore (Analyst, Morgan Stanley)
Great. Thank you. In terms of the fourth customer, I think you've talked in the past about potential customers four and five being more hyperscale, and six and seven being more like the LLM makers themselves. Can you give us a sense, if you could, to help us categorize that? If not, that's fine. And then the $10 billion of orders, can you give us a time frame on that? Thank you.
Hock Tan (CEO, Broadcom)
Okay. Yeah. No, at the end of the day, all seven do LLMs. Not all of them currently have a huge platform we're talking about, but one could imagine that eventually all of them will have or create a platform. So it's hard to differentiate the two.
But coming to the second part, the delivery of the $10 billion: that will probably be around, I would say, the second half of our fiscal year 2026. To be even more precise, likely Q3 of our fiscal '26.
Joe Moore (Analyst, Morgan Stanley)
Okay. Okay.
Q3 it starts, or... what time frame does it take to deploy $10 billion? It starts and ends in Q3?
Hock Tan (CEO, Broadcom)
Alright. Thank you.
Q&A 11/13
Operator
One moment for our next question. And that will come from the line of Joshua Buchhalter with TD Cowen. Your line is open.
Joshua Buchhalter (Analyst, TD Cowen)
Hey, guys. Thank you for taking my question, and congrats on the results. I was hoping you could provide some comments on momentum for your first scale-up Ethernet, and how it compares with the UALink and PCIe solutions out there. How meaningful is it to have the Tomahawk Ultra product out there with lower latency? And how meaningful do you think the scale-up Ethernet opportunity could be over the next year as we think about your AI networking business? Thank you.
Hock Tan (CEO, Broadcom)
Well, that's a good question. And we ourselves are thinking about that too, because to begin with, our Ethernet solutions are very disaggregated from the AI accelerators. Anybody's.
It's separate. We treat them as separate, even though you're right, the network is a computer. We have always believed that Ethernet is open; anybody should be able to have choices, so we keep it separate from my XPU. But the truth of the matter is, for our customers who use the XPU, we develop and optimize our networking switches and the other components that network XPUs in these clusters hand in hand with it.
In fact, all these XPUs are developed with an interface that handles Ethernet. Very, very much so. So in a way, with the XPUs and our customers, we are openly enabling Ethernet as the networking protocol of choice. Very openly. And it need not be our Ethernet switches; it could be somebody else's Ethernet switches that do it. It just happens that we're the leader in this business, so we get that.
But beyond that, especially when it comes to a closed system of GPUs, we see less of it, except at the hyperscalers, where the hyperscalers are able to architect the GPU clusters very separately from the networking side, especially in scale-out. In which case, to those hyperscalers, we sell a lot of these Ethernet switches that do the scaling out. And we suspect, when it goes to scaling across now, even more Ethernet that is disaggregated from the GPUs will be in place. As far as the XPUs are concerned, for sure, it's all Ethernet.
Joshua Buchhalter (Analyst, TD Cowen)
Thank you.
Q&A 12/13
Operator
One moment for our next question. That will come from the line of Christopher Rolland with Susquehanna. Your line is open.
Christopher Rolland (Analyst, Susquehanna)
Thank you for the question, and congrats on the contract extension, Hock. So my questions are about competition, both on the networking side and the ASIC side. You kind of answered some of that, I think, in the last question. But do you view any competition on the ASIC side, particularly from US or Asian vendors, or do you think that is decreasing? And on the networking side, do you think UALink or PCIe even has a chance of displacing SUE in 2027, when it's expected to ramp? Thanks.
Hock Tan (CEO, Broadcom)
Thank you for embracing SUE. Thank you. I didn't expect that to come up, and I appreciate it.
Well, you know I'm biased, to be honest. But it's so obvious, I can't help being biased. Because Ethernet is well proven. Ethernet is so well known to the engineers, the architects, sitting in all these hyperscalers, developing and designing AI data centers, AI infrastructure.
It's the logical thing for them to use. And they are using it. And they are focusing on it. And the development of a separate, individualized protocol, frankly, you know, it's beyond my imagination why they bother. Ethernet is there. It's been well used. It's proven. It can keep going up.
The only thing people talk about is perhaps latency, especially in scaling up. Hence the emergence of NVLink. And even then, as I indicated, it's not hard for us, and we are not the only ones who can do that; quite a few others in Ethernet can do it in the switches. You can just tweak the switches to make the latency super good. Better than NVLink, better than InfiniBand. Less than 250 nanoseconds. Easily.
And that's what we did. So it's not that hard. And perhaps I can say that because we have been doing it as Ethernet has been around, the last twenty-five years at length. So it's there; there's no need to go and create some cooked-up protocols that you now have to bring people around to. Ethernet is the way to go, and there's plenty of competition too, because it's an open system. So I think Ethernet is the way to go, and for sure, in developing XPUs for our customers, all these XPUs, with the agreement of our customers, are made with an interface compatible with Ethernet, not some fancy other interface that one has to keep chasing as bandwidth increases. And I assure you, we have competition, which is one of the reasons the hyperscalers like Ethernet. It's not just us. They can find somebody else if, for whatever reason, they don't like us, and we're open to that. It's always good to have that. It's an open system, and there are players in that market.
Not a closed system. Switching over to XPU competition: yeah, we hear about competition and all that. It's just that it's an area where we always see competition, and our only way to secure our position is to try to out-invest and out-innovate anybody else in this game.
We have been fortunate to be the first one creating this XPU model of ASICs on silicon. And we have also been fortunate to be probably one of the largest IP developers in semiconductors out there: things like the serializer/deserializer, SerDes; being able to develop the best packaging; being able to design things at very low power. So we just have to keep investing in it, which we do, to outrun the competition in this space. And I believe we're doing a fairly decent job of it at this point.
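The sub-250-nanosecond latency claim above is an architectural point about switch forwarding. A generic (not Broadcom-specific) sketch of one lever involved: cut-through forwarding begins transmitting once the header has arrived, instead of buffering the whole frame first. Frame sizes and link rates below are illustrative assumptions.

```python
# Sketch: serialization delay of store-and-forward vs cut-through switching.
# bytes * 8 bits / (Gbit/s) gives nanoseconds directly.

def store_and_forward_ns(frame_bytes: int, link_gbps: float) -> float:
    """The whole frame must be received before forwarding begins."""
    return frame_bytes * 8 / link_gbps

def cut_through_ns(header_bytes: int, link_gbps: float) -> float:
    """Only enough bytes to make a forwarding decision must arrive."""
    return header_bytes * 8 / link_gbps

# Hypothetical 4 KB frame on a 100 Gb/s port:
print(store_and_forward_ns(4096, 100))  # ~327.7 ns just to receive the frame
print(cut_through_ns(64, 100))          # ~5.1 ns before forwarding can start
```

With full-frame buffering alone costing hundreds of nanoseconds at these frame sizes, a cut-through pipeline is one of the "tweaks" that makes a sub-250 ns port-to-port budget plausible.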
Christopher Rolland (Analyst, Susquehanna)
Very clear. Thanks, Hock. Sure.
Q&A 13/13
Operator
Thank you. And we do have time for one last question. And that will come from the line of Harsh Kumar with Piper Sandler. Your line is open.
Harsh Kumar (Analyst, Piper Sandler)
Hey, guys. Thanks for squeezing me in. Hock, congratulations on all the exciting AI metrics, and thanks for everything you do for Broadcom and for sticking around. My question is: you've got three to four existing customers that are ramping. As the data centers for AI clusters get bigger and bigger, it makes sense to have differentiation, efficiency, etcetera, and therefore the case for XPUs. Why should I not think that your XPU share at these three or four existing customers will be bigger than the GPU share in the longer term? It will be; it's the logical conclusion.
Hock Tan (CEO, Broadcom)
Yeah, you're correct. And we are seeing that, step by step. As I say, it's a journey, a multiyear journey, because it's multigenerational, because these XPUs don't stay still either. We're doing multiple versions, at least two generations, for each of these customers we have. And with each newer generation, they increase the consumption, the usage of the XPUs; as they gain confidence, as the model improves, they deploy it even more. So the logical trend is that XPUs will keep gaining in these few customers of ours. As they are successfully deployed, and their software stack stabilizes, the libraries that sit on these chips stabilize and prove themselves out, they'll have the confidence to keep putting a higher and higher percentage of their compute footprint on their own XPUs. For sure. And we see that. And that's why I say we progressively gain share.
Closing Remarks
Harsh Kumar (Analyst, Piper Sandler)
Thank you, Hock.
Operator
Thank you. I would now like to turn the call back over to Ji Yoo, Head of Investor Relations for any closing remarks.
Ji Yoo (Head of Investor Relations, Broadcom)
Thank you, Sherry.
This quarter, Broadcom will be presenting at the Goldman Sachs Communicopia and Technology Conference on Tuesday, September 9 in San Francisco. And at the JPMorgan US All Stars Conference on Tuesday, September 16 in London.
Broadcom currently plans to report its earnings for the fourth quarter and fiscal year 2025 after close of market on Thursday, December 11, 2025. A public webcast of Broadcom's earnings conference call will follow at 2PM Pacific. That will conclude our earnings call today. Thank you all for joining. Sherry, you may end the call.
Operator Sign-off
Operator
This concludes today's program. Thank you all for participating. You may now disconnect.