Q1 FY2025 Earnings Call
AVGO · Preprocessing Report
2025-03-06
Quality 100% · 56 Turns · 15 Speakers · 5 Sections · 11 Exchanges · 494 Claims

Entities by group 33

semiconductor company 2
Broadcom (company), NVIDIA (company)
executives 2
Hock Tan (person), Kirsten Spears (person)
sell-side analysts 10
Harlan Sur (person), Ross Seymore (person), Stacy Rasgon (person), Vivek Arya (person), Harsh Kumar (person), Christopher Rolland (person), Vijay Rakesh (person), Timothy Arcuri (person), C.J. Muse (person), Ben Reitzes (person)
ai model family 1
Frontier (technology)
enterprise virtualization software 1
VMware (company)
ethernet switch family 2
Tomahawk (product), Jericho (product)
data center virtualization suite 2
VCF (product), VCM (product)
financial institutions 1
JPMorgan Chase (company)
sell-side research 1
Bernstein Research (company)
ai model developers 1
DeepSeek (company)
networking protocol 1
Ethernet (technology)
ai accelerators 1
GPU (technology)
high-speed interfaces 1
SerDes (technology)
ai chip designs 1
ASIC (technology)
interconnect protocol 1
PCI Express (technology)
healthcare services 1
Blue Care (product)
Ungrouped 4
XPU (product), William Stein (person), vSphere (product), United States (other)
REPORTING 81, PROJECTING 47, POSITIONING 168, EXPLANATORY 46, ANALYST 66

Topics 103

xpu×41, ai×27, revenue×23, customer×19, chip×19, accelerator×18, networking×15, semiconductor×13, optimization×13, design win×12, margin×10, tariff×10, ebitda×7, hyperscaler×6, expansion×6, switch×5, earnings×5, mix×5, partner×4, hardware×4

Themes 293

ai×20, semiconductor×8, product mix×7, operating×6, adjusted×5, performance×5, business×4, guidance×4, revenue×4, volume×3, demand×3, frontier models×3, growth×3, vmware×3, virtualization×3, q1 fy2025×3, long-term×3, at scale×3, deployment conversion×3, company outlook×3, greenfield×3, compute versus networking×3, total×2, radix capacity×2, sequential growth×2, flat year on year×2, software growth×2, conversion×2, private ai foundation×2, year over year growth guidance×2, free cash flow×2, partner expansion×2, second-half outlook×2, q2 outlook×2, policy uncertainty×2, compute capacity×2, optimization×2, balancing trade-offs×2, generative ai development×2, sku deployment×2, china exposure×2, identification×2, training and inference×2, mixed cards selection×2, customer count×2, market sizing×2, hyperscaler optimization×2, model usage×2, next generation×2, two-nanometer 3d package×2, acquisition strategy×2, next-generation ai development×1, two-nanometer ai xpu×1, cluster scaling×1, ethernet scaling×1, next-generation networking×1, tomahawk 6 samples×1, hyperscale roadmap×1, serviceable addressable×1, custom ai×1, tape out×1, relative strength×1, collaboration×1, hyperscaler adoption×1, hyperscaler engagement×1, 2027 estimate excludes additional hyperscalers×1, single design point limitations×1, configuration and optimization challenges×1, multiyear shift×1, deployment ramp×1, quarterly result×1, non-ai decline×1, non-ai recovery×1, sequential recovery×1, channel inventory×1, seasonal decline×1, double-digit decline×1, expected decline×1, non-ai flattish sequentially×1, infrastructure shift×1, deal timing×1, subscription conversion×1, upsell×1, private cloud×1, customer adoption×1, future growth opportunity×1, on-prem infrastructure×1, on-prem deployment×1, ecosystem and automation×1, gpu and cpu cost reduction×1, q2 fy2025 guidance×1, forward guidance×1, margin guidance×1, fiscal period length×1, infrastructure software×1, gross×1, cash taxes×1, spending×1, days sales outstanding×1, future supply×1, days on hand×1, debt position×1, repayment×1, reduction×1, fixed rate×1, floating rate×1, cash dividends×1, common stock×1, employee shares×1, dilution guidance×1, consolidated guidance×1, non-ai semiconductor×1, software mix×1, customer expansion×1, new customer growth×1, custom customers×1, coming online×1, creator role×1, hyperscaler enablement×1, system architecture×1, partner software compatibility×1, partner model training systems×1, deployment timeline×1, development timeline×1, accelerated execution×1, customer validation×1, development for system enablement×1, expected later from partners×1, customer buildout journey×1, timing of initial rollout×1, expected eventual progress×1, customer base broadening×1, capacity ramp×1, second-half step-up×1, model efficiency impact×1, cloud and hyperscale demand×1, second-half profile×1, mindset visibility×1, full mindset disclosure×1, q1 beat×1, gpu and ai accelerator support×1, improved shipments×1, pull-ins and acceleration×1, 3nm ramp×1, second half fiscal 2025×1, strong results×1, market disruption×1, customer paralysis×1, decision making×1, resilient growth×1, supply chain disruption×1, interesting×1, relevant and interesting×1, concerns×1, chip impact×1, ongoing headwind×1, positive impact×1, industry change×1, company impact×1, technology development×1, networking priority×1, product upgrades×1, company challenges×1, new challenges×1, accelerators×1, distributed computing×1, bandwidth×1, impact on accelerator performance×1, training versus prefilling×1, post-training workloads×1, design optimization×1, memory capacity vs bandwidth×1, latency vs bandwidth×1, design variables×1, memory bandwidth in ai design×1, memory capacity and bandwidth in inference×1, generative ai disruption×1, generative ai hardware×1, data center architecture×1, shift to public cloud×1, ai-driven upgrades×1, on-prem ai workloads×1, upgrade trend×1, deployment location×1, sovereignty and cloud data rules×1, timing uncertainty×1, future visibility×1, new engagements×1, deployment volume×1, definition×1, production scale×1, production deployment×1, tape-out to insertion×1, in-hand stage before scale production×1, lead time×1, design win to deployment×1, management view×1, large-volume selection×1, training demand×1, selective pipeline×1, customer selectivity×1, customer scale×1, company strategy×1, selection and roadmaps×1, selection approach×1, startup exclusion×1, regulatory impact×1, regulatory risk×1, customer impact×1, training workload×1, inference workload×1, market mix shift×1, competition×1, training workloads×1, inference product line×1, architecture differences×1, combined opportunity×1, training share×1, inference share×1, service discussion×1, large clusters migration×1, scaling performance×1, customer focus×1, cluster connectivity×1, hyperscaler demand×1, customer preference×1, systems and subsystems×1, competitive advantage×1, company ownership×1, switching and routing×1, speed upgrade×1, hyperscaler deployment×1, tomahawk roadmap×1, development pace×1, customer concentration×1, big bets×1, unit growth×1, customer contribution×1, pricing×1, price outlook×1, customer mix×1, unit target×1, customer qualification×1, portfolio expansion×1, information sharing and differentiation×1, capex efficiency×1, customer differentiation×1, base semiconductor technology for ai models×1, algorithms related to models×1, model-specific for each partner×1, degrees of freedom in process×1, limited possible improvement×1, partner-driven×1, limited visibility×1, power in decisions×1, cost×1, cluster size and training use cases×1, test-time scaling×1, customer optimization×1, geopolitical concerns×1, technical issue×1, increase×1, broad-reaching×1, expansion×1, brownfield×1, all-in×1, connectivity technology×1, laser technology×1, single-mode applications×1, laser types×1, market opportunity×1, ethernet alternatives×1, switching architectures×1, jericho family×1, product lineup×1, dual product×1, relative importance×1, connectivity and networking×1, higher spend×1, areas of focus×1, product-line competitiveness×1, production target×1, company focus×1, outlook×1, blue care reference×1, surge×1, share cap×1, temporary shift×1, compute and networking split×1, time horizon×1, deal activity×1, scheduled report×1, conference call webcast×1

Key Metrics 71

revenue×45, revenue mix×9, customers×7, gross margin×6, volume×6, shipments×5, demand×5, adjusted ebitda×5, operating margin×5, units×4, operating expenses×3, lead time×3, teraflops×2, radix capacity×2, bandwidth×2, free cash flow×2, debt×2, coupon rate×2, compute capacity×2, conversion×2, conversion ratio×2, sku×2, design wins×2, average selling price×2, power×2, opex×2, xpu×2, earnings×2, ebitda×1, accelerators×1, xpus×1, xpu clusters×1, serviceable addressable market×1, sam×1, deployments×1, bookings×1, revenue growth×1, growth×1, subscription×1, adoption rate×1, operating income×1, cash taxes×1, capital expenditures×1, days sales outstanding×1, inventory×1, days of inventory on hand×1, cash×1, rate×1, dividend×1, dividend per share×1, share repurchases×1, diluted share count×1, adjusted ebitda margin×1, tax rate×1, deployed customers×1, development time×1, capital expenditure×1, ramp×1, distributed computing×1, network bandwidth×1, memory bandwidth×1, memory capacity×1, engagements×1, total addressable market×1, market opportunity×1, radix×1, market estimate×1, served available market×1, customer count×1, exaflops per second per dollar×1, total cost of ownership×1

Entities 732

Broadcom×359, Hock Tan×186, XPU×41, Kirsten Spears×39, Harlan Sur×10, Frontier×9, VMware×8, Ross Seymore×7, Tomahawk×6, JPMorgan Chase×6, VCF×5, Stacy Rasgon×5, Bernstein Research×5, Vivek Arya×5, Harsh Kumar×5, DeepSeek×4, Christopher Rolland×4, Vijay Rakesh×4, Ethernet×3, GPU×3, Timothy Arcuri×3, VCM×2, William Stein×2, C.J. Muse×2, SerDes×1, vSphere×1, NVIDIA×1, United States×1, Ben Reitzes×1, ASIC×1, PCI Express×1, Jericho×1, Blue Care×1

Business Segments 288

Semiconductor Solutions×255, Infrastructure Software×27, Software×6

Sectors 287

semiconductor×161, artificial intelligence×49, cloud computing×45, networking equipment×11, data center×7, enterprise software×3, virtualization×2, memory×2, optical networking×2, broadband×1, data storage×1, wireless telecommunications×1, enterprise hardware×1, telecommunications equipment×1

Regions 6

China×5, US×1

Metadata Distributions

Sentiment
positive 97, negative 20, neutral 291
Temporality
backward 78, forward 66, current 264
Certainty
definitive 108, confident 118, moderate 128, tentative 54
Magnitude
major 72, moderate 186, minor 150
Direction
improvement 39, decline 8, flat 4, mixed 7, none 350
Time Horizon
immediate 70, near_term 149, medium_term 48, long_term 9, unspecified 132
Verifiability
quantitative 129, event 13, qualitative 266
Analyst Intent
probing 24, challenging 2, confirming 8, seeking_detail 25, seeking_guidance 7

Speakers

Executives
HT Hock Tan (CEO), KS Kirsten Spears (CFO)
Analysts
BR Ben Reitzes, CM C.J. Muse, CR Christopher Rolland, HS Harlan Sur, HK Harsh Kumar, RS Ross Seymore, SR Stacy Rasgon, TA Timothy Arcuri, VR Vijay Rakesh, VA Vivek Arya, WS William Stein
Other
GU Gu (IR), OP Operator (operator)

Sections

Type · Label · Speaker
preamble · Preamble · Gu
prepared_remarks · Prepared Remarks · Hock Tan, Kirsten Spears
qa_session · Q&A Session
closing_remarks · Closing Remarks · Gu
operator_signoff · Operator Sign-off · Operator

Q&A Exchanges 11

# · Analyst · Firm · Turns
1 · BR Ben Reitzes · Melius · 3
2 · HS Harlan Sur · JPMorgan · 6
3 · WS William Stein · Truist Securities · 4
4 · RS Ross Seymore · Deutsche Bank · 4
5 · SR Stacy Rasgon · Bernstein Research · 6
6 · VA Vivek Arya · Bank of America · 3
7 · HK Harsh Kumar · Piper Sandler · 4
8 · TA Timothy Arcuri · UBS · 4
9 · CM C.J. Muse · Cantor Fitzgerald · 4
10 · CR Christopher Rolland · Susquehanna · 6
11 · VR Vijay Rakesh · Mizuho · 5

Claim Taxonomy 408

REPORTING 81
result · Financial outcome for a completed period · 51
metric · Non-financial quantitative fact · 15
operational · Discrete completed event · 15
PROJECTING 47
guidance · Quantitative expectation with number + time · 28
commitment · Promise with binary verifiable outcome · 10
target · Long-term aspirational quantitative goal · 9
POSITIONING 168
strategy · Priority, direction, or initiative · 142
competitive · Company's position or advantages · 2
opportunity · Market condition framed as growth driver · 10
risk · Headwind, constraint, or uncertainty · 14
EXPLANATORY 46
attribution · Why a specific outcome happened · 4
context · Non-company macro/industry fact · 42
FRAMING 0
thesis · Falsifiable belief about how the world works · 0
ANALYST 66
question · Interrogative seeking information · 35
observation · Restates a fact or data point · 20
concern · Flags a risk or challenge · 5
estimate · Analyst's own projection or calculation · 2
sentiment · Opinion, praise, or critique · 4

Transcript

Preamble
OP
Operator (operator)
Welcome to the Broadcom Inc. First Quarter Fiscal Year 2025 Financial Results Conference Call. At this time, for opening remarks and introductions, I would like to turn the call over to Gu, Head of Investor Relations of Broadcom Inc.
GU
Gu (IR), Broadcom Inc.
Thank you, Sherry, and good afternoon, everyone. Joining me on today's call are Hock Tan, President and CEO, Kirsten Spears, Chief Financial Officer, and Charlie Kawwas, President, Semiconductor Solutions Group. Broadcom distributed a press release and financial tables after the market closed describing our financial performance for the first quarter of fiscal year 2025. If you did not receive a copy, you may obtain the information from the investors section of Broadcom's website at broadcom.com. This conference call is being webcast live, and an audio replay of the call can be accessed for one year through the investors section of Broadcom's website. During the prepared comments, Hock and Kirsten will be providing details of our first quarter fiscal year 2025 results, guidance for our second quarter of fiscal year 2025, as well as commentary regarding the business environment. We will take questions after the end of our prepared comments.
Please refer to our press release today and our recent filings with the SEC for information on the specific risk factors that could cause our actual results to differ materially from the forward-looking statements made on this call. In addition to US GAAP reporting, Broadcom reports certain financial measures on a non-GAAP basis. A reconciliation between GAAP and non-GAAP measures is included in the tables attached to today's press release. Comments made during today's call will primarily refer to our non-GAAP financial results. I'll now turn the call over to Hock.
Prepared Remarks
HT
Hock Tan (CEO), Broadcom Inc.
Thank you, Gu. And thank you everyone for joining today.
In our fiscal Q1 2025, total revenue was a record $14.9 billion, up 25% year on year, and consolidated adjusted EBITDA was again a record at $10.1 billion, up 41% year on year. So let me first provide color on our semiconductor business. Q1 semiconductor revenue was $8.2 billion, up 11% year on year. Growth was driven by AI, as AI revenue of $4.1 billion was up 77% year on year and beat our guidance of $3.8 billion, due to stronger shipments of networking solutions to hyperscalers on AI.
Our hyperscale partners continue to invest aggressively in next-generation Frontier models, which do require high-performance accelerators as well as AI data centers with larger clusters. Consistent with this, we are stepping up our R&D investment on two fronts. One, we're pushing the envelope of technology in creating the next generation of accelerators. We're taping out the industry's first two-nanometer AI XPU in 3.5D packaging, as we drive towards a 10,000-teraflops XPU. Two, we have a view towards scaling clusters of 500,000 accelerators for hyperscale customers. We have doubled the radix capacity of the existing Tomahawk 5. And beyond this, to enable AI clusters to scale up on Ethernet towards one million XPUs, we have taped out our next-generation 100-terabit Tomahawk 6 switch, running 200G SerDes at 1.6-terabit bandwidth. We will be delivering samples to customers within the next few months. These R&D investments are very aligned with the roadmaps of our three hyperscale customers as they each race towards one-million-XPU clusters by the end of 2027. Accordingly, we do reaffirm what we said last quarter: we expect these three hyperscale customers will generate a serviceable addressable market, or SAM, in the range of $60 billion to $90 billion in fiscal 2027. Beyond these three customers, we had also mentioned previously that we are deeply engaged with two other hyperscalers in enabling them to create their own customized AI accelerators.
We are on track to tape out their XPUs this year. In the process of working with the hyperscalers, it has become very clear that while they are excellent in software, Broadcom is the best in hardware. Working together is what optimizes large language models. It is therefore no surprise to us that, since our last earnings call, two additional hyperscalers have selected Broadcom to develop custom accelerators to train their next-generation Frontier models. So even as we have three hyperscale customers to whom we are shipping XPUs in volume today, there are now four more who are deeply engaged with us to create their own accelerators. And to be clear, these four are not included in our estimated SAM of $60 billion to $90 billion in 2027. So we do see an exciting trend here. New Frontier models and techniques put unexpected pressures on AI systems. It's difficult to serve all classes of models with a single system design point, and therefore it is hard to imagine that a general-purpose accelerator can be configured and optimized across multiple Frontier models. And as I mentioned before, the trend towards XPUs is a multiyear journey. So coming back to 2025, we see a steady ramp in deployment of our XPUs and networking products. Q1 AI revenue was $4.1 billion, and we expect Q2 AI revenue to grow to $4.4 billion, which is up 44% year on year.
Turning to non-AI semiconductors, revenue of $4.1 billion was down 9% sequentially on a seasonal decline in wireless. In aggregate, during Q1, the recovery in non-AI semiconductors continues to be slow. Broadband, which bottomed in Q4 2024, showed a double-digit sequential recovery in Q1 and is expected to be up similarly in Q2 as service providers and telcos step up spending. Server storage was down single digits sequentially in Q1 but is expected to be up high single digits sequentially in Q2.
Meanwhile, enterprise networking remains flattish in the first half of fiscal 2025 as customers continue to work through channel inventory. Wireless was down sequentially due to a seasonal decline but remained flat year on year. In Q2, wireless is expected to be the same: flat again year on year. Resales in industrial were down double digits in Q1 and are expected to be down in Q2. So reflecting the foregoing puts and takes, we expect non-AI semiconductor revenue in Q2 to be flattish sequentially, even though we are seeing bookings continue to grow year on year. In summary, for Q2, we expect total semiconductor revenue to grow 2% sequentially, up 17% year on year, to $8.4 billion. Turning now to infrastructure software. Q1 infrastructure software revenue of $6.7 billion was up 47% year on year and up 15% sequentially, exaggerated though by deals that slipped from Q4 to Q1. This is the first quarter, Q1 2025, in which the year-on-year comparables include VMware in both periods. We're seeing significant growth in the software segment for two reasons.
One, we're converting from a footprint of largely perpetual licenses to one of full subscription, and as of today, we are over 60% done. Two, these perpetual licenses were largely for virtualization only, otherwise called vSphere. We are upselling customers to the full-stack VCF, which enables the entire data center to be virtualized and enables customers to create their own private cloud environment on-prem. As of the end of Q1, approximately 70% of our largest 10,000 customers have adopted VCF. As these customers consume VCF, we do see a further opportunity for future growth. As large enterprises adopt AI, they have to run their AI workloads on their on-prem data centers, which will include both GPU servers as well as traditional CPUs. And just as VCF virtualizes these traditional data centers using CPUs, VCM will also virtualize GPUs on a common platform and enable enterprises to import AI models to run on their own data on-prem. This platform, which virtualizes the GPU, is called the VMware Private AI Foundation. As of today, in collaboration with NVIDIA, we have 39 enterprise customers for the VMware Private AI Foundation. Customer demand has been driven by our open ecosystem, superior load balancing, and automation capabilities, which allow customers to intelligently pool and run workloads across both GPU and CPU infrastructure, leading to significantly reduced costs.
Moving on to Q2 outlook for software, we expect revenue of $6.5 billion, up 23% year on year. So in total, we're guiding Q2 consolidated revenue to be approximately $14.9 billion, up 19% year on year. And we expect this will drive Q2 adjusted EBITDA to approximately 66% of revenue. With that, let me turn the call over to Kirsten.
KS
Kirsten Spears (CFO), Broadcom Inc.
Thank you, Hock.
Let me now provide additional detail on our Q1 financial performance. On a year-on-year comparable basis, keep in mind that Q1 of fiscal 2024 was a 14-week quarter while Q1 of fiscal 2025 is a 13-week quarter. Consolidated revenue was $14.9 billion for the quarter, up 25% from a year ago. Gross margin was 79.1% of revenue in the quarter, better than we originally guided, on higher infrastructure software revenue and a more favorable semiconductor revenue mix. Consolidated operating expenses were $2 billion, of which $1.4 billion was for R&D. Q1 operating income of $9.8 billion was up 44% from a year ago, with operating margin at 66% of revenue. Adjusted EBITDA was a record $10.1 billion, or 68% of revenue, above our guidance of 66%.
This figure excludes $142 million of depreciation. Now a review of the P&L for our two segments. Starting with semiconductors.
Revenue for our semiconductor solutions segment was $8.2 billion and represented 55% of total revenue in the quarter. This was up 11% year on year. Gross margin for our semiconductor solutions segment was approximately 68%, up 70 basis points year on year driven by revenue mix. Operating expenses increased 3% year on year to $890 million on increased investment in R&D for leading-edge AI semiconductors, resulting in semiconductor operating margin of 57%.
Now moving on to infrastructure software. Revenue for infrastructure software of $6.7 billion was 45% of total revenue and up 47% year on year, based primarily on increased revenue from VMware. Gross margin for infrastructure software was 92.5% in the quarter compared to 88% a year ago. Operating expenses were approximately $1.1 billion in the quarter, resulting in infrastructure software operating margin of 76%. This compares to operating margin of 59% a year ago. This year-on-year improvement reflects our disciplined integration of VMware and sharp focus on deploying our VCF strategy.
Moving on to cash flow. Free cash flow in the quarter was $6 billion and represented 40% of revenue. Free cash flow as a percentage of revenue continues to be impacted by cash interest expense from debt related to the VMware acquisition and cash taxes due to the mix of US taxable income, the continued delay in the reenactment of section 174, and the impact of Corporate AMT. We spent $100 million on capital expenditures.
Days sales outstanding were 30 days in the first quarter compared to 41 days a year ago. We ended the first quarter with inventory of $1.9 billion, up 8% sequentially, to support revenue in future quarters. Our days of inventory on hand were 65 days in Q1 as we continue to remain disciplined in how we manage inventory across the ecosystem.
We ended the first quarter with $9.3 billion of cash and $68.8 billion of gross principal debt. During the quarter, we repaid $495 million of fixed-rate debt and $7.6 billion of floating-rate debt with new senior notes, commercial paper, and cash on hand, reducing debt by a net $1.1 billion. Following these actions, the weighted average coupon rate and years to maturity of our $58.8 billion in fixed-rate debt is 3.8% and 7.3 years respectively. The weighted average coupon rate and years to maturity of our $6 billion in floating-rate debt is 5.4% and 3.8 years respectively. And our $4 billion in commercial paper is at an average rate of 4.6%. Turning to capital allocation.
In Q1, we paid stockholders $2.8 billion of cash dividends based on a quarterly common stock cash dividend of $0.59 per share. We spent $2 billion to repurchase 8.7 million AVGO shares from employees as those shares vested, to cover withholding taxes. In Q2, we expect the non-GAAP diluted share count to be approximately 4.95 billion shares.
Now moving on to guidance. Our guidance for Q2 is for consolidated revenue of $14.9 billion, with semiconductor revenue of approximately $8.4 billion, up 17% year on year. We expect Q2 AI revenue of $4.4 billion, up 44% year on year. For non-AI semiconductors, we expect Q2 revenue of $4 billion. We expect Q2 infrastructure software revenue of approximately $6.5 billion, up 23% year on year. For modeling purposes, we expect Q2 consolidated gross margin to be down approximately 20 basis points sequentially on the revenue mix of infrastructure software. As Hock discussed earlier, we are increasing our R&D investment in leading-edge AI in Q2, and accordingly, we expect Q2 adjusted EBITDA to be approximately 66% of revenue. We expect the non-GAAP tax rate for Q2 and fiscal year 2025 to be approximately 14%. That concludes my prepared remarks. Operator, we will now open for questions.
OP
Operator (operator)
Thank you.
As a reminder, to ask a question, you will need to press star one one on your telephone. To withdraw your question, press star one one again. Due to time constraints, we ask that you please limit yourself to one question. Please stand by while we compile the Q&A roster. And our first question will come from the line of Ben Reitzes with Melius. Your line is open.
Q&A Session
Q&A 1/11
BR
Ben Reitzes (analyst), Melius
Hey, guys. Thanks a lot, and congrats on the results. Hock, you talked about four more customers coming online. Can you just talk a little bit more about the trend you're seeing? Can any of these customers be as big as the current three? And what does this say about the custom silicon trend overall, and your optimism and upside to the business long term? Thanks.
HT
Hock Tan (CEO), Broadcom Inc.
Well, very interesting question, Ben. And thanks for your kind wishes. But, by the way, these four are not customers as we define it. As I've always said, in developing and creating XPUs, we are not really the creator of those XPUs, to be honest. We enable each of the hyperscaler partners we engage with to create that chip, and basically to create that compute system, call it that way. And it comprises the model, the software model, working closely with the compute engine, the XPU, and the networking that ties together the clusters of those multiple XPUs as a whole to train those large Frontier models. And even though we create the hardware, it still has to work with the software models and the algorithms of those partners of ours before it becomes fully deployable at scale, which is why we define customers in this case as those where we know they have deployed at scale and we receive the production volume that comes with it. For that, we have just the three. The four are what I call partners, who are trying to create the same thing as the first three, each of them to train their own Frontier models. And as I also said, it doesn't happen overnight. To do the first chip would typically take a year and a half, and that's very accelerated, which we can do given that we essentially have a framework and a methodology that works right now. It works for the three customers; there's no reason for it to not work for the four. Those four partners still need to create and develop the software, which we don't do, to make it work. And to answer your question, there's no reason why these four would not create demand in the range of what we're seeing with the first three. But probably later. It's a journey.
They started later, and so they will probably get there later.
BR
Ben Reitzes (analyst), Melius
Thank you very much.
Q&A 2/11
OP
Operator (operator)
Thank you. One moment for our next question. And that will come from the line of Harlan Sur with JPMorgan. Your line is open.
HS
Harlan Sur (analyst), JPMorgan
Good afternoon, and great job on the strong quarterly results, Hock and team. Great to see the continued momentum in the AI business here in the first half of your fiscal year, and the continued broadening out of your AI ASIC customers. Hock, last earnings call, you did call out a strong ramp in the second half of the fiscal year, driven by new three-nanometer AI accelerator programs ramping. Can you help us, either qualitatively or quantitatively, profile the second-half step-up relative to what the team just delivered here in the first half?
Has the profile changed, either favorably or less favorably, versus what you thought maybe ninety days ago? Because, quite frankly, a lot has happened since last earnings.
Right? You've had dynamics like DeepSeek and the focus on AI model efficiency. But on the flip side, you've had strong CapEx outlooks from your cloud and hyperscale customers. So any color on the second-half AI profile would be helpful.
HT
Hock Tan (CEO), Broadcom Inc.
You're asking me to look into the minds of my customers, and I hate to tell you, they don't show me their entire mindset here. But why are we beating the numbers so far in Q1, and why does Q2 seem encouraging? Partly from improved networking shipments which, as I indicated, tie together those XPUs and AI accelerators, and in some cases even GPUs, for the hyperscalers. And that's good. And partly, also, we think there are some pull-ins of shipments, an acceleration, call it that way, of shipments into fiscal 2025.
HS
Harlan Sur (analyst), JPMorgan
And on the second half that you talked about ninety days ago, the second-half three-nanometer ramp, is that still very much on track?
HT
Hock Tan (CEO), Broadcom Inc.
Harlan, thank you. I already gave you two. Sorry. Let's not speculate on the second half.
HS
Harlan Sur (analyst), JPMorgan
Okay. Thank you, Hock.
Q&A 3/11
OP
Operator (operator)
Thank you. One moment for our next question. And that will come from the line of William Stein with Truist Securities. Your line is open.
WS
William Stein (analyst), Truist Securities
Great. Thank you for taking my question. Congrats on these pretty great results.
You know, it seems from the news headlines about tariffs and about DeepSeek that there may be some disruption. Some customers and some other complementary suppliers seem to feel a bit paralyzed, perhaps, and have difficulty making tough decisions. Those tend to be really useful times for great companies to emerge as something bigger and better than they were in the past. You've grown this company in a tremendous way over the last decade-plus, and you're doing great now, especially in this AI area. But I wonder if you're seeing that sort of disruption from these dynamics that we suspect are happening, based on headlines of what we see from other companies. And aside from adding these customers in AI, I'm sure there's other great stuff going on, but should we expect some bigger changes to come from Broadcom as a result of this?
#195
#196
sentiment#197
concern#198
concern#199
concern#200
sentiment#201
observation#202
sentiment#203
question#204
question#205
HT
Hock TanCEOBroadcom Inc.
You pose a very interesting set of issues and questions, and they are very relevant.
The only problem we have at this point, I would say, is that we really don't know where it all lands. I mean, there's the threat, the noise, of tariffs, especially on chips; nothing material has been outlined, nor do we know how it will be structured. So we don't know. But we do experience it, and we are living it now.
The disruption we do see is, in a way, positive. I should add, a very positive disruption in semiconductors: generative AI. Generative AI, for sure. I said that before, so at the risk of repeating myself, we feel it more than ever. It is really accelerating the development of semiconductor technology, both process and packaging as well as design, towards higher- and higher-performance accelerators and networking functionality. We've seen that innovation, those upgrades, occur almost every month as we face new, interesting challenges, particularly with XPUs.
Within those XPUs, we're trying to optimize for the Frontier models of our partners, our customers, as well as our hyperscale partners. It's almost a privilege for us to participate in that and try to optimize. And by optimize, I mean: you can look at an accelerator, in simple terms, at a high level, on more than just one single metric, which is compute capacity, how many teraflops. It's more than that, because this is a distributed computing problem. It's not just the compute capacity of a single XPU or GPU; it's also the network bandwidth that ties it to the next adjacent XPU or GPU. So that has an impact.
So you're doing that, and you have to balance against that. Then you decide: are you doing training, or are you doing prefilling, post-training, fine-tuning?
And then comes: how much memory do you balance against that? And with it, how much latency can you afford, which is memory bandwidth. So you are looking at at least four variables, maybe even five if you include memory bandwidth, not just memory capacity, when you go straight to inference. We have all these variables to play with, and we try to optimize them.
So all this is a great experience for our engineers, pushing the envelope on how to create all those chips, and that's the biggest disruption we see right now: the sheer effort of trying to push the envelope on generative AI, trying to create the best hardware infrastructure to run it. Beyond that, yes, there are other things that come into play, because AI, as I indicated, doesn't just drive hardware for enterprises; it drives the way they architect their data centers. Data requirements, keeping data private and under control, become important. So suddenly, the push of workloads towards public cloud may take a little pause, as large enterprises in particular have to decide: if you want to run AI workloads, you're probably thinking very hard about running them on-prem. And that suddenly pushes you to say, I have to upgrade my own data centers and manage my own data to run it on-prem. That's also a trend we have been seeing over the past twelve months. Hence my comments on VMware Private AI Foundation. This is especially enterprises pushing in that direction, quickly recognizing where they should run their AI workloads. So those are trends we see today, a lot of them coming out of AI, and a lot coming out of sensitivity to rules on sovereignty, in cloud and in data. As far as your mention of tariffs is concerned, I think it's too early for us to figure out where that will land. Give it maybe another three to six months, and we'll probably have a better idea of where to go.
context#206
context#207
risk#208
risk#209
risk#210
context#211
risk#212
opportunity#213
opportunity#214
context#215
strategy#216
opportunity#217
opportunity#218
opportunity#219
strategy#220
strategy#221
risk#222
strategy#223
strategy#224
strategy#225
context#226
context#227
context#228
context#229
context#230
strategy#231
strategy#232
strategy#233
strategy#234
strategy#235
strategy#236
strategy#237
strategy#238
strategy#239
strategy#240
strategy#241
strategy#242
strategy#243
strategy#244
strategy#245
strategy#246
competitive#247
strategy#248
strategy#249
risk#250
context#251
context#252
context#253
context#254
context#255
strategy#256
context#257
context#258
strategy#259
strategy#260
WS
William SteinanalystTruist Securities
Thank you.
#261
Q&A 4/11
OP
Operatoroperator
Thank you. One moment for our next question. And that will come from the line of Ross Seymore with Deutsche Bank. Your line is open.
RS
Ross SeymoreanalystDeutsche Bank
Thanks for letting me ask a question. I want to go back to the XPU side of things.
Going from the four new engagements (not-yet-named customers, two last quarter and two more announced today), I want to talk about going from design win to deployment. How do you judge that? Because there is some debate about, you know, tons of design wins where the deployments actually don't happen: either they never occur, or the volume is never what was originally promised. How do you view that conversion ratio? Is there a wide range around it, or is there some way you could help us understand how that works?
#262
question#263
observation#264
question#265
question#266
question#267
question#268
question#269
HT
Hock TanCEOBroadcom Inc.
Well, Ross, that's an interesting question, and I'll take the opportunity to say that the way we look at design wins is probably very different from the way many of our peers out there look at it.
Number one, to begin with, we call something a design win when we know our product is produced at scale and is actually deployed, literally deployed in production. That takes a long lead time, because from tape-out to getting the product, it easily takes a year. And from the product being in the hands of our partner to when it goes into scale production, it will take six months to a year.
That's the experience we've seen. That's number one. And number two: producing and deploying five thousand SKUs, that's a joke. That's not real production, in our view.
And so we also limit ourselves, in selecting partners, to people who really need that large volume. You need that large volume, from our viewpoint, at scale right now, mostly in training of large language models, frontier models, on a continuing trajectory. So we limit ourselves to however many customers, or potential customers, exist out there, Ross. And we tend to be very selective in who we pick, from the beginning. So when we say design win, it really is at scale.
It's not something that starts in six months and dies, or starts in a year and dies. Basically, it's a selection of customers. It's just the way we've run our ASIC business in general for the last fifteen years. We pick and choose the customers, because we know them, and we do multi-year roadmaps with these customers, because we know these customers are sustainable. To put it bluntly, we don't do it for start-ups.
strategy#270
strategy#271
strategy#272
metric#273
strategy#274
metric#275
strategy#276
strategy#277
strategy#278
strategy#279
strategy#280
context#281
strategy#282
strategy#283
strategy#284
#285
strategy#286
strategy#287
strategy#288
strategy#289
strategy#290
strategy#291
strategy#292
RS
Ross SeymoreanalystDeutsche Bank
Thank you.
#293
Q&A 5/11
OP
Operatoroperator
And one moment for our next question. And that will come from the line of Stacy Rasgon with Bernstein Research. Your line is open.
SR
Stacy RasgonanalystBernstein Research
Hi, guys. Thanks for taking my question.
I wanted to go to the three customers that you do have in volume today. What I wanted to ask is: is there any concern about some of the new regulations, the AI diffusion rules that are supposedly going to be put in place in May, impacting any of those design wins or shipments? It sounds like you think all three of those are still on at this point. But anything you could tell us about worries about new regulations or AI diffusion rules impacting any of those wins would be helpful.
#294
#295
observation#296
question#297
observation#298
question#299
HT
Hock TanCEOBroadcom Inc.
Thank you. In this era or this current era of geopolitical tensions, and fairly dramatic actions all around by governments, yes, always some concern at the back of everybody's mind. But to answer your question directly, no. We don't have any concerns.
#300
context#301
strategy#302
strategy#303
SR
Stacy RasgonanalystBernstein Research
Got it. So none of those are going into China? Or to Chinese customers then?
#304
question#305
question#306
HT
Hock TanCEOBroadcom Inc.
No comment. Are you trying to locate who they are?
strategy#307
strategy#308
SR
Stacy RasgonanalystBernstein Research
Okay. That's helpful. Thank you.
#309
#310
#311
Q&A 6/11
OP
Operatoroperator
Thank you. One moment for our next question. And that will come from the line of Vivek Arya with Bank of America. Your line is open.
VA
Vivek AryaanalystBank of America
Thanks for taking my question.
Hock, whenever you have described your AI opportunity, you've always emphasized the training workload. But the perception is that the AI market could be dominated by the inference workload, especially with these new reasoning models. So what happens to your opportunity and share if the mix moves more towards inference? Does it create a bigger TAM for you than the $60 to $90 billion? Does it keep it the same, but with a different mix of products? Or does a more inference-heavy market favor a GPU over an XPU? Thank you.
#312
observation#313
observation#314
question#315
question#316
question#317
#318
HT
Hock TanCEOBroadcom Inc.
That's a good, interesting question. By the way, I do talk a lot about training, but in our experience we also focus on inference, as a separate product line. And that's why I can say the architecture of those chips is very different from the architecture of the training chips.
So it's a combination of those two, I should add, that adds up to this $60 to $90 billion. If I had not been clear, I do apologize: it's a combination of both. But having said that, the larger part of the dollars comes from training, not inference, within that serviceable market we have talked about so far. Thank you.
#319
context#320
strategy#321
#322
context#323
strategy#324
result#325
#326
strategy#327
metric#328
metric#329
strategy#330
#331
Q&A 7/11
OP
Operatoroperator
One moment for our next question. And that will come from the line of Harsh Kumar with Piper Sandler. Your line is open.
HK
Harsh KumaranalystPiper Sandler
Thanks, Broadcom team, and again, great execution. Hock, I had a quick question. We've been hearing that almost all of the large clusters, the 100k-plus ones, are all going to Ethernet.
I was wondering if you could help us understand the importance, when the customer is making a selection, of choosing between a guy that has the best switch ASICs, such as you, versus a guy that might also have the compute there. Can you talk about what the customer is thinking, and what final points they want to hit when they make that selection for the mix of cards?
#332
sentiment#333
question#334
observation#335
question#336
question#337
HT
Hock TanCEOBroadcom Inc.
Okay, I see. In the case of the hyperscalers, it's very much driven by performance. And it's performance in what you're mentioning: connecting, scaling up, and scaling out those AI accelerators, be they XPUs or GPUs.
In most cases, among those hyperscalers we engage with, when it comes to connecting those clusters, they're very driven by performance. If you are in the race to really get the best performance out of your hardware as you train, and continue to train, your Frontier models, that matters more than anything else. So the first thing they go for is something proven: a proven piece of hardware, a proven system, or subsystem in our case, that makes it work.
And in that case, we tend to have a big advantage, because networking is ours. Switching and routing have been ours for the last ten years, at least.
And the fact that it's AI just makes it more interesting for our engineers to work on. But it's basically based on proven technology, and on experience in pushing the envelope, going from 800 gigabit per second bandwidth to 1.6 and moving on to 3.2, which is exactly why we keep stepping up the rate of investment in our products. We took Tomahawk 5 and doubled the radix to deal with just one hyperscaler, because they want high radix to create larger clusters while running smaller bandwidth. But that doesn't stop us from moving ahead to the next generation, Tomahawk 6, and, I dare say, we're even planning to come out with 7 and 8 right now. We're speeding up the rate of development. And it's all actually for those few guys, by the way. So we're making a lot of investment for very few customers, hopefully with very large serviceable available markets. If nothing else, those are the big bets we are placing.
#338
#339
#340
#341
context#342
context#343
strategy#344
context#345
context#346
context#347
context#348
context#349
context#350
competitive#351
strategy#352
strategy#353
strategy#354
strategy#355
operational#356
commitment#357
strategy#358
strategy#359
strategy#360
strategy#361
HK
Harsh KumaranalystPiper Sandler
Thank you, Hock.
#362
Q&A 8/11
OP
Operatoroperator
Thank you. One moment for our next question. And that will come from the line of Timothy Arcuri with UBS. Your line is open.
TA
Timothy ArcurianalystUBS
Thanks a lot.
Hock, in the past you have mentioned XPU units growing from about two million last year to about seven million, you said, in the 2027-2028 time frame. My question is: do these four new customers add to that seven-million-unit number? I know in the past you sort of talked about an ASP of, you know, twenty grand by then. The first three customers are clearly a subset of that seven million units. So do these four new engagements drive that seven higher, or do they just fill in to get to the seven million? Thanks.
#363
observation#364
question#365
observation#366
observation#367
observation#368
question#369
#370
HT
Hock TanCEOBroadcom Inc.
Thanks, Tim, for asking that. To clarify, as I thought I made clear in my comments: no. The market we're talking about, when you translate to units, is only among the three customers we have today. The other four we talk about as engagement partners; we don't consider them customers yet, and therefore they are not in our serviceable available market.
#371
strategy#372
strategy#373
strategy#374
strategy#375
strategy#376
TA
Timothy ArcurianalystUBS
Okay. So they would add to that number. Okay. Thanks, Hock.
#377
estimate#378
#379
#380
Q&A 9/11
OP
Operatoroperator
Thanks. One moment for our next question. And that will come from the line of CJ Muse with Cantor Fitzgerald. Your line is open.
CM
C.J. MuseanalystCantor Fitzgerald
Yeah, good afternoon. Thank you for taking the question. I guess I'll follow up on your prepared remarks and your comments earlier around optimization: your best hardware paired with the hyperscalers' great software. I'm curious how expanding your portfolio now to six mega-scale, kind of frontier-model customers will enable you to share tremendous information, but at the same time, in a world where these six truly want to differentiate. Obviously, the goal for all of these players is exaflops per second, per dollar of capex, per watt. To what degree are you aiding them in these efforts? And where does the Chinese wall kind of start, where they want to differentiate and not share with you some of the work that you're doing?
#381
#382
#383
question#384
observation#385
observation#386
estimate#387
question#388
question#389
HT
Hock TanCEOBroadcom Inc.
Oh, you know, we only provide very basic, fundamental technology in semiconductors, to enable these guys to use what we have and optimize it to their own particular models, and the algorithms that relate to those models. That's it; that's all we do. So that's the level of optimization we do for each of them. And as I mentioned earlier, there are maybe five degrees of freedom.
Yes, we do, and we play with those.
And even with five degrees of freedom, there's only so much we can do at that point. Basically, how we optimize is all tied to the partner telling us how they want to do it, so there's only so much we have visibility on. But what we do in the XPU model is optimization translating to performance, but also power. That's very important in how they play; it's not just cost. The power translates into total cost of ownership, eventually. It's how you design in power, and how you balance it against the size of the cluster, and whether they use it for training, pretraining, post-training, inference, or test-time scaling; each of them has its own characteristics. And that's the advantage of doing that XPU and working closely with them to create that stuff. Now, as far as your question on China and all that, frankly, I don't have any opinion on that at all. To us, it's a technical game.
#390
strategy#391
strategy#392
#393
#394
strategy#395
strategy#396
#397
#398
#399
risk#400
strategy#401
risk#402
strategy#403
strategy#404
strategy#405
strategy#406
context#407
#408
strategy#409
#410
context#411
context#412
context#413
strategy#414
strategy#415
strategy#416
CM
C.J. MuseanalystCantor Fitzgerald
Thank you very much.
#417
Q&A 10/11
OP
Operatoroperator
One moment for our next question. And that will come from the line of Christopher Rolland with Susquehanna. Your line is open.
CR
Christopher RollandanalystSusquehanna
Hey, thanks so much for the question. This one's maybe for Hock and for Kirsten. I'd love to know, because you have kind of the complete connectivity portfolio, how you see new greenfield scale-up opportunities playing out here, whether optical or copper or really anything, and how additive this could be for your company. And then, Kirsten, I think OpEx is up. Maybe just talk about where those OpEx dollars are going within the AI opportunity, and whether they relate. Thanks so much.
#418
#419
question#420
question#421
concern#422
question#423
#424
HT
Hock TanCEOBroadcom Inc.
Your question is very broad-reaching. In our portfolio, we have an advantage: a lot of the hyperscaler customers are talking about a lot of expansion, and it's almost all greenfield, much less so brownfield. It's very greenfield, all expansion, and it all tends to be next generation, which is what we're doing. Which is very exciting.
So the opportunity is very, very high. We can do it in copper, but where we see a lot of opportunity is when you provide the networking connectivity through optical. There are a lot of active elements there, including either multimode lasers, which are called VCSELs, or lasers for, basically, single mode. And we do both. So there's a lot of opportunity, in scale-up as well as scale-out. We still do a lot of other protocols beyond Ethernet; consider PCI Express, where we are on the leading edge. And on the networking, switching architecture, so to speak, we offer both: one is a very intelligent switch, which is like our Jericho family, paired with a dumb NIC; the other is a dumb switch, which is our Tomahawk, paired with a very smart NIC. We offer both architectures. So, yes, we have a lot of opportunities from it. All things said and done, this nice, wide portfolio adds up to probably, as I said in prior quarters, about 20% of our total AI revenue, maybe going to 30%. Last quarter we hit almost 40%, but that's not the norm. I would say typically all those other portfolio products still add up to a nice, decent amount of revenue for us, but within the sphere of AI they add up, on average, to close to 30%, and the XPUs, the accelerators, are 70%. If that's what you're driving at, perhaps that sheds some light on how one matters versus the other. We have a wide range of products on the connectivity, networking side; they just add up, though, to that 30%.
strategy#425
strategy#426
strategy#427
opportunity#428
opportunity#429
strategy#430
strategy#431
strategy#432
strategy#433
strategy#434
strategy#435
strategy#436
strategy#437
strategy#438
strategy#439
strategy#440
strategy#441
strategy#442
strategy#443
strategy#444
strategy#445
strategy#446
strategy#447
guidance#448
result#449
result#450
result#451
strategy#452
strategy#453
result#454
CR
Christopher RollandanalystSusquehanna
Thanks so much, Hock.
#455
KS
Kirsten SpearsCFOBroadcom Inc.
And then on the R&D front, on a consolidated basis, as I outlined, we spent $1.4 billion on R&D in Q1, and I stated that it would be going up in Q2. Hock clearly outlined in his script the two areas we're focusing on. Now, I would tell you that as a company we focus R&D across all of our product lines so that we can stay competitive with next-generation product offerings, but he did line out that we are focusing on taping out the industry's first two-nanometer AI XPU packaged in 3D. That was one in the script, and that's an area we're focusing on.
And then he mentioned that we've doubled the radix capacity of the existing Tomahawk 5, to enable our AI customers to scale up on Ethernet towards one million XPUs. That's a huge focus of the company.
guidance#456
strategy#457
strategy#458
operational#459
operational#460
metric#461
target#462
metric#463
CR
Christopher RollandanalystSusquehanna
Yep. Thank you very much, Kirsten.
#464
#465
Q&A 11/11
OP
Operatoroperator
And one moment for our next question. And that will come from the line of Vijay Rakesh with Mizuho. Your line is open.
VR
Vijay RakeshanalystMizuho
Yeah, hi. Thanks.
Just a question on the networking side: just wondering how much it was up sequentially on the AI side? And any thoughts around M&A going forward? There are still a lot of headlines around it, including Blue Care. Thanks.
#466
#467
#468
#469
observation#470
question#471
question#472
observation#473
HT
Hock TanCEOBroadcom Inc.
On the networking side, as indicated, Q1 showed a bit of a surge. But I don't expect that mix, sixty-forty, sixty percent compute and forty percent networking, to be the norm. I think the norm is closer to seventy-thirty; at best, thirty percent networking. And so, who knows what Q2 is? We could see Q2 continuing like that, but to my mind that's a temporary flip. The norm will be seventy-thirty if you take it across a period of time, like six months or a year. That's the answer to your question.
M&A, no. We're too busy doing AI and VMware at this point. We're not thinking of it, at this point.
strategy#474
target#475
target#476
metric#477
target#478
strategy#479
risk#480
strategy#481
target#482
strategy#483
context#484
strategy#485
strategy#486
#487
VR
Vijay RakeshanalystMizuho
Thanks, Hock.
#488
OP
Operatoroperator
Thank you. That is all the time we have for our question and answer session. I would now like to turn the call back over to Gu for any closing remarks.
Closing Remarks
GU
GuirBroadcom Inc.
Thank you, Sherry.
Broadcom currently plans to report its earnings for the second quarter of fiscal year 2025 after close of market on Thursday, June 5th, 2025. A public webcast of Broadcom's earnings conference call will follow at 2 PM Pacific. That will conclude our earnings call today. Thank you all for joining. Sherry, you may end the call.
#489
commitment#490
commitment#491
#492
#493
#494
Operator Sign-off
OP
Operatoroperator
Thank you. Ladies and gentlemen, thank you for participating. This concludes today's program. You may now disconnect.