Thank you, Matt, and good afternoon to all those listening today. 2024 was a transformative year for AMD. We successfully established our multi-billion dollar data center AI franchise, launched a broad set of leadership products, and gained significant server and PC market share. As a result, we delivered record annual revenue, grew net income 26% for the year, and more than doubled free cash flow from 2023.
Importantly, the data center segment contributed roughly 50% of annual revenue, as Instinct and EPYC processor adoption expanded significantly with cloud, enterprise and supercomputing customers. Looking at our financial results. Fourth quarter revenue increased 24% year-over-year to a record $7.7 billion, led by record quarterly data center and client segment revenue, both of which grew by a significant double-digit percentage. On a full year basis, annual revenue grew 14% to $25.8 billion as data center revenue nearly doubled and client segment revenue grew 52%, more than offsetting declines in our Gaming and Embedded segments. Turning to the segments. Data center segment revenue increased 69% year-over-year to a record $3.9 billion. 2024 marks another major inflection point for our server business, as share gains accelerated, driven by the ramp of fifth-gen EPYC Turin and strong double-digit percentage year-over-year growth in fourth-gen EPYC sales.
In cloud, we exited 2024 with well over 50% share at the majority of our largest hyperscale customers. Hyperscaler demand for EPYC CPUs was very strong, driven by expanded deployments powering both their internal compute infrastructure and online services. Public cloud demand was also very strong, with the number of EPYC instances increasing 27% in 2024 to more than 1,000. AWS, Alibaba, Google, Microsoft and Tencent launched more than 100 AMD general purpose and AI instances in the fourth quarter alone. This includes new Azure instances powered by a custom-built EPYC processor with HBM memory that delivers leadership HPC performance, offering 8x higher memory bandwidth compared to competitive offerings. We also built significant momentum with Forbes Global 2000 businesses using EPYC in the cloud, as enterprise customers activated more than double the number of EPYC cloud instances from the prior quarter. This capped off a strong year of growth, as enterprise consumption of EPYC in the cloud nearly tripled from 2023.
Turning to enterprise on-prem adoption. EPYC CPU sales grew by a strong double-digit percentage year-over-year, and we closed high-volume deployments with Akamai, Hitachi, LG, ServiceNow, Verizon, Visa and others. We are seeing growing enterprise pull based on the expanding number of EPYC platforms available and our increased go-to-market investments. Exiting 2024, there are more than 450 EPYC platforms available from the leading server OEMs and ODMs, including more than 120 Turin platforms that went into production in the fourth quarter from Cisco, Dell, HPE, Lenovo, Super Micro and others. Looking forward, Turin is clearly the best server processor in the world, with more than 540 performance records across a broad range of industry standard benchmarks. At the same time, we are seeing sustained demand for both fourth- and third-gen EPYC processors, as our consistent road map execution has made AMD the dependable and safe choice. As a result, we see clear growth opportunities in 2025 across both cloud and enterprise based on our full portfolio of EPYC processors, optimized for leadership performance across the entire range of data center workloads and system price points.
Turning to our data center AI business. 2024 was an outstanding year, as we accelerated our AI hardware road map to deliver an annual cadence of new Instinct accelerators, expanded our ROCm software suite with significant uplifts in inferencing and training performance, built strong customer relationships with key industry leaders and delivered greater than $5 billion of data center AI revenue for the year. Looking at the fourth quarter, MI300X production deployments expanded with our largest cloud partners. Meta exclusively used MI300X to serve their Llama 405B frontier model on meta.ai and added Instinct GPUs to its OCP-compliant Grand Teton platform, designed for deep learning recommendation models and large-scale AI inferencing workloads. Microsoft is using MI300X to power multiple GPT-4-based Copilot services, and launched flagship instances that scale up to thousands of GPUs for AI training, inference and HPC workloads. IBM, DigitalOcean, [Vulture] (ph) and several other AI-focused CSPs have begun deploying AMD Instinct accelerators for new instances. IBM also announced plans to enable MI300X on their watsonx AI and data platform for training and deploying enterprise-ready generative AI applications. Instinct platforms are currently being deployed across more than a dozen CSPs globally, and we expect this number to grow in 2025. For enterprise customers, more than 25 MI300 series platforms are in production with the largest OEMs and ODMs. To simplify and accelerate enterprise adoption of AMD Instinct platforms, Dell began offering MI300X as part of their AI Factory solution suite and is providing multiple ready-to-deploy containers via the Dell Enterprise Hub on Hugging Face. HPC adoption also grew in the quarter. AMD now powers five of the 10 fastest and 15 of the 25 most energy-efficient systems in the world on the latest Top500 supercomputer list.
Notably, the El Capitan system at Lawrence Livermore National Labs debuted as the world's fastest supercomputer, using over 44,000 MI300A APUs to deliver more than 1.7 exaflops of compute performance. Earlier this month, the High-Performance Computing Center at the University of Stuttgart launched the Hunter supercomputer that also uses MI300A. Like El Capitan, Hunter will be used for both foundational scientific research and advanced AI projects, including training LLMs in 24 different European languages. On the AI software front, we made significant progress across all layers of the ROCm stack in 2024. Our strategy is to establish AMD ROCm as the industry's leading open software stack for AI, providing developers with greater choice and accelerating the pace of industry innovation. More than 1 million models on Hugging Face now run out of the box on AMD. And our platforms are supported in the leading frameworks like PyTorch and JAX, serving solutions like vLLM and compilers like OpenAI Triton. We have also successfully ramped large-scale production deployments with numerous customers using ROCm, including our lead hyperscale partners. We ended the year with the release of ROCm 6.3 that included multiple performance optimizations, including support for the latest FlashAttention algorithm that runs up to 3 times faster than prior versions and the SGLang runtime that enabled day-zero support for state-of-the-art models like DeepSeek V3. As a result of these latest enhancements, MI300X inferencing performance has increased 2.7 times since launch. Looking forward, we're continuing to accelerate our software investments to improve the out-of-the-box experience for a growing number of customers adopting Instinct to power their diverse AI workloads.
For example, in January we began delivering biweekly container releases that provide more frequent performance and feature updates and ready-to-deploy packages, and we continue adding resources dedicated to the open source community that enable us to build, test and launch new software enhancements at a faster pace. On the product front, we began volume production of MI325X in the fourth quarter. The production ramp is progressing very well to support new customer wins. MI325X is well-positioned in the market, delivering significant performance and TCO advantages compared to competitive offerings. We have also made significant progress with a number of customers adopting AMD Instinct. For example, we recently closed several large wins with MI300 and MI325 at lighthouse AI customers that are deploying Instinct at scale across both their inferencing and training production environments for the first time. Looking ahead, our next-generation MI350 series featuring our CDNA 4 architecture is looking very strong. CDNA 4 will deliver the biggest generational leap in AI performance in our history, with a 35 times increase in AI compute performance compared to CDNA 3. The silicon has come up really well. We were running large-scale LLMs within 24 hours of receiving first silicon, and validation work is progressing ahead of schedule.
The customer feedback on MI350 series has been strong, driving deeper and broader customer engagements with both existing and net new hyperscale customers in preparation for at-scale MI350 deployments. Based on early silicon progress and the strong customer interest in the MI350 series, we now plan to sample lead customers this quarter and are on track to accelerate production shipments to mid-year.
As we look forward into our multiyear AMD Instinct road map, I'm excited to share that MI400 series development is also progressing very well. The CDNA Next architecture takes another major leap, enabling powerful rack-scale solutions that tightly integrate networking, CPU and GPU capabilities at the silicon level to support Instinct solutions at data center scale. We designed CDNA Next to deliver leadership AI and HPC flops while expanding our memory capacity and bandwidth advantages and supporting an open ecosystem of scale-up and scale-out networking products. We are seeing strong customer interest in the MI400 series for large-scale training and inference deployments and remain on track to launch in 2026.
Turning to our acquisition of ZT Systems. We passed key milestones in the quarter and received unconditional regulatory approvals in multiple jurisdictions, including Japan, Singapore and Taiwan. Cloud and OEM customer response to the acquisition has been very positive, as ZT Systems' expertise can accelerate time to market for future Instinct accelerator platforms. We have also received significant interest in ZT Systems' manufacturing business. We expect to successfully divest ZT Systems' industry-leading U.S.-based data center infrastructure production capabilities shortly after we close the acquisition, which remains on track for the first half of the year. Turning to our client segment. Revenue increased 58% year-over-year to a record $2.3 billion.
We gained client revenue share for the fourth straight quarter, driven by significantly higher demand for both Ryzen desktop and mobile processors. We had record desktop channel sell-out in the fourth quarter in multiple regions, as Ryzen dominated the best-selling CPU lists at many retailers globally, exceeding 70% share at Amazon, Newegg, [MineFactory] (ph) and numerous others over the holiday period. In mobile, we believe we had record OEM PC sell-through share in the fourth quarter as Ryzen AI 300 Series notebooks ramped.
In addition to growing share with our existing PC partners, we were very excited to announce a new strategic collaboration with Dell that marks the first time they will offer a full portfolio of commercial PCs powered by Ryzen PRO processors. The initial wave of Ryzen-powered Dell commercial notebooks is planned to launch this spring, with the full portfolio ramping in the second half of the year as we focus on growing commercial PC share. At CES, we expanded our Ryzen portfolio with the launch of 22 new mobile processors that deliver leadership compute, graphics and AI capabilities. Our Ryzen processor portfolio has never been stronger, with leadership compute performance across the stack. For AI PCs, we are the only provider that offers a complete portfolio of CPUs enabling Windows Copilot+ experiences on premium ultrathin, commercial, gaming and mainstream notebooks.
Looking into 2025, we are planning for the PC TAM to grow by a mid-single-digit percentage year-on-year. Based on the breadth of our leadership client CPU portfolio and strong design win momentum, we believe we can grow client segment revenue well ahead of the market. Now turning to our Gaming segment.
Revenue declined 59% year-over-year to $563 million. Semi-custom sales declined as expected as Microsoft and Sony focused on reducing channel inventory. Overall, this console generation has been very strong, highlighted by cumulative unit shipments surpassing 100 million in the fourth quarter. Looking forward, we believe channel inventories have now normalized, and semi-custom sales will return to more historical patterns in 2025.
In Gaming Graphics, revenue declined year-over-year, as we accelerated channel sell-out in preparation for the launch of our next-gen Radeon 9000 series GPUs. Our focus with this generation is to address the highest volume portion of the enthusiast gaming market with our new RDNA 4 architecture. RDNA 4 delivers significantly better ray tracing performance and adds support for AI-powered upscaling technology that will bring high-quality 4K gaming to mainstream players when the first Radeon 9070 series GPUs go on sale in early March. Now turning to our Embedded segment.
Fourth quarter revenue decreased 13% year-over-year to $923 million. The demand environment remains mixed, with the overall market recovering slower than expected as strength in aerospace and defense and test and emulation is offset by softness in the industrial and communications markets. We continue expanding our adaptive computing portfolio with differentiated solutions for key markets. In the quarter, we launched our Versal RF series with industry-leading compute performance for aerospace and defense markets, introduced our Versal Premium Series Gen 2 as the industry's first adaptive compute devices supporting CXL 3.1 and PCIe Gen 6, and began shipping our next-gen Alveo card with leadership performance for ultra-low latency trading. We believe we gained adaptive computing share in 2024 and are well-positioned for ongoing share gains based on our design win momentum. We closed a record $14 billion of design wins in 2024, up more than 25% year-over-year, as customer adoption of our industry-leading adaptive computing platforms expands and we won large new embedded processor designs. In summary, we ended 2024 with significant momentum, delivering record quarterly and full-year revenue. EPYC and Ryzen processor share gains grew throughout the year, and we are well-positioned to continue outgrowing the market based on having the strongest CPU portfolio in our history.
We established our multibillion-dollar data center AI business and accelerated both our Instinct hardware and ROCm software road maps. For 2025, we expect the demand environment to strengthen across all of our businesses, driving strong growth in our data center and client businesses and modest increases in our Gaming and Embedded businesses. Against this backdrop, we believe we can deliver strong double-digit percentage revenue and EPS growth year-over-year. Looking further ahead, the recent announcements of significant AI infrastructure investments like Stargate, and the latest model breakthroughs from DeepSeek and the Allen Institute, highlight the incredibly rapid pace of AI innovation across every layer of the stack, from silicon to algorithms to models, systems and applications. These are exactly the types of advances we want to see as the industry invests in increased compute capacity, while pushing the envelope on software innovation to make AI more accessible and enable breakthrough generative and agentic AI experiences that can run on virtually every digital device. All of these initiatives require massive amounts of new compute and create unprecedented growth opportunities for AMD across our businesses. AMD is the only provider with the breadth of products and software expertise needed to power AI end-to-end across data center, edge and client devices. We have made outstanding progress building the foundational product, technology and customer relationships needed to capture a meaningful portion of this market. And we believe this places AMD on a steep long-term growth trajectory, led by the rapid scaling of our data center AI franchise from more than $5 billion of revenue in 2024 to tens of billions of dollars of annual revenue over the coming years.
Now I'd like to turn the call over to Jean to provide some additional color on our fourth quarter and full year results. Jean?