Thank you, Toshiya. We delivered another outstanding quarter with revenue of $57 billion, up 62% year over year, and record sequential revenue growth of $10 billion, or 22%. Our customers continue to lean into three platform shifts fueling exponential growth: accelerated computing, powerful AI models, and agentic applications. Yet we are still in the early innings of these transitions, which will impact work across every industry. Currently, we have visibility to half a trillion dollars in Blackwell and Rubin revenue from the start of this year through the end of calendar year 2026. By executing our annual product cadence and extending our performance leadership through full-stack design, we believe NVIDIA will be the superior choice for the $3 trillion to $4 trillion in annual AI infrastructure build we estimate by the end of the decade. Demand for AI infrastructure continues to exceed our expectations. The clouds are sold out, and our GPU installed base, both new and previous generations, including Blackwell, Hopper, and Ampere, is fully utilized.
Record Q3 data center revenue of $51 billion increased 66% year over year, a significant feat at our scale. Compute grew 56% year over year, driven primarily by the GB300 ramp, while networking more than doubled given the onset of NVLink scale-up and robust double-digit growth across Spectrum-X Ethernet and Quantum-X InfiniBand.
The world's hyperscalers, a trillion-dollar industry, are transforming search, recommendations, and content understanding from classical machine learning to generative AI. NVIDIA CUDA excels at both and is the ideal platform for this transition, driving infrastructure investment measured in hundreds of billions of dollars. At Meta, AI recommendation systems are delivering higher-quality and more relevant content, leading to more time spent on apps such as Facebook and Threads. Analyst expectations for the top CSPs' and hyperscalers' aggregate 2026 CapEx have continued to increase and now sit at roughly $600 billion, more than $200 billion higher than at the start of the year. We see the transition to accelerated computing and generative AI across current hyperscale workloads contributing roughly half of our long-term opportunity.
Another growth pillar is the ongoing increase in compute spend by foundation model builders such as Anthropic, Mistral, OpenAI, Reflection, Safe Superintelligence, Thinking Machines Lab, and xAI, all scaling compute aggressively to scale intelligence. The three scaling laws, pretraining, post-training, and inference, remain intact. In fact, we see a positive virtuous cycle emerging whereby the three scaling laws and access to compute generate better intelligence, which in turn increases adoption and profits.
OpenAI recently shared that its weekly user base has grown to 800 million, its enterprise customers have increased to 1 million, and its gross margins are healthy. Meanwhile, Anthropic recently reported that its annualized run-rate revenue reached $7 billion as of last month, up from $1 billion at the start of the year. We are also witnessing a proliferation of agentic AI across industries and tasks. Companies such as Cursor, Anthropic, OpenEvidence, Epic, and Abridge are experiencing a surge in user growth as they supercharge the existing workforce, delivering unquestionable ROI for coders and healthcare professionals. The world's most important enterprise software platforms, including ServiceNow, CrowdStrike, and SAP, are integrating NVIDIA's accelerated computing and AI stack. Our new partner, Palantir, is supercharging its incredibly popular Ontology platform with NVIDIA CUDA-X libraries and AI models for the first time; previously, like most enterprise software platforms, Ontology ran only on CPUs. Lowe's is leveraging the platform to build supply chain agility, reduce costs, and improve customer satisfaction. Enterprises broadly are leveraging AI to boost productivity, increase efficiency, and reduce cost. RBC is leveraging agentic AI to drive significant analyst productivity, slashing report generation time from hours to minutes. AI and digital twins are helping Unilever accelerate content creation by 2x and cut costs by 50%. And Salesforce's engineering team has seen at least a 30% productivity increase in new code development after adopting Cursor.
This past quarter, we announced AI factory and infrastructure projects amounting to an aggregate of 5 million GPUs. This demand spans every market, CSPs, sovereigns, model builders, enterprises, and supercomputing centers, and includes multiple landmark build-outs: xAI's Colossus 2, the world's first gigawatt-scale data center, and Lilly's AI factory for drug discovery, the pharmaceutical industry's most powerful data center. And just today, AWS and Humain expanded their partnership, including the deployment of up to 150,000 AI accelerators, including our GB300. xAI and Humain also announced a partnership in which the two will jointly develop a network of world-class GPU data centers anchored by a flagship 500-megawatt facility. Blackwell gained further momentum in Q3, as GB300 crossed over GB200 and contributed roughly two-thirds of total Blackwell revenue. The transition to GB300 has been seamless, with production shipments to the major cloud service providers, hyperscalers, and GPU clouds, and is already driving their growth. The Hopper platform, in its thirteenth quarter since inception, recorded approximately $2 billion in revenue in Q3. H20 sales were approximately $50 million; sizable purchase orders never materialized in the quarter due to geopolitical issues and the increasingly competitive market in China. While we are disappointed in the current state of affairs that prevents us from shipping more competitive data center compute products to China, we are committed to continued engagement with the US and China governments and will continue to advocate for America's ability to compete around the world. To establish sustainable leadership in AI computing, America must win the support of every developer and be the platform of choice for every commercial business, including those in China. The Rubin platform is on track to ramp in 2026.
Powered by seven chips, the Vera Rubin platform will once again deliver an X-factor improvement in performance relative to Blackwell. We have received silicon back from our supply chain partners and are happy to report that NVIDIA teams across the world are executing the bring-up beautifully. Rubin is our third-generation rack-scale system; it substantially redefines manufacturability while remaining compatible with Grace Blackwell. Our supply chain, data center ecosystem, and cloud partners have now mastered the build-to-installation process of NVIDIA's rack architecture, so our ecosystem will be ready for a fast Rubin ramp. Our annual X-factor performance lead increases performance per dollar while driving down computing cost for our customers.
The long useful life of NVIDIA's CUDA GPUs is a significant TCO advantage over other accelerators. CUDA's compatibility and our massive installed base extend the life of NVIDIA systems well beyond their original estimated useful life. For more than two decades, we have optimized the CUDA ecosystem, improving existing workloads, accelerating new ones, and increasing throughput with every software release. Most accelerators, without CUDA and NVIDIA's time-tested and versatile architecture, become obsolete within a few years as model technologies evolve. Thanks to CUDA, the A100 GPUs we shipped six years ago are still running at full utilization today, powered by a vastly improved software stack. We have evolved over the past twenty-five years from a gaming GPU company to an AI data center infrastructure company. Our ability to innovate across the CPU, the GPU, networking, and software, and ultimately drive down cost per token, is unmatched across the industry. Our networking business, purpose-built for AI and now the largest in the world, generated revenue of $8.2 billion, up 162% year over year, with NVLink, InfiniBand, and Spectrum-X Ethernet all contributing to growth. We are winning in data center networking, as the majority of AI deployments now include our switches, with Ethernet GPU attach rates roughly on par with InfiniBand. Meta, Microsoft, Oracle, and xAI are building gigawatt AI factories with Spectrum-X Ethernet switches, and each will run its operating system of choice, highlighting the flexibility and openness of our platform. We recently introduced Spectrum-XGS, a scale-across technology that enables giga-scale AI factories. NVIDIA is the only company with AI scale-up, scale-out, and scale-across platforms, reinforcing our unique position in the market as the AI infrastructure provider.
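The cost-per-token argument above can be made concrete with a back-of-the-envelope model: amortize the hardware over its useful life, add energy cost, and divide by tokens served. Every number below is a hypothetical placeholder chosen for illustration, not an NVIDIA or customer figure.

```python
# Illustrative cost-per-token model. All inputs are hypothetical placeholders,
# not NVIDIA or customer figures.

def cost_per_million_tokens(capex_usd, useful_life_years, power_kw,
                            energy_usd_per_kwh, tokens_per_second):
    """Amortized infrastructure plus energy cost per one million tokens served."""
    hours_of_life = useful_life_years * 365 * 24
    hourly_capex = capex_usd / hours_of_life          # hardware cost per hour
    hourly_energy = power_kw * energy_usd_per_kwh     # energy cost per hour
    tokens_per_hour = tokens_per_second * 3600
    return (hourly_capex + hourly_energy) / tokens_per_hour * 1e6

# A longer useful life or a software-driven throughput gain both lower the cost.
base = cost_per_million_tokens(3_000_000, 4, 120, 0.08, 50_000)
longer_life = cost_per_million_tokens(3_000_000, 6, 120, 0.08, 50_000)
faster_sw = cost_per_million_tokens(3_000_000, 4, 120, 0.08, 100_000)
print(f"baseline:      ${base:.2f} per 1M tokens")
print(f"6-year life:   ${longer_life:.2f} per 1M tokens")
print(f"2x throughput: ${faster_sw:.2f} per 1M tokens")
```

The two comparison lines show why useful life and software-driven throughput gains both matter: each directly reduces the amortized cost per token.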
Customer interest in NVLink Fusion continues to grow. We announced a strategic collaboration with Fujitsu in October in which we will integrate Fujitsu's CPUs and NVIDIA GPUs via NVLink Fusion, connecting our large ecosystems. We also announced a collaboration with Intel to develop multiple generations of custom data center and PC products, connecting the NVIDIA and Intel ecosystems using NVLink. This week at Supercomputing 25, Arm announced that it will integrate NVLink IP for customers to build CPU SoCs that connect with NVIDIA. Currently in its fifth generation, NVLink is the only proven scale-up technology available on the market today. In the latest MLPerf training results, Blackwell Ultra delivered 5x faster time-to-train than Hopper, and NVIDIA swept every benchmark. Notably, NVIDIA is the only training platform to leverage FP4 while meeting MLPerf's strict accuracy standards.
In SemiAnalysis' InferenceMAX benchmark, Blackwell achieved the highest performance and lowest total cost of ownership across every model and use case. Particularly important is Blackwell NVLink's performance on mixture-of-experts models, the architecture behind the world's most popular reasoning models. On DeepSeek-R1, Blackwell delivered 10x higher performance per watt and 10x lower cost per token versus H200, a huge generational leap fueled by our extreme co-design approach. NVIDIA Dynamo, an open-source, low-latency, modular inference framework, has now been adopted by every major cloud service provider. Leveraging Dynamo's enablement of disaggregated inference, which increases the performance of complex AI models such as MoE models, AWS, Google Cloud, Microsoft Azure, and OCI have boosted AI inference performance for enterprise cloud customers. We are working on a strategic partnership with OpenAI focused on helping them build and deploy at least 10 gigawatts of AI data centers. In addition, we have the opportunity to invest in the company. We serve OpenAI through their cloud partners, Microsoft Azure, OCI, and CoreWeave, and will continue to do so for the foreseeable future.
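Disaggregated inference, mentioned above, means splitting serving into a compute-bound prefill stage (processing the prompt) and a memory-bandwidth-bound decode stage (emitting tokens), so each can be scaled and scheduled independently. The toy sketch below illustrates only the idea; it is not Dynamo's actual API, and all names in it are invented for illustration.

```python
# Toy sketch of disaggregated inference: prefill and decode as separate stages
# connected by a handed-off KV cache. Conceptual only; not the Dynamo API.
from dataclasses import dataclass

@dataclass
class KVCache:
    """Stands in for the key/value cache handed from prefill to decode workers."""
    prompt: str
    depth: int  # number of prompt tokens already processed

def prefill(prompt: str) -> KVCache:
    # Compute-bound stage: process the entire prompt once, producing the cache.
    return KVCache(prompt=prompt, depth=len(prompt.split()))

def decode(cache: KVCache, max_new_tokens: int) -> list[str]:
    # Bandwidth-bound stage: emit tokens one at a time, reusing (never
    # recomputing) the prefill work stored in the cache.
    return [f"tok{cache.depth + i}" for i in range(max_new_tokens)]

# A scheduler can route these calls to different worker pools, sized separately.
cache = prefill("why is the sky blue")
print(decode(cache, max_new_tokens=3))  # → ['tok5', 'tok6', 'tok7']
```

Because the two stages have different hardware bottlenecks, running them on separately sized pools raises overall utilization, which is the performance gain the remarks attribute to disaggregation.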
As they continue to scale, we are delighted to support the company as it adds self-build infrastructure; we are working toward a definitive agreement and are excited to support OpenAI's growth. Yesterday, we celebrated an announcement with Anthropic. For the first time, Anthropic is adopting NVIDIA, and we are establishing a deep technology partnership to support Anthropic's fast growth. We will collaborate to optimize Anthropic models for CUDA and deliver the best possible performance, efficiency, and TCO. We will also optimize future NVIDIA architectures for Anthropic workloads. Anthropic's compute commitment initially includes up to one gigawatt of compute capacity with Grace Blackwell and Vera Rubin systems. Our strategic investments in Anthropic, Mistral, OpenAI, Reflection, Thinking Machines, and others represent partnerships that grow the NVIDIA CUDA AI ecosystem and enable every model to run optimally on NVIDIA everywhere. We will continue to invest strategically while preserving our disciplined approach to cash flow management. Physical AI is already a multibillion-dollar business addressing a multitrillion-dollar opportunity, and the next leg of growth for NVIDIA. Leading US manufacturers and robotics innovators are leveraging NVIDIA's three-computer architecture: train on NVIDIA AI computers, test on Omniverse computers, and deploy real-world AI on Jetson robotic computers. PTC and Siemens introduced new services that bring Omniverse-powered digital twin workflows to their extensive installed base of customers. Companies including Belden, Caterpillar, Foxconn, Lucid Motors, Toyota, TSMC, and Wistron are building Omniverse digital twin factories to accelerate AI-driven manufacturing and automation. Agility Robotics, Amazon Robotics, Figure, and Skild AI are building on our platform, tapping offerings such as NVIDIA Cosmos world foundation models for development, Omniverse for simulation and validation, and Jetson to power next-generation intelligent robots. We remain focused on building resiliency and redundancy into our global supply chain. Last month, in partnership with TSMC, we celebrated the first Blackwell wafer produced on US soil. We will continue to work with Foxconn, Wistron, Amkor, SPIL, and others to grow our presence in the US over the next four years.
Gaming revenue was $4.3 billion, up 30% year on year, driven by strong demand across our 42 million gamers, while thousands of fans packed the GeForce Gamer Festival in South Korea to celebrate twenty-five years of GeForce. NVIDIA's pro visualization business has evolved into computers for engineers and developers, whether for graphics or for AI. Professional visualization revenue was $760 million, up 56% year over year, another record. Growth was driven by DGX Spark, the world's smallest AI supercomputer, built on a small configuration of Grace Blackwell. Automotive revenue was $592 million, up 32% year over year, primarily driven by self-driving solutions. We are partnering with Uber to scale the world's largest Level 4-ready autonomous fleet, built on the new NVIDIA Hyperion robotaxi reference architecture. Moving to the rest of the P&L.
GAAP gross margins were 73.4%, and non-GAAP gross margins were 73.6%, exceeding our outlook. Gross margins increased sequentially due to our data center mix, improved cycle times, and cost structure. GAAP operating expenses were up 8% sequentially and up 11% on a non-GAAP basis.
The growth was driven by infrastructure compute as well as higher compensation and benefits and engineering development costs. The non-GAAP effective tax rate for the third quarter was just over 17%, higher than our guidance of 16.5%, due to strong US revenue. On our balance sheet, inventory grew 32% quarter over quarter, while supply commitments increased 63% sequentially. We are preparing for significant growth ahead and feel good about our ability to execute against our opportunity set.
Let me turn to the outlook for the fourth quarter. Total revenue is expected to be $65 billion plus or minus 2%. At the midpoint, our outlook implies 14% sequential growth driven by continued momentum in the Blackwell architecture. Consistent with last quarter, we are not assuming any data center compute revenue from China.
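The sequential-growth figure in the outlook can be sanity-checked directly from the two revenue numbers in the remarks (Q3 revenue of $57 billion, Q4 outlook midpoint of $65 billion):

```python
# Check the implied sequential growth from the Q4 outlook midpoint.
q3_revenue_b = 57.0   # Q3 revenue, $B (from the prepared remarks)
q4_midpoint_b = 65.0  # Q4 outlook midpoint, $B
sequential_growth = (q4_midpoint_b - q3_revenue_b) / q3_revenue_b
print(f"{sequential_growth:.1%}")  # → 14.0%
```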
GAAP and non-GAAP gross margins are expected to be 74.8% and 75%, respectively, plus or minus 50 basis points. Looking ahead to fiscal year 2027, input costs are on the rise, but we are working to hold gross margins in the mid-70s. GAAP and non-GAAP operating expenses are expected to be approximately $6.7 billion and $5 billion, respectively. GAAP and non-GAAP other income and expenses are expected to be an income of approximately $500 million, excluding gains and losses from non-marketable and publicly held equity securities. GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items. At this time, let me turn the call over to Jensen for him to say a few words.