Thank you, Toshiya.
We delivered another strong quarter, with revenue of $44 billion, up 69% year over year, exceeding our outlook in what proved to be a challenging operating environment. Data center revenue of $39 billion grew 73% year on year. AI workloads have transitioned strongly to inference, and AI factory build-outs are driving significant revenue. Our customers' commitments are firm.
On April 9, the US government issued new export controls on H20, our data center GPU designed specifically for the China market. We sold H20 with the approval of the previous administration. Although our H20 has been in the market for over a year and does not have a market outside of China, the new export controls on H20 did not provide a grace period to allow us to sell through our inventory. In Q1, we recognized $4.6 billion in H20 revenue, which occurred prior to April 9, but also recognized a $4.5 billion charge as we wrote down inventory and purchase obligations tied to orders we had received prior to April 9. We were unable to ship $2.5 billion in H20 revenue in the first quarter due to the new export controls. The $4.5 billion charge was less than what we initially anticipated as we were able to reuse certain materials.
We are still evaluating our limited options to supply data center compute products compliant with the US government's revised export control rules. Losing access to the China AI accelerator market, which we believe will grow to nearly $50 billion, would have a material adverse impact on our business going forward and benefit our foreign competitors in China and worldwide. Our Blackwell ramp, the fastest in our company's history, drove a 73% year-on-year increase in data center revenue. Blackwell contributed nearly 70% of data center compute revenue in the quarter, with a transition from Hopper nearly complete.
The introduction of GB200 NVL was a fundamental architectural change to enable data center-scale workloads and to achieve the lowest cost per inference token. While these systems are complex to build, we have seen a significant improvement in manufacturing yields, and rack shipments to end customers are ramping at strong rates. GB200 NVL racks are now generally available for model builders, enterprises, and sovereign customers to develop and deploy AI. On average, major hyperscalers are each deploying nearly 1,000 NVL72 racks, or 72,000 Blackwell GPUs, per week, and are on track to further ramp output this quarter. Microsoft, for example, has already deployed tens of thousands of Blackwell GPUs and is expected to ramp to hundreds of thousands of GB200s, with OpenAI as one of its key customers. Key learnings from the GB200 ramp will allow for a smooth transition to the next phase of our product roadmap, Blackwell Ultra.
Sampling of GB300 systems began earlier this month at the major CSPs, and we expect production shipments to commence later this quarter. GB300 will leverage the same architecture, same physical footprint, and the same electrical and mechanical specifications as GB200. The GB300 drop-in design will allow CSPs to seamlessly transition their systems and manufacturing used for GB200 while maintaining high yields.
B300 GPUs, with 50% more HBM, will deliver another 50% increase in dense FP4 inference compute performance compared to the B200. We remain committed to our annual product cadence, with our roadmap extending through 2028, tightly aligned with the multiyear planning cycles of our customers. We are witnessing a sharp jump in inference demand. OpenAI, Microsoft, and Google are seeing a step-function leap in token generation. Microsoft processed over 100 trillion tokens in Q1, a fivefold increase on a year-over-year basis. This exponential growth in Azure OpenAI is representative of strong demand for Azure AI Foundry, as well as other AI services across Microsoft's platform. Inference-serving startups are now serving models using B200, tripling their token generation rates and corresponding revenues for high-value reasoning models such as DeepSeek R1. As reported by Artificial Analysis, NVIDIA's Blackwell NVL72 turbocharges AI inference throughput by 30x for the new reasoning models sweeping the industry. Developer engagement increased, with adoption ranging from LLM providers such as Perplexity to financial services institutions such as Capital One, which reduced agentic chatbot latency by 5x with Dynamo. In the latest MLPerf inference results, we submitted our first results using GB200 NVL72, delivering up to 30x higher inference throughput compared to our eight-GPU H200 submission on the challenging Llama 3.1 benchmark.
This feat was achieved through a combination of tripling the performance per GPU as well as 9x more GPUs, all connected on a single NVLink domain. And while Blackwell is still early in its life cycle, software optimizations have already improved its performance by 1.5x in the last month alone. We expect to continue improving the performance of Blackwell through its operational life, as we have done with Hopper and Ampere. For example, we increased the inference performance of Hopper by four times over two years. This is the benefit of NVIDIA's programmable CUDA architecture and rich ecosystem.
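As a rough cross-check on the MLPerf result above, the headline gain decomposes into the two factors just named; a minimal arithmetic sketch, with the 72-versus-8 GPU counts taken from the submissions described in these remarks:

```python
# Rough decomposition of the "up to 30x" MLPerf claim above (illustrative only).
per_gpu_gain = 3.0        # "tripling the performance per GPU"
gpu_count_gain = 72 / 8   # GB200 NVL72 (72 GPUs) vs. the 8-GPU H200 system
print(per_gpu_gain * gpu_count_gain)  # 27.0; "up to 30x" is the best case
```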
The pace and scale of AI factory deployments are accelerating, with nearly 100 NVIDIA-powered AI factories in flight this quarter, a twofold increase year over year, with the average number of GPUs powering each factory also doubling in the same period. More AI factory projects are starting across industries and geographies. NVIDIA's full-stack architecture is underpinning AI factory deployments by industry leaders like AT&T, BYD, Capital One, Foxconn, MediaTek, and Telenor, as well as by strategically vital sovereign clouds like those recently announced in Saudi Arabia, Taiwan, and the UAE. We have a line of sight to projects requiring tens of gigawatts of NVIDIA AI infrastructure in the not-too-distant future. The transition from generative to agentic AI, AI capable of perceiving, reasoning, planning, and acting, will transform every industry, every company, and every country. We envision AI agents as a new digital workforce capable of handling tasks ranging from customer service to complex decision-making processes. We introduced the Llama Nemotron family of open reasoning models, designed to supercharge agentic AI platforms for enterprises.
Built on the Llama architecture, these models are available as NIMs, or NVIDIA inference microservices, in multiple sizes to meet diverse deployment needs. Our post-training enhancements have yielded a 20% accuracy boost and a 5x increase in inference speed. Leading platform companies, including Accenture, Cadence, Deloitte, and Microsoft, are transforming work with our reasoning models.
NVIDIA's NeMo microservices are generally available across industries and are being leveraged by leading enterprises to build, optimize, and scale AI applications. With NeMo, Cisco increased model accuracy by 40% and improved response time by 10x in its code assistant. Nasdaq realized a 30% improvement in accuracy and response time in its AI platform's search capabilities. Shell's custom LLM achieved a 30% increase in accuracy when trained with NVIDIA NeMo. NeMo's parallelism techniques accelerated model training time by 20% compared to other frameworks. We also announced a partnership with Yum! Brands, the world's largest restaurant company, to bring NVIDIA AI to 500 of its restaurants this year, expanding to 61,000 restaurants over time, to streamline order taking, optimize operations, and enhance service across its restaurants. For AI-powered cybersecurity, leading companies like Check Point, CrowdStrike, and Palo Alto Networks are using NVIDIA's AI security and software stack to build, optimize, and secure agentic workflows, with CrowdStrike realizing 2x faster detection triage with 50% less compute cost.
Moving to networking. Sequential growth in networking resumed in Q1, with revenue up 64% quarter over quarter to $5 billion. Our customers continue to leverage our platform to efficiently scale up and scale out AI factory workloads. For scale-up, we created the world's fastest switch, NVLink: our NVLink compute fabric, in its fifth generation, offers 14x the bandwidth of PCIe Gen 5. NVLink72 carries 130 terabytes per second of bandwidth in a single rack, equivalent to the entirety of the world's peak Internet traffic. NVLink is a new growth vector and is off to a great start, with Q1 shipments exceeding $1 billion. At Computex, we announced NVLink Fusion. Hyperscale customers can now build semi-custom CPUs and accelerators that connect directly to the NVIDIA platform with NVLink. We are now enabling key partners, including ASIC providers such as MediaTek, Marvell, Alchip Technologies, and Astera Labs, as well as CPU suppliers such as Fujitsu and Qualcomm, to leverage NVLink Fusion to connect our respective ecosystems. For scale-out, our enhanced Ethernet offerings deliver the highest-throughput, lowest-latency networking for AI. Spectrum-X posted strong sequential and year-on-year growth and is now annualizing over $8 billion in revenue. Adoption is widespread across major CSPs and consumer Internet companies, including CoreWeave, Microsoft Azure, Oracle Cloud, and xAI. This quarter, we added Google Cloud and Meta to the growing list of Spectrum-X customers. We also introduced Spectrum-X and Quantum-X silicon photonics switches, featuring the world's most advanced co-packaged optics. These platforms will enable the next level of AI factory scaling to millions of GPUs, increasing power efficiency by 3.5x and network resiliency by 10x while accelerating customer time to market by 1.3x.
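Both bandwidth comparisons above are consistent with published specifications; a minimal arithmetic sketch, assuming fifth-generation NVLink's published 1.8 TB/s per GPU and PCIe Gen 5 x16 at roughly 128 GB/s bidirectional (neither figure appears in these remarks):

```python
# Sanity checks on the NVLink figures cited above (illustrative only).
# Assumes published specs: fifth-gen NVLink at 1.8 TB/s per GPU, and
# PCIe Gen 5 x16 at ~0.128 TB/s bidirectional.
nvlink5_per_gpu_tb_s = 1.8
pcie_gen5_x16_tb_s = 0.128
gpus_per_rack = 72

print(round(nvlink5_per_gpu_tb_s / pcie_gen5_x16_tb_s, 1))  # ~14x PCIe Gen 5
print(round(nvlink5_per_gpu_tb_s * gpus_per_rack, 1))       # 129.6 ≈ 130 TB/s per rack
```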
Transitioning to a quick summary of our revenue by geography. China, as a percentage of our data center revenue, was slightly below our expectations and down sequentially due to the H20 export licensing controls. For Q2, we expect a meaningful decrease in China data center revenue. As a reminder, while Singapore represented nearly 20% of our Q1 billed revenue, as many of our large customers use Singapore for centralized invoicing, our products are almost always shipped elsewhere. Note that over 99% of H100, H200, and Blackwell data center compute revenue billed to Singapore was for orders from US-based customers.
Moving to gaming and AI PCs. Gaming revenue was a record $3.8 billion, increasing 48% sequentially and 42% year on year. Strong adoption by gamers, creators, and AI enthusiasts has made Blackwell our fastest ramp ever. Against the backdrop of robust demand, we greatly improved our supply and availability in Q1 and expect to continue these efforts in Q2. AI is transforming the PC for creators and gamers. With a 100 million-user installed base, GeForce represents the largest footprint for PC developers. This quarter, we added to our AI PC laptop offerings, including models capable of running Microsoft's Copilot+.
This past quarter, we brought the Blackwell architecture to mainstream gaming with the launch of the GeForce RTX 5060 and 5060 Ti, starting at just $299. The RTX 5060 also debuted in laptops starting at $1,099. These systems double frame rates and slash latency. The GeForce RTX 5060 and 5060 Ti desktop GPUs and laptops are now available. In console gaming, the recently unveiled Nintendo Switch 2 leverages NVIDIA's neural rendering and AI technologies, including a next-generation custom RTX GPU with DLSS technology, to deliver a giant leap in gaming performance to millions of players worldwide. Nintendo has shipped over 150 million Switch consoles to date, making it one of the most successful gaming systems in history.
Moving to pro visualization. Revenue of $509 million was flat sequentially and up 19% year on year. Tariff-related uncertainty temporarily impacted Q1 systems demand.
Demand for our AI workstations is strong, and we expect sequential revenue growth to resume in Q2. NVIDIA's DGX Spark and DGX Station revolutionize personal computing by putting the power of an AI supercomputer in a desktop form factor. DGX Spark delivers up to one petaflop of AI compute, while DGX Station offers an incredible 20 petaflops, powered by the GB300 superchip. DGX Spark will be available in calendar Q3 and DGX Station later this year.
We have deepened Omniverse's integration and adoption with some of the world's leading software companies, including SAP and Schneider Electric. New Omniverse blueprints, such as MEGA for at-scale robotic fleet management, are being leveraged by KION Group, Pegatron, Accenture, and other leading companies to enhance industrial operations. At Computex, we showcased Omniverse's great traction with technology manufacturing leaders, including TSMC, Quanta, Foxconn, and Pegatron. Using Omniverse, TSMC saves months of work by designing fabs virtually. Foxconn accelerates thermal simulations by 150x. Pegatron reduced assembly-line defect rates by 67%. Lastly, with our automotive group, revenue was $567 million, down 1% sequentially but up 72% year on year. Year-on-year growth was driven by the ramp of self-driving across a number of customers and robust end demand for NEVs. We are partnering with GM to build next-generation vehicles, factories, and robots using NVIDIA AI, simulation, and accelerated computing. We are now in production with our full-stack solution for Mercedes-Benz, starting with the new CLA, hitting roads in the next few months. We announced Isaac GR00T N1, the world's first open, fully customizable foundation model for humanoid robots, enabling generalized reasoning and skill development. We also launched new open NVIDIA Cosmos world foundation models.
Leading companies, including 1X, Agility Robotics, Figure AI, Uber, and Waabi, have begun integrating Cosmos into their operations for synthetic data generation, while Agility Robotics, Boston Dynamics, and XPENG Robotics are harnessing Isaac simulation to advance their humanoid efforts. GE Healthcare is using the new NVIDIA Isaac for Healthcare platform, built on NVIDIA Omniverse and NVIDIA Cosmos; the platform speeds development of robotic imaging and surgery systems. The era of robotics is here. Billions of robots, hundreds of millions of autonomous vehicles, and hundreds of thousands of robotic factories and warehouses will be developed.
Moving to the rest of the P&L. GAAP and non-GAAP gross margins were 60.5% and 61%, respectively. Excluding the $4.5 billion charge, Q1 non-GAAP gross margins would have been 71.3%, slightly above our outlook at the beginning of the quarter. Sequentially, GAAP operating expenses were up 7% and non-GAAP operating expenses were up 6%, reflecting higher compensation and employee growth. Our investments include expanding our infrastructure capabilities and AI solutions, and we plan to grow these investments throughout the fiscal year. In Q1, we returned a record $14.3 billion to shareholders in the form of share repurchases and cash dividends. Our capital return program continues to be a key element of our capital allocation strategy.
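To tie out the ex-charge gross margin above, here is a minimal arithmetic sketch, assuming reported revenue of about $44.1 billion (the remarks round to $44 billion) and treating the $4.5 billion charge as part of cost of revenue; the small difference from the stated 71.3% comes from rounded inputs.

```python
# Reconstructing the ex-charge non-GAAP gross margin from reported figures
# (illustrative only; rounded inputs, so the result differs slightly from
# the stated 71.3%).
revenue_b = 44.06                       # Q1 revenue, $ billions (approx.)
reported_gm = 0.61                      # non-GAAP gross margin as reported
charge_b = 4.5                          # H20 inventory and purchase-obligation charge
cogs_b = revenue_b * (1 - reported_gm)  # implied non-GAAP cost of revenue
ex_charge_gm = 1 - (cogs_b - charge_b) / revenue_b
print(f"{ex_charge_gm:.1%}")            # ~71.2%, vs. 71.3% as reported
```

Let me turn to the outlook for the second quarter.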
Total revenue is expected to be $45 billion, plus or minus 2%. We expect modest sequential growth across all of our platforms. In the data center, we anticipate the continued ramp of Blackwell to be partially offset by a decline in China revenue. Note, our outlook reflects a loss in H20 revenue of approximately $8 billion for the second quarter.
GAAP and non-GAAP gross margins are expected to be 71.8% and 72% respectively, plus or minus 50 basis points. We expect better Blackwell profitability to drive modest sequential improvement in gross margins. We are continuing to work towards achieving gross margins in the mid-seventies range late this year. GAAP and non-GAAP operating expenses are expected to be approximately $5.7 billion and $4 billion, respectively. We continue to expect full-year fiscal year 2026 operating expense growth to be in the mid-thirty percent range. GAAP and non-GAAP other income and expenses are expected to be an income of $450 million, excluding gains and losses from non-marketable and publicly held equity securities. GAAP and non-GAAP tax rates are expected to be 16.5%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website, including a new financial information AI agent.
Let me highlight upcoming events for the financial community. We will be at the BofA Global Technology Conference in San Francisco on June 4, the Rosenblatt Virtual AI Summit and the Nasdaq Investor Conference in London on June 10, and GTC Paris at VivaTech on June 11. We look forward to seeing you at these events. Our earnings call to discuss the results of our second quarter of fiscal 2026 is scheduled for August 27. Now let me turn it over to Jensen to make some remarks.