Thanks, Dave.
Today, we're reporting $187.8 billion in revenue, up 10% year over year. Given the way the dollar strengthened throughout the quarter, we had $700 million more foreign exchange headwind than we anticipated at guidance. Without that headwind, revenue growth would have been 11% year over year and exceeded the top end of our guidance. Operating income was $21.2 billion, up 61% year over year, and trailing twelve-month free cash flow adjusted for equipment finance leases was $36.2 billion, up $700 million year over year. We're pleased with the invention, customer experience improvements, and results delivered in 2024, and have a lot more planned in 2025. I'll start by talking about our stores business. We saw 10% year-over-year revenue growth in our North America segment, and 9% year over year in our international segment, excluding the impact from foreign exchange rates. Our continued focus on expanding selection, lowering prices, and improving convenience drove strong unit growth that even outpaced our revenue growth.
We continue to add to our broad range, giving customers choice across a variety of price points. We welcomed notable brands to our store throughout 2024, including Clinique, Estée Lauder, Oura Ring, and Armani Beauty. We continue to add to the hundreds of millions of products offered by our selling partners, who made up 61% of the items we sold in 2024, our highest annual mix of third-party seller units ever. We also launched Amazon Haul for US customers in Q4, which offers customers an engaging shopping experience that brings ultra-low-priced products into one convenient destination.
It's off to a very strong start. In the fourth quarter, consumers saved more than $15 billion with our low everyday prices and record-setting events during Prime Big Deal Days in October, Black Friday, and Cyber Monday around Thanksgiving. Additionally, Profitero's annual pricing study found that entering the holiday season, Amazon had the lowest online prices for the eighth year in a row, with prices averaging 14% lower than other leading retailers in the US. Our speed of delivery continues to accelerate, and 2024 was another record-setting year for Prime members. We expanded the number of same-day delivery sites by more than 60% in 2024, and they now serve more than 140 metro areas. Overall, we delivered over 9 billion units the same or next day around the world. Our relentless pursuit of better selection, price, and delivery speed is driving accelerated growth in Prime membership. For just $14.99 a month, Prime members get unlimited free shipping on 300 million items, often with same-day or one-day delivery, exclusive shopping events like Prime Day, access to a vast collection of premium programming and live sports on Prime Video, ad-free listening to 100 million songs and podcasts with Amazon Music, access to unlimited generic prescriptions for only $5 a month, unlimited grocery delivery on orders over $35 from Whole Foods Market and Amazon Fresh for $9.99 a month, a free Grubhub+ membership with free unlimited delivery, and our latest benefit of a 10-cent-per-gallon fuel discount at BP, ampm, and Amoco stations. When you think about this as a whole, and compare it to many other membership services that are comparably or more expensively priced and offer just one benefit like video, Prime is a screaming deal. And we have more coming for our Prime members in 2025.
We also remain squarely focused on cost to serve in our fulfillment network, which has been a meaningful driver of our increased operating income. We've talked about the regionalization of our US network. We've also recently rolled out our redesigned US inbound network. While still in its early stages, our inbound efforts have improved our placement of inventory so that even more items are close to end customers. Ahead of Black Friday in November, we'd improved the percentage of ordered units available in the ideal building by over 40% year over year. We've also spent considerable time optimizing the number of items sent to customers in the same package, which reduces packaging, is more convenient for customers, and is less expensive for us to fulfill. And our per-unit transportation costs continue to decline as we build out and optimize our last-mile network. Overall, we've reduced our global cost to serve on a per-unit basis for the second year in a row, while at the same time increasing speed, improving safety, and adding selection. As we look to 2025 and beyond, we see opportunities to reduce costs again as we further refine inventory placement, grow our same-day delivery network, and accelerate robotics and automation throughout the network.
In advertising, we remain pleased with the strong growth on a very large base, generating $17.3 billion of revenue in the quarter, growing 18% year over year. That's a $69 billion annual revenue run rate, more than double what it was just four years ago at $29 billion. Sponsored products, the largest portion of ad revenue, are doing well, and we see a runway for even more growth. We also have a number of newer streaming offerings that are starting to become significant new revenue sources. On the streaming video side, we wrapped up our first year of Prime Video ads, and we're quite pleased with the early progress and head into this year with momentum. We've made it easier to do full-funnel advertising with us. Full funnel spans the top of the funnel, with broad-reach advertising that drives brand awareness; the mid funnel, where sponsored brands let companies specify certain keywords and audiences to attract people to their detail pages or brand stores on Amazon; and the bottom of the funnel, where sponsored products help advertisers surface relevant product ads to customers at the point of purchase. We also have differentiated audience features that leverage billions of signals from Amazon Marketing Cloud's secure data clean rooms, providing advertisers the ability to analyze data, produce core marketing metrics, and understand how their marketing performs across various channels. With our new multi-touch attribution model, advertisers can understand how various ad types in their campaigns contribute to sales. Moving on to AWS: in Q4, AWS grew 19% year over year and now has a $115 billion annualized revenue run rate. AWS is a reasonably large business by most folks' standards. And though we expect growth will be lumpy over the next few years as enterprise adoption and technology advancements impact timing, it's hard to overstate how optimistic we are about what lies ahead for AWS' customers and business. I spend a fair bit of time thinking several years out.
And while it may be hard for some to fathom a world where virtually every app is generative AI-infused, with inference being a core building block just like compute, storage, and database, and most companies having their own agents that accomplish various tasks and interact with one another, this is the world we're thinking about all the time. And we continue to believe that this world will mostly be built on top of the cloud, with the largest portion of it on AWS. To best help customers realize this future, you need powerful capabilities at all three layers of the stack. At the bottom layer, for those building models, you need compelling chips. Chips are the key ingredient in the compute that drives training and inference. Most AI compute has been driven by NVIDIA chips, and we obviously have a deep partnership with NVIDIA and will for as long as we can see into the future. However, there aren't that many large-scale generative AI applications yet, and when you get to that scale, as we have with apps like Alexa and Rufus, costs can get steep quickly. Customers want better price performance, and it's why we built our own custom AI silicon. Trainium2 just launched at our AWS re:Invent conference in December. EC2 instances with these chips typically offer 30 to 40 percent better price performance than other current GPU-powered instances available. That's very compelling at scale. Several technically capable companies like Adobe, Databricks, Poolside, and Qualcomm have seen impressive results in early testing of Trainium2. It's also why you're seeing Anthropic build its future frontier models on Trainium2. We're collaborating with Anthropic to build Project Rainier, a cluster of Trainium2 UltraServers containing hundreds of thousands of Trainium2 chips. This cluster is going to be five times the number of exaflops as the cluster that Anthropic used to train their current leading set of Claude models.
We're already hard at work on Trainium3, which we expect to preview late in '25, and defining Trainium4 thereafter. Building outstanding performing chips that deliver leading price performance has become a core strength of AWS's, starting with our Nitro and Graviton chips in our core business, and now extending to Trainium in AI. It's something unique to AWS relative to other competing cloud providers. The other key component for model builders is services that make it easier to construct their models. I won't spend a lot of time in these comments on Amazon SageMaker AI, which has become the go-to service for AI model builders to manage their AI data, build models, experiment, and deploy these models. SageMaker's HyperPod capability automatically splits training workloads across many AI accelerators, prevents interruptions by periodically saving checkpoints, and automatically repairs faulty instances, resuming from their last saved checkpoint and saving training time by up to 40 percent. It continues to be a differentiator, with several new compelling capabilities launched at re:Invent, including the ability to manage costs at a cluster level and to prioritize which workloads should receive capacity when budgets are reached. It is increasingly being adopted by model builders.
At the middle layer, for those wanting to leverage frontier models to build generative AI apps, Amazon Bedrock is our fully managed service providing high-performing foundation models with the most compelling features, making it easy to build a high-quality generative AI application. We are iterating quickly on Bedrock, adding Luma AI, Poolside, and over a hundred other popular emerging models to Bedrock at re:Invent. We've just added DeepSeek's R1 models to Bedrock and SageMaker. Additionally, we delivered several compelling new Bedrock features at re:Invent, including prompt caching, intelligent prompt routing, and model distillation, all of which help customers achieve lower cost and latency in their inference. Like SageMaker AI, Bedrock is growing quickly and resonating strongly with customers. Relatedly, we also launched Amazon's own family of frontier models in Bedrock, called Nova. These models compare favorably in intelligence against the leading models in the world while offering lower latency and lower price, about 75 percent lower than other models in Bedrock, and are integrated with key Bedrock features like fine-tuning, model distillation, Knowledge Bases for RAG, and agentic capabilities. Thousands of AWS customers are already taking advantage of Amazon Nova models' capabilities and price performance, including Palantir, SAP, Dentsu, Fortinet, Trellix, and Robinhood, and we've just gotten started. At the top layer of the stack, Amazon Q is our most capable generative AI-powered assistant for software development and for leveraging your own data.
You may remember that on the last call, I shared the very practical use case where Q's code transformation capability helped save Amazon teams $260 million and 4,500 developer-years in migrating over 30,000 applications to new versions of the Java JDK. This is real value, and companies asked for more, which we obliged with our recent deliveries of Q transformations that enable moves from Windows .NET applications to Linux and from VMware to EC2, and that accelerate mainframe migrations. Early customer testing indicates that Q can turn what was going to be a multi-year mainframe migration into a multi-quarter effort, cutting the time to migrate mainframes by more than 50%. This is a big deal, and these transformations are good examples of practical AI. While AI continues to be a compelling new driver in the business, we haven't lost our focus on core modernization of companies' technology. We signed new AWS agreements with companies including Intuit, PayPal, Norwegian Cruise Line Holdings, Northrop Grumman, The Guardian Life Insurance Company of America, Reddit, Japan Airlines, Baker Hughes, the Hertz Corporation, Redfin, Chime Financial, Asana, and many others. Consistent customer feedback from our recent AWS re:Invent was appreciation that we're still inventing rapidly in non-AI key infrastructure areas like storage, compute, database, and analytics.
Our functionality leadership continues to expand, and there were several key launches customers were buzzing about: Amazon Aurora DSQL, our new serverless distributed SQL database that enables applications with the highest availability, strong consistency, and PostgreSQL compatibility, with four times faster reads and writes compared to other popular distributed SQL databases; Amazon S3 Tables, which makes S3 the first cloud object store with fully managed support for Apache Iceberg for faster analytics; Amazon S3 Metadata, which automatically generates queryable metadata, simplifying data discovery, business analytics, and real-time inference to help customers unlock the value of their data in S3; and the next generation of Amazon SageMaker, which brings together all of our data, analytics, and AI services into one interface to do analytics and AI more easily at scale. As 2024 comes to an end, I want to thank our teammates and partners for their meaningful impact throughout the year. It was a very successful year across almost any dimension you pick. We're far from done, and we look forward to delivering for customers in 2025. With that, I'll turn it over to Brian for a financial update.