AMD Stock Price Is Up 20 Percent On Its Best Post-Earnings Day In Seven Years And Here Is Why

May 6, 2026

Advanced Micro Devices reported its first-quarter 2026 earnings on Tuesday, May 5, after the market close, and the stock responded with its best post-earnings reaction in seven years.

The stock surged as much as 20 percent in pre-market trading Wednesday morning on a quarter that beat expectations across every major metric and on guidance that implies revenue will grow another 46 percent in the current quarter.

Lisa Su, AMD’s chair and chief executive, called it an “outstanding first quarter.” The numbers support the characterization without requiring embellishment.

Revenue came in at $10.25 billion, up 38 percent year-over-year from $7.44 billion and ahead of the $9.89 billion analyst consensus.

Non-GAAP earnings per share were $1.37, up 43 percent from a year ago and above analyst estimates of $1.25 to $1.29.

Gross margin expanded 300 basis points year-over-year to 53 percent. Free cash flow hit a company record. And Q2 revenue guidance of approximately $11.2 billion landed above every estimate on the Street.

The company’s stock had already gained approximately 63 percent year-to-date and more than tripled over the past year.

After Tuesday’s report, Wednesday’s move suggests the market thinks the story has further to run.

The Data Center That Changed Everything

The number that defines AMD’s current moment is $5.8 billion: data center segment revenue for Q1 2026, up 57 percent year-over-year and a new quarterly record for the company.

Data center is no longer just one of AMD’s segments. It is the primary driver of the entire business.

Su was explicit about that in her prepared remarks. “Data Center is now the primary driver of our revenue and earnings growth,” she said.

The segment generated $1.6 billion in operating income, a 28 percent operating margin, up from 25 percent a year ago.

Margin expansion inside a segment growing revenue 57 percent is the operating leverage story that has been building in AMD’s model and is now showing up at scale in the financial results.

The data center segment combines two product lines with different but complementary roles in the AI infrastructure buildout. EPYC server CPUs, AMD’s central processing unit line that competes directly with Intel’s Xeon, drove strong demand as agentic AI applications created a CPU renaissance that few predicted.

Instinct GPUs, AMD’s answer to Nvidia’s H100 and B200 accelerators, continued their ramp as hyperscalers and cloud providers seeking alternatives to an Nvidia-only strategy moved AMD from pilot deployments into large-scale production.

Su said the company is “seeing strong momentum as inferencing and agentic AI drive increasing demand for high-performance CPUs and accelerators.”

The specific pairing of CPUs and accelerators in that sentence is not accidental.

AMD is unique among major chipmakers in being genuinely competitive in both categories simultaneously. That positioning is becoming more valuable as AI workloads evolve from pure training on GPU clusters toward inference and agentic applications that distribute compute across CPUs and specialized accelerators.

The Meta Deal

The most strategically significant piece of news AMD shared alongside the Q1 results was the expansion of its partnership with Meta Platforms to deploy up to 6 gigawatts of AMD Instinct GPUs across several product generations, a multi-year commitment that gives AMD both the revenue visibility and the reputational validation it has been building toward.

Six gigawatts of GPUs is not a purchase order. It is an infrastructure commitment that spans years and multiple product generations.

For Meta, which is spending $145 billion on AI capital expenditure in 2026 alone, committing to AMD at that scale signals a strategic decision to build a supply chain that is not exclusively dependent on Nvidia.

When the world’s largest social media company signs a multi-generation deal with a GPU maker, other hyperscalers take notice.

Su framed the Meta deal alongside the previously announced OpenAI partnership in terms that made the strategic significance explicit.

“Together with our previously announced OpenAI partnership, these engagements position AMD as a core partner to the world’s largest AI infrastructure builders, with deep co-engineering relationships and multi-year visibility into large-scale deployments,” she said.

OpenAI and Meta together represent the two most consequential AI development organizations in the world in terms of compute demand. AMD now has confirmed multi-generation relationships with both.

The Helios System That Puts AMD In Nvidia’s Category

The Q1 report also provided an update on Helios, AMD’s first full rack-scale AI system, which is the product category that Nvidia has owned with its Grace Blackwell and Vera Rubin systems that sell for more than $3 million per rack.

AMD entering this market is significant because rack-scale systems represent the highest-value, highest-margin end of the AI infrastructure market and have historically been Nvidia’s exclusive territory.

Su confirmed that Helios shipments are set to begin in the second half of 2026. Both OpenAI and Meta have already signed up for shipments.

The entry of a credible alternative to Nvidia at the rack-scale level, at a price point and performance profile that hyperscalers have demonstrated willingness to commit to, changes the competitive dynamics of the most expensive segment of the AI chip market.

The CPU Renaissance Nobody Was Expecting

One of the most interesting dimensions of AMD’s Q1 results is the strength of its client and server CPU business at a moment when most of the AI conversation has focused on GPUs.

Client segment revenue (desktops and laptops running Ryzen processors) came in at $2.9 billion, up 26 percent year-over-year, driven by market share gains and strong demand for the latest-generation Ryzen AI chips.

The server CPU growth story is even more dramatic. AMD guided to server CPU revenue growth of more than 70 percent year-over-year in Q2, with robust growth continuing through the rest of 2026 and into 2027.

The reason is agentic AI. Large language model training was primarily a GPU-dominated workload: it required enormous parallel processing power, and the GPU architecture was optimized for exactly that.

But inference, the process of running a trained model to generate responses, and agentic AI, where AI systems chain together multiple reasoning steps and tool calls to complete complex tasks, both require different compute profiles that mix GPU acceleration with high-performance CPU processing.

AMD’s EPYC processors, which are the leading alternative to Intel Xeon in the server market, are positioned exactly at the intersection of that demand shift.

The AMD and Intel joint announcement last week of AI Compute Extensions for x86 CPUs, a new instruction set that boosts compute density by 16 times and improves energy efficiency, added another dimension to the CPU narrative. AMD and Intel, who compete fiercely in the CPU market, collaborated specifically to advance the x86 architecture’s capabilities for AI workloads.

That collaboration is a signal that both companies see the CPU AI opportunity as large enough to justify cooperation that expands the entire category rather than fighting over a smaller pie.

Why The Post-Earnings Reaction Is Remarkable

The best post-earnings move in seven years requires context to appreciate fully. AMD’s earnings history over the past five quarters before this one showed a pattern that made investors wary. The company consistently beat analyst estimates and consistently saw its stock decline the next day.

The average post-earnings move across those five quarters was negative 5.15 percent, with the most severe being a 17.31 percent single-day decline following the Q4 2025 earnings report that was itself a beat.

That persistent disconnect between strong fundamental performance and negative market reaction reflected several things simultaneously: investor anxiety about Nvidia’s competitive dominance, skepticism about whether AMD’s GPU ramp would translate into sustained financial results, and the broader market’s difficulty pricing AI infrastructure companies during a period of extraordinary uncertainty about the demand durability of the AI buildout.

What changed in Q1 2026 is that AMD addressed the specific skepticisms that had been weighing on the stock.

The data center segment hit $5.8 billion, a number that demonstrates the GPU ramp is real and accelerating.

The Meta and OpenAI partnerships provide multi-year visibility that addresses the demand durability question.

The guidance for Q2 of $11.2 billion implies continued acceleration rather than deceleration. And the AMD-Intel CPU collaboration provides an additional growth vector that was not fully in investor models.

The Motley Fool noted that AMD’s PEG ratio, which adjusts the price-to-earnings multiple for the company’s growth rate, is 0.82. Any number below 1 signals an undervalued stock by that metric.
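For readers unfamiliar with the metric, the PEG calculation is straightforward. A minimal sketch, where the 0.82 figure comes from the article but the P/E and growth inputs are illustrative assumptions rather than reported values:

```python
def peg_ratio(pe: float, growth_pct: float) -> float:
    """PEG = price-to-earnings multiple divided by the expected
    earnings growth rate, expressed as a plain percentage."""
    return pe / growth_pct

# Working backward (illustrative, not reported figures): at the
# 43 percent EPS growth AMD just posted, a PEG of 0.82 would
# correspond to a P/E of about 0.82 * 43 = 35.26.
implied_pe = 0.82 * 43
print(round(peg_ratio(implied_pe, 43), 2))  # prints 0.82
```

The point of the metric: a fast grower can carry a high P/E and still screen as cheap once that multiple is scaled by the growth rate.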

At a time when AMD’s revenue is accelerating, its margins are expanding, and its strategic partnerships span the two most important AI development organizations in the world, the market is reassessing a stock that spent years being penalized for underperforming Nvidia and is now being rewarded for becoming something distinct from it.

What Q2 2026 Is Setting Up

The guidance Lisa Su and CFO Jean Hu provided for Q2 is the clearest forward signal in the results.

Revenue of approximately $11.2 billion would represent 46 percent year-over-year growth and a sequential increase of approximately 9 percent from Q1’s $10.25 billion.

Non-GAAP gross margin of approximately 56 percent would be a new record.

The company’s next earnings report is scheduled for August 4, 2026. Between now and then, Helios begins shipping to OpenAI and Meta. The AMD-Intel AI Compute Extensions ramp in new server platforms.

AMD’s next-generation EPYC processors begin ramping to meet demand that Su said is exceeding initial customer forecasts.

The broader AI infrastructure buildout, which is being funded by $650 billion or more in combined hyperscaler capital expenditure in 2026, continues to require the kind of high-performance CPUs and GPUs that AMD produces.

Su’s framing at the conclusion of her prepared remarks captured where AMD believes it sits in the current moment. “Looking ahead, we expect server growth to accelerate meaningfully as we scale supply to meet demand.”

Supply scaling to meet demand is a statement about a company whose constraint is not customers but manufacturing and packaging capacity, and whose order book justifies the investment to expand that capacity as aggressively as the supply chain allows.
