After the market closed on Tuesday Eastern Time, AMD (Advanced Micro Devices) released its third-quarter 2025 financial results and held an earnings conference call, delivering a report card of record revenue and profit.
During the earnings call, AMD CEO Dr. Lisa Su and AMD CFO Jean Hu answered numerous questions from analysts, with a particular focus on the details of AMD's multi-year agreement with OpenAI and the company's technology and product progress.
Lisa Su stated that the company's 6-gigawatt Instinct GPU deployment agreement with OpenAI will contribute over $100 billion in revenue over the next few years, and also said that the company's AI business is on track to generate billions of dollars in annual revenue by 2027.
Key focuses of the earnings call: 1. Performance: Revenue and profit both increased, with multiple business segments contributing to growth.
Overall performance: Q3 revenue was $9.2 billion (YoY +36%, QoQ +20%), net income increased by 31% year-over-year, free cash flow reached $1.5 billion (a record), non-GAAP gross margin was 54%, and diluted earnings per share were $1.20 (YoY +30%). All key financial metrics exceeded market expectations.
Segment breakdown: The data center business was the mainstay of growth, with revenue of $4.3 billion (up 22% year-over-year and 34% quarter-over-quarter). Fifth-generation EPYC Turin CPUs accounted for nearly 50% of total EPYC revenue, and sales of Instinct MI350 series GPUs increased significantly. The client and gaming businesses also contributed, generating $4 billion in revenue (up 73% year-over-year), with Ryzen 9000 processors driving record desktop CPU sales; gaming revenue grew 181% on Radeon 9000 graphics cards and semi-custom products. Embedded revenue was $857 million (down 8% year-over-year but up 4% sequentially), with design wins exceeding $14 billion this year, continuing record-breaking momentum.
2. AI business: Record order momentum, with breakthroughs in both technology and partnerships.
Major collaborations finalized: A multi-year agreement was signed with OpenAI to deploy 6 gigawatts of Instinct GPUs, with the first 1-gigawatt MI450 series deployment beginning in the second half of 2026, expected to contribute over $100 billion in revenue; Oracle will become a launch partner for the MI450, deploying tens of thousands of MI450 GPUs starting in 2026; and in partnership with the U.S. Department of Energy, the two sides will build the "Lux" AI factory and the "Discovery" supercomputer, consolidating AMD's position in national AI and supercomputing.
Technology and product progress: The ROCm 7 software platform delivers significant performance improvements (inference performance up 4.6x and training performance up 3x versus the previous generation) and has gained support from developers such as Hugging Face; the MI400 series GPUs and Helios rack-scale solution will launch in 2026, supporting Meta's open rack standard; the 2nm Venice CPU has entered lab testing, and customer engagement has reached an all-time high.
Market demand and supply chain: Hyperscale cloud service providers have strong demand for AI compute, with many customers planning to expand CPU deployments in 2026; the company secured large-scale Helios production by selling its ZT manufacturing business to Sanmina and partnering with it, and stated that its supply chain is ready to support demand growth in 2026-2027.
3. Future Outlook: Product iteration will be key in 2026, with clear growth paths for multiple business segments.
Short-term guidance (Q4 2025): Revenue is expected to be $9.6 billion (plus or minus $300 million), up roughly 25% year-over-year. Sequentially, growth is driven by double-digit growth in the data center business (on the continued MI350 ramp) and growth in the client business, partially offset by a short-term decline in the gaming business, with the embedded business returning to growth. Non-GAAP gross margin is expected to be 54.5%, and operating expenses approximately $2.8 billion.
Mid-to-long-term plan: In 2026, the core focus will be launching the 2nm Venice CPU, MI400 series GPUs, and the Helios rack-scale solution, with the goal of scaling the AI business significantly; for 2027, the company is targeting annual AI business revenue in the billions of dollars, while aiming to keep the client and server businesses growing faster than the industry through product mix optimization and market share gains.
4. Investor Concerns: Gross Margin, Customer Concentration, and Response to Supply Chain Risks
Gross margin trend: Data center GPU gross margin will improve gradually as production of new-generation products (such as the MI400) ramps, stabilizing after a short transition period. The company's priority is to grow both revenue and gross profit dollars.
Customer concentration risk: Although the OpenAI partnership is large, the company emphasized that it has built a diversified customer base (including Oracle, the Department of Energy, and many hyperscale cloud service providers), and that its supply chain planning can support large-scale deployments for multiple customers, reducing dependence on any single customer.
Supply chain and external constraints: MI308 GPU revenue is not yet included in guidance; the company has obtained some export licenses and is discussing demand with customers, with supply to be adjusted according to market conditions. Industry-wide risks of power and component shortages are considered manageable, and the company is working with ecosystem partners on planning to secure 2026 deployment needs.
This meeting will be broadcast live, and a replay will be available on the official website afterward.
Before the meeting officially begins, I would like to note the following: Dr. Lisa Su, along with the AMD senior management team, will present our long-term financial strategy at the Financial Analyst Day event in New York next Tuesday (November 11th); Dr. Su will also attend and speak at the UBS Global Technology and Artificial Intelligence Summit on Wednesday, December 3rd; and finally, on Wednesday, December 10th, Jean Hu will speak at the 23rd annual Barclays Global Technology Conference.
Next, I will hand over the chairing of the meeting to Dr. Lisa Su.
AMD Chairman and CEO Lisa Su: Thank you, Matt, and good afternoon to everyone joining the call. This quarter, we delivered outstanding results, with record revenue and profitability, driven by strong demand in our data center AI, server, and PC businesses. Revenue increased 36% year-over-year to $9.2 billion; net income increased 31% year-over-year; and free cash flow more than doubled, primarily driven by record sales of EPYC, Ryzen, and Instinct processors.
Our record-breaking third quarter results mark a significant leap forward in our growth trajectory. This was driven by our expanding computing business and the rapid scaling of our data center AI business, both of which contributed to substantial revenue and profit growth.
Next, let's discuss the various business segments: The data center segment saw a 22% year-over-year revenue increase to a record high of $4.3 billion, primarily driven by increased production capacity of the Instinct MI350 series graphics cards (GPUs) and growth in server market share. Server central processing unit (CPU) revenue reached a record high as adoption of the fifth-generation EPYC Turin processor accelerated, accounting for nearly 50% of total EPYC revenue this quarter. Furthermore, sales of the previous generation EPYC processors also performed strongly this quarter, demonstrating their strong competitive advantage across various workload scenarios.
In cloud computing, we achieved record sales. Hyperscalers have expanded the deployment of EPYC CPUs to power both their own services and public cloud services. This quarter, hyperscalers launched more than 160 cloud instances based on EPYC processors, including newly launched Turin-based instances from Google, Microsoft Azure, and Alibaba that deliver unparalleled performance and cost-effectiveness across a wide range of workloads. Currently, there are over 1,350 public EPYC cloud instances available globally, representing a nearly 50% increase compared to the same period last year.
Large enterprises have more than doubled their adoption of EPYC processors in the cloud year-over-year. Enterprise customers are increasingly demanding AMD cloud instances to support hybrid computing models, driven by our growing market share in on-premises deployments. We expect cloud demand to remain strong as hyperscale cloud providers significantly increase general-purpose computing capabilities while scaling AI workloads. Many customers are planning to significantly increase their CPU deployments in the coming quarters to meet the growing demand in the AI space, which will be a powerful new driver for our server business.
Looking at enterprise adoption: EPYC server sell-through saw significant year-over-year and quarter-over-quarter growth, indicating that enterprise adoption of the product is accelerating. Manufacturers such as HPE, Dell, Lenovo, and Supermicro have launched over 170 platforms based on the fifth-generation EPYC processor, our most comprehensive portfolio to date, with solutions optimized for virtually all enterprise workloads.
This quarter, we won major new customers across several key verticals, including Fortune 500 companies in technology, telecommunications, financial services, retail, streaming, social, and automotive, further expanding our business footprint across major industries. The performance and total cost of ownership (TCO) advantages of the EPYC portfolio, our increased investment in marketing, and the growing product offerings from leading server and solution providers have collectively laid a solid foundation for our continued growth in the enterprise market share.
Looking ahead, we plan to launch our next-generation 2nm EPYC Venice processor in 2026, and this plan is progressing smoothly. The Venice processor has entered the laboratory testing phase and is performing exceptionally well, achieving significant improvements in performance, energy efficiency, and compute density. Customer demand and willingness to collaborate on Venice have reached unprecedented levels, reflecting both our competitive advantage and the market's growing demand for data center computing power. Several cloud service providers and original equipment manufacturer (OEM) partners have already deployed the first Venice platforms, laying the foundation for widespread solution availability and cloud deployment upon product launch.
Turning to data center AI: Our Instinct GPU business continues its strong growth momentum. Revenue from this business grew year-over-year, driven by significant increases in sales of the Instinct MI350 series and the expanding deployment of the MI300 series. Currently, several major cloud service providers and AI companies have initiated deployments of the MI350 series, with more large-scale deployments planned for the coming quarters.
Oracle has become the first hyperscale cloud provider to publicly offer MI355X instances, which deliver significantly higher performance for real-time inference and multimodal training workloads on the OCI Zettascale supercluster. This quarter, emerging "neocloud" providers such as Crusoe, DigitalOcean, TensorWave, and Vultr also began rolling out public cloud services based on the MI350 series.
In the AI developer community, the deployment of the MI300 series graphics cards expanded further this quarter. IBM and Zyphra will train multiple generations of future multimodal models on large-scale MI300X clusters; Cohere is currently using MI300X to train its Command series models on OCI. In the inference field, several new partners, including Character AI and Luma AI, are now running production-grade workloads on the MI300 series, demonstrating the performance and total cost of ownership advantages of our architecture in real-time AI applications.
This quarter, we also made significant progress in the software space. We launched ROCm 7—our most advanced and feature-rich version of ROCm to date. Compared to ROCm 6, ROCm 7 offers up to 4.6x improvement in inference performance and 3x improvement in training performance. Furthermore, ROCm 7 introduces seamless distributed inference capabilities, enhances code portability across hardware, and adds enterprise-grade tools to simplify the deployment and management of the Instinct solution.
Importantly, our open software strategy received a positive response from the developer community. Organizations such as Hugging Face, VLLM, and SGLang directly contributed to the development of ROCm 7, helping us to build ROCm into an open platform for large-scale AI development.
Looking ahead, our data center AI business is entering a new growth phase. Customer enthusiasm is already building ahead of the launch of the next-generation MI400 Series accelerators and Helios rack-scale solutions in 2026. The MI400 Series, combining a new compute engine, industry-leading memory capacity, and advanced networking capabilities, will deliver a significant performance leap for the most complex AI training and inference workloads.
The MI400 series integrates our expertise in silicon, software, and systems to power Helios, our rack-scale AI platform. The Helios platform aims to redefine performance and energy-efficiency standards at data center scale. It integrates Instinct MI400 series GPUs, Venice EPYC CPUs, and Pensando networking into a single double-wide rack solution, optimized for the performance, power, thermal management, and serviceability required for next-generation AI infrastructure, and supports Meta's new open rack-wide standard.
With the deep technical support of an increasing number of hyperscale cloud service providers, AI companies, OEMs, and original design manufacturers (ODMs), the development of the MI400 series graphics cards and Helios racks is progressing rapidly, laying the foundation for large-scale deployment next year. The ZT Systems team, which we acquired last year, plays a key role in Helios development. Leveraging decades of experience building infrastructure for the world's largest cloud service providers, they ensure customers can quickly deploy and scale the Helios platform in their own environments. Furthermore, last week we completed the sale of ZT's manufacturing business to Sanmina and established a strategic partnership, designating Sanmina as Helios's primary manufacturing partner. This collaboration will accelerate the deployment of our rack-mounted AI solutions among large customers.
In terms of customer partnerships, we announced a comprehensive, multi-year agreement with OpenAI to deploy 6 gigawatts of Instinct graphics cards, with the first gigawatt-level MI450 series accelerators expected to be operational in the second half of 2026. This collaboration makes AMD a core computing supplier for OpenAI and underscores our strategic strengths in hardware, software, and full-stack solutions.
Looking ahead, AMD and OpenAI will collaborate more closely on future hardware, software, networking, and system-level roadmaps and technologies. OpenAI's choice of the AMD Instinct platform to handle its most complex and sophisticated AI workloads clearly demonstrates that our Instinct graphics cards and ROCm open software stack can meet the performance and total cost of ownership requirements of the most demanding deployment scenarios. We anticipate this collaboration will significantly drive the growth of our data center AI business, potentially generating over $100 billion in revenue over the next few years.
Oracle also announced that it will be a major launch partner for the MI450 series, planning to deploy tens of thousands of MI450 graphics cards in Oracle Cloud Infrastructure (OCI) starting in 2026, and further expand the deployment scale in 2027 and beyond.
Furthermore, our Instinct platform is gaining increasing recognition in sovereign AI and national supercomputing projects. In the United Arab Emirates, Cisco and G42 will deploy a large-scale AI cluster powered by Instinct MI350X GPUs to support the nation's most advanced AI workloads. In the United States, we are collaborating with the Department of Energy and Oak Ridge National Laboratory, along with industry partners such as OCI and HPE, to build Lux, the first AI factory focused on scientific discovery. This AI factory will utilize our Instinct MI350 series GPUs, EPYC CPUs, and Pensando networking products, and is expected to be operational by early 2026, providing a secure and open platform for large-scale training and distributed inference.
The U.S. Department of Energy has also selected our upcoming MI430X graphics cards and EPYC Venice CPUs to power Discovery, the next-generation flagship supercomputer at Oak Ridge National Laboratory. This supercomputer aims to set new standards for AI-driven scientific computing and solidify the United States' leadership in high-performance computing. Our MI430X graphics cards, designed specifically to support national AI and supercomputing projects, will further extend our leading position in powering the world's most powerful computers, contributing to the next generation of scientific breakthroughs.
In summary, our AI business is entering a new growth phase, driven by our leading rack-mount solutions, expanding customer adoption, and a growing number of large-scale global deployments. We have a clear trajectory to achieve billions of dollars in annual AI revenue by 2027. I look forward to detailing our growth plans for our data center AI business at next week's Financial Analyst Day event.
Let's look at the Client & Gaming segment: revenue in this segment grew 73% year-over-year to $4 billion. Our PC processor business performed exceptionally well, with record quarterly sales, driven by strong demand and growth momentum from our leading Ryzen portfolio.
Desktop CPU sales hit a record high, with strong demand for the Ryzen 9000 series processors driving both sell-in and sell-out figures to new records. This series of processors delivers unparalleled performance in gaming, productivity, and content creation applications. This quarter also saw significant growth in Ryzen-powered laptop sales through OEM channels, reflecting continued end-customer demand for high-end gaming and business AMD PCs.
Commercial growth accelerated further this quarter: driven by large purchases by Fortune 500 companies in the healthcare, financial services, manufacturing, automotive, and pharmaceutical industries, enterprise adoption of Ryzen PCs increased significantly, with Ryzen PC sell-through growing by more than 30% year-over-year.
Looking ahead, with the strength of the Ryzen product portfolio, broader platform coverage, and increasing investment in marketing, we believe the client business is poised to continue growing faster than the overall PC market.
In the gaming business: revenue increased 181% year-over-year to $1.3 billion. Semi-custom revenue grew as Sony and Microsoft prepared for the upcoming holiday sales season. In gaming graphics cards, both revenue and channel sell-through grew significantly, thanks to the leading price-to-performance of the Radeon 9000 series. Our machine-learning super-resolution technology, FSR 4, saw rapid adoption this quarter, with the number of supported games more than doubling since launch to over 85; the technology improves frame rates and creates a more immersive visual experience.
Finally, let's look at the embedded systems business segment: revenue in this segment decreased by 8% year-over-year to $857 million. On a sequential basis, both revenue and actual sales volume increased, driven by a recovery in demand in multiple markets, including simulation testing, aerospace and defense, industrial vision, and healthcare.
We further expanded our embedded portfolio and solidified our leadership in adaptive and x86 computing with new solutions: we began shipping the industry-leading second-generation Versal Prime adaptive system-on-chips (SoCs) to key customers, provided customers with the first Versal RF development platforms to support multiple next-generation design projects, and launched the Ryzen Embedded 9000 series, which delivers industry-leading performance-per-watt and latency for robotics, edge computing, and smart factory applications.
Our embedded product portfolio continues to see strong design win momentum, poised to set a new design win record for the second consecutive year. To date, total design wins for the year have exceeded $14 billion, reflecting the increasing adoption of our leading products across various markets and applications.
In summary, our record-breaking third-quarter results and strong fourth-quarter outlook demonstrate the robust growth momentum across our business segments, driven by continued product leadership and rigorous execution. Our Data Center AI, Server, and PC businesses are poised for strong growth, benefiting from the expanding Total Addressable Market (TAM), accelerating adoption of the Instinct platform, and increased market share of EPYC and Ryzen CPUs.
The market demand for computing power is at an unprecedented level—every major breakthrough in business, science, and society today relies on more powerful, efficient, and intelligent computing capabilities. These trends present AMD with unprecedented growth opportunities. I look forward to providing a detailed overview of our strategy, product roadmap, and long-term financial goals at next week's Financial Analyst Day meeting.
Now, I will hand the call over to Jean Hu, who will provide further analysis of the third quarter's performance.
Jean Hu: Thank you, Lisa, and good afternoon, everyone. I will first review our financial results, and then provide an outlook for the fourth quarter of fiscal year 2025.
We are pleased with our third-quarter financial results. We achieved record revenue of $9.2 billion, representing a 36% year-over-year increase and exceeding the upper end of our guidance, reflecting strong momentum across our business segments. It should be noted that the third-quarter results did not include any revenue generated from exports of MI308 graphics cards to the Chinese market.
Revenue grew 20% sequentially this quarter, driven by strong growth in the data center, client, and gaming segments and moderate growth in the embedded segment.
Gross margin was 54%, an increase of 40 basis points year-over-year, primarily driven by product mix optimization. Operating expenses were approximately $2.8 billion, a 42% increase year-over-year, due to continued increased investment in R&D to capitalize on significant AI opportunities, as well as increased marketing spending to drive revenue growth. Operating profit was $2.2 billion, with an operating margin of 24%. Taxes, interest, and other expenses totaled $273 million.
In the third quarter of 2025, diluted earnings per share (EPS) were $1.20, a 30% increase from $0.92 in the same period last year.
Next, I will present the results for each reportable business segment, starting with the data center segment. The data center segment achieved a record revenue of $4.3 billion, a 22% year-over-year increase, primarily driven by strong demand for fifth-generation EPYC processors and the Instinct MI350 series graphics cards. On a sequential basis, revenue growth in the data center segment was 34%, driven by a significant increase in production capacity for the AMD Instinct MI350 series graphics cards.
The data center segment generated $1.1 billion in operating profit, representing 25% of the segment's revenue; in the same period last year, the segment generated $1 billion in operating profit, representing 29% of revenue. The profit growth was primarily driven by increased revenue, but some of this growth was offset by increased R&D investment to capitalize on significant AI opportunities.
Client and gaming revenue reached a record $4 billion, a 73% year-over-year increase and a 12% quarter-over-quarter increase, driven by strong demand for the latest generation of client processors and graphics cards, as well as increased sales of game consoles. Client revenue reached a record $2.8 billion, a 46% year-over-year increase and a 10% quarter-over-quarter increase, primarily driven by record sales of Ryzen processors and a more optimized product mix. Gaming revenue grew to $1.3 billion, a 181% year-over-year increase and a 16% quarter-over-quarter increase, benefiting from increased revenue from semi-custom products and strong demand for Radeon graphics cards.
The client and gaming segment generated $867 million in operating profit, representing 21% of the segment's revenue; in the same period last year, the segment generated $288 million in operating profit, representing 12% of revenue. The profit growth was primarily driven by increased revenue, but some of this growth was offset by increased marketing spending to support revenue growth.
The embedded systems segment generated $857 million in revenue, down 8% year-over-year but up 4% sequentially, driven by a recovery in demand across several end markets. Operating profit for the embedded systems segment was $283 million, representing 33% of the segment's revenue; compared to $372 million, or 40% of revenue, in the same period last year. The decline in operating profit was primarily due to reduced revenue and changes in the end market structure.
Before reviewing the balance sheet and cash flow, it's important to note that we completed the sale of ZT Systems' manufacturing business to Sanmina last week. The third-quarter financial results of the ZT manufacturing business are presented separately in our financial statements as a discontinued operation and are not included in non-GAAP financial metrics.
Regarding the balance sheet and cash flow: This quarter, our operating cash flow from continuing operations was $1.8 billion, and free cash flow reached a record $1.5 billion. We returned $89 million to shareholders through share repurchases, bringing the total share repurchase amount for the first three quarters of 2025 to $1.3 billion. As of the end of this quarter, our share repurchase program still has a $9.4 billion authorization limit.
As of the end of this quarter, we had $7.2 billion in cash, cash equivalents and short-term investments, and total debt of $3.2 billion.
Next, we will look at our outlook for the fourth quarter of 2025: It should be noted that our fourth quarter outlook does not include any revenue generated from the export of AMD Instinct MI308 graphics cards to the Chinese market.
We project revenue of approximately $9.6 billion for the fourth quarter of 2025, plus or minus $300 million. The midpoint of this guidance represents year-over-year revenue growth of approximately 25%, driven by strong double-digit growth in the data center and client & gaming segments and a return to growth in the embedded systems segment.
On a sequential basis, we expect revenue growth of approximately 4%, driven by: double-digit growth in the data center segment (strong server growth alongside the continued ramp of MI350 series production); a decline in the client and gaming segment (client revenue growth offset by a significant double-digit decline in gaming revenue); and double-digit growth in the embedded systems segment.
In addition, we expect the non-GAAP gross margin for the fourth quarter to be approximately 54.5%; non-GAAP operating expenses to be approximately $2.8 billion; net interest and other expenses to be approximately $37 million; the non-GAAP effective tax rate to be 13%; and the diluted total number of shares to be approximately 1.65 billion.
In conclusion, our execution has been strong, with record revenue in the first three quarters of the year. Our ongoing strategic investments enable us to fully capitalize on the expanding AI opportunities across various end markets, driving sustainable long-term revenue growth and profit enhancement, and creating substantial value for our shareholders.
Next, I will hand the call back to Matt and move on to the Q&A session.
Matt: Thank you very much. We will now begin taking questions from the audience.
Host: Please hold for a moment while we gather questions. The first question comes from Vivek Arya of Bank of America Securities. Please go ahead.
Vivek Arya: Thank you for taking my questions. I have a short-term question and a medium-term question. In the short term, Dr. Lisa Su, could you please explain the revenue breakdown between CPUs and GPUs in the third and fourth quarters? Strategically, how will you manage the transition from the MI355 to the MI400 in the second half of next year? Before customers begin adopting the MI400 series, can you maintain the current (fourth-quarter) growth level in the first half of next year, or should we anticipate a pause or period of digestion?
Lisa Su: Thank you for your question, Vivek. Let me briefly explain a few points. Our data center business performed very strongly this quarter, with both the server and data center AI businesses growing better than expected. It's worth noting that this was achieved without any MI308 sales.
The production ramp-up for the MI355 is progressing very smoothly. We had anticipated a significant increase in production capacity for this product in the third quarter, and that has indeed happened. Furthermore, we are seeing further growth in server CPU sales – and this growth is not only in the short term; customer outlooks for the next few quarters indicate that demand will remain high, which is a positive sign.
Looking ahead to the fourth quarter, the data center business will continue to perform strongly, with revenue growing at a double-digit rate sequentially, and both the server and data center AI businesses contributing to that growth.
Regarding your second question: Obviously, we haven't disclosed specific plans for 2026 yet, but based on the current situation, we expect the market demand environment to remain favorable in 2026. Therefore, we anticipate continued capacity expansion for the MI355 in the first half of 2026; as we mentioned before, the MI450 series will enter the market in the second half of 2026, at which time we expect the data center AI business to experience even faster growth in the second half of the year.
Vivek Arya: I understand. My follow-up question is: there's currently some debate in the industry about OpenAI working simultaneously with three major vendors and an ASIC supplier, given its constraints on electricity and capital expenditure (CapEx) and its existing cloud service provider (CSP) partnerships. What is your perspective on this? What visibility did you have in the initial stages of your collaboration with OpenAI, and more importantly, what will the situation look like when the collaboration expands further in 2027? Is there a way to estimate how OpenAI will allocate resources to each vendor, or how should we think about visibility in this important customer relationship?
Lisa Su: That's an excellent question, Vivek. We are clearly incredibly excited about our collaboration with OpenAI – it's a significant partnership. The AI industry is currently in a very unique period, with workloads demanding extremely high computing power. In our collaboration with OpenAI, we have planned our work for the next few quarters to ensure that power supply and supply chains can keep up.
The key point is that the initial 1-gigawatt deployment is scheduled to begin in the second half of 2026, and preparations are progressing smoothly. Considering factors such as delivery timelines, we are working closely with OpenAI and our cloud service provider partners to ensure we can deploy the Helios platform and related technologies as planned.
Therefore, overall, our cooperation is progressing very smoothly. We have a clear vision of the MI450 expansion schedule, and all work is proceeding according to plan.
Host: The next question comes from Thomas O'Malley of Barclays. Please go ahead.
Thomas O'Malley: Good afternoon. Thank you for taking my questions, and congratulations on the excellent results. My first question is about the Helios platform. Obviously, with the Open Compute Project (OCP) announcements, customer engagement will only increase. Could you talk about your outlook for next year on the mix of discrete component sales versus system sales, and when you expect that mix to cross over (i.e., system sales exceeding discrete component sales)? Also, what was the initial feedback from customers who got a close look at the Helios platform at the show?
Lisa Su: Okay, thank you for your question, Tom. There's been tremendous interest in the MI450 and Helios platforms, especially at the OCP show, where the response from customers was particularly enthusiastic. We hosted numerous customers, many of whom brought their engineering teams to gain a deeper understanding of the system's details and how it is built.
There has been ongoing discussion in the industry about the complexity of rack-mounted systems, and these systems are indeed very complex. We are very proud of the design of Helios—it has all the features and functionalities expected, and excels in reliability, performance, and power efficiency.
In the past few weeks, market interest in the MI450 and Helios has increased further as we announced collaborations with OpenAI and OCI (Oracle Cloud Infrastructure) and reached a related collaboration with Meta at the OCP show.
Overall, we believe the Helios platform has made good progress in both R&D and customer collaboration. Regarding rack-mount solutions, we expect early customers of the MI450 to primarily adopt rack-mount solutions; of course, the MI450 series will also offer other form factors, but there is currently a very high market interest in complete rack-mount solutions.
Thomas O'Malley: Very helpful. My follow-up question is a more macro-level one, similar to Vivek's earlier question. Looking at the plans announced for early next year, some projects have substantial power demands; furthermore, the industry is facing supply issues around interconnect and memory components. As an industry leader, where do you see the bottlenecks going forward? Will component supply shortages appear first, or will data center infrastructure (such as site availability) or power supply become the limiting factors for next year's large-scale deployment plans?
Lisa Su: Okay, Tom. The question you raised is precisely the challenge our entire industry needs to address together—the entire ecosystem must be planned collaboratively, and that's exactly what we're doing right now. We're working with our customers to plan power supply solutions for the next two years; and in the chip, memory, packaging, and component supply chains, we're also working closely with our supply chain partners to ensure that capacity at all stages can keep up in a timely manner.
Based on our current visibility, we are confident in the strength of our supply chain – it is ready to support our significant growth rate and meet market demand for large-scale computing power.
Of course, everything will be tight. You can see from some companies' capital expenditure plans that the market has a strong appetite to expand compute deployments, and we are working closely together on that. What I would say is that when the industry faces this kind of supply tightness, the whole ecosystem pulls together to meet the challenge. At the same time, as we continue to invest in power supply, component supply, and other areas, the related bottlenecks are gradually easing.
All in all, we are very confident that as we transition into the second half of 2026 and into 2027, with the launch of the MI450 and the Helios platform, we can deliver significant growth.
Host: The next question comes from Joshua Buchalter of TD Cowen. Please go ahead.
Joshua Buchalter: Hello, everyone. Thank you for taking my questions. I'd like to start with the CPU business. Both you and your main CPU competitor have pointed to strong recent demand for general-purpose servers to run AI workloads, driven by agentic AI. Could you talk about how sustainable that trend is? Your competitor mentioned supply chain constraints; are you seeing anything similar in your supply chain? Also, should we think of the data center CPU business as being in a non-seasonal cycle (i.e., not affected by traditional seasonality), or should we expect a return to normal seasonality in the first half of next year?
Lisa Su: Sure, Joshua, let me share a few thoughts on the server CPU business. We have been watching this trend for the past several quarters; in fact, we saw positive signals in CPU demand a few quarters ago. As 2025 has progressed, we have seen CPU demand broaden, and several large hyperscalers are now forecasting significant increases in CPU deployments for 2026. From that standpoint, the current CPU demand environment is very positive.
The reason is that AI workloads require a large amount of general-purpose compute, and this happens to line up with the ramp of our Turin processors. The Turin ramp has run well ahead of expectations, demand for the product is very strong, and demand across our other product lines has remained steadily strong as well.
On seasonality for 2026: we expect the CPU demand environment to remain positive through 2026. We will give more detailed guidance toward the end of the year, but as AI workloads move into real production use, demand for compute will keep growing, so CPU demand should stay strong. This is a sustainable trend rather than a short-term phenomenon, and it should play out over multiple quarters.
On the supply chain, Joshua, we have ample supply to support growth, and particularly for 2026 we are well prepared for the capacity ramp.
Joshua Buchalter: Thank you both. As a follow-up, Dr. Su, you mentioned the progress on ROCm 7 in your prepared remarks, and we know ROCm has been a major area of investment for you. Could you take a minute or two to talk about where ROCm sits competitively today, how broadly you can support the developer community, and where you still need to work to close any remaining competitive gaps?
Lisa Su: Sure, Joshua, thanks for the question. ROCm has made significant progress, and ROCm 7 is a major step forward in both performance and the range of frameworks supported. For us, it is critical to ensure day-zero support for all of the latest models and native support for all of the latest frameworks.
Today, most new customers adopting AMD have a very smooth experience migrating their workloads to our platform. Of course, there is still room for improvement; we continue to expand our libraries and the overall ecosystem, especially for the new generation of workloads that combine training and inference with reinforcement learning.
Overall, though, ROCm's progress has been substantial. I would add that we will keep increasing our investment here, because delivering a smooth development experience for customers is critical to us.
Host: The next question comes from CJ Muse of Cantor Fitzgerald. Please go ahead.
CJ Muse: Good afternoon. Thank you for taking my questions. My first question: as you transition from the MI355 to the MI400 and move toward full rack-scale solutions, how should we think about the framework for gross margin through 2026 (i.e., how gross margin will trend)?
Jean Hu: Sure, CJ, thank you for the question. Overall, as we have said in the past, for the data center GPU business, gross margin improves as each new generation of products ramps. There is typically a transition period early in a ramp, after which gross margin gradually stabilizes.
We have not yet given specific guidance for 2026, but the top priority for the data center GPU business is to deliver substantial revenue growth and gross profit dollar growth, while continuing to improve gross margin percentage.
CJ Muse: Very helpful. As a follow-up, Dr. Su, could you talk about your growth expectations for 2026 and beyond? You mentioned earlier that annual AI business revenue could reach billions of dollars in 2027. At a high level, how are you thinking about OpenAI and your other large customer engagements, and how should we think about the broadening of your customer base from 2026 into 2027? Any color there would be very helpful.
Lisa Su: Sure, CJ. We will cover this in more detail at next week's Financial Analyst Day, but I can share a few high-level points.
First, we are very confident in our product roadmap and have made significant progress with large customers. The OpenAI partnership is extremely meaningful for us, and reaching a gigawatt-scale agreement validates our ability to deliver large-scale compute solutions to the market.
Beyond OpenAI, we are working closely with many other customers. For example, we discussed our partnership with OCI (Oracle Cloud Infrastructure) earlier, and we announced multiple large system projects with the U.S. Department of Energy. We have many other engagements in flight as well.
So the way to think about it is this: in the MI450 product cycle, we expect multiple customers to deploy at scale, which reflects the breadth of our customer engagements. At the same time, we are sizing our supply chain to make sure we can support both the OpenAI agreement and our many other engagements.
Host: The next question comes from Stacy Rasgon of Bernstein Research. Please go ahead.
Stacy Rasgon: Hello, everyone. Thank you for taking my questions. My first question: within the data center segment this quarter, which grew faster year-over-year, in dollar terms and in percentage terms, the server business or the GPU business?
Lisa Su: Stacy, as we mentioned earlier, both the server business and the data center AI (GPU) business within the data center segment posted good year-over-year growth this quarter.
Stacy Rasgon: Could you go a bit further? Just in terms of trend, which grew faster? I don't need specific numbers, just the general direction.
Lisa Su: In terms of the trend, the two grew at similar rates, with the server business slightly faster.
Stacy Rasgon: Okay. On the guidance: you said the data center segment overall will grow double digits, with the server business delivering "strong double-digit growth." What does that mean specifically? Does it mean growth above 20%? I'd like to understand how you define "strong double-digit growth." Also, for the full year, is GPU revenue still in the roughly $6.5 billion range you mentioned last quarter? It seems like things are still tracking to that.
Jean Hu: Stacy, our guidance is that data center segment revenue will grow double digits sequentially, with strong growth in the server business and a continued ramp of the MI350 series. The $6.5 billion revenue figure you mentioned was not guidance we provided.
Stacy Rasgon: Okay. So if you say the server business will grow "strongly," does that mean it will grow faster than the Instinct (GPU) business? You didn't explicitly address Instinct growth.
Lisa Su: Stacy, let me clarify. Data center segment revenue will grow double digits sequentially, driven by both the server business and the data center AI (GPU) business, and both will grow. The "strong double-digit growth" we mentioned earlier was likely in reference to year-over-year growth.
Host: The next question comes from Timothy Arcuri of UBS. Please go ahead.
Timothy Arcuri: Thanks a lot. Dr. Su, it has been about a month since you announced the OpenAI partnership. Could you share some examples of how it has affected your positioning in the market, and whether it has brought you into contact with customers you hadn't worked with before? That's my first question.
My second question relates to an earlier one: over the 2027-2028 timeframe, OpenAI could account for roughly half of your data center GPU revenue. In your view, how much risk comes with that level of dependence on a single customer?
Lisa Su: Sure, Tim. Let me answer in two parts. First, the OpenAI partnership had been in the works for quite some time, and we were pleased to make it public and share the key details, including the gigawatt-scale deployment size and the multi-year term. All of that is very positive.
Beyond that, several other factors have driven our business over the past month. In addition to the OpenAI partnership, fully showcasing the Helios rack-scale platform at the Open Compute Project (OCP) show was an important milestone; customers could see Helios's engineering and feature advantages firsthand.
If you're asking whether more customers have expressed interest over the past month, or whether engagements have accelerated, the answer is yes. Customer interest is broadly high and deal sizes are larger, which is a positive sign.
On customer concentration risk: a core strategic foundation of our data center AI business is building a broad customer base, and we have always worked with multiple customers. On supply chain planning, we are also making sure we have enough capacity to support similar-scale deployments across multiple customers in the 2027-2028 timeframe, which is certainly our goal.
Host: The next question comes from Aaron Rakers of Wells Fargo. Please go ahead.
Aaron Rakers: Thank you for taking my questions. Given the current strength of the server business, how should we break down the revenue contribution in the Turin product cycle between unit growth and average selling price (ASP) increases, and how do you expect that mix to evolve going forward?
Lisa Su: Sure, Aaron. In the server CPU business, Turin processors carry richer capabilities, so ASPs have risen as the product ramps. But as I mentioned in my prepared remarks, demand for the prior-generation Genoa processors remains strong; hyperscalers cannot switch all of their deployments to the latest generation immediately, so Genoa continues to sell well.
From our perspective, current CPU demand is broad-based across workload types. Part of it is a server refresh cycle, but from our conversations with customers, the more important driver is that AI workloads are generating additional traditional compute demand, which requires larger deployments.
Looking ahead, we see customers leaning more toward the latest generation. So we are pleased with the Turin ramp, and we are also seeing very strong demand for Venice, with early engagements already under way, which underscores how important general-purpose compute is right now.
Aaron Rakers: Thanks. As a follow-up, and without front-running too much of next week's Financial Analyst Day, Dr. Su, you have consistently pointed to a total addressable market (TAM) for AI chips of up to $500 billion, and that figure keeps expanding. With large gigawatt-scale deployments continuing to emerge, what is your latest view on the future size of the AI chip TAM?
Lisa Su: Aaron, as you said, we don't want to get too far ahead of next week, but we will lay out our full view of the market then. From what we see today, the TAM for AI compute continues to expand. We will share updated figures next week, but what is clear is that, as large as the original $500 billion TAM already sounded, we believe the opportunity over the next few years will be even bigger, and that is certainly an exciting trend.
Host: The next question comes from Antoine Chkaiban of New Street Research. Please go ahead.
Antoine Chkaiban: Thank you for taking my question. Is the deepening partnership with OpenAI driving custom development of your software stack? Could you share how the collaboration works in practice, and whether it is helping to improve the robustness of ROCm?
Lisa Su: Sure, Antoine, thanks for the question. The answer is yes; every large customer engagement pushes us to extend and deepen our software stack. That is especially true of the OpenAI partnership, where we plan to collaborate deeply across hardware, software, systems, and future roadmaps. In that context, our work with OpenAI on Triton is certainly valuable.
But I want to emphasize that, beyond OpenAI, all of our large customer engagements play an important role in maturing the software stack. We have put significant new resources in place, not only for large customers but also for the many AI-native companies that are actively developing on the ROCm stack and giving us a great deal of feedback.
We have made significant progress on both the training and inference software stacks, and we will keep investing. The more customers adopt AMD, the faster the ROCm stack matures. We will talk more about this next week; in addition, we are using AI to accelerate ROCm kernel development and the build-out of the broader ecosystem.
Antoine Chkaiban: Thank you, Dr. Su. As a follow-up: could you talk about GPU useful life? I understand most cloud service providers (CSPs) depreciate GPUs over five to six years, but in your conversations with them, have you seen or heard any early signs that they actually plan to keep GPUs in service longer?
Lisa Su: Antoine, we have indeed seen some early signs. The key point is that, on the one hand, customers clearly want the latest, most advanced GPUs when they build new data center infrastructure; the MI355, for example, is typically deployed in newly built liquid-cooled facilities, and the MI450 series will be as well. On the other hand, demand for AI compute is so strong that prior-generation GPUs such as the MI300X remain actively deployed and used for inference and other workloads.
So right now we are seeing both trends coexist in the market.
Host: The next question comes from Joe Moore of Morgan Stanley. Please go ahead.
Joe Moore: Thank you very much. You mentioned the MI308 GPU. I'd like to understand your strategic positioning for that product: if export restrictions ease and shipments are allowed, are you ready? And can you frame the potential revenue impact?
Lisa Su: Sure, Joe. The situation around the MI308 remains fairly dynamic and uncertain, which is why we have not included any MI308 revenue in our fourth-quarter guidance. We have received some export licenses for the MI308, and we appreciate the government's support on those licenses.
We are currently talking with customers to understand the demand environment and the potential market opportunity, and we will provide more updates over the coming months.
Joe Moore: Okay. But if the market opens up, do you have product on hand to supply, or would you need to rebuild inventory?
Lisa Su: We have some work in process and will continue to move it forward, but the actual supply will depend on the demand environment.
Joe Moore: Thank you very much.
Lisa Su: Thanks for the question.
Host: Operator, we probably have time for one last question.
Operator: Certainly. The last question comes from Ross Seymore of Deutsche Bank. Please go ahead.
Ross Seymore: Thanks for squeezing me in. Dr. Su, I know time is short before the top of the hour, but I'd like to ask: OpenAI has now announced several gigawatt-scale partnerships, so how does AMD truly differentiate itself? When you see this large customer also signing with other GPU vendors and ASIC suppliers, what are you doing differently from your competitors to make sure you not only win the initial 6-gigawatt deployment but also capture more share going forward?
Lisa Su: Sure, Ross. The essential context is that global demand for AI compute is extremely strong. OpenAI is at the forefront of the push for more AI compute, but they are not alone; looking out over the next few years, demand for AI compute from all large customers will grow substantially.
On product positioning, every vendor has its strengths. We believe the MI450 series, especially combined with our rack-scale solutions, is extremely competitive. In terms of compute and memory performance, it leads in both inference and training scenarios.
The key success factors include time to market, total cost of ownership (TCO), deep partnerships, and the forward roadmap, not just the MI450 series but the products that follow. We are already in deep discussions about the MI500 and beyond.
We are fully confident that we can not only participate in this market but also capture a meaningful share in a strong demand environment. Over the past few years we have built substantial experience with our AI roadmap and a deep understanding of large customers' workload requirements, so I am optimistic about our performance going forward.
Ross Seymore: Great. As a follow-up: the OpenAI agreement has a unique structure, including the grant of a warrant, and the pricing is reportedly innovative in a way that works for all parties. Do you see this as a relatively unique agreement, or, given the urgent global demand for compute, would AMD be willing to use similar equity instruments or other creative structures with other customers to meet market demand?
Lisa Su: Sure, Ross. Given this unique moment in the AI industry, the OpenAI agreement is indeed unique. Our top priority in the partnership was to build a deep, long-term relationship with multi-generation, large-scale deployments, and we clearly achieved that.
The structure creates very strong alignment of interests and is a win-win model: AMD, OpenAI, and our shareholders all benefit, and those benefits ultimately flow back into advancing the product roadmap.
Looking ahead, we are developing interesting relationships with many customers, including large AI users and sovereign AI projects. We treat each engagement as a unique opportunity, and we bring the full breadth of AMD's technology and capabilities to create value for our partners.
So the OpenAI partnership is indeed unique, but I believe we will have more opportunities to bring our capabilities into the ecosystem and participate in the market in meaningful ways.
Host: Ladies and gentlemen, this concludes the question-and-answer session and today's conference call. Thank you for your participation; you may now disconnect.
(Article source: CLS)