SAN FRANCISCO, October 9, 2025: OpenAI has entered into a multibillion-dollar agreement to purchase high-performance graphics processing units (GPUs) from AMD, marking a significant expansion of its compute capacity as demand for artificial intelligence workloads continues to surge. The deal also includes a warrant granting OpenAI the right to acquire up to a 10 percent stake in AMD, contingent on deployment volume and share-price milestones. Under the terms of the agreement, OpenAI plans to deploy six gigawatts of AMD GPUs over multiple phases, with the initial rollout of one gigawatt expected to begin in the second half of 2026.
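For a sense of scale, those gigawatt figures can be translated into rough accelerator counts. The sketch below is purely illustrative: the per-GPU power draw and the facility overhead factor are assumptions, not published MI450 specifications, which were not part of the announcement.

```python
# Back-of-envelope estimate: how many accelerators a given facility power
# budget might support. All figures are illustrative assumptions, not
# published MI450 specifications.

def estimated_gpu_count(facility_gw: float,
                        gpu_watts: float = 1500.0,
                        overhead_factor: float = 1.5) -> int:
    """Estimate accelerator count for a facility power budget.

    facility_gw     -- total facility power in gigawatts
    gpu_watts       -- assumed draw per accelerator, in watts (hypothetical)
    overhead_factor -- assumed multiplier for cooling, networking, CPUs, etc.
    """
    facility_watts = facility_gw * 1e9
    return int(facility_watts / (gpu_watts * overhead_factor))

# The initial 1 GW phase under these assumptions:
print(f"{estimated_gpu_count(1.0):,} accelerators")  # ~444,444
# The full 6 GW commitment:
print(f"{estimated_gpu_count(6.0):,} accelerators")  # ~2,666,666
```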

The chips will be sourced from AMD’s upcoming Instinct MI450 series, which is designed for large-scale AI training and inference. In exchange for the large-scale procurement, AMD has issued OpenAI a warrant to purchase up to 160 million of its shares on pre-agreed terms. This equity component vests only as specific volumes of chips are deployed and as AMD’s shares meet defined price thresholds. The arrangement has been publicly filed with the U.S. Securities and Exchange Commission, confirming its financial scope and timeline.
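That two-condition vesting structure, deployment milestones plus share-price thresholds, can be sketched as a simple check. Every number in the schedule below is a placeholder for illustration; the actual tranche sizes, gigawatt milestones, and price targets are set out in the SEC filing and are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class WarrantTranche:
    """One vesting tranche of the warrant. All values below are
    hypothetical placeholders, not terms from the actual filing."""
    shares: int             # shares that vest in this tranche
    min_gigawatts: float    # deployment milestone that must be reached
    min_share_price: float  # AMD share-price threshold that must be met

    def vested(self, deployed_gw: float, share_price: float) -> bool:
        # Both conditions must hold, per the structure described above.
        return (deployed_gw >= self.min_gigawatts
                and share_price >= self.min_share_price)

# Illustrative schedule: 160M total shares split across two milestones.
schedule = [
    WarrantTranche(shares=40_000_000, min_gigawatts=1.0, min_share_price=200.0),
    WarrantTranche(shares=120_000_000, min_gigawatts=6.0, min_share_price=600.0),
]

vested = sum(t.shares for t in schedule
             if t.vested(deployed_gw=1.0, share_price=210.0))
print(f"Vested so far: {vested:,} shares")  # 40,000,000 under these inputs
```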
The announcement triggered notable activity in financial markets. AMD shares rose more than 20 percent following the disclosure, reaching their highest level since mid-2024. Shares of Nvidia, which remains the dominant supplier of GPUs for AI, fell by approximately 1.5 percent in the same trading session. OpenAI’s collaboration with AMD does not alter its existing partnerships with other hardware providers: the organization continues to operate large clusters powered by Nvidia GPUs through various cloud service providers. The new agreement nonetheless significantly expands OpenAI’s access to alternative compute infrastructure amid rising demand for generative AI models and related applications.
OpenAI signs major GPU agreement with AMD
AMD has invested heavily in positioning its Instinct line as a viable alternative in the data center and AI sectors. The company’s MI300 series accelerators are already shipping, and the MI450 belongs to its announced follow-on MI400 series; both lines are optimized for parallel processing and AI workloads. The OpenAI contract is expected to accelerate AMD’s efforts to gain traction in the high-performance AI computing segment. The structure of the deal reflects an increasingly common model in the AI ecosystem, in which chip manufacturers and AI developers form tightly integrated supply agreements that may include equity components.
The financial terms of the warrant were disclosed in regulatory filings, making this one of the most transparent such agreements to date. Nvidia CEO Jensen Huang commented publicly on the OpenAI-AMD deal, describing the arrangement as “imaginative” and “unique.” He acknowledged AMD as a competitive force in the chip market while affirming Nvidia’s continued focus on delivering advanced computing solutions across AI and enterprise sectors. OpenAI has previously stated that expanding its compute capabilities is central to the deployment of future versions of its language and multimodal models.
OpenAI maintains multi-vendor compute partnerships
The organization currently operates some of the world’s most powerful AI systems and relies on multiple hardware vendors to maintain uptime and scalability for its growing customer base. The collaboration with AMD represents one of the largest public commitments to date involving next-generation GPU infrastructure. It also reinforces the growing scale of demand among AI developers seeking dedicated, high-efficiency compute capacity to train and run large language models. The deployment of AMD GPUs is scheduled to begin in late 2026, with infrastructure build-outs expected to continue through the end of the decade.
Both companies have stated that they will work closely to optimize the integration of hardware and software across OpenAI’s operational stack. The stated aims include seamless deployment of AMD GPUs into AI training environments, sustained efficiency at hyperscale, and stronger support for complex model development and large-scale inference workloads through customized engineering, deep systems collaboration, and performance tuning.
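One practical dimension of that software integration is already visible in open-source tooling: the ROCm build of PyTorch exposes AMD GPUs through the same torch.cuda interface that Nvidia-targeted training code uses, so existing pipelines can often run on either vendor's hardware with little change. A minimal sketch, assuming a ROCm-enabled PyTorch installation:

```python
import torch

def describe_accelerator() -> str:
    """Report which GPU backend this PyTorch build is using.

    On ROCm builds, AMD GPUs are surfaced through the torch.cuda
    namespace, so the same check covers both vendors.
    """
    if not torch.cuda.is_available():
        return "no GPU visible"
    name = torch.cuda.get_device_name(0)
    # torch.version.hip is set on ROCm (AMD) builds; torch.version.cuda
    # is set on Nvidia builds.
    backend = "ROCm/HIP" if torch.version.hip else f"CUDA {torch.version.cuda}"
    return f"{name} via {backend}"

print(describe_accelerator())
```

On an AMD system this reports the device name with a ROCm/HIP backend; on an Nvidia system, it reports the CUDA version instead.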
– By Content Syndication Services.