Rackspace and AMD Prepare a Governed AI Cloud for Regulated Enterprises

Rackspace Technology and AMD have signed a memorandum of understanding to create an enterprise AI infrastructure focused on critical workloads, regulated sectors, and environments where security, sovereignty, and operational accountability cannot be treated as afterthoughts. The agreement envisions a managed Enterprise AI Cloud based on AMD Instinct GPUs and AMD EPYC processors, with Rackspace serving as the operator responsible for the entire stack, from hardware to inference and production agents.

The announcement comes at a time when many companies have moved from experimenting with language models to asking how to deploy AI into production without losing control over data, costs, compliance, and availability. Over the past two years, much of the market has centered on renting GPU capacity by the hour. Rackspace and AMD propose a different approach: dedicated, operated, and governed infrastructure, with a provider taking on greater responsibility for integration and daily operations.

The fine print also matters. This is not yet a definitive contract to deploy a specific platform, but an MOU: a framework for potential collaboration between the two companies. Rackspace notes in its own statement that there are no final agreements in place and that it cannot guarantee the anticipated benefits will materialize. This nuance does not diminish the strategic importance of the initiative; it simply underscores that this is a developing strategic move rather than a finished product from day one.

From GPU Rental to Enterprise AI with Single Ownership

Rackspace’s core premise is clear: the prevailing model forces companies to rent GPU capacity and handle the complexity of integration, security, governance, scaling, and responsibility. For organizations with sensitive workloads, this approach may fall short. Merely having accelerators is not enough; they must be integrated into an operational framework that ensures data location, access control, process auditing, service levels, and contingency plans when workloads fail.

The agreement with AMD aims to fill this gap with four capabilities. The first is Enterprise AI Cloud, a private and hybrid managed cloud built on AMD Instinct and AMD EPYC, designed for companies needing sovereignty, compliance, and operational control. The second is Enterprise Inference Engine, an inference runtime that carries enterprise context, session history, and domain knowledge so that models and agents can operate more reliably in production environments.

The third component is Inference as a Service—offering dedicated, managed AMD Instinct GPUs with tools for inference and fine-tuning, as a governed alternative to generic GPU rental. The fourth is Bare Metal AMD Instinct—targeted at clients requiring physical isolation, deterministic performance, and direct hardware access for highly customized training or inference workloads.

Compared to a standard cloud provider, the difference isn’t just in hardware. It’s in the promise of operations. Rackspace aims to position itself as the sole responsible party for the platform, with service level agreements, hardware support, governance, and cost control. For industries like finance, healthcare, public administration, telecommunications, or advanced manufacturing, this approach can be more appealing than transferring sensitive data to generic AI services without a clear responsibility architecture.

AMD Seeks Greater Market Share in the Face of NVIDIA’s Dominance

For AMD, the agreement bolsters its push into enterprise AI infrastructure. NVIDIA remains dominant in accelerators, software, ecosystem, and adoption among hyperscalers, but AMD is gaining ground with Instinct, EPYC, and ROCm. In recent months, AMD’s messaging has been consistent: offering an open, efficient alternative tailored for companies that want to deploy AI within their own data centers or private clouds.

AMD’s Instinct MI300X GPUs are already targeted at generative AI workloads and high-performance computing, with large memory capacity and high bandwidth. More recently, AMD introduced the MI350 series, including the MI350P PCIe, designed to deploy generative and agent-based AI within existing infrastructure, avoiding the need for complete data center redesigns. This approach suits companies unwilling to rely solely on large, closed clusters or long hardware refresh cycles.

The role of EPYC processors is equally important. In many AI architectures, the focus is on GPUs, but true performance depends on the entire system: CPU, memory, network, storage, virtualization, security, orchestration, observability, and software. Rackspace and AMD want to market this combination as a managed platform, not just a collection of components.

Another key element is ROCm, AMD’s open software stack for acceleration. AMD’s greater challenge compared to NVIDIA isn’t just silicon—it’s ecosystem maturity. CUDA has been the de facto standard for years among developers, frameworks, and research teams. To make an AMD-based Enterprise AI Cloud attractive, Rackspace will need to handle much of the complex work: integration, model validation, tool compatibility, inference optimization, support, and ongoing operations.

Governance, Sovereignty, and Production Agents

The choice of the word “governed” is deliberate. Companies aren’t just asking whether a model responds correctly; they want to know if the system can be integrated into security policies, auditing, permissions, traceability, and compliance frameworks. This becomes even more critical with AI agents, which are systems capable of executing steps, calling tools, querying internal data, and automating tasks.

Rackspace envisions that its Enterprise Inference Engine can maintain domain context, session history, and company-specific data across queries. This can enhance the utility of agents and models in production but also raises security requirements. If a system retains institutional memory, it must do so with robust controls: client segregation, encryption, auditing, access limits, data retention policies, deletion protocols, and human oversight for critical processes.

This approach is especially relevant for sovereign workloads. In Europe, many organizations are reevaluating where they run their AI systems, who operates the infrastructure, what legislation applies, and how data is protected from unauthorized access. A U.S.-based provider like Rackspace doesn’t resolve all these digital sovereignty debates alone, but its private, hybrid, governed cloud model addresses a genuine demand: enterprise AI can’t always depend on generic public services if sensitive data or processes are involved.

There’s also a market perspective. Many clients prefer not to become their own AI infrastructure operators. They want to utilize models, agents, and data with proper guarantees. This creates opportunities for managed providers who don’t compete solely on GPU hourly rates but on taking operational complexity off clients’ plates. Rackspace aims to reposition itself as not just a hosting or managed cloud company but as an end-to-end enterprise AI operator.

The AMD partnership makes strategic sense within this framework, but its success hinges on execution. Regulated industries don’t buy promises; they require references, certifications, measurable performance, support, predictable costs, and seamless system integration. If Rackspace can turn the MOU into mature offerings, with real deployments and credible SLAs, it could carve out a compelling space between hyperscale public clouds and self-managed private infrastructure.

Ultimately, the race toward enterprise AI is entering a more sober phase. Having GPU access is no longer enough. The key questions are: who governs the infrastructure, who is responsible when it fails, how is data usage audited, and which provider can bring models and agents into production without turning each deployment into a bespoke project. Rackspace and AMD have identified this need. Now, they must demonstrate they can translate it into a real platform.

Frequently Asked Questions

What have Rackspace and AMD announced?
They have signed a memorandum of understanding to create a governed enterprise AI infrastructure based on AMD Instinct GPUs, AMD EPYC processors, and Rackspace-managed operations.

Is this a finalized commercial agreement?
Not yet. The announcement refers to an MOU—a potential collaboration framework. Rackspace emphasizes that no definitive agreements are in place and that discussions are still preliminary.

How does this differ from renting GPUs by the hour?
It offers dedicated, managed, and governed infrastructure with Rackspace responsible for the entire stack, as opposed to clients handling integration, security, scaling, and daily operations on their own.

Which types of companies is this aimed at?
Organizations with critical or regulated workloads, such as finance, healthcare, public administration, industry, telecommunications, or companies needing sovereignty, compliance, physical isolation, and cost control.
