
TECHNICAL BLOG
Innovation Meets Infrastructure: The OQC Quantum-AI Data Centre
At OQC, we’re building the infrastructure for a new computational era. One where quantum and AI technologies don’t just coexist, but co-design, co-optimise, and co-evolve.
Today, we’re thrilled to announce the next step in that journey: an industry-first Quantum-AI Data Centre, in partnership with Digital Realty and in collaboration with NVIDIA. A purpose-built facility designed from the ground up to power the most advanced quantum computing workloads alongside scalable AI inference and training. AI is smart. Quantum computing is powerful. Together, they’re transformative.
Jamie Friel
TECHNOLOGY MANAGER: QUANTUM THEORY
Jamie is responsible for building software solutions that will help build a quantum future, in particular a bespoke quantum compiler that will allow groundbreaking problems to be solved on OQC’s hardware. Before joining OQC, Jamie worked as a software developer for a grid battery company, supporting the UK national grid’s goal of bringing greenhouse gas emissions to net zero by 2050.
An Infrastructure To Co-Exist and Co-Evolve
In 2008, something quietly revolutionary happened: researchers discovered that a consumer-grade GPU could accelerate deep learning by orders of magnitude. It was the spark that lit the modern AI era.
But the technology behind that moment wasn’t new: graphics acceleration chips had been around since the late 1970s, powering arcade classics like Asteroids and Space Invaders. For decades, these chips were purpose-built for pixel pipelines, not neural networks. It took nearly 30 years, a shift in mindset, and an explosion of supporting infrastructure to unlock their full potential for machine learning.
Even after 2008, it took almost a decade, along with the development of specialised infrastructure, libraries, platforms, and data centres, for that insight to become an engine of global-scale innovation. What began with one repurposed graphics card evolved into an entire ecosystem, culminating in today’s billion-parameter models and GPU megaclusters.
That long arc of innovation is instructive. Today, quantum computing is at a similar tipping point. The foundational hardware is real and improving rapidly. Early breakthroughs in quantum simulation, AI-assisted control, and hybrid algorithms are already here. But the supporting infrastructure? Fragmented. Underpowered. Too slow.
If we want to avoid another 30-year wait, we need to compress the innovation cycle, from idea to prototype to deployment. That means building environments where quantum hardware and AI accelerators don’t just coexist, but co-evolve.
Our new Quantum-AI Data Centre is designed to do exactly that:
- Fully integrated quantum and AI compute at the infrastructure level.
- Purpose-built for real-time hybrid workloads.
- Engineered to accelerate iteration, experimentation, and scale.
Because history shows that hardware revolutions are inevitable, but they don’t reach their full potential until the ecosystem catches up.
At OQC, we’re building that ecosystem now.
The Vision: A Unified Environment
The biggest breakthroughs in computing haven’t come from software alone, they’ve come from the fusion of hardware and software into a unified environment.
Think of the early days of GPUs: before CUDA and TensorFlow, developers had to hack graphics pipelines to run neural networks. Progress was slow, brittle, and fragmented. Only once the hardware stack (programmable GPUs) and the software ecosystem (OpenCL, CUDA, cuDNN, PyTorch, TensorFlow) coalesced into a single development environment did deep learning explode into the transformative field we recognise today.
Right now, quantum computing researchers build hybrid applications across a patchwork of cloud emulators, lab cryostats, AI toolkits, and custom drivers. Every experiment is slowed by friction: mismatched protocols, latency bottlenecks, and fragile integrations. This fragmentation makes it hard to iterate quickly, and iteration speed is the heartbeat of innovation.
Reproducible Results
Innovation isn’t just about speed, it’s about reproducible results. In science, reproducibility is the foundation of credibility; in industry, it’s essential for economic viability. No enterprise will pay for a system that delivers variable answers to the same problem. Imagine a financial model that predicts different risk profiles each morning, or a drug simulation that shifts with the weather. Poor reliability can turn a valuable system into a costly liability when applied to real problems.
This is where a unified hardware–software environment becomes essential:
- Integrated control systems ensure that calibration, tuning, and execution are consistent across runs.
- AI-assisted feedback loops stabilise qubits, dynamically reducing drift and error rates.
- Cloud-native orchestration guarantees that a workload run today can be rerun tomorrow with identical conditions and outcomes.
By embedding reproducibility into the very design of the data centre, instead of treating it as an afterthought, we are closing the gap between experimental quantum devices and commercially viable platforms. The value of a Quantum-AI system is ultimately measured not just in its power, but in how reliably it can deliver that power day after day.
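As a minimal, hypothetical sketch of what this looks like in practice (the client interface below is illustrative only, not OQC’s actual SDK), a run can record the calibration snapshot it executed against, so that the same conditions can be requested again later:

```python
# Hypothetical sketch: pin the calibration snapshot used for a run so it can be
# replayed later under identical conditions. The client API shown is illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class RunRecord:
    circuit_id: str       # identifier of the compiled circuit
    calibration_id: str   # snapshot of control and calibration parameters
    shots: int
    results: dict         # measured bitstring counts

def run_pinned(client, circuit, shots=1000, calibration_id=None):
    """Execute a circuit against a specific calibration snapshot.

    If no snapshot is given, record the one current at submission time so the
    exact run can be reproduced tomorrow by passing the same calibration_id.
    """
    calibration_id = calibration_id or client.current_calibration()
    job = client.submit(circuit, shots=shots, calibration=calibration_id)
    return RunRecord(
        circuit_id=job.circuit_id,
        calibration_id=calibration_id,
        shots=shots,
        results=job.results(),
    )
```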
We’re not only building for better quantum processors today, we’re building for a reliable, fault-tolerant future of quantum computing – and that requires more than just qubits.
Why Data Centres?
Data centres have been refined for decades to deliver unmatched performance and reliability. That’s why they’re the foundation of our approach.
Our data centre partner Digital Realty has achieved an extraordinary 18 consecutive years of “five nines” uptime, a standard of availability that sets the benchmark for economically viable quantum computing. Data centre engineers are world experts in power reliability, cooling redundancy, and continuous monitoring. These capabilities are especially crucial when supporting the rapid innovations in quantum computing.
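For context, “five nines” means 99.999% availability, which allows only a handful of minutes of unplanned downtime in an entire year:

(1 − 0.99999) × 365 × 24 × 60 ≈ 5.3 minutes of downtime per year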
By embedding our quantum systems into this environment, we combine scientific innovation with industrial-grade infrastructure.
The result: quantum devices that are not just powerful in principle but trustworthy in practice. Network connectivity within the data centre further enables us to unify the runtime environment of quantum computing, while keeping space open for future innovation in connectivity.
The first generation of QPU to be deployed in our Quantum-AI data centre is OQC GENESIS, a device with 16 error-detected qubits built on our OQC Dimon™ architecture.
GENESIS is designed with reliability at its core. It incorporates error awareness and the ability to post-select against erasure events, providing more stable computational outcomes. The device is engineered from the ground up to effectively eliminate T1 errors (energy loss) while mitigating the effects of T2 errors (loss of coherence), two of the most pervasive challenges in superconducting quantum computing.
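As a minimal, hypothetical sketch of what post-selecting against erasure events can look like downstream (the per-shot record format here is an assumption for illustration, not GENESIS’s actual output), flagged shots are simply discarded before results are aggregated:

```python
# Hypothetical sketch: discard shots where an erasure event was flagged.
# The per-shot record format is an illustrative assumption.

def post_select_erasures(shots):
    """Keep only shots whose erasure flag is clear.

    Each shot is a dict like {"bits": "0110", "erasure_flag": False}.
    Returns the surviving bitstrings and the fraction of shots retained.
    """
    kept = [s["bits"] for s in shots if not s["erasure_flag"]]
    retained_fraction = len(kept) / len(shots) if shots else 0.0
    return kept, retained_fraction

# Example: three clean shots and one flagged erasure event.
raw_shots = [
    {"bits": "0000", "erasure_flag": False},
    {"bits": "0001", "erasure_flag": True},   # erasure detected: drop this shot
    {"bits": "0000", "erasure_flag": False},
    {"bits": "0000", "erasure_flag": False},
]
clean_bits, retained = post_select_erasures(raw_shots)
print(clean_bits, retained)   # ['0000', '0000', '0000'] 0.75
```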
By housing GENESIS within a data centre-optimised cryogenic system, we align the constraints of quantum hardware with the strengths of enterprise-grade infrastructure. This co-design ensures that GENESIS is not just a step forward in performance, but a scalable platform for the transition to fully error-corrected devices.
A quantum computer cannot exist in isolation: to reach its full potential, it must be embedded in a high-performance computing (HPC) environment that amplifies its strengths and compensates for its current limitations.
In our Quantum-AI Data Centre, the HPC layer plays multiple critical roles today:
- Calibration and Monitoring: AI-driven compute nodes constantly analyse telemetry from the QPU, fine-tuning control pulses, detecting drift, and stabilising qubit behaviour in real time. This continuous loop between hardware and HPC dramatically improves reproducibility and uptime.
- Digital Twins: These will play a key role in improved simulation of the QPU for testing and development.
- Compilation: See our recent pre-print in collaboration with NVIDIA and Q-Ctrl on how GPUs can accelerate critical bottlenecks in quantum compilation.
- Error Mitigation and Correction: Classical processing to suppress and mitigate dominant errors, with an eye towards error correction.
- Hybrid Workloads: The most promising quantum applications are hybrid by design, requiring seamless hand-offs between quantum and classical processors. By co-locating QPUs with GPU and CPU clusters in the same data centre fabric, we minimise latency and enable truly interactive hybrid computation.
See our previous blog post for further details on how we integrate quantum computing with NVIDIA accelerated computing in a data centre environment, as well as our recent pre-print on our HPC integration in partnership with Fujitsu at the CESGA supercomputing centre. This integration means that every experiment benefits from the same industrial-grade environment: tightly coupled, continuously optimised, and scalable on demand.
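To make the hybrid-workload point concrete, here is a minimal, hypothetical sketch of a variational quantum-classical loop; the qpu.evaluate interface is an illustrative stand-in, not any real OQC API. Every iteration crosses the quantum-classical boundary, which is exactly why co-locating QPUs with GPU and CPU clusters matters for latency:

```python
# Illustrative variational loop: a classical optimiser on the HPC side proposes
# parameters, a (hypothetical) QPU interface evaluates a cost, and the result
# feeds back into the next proposal.
import numpy as np

def hybrid_optimise(qpu, n_params, iterations=100, lr=0.1, step=1e-2, rng=None):
    rng = rng or np.random.default_rng(0)
    params = rng.uniform(0, 2 * np.pi, n_params)
    cost = qpu.evaluate(params)                        # quantum step (QPU)
    for _ in range(iterations):
        # Finite-difference gradient estimate: one extra QPU call per parameter.
        grad = np.array([
            (qpu.evaluate(params + eps) - cost) / step
            for eps in np.eye(n_params) * step
        ])
        params -= lr * grad                            # classical step (HPC)
        cost = qpu.evaluate(params)
    return params, cost
```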
The OQC Quantum-AI Data Centre
For quantum computing to create real impact, it must be usable by developers and researchers without specialist lab training. That’s why we are designing not just the hardware, but the entire user journey of building quantum applications.
A key aspect of the Quantum-AI Data Centre is its entry points: by design, it allows secure access at different levels of the quantum computing stack. This lets us build a much richer ecosystem of possibilities, including:
- Quantum programming interfaces for writing and optimising circuits that work natively in developers’ existing environments.
- AI-integrated toolchains for error mitigation, circuit compilation, and hybrid workflows.
- Cloud-native APIs that provide secure, reproducible, and shareable access to quantum resources.
The result is an experience where designing, deploying, and iterating on quantum applications feels as seamless as working in today’s most advanced cloud platforms. Developers don’t need to worry about hardware complexities like cryogenic conditions or control electronics. What remains is a clean, unified environment that accelerates learning cycles and empowers innovation.
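As an illustration of what cloud-native access could feel like, here is a hypothetical sketch of submitting a circuit to a REST-style jobs endpoint; the URL, payload fields, and authentication shown are assumptions for illustration, not the documented OQC API:

```python
# Hypothetical sketch of a cloud-native submission flow. The endpoint and
# payload fields are illustrative assumptions, not a documented API.
import json
import urllib.request

def submit_circuit(base_url, token, program, shots=1000):
    """POST a circuit to a (hypothetical) jobs endpoint and return the job id."""
    payload = json.dumps({"program": program, "shots": shots}).encode()
    request = urllib.request.Request(
        f"{base_url}/v1/jobs",
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["job_id"]
```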
The Path To Fault-Tolerant Computing
Every decision in our data centre design points toward one goal: fault-tolerant quantum computing.
OQC GENESIS, our first-generation error-detected QPU, is a milestone on that path. By embedding it in an environment optimised for reliability and integration, we create the foundation for scalable error correction. As devices grow from tens to hundreds to thousands of qubits, in line with our roadmap, the surrounding data centre infrastructure ensures that power, cooling, monitoring, and orchestration all scale with them.
This setup allows us to shorten the journey to fault tolerance in three ways:
- Consistent Infrastructure: Stable, reproducible environments ensure that advances in algorithms and control systems are transferable across generations of hardware.
- Accelerated Iteration: Unified HPC + quantum + AI integration compresses development cycles, allowing breakthroughs to move from concept to deployment faster.
- Economic Viability: By embedding reliability and reproducibility into the platform from day one, we build the trust needed for enterprises to invest at scale.
Fault tolerance is not a distant dream, it is the direction of travel. Towards this goal, we are already working with Riverlane on building a QEC test-bed in a UK data centre environment, developing critical capabilities such as real-time decoding and digital twins. With this and all our data centre environments, we are ensuring that every step we take today brings that fault-tolerant future closer.
Real-Life Application, For Results
The true promise of our Quantum-AI data centre lies in the new applications it enables that simply weren’t possible before. By embedding quantum processors and AI accelerators in the same environment, we create a feedback loop where each technology amplifies the other: AI accelerating the path to useful QPUs, which can in turn accelerate progress in AI.
It was recently shown that many quantum advantages cannot be predicted by classical means; this underscores the need for reliable and performant QPUs to demonstrate the full potential of quantum computing. Quantum hardware is powerful but delicate, requiring constant calibration, error suppression, and intelligent control, tasks that AI is uniquely well suited to handle.
In our data centre, AI models and HPC compute run side-by-side with quantum processors. This enables many applications, including real-time calibration for the control and monitoring of the QPUs, error mitigation, and automated optimisation within the compiler. It also enables new work towards automatic discovery of when a dataset is well suited to Quantum-AI and how to best leverage the power of quantum computing.
The result is that every quantum run becomes smarter, faster, and more reproducible, accelerating the path from raw hardware to usable computation.
The relationship doesn’t stop there. Quantum computing can also unlock new frontiers for artificial intelligence itself.
Within our hybrid environment, researchers will be able to explore:
- Novel Orchestration: Such as training on classical resources and deploying on quantum computers.
- Quantum-Accelerated Training: Tackling bottlenecks in training, from hyperparameter search to resource allocation in large-scale AI models.
- Quantum Sampling For Generative Models: Leveraging quantum distributions to enrich creativity and diversity in generative AI outputs.
By hosting quantum and AI workloads in the same data centre fabric, we reduce the latency and friction that have historically slowed progress in this field. Developers can run truly integrated experiments, testing how quantum circuits can improve AI models, and how AI can make quantum computing more reliable. This is critical to realising the groundbreaking discoveries we expect to see across materials discovery, quantum chemistry, fundamental physics, financial modelling and national security.
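As a minimal sketch of the quantum-sampling idea (assuming Qiskit and its Aer simulator are installed; OQC’s actual toolchain may differ), one can draw bitstrings from a small entangling circuit and feed them to a classical generative model as structured random inputs:

```python
# Illustrative sketch using Qiskit's Aer simulator: sample bitstrings from a
# small entangling circuit for use as inputs to a classical generative model.
import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

qc = QuantumCircuit(4)
qc.h(range(4))                 # put all qubits in superposition
for q in range(3):
    qc.cx(q, q + 1)            # entangle neighbouring qubits
qc.measure_all()

backend = AerSimulator()
counts = backend.run(transpile(qc, backend), shots=1024).result().get_counts()

# Expand counts into individual bit vectors, e.g. to replace uniform random
# bits as latent inputs for a classical generative model.
samples = np.array([
    [int(b) for b in bitstring]
    for bitstring, n in counts.items()
    for _ in range(n)
])
print(samples.shape)           # (1024, 4)
```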
Our Quantum-AI Data Centre is designed to be the proving ground for this synergy, a place where the next breakthroughs in science, industry, and intelligence can emerge.
Comprehensively Selected Partnerships
No single company will deliver fault-tolerant quantum computing alone. The breakthroughs that will transform industries will come from an ecosystem: hardware innovators, algorithm designers, middleware developers, and software tool creators, working together seamlessly.
Today, that ecosystem is rich but fragmented. Brilliant startups, research groups, and tool providers are producing extraordinary capabilities: new compilers, error mitigation strategies, quantum-inspired algorithms, and developer frameworks. Yet combining these tools into a coherent workflow is often cumbersome. Researchers and enterprises alike face a patchwork of interfaces, protocols, and environments that slow innovation and limit adoption.
By embedding quantum processors within a data centre-grade environment and exposing them through a unified runtime platform, we create a space where these tools can come together natively.
- Algorithm developers can test their innovations directly against reliable, reproducible hardware.
- Middleware providers can integrate their solutions into a real-time hybrid environment.
- Software tool creators can deliver developer-friendly experiences that work out of the box.
We see this data centre not just as a facility, but as a collaboration hub. A place where the best of quantum algorithms, middleware, and enabling software will be co-developed into a coherent, powerful stack.
This is how the quantum ecosystem matures: by turning fragmented innovation into integrated capability. Together with our partners, we are building the environment where quantum computing moves from promise to practice.
Quantum computing has reached its inflection point. The science is real and the potential is vast, but unlocking it requires more than individual breakthroughs. Creating the fertile ground for the next wave of innovations is at the core of the Quantum-AI Data Centre approach: hardware and software, quantum and AI, innovation and infrastructure.
With our Quantum-AI Data Centre in New York City, OQC is building not just a machine, but an environment for discovery. One where ideas can be tested, scaled, and trusted. One where the best tools in the ecosystem come together into a unified whole. One where the journey to fault tolerance accelerates, and the future arrives faster.
Let’s not forget that a novel approach to graphics problems, driven initially by gamers, has evolved beyond imagination to power the AI boom. That was one form of acceleration provided by the graphics processing unit. Quantum looks to follow a similar road.
If you are a researcher pushing the boundaries of quantum algorithms, a startup creating powerful middleware, an enterprise exploring new applications, or a developer who simply wants to build, we invite you to join us.