Hey everyone,
We keep hearing about the push to connect high-performance computing (HPC) systems and quantum computers. But what does it actually take to place a quantum computer inside a supercomputing facility? How do you power it, cool it, and make it play nicely with the classical nodes?
That’s exactly what IQM and the Leibniz Supercomputing Centre (LRZ) explored in a recent case study [1]: installing and operating a 20-qubit superconducting quantum computer in an HPC center. The paper stood out to me because it moves beyond concepts, documenting what hybrid operation looks like in practice: from site selection and environmental constraints to software orchestration, automated calibration, and user training.
So let’s look at the key questions this real-world experiment can help us answer.
How much power does a quantum computer use?
Surprisingly little. The 20-qubit system has a peak power consumption of 30 kW during its cooldown phase. For comparison, this is significantly less than a single classical Cray EX4000 cabinet, which can draw up to ~140 kW.
From a power-budget perspective, quantum computers are far less demanding than many think. The challenge isn’t electricity. It’s the specifics of cooling. The system’s cryogenic components require cooling water between 15°C and 25°C, while many HPC cooling loops run at warmer temperatures, sometimes up to 45°C.
How sensitive is it to noise and vibration?
Very. Superconducting qubits can lose coherence from magnetic, acoustic, or mechanical noise. LRZ engineers measured floor vibrations, magnetic fields, and sound levels for over 25 hours before installation, discovering that even nearby trams or loud music (Finnish death metal, in this case) can introduce measurable noise.
With that data, they selected a location where the temperature changed by less than 1°C over 24 hours and vibrations stayed within the ISO vibration limit for office spaces. The takeaway is clear: HPC centers can host superconducting systems, but only with rigorous site surveys and preparation.
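To make that concrete, here is a minimal sketch of what such a screening could look like in code. The threshold names and values are illustrative placeholders loosely based on the constraints above (temperature drift, vibration, sound), not LRZ’s actual survey criteria.

```python
# Minimal sketch: screening a candidate site against environmental limits.
# All names and numbers below are illustrative, not LRZ's real criteria.

SITE_LIMITS = {
    "temp_drift_c_per_24h": 1.0,  # < 1 °C change over 24 hours (from the paper)
    "vibration_um_s": 100.0,      # hypothetical ISO office-class limit (µm/s)
    "sound_level_dba": 60.0,      # hypothetical acoustic ceiling
}

def site_is_suitable(measurements: dict[str, float]) -> bool:
    """Return True if every measured worst-case value stays within its limit."""
    return all(
        measurements.get(key, float("inf")) <= limit
        for key, limit in SITE_LIMITS.items()
    )

# Example: a 25-hour survey summarized into worst-case values.
survey = {"temp_drift_c_per_24h": 0.8, "vibration_um_s": 42.0, "sound_level_dba": 55.0}
print(site_is_suitable(survey))  # True -> candidate location passes
```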
How do you connect it to an HPC cluster?
Through software. The connection is managed by the Munich Quantum Software Stack (MQSS), which provides two integration modes: asynchronous API access and a tightly coupled accelerator mode for HPC workflows. It supports multiple frontends, such as Qiskit and PennyLane, and uses a flexible, MLIR-based compiler to translate user code.
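To give a feel for the frontend side, here is a minimal Qiskit sketch. I don’t have the MQSS submission details at hand, so the backend below is a local Qiskit simulator standing in for the 20-qubit IQM system; only the circuit construction and the transpile-and-run pattern carry over.

```python
# Minimal sketch of the asynchronous-access idea, with Qiskit as the frontend.
from qiskit import QuantumCircuit, transpile
from qiskit.providers.basic_provider import BasicProvider  # local stand-in

# A small Bell-state circuit, written against a standard frontend.
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

# Stand-in backend so the sketch runs anywhere; in the MQSS setup this
# handle would instead resolve to the 20-qubit IQM system.
backend = BasicProvider().get_backend("basic_simulator")
job = backend.run(transpile(qc, backend), shots=1024)
print(job.result().get_counts())  # e.g. {'00': ~512, '11': ~512}
```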
Another key feature is the Quantum Device Management Interface (QDMI), which gives the compiler live data about the hardware's status, such as current noise characteristics, at runtime. This enables adaptive compilation, a technique that can reduce the effect of noise.
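Here is a toy illustration of that idea: given a made-up snapshot of live two-qubit gate fidelities, a compiler pass could place a two-qubit gate on the currently best-performing coupler. QDMI’s real interface is of course richer than this.

```python
# Toy illustration of adaptive compilation: pick the coupler with the
# highest fidelity from a live calibration snapshot. The snapshot format
# is invented for this example; it is not the actual QDMI data model.

calibration_snapshot = {
    # (qubit_a, qubit_b): two-qubit gate fidelity reported at runtime
    (0, 1): 0.991,
    (1, 2): 0.978,
    (2, 3): 0.995,
}

def best_coupler(snapshot: dict[tuple[int, int], float]) -> tuple[int, int]:
    """Return the qubit pair with the highest reported gate fidelity."""
    return max(snapshot, key=snapshot.get)

pair = best_coupler(calibration_snapshot)
print(f"Map the two-qubit gate onto qubits {pair}")  # -> qubits (2, 3)
```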
How do you keep qubits calibrated?
You don’t. The control system does. The system at LRZ ran continuously for more than 100 days without manual recalibration, thanks to automated routines driven by the HPC center's scheduler. Operators can choose between a quick calibration cycle (~40 minutes) to optimize for uptime and a full cycle (~100 minutes) to maximize fidelity. This is where hybrid control blurs the line between maintenance and computation: calibration itself becomes a schedulable part of the workflow.
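As a thought experiment, calibration-as-a-job could look something like this. The two cycle durations come from the paper; the decision rule is an invented illustration, not the policy LRZ’s scheduler actually implements.

```python
# Sketch: calibration as a schedulable job. Durations are from the paper;
# the fidelity-threshold policy is an invented example.

QUICK_CAL_MIN = 40   # quick cycle, optimizes for uptime
FULL_CAL_MIN = 100   # full cycle, maximizes fidelity

def next_calibration_job(measured_fidelity: float,
                         fidelity_floor: float = 0.97) -> dict:
    """Return a pseudo job description the HPC scheduler could enqueue."""
    if measured_fidelity < fidelity_floor:
        return {"job": "calibrate-full", "walltime_min": FULL_CAL_MIN}
    return {"job": "calibrate-quick", "walltime_min": QUICK_CAL_MIN}

print(next_calibration_job(0.965))  # {'job': 'calibrate-full', 'walltime_min': 100}
print(next_calibration_job(0.985))  # {'job': 'calibrate-quick', 'walltime_min': 40}
```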
What happens if the system fails?
Cooling or power outages are expensive in time, not necessarily in hardware. If the cryostat warms above 1 K, the calibration state can be lost, and a full cooldown and recovery can take two to five days. That’s why redundant cooling and uninterruptible power supplies are essential for mitigating downtime: the hardware itself is robust, so the primary bottleneck after an outage is the thermal recovery process.
Can HPC users actually use the quantum system?
Yes, but not without structured support. LRZ ran a dedicated onboarding program that split users into two groups: quantum experts and traditional HPC scientists. Each received tailored mentorship, tutorials, and hands-on sessions with Jupyter notebooks. The result was tangible: the first user projects have already produced publications and preprints. It’s a reminder that the human interface often matters as much as the hardware one.
What’s the takeaway?
HPC+QC integration is rapidly moving from science fiction to operational reality. While pioneers like IBM and OQC have also co-located systems in data centers, this detailed case study from IQM and LRZ offers a transparent roadmap for what it takes. It shows that superconducting quantum systems can behave like first-class citizens in an HPC environment when their unique requirements for calibration, control, and cooling are designed into the compute fabric.
As this trend continues, we’ll undoubtedly see many more such integrations. The question is no longer if these two worlds will merge, but how tightly we can couple them, and what new capabilities will emerge from this quantum–classical interface.
Have a great day,

References
[1] Mansfield, Eric, et al. "First Practical Experiences Integrating Quantum Computers with HPC Resources: A Case Study With a 20-qubit Superconducting Quantum Computer." arXiv preprint arXiv:2509.12949 (2025).
Thumbnail Source: LRZ, https://www.lrz.de/en/technologies/quantum-computing