- Michaela Eichinger, PhD
New Paths to a Perfect Measurement
Exploring the next generation of high-fidelity qubit readout
Hey everyone,
If you've been following quantum hardware news this year, you’ve probably noticed the same thing I have: a wave of creative new approaches to qubit readout is emerging from labs across the globe. For years, measurement was the Achilles' heel of superconducting quantum computers - slower and less accurate than our best quantum gates. Now, in 2025, that's beginning to change.
Let's dive into why this is happening and explore some of the cool new ideas lighting up the field.
The Classic Readout Dilemma
First, a quick refresher on why measurement is so hard. To build a useful, error-corrected quantum computer, we need to be able to check our qubits' states quickly and reliably without disturbing them, a property we call Quantum Non-Demolition (QND).
The standard method, dispersive readout, works by linking a qubit to a microwave resonator. The qubit's state (is it a 0 or a 1?) causes a tiny shift in the resonator's frequency, which we can then detect.
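The state-dependent frequency pull can be sketched in a few lines. This is a toy model, not a device simulation: the linewidth and dispersive shift values are made up, and the phase response is an idealized Lorentzian.

```python
import numpy as np

# Toy dispersive readout: the qubit state pulls the readout resonator
# frequency by +/- chi, and we infer the state from the phase of the
# probe tone. All parameters are illustrative, not from a real device.
kappa = 2 * np.pi * 5e6      # resonator linewidth (rad/s), assumed
chi   = 2 * np.pi * 1e6      # dispersive shift (rad/s), assumed

def probe_phase(qubit_state, probe_detuning=0.0):
    """Phase (rad) of the transmitted probe for a qubit in |0> or |1>."""
    pull = chi if qubit_state == 1 else -chi   # state-dependent frequency pull
    delta = probe_detuning - pull              # detuning from the pulled resonator
    return np.arctan2(2 * delta, kappa)        # Lorentzian phase response

phi0 = probe_phase(0)
phi1 = probe_phase(1)
print(f"phase for |0>: {phi0:+.3f} rad, phase for |1>: {phi1:+.3f} rad")
```

Probing at the bare resonator frequency gives equal and opposite phases for the two qubit states, which is the signal an amplifier chain then has to resolve above the noise.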
The problem is that to get a fast, clear signal, you need to probe the resonator with a lot of microwave photons. But a powerful pulse can cause two major problems:
The Purcell Effect: The qubit can decay by leaking its energy into the environment through the very resonator it's connected to, fundamentally limiting the qubit's lifetime.
Measurement-Induced Transitions: Too many photons can activate unwanted multi-excitation resonances, literally knocking the qubit out of its computational state into higher energy levels. This is a catastrophic error that quantum error correction codes aren't designed to handle.
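The trade-off in that list can be made concrete with a crude scaling model: the signal-to-noise ratio grows like the square root of the mean photon number, while the probability of a measurement-induced transition grows with it too. The rate constants below are invented for illustration; real devices need a full multilevel simulation.

```python
import numpy as np

# Crude readout trade-off: SNR improves as sqrt(n_bar), but the chance
# of kicking the qubit out of its computational state also grows with
# n_bar. Both scalings and all constants are illustrative assumptions.
def snr(n_bar, t_meas=1.0):
    return np.sqrt(n_bar * t_meas)              # shot-noise-limited scaling

def transition_prob(n_bar, rate_per_photon=1e-3, t_meas=1.0):
    # exponential saturation of a photon-number-dependent error rate
    return 1.0 - np.exp(-rate_per_photon * n_bar * t_meas)

for n in (1, 10, 100):
    print(f"n_bar={n:4d}  SNR={snr(n):5.1f}  P(transition)={transition_prob(n):.4f}")
```

Cranking up the photon number buys signal linearly in SNR-squared but pays for it in qubit errors, which is exactly the dilemma the three proposals below try to dissolve.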
For years, we've been stuck in this trade-off: measure gently and be slow and noisy, or measure strongly and risk breaking the computation. But now, several teams have proposed brilliant ways to escape this dilemma.
Three Paths to a Perfect Measurement
This year, three papers (two experimental, one theory) in particular showcase distinct and powerful strategies for next-generation readout.
1. The High-Frequency Escape (Devoret Group, Yale)
This approach tackles the problem of unwanted resonances by creating a massive frequency difference between the qubit and the readout resonator. In their experiment, the readout frequency was twelve times higher than the qubit frequency.
The Core Idea: The unwanted transitions that cause leakage are typically multi-photon processes, in which the transmon absorbs several readout photons at once. The strength of these processes decreases exponentially as the readout tone is detuned from the qubit's own frequency. By making the readout frequency drastically higher, they effectively "turn off" these unwanted transition pathways, allowing for a powerful measurement pulse that doesn't excite the qubit.
This elegant solution led to a stunning 99.93% QND fidelity with a leakage probability of just 0.02%. The huge frequency separation also provides natural, robust protection against the Purcell effect, eliminating the need for a Purcell filter.

Source [1]
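A hand-waving way to see why the suppression is so strong: in perturbation theory, a leakage process that needs m probe photons picks up an amplitude of roughly (g/Δ) per photon. Pushing the readout far above the qubit frequency raises both the detuning Δ and the order m needed to bridge a given transmon transition. The numbers below are illustrative, not taken from the Yale experiment.

```python
# Toy perturbative estimate of multi-photon leakage suppression.
# A process needing m probe photons has amplitude ~ (g_eff / delta)^m;
# a larger readout/qubit frequency ratio increases both delta and m.
# g_eff and the m-vs-ratio rule are assumptions for illustration.
g_eff = 0.05   # normalized effective coupling, assumed

def leakage_amplitude(freq_ratio):
    """Crude leakage amplitude vs. readout/qubit frequency ratio."""
    delta = freq_ratio - 1.0             # normalized qubit-readout detuning
    m = max(2, int(round(freq_ratio)))   # photon order, grows with the ratio
    return (g_eff / delta) ** m

for r in (2, 4, 12):
    print(f"readout/qubit = {r:2d}  leakage amplitude ~ {leakage_amplitude(r):.2e}")
```

Even this cartoon shows amplitudes collapsing by many orders of magnitude as the frequency ratio grows, which is the qualitative mechanism behind the 12x separation.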
2. Balanced Cancellation (Blais Group, Sherbrooke)
Where the first approach was about avoidance, this one is about active cancellation within the circuit itself. The team proposes a "junction readout" circuit that uses two parallel pathways for the qubit-resonator interaction: one through a Josephson junction and another through a standard capacitor.
The Core Idea: The design couples the qubit to the resonator via a Josephson junction, which provides the necessary nonlinear interaction for readout. However, this junction also introduces a linear coupling that causes energy loss (Purcell decay). The innovation is adding a capacitor in parallel. This capacitor also introduces a linear coupling, but its effect can be engineered to be opposite to that of the junction. The two unwanted linear interactions destructively interfere and cancel each other out, leaving only the desired nonlinear cross-Kerr interaction for a clean, protected readout.
This design promises to be robust to fabrication imperfections, and simulations predict it can achieve fidelities exceeding 99.99% in under 30 ns - all while being intrinsically Purcell protected.

Source [2]
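The cancellation condition itself is simple to sketch. In this toy version, the junction contributes a linear coupling g_J and the capacitive path a coupling g_C of opposite sign; choosing the capacitance so the two cancel leaves only the cross-Kerr term. The linear relation g_C = -k·C is a stand-in for the real circuit-quantization result, and all values are invented.

```python
# Toy balanced-coupling condition: junction path gives +g_J, capacitive
# path gives -k*C, and the residual linear coupling vanishes at the
# balanced capacitance. k and g_J are illustrative assumptions.
k   = 2.0e6          # linear coupling per unit capacitance (arb. units), assumed
g_J = 8.0e6          # linear coupling via the junction (arb. units), assumed

def residual_linear_coupling(C):
    g_C = -k * C                 # capacitive path, engineered opposite sign
    return g_J + g_C             # what survives the destructive interference

C_balanced = g_J / k             # capacitance at which the two paths cancel
print(f"balanced C = {C_balanced:.2f}  residual = {residual_linear_coupling(C_balanced):.1e}")
```

The paper's robustness claim corresponds to this residual staying small even when C drifts a few percent from the balanced point, since the error grows only linearly in the drift.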
3. The Isolated Ancilla (Buisson & Roch Groups, Grenoble/Madrid)
The third strategy focuses on isolation, creating a protective buffer between the fragile qubit and the powerful measurement pulse. This "transmon molecule" design uses a three-part system: the qubit, an intermediary "ancilla" circuit, and the readout resonator.
The Core Idea: This architecture physically separates the qubit from the high-power measurement. The qubit is gently connected to an intermediary quantum circuit (an "ancilla") through a purely nonlinear coupling. This ancilla is then strongly coupled to the main readout resonator. The qubit's state affects the ancilla, and it's the ancilla's properties that are then measured by the powerful external pulse. The qubit itself never directly "feels" the strong measurement tone, ensuring it remains protected.
This work demonstrated the concept's power by achieving 99.21% fidelity with a very high readout power of 89 photons, proving the architecture's incredible robustness.

Source [3]
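Why does a purely nonlinear coupling protect the qubit? Because a cross-Kerr interaction commutes with the qubit excitation number, so the measurement cannot change the qubit's state - the textbook QND condition - while a linear exchange coupling does not commute. This can be checked numerically in a small truncated Hilbert space; the operators below are generic, not the specific circuit of the Grenoble/Madrid experiment.

```python
import numpy as np

# QND check in a tiny Hilbert space (2-level qubit, 3-level ancilla):
# the cross-Kerr term n_q (x) n_a commutes with the qubit number n_q,
# while a linear exchange term q a† + q† a does not.
dq, da = 2, 3
n_q = np.diag(np.arange(dq))
n_a = np.diag(np.arange(da))
q = np.diag(np.sqrt(np.arange(1, dq)), 1)   # qubit lowering operator
a = np.diag(np.sqrt(np.arange(1, da)), 1)   # ancilla lowering operator
I_a = np.eye(da)

H_kerr = np.kron(n_q, n_a)                               # cross-Kerr (chi = 1)
H_lin  = np.kron(q, a.conj().T) + np.kron(q.conj().T, a) # linear exchange (g = 1)
N_q = np.kron(n_q, I_a)                                  # qubit number, full space

comm = lambda A, B: A @ B - B @ A
kerr_comm = np.linalg.norm(comm(H_kerr, N_q))
lin_comm  = np.linalg.norm(comm(H_lin, N_q))
print(f"||[H_kerr, N_q]|| = {kerr_comm:.2e}   ||[H_lin, N_q]|| = {lin_comm:.2e}")
```

The vanishing commutator is what lets the ancilla absorb the full force of the 89-photon probe while the qubit's populations stay untouched.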
The Broader Landscape
These three papers are part of a larger wave of innovation. We shouldn't forget other creative ideas, like Mikko Möttönen's calorimetric readout, which takes a completely different approach by measuring the tiny burst of heat released when a qubit decays, or the work at SEEQC using Single Flux Quantum (SFQ) logic to build digital Josephson phase detectors.
It's clear that the community is no longer accepting the old trade-offs. We are seeing a renaissance in readout, driven by a deeper understanding of the underlying physics and new circuit designs.
Industry Check-In
So, how do these new results stack up against the current state of the art? Major players like Google and IBM have made incredible progress. Google's latest Willow processor, for example, has demonstrated a median readout fidelity of 99.23%. Similarly, IBM's Heron processors show an impressive median fidelity of approximately 99.27%.
It is truly amazing how far these teams have pushed the standard dispersive readout model through heroic efforts in engineering. But is perfecting the current paradigm enough for the massive scale of fault-tolerant machines?
Fundamental limitations remain, and this is why these new approaches are so exciting. They aren't just optimisations - they are paradigm shifts, moving from fighting readout errors to designing them out of the system entirely.
Until next time,

References
[1] Pavel D. Kurilovich, Thomas Connolly, et al. "High-frequency readout free from transmon multi-excitation resonances." arXiv:2501.09161v1 [quant-ph] (2025).
[2] Alex A. Chapple, Othmane Benhayoune-Khadraoui, et al. "Balanced cross-Kerr coupling for superconducting qubit readout." arXiv:2501.09010v1 [quant-ph] (2025).
[3] Cyril Mori, Vladimir Milchakov, et al. "High-power readout of a transmon qubit using a nonlinear coupling." arXiv:2507.03642v1 [quant-ph] (2025).