DQC State of the Art Today: the TLDR Overview

14th of October 2025

In just seven years, DQC went from theoretical sketches to experimental claims of distribution, though we are still far from practical DQC. This post lists what's missing, what we have, and what you can actually try today.


Distributed quantum computing is young. The first theoretical papers appeared around 2018, and already we have experimental claims of distribution.
Xanadu's Aurora, announced about a year ago, was the first machine publicly claimed to be a distributed quantum computer. The device itself has not been publicly released, but its photonic architecture is interesting because photons are the flying qubits that naturally carry quantum information between modules. In a sense, photonic quantum computers are distributed by design. You can move atoms or ions between locations by optical transport, but that approach is unlikely to scale over long distances, which is why most visions of quantum networking rely on photons as the carriers of entanglement.
Oxford's recent experiment claiming a distributed algorithm (technically a teleportation between two processors rather than a computation across them) marked another small but meaningful step.
In just seven years, the field has moved from theoretical sketches to physical demonstrations of distribution.

Still, we are missing the shared backbone that would let us test these ideas end-to-end. We have early-stage quantum computers and networks, but they rarely meet within a single experimental setup. A real shame, especially given that many institutions hold both under the same roof.
With no public distributed testbeds, we cannot yet benchmark distributed workloads or reproduce results across groups. It also means we lack the measurements needed to calibrate distributed quantum simulators.
On the other side of the spectrum, compiler stacks that treat communication as a first-class resource with explicit costs are few, and even fewer integrate real network models. Error models for quantum channels are scattered and hard to compare across platforms. Orchestration layers that manage heterogeneous devices and track entanglement as a consumable are only starting to appear.
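
To make "communication as a first-class resource" concrete, here is a minimal sketch, in plain Python, of what an explicit communication cost could look like during partitioning. It is my own toy illustration, not the API of any existing compiler: any gate whose operands sit on different nodes is simply charged a much higher cost, standing in for the EPR pair, classical messages, and latency it would consume.

```python
# Toy sketch (my own illustration, not any existing compiler's API):
# pricing communication explicitly when a circuit is split across nodes.

from dataclasses import dataclass

@dataclass
class Gate:
    qubits: tuple[int, ...]   # logical qubit indices this gate touches

def communication_cost(circuit: list[Gate],
                       partition: dict[int, int],
                       remote_gate_cost: float = 10.0,
                       local_gate_cost: float = 1.0) -> float:
    """Total cost of a circuit under a given qubit-to-node partition.

    Any multi-qubit gate whose operands live on different nodes is charged
    the (much higher) remote cost, standing in for the EPR pair, classical
    messages, and latency it would consume.
    """
    cost = 0.0
    for gate in circuit:
        nodes = {partition[q] for q in gate.qubits}
        cost += remote_gate_cost if len(nodes) > 1 else local_gate_cost
    return cost

# Example: a 4-qubit circuit split across two nodes.
circuit = [Gate((0, 1)), Gate((1, 2)), Gate((2, 3)), Gate((0, 3))]
partition = {0: 0, 1: 0, 2: 1, 3: 1}   # qubits 0,1 on node 0; 2,3 on node 1
print(communication_cost(circuit, partition))  # 2 local + 2 remote = 22.0
```

A real compiler stack would replace those two constants with a calibrated network model, which is exactly the piece we are missing.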

In other words, we are missing the shared infrastructure and public access that would make distributed quantum computing a testable reality. So what are researchers doing in the meantime?

Emulation & Simulation

Emulation refers to running real quantum programs today while mimicking distribution mechanics. Simulation refers to classically modeling quantum devices and networks.
While we wait for accessible distributed hardware, we can emulate quantum channels through two methods:
We also have simulators, often the best worst option. Simulators let us model quantum networks and distributed protocols far beyond what emulation can reach: we can design and compare protocols, study scaling, and build the architectural groundwork for tomorrow's devices. Yet their results must always be interpreted with care, precisely because they are not calibrated against real devices. At a more fundamental level, simulating quantum mechanics on classical machines is exactly the limitation that got us into the business of building quantum computers in the first place. Normalizing simulators as the default research tool also leads to papers reporting DQC "tests" on 8-qubit simulations, which runs against the whole point of DQC, scalability, and reflects none of the complexities a large-scale distributed system would actually face. Still, for now, these simulations are what we have, and I am perhaps too young to play the academic downer here.
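
To give a flavour of what even a tiny simulator can tell you (and how uncalibrated its inputs are), here is a self-contained NumPy sketch of a single question: how fast does the fidelity of a shared Bell pair drop under a depolarizing "link" channel? The channel model and its strength are illustrative assumptions, not numbers from any real device.

```python
# Minimal simulator-style calculation: fidelity of a Bell pair shared
# between two nodes after a depolarizing "link" channel. Plain NumPy,
# no framework assumed; the noise strengths below are made up.

import numpy as np

def bell_state_dm() -> np.ndarray:
    """Density matrix of |Phi+> = (|00> + |11>)/sqrt(2)."""
    psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    return np.outer(psi, psi.conj())

def depolarize(rho: np.ndarray, p: float) -> np.ndarray:
    """Two-qubit depolarizing channel: keep rho with prob 1-p, else maximally mixed."""
    dim = rho.shape[0]
    return (1 - p) * rho + p * np.eye(dim) / dim

def fidelity_with_bell(rho: np.ndarray) -> float:
    """Fidelity <Phi+|rho|Phi+> with the ideal Bell pair."""
    psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    return float(np.real(psi.conj() @ rho @ psi))

for p in (0.0, 0.05, 0.2, 0.5):
    rho = depolarize(bell_state_dm(), p)
    print(f"link depolarization p={p:.2f} -> Bell fidelity {fidelity_with_bell(rho):.3f}")
```

Everything interesting, of course, hides in how that p relates to an actual link, which is precisely what we cannot measure without a testbed.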

If you want to explore DQC in simulation, a handful of simulators and toolkits are the common choices. Notably, companies also seem increasingly interested in distributed compilers: both Welinq's araQne and Cisco's network-aware compiler have been announced as attempts to link quantum compilation with networking, though neither is publicly available yet. Despite the growing interest, none of these options delivers a public end-to-end DQC stack, but together they cover protocol design, network effects, and compiler-informed slicing. Even the best simulations and emulations, however, fall short for one main reason: they can't capture how noise behaves in real distributed setups.

The Noisy Reality

Noise is the main reason simulators and emulators fall short. It dominates fidelity and defines whether any distributed computation survives execution. Early studies already show that, when modeled with the same internal noise levels, network channels can inject up to an order of magnitude more loss than local gates and measurements [Campbell et al. 2022]. The difference comes from how network noise behaves: it depends on distance, link quality, timing, and how qubits are moved or entangled across space. That spatial dependence cannot be emulated within a single chip, which makes calibration against real hardware impossible for now.
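
As a back-of-the-envelope illustration of that distance dependence (with toy constants of my own choosing, not numbers from the cited study): photon loss in telecom fiber grows exponentially with length at roughly 0.2 dB/km, so the raw entanglement-generation rate between two nodes falls off the same way, something no amount of on-chip emulation will reproduce.

```python
# Toy numbers to illustrate the distance dependence of link noise
# (illustrative constants, not measurements from any cited paper).

attempt_rate_hz = 1e6        # assumed entanglement-attempt rate
alpha_db_per_km = 0.2        # typical telecom-fiber attenuation

def photon_survival(length_km: float) -> float:
    """Probability that a photon survives `length_km` of fiber."""
    return 10 ** (-alpha_db_per_km * length_km / 10)

for length_km in (0.001, 1, 10, 50, 100):
    p = photon_survival(length_km)
    print(f"{length_km:>7} km: survival {p:.3f}, "
          f"~{attempt_rate_hz * p:,.0f} successful attempts/s")
```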

A second challenge is heterogeneity. Distributed systems will likely link devices built on entirely different technologies, each with its own strengths and weaknesses. One node might operate with trapped ions and include native error correction; another could use superconducting qubits with faster but noisier gates. Synchronizing these devices means managing multiple noise profiles and aligning control layers that were never designed to talk to each other. We can barely achieve that level of coordination even within monolithic setups.
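
A deliberately oversimplified sketch of that coordination problem, with invented numbers: if two nodes with very different native gate times are forced into lock-step scheduling, the faster one spends most of each step idling, and accrues idle error at its own rate while it waits.

```python
# Heterogeneity in miniature (all numbers invented for the example):
# two nodes with different gate technologies must agree on a schedule,
# and the joint step time is set by the slowest participant.

from dataclasses import dataclass

@dataclass
class NodeSpec:
    name: str
    two_qubit_gate_ns: float     # duration of a native two-qubit gate
    idle_error_per_us: float     # error accumulated per microsecond of idling

ion_node = NodeSpec("trapped-ion", two_qubit_gate_ns=100_000, idle_error_per_us=1e-5)
sc_node = NodeSpec("superconducting", two_qubit_gate_ns=300, idle_error_per_us=1e-3)

# Lock-step scheduling: both nodes wait for the slower gate to finish.
step_ns = max(ion_node.two_qubit_gate_ns, sc_node.two_qubit_gate_ns)
for node in (ion_node, sc_node):
    idle_ns = step_ns - node.two_qubit_gate_ns
    print(f"{node.name}: idles {idle_ns / 1000:.1f} us per step, "
          f"accruing ~{idle_ns / 1000 * node.idle_error_per_us:.1e} idle error")
```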

Finally, all distribution costs reduce to communication. Moving quantum information between modules requires well-defined primitives and clear models of how they affect fidelity and runtime. Two remain central today: qubit teleportation (TP) and cat-entanglement based gate teleportation (CAT).
[Figure: CAT and TP primitive subcircuits used for distributed communication, taken from [Wu et al. 2022].]
Both primitives are essential but come with trade-offs. Teleportation is simpler but communication-heavy; gate teleportation preserves circuit structure yet amplifies noise. Current research focuses on reducing this overhead and incorporating these trade-offs into compilers that model communication errors explicitly.
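
For the curious, here is what the TP primitive boils down to, written from scratch on a three-qubit state vector so the resource accounting stays visible: moving one data qubit from node A to node B consumes exactly one pre-shared EPR pair and two classical bits. The qubit ordering and helper functions are my own, not taken from the cited paper.

```python
# A from-scratch sketch of the TP (teleportation) primitive with NumPy.
# Qubit 0: data qubit on node A. Qubits 1, 2: EPR pair shared by A and B.

import numpy as np

rng = np.random.default_rng(7)

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, n):
    """CNOT as a 2^n x 2^n matrix (qubit 0 is the most significant bit)."""
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for basis in range(dim):
        bits = [(basis >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        out = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[out, basis] = 1
    return U

def measure(state, qubit, n):
    """Measure one qubit in the Z basis; return (outcome, collapsed state)."""
    probs = np.zeros(2)
    for basis, amp in enumerate(state):
        probs[(basis >> (n - 1 - qubit)) & 1] += abs(amp) ** 2
    outcome = rng.choice(2, p=probs)
    new = np.array([
        amp if ((basis >> (n - 1 - qubit)) & 1) == outcome else 0
        for basis, amp in enumerate(state)
    ])
    return outcome, new / np.linalg.norm(new)

data = np.array([0.6, 0.8], dtype=complex)            # arbitrary |psi> on node A
epr = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(data, epr)                            # 3-qubit state

state = cnot(0, 1, 3) @ state                         # node A: CNOT data -> EPR half
state = kron(H, I2, I2) @ state                       # node A: Hadamard on data
m0, state = measure(state, 0, 3)                      # two classical bits,
m1, state = measure(state, 1, 3)                      # sent from A to B
if m1:
    state = kron(I2, I2, X) @ state                   # node B: conditional corrections
if m0:
    state = kron(I2, I2, Z) @ state

# Read off node B's qubit: the amplitudes where qubits 0 and 1 equal (m0, m1).
offset = (m0 << 2) | (m1 << 1)
received = state[offset:offset + 2]
print("classical bits sent:", (m0, m1))
print("node B now holds:", np.round(received, 3))
```

Roughly speaking, CAT follows the same pattern but keeps the data qubit at home and teleports the gate instead; the compilers mentioned above essentially decide, gate by gate, which of the two primitives to spend an EPR pair on.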

An open letter to anyone who will read: make a testbed

Almost every quantum hardware company out there talks about scalability, often pointing directly to DQC and quantum networking. Today we have theory, prototypes, simulators that claim to capture the future, and, more importantly, an eager and growing interest from the research community. What we lack is a place where they can meet, a baseline to test and compare against. We need a true distributed quantum testbed. One that would let us connect real processors (computationally relevant nodes of 50+ qubits), quantify network noise in real distributed computation settings, and check whether our emulations reflect physical reality. One that could turn distributed quantum computing from a concept into a science.

-> If you are a lab with both a quantum computer and a quantum network, connect them. You'll be one of the first to do so.
-> If you are a company connecting multiple devices, open a fraction of that infrastructure for research access. This is, in my personal view, how IBM got the monopoly it currently holds over quantum cloud access: it was the first company to open the door in an accessible way. The time is coming for DQC; claim that ground early.
-> If you are funding national or academic programs, build node pairs and make them public. Shared hardware is how we build shared truth.

Until we can execute protocols on connected machines, DQC will remain a theoretical promise. The missing link between vision and validation is a public testbed. Build it (and let me know how it goes)!

References