Why am I working on Distributed Quantum Computing (DQC)?
1st of May 2025
Distributed Quantum Computing (DQC) offers a promising path to scale quantum capabilities beyond the limitations of single devices.
In this post, I share the motivation behind my work in this area:
why distributing quantum computations matters, what challenges it poses,
and how it connects to real limitations in current hardware.
This sets the stage for a broader discussion on how we can externalize,
coordinate, and benchmark quantum computation across multiple devices.
Building ever-larger quantum computers by simply adding more qubits into a single device isn't a sustainable strategy.
Let's look at why:
No matter how small we make qubits, we are still constrained by fundamental physical limits.
One such limit arises from the area law in quantum physics: for many physically relevant states, the entanglement entropy of a subsystem (a measure of quantum correlations) scales with the boundary area of that subsystem rather than with its volume.
This concept was first introduced in black hole thermodynamics by Jacob Bekenstein and Stephen Hawking in the 1970s, who found that a black hole’s entropy is proportional to the area of its event horizon.
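As a quick, hedged reference (standard textbook expressions, added here only to make the scaling explicit): the Bekenstein-Hawking entropy of a black hole with horizon area $A_H$, and the area-law behaviour of the entanglement entropy of a subsystem $A$ with boundary $\partial A$, read

$$
S_{\mathrm{BH}} = \frac{k_B c^3 A_H}{4 G \hbar},
\qquad
S(\rho_A) = -\mathrm{Tr}\!\left(\rho_A \log \rho_A\right) = O\!\left(|\partial A|\right),
$$

in contrast with the $O(|A|)$ volume-law scaling of generic states.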
In the context of quantum hardware, this means that as we build larger devices, the entanglement structure becomes increasingly complex along subsystem boundaries.
Managing these correlations (generating, routing, and maintaining them accurately) is a growing challenge, especially in architectures with limited connectivity or distributed components.
Errors can arise not only from external noise but also from internal mismanagement of these correlations.
Entanglement that is too weak, too strong, misplaced, or accidentally extended can lead to computational faults.
Unlike classical bits, quantum information is deeply relational: getting the structure of these relationships wrong introduces logical errors, even in the absence of decoherence.
The challenge is not just to keep qubits isolated from the environment, but to ensure they are entangled with the right partners, in the right way, at the right time.
At some point, adding more qubits to a single device is expected to yield diminishing returns, due to the increasing complexity of correlation control and system architecture.
Scaling up forever is not viable.
Instead, we must scale out: distribute computations across multiple devices rather than growing one monolithic system indefinitely.
This architectural shift offers a path to scalability without running into the hard limits imposed by correlation structure, control precision, and spatial layout.
These challenges are reminiscent of the obstacles classical computing faced in the late 20th century, when classical computers hit physical limits much like the ones we now anticipate in the quantum world.
Moore's law kept pushing the number of transistors up, but at some point, packing more into a chip caused too much heat and reliability issues.
Processors started to run into both physical limits (how small you can make features) and thermal limits (how hot the chips get).
By the 1980s and 1990s, it became clear that scaling a single processor indefinitely wasn't going to work.
Clock speeds stopped rising dramatically after the early 2000s because of these heat and power problems.
So we stopped scaling up, and started scaling out.
People started working with multiple processors and building distributed systems.
Instead of making one super-powerful chip, we started connecting lots of smaller chips together, avoiding the physical limits of single-chip designs whilst still growing system size and performance.
This is how concepts like cluster computing, parallel computing, and eventually cloud computing took off.
Famous early examples include Beowulf clusters (mid-1990s) for scientific computing, and Google's early distributed file systems and computing frameworks (like MapReduce, early 2000s) for handling web-scale data.
The key takeaway from the beginnings of the classical distributed world was this:
When scaling one machine hits a wall, we have to change the architecture:
we network machines, distribute the workloads, and let many processors work together.
The bells are ringing for a similar moment in quantum computing.
Except this time the rules of the game are even weirder.
Quantum parallelism, entanglement, and superposition are not just fancy tricks to make things faster.
They are the very fabric of quantum computing.
And how they will behave in a set of distributed open quantum systems (because, after all, this is what quantum computers really are: open quantum systems) is a fascinating open question, one that I hope to convince you is worth pursuing.
A Proven Classical Solution: Networks
In classical systems, networking machines means moving bits around. You send them, store them, copy them. No drama.
You just need to make sure your logic doesn't eat its own tail: avoid deadlocks, handle duplicated or lost messages, maintain cache consistency, and so on.
Over time, incredibly sophisticated protocols have been developed to keep everything running smoothly. A graduate-level distributed systems course will walk you through the basics.
And although you may grind your teeth while learning about Paxos, Raft, and other consensus algorithms, you eventually get the hang of it. You learn what kinds of problems to expect.
There are many, but they tend to follow familiar patterns.
Quantum systems, on the other hand, introduce a different class of challenges.
Qubits can exist in superpositions.
We are no longer dealing with combinations of zeros and ones, but with physical states—arguably some of the most complex objects in the universe.
These states can be entangled across nodes.
Not only can we move bits (for example, using photons), but we can now also move correlations.
Quantum teleportation opens a window to new capabilities, but it also brings a host of new, headache-inducing problems.
Alongside these new power plays, we lose some familiar ones.
A window may open, but a door closes.
You cannot simply read a qubit and resend it without destroying it.
Copying quantum information is impossible (no-cloning theorem).
We can move correlations between systems, but we cannot directly observe or duplicate them.
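For readers who have not seen it, the no-cloning argument is short enough to sketch here (a standard derivation, not specific to this post). Suppose a single unitary $U$ could copy arbitrary states:

$$
U\,|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle,
\qquad
U\,|\phi\rangle|0\rangle = |\phi\rangle|\phi\rangle.
$$

Taking the inner product of these two equations, and using the fact that $U$ preserves inner products, gives $\langle\phi|\psi\rangle = \langle\phi|\psi\rangle^{2}$, so $\langle\phi|\psi\rangle$ must be $0$ or $1$: only orthogonal (or identical) states could ever be copied, and a universal copier cannot exist.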
As a result, networked quantum computing introduces fundamentally different challenges from classical networked computing.
This is not just an engineering problem.
It forces us to grapple with an entirely new layer of physics, one that classical protocols never had to consider.
New Quantum-Specific Questions
What are the genuinely new challenges introduced by distributed quantum computing? At the implementation level, most concerns fall into two categories:
when to connect machines, and how to do it.
When should we connect quantum machines? Not all quantum computations benefit from distribution.
The overhead of inter-device communication and coordination can easily outweigh any potential advantages.
In particular, there is growing evidence that highly entangled regions of a quantum system are especially fragile when split across devices.
Improper partitioning can cause decoherence that is difficult or even impossible to recover from. Determining when distribution is helpful, and when it becomes harmful, remains an open problem.
Classical, quantum, or hybrid networks? Distributed quantum computing does not rely on quantum processors alone; it also depends on the networks that connect them.
These networks can be classical (such as standard optical fiber) or quantum (such as photonic entanglement links).
Each type of channel brings distinct advantages and limitations.
Classical communication is reliable and well-established, but only quantum channels enable entanglement distribution and teleportation. Choosing between classical, quantum, or hybrid channels is a critical architectural decision.
Each choice impacts latency, fidelity, scalability, and error tolerance differently. Understanding these trade-offs, and identifying how channel choice influences computation, is still an evolving research area.
So far, most efforts have focused on enabling distribution itself, rather than on studying how the choice of communication channel shapes the computation.
Parallelism: Lessons and Differences
One of the other motivations behind classical distributed computing was the discovery of performance gains through parallelism.
As early as 1967, Amdahl's Law formalized the limits of speedup in a system with a fixed sequential portion.
In the 1980s, Gustafson's Law reframed the discussion by showing that for larger problem sizes, more of the workload can be parallelized.
These insights supported the shift toward multi-core architectures and distributed systems, even when communication overhead was non-trivial.
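To make that concrete, the two laws in their standard textbook form, with $p$ the parallelizable fraction of the work and $N$ the number of processors, read

$$
S_{\text{Amdahl}}(N) = \frac{1}{(1-p) + p/N},
\qquad
S_{\text{Gustafson}}(N) = (1-p) + pN.
$$

As $N \to \infty$, Amdahl's speedup saturates at $1/(1-p)$, while Gustafson's keeps growing because the problem size is assumed to grow with the machine.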
Quantum computing approaches parallelism differently.
Superposition and entanglement give quantum systems a form of inherent parallelism.
You will often hear the simplification that quantum computers explore exponentially many paths at once within a single state. This is misleading.
Quantum parallelism is not directly accessible: we cannot observe all computational branches at the same time, nor can we selectively operate on individual ones.
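A quick formula makes the distinction concrete. Querying a function $f$ on a uniform superposition does produce every value of $f$ in a single state, but measuring it returns only one of them:

$$
U_f\!\left(\frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}|x\rangle|0\rangle\right)
= \frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}|x\rangle|f(x)\rangle,
$$

and a measurement yields a single pair $(x, f(x))$ with probability $2^{-n}$. Extracting anything more requires carefully engineered interference, which is where real quantum algorithms earn their speedups.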
At the same time, quantum devices can perform operations on separate qubits simultaneously.
This form of operational parallelism is fundamental to the model.
Because of this, the usual arguments for speedup through distributed execution do not carry over directly. In fact, current expectations suggest the opposite.
Distributing a quantum computation introduces new overheads, including the need for error correction, qubit teleportation, and additional coordination, all of which can slow down the overall process.
Still, scientists are not easily discouraged.
If you were to ask me, one of the many early-career researchers dedicating their PhD to this field, I believe
that distributed quantum computing will offer new kinds of speedup: not by splitting work across nodes in the classical sense, but by allowing more complex entangled systems, enabling new types of measurements, or increasing algorithmic diversity through heterogeneity of nodes.
When and how these speedups manifest is still poorly understood, which is exactly why we are eagerly working on it.
Infrastructure Challenges
Implementation challenges aside, there are also practical problems we can't ignore.
Right now, we're working within a fractured quantum ecosystem.
Superconducting qubits, trapped ions, photonics — each platform is racing to prove itself as the dominant one.
They all come with strengths and tradeoffs, but there's no clear winner.
Worse still, they don't speak the same language.
Not everything runs on circuits!
Today's quantum computers run on different computational models.
From measurement-based quantum computing (MBQC) to topological models, these machines don't perform computations in the same way.
They don't interoperate cleanly, and they certainly weren't designed to work together.
Despite this hardware reality, most of the current distributed infrastructure work assumes circuit-based computation.
But the moment you try to cross between modalities, that assumption no longer holds.
Ecosystem heterogeneity becomes a serious problem when we start distributing.
Abstracting above or below the model is nice in theory, but today, distribution is anything but model-agnostic.
And inter-model interoperability?
Not even close.
This is more than a hardware limitation; it's a conceptual mismatch between how we compute and how we communicate.
Solving this will mean rethinking some of our core assumptions about distributed quantum computing.
That's where some of my current work is heading.
Keep an eye out for model-agnosticism on the horizon.
Current Strategies: Cutting and Knitting
So what do I mean when I say that current distributed infrastructure assumes circuit-based computation?
Let's break it down.
The standard approach today is to take a quantum circuit (a sequence of gates acting on qubits) and split it into smaller parts that can run on different devices.
We usually model the circuit as a hypergraph, where qubit wires are nodes and gates are hyperedges connecting them.
The goal is to solve a cutting problem: how can we partition this graph across devices while keeping communication costs low?
The underlying partitioning problem is NP-hard, so even small circuits can force difficult tradeoffs between cut size, depth, and overall fidelity.
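As a toy illustration of the partitioning problem, here is a minimal sketch in plain Python; the circuit data, names, and brute-force strategy are made up for this example and are not tied to any real cutting framework.

```python
# Toy sketch: circuit cutting as (hyper)graph partitioning.
# All names and data here are illustrative, not from any particular tool.
from itertools import combinations

# A circuit reduced to its multi-qubit gates: each gate is the tuple of
# qubit wires it touches (single-qubit gates never cross a cut).
circuit = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (1, 3)]
num_qubits = 5

def cut_cost(assignment):
    """Count gates whose qubits land on different devices (hyperedges cut)."""
    return sum(1 for gate in circuit
               if len({assignment[q] for q in gate}) > 1)

# Brute-force the best balanced two-way partition. Fine for toy sizes only:
# the general problem is NP-hard, so real tools rely on heuristics.
best_cost, best_assignment = None, None
for group in combinations(range(num_qubits), num_qubits // 2):
    assignment = {q: (0 if q in group else 1) for q in range(num_qubits)}
    cost = cut_cost(assignment)
    if best_cost is None or cost < best_cost:
        best_cost, best_assignment = cost, assignment

print("gates cut:", best_cost)
print("qubit -> device:", best_assignment)
```

Real pipelines replace the brute-force loop with hypergraph-partitioning heuristics, since the search space grows exponentially with the number of qubits.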
Once the circuit is cut, the pieces need to be "knitted" back together.
This can happen in two main ways: we can cut qubit wires, which requires teleporting quantum states between devices, or we can cut gates, which involves creating long-range entanglement links to simulate the missing multi-qubit interaction.
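As a rough resource count (standard teleportation bookkeeping, not figures from any particular experiment), each cut comes with a communication price:

$$
\text{wire cut: } 1 \text{ ebit} + 2 \text{ classical bits per teleported qubit},
\qquad
\text{gate cut: } 1 \text{ ebit} + 2 \text{ classical bits per nonlocal CNOT/CZ}.
$$

Multiply that by the number of cuts in a deep circuit and the appetite for high-rate entanglement distribution becomes obvious.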
Both approaches are valid in theory.
But they rely on real quantum networks with high-fidelity links, fast entanglement distribution, and precise coordination between remote devices.
And right now, quantum networks built for quantum computation are not yet at that level.
The alternative we see available in industry today is to simulate these quantum links using classical communication.
This works in simulation and in small experimental settings, but it's a bit like trying to run a Formula 1 car on a bicycle track.
You can move forward, but the track isn't designed for that kind of speed or fidelity.
You're constantly adapting the system to the limitations of classical infrastructure, instead of building infrastructure suited to quantum behavior.
When we talk about circuit "knitting" today, we're often referring to this classical workaround, which is something of a misnomer.
To emulate a quantum link over a classical channel, we run many shots on both sides of the cut and stitch together correlated measurement statistics.
This allows us to reconstruct expectation values as if the circuit had been whole.
But we aren't actually knitting quantum circuits together in the physical sense.
We're approximating their output distributions using classical post-processing.
Which racks up costs.
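To sketch where those costs come from, consider a single cut wire. The state it carries can be expanded in the Pauli basis (a standard identity):

$$
\rho = \tfrac{1}{2}\Big(\mathrm{Tr}(\rho)\,I + \mathrm{Tr}(X\rho)\,X + \mathrm{Tr}(Y\rho)\,Y + \mathrm{Tr}(Z\rho)\,Z\Big),
$$

so the upstream fragment is measured in each Pauli basis, the downstream fragment is rerun with the corresponding eigenstates prepared as inputs, and the outcomes are recombined with signed weights. Those signed weights are exactly what inflates the number of shots required, and the sampling overhead generally grows exponentially with the number of cuts.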
The Hardware Conversation
Many quantum platforms (e.g., superconducting qubits) were not built with networking in mind.
They're excellent for local, high-fidelity gates and fast control, but they cannot natively emit or absorb flying qubits like photons.
To interface with a quantum network, these qubits require an interconnect, a device that converts stationary qubits into photonic ones, typically via microwave-to-optical transduction (e.g., using resonators).
These interconnects are still in early stages.
They introduce unavoidable losses, noise, and latency.
The fidelity hit is significant, and the success rates for conversion remain low.
Even the "super" in superconducting can't dodge these super losses.
This creates a serious bottleneck.
A device may perform well in isolation but become nearly unusable when asked to communicate with others.
Until we have high-fidelity, low-loss interfaces, or architectures that avoid the need for conversion entirely, hardware will remain a core limitation for distributed quantum computing.
Recent Progress and Open Needs
But despite this bleak state, the first real steps toward distributed quantum computing have been arriving!
In early 2025, Xanadu introduced Aurora, the first modular, networked photonic quantum computer.
Aurora connects 35 photonic chips via 13 kilometers of optical fiber across four server racks, forming a 12-qubit system designed for scalability and fault tolerance.
This marks the first time a quantum computer has been built to be modular, networked, and truly scalable with quantum channels.
And although it is still in the early stages, it represents a significant step toward a future where quantum computers can be distributed across multiple locations and connected via quantum networks.
Around the same time, researchers at Oxford University demonstrated the first instance of distributed quantum computing by linking two separate quantum processors via a photonic network.
They executed a distributed version of Grover's search algorithm, achieving an 86% fidelity for a teleported controlled-Z gate between modules and a 71% success rate for the algorithm.
The experiment used trapped-ion QPUs and showed that non-trivial distributed algorithms are feasible even with current hardware.
Another major development came from TU Delft, where researchers at QuTech and the Quantum Internet Alliance introduced QNodeOS, the first operating system for quantum networks.
QNodeOS provides a hardware-agnostic abstraction layer, letting quantum applications issue high-level commands like entanglement generation or inter-node operations without being tied to specific devices.
These requests are translated into hardware-specific instructions that coordinate timing, entanglement, and classical messaging between nodes.
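To illustrate the shape of such an abstraction layer (and only the shape: the sketch below is hypothetical Python, not the QNodeOS API, and every class and method name in it is invented for this post):

```python
# Hypothetical sketch only -- NOT the QNodeOS API. It just illustrates the idea
# of a hardware-agnostic layer: applications talk to an abstract node, while
# device-specific drivers live underneath.
from abc import ABC, abstractmethod

class NodeDriver(ABC):
    """Device-specific layer (trapped ions, superconducting qubits, ...)."""
    @abstractmethod
    def generate_epr(self, peer: str) -> int: ...
    @abstractmethod
    def send_classical(self, peer: str, bits: tuple) -> None: ...

class DummyDriver(NodeDriver):
    """Stand-in driver so the sketch runs; a real one would talk to hardware."""
    def generate_epr(self, peer):
        print(f"entangled with {peer}")
        return 0  # handle to our half of the (pretend) Bell pair
    def send_classical(self, peer, bits):
        print(f"sent {bits} to {peer}")

class QuantumNode:
    """What an application sees: high-level requests, no hardware details."""
    def __init__(self, name, driver: NodeDriver):
        self.name, self.driver = name, driver
    def entangle(self, peer):
        return self.driver.generate_epr(peer)
    def notify(self, peer, bits):
        self.driver.send_classical(peer, bits)

alice = QuantumNode("alice", DummyDriver())
pair_handle = alice.entangle("bob")
alice.notify("bob", (0, 1))
```

The only point of the sketch is that the application layer never has to mention a specific qubit technology.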
However, the system still assumes that each node runs gate-based quantum circuits.
The computational model remains fixed, even if the hardware platform is abstracted.
This is a major step forward, but it still leaves open the question of how to coordinate distributed computations across different models entirely.
When all is said and done, these are significant milestones, but much remains to be done:
Conducting experiments on actual distributed computations, such as complex circuits, quantum walks, and MBQC patterns.
Developing quantum network experiments that specifically target quantum computing needs, moving beyond network-oriented applications like quantum key distribution (QKD).
Demonstrating even short-distance server-to-server quantum communication, which is both urgent and impactful for validating distributed architectures.
The field is progressing, but to realize the full potential of distributed quantum computing, we need to focus on these open challenges and continue building upon these foundational experiments.
Closing: Why work on DQC?
Distributed Quantum Computing isn't just a niche topic.
It's essential if we want to scale quantum computing beyond the limits of single devices.
The physical constraints are real, and they won't be solved by squeezing more qubits onto a chip.
DQC forces us to ask hard questions:
How do we coordinate entanglement across distance?
How do we distribute algorithms across heterogeneous platforms?
How do we rethink hardware and software to support modular, network-aware architectures?
These aren't just engineering challenges; they're fundamental questions about what it means to compute in a quantum world.
And they're questions we need to answer if we want quantum computing to move from lab demos to real-world impact.
That's why I work on DQC.
It's one of the few paths forward that can take quantum computing to its next phase.