The quantum Internet is a term that has been bandied about a lot recently. And, for the moment, it is utter nonsense. The Internet connects computers, so the quantum Internet presupposes the existence of useful quantum computers. The Internet also involves arbitrary, on-the-fly routing through many intermediate stations, while current quantum communications protocols rely on point-to-point connections. I can’t think of anything less Internet-like than that.
The nice thing about buzzwords, though, is that some people take them seriously while also recognizing the problems inherent to the idea. That leads to some fantastic research. A group of Japanese and British researchers have come up with a communications protocol that overcomes many of the fundamental problems associated with transferring quantum information over long distances. We still don’t have a quantum computer, but when we do, these guys know how to connect them up.
Quantum static on your phone line
The issue boils down to the very nature of the quantum state. Usually, when we consider a quantum bit (qubit) of information, we are talking about a single photon. In any optical fiber, there is a certain probability per unit length that the photon will be absorbed, so the chance of it arriving falls exponentially with distance. Generally, this limits point-to-point quantum data transfer to distances under 100 km.
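To put a number on that exponential loss: standard telecom fiber attenuates light at roughly 0.2 dB/km. (That attenuation figure is a general property of modern fiber, used here for illustration; it isn't taken from the paper.) A quick sketch of the resulting survival probability:

```python
# Rough sketch of single-photon survival in optical fiber, assuming the
# typical telecom-fiber attenuation of ~0.2 dB/km. Real links lose extra
# photons at connectors and detectors, so this is a best case.

ATTENUATION_DB_PER_KM = 0.2

def survival_probability(distance_km: float) -> float:
    """Probability that a single photon traverses the fiber unabsorbed."""
    loss_db = ATTENUATION_DB_PER_KM * distance_km
    return 10 ** (-loss_db / 10)

for d in (10, 50, 100, 500):
    print(f"{d:4d} km: {survival_probability(d):.10f}")
```

At 100 km, only about one photon in a hundred survives, which is why point-to-point links top out around that distance; at 500 km, essentially nothing gets through.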
To expand that range, you need to entangle qubits at distant locations. The idea is that you send photon pairs over short distances and entangle them with neighboring photon pairs. That entangles the two most distant photons with each other. This can be repeated, transferring entanglement over large distances.
The process of entangling two photons is, however, not always successful. So, in a chain of photon-pair sources, you might manage half the links on the first attempt. Those successes should be stored in a memory while the failed links try again. After several repetitions, the end-points are entangled with each other. Unfortunately, this requires a long-lasting quantum memory that can store many different photons, so that the right photon can be retrieved once the entire entangled chain is established. Such a memory doesn’t exist yet.
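The retry-and-store process can be sketched with a toy Monte Carlo simulation. The link success probability and chain length below are made-up illustrative numbers, and the "memory" is simply the assumption that a successful link stays entangled between rounds:

```python
import random

def rounds_until_chain_complete(n_links: int, p_success: float, rng: random.Random) -> int:
    """Attempt rounds needed before every link in the repeater chain is entangled,
    assuming successful links are held perfectly in a (hypothetical) quantum memory."""
    pending = n_links
    rounds = 0
    while pending:
        rounds += 1
        # Each still-pending link independently succeeds with probability p_success.
        pending = sum(1 for _ in range(pending) if rng.random() >= p_success)
    return rounds

rng = random.Random(0)
trials = [rounds_until_chain_complete(10, 0.5, rng) for _ in range(10_000)]
print(sum(trials) / len(trials))  # a handful of rounds on average for 10 links
```

The point is not the exact number but that the memory must hold its qubits faithfully across several attempt rounds, a hold time no current device manages.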
Quantum entanglement is one of the most misused concepts around. Entanglement is delicate, rare, and short-lived. At its heart, quantum entanglement is nothing more or less than a correlation between two apparently separate quantum objects. Having discovered that, you might ask “so what is all the fuss about?” The answer lies deep in quantum mechanics.
Spreading your bets on photon survival
This would all be so much easier if the quantum state were stored across multiple photons. This is what a team of Japanese and British researchers have been considering. Drawing on early results from optical quantum computing schemes, they realized the error-correcting mechanisms proposed for those systems could be used to extend the distance over which a quantum state can be transported.
Their scheme relies on the fact that no one really wants to send a single qubit; instead, we want to send lots of them. These individual qubits can then be blocked together to make one larger quantum state that is the superposition of the individual quantum states. In addition, we add more qubits whose values are determined by the results of mathematical operations on the data qubits. When we do this, a tiny bit of each qubit’s state is held by several of the photons within the block, creating lots of redundancy.
Now, it turns out that, on the receiving end, one can retrieve the entire quantum state of the data block provided a couple of conditions are met. At least one logical qubit within the block must make it through without loss. And, for each of the remaining logical qubits, at least one physical qubit (a photon) must survive.
So, if you have a block of a thousand photons (encoding 10 logical qubits in 10 photons, with the remaining 990 photons providing redundancy), we require that one of those 10 photons makes it through cleanly. We also require that the surviving photons carry information about the state of all the remaining qubits. If that occurs, you can retrieve the other nine from the built-in redundancy.
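As a back-of-envelope check (my own arithmetic, not the paper's), the first condition is surprisingly easy to satisfy: if each photon independently survives with probability s, the chance that at least one of the 10 data photons arrives intact is

```python
def p_at_least_one_clean(n_data: int, s: float) -> float:
    """Chance that at least one of n_data photons survives, assuming
    independent survival probability s per photon."""
    return 1 - (1 - s) ** n_data

# Even when half of all photons are absorbed, one of the 10
# data photons almost always gets through:
print(p_at_least_one_clean(10, 0.5))  # 0.9990234375
```

This ignores the second condition (at least one surviving photon for every other logical qubit), so it's an upper bound on the block's chance of success, not the full story.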
The downside is, of course, that it takes many more physical qubits to encode a single logical qubit, so you might think that the data rate will be rather low. Not so fast, say the researchers. Information can be encoded in many different quantum states of the photon, so one qubit could be stored in a superposition of polarization states, while another could be stored in the phase, and another in spatial mode profiles, etc. (The nature of these states is unimportant, save that we can play with one without changing the others.)
Superposition is nothing more than addition for waves. Let’s say we have two sets of waves that overlap in space and time. At any given point, a trough may line up with a peak, their peaks may line up, or anything in between. Superposition tells us how to add up these waves so that the result reconstructs the patterns that we observe in nature.
Those of you paying attention will be questioning this assumption, though. Imagine, in our previous example, that we store three logical qubit states in a single photon. Well, if that photon is absorbed, we have lost three different qubits. Our encoding scheme relies on one logical qubit making it through in its entirety. Naively, we might think that packing three qubits per photon would reduce the chances of this occurring by a factor of three, which would be disastrous.
Luckily, the researchers are not as naive as me, and they have an answer. The encoding system works by blocking data and encoding it together into a single giant quantum state. The trick is to make sure the different quantum states of the same photon are used to encode qubits from different blocks of data. This way, the loss of a photon still results in losing three qubits, but they are from different blocks, making them the equivalent of losing three independent photons.
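Here is a toy illustration of that interleaving (my own layout, not the paper's exact construction): three photons, each carrying three qubit states, encoding three blocks of three qubits.

```python
# Hypothetical illustration of the interleaving trick: each photon carries
# one qubit state from each of three different blocks, so losing a photon
# costs every block only a single qubit.

from collections import Counter

N = 3  # photons, degrees of freedom per photon, and blocks

naive       = {(p, d): p for p in range(N) for d in range(N)}  # photon p holds block p
interleaved = {(p, d): d for p in range(N) for d in range(N)}  # dof d holds block d

def losses_per_block(layout: dict, lost_photon: int) -> Counter:
    """Count how many qubits each block loses when one photon is absorbed."""
    return Counter(block for (p, d), block in layout.items() if p == lost_photon)

print(losses_per_block(naive, 0))        # Counter({0: 3}) -- one block loses 3 qubits
print(losses_per_block(interleaved, 0))  # each block loses just 1 qubit
```

With interleaving, each block sees only a single erasure, which the redundancy within that block can repair.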
But wait, it’s a router too
The big advantage, though, is this scheme also allows something that looks like quantum routing. The basic process is that the photonic qubits are generated from matter qubits at one end and stored into matter qubits at a node. The storage process involves the emission of photons that can be used to determine the channel losses and appropriately decode the actual quantum data from the block. But, unlike point-to-point schemes, the decoding process doesn’t involve measuring the quantum state, it only involves picking the right bits and performing operations on them. This is important, because a measurement would destroy the quantum state. Since the quantum state is preserved, it can be re-encoded in a new block and sent on to a new node.
One can then imagine blocks are chunked into super-blocks with the first block containing routing information, and the remaining blocks being data. The first block could be entirely classical and contain routing information, or it could be stored in qubits, which are read at a node (destroying the quantum nature of the state), and then re-encoded in new qubits. Either way would work.
Where do I order my quantum router?
All through reading this paper, I had my doubts. But it really is a solid bit of work. It is also, I think, going to be regarded as a key paper should quantum networks become ubiquitous and significant. Implementation, though, is going to be challenging. This is because the number of qubits per photon and the number of photons per qubit scale very fast. For instance, if we limit ourselves to losses of 50 percent—that is, the link destroys 50 percent of the photons sent to the receiver—and only store a single logical qubit per photon, then we require over 7000 photons (physical qubits) to transmit 10 logical qubits.
If we go the more complicated route and store multiple qubits per photon, then things get better: only 75 photons are required to transmit 15 logical qubits. But that assumes 15 qubits are encoded on each photon, so we actually need to send a minimum of 15 blocks of data, adding up to 1125 photons for 225 logical qubits.
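Taking those figures at face value, the bookkeeping runs like this (simple arithmetic on the stated numbers, nothing recomputed from the paper's formulas):

```python
photons_per_block = 75   # photons needed to carry one block
logical_per_block = 15   # logical qubits delivered per block
n_blocks = 15            # minimum blocks: one per degree of freedom on each photon

total_photons = photons_per_block * n_blocks   # 1125
total_logical = logical_per_block * n_blocks   # 225
print(total_photons, total_logical, total_photons // total_logical)  # 1125 225 5
```

So the multi-qubit-per-photon scheme still costs five physical photons per logical qubit, but that's a huge improvement over the 700-plus photons per logical qubit of the single-qubit scheme.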
Why do I care about these numbers? Well, it means we require 1125 matter qubits that can be set to the right value and retained until they are transferred to a photon. Furthermore, we require that each photon acquire 15 different qubit states. Doing that successfully 98 percent of the time requires each individual encoding operation to work about 99.8 percent of the time. At the moment, these operations succeed with much lower probability, effectively limiting experiments to a couple of logical qubits per photon.
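That fidelity figure follows from how per-operation success compounds; checking the arithmetic (mine, using the figures above of 15 encoding operations per photon and a 98 percent overall target):

```python
n_ops = 15      # one encoding operation per qubit loaded onto the photon
target = 0.98   # desired probability that all 15 operations succeed

# All operations must succeed, so per-op success compounds as per_op ** n_ops.
per_op = target ** (1 / n_ops)
print(f"required per-operation success: {per_op:.5f}")  # ~0.99865
```

That works out to roughly 99.8 to 99.9 percent per operation, consistent with the figure quoted above.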
Nevertheless, these experimental difficulties are well-known and understood. Over time, the situation will improve and in a few years (less than 10), we will see encoding schemes similar to this that can encode more qubits per photon. The quantum Internet may yet lie ready, awaiting the arrival of a quantum computer.
Nature Photonics, 2012, DOI: 10.1038/nphoton.2012.243