Boffins have been working on a theoretical quantum computer that can work even if one in four quantum bits were missing or partying in Vegas with a dead cat and Elvis.
The scheme works on paper and could help scientists build devices larger than three qubits while lowering the engineering requirements of a functional machine.
University of Queensland physicist Thomas Stace, who worked with Sean Barrett of Imperial College London, said quantum computers that used photons as qubits risked losing some of these particles as they were scattered or absorbed.
While some of his fellow boffins had hatched methods that could tolerate the loss of one in two qubits, other schemes allowed for decoherence in only one in a hundred qubits. However, no scheme tolerated both loss and decoherence to any great degree, as the potentially dead-or-alive cat tended to stick its paws into the final result.
According to this week’s Physical Review Letters, which we get for the five-dimensional crossword, Stace and chums suggested that quantum computations be performed by measuring qubits initially laid out in a complex pattern.
He said that the Heisenberg Uncertainty Principle in quantum mechanics means that if you measure a quantum state, you change it.
So the idea is to prepare an initial state of entangled qubits and measure them in an order defined by what the user wants to achieve. This bypasses the cat, which is probably snoozing in the sun.
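The point about measurement changing the state can be sketched with a toy classical simulation of a single qubit. This is an illustration of the general quantum-measurement rule only, not the paper's scheme; the real-amplitude state representation and the `measure` helper are our own inventions for the example.

```python
import random

def measure(state):
    """Measure a single-qubit state given as real amplitudes (amp0, amp1).

    Toy sketch: outcomes follow the Born rule (probability |amp|^2),
    and the state collapses to the observed basis state -- i.e. the
    act of measuring changes the state, as the article notes.
    """
    amp0, amp1 = state
    p0 = amp0 ** 2 / (amp0 ** 2 + amp1 ** 2)
    if random.random() < p0:
        return 0, (1.0, 0.0)  # observed 0; state collapses to |0>
    return 1, (0.0, 1.0)      # observed 1; state collapses to |1>

# An equal superposition gives 0 or 1 at random on the first look,
# but the collapsed state answers consistently ever after.
outcome, post = measure((0.707, 0.707))
again, _ = measure(post)
assert again == outcome
```

Fresh copies of the superposition would split roughly 50/50 between 0 and 1, but any single copy is irreversibly committed by its first measurement, which is why the measurement order itself can steer a computation.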
It means that only three in four measurements need to succeed, thanks to error-correcting codes that use the context of the remaining qubits to decipher the information in those that have been lost.
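The flavour of "using the context of the remaining qubits" can be shown with a classical erasure code. This is a deliberately simple parity-bit analogy, not the topological code in the paper: losing a bit at a known position is recoverable because the survivors constrain what it must have been.

```python
def encode(bits):
    """Append a parity bit so any single lost (erased) bit is recoverable."""
    return bits + [sum(bits) % 2]

def recover(received):
    """Fill in one erasure (None) using the parity of the surviving bits.

    The codeword's total parity is 0 by construction, so the missing
    bit is whatever makes the survivors' parity come out even.
    """
    known = [b for b in received if b is not None]
    missing = sum(known) % 2
    return [missing if b is None else b for b in received]

codeword = encode([1, 0, 1])   # [1, 0, 1] plus parity bit 0
lossy = [1, None, 1, 0]        # one bit off partying in Vegas
assert recover(lossy) == codeword
```

The quantum version is far more involved (erasures must be corrected without reading out the encoded data), but the principle is the same: redundancy among the surviving qubits pins down the lost ones.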
Stace described it as a “divide and conquer” approach that could be easily restarted if too many measurements failed.
Of course, it all works on paper, but large-scale devices are still “easily a decade” away due to engineering difficulties.