Quantum computing

Abstract
Quantum computation is the extension of classical computation to the processing of quantum information, using quantum systems such as individual atoms, molecules, or photons. It has the potential to bring about a spectacular revolution in computer science. Current-day electronic computers are not fundamentally different from purely mechanical computers: the operation of either can be described completely in terms of classical physics. By contrast, computers could in principle be built to profit from genuine quantum phenomena that have no classical analogue, such as entanglement and interference, sometimes providing exponential speed-up compared with classical computers. All computers manipulate information, and the unit of quantum information is the quantum bit, or qubit. Classical bits can take either value 0 or 1, but qubits can be in a linear superposition of the two classical states. If we denote the classical bits by |0〉 and |1〉, a quantum bit can be in any state α|0〉 + β|1〉, where α and β are complex numbers called amplitudes, subject to |α|² + |β|² = 1. Any attempt at measuring qubits induces an irreversible disturbance. For example, the most direct measurement on α|0〉 + β|1〉 results in the qubit making a probabilistic decision: with probability |α|², it becomes |0〉, and with complementary probability |β|², it becomes |1〉; in either case the measurement apparatus tells us which choice has been taken, but all previous memory of the original amplitudes α and β is lost. Unlike classical bits, where a single string of n zeros and ones suffices to describe the state of n bits, a physical system of n qubits requires 2ⁿ complex numbers to describe its state. For example, two qubits can be in the state α|00〉 + β|01〉 + γ|10〉 + δ|11〉 for arbitrary complex numbers α, β, γ, …
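The following is a minimal numerical sketch of the ideas above (it is not part of the original abstract): a qubit stored as a normalised pair of complex amplitudes, a simulated computational-basis measurement obeying the Born rule, and the 2ⁿ amplitudes needed for an n-qubit register. It assumes NumPy, and the helper function `measure` is hypothetical, introduced only for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# A single qubit alpha|0> + beta|1>, stored as a length-2 complex vector.
# Normalisation requires |alpha|^2 + |beta|^2 = 1.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
qubit = np.array([alpha, beta], dtype=complex)
assert np.isclose(np.vdot(qubit, qubit).real, 1.0)

def measure(state, rng):
    """Simulate a direct measurement in the computational basis.

    Returns the index of the observed classical outcome; the original
    amplitudes are irrecoverably lost after the measurement.
    """
    probs = np.abs(state) ** 2          # Born rule: p_i = |amplitude_i|^2
    return rng.choice(len(state), p=probs)

# Repeating the measurement on identically prepared qubits reproduces
# the probabilities |alpha|^2 and |beta|^2.
outcomes = [measure(qubit, rng) for _ in range(10_000)]
print("estimated P(|0>):", outcomes.count(0) / len(outcomes))

# An n-qubit register needs 2**n complex amplitudes: for n = 2 these are
# the coefficients of alpha|00> + beta|01> + gamma|10> + delta|11>.
n = 2
two_qubits = np.zeros(2 ** n, dtype=complex)
two_qubits[0] = 1.0                      # the basis state |00>
print("amplitudes needed for", n, "qubits:", two_qubits.size)
```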
