Welcome to Atlantic Quantum

Building a functional fault-tolerant quantum computer isn't easy.

Author: Bharath Kannan, CEO

A lot alike, but different

Academics in the community often compare the quantum computing timeline to that of classical computers. In this comparison, it is typically highlighted that classical computers took the better part of a century to reach their modern capabilities, and that quantum computers, by contrast, are quite nascent but will follow a similar path. However, this comparison can be misleading in certain aspects. An important consideration is that as classical computers increased in complexity and sophistication, we were able to simultaneously extract new and meaningful value from their applications.

The same cannot be said for the current state of quantum computers. While it is true that quantum computing hardware is just at the beginning, we also live in a world where classical supercomputers can already satisfy many of our computational needs. Therefore, we must have our eyes set on realizing the hardware necessary for a quantum computer that can outperform classical supercomputers in meaningful applications.

Walk, chew gum, correct errors

To reach the regime of practical application and value, we need quantum error-correction (QEC). This is because the physical quantum bits, or qubits, that form the computer are very fragile and the information stored within them can be easily lost. With QEC, we mitigate this loss of information by adding redundancy. Fault-tolerant “logical qubits”, which are constructed by synchronously operating many physical qubits together, will be the basis of the computation. Realizing QEC immediately brings two challenges to the forefront: reducing gate error rates and increasing the scale (number of physical qubits) of the devices.
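To make the idea of redundancy concrete, here is a minimal, illustrative Python sketch of the simplest error-correcting scheme, a three-qubit bit-flip repetition code. It is not a description of Atlantic Quantum's architecture or of a full quantum code (it only models classical bit-flip noise, and the function names are ours for illustration), but it shows how majority voting over redundant copies suppresses errors.

```python
# Illustrative sketch only: a 3-bit repetition code against bit flips.
# Real QEC codes (e.g., surface codes) correct general quantum errors
# and require far more machinery; this models classical bit flips.
import random

def encode(logical_bit):
    """Encode one logical bit into three redundant physical bits."""
    return [logical_bit] * 3

def apply_noise(physical_bits, p_flip):
    """Flip each physical bit independently with probability p_flip."""
    return [b ^ 1 if random.random() < p_flip else b for b in physical_bits]

def decode(physical_bits):
    """Majority vote recovers the logical bit if at most one flip occurred."""
    return int(sum(physical_bits) >= 2)

def logical_error_rate(p_flip, trials=100_000):
    """Estimate how often decoding fails at a given physical error rate."""
    errors = 0
    for _ in range(trials):
        logical = random.randint(0, 1)
        if decode(apply_noise(encode(logical), p_flip)) != logical:
            errors += 1
    return errors / trials

if __name__ == "__main__":
    for p in (0.01, 0.05, 0.10):
        print(f"physical error rate {p:.2f} -> logical error rate {logical_error_rate(p):.4f}")
```

The sketch also hints at why the two challenges just named are coupled: redundancy only suppresses the logical error rate when the underlying physical error rate is low enough, and suppressing it further requires operating more physical qubits together.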

While these problems may seem obvious to most, approaches to solving them are typically pursued in isolation from each other. That is, protocols for improved gate fidelities may significantly increase device complexity, such that they work in small-scale proof-of-concept devices but become problematic as the device grows. Similarly, proposals for scaling the size of the device may come at the cost of fidelity in the individual gate operations. The challenge is therefore to solve both issues simultaneously.

Our vision: viability and value

At Atlantic Quantum, we are singularly focused on reaching the million-qubit regime that will be required to construct a fault-tolerant quantum computer and provide meaningful value. While there may be some value in the noisy intermediate-scale quantum computing regime (thousands to tens of thousands of physical qubits), this should not distract from the end goal. And even though we still need to start from small few-qubit systems and work our way up towards larger devices, it is important that we do not do so in a vacuum.

The constraints and boundary conditions imposed by the vision of a million-qubit processor must be respected for devices of all sizes. That is, because we fundamentally require error-correction, device architectures should be continuously scrutinized for whether they are on a viable and realistic path to scale.

All that said, talk is just talk. Stay tuned and critique our results for yourselves.
