Improved Clifford gate counting #1454

@wjhuggins

Description

With continued improvements in the cost of preparing resource states for non-Clifford gates, most recently in the preprint "Magic state cultivation: growing T states as cheap as CNOT gates", it is clear that we need to go beyond counting T and Toffoli gates to get accurate resource estimates.

Approximating the spacetime volume without laying out an algorithm

I propose that we quantify the cost of Clifford gates by their spacetime volume in a surface code-based quantum computer. There are two cases to consider, which I will discuss separately. In the first case, as in Google's proposed architecture, we have a two-dimensional grid of qubits that interact with their nearest neighbors. In the second case, we imagine that the physical qubits have all-to-all connectivity. The assumption of all-to-all connectivity probably doesn't make sense at very large system sizes, but I would argue that it still serves as a reasonable starting point.

By counting the spacetime volume of the gates and adding some additional overhead to account for routing and idling, we can approximate the actual physical resources (in physical qubit-seconds, for example) without laying anything out precisely. Admittedly, approximating the routing overhead this way is less defensible than it was when we could assume that Clifford gates were free.
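To make the conversion to physical qubit-seconds concrete, here is a minimal Python sketch. It assumes roughly $2d^2$ physical qubits per distance-$d$ surface code patch (data plus measure qubits) and a ~1 microsecond code cycle; both numbers, and the function name, are illustrative assumptions rather than anything fixed by this proposal.

```python
def qubit_seconds(volume_d3_units: float, d: int, t_cycle_s: float = 1e-6) -> float:
    """Convert a spacetime volume quoted in units of d^3 into physical qubit-seconds.

    One d^3 unit is taken to be one surface code patch (~2*d**2 physical
    qubits, data plus measure) held for d code cycles of t_cycle_s each.
    """
    qubits_per_patch = 2 * d**2          # assumed physical footprint of a patch
    seconds_per_unit = d * t_cycle_s     # one d^3 unit spans d code cycles
    return volume_d3_units * qubits_per_patch * seconds_per_unit


# Example: 1000 d^3 units at distance 25 with 1 us cycles -> 31.25 qubit-seconds.
print(qubit_seconds(1000, d=25))
```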

Two-dimensional architecture

With a grid of qubits connected to their nearest neighbors, we expect to implement some Clifford gates using lattice surgery, whereas others (Pauli X, Y, Z) are essentially free because they can be tracked in software. In general, a gate implemented using lattice surgery has a spacetime volume cost of $k \cdot d^3$, where $d$ is the surface code distance and $k$ is a small constant that depends on the gate.

As a starting point, we could assign a value of $k$ to each elementary Clifford gate that we would like to support. Then it would be easy to calculate the spacetime cost of the Clifford gates in units of $d^3$. For example, a CNOT gate would be assigned a cost of $6 d^3$.
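As a sketch of what this could look like in code (only the CNOT's $k = 6$ comes from the text above; the other $k$ values are placeholders, and none of these names are existing Qualtran APIs):

```python
from collections import Counter

# Per-gate lattice-surgery coefficients k, so each gate costs k * d**3.
# Only the CNOT value comes from the proposal; the rest are placeholders.
LATTICE_SURGERY_K = {
    "cnot": 6,
    "hadamard": 3,  # placeholder
    "s": 3,         # placeholder
    "pauli": 0,     # Pauli X/Y/Z are tracked in software, essentially free
}


def clifford_volume_d3(gate_counts: Counter) -> int:
    """Total Clifford spacetime volume in units of d^3, ignoring routing."""
    return sum(LATTICE_SURGERY_K[gate] * n for gate, n in gate_counts.items())


# Example: 100 CNOTs and 40 Hadamards -> 100*6 + 40*3 = 720 units of d^3.
print(clifford_volume_d3(Counter(cnot=100, hadamard=40)))
```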

This way of quantifying costs ignores the overhead from routing (and idling), but it would already be more accurate than just counting the number of Clifford gates. To deal with the routing overhead, we could default to a fixed allowance of, say, 50% (as in the Gidney-Fowler cost model). Ideally we would let users override the routing overhead for particular bloqs: someone who has compiled a subroutine down to bloqs that respect the two-dimensional connectivity could manually set the routing overhead for that subroutine to 0%. One risk is that a naive user might dramatically underestimate their costs this way.
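A sketch of how the override might look, with the 50% default baked in (purely illustrative name and signature):

```python
def with_routing_overhead(volume_d3: float, routing_overhead: float = 0.5) -> float:
    """Scale a raw spacetime volume by a routing/idling overhead factor.

    The 0.5 default mirrors the 50% allowance of the Gidney-Fowler cost
    model; a subroutine already compiled to 2D connectivity can pass 0.0.
    """
    return volume_d3 * (1.0 + routing_overhead)


print(with_routing_overhead(720))       # 1080.0 with the default 50% overhead
print(with_routing_overhead(720, 0.0))  # 720.0 for a pre-routed subroutine
```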

All-to-all architecture

Given all-to-all connectivity, we can implement many (perhaps all) of the Clifford operations we might want using transversal gates and similar procedures. In this case, it would be more natural to count costs directly in surface code cycles, i.e., in units of $d^2$ rather than $d^3$.

The all-to-all assumption also simplifies the routing problem, since it would never be necessary to insert SWAP gates or to dedicate extra qubits and long hallways to long-range interactions. Some overhead would probably still be appropriate, though, to account for qubits idling while they wait for other operations to complete.
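For the all-to-all case, a similar sketch could count patch-cycles directly. The per-gate footprints below are illustrative assumptions, as is the 20% idling allowance:

```python
# Assumed (patches, code cycles) footprints for transversal implementations.
# These numbers are placeholders for illustration, not measured values.
TRANSVERSAL_FOOTPRINT = {
    "cnot": (2, 2),      # two patches interacting for a couple of cycles
    "hadamard": (1, 3),  # placeholder
}


def all_to_all_volume_d2(gate_counts: dict, idle_overhead: float = 0.2) -> float:
    """Spacetime volume in units of d^2 (one patch for one code cycle),
    inflated by a flat allowance for qubits idling between operations."""
    raw = sum(
        patches * cycles * n
        for gate, n in gate_counts.items()
        for patches, cycles in [TRANSVERSAL_FOOTPRINT[gate]]
    )
    return raw * (1.0 + idle_overhead)


# Example: 100 CNOTs and 40 Hadamards -> 520 patch-cycles, 624 after idling.
print(all_to_all_volume_d2({"cnot": 100, "hadamard": 40}))
```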
