
Improved Clifford gate counting #1454

Open
wjhuggins opened this issue Oct 8, 2024 · 4 comments

@wjhuggins (Collaborator)

With continued improvements in the cost of preparing resource states for non-Clifford gates, particularly the recent preprint Magic state cultivation: growing T states as cheap as CNOT gates, it is clear that we need to go beyond counting T and Toffoli gates in order to get accurate resource estimates.

Approximating the spacetime volume without laying out an algorithm

I propose that we quantify the cost of Clifford gates based on their spacetime volume in a surface code-based quantum computer. There are two cases to consider, which I will discuss separately. The first case is where, like with Google's proposed architecture, we have a two-dimensional grid of qubits that interact with their neighbors. In the second case, we imagine that the physical qubits have all-to-all connectivity. The assumption of all-to-all connectivity probably doesn't make sense at very large system sizes, but I would argue that it still serves as a reasonable starting point.

By counting the spacetime volume of the gates and adding in some additional overhead to account for routing and idling, we can approximate the actual physical resources (in physical qubit / seconds, for example) without worrying about laying things out precisely. However, it is true that approximating the routing overhead this way is less defensible than it was when we could assume that Clifford gates were free.

Two-dimensional architecture

With a grid of qubits connected to their nearest neighbors, we expect to implement some Clifford gates using lattice surgery, whereas others (Pauli X, Y, Z) are essentially free. In general, gates that we implement using lattice surgery have a spacetime volume cost of $k \cdot d^3$, where $d$ is the surface code distance.

As a starting point, we could assign a value of $k$ to each elementary Clifford gate that we would like to support. Then it would be easy to calculate the spacetime cost of the Clifford gates in units of $d^3$. For example, a CNOT gate would be assigned a cost of $6 d^3$.
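The bookkeeping described above could look something like the following sketch. The gate names and $k$ values here are illustrative placeholders rather than vetted numbers (only the CNOT value of 6 comes from the text):

```python
# Hypothetical per-gate lattice-surgery cost table, in units of d^3.
# Only the CNOT value (6) is taken from the discussion above; the rest
# are placeholders to be replaced with cited values.
CLIFFORD_K = {
    "CNOT": 6,
    "H": 1,
    "S": 2,
    # Single-qubit Paulis are essentially free in this model.
    "X": 0,
    "Y": 0,
    "Z": 0,
}


def spacetime_volume(gate_counts: dict, d: int) -> int:
    """Total spacetime volume, in physical-qubit * rounds, for a gate tally.

    Each lattice-surgery gate contributes k * d^3; Paulis contribute 0.
    """
    k_total = sum(CLIFFORD_K[g] * n for g, n in gate_counts.items())
    return k_total * d**3


# For example, 10 CNOTs and 4 Hadamards at distance d = 25:
vol = spacetime_volume({"CNOT": 10, "H": 4}, d=25)
```

This keeps the accounting in units of $d^3$ until the very end, so the same tally can be re-scaled when the code distance changes.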

This way of quantifying the costs would ignore the overhead from routing (and idling), but it would already be more accurate than just counting the number of Clifford gates. In order to deal with the routing overhead, we could default to assuming a particular overhead of, say, 50% (as in the Gidney-Fowler cost model). Ideally we would let users override the routing overhead for particular bloqs. If someone has compiled a particular subroutine of an algorithm down to bloqs that respect the two-dimensional connectivity they might manually set the routing overhead for that subroutine to 0%. One risk that we would face is that a naive user might dramatically underestimate their costs.
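The default-with-override behavior could be as simple as a multiplicative factor. This is a minimal sketch, assuming a flat 50% default (as in the Gidney-Fowler model) that a user can override per subroutine; the function name is hypothetical:

```python
# Default routing/idling overhead as a fraction of the raw spacetime
# volume (50%, following the Gidney-Fowler cost model).
DEFAULT_ROUTING_OVERHEAD = 0.5


def with_routing_overhead(raw_volume: float,
                          overhead: float = DEFAULT_ROUTING_OVERHEAD) -> float:
    """Scale a raw spacetime volume by (1 + routing overhead).

    A user who has already compiled a subroutine down to
    connectivity-respecting bloqs could pass overhead=0.0.
    """
    return raw_volume * (1.0 + overhead)
```

A naive user who sets `overhead=0.0` without having actually laid out their circuit would underestimate the true cost, which is exactly the risk noted above.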

All-to-all architecture

Given all-to-all connectivity, we can implement many (maybe all?) of the Clifford operations we might want using transversal gates and similar procedures. In this case, it would be more natural to count things in terms of surface code cycles directly, i.e., in units of $d^2$ rather than $d^3$.
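Concretely, the all-to-all accounting might look like the following sketch, where one logical qubit held for one code cycle occupies roughly $d^2$ physical-qubit-rounds. The function and its parameterization are assumptions for illustration:

```python
# Hypothetical all-to-all accounting: rather than d^3 blocks, count
# surface code cycles directly. Holding one logical qubit for one code
# cycle occupies roughly d^2 physical-qubit * rounds.


def a2a_spacetime_volume(n_logical_qubits: int, n_cycles: int, d: int) -> int:
    """Spacetime volume in physical-qubit * rounds, in units of d^2
    per logical qubit per cycle."""
    return n_logical_qubits * n_cycles * d**2
```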

The all-to-all assumption also simplifies the routing problem, since it would never be necessary to insert SWAP gates or use extra qubits and long hallways for long range interactions. Some overhead would probably still be appropriate to account for the fact that qubits might end up idling while waiting for other operations to complete.

@mpharrigan (Collaborator)

This sounds like an excellent use case for a new CostKey. The QECGatesCost can be used for inspiration https://github.com/quantumlib/Qualtran/blob/main/qualtran/resource_counting/_bloq_counts.py#L281

You can use pattern matching within the compute method to encode the costs of supported gates, with a fallback that adds up the contributions from the descendant bloqs. This could be a place to put in the routing overhead. I.e. if you're querying the cost of a CNOT it can tell you 6 cubes, but if you're asking for the cost of a bloq that is composed of $n$ CNOTs, it can say $6n$ + overhead($n$) cubes. The routing behavior can be parameterized by attributes on the new CostKey class.
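A standalone sketch of this pattern-matching-with-fallback idea (the real implementation would subclass Qualtran's CostKey and traverse actual bloqs; the `Bloq` stand-in, leaf costs, and overhead model below are all placeholders):

```python
from dataclasses import dataclass, field


@dataclass
class Bloq:
    """Stand-in for a Qualtran bloq: a name plus (callee, count) children."""
    name: str
    callees: list = field(default_factory=list)


# Leaf costs in "cubes" (units of d^3); illustrative values only.
LEAF_CUBES = {"CNOT": 6, "H": 1}


def clifford_cubes(bloq: Bloq, routing_fraction: float = 0.5) -> float:
    """Cost in cubes: pattern-match known leaf gates, otherwise sum the
    descendants' costs and add a multiplicative routing overhead."""
    if bloq.name in LEAF_CUBES:
        return LEAF_CUBES[bloq.name]
    raw = sum(n * clifford_cubes(callee, routing_fraction)
              for callee, n in bloq.callees)
    return raw * (1.0 + routing_fraction)


# A bloq made of 10 CNOTs: raw cost 6 * 10 = 60 cubes, plus 50% routing
# overhead, giving 90 cubes.
cnot = Bloq("CNOT")
ten_cnots = Bloq("TenCNOTs", [(cnot, 10)])
```

Parameterizing `routing_fraction` on the cost key (rather than hard-coding it) is what would let users override the overhead per subroutine, as discussed above.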

The 2D vs a2a can either be its own cost key or a boolean attribute on one cost key.

@wjhuggins (Collaborator, Author)

That does sound like a good plan. I guess a good first step would be making a skeleton for this in a different branch? Then it will take some work to track down and cite the right costs for different operations.

@mpharrigan (Collaborator)

If you create an independent CostKey in its own module with the "pattern matching" approach, it's easy to start with a rough approximation (special case some bloqs you already know about; otherwise fall back to the bloq_is_clifford helper and just give it a constant value, e.g. one cube) and continue to upgrade it on main without causing churn in the rest of the library.

@wjhuggins (Collaborator, Author)

I've started working on this in https://github.com/wjhuggins/Qualtran/tree/improved_clifford_costing. I've also started a rough list of surface code spacetime volume (for a 2d architecture using lattice surgery) here: https://docs.google.com/spreadsheets/d/1b21sGKJYo83wJxfGXdNT42pUR65970frZpiz98RJ7kY/edit?usp=sharing. Don't take the numbers seriously yet.
