Improved Clifford gate counting #1454
Comments
This sounds like an excellent use case for a new `CostKey`. You can use pattern matching within the `compute` method. The 2D vs. all-to-all distinction can either be its own cost key or a boolean attribute on one cost key.
That does sound like a good plan. I guess a good first step would be making a skeleton for this in a different branch? Then it will take some work to track down and cite the right costs for different operations.
If you create an independent …
I've started working on this in https://github.com/wjhuggins/Qualtran/tree/improved_clifford_costing. I've also started a rough list of surface code spacetime volume (for a 2d architecture using lattice surgery) here: https://docs.google.com/spreadsheets/d/1b21sGKJYo83wJxfGXdNT42pUR65970frZpiz98RJ7kY/edit?usp=sharing. Don't take the numbers seriously yet.
With continued improvements in the cost of preparing resource states for non-Clifford gates, particularly the recent preprint *Magic state cultivation: growing T states as cheap as CNOT gates*, it is clear that we need to go beyond counting T and Toffoli gates in order to get accurate resource estimates.
Approximating the spacetime volume without laying out an algorithm
I propose that we quantify the cost of Clifford gates based on their spacetime volume in a surface code-based quantum computer. There are two cases to consider, which I will discuss separately. The first case is where, like with Google's proposed architecture, we have a two-dimensional grid of qubits that interact with their neighbors. In the second case, we imagine that the physical qubits have all-to-all connectivity. The assumption of all-to-all connectivity probably doesn't make sense at very large system sizes, but I would argue that it still serves as a reasonable starting point.
By counting the spacetime volume of the gates and adding in some additional overhead to account for routing and idling, we can approximate the actual physical resources (in physical qubit / seconds, for example) without worrying about laying things out precisely. However, it is true that approximating the routing overhead this way is less defensible than it was when we could assume that Clifford gates were free.
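A rough conversion from abstract spacetime volume to physical resources might look like the sketch below. The factor of $2d^2$ physical qubits per surface-code patch (data plus measurement qubits) and the 1 microsecond cycle time are assumptions for illustration, not settled choices.

```python
def physical_qubit_seconds(volume_d3_units: float, d: int,
                           cycle_time_s: float = 1e-6) -> float:
    """Rough conversion from spacetime volume (units of d^3) to physical-qubit-seconds.

    Assumes ~2*d^2 physical qubits per surface-code patch and that one unit of
    d^3 volume occupies one patch for d code cycles. Both numbers are assumptions.
    """
    physical_qubits_per_patch = 2 * d**2
    seconds_per_d_cycles = d * cycle_time_s
    return volume_d3_units * physical_qubits_per_patch * seconds_per_d_cycles


# A single CNOT (6 d^3) at distance d = 25:
print(physical_qubit_seconds(6, d=25))
```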
Two-dimensional architecture
With a grid of qubits connected to their nearest neighbors, we expect to implement some Clifford gates using lattice surgery, whereas others (Pauli X, Y, Z) are essentially free. In general, gates that we implement using lattice surgery have a spacetime volume cost of $k \cdot d^3$, where $d$ is the surface code distance.
As a starting point, we could assign a value of $k$ to each elementary Clifford gate that we would like to support. Then it would be easy to calculate the spacetime cost of the Clifford gates in units of $d^3$. For example, a CNOT gate would be assigned a cost of $6 d^3$.
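The per-gate tally described above could be as simple as a lookup table of $k$-values. Only the CNOT value of 6 comes from the text; the other entries are illustrative placeholders.

```python
# Placeholder k-values per elementary Clifford gate (units of d^3).
# Only CNOT = 6 is from the discussion; H and S are illustrative.
K_VALUES = {"CNOT": 6, "H": 1, "S": 2}


def clifford_volume(gate_counts: dict[str, int], d: int) -> int:
    """Total lattice-surgery spacetime volume in units of d^3, scaled by d**3."""
    return sum(K_VALUES[g] * n * d**3 for g, n in gate_counts.items())


print(clifford_volume({"CNOT": 100}, d=25))  # 100 * 6 * 25**3 = 9375000
```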
This way of quantifying the costs would ignore the overhead from routing (and idling), but it would already be more accurate than just counting the number of Clifford gates. In order to deal with the routing overhead, we could default to assuming a particular overhead of, say, 50% (as in the Gidney-Fowler cost model). Ideally we would let users override the routing overhead for particular bloqs. If someone has compiled a particular subroutine of an algorithm down to bloqs that respect the two-dimensional connectivity they might manually set the routing overhead for that subroutine to 0%. One risk that we would face is that a naive user might dramatically underestimate their costs.
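The overridable routing overhead described above amounts to a single multiplier; a minimal sketch, with the 50% default taken from the Gidney-Fowler cost model mentioned in the text:

```python
def with_routing_overhead(volume: float, overhead: float = 0.5) -> float:
    """Inflate a raw spacetime volume by a routing/idling overhead fraction.

    The 50% default follows the Gidney-Fowler cost model; a subroutine that
    has already been laid out to respect 2D connectivity could override it to 0.
    """
    return volume * (1.0 + overhead)


print(with_routing_overhead(1000.0))       # 1500.0 (default 50% overhead)
print(with_routing_overhead(1000.0, 0.0))  # 1000.0 (pre-laid-out subroutine)
```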
All-to-all architecture
Given all-to-all connectivity, we can implement many (maybe all??) of the Clifford operations we might want using transversal gates and similar procedures. In this case, it would be more natural to count things in terms of surface code cycles directly. In other words, units of $d^2$ rather than $d^3$.
The all-to-all assumption also simplifies the routing problem, since it would never be necessary to insert SWAP gates or use extra qubits and long hallways for long range interactions. Some overhead would probably still be appropriate to account for the fact that qubits might end up idling while waiting for other operations to complete.
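Counting in units of $d^2$ with an idling allowance could look like the sketch below. The per-operation cycle counts and the 25% idling overhead are placeholders, not sourced values.

```python
# Illustrative cycle counts per transversal operation, in units of d^2
# (one surface-code cycle across a patch). All numbers are placeholders.
CYCLES = {"transversal_CNOT": 1, "transversal_H": 3}


def a2a_cost(gate_counts: dict[str, int], idle_overhead: float = 0.25) -> float:
    """Total all-to-all cost in units of d^2, with an assumed idling overhead."""
    raw = sum(CYCLES[g] * n for g, n in gate_counts.items())
    return raw * (1.0 + idle_overhead)


print(a2a_cost({"transversal_CNOT": 4}))  # 4 * 1 * 1.25 = 5.0
```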