Proper ZK treatment in plonky2 #1625
This PR aims at addressing #1625, based on this note: https://eprint.iacr.org/2024/1037.pdf.
Just did a first pass and mostly pointed out nits.
I guess this is what the note meant (Protocol 2, isn't it?), because why do two proofs if you can batch them and compute only one? And batching them is like doing FRI for the large poly.
After an initial review by @ulrich-haboeck, it came out that the random […]. Moreover, he also mentioned that […]. I therefore updated the implementation to include both changes.
Here's my feedback:
- We would improve proof sizes by gathering all ~~round-2 polynomials, i.e. the partial products from the permutation argument, the pole sums and the table authenticator sum for the lookup argument, the components of the quotient polynomial, and the zk masking polynomial for batch-FRI~~ round-3 polynomials, i.e. the components of the quotient polynomial and the zk masking polynomial for batch-FRI, under a common Merkle root.
- I was not able to find out whether the lookup argument polynomials from the second round (for the "pole sums" over table and witness area) are randomized. If not, we need to do this by extending the randomization of the "regular" polynomials to the auxiliary ones as well.
- Due to the treatment of the permutation argument, the current implementation is only statistically zero-knowledge: for the prover to be able to craft a valid proof, the round-1 verifier challenges (beta, gamma) must not produce a zero in any of the partial products. Hence, with each valid proof the verifier learns a little piece of information on the witnesses, namely that all linear factors of the (virtual) permutation argument polynomial

  Sigma(X, Y) = \prod_{i,x} (X - x - w_i(x) * Y),

  where the product ranges over all wired columns of the chip, are non-zero at (beta, gamma). (Funnily, this is also the case for Plonk's randomization, see footnote 10 on zero-knowledge in the Plonk paper.)
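Spelled out with the notation above, here is a short restatement of what an accepted proof reveals (assuming gamma != 0):

```latex
% Accepting a proof tells the verifier that \Sigma(\beta,\gamma) \neq 0, i.e.
% every linear factor is non-zero:
\beta - x - w_i(x)\,\gamma \;\neq\; 0 \quad \text{for all wire cells } (i, x),
% equivalently, each accepted proof rules out one candidate value per witness cell:
w_i(x) \;\neq\; \frac{\beta - x}{\gamma}.
```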
In my opinion, perfect zero-knowledge would be a nice feature. But that would come at a certain extra cost:
- The technique from the mir blog would need to be replaced by a strategy that works for every (beta, gamma). A naive approach would cost double the auxiliary columns, by proving

  \prod_{i,x} (beta - x - w_i(x) * gamma)   and   \prod_{i,x} (beta - sigma_{i}(x) - w_i(x) * gamma)

  via separate running products, with their start values enforced to be 1 and their end values enforced to be equal (a runnable sketch of this is appended after the list below). Randomization of the partial products can be done by regular noop-gates plus a selector for the permutation argument (excluding the zk area on the chip), or by multiples of the domain vanishing polynomial. The latter randomization allows keeping the same number of columns per "partial lookup", if one implements a "greedy" evaluation logic of their constraints.
- Although not strictly needed, it would be good practice to do proper error handling for the case when the lookup random challenge (alpha, ChallengeA) hits a zero of the (virtual) table polynomial

  t(X, Y) = \prod_i (X - t_{i,0} - t_{i,1} * Y),

  where the product ranges over all table entries t_i = (t_{i,0}, t_{i,1}) of the functional relation to be looked up.
Sorry for the bad formatting - markdown is a pain.
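As referenced above, a minimal, self-contained sketch of the naive approach with two separate running products. This is only an illustration, not plonky2 code: field arithmetic is modeled naively with u128 over the Goldilocks prime, and the witness, wiring permutation, and challenges are made-up toy values.

```rust
// Two separate running products, both starting at 1, whose end values must agree
// for any challenge pair (beta, gamma) whenever the witness respects the wiring.

const P: u128 = 0xFFFF_FFFF_0000_0001; // Goldilocks modulus, as used by plonky2

fn add(a: u128, b: u128) -> u128 { (a + b) % P }
fn sub(a: u128, b: u128) -> u128 { (a + P - b % P) % P }
fn mul(a: u128, b: u128) -> u128 { (a % P) * (b % P) % P }

/// Running product of the factors (beta - label(x) - w(x) * gamma) over all cells x,
/// where `labels` is either the identity labeling x or the permuted labeling sigma(x).
fn running_product(w: &[u128], labels: &[u128], beta: u128, gamma: u128) -> Vec<u128> {
    let mut acc = 1u128; // start value enforced to be 1
    let mut z = vec![acc];
    for (&wx, &lx) in w.iter().zip(labels) {
        acc = mul(acc, sub(beta, add(lx, mul(wx, gamma))));
        z.push(acc);
    }
    z
}

fn main() {
    // Single witness column with 4 rows; sigma swaps rows {0,1} and {2,3}.
    // The witness is constant along each wiring cycle, so the copy constraints hold.
    let w = [5u128, 5, 9, 9];
    let ids = [0u128, 1, 2, 3];
    let sigma = [1u128, 0, 3, 2];
    let (beta, gamma) = (123u128, 456u128);

    let z_id = running_product(&w, &ids, beta, gamma);
    let z_sigma = running_product(&w, &sigma, beta, gamma);

    // Both start at 1; the permutation argument would enforce equal end values.
    assert_eq!(z_id[0], 1);
    assert_eq!(z_sigma[0], 1);
    assert_eq!(z_id.last(), z_sigma.last());
    println!("end values agree: {:?}", z_id.last());
}
```

Note that this equality holds for every (beta, gamma) as soon as the witness respects the wiring, which is the point of the "naive" perfect-ZK variant.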
alpha.shift_poly(&mut final_poly);
final_poly += quotient;
Same here, as in the batch FRI oracle. Shouldn't we shift `quotient`?
    .map(|p| (p.oracle_index == PlonkOracle::R.index) as usize)
    .sum();
let last_poly = polynomials.len() - nb_r_polys * (idx == 0) as usize;
let evals = polynomials[..last_poly]
    .iter()
    .map(|p| {
        let poly_blinding = instance.oracles[p.oracle_index].blinding;
Actually, for the line below: what is the purpose of the `&&` here?
I am unsure (I did not change this code), but I assume this is to allow us to have polynomials that we do not necessarily want to blind?
Spending another thought on perfect zero-knowledge, the following weakened constraints on a running product should suffice. For simplicity, I explain it for a single-column Plonk with a single witness column: […] for every x in […], with […] at […].

Also, we can take the same random […].
Actually, there is a gap in the above constraints, which can still prevent the prover from succeeding in certain cases. Notably, this gap also occurs in the Halo2 book (thanks to @Al-Kindi-0 for the reference, and also for proposing the countermeasure): while this leaves […] for all […].

Let me point out that the solution described here is tailored to AIRs, and can hopefully be further optimized.
Corrected a mistaken comment on the common Merkle root for each round. Round 2 is fine as implemented (gathering the permutation and lookup argument polynomials), but Round 3 gives us the opportunity to put the masking polynomial under the same Merkle root as the quotient polynomial components.
I'm sorry, I'm not sure I understand what you mean here. What are the "regular" and "auxiliary" polynomials here? Do you mean we should randomize the SLDC polynomials in some way?
Exactly. One could probably do a more fine-grained analysis, similar to the permutation argument polys, arguing statistical zero-knowledge, but I need to think about it.
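For reference, the "pole sums" mentioned above are, as I understand it, the sums of logarithmic derivatives from the lookup argument; schematically (with m(t) the assumed multiplicity of table value t among the looked-up witness values, and alpha the lookup challenge):

```latex
\sum_{x \in H} \frac{1}{\alpha - w(x)} \;=\; \sum_{t \in T} \frac{m(t)}{\alpha - t}
```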
After another round of contemplation, I see the following problem with the current statistically zero-knowledge approach: we take several base field samples, instead of a single one from the extension field. While this is a good approach for amplifying soundness, it actually is bad for statistical zero-knowledge. The verifier learns that not just for one, but for several challenge pairs, none of the linear factors of the permutation product vanishes, which adds up in the statistical distance to its ideal (simulated) counterpart.

That being said, I see only two options to remedy this issue: sampling a single challenge pair from the extension field, or going for perfect zero-knowledge as sketched above.

The latter is actually quite costly for the Plonk permutation argument (a side effect that I missed in my above elaboration in the single-column setting), practically doubling the number of 2-nd round polynomials (in comparison to non-zk). Besides, in the world of hash-based proofs, perfect zk of the IOP is downgraded to statistical zk anyways.
Fully agree, the case of challenges from the base field is worse from the statistical distance point of view. This gets worse the larger the witness size gets.
Just to clarify, the doubling of the number of polynomials in the second round is due to the increase in the degree, right? My proposal is to go with statistical zero-knowledge but give an explicit bound on the statistical distance.
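For what it's worth, a crude union-bound estimate of that statistical distance (a back-of-the-envelope sketch under my own assumptions, not a claim from the thread: N is the number of linear factors in the permutation product, p the base field size, k the extension degree, s the number of independent base-field challenge pairs):

```latex
% Each factor \beta - x - w_i(x)\gamma = 0 defines a line in \mathbb{F}^2, hit by a
% uniformly random pair (\beta, \gamma) with probability 1/|\mathbb{F}|. A union bound
% over the N factors, and over the s challenge pairs in the base-field variant, gives
\Delta_{\mathrm{base}} \;\lesssim\; \frac{s\,N}{p},
\qquad
\Delta_{\mathrm{ext}} \;\lesssim\; \frac{N}{p^{k}}.
```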
@Al-Kindi-0 exactly, the doubling of second round polys is due to the increased degree of the constraints.
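To make the degree argument concrete (my reading, assuming a maximum constraint degree d and degree-1 factors in the non-zk case):

```latex
% A partial-product constraint Z_{j+1}(x) = Z_j(x) \cdot \prod_{k=1}^{c} f_k(x)
% with degree-1 factors f_k can absorb c factors subject to 1 + c \le d.
% If the perfect-ZK variant doubles each factor's degree, then 1 + 2c' \le d,
% so c' \le (d-1)/2: roughly twice as many partial-product polynomials are needed.
```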
@LindaGuiga I'm opening a draft PR to be able to comment on the code