Memory usage is a bottleneck that we hit for large deployments.
The peak memory usage at the moment occurs in the LDT reducer. The LDT reducer takes a random linear combination of all constraints from the protocol over the large codeword domain (`L` in the papers). The current API evaluates all constraints in full, and only then takes a random linear combination of them all. Instead, we should stream these oracles into the LDT reducer one by one. The LDT reducer would maintain some 'current state' and update it with every streamed oracle, as sketched below.
This should significantly lower the memory requirement, and if we eventually use a 'codeword pool' it would likely improve prover time.
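For concreteness, here is a minimal sketch of what the streamed interface could look like. The names `streaming_ldt_reducer`, `absorb`, and `FieldT` are hypothetical, not the existing API; the point is only that the reducer holds a single accumulator of codeword-domain size and folds each oracle in as it arrives, so at most one full oracle needs to be resident at a time.

```cpp
#include <cstddef>
#include <vector>

/* Minimal sketch of a streamed LDT reducer. `FieldT` stands in for the
 * protocol's field element type; all names here are illustrative. */
template<typename FieldT>
class streaming_ldt_reducer {
public:
    explicit streaming_ldt_reducer(const std::size_t codeword_domain_size) :
        combined_(codeword_domain_size, FieldT(0)) {}

    /* Fold one constraint oracle, evaluated over the codeword domain L,
     * into the running combination using its random coefficient. The
     * caller can free the oracle immediately afterwards, so only one
     * full oracle plus the accumulator live in memory at a time. */
    void absorb(const std::vector<FieldT> &oracle, const FieldT &coefficient)
    {
        for (std::size_t i = 0; i < combined_.size(); ++i)
        {
            combined_[i] += coefficient * oracle[i];
        }
    }

    /* After every oracle has been streamed in, this equals the random
     * linear combination that previously required all oracles at once. */
    const std::vector<FieldT> &combined_codeword() const { return combined_; }

private:
    std::vector<FieldT> combined_;
};
```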