It's possible to obtain significantly higher throughput by downloading entries in parallel. There are several challenges with this approach, however:
- We don't know how many entries a log will return, so it's hard to know what the optimal job size is.
- We'll need to gracefully handle rate limiting.
- It's much easier to update the Collapsed Merkle Tree serially. I see a few ways to handle this:
  - We could force the entries to be processed in serial by using a buffered channel of buffered channels. However, I think it will be difficult to reason about buffers.
  - We could split up jobs on subtree boundaries, making it easy to merge the Collapsed Merkle Trees of all the jobs at the end.
  - We could devise a sparse Collapsed Merkle Tree data structure which allows nodes to be appended in any order.
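The subtree-boundary option could be sketched as follows. This is a hypothetical helper (the names `splitJobs`, `Range`, and `maxJob` are mine, not from the original code): it cuts the download range so that every job starts on a subtree boundary, i.e. a job beginning at index `i` spans at most `1 << TrailingZeros64(i)` entries, so each worker's Collapsed Merkle Tree lines up with its neighbors and can be merged serially at the end.

```go
package main

import (
	"fmt"
	"math/bits"
)

// Range is a half-open interval [Begin, End) of entry indices.
type Range struct{ Begin, End uint64 }

// splitJobs splits [begin, end) into download jobs that respect subtree
// boundaries: a job starting at index i covers at most
// 1 << TrailingZeros64(i) entries. maxJob additionally caps job size so
// the work is spread across workers even near the start of the log.
func splitJobs(begin, end, maxJob uint64) []Range {
	var jobs []Range
	for begin < end {
		size := maxJob
		if begin != 0 {
			// Largest subtree that can be rooted at begin. (Index 0 is
			// special: a tree starting there can have any size.)
			if aligned := uint64(1) << bits.TrailingZeros64(begin); aligned < size {
				size = aligned
			}
		}
		if remaining := end - begin; remaining < size {
			size = remaining
		}
		jobs = append(jobs, Range{begin, begin + size})
		begin += size
	}
	return jobs
}

func main() {
	// Splits [0, 1000) into [0,256) [256,512) [512,768) [768,1000).
	for _, j := range splitJobs(0, 1000, 256) {
		fmt.Printf("[%d, %d)\n", j.Begin, j.End)
	}
}
```

Because every job except possibly the last ends exactly where a subtree closes, merging the per-job trees at the end is a simple left-to-right fold, with no out-of-order bookkeeping.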
```go
type DiscontiguousCollapsedTree []CollapsedTreeSegment

type CollapsedTreeSegment struct {
	Index uint64
	Tree  CollapsedTree
}

func (t *DiscontiguousCollapsedTree) Add(index uint64, tree CollapsedTree) {
	maxSize := uint64(1) << bits.TrailingZeros64(index)
	if tree.size > maxSize {
		// error: tree too large to be placed at this index
	}
	// Find the segment prior to index in *t.  Append tree to this segment if
	// tree is contiguous with it (and also see if we can merge in the
	// following segment); otherwise, insert a new segment in *t at this
	// location.
}
```
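The `maxSize` check above encodes the key invariant: in a binary Merkle tree, a subtree of `s` leaves must start at an index that is a multiple of `s`, so the largest tree that can be placed at index `i` has `1 << TrailingZeros64(i)` leaves. A small self-contained illustration (the helper name `maxSubtreeSize` is mine):

```go
package main

import (
	"fmt"
	"math/bits"
)

// maxSubtreeSize returns the largest subtree (in leaves) that can be rooted
// at the given leaf index. Index 0 is special: TrailingZeros64(0) is 64, and
// a tree starting at 0 can in fact have any size.
func maxSubtreeSize(index uint64) uint64 {
	return uint64(1) << bits.TrailingZeros64(index)
}

func main() {
	for _, i := range []uint64{1, 2, 4, 6, 8, 12, 1024} {
		fmt.Printf("index %d: max subtree size %d\n", i, maxSubtreeSize(i))
	}
}
```

For example, a segment arriving at index 12 can hold at most 4 leaves (12 is a multiple of 4 but not of 8), which is exactly what `Add` rejects with the "tree too large" error.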