faster slicing for vectors? #62
+1 - would be great to have an O(1) way to do slices that takes advantage of structural sharing.
Well... just so you know, this problem is actually quite difficult, because even the original Clojure vector that popularized this idea doesn't support prepending / dropping from the front: https://stackoverflow.com/questions/4095714/what-is-the-idiomatic-way-to-prepend-to-a-vector-in-clojure

If we can make dropping from the front efficient, then slice / ops::Range can be implemented in amortized O(log n) per drop, by repeatedly dropping from the front until the range start and dropping from the back until the range end (see the sketch below).

The other way round, I think we would need to modify the data structure used for the persistent vector in rpds to support an efficient meld operation. That would let us quickly access the "chunks" of different shared vectors, separated like a B-tree: first persistently clone the whole tree, find the specific leaf, clone it into a plain vector, drop the unwanted sub-indices in that cloned vector according to the item positions in the chunk, and then meld (i.e. tree merge) this new vector back into the rest of the tree. This achieves structural sharing of a sort, but it would surely be hell to implement. (I took this idea from the rope data structure.)
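A minimal sketch of the drop-based strategy described above. Everything here is hypothetical and just for illustration: `drop_last` mirrors what rpds::Vector already offers, while `drop_first` is the missing operation this issue is about; both are assumed to be O(log n) and to preserve structural sharing.

```rust
/// Hypothetical interface for the sketch: not rpds's actual API.
trait PersistentSeq: Sized + Clone {
    fn len(&self) -> usize;
    fn drop_first(&self) -> Option<Self>; // hypothetical: not in rpds today
    fn drop_last(&self) -> Option<Self>;  // rpds::Vector has an equivalent
}

/// Slice `[start, end)` by repeatedly dropping from both ends.
/// Cost is O((start + (v.len() - end)) * log n) instead of copying
/// `end - start` elements, and the result shares structure with `v`.
fn slice<V: PersistentSeq>(v: &V, start: usize, end: usize) -> V {
    assert!(start <= end && end <= v.len());
    let mut out = v.clone();
    while out.len() > end {
        out = out.drop_last().expect("length checked above");
    }
    for _ in 0..start {
        out = out.drop_first().expect("start <= end <= len");
    }
    out
}
```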
I was using im and am recently moving to rpds. im has methods like skip, take, drop_first, ... which create slices from existing vectors. I can't find these methods in rpds, so I have to loop over .iter() and copy elements into new vectors, which is slow due to the lack of structural sharing (a sketch of this workaround is below). Are there suggested ways to make slices?
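For reference, the copy-based workaround looks roughly like this (assuming rpds's Vector::new, push_back, and iter; skip and take here are the standard iterator adapters, not vector operations):

```rust
use rpds::Vector;

// O(end - start) and no structural sharing with the original vector:
// every element in the range is cloned into a brand-new Vector.
fn slice_by_copying<T: Clone>(v: &Vector<T>, start: usize, end: usize) -> Vector<T> {
    let mut out = Vector::new();
    for item in v.iter().skip(start).take(end - start) {
        out = out.push_back(item.clone());
    }
    out
}
```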
Updated: probably just #61.