Make it possible to undeclare variables #289
Comments
I thought about this a bunch, in detail, and it definitely appears to be a lot less trivial than I originally thought. One problem here is guarantees on ordering.
I'm looping @russelldb in to see what he thinks, since this is a problem that has been thought about many times in the context of Riak.
@russelldb is also correct to point out that a single logical clock for the entire node could be used to handle removal. The complication here is that, internally, Lasp's key-value store supports any CRDT that implements a given interface (an Erlang behavior), so we can't assume a uniform data representation. So, either we need to extend the types in a generic way, or we need to support something specific in the backend (i.e., Russell's solution or the partial replication scheme I proposed).
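For illustration, here is a minimal sketch (not Lasp's actual storage code) of the backend-level idea: wrap every stored CRDT in a small record carrying a node-local logical clock and a removal flag, so a removal can be recorded generically without touching the individual CRDT implementations. All module, record, and function names below are hypothetical.

    -module(clocked_store).
    -export([put/4, remove/3, get/2]).

    %% Opaque CRDT state plus bookkeeping the backend can reason about
    %% uniformly, regardless of which CRDT behavior the value implements.
    -record(entry, {value,
                    clock = 0,          %% node-local logical clock at last write
                    removed = false}).  %% tombstone instead of hard delete

    %% Store (or overwrite) a value, stamping it with the current clock.
    put(Table, Key, Value, Clock) ->
        true = ets:insert(Table, {Key, #entry{value = Value, clock = Clock}}),
        ok.

    %% Record a removal at the given clock rather than deleting outright,
    %% so a later update can be compared against the removal point.
    remove(Table, Key, Clock) ->
        case ets:lookup(Table, Key) of
            [{Key, Entry = #entry{clock = Seen}}] when Seen =< Clock ->
                true = ets:insert(Table, {Key, Entry#entry{removed = true, clock = Clock}}),
                ok;
            _ ->
                ok
        end.

    %% Reads treat removed entries as absent.
    get(Table, Key) ->
        case ets:lookup(Table, Key) of
            [{Key, #entry{removed = false, value = Value}}] -> {ok, Value};
            _ -> not_found
        end.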
Obviously, another issue to worry about here is an update that runs concurrently with an undeclare, which would effectively restore the value. In fact, the declare operation is superfluous anyway, because all it does is create a local register with the bottom value for the lattice, which is done implicitly through the update operation.
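To make that last point concrete, here is a sketch (with hypothetical module and callback names, not Lasp's actual behaviour) of an update that starts from the type's bottom element when the key is absent; it also shows why a naive undeclare that simply deletes the key would be undone by a concurrent update like this one.

    -module(implicit_declare).
    -export([update/4]).

    %% Type is assumed to export new/0 (the lattice bottom) and update/2
    %% (apply an operation to a state); these names are illustrative only.
    update(Table, Key, Type, Operation) ->
        Current = case ets:lookup(Table, Key) of
                      [{Key, Value}] -> Value;
                      %% Implicit declare: a missing key starts from bottom, so a
                      %% concurrent hard delete would simply be "resurrected" here.
                      []             -> Type:new()
                  end,
        Updated = Type:update(Operation, Current),
        true = ets:insert(Table, {Key, Updated}),
        {ok, Updated}.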
This is what I was planning to look at in case it was the cause of our memory leak. I think the memory spikes too quickly for this to be the main issue we are seeing, but I figure it could still be adding to it over time.
Can you elaborate on why you think this might be the cause of the memory leak you are experiencing?
@cmeiklejohn based on the comment from @bullno1, it sounded like variables that are no longer used would continue to take up space, and our staging environment continually creates new devices/channels that get registered, used briefly, and never used again, 24/7. Just a thought; even if space is being left taken, it may be so little that it doesn't matter and we have other concerns. I'm still trying to find where the issue is.
I'd been trying to find what was eating up all the memory on our node for a while now and finally discovered why it was so hard to find :).
Yes, that's right. So, you're creating new keys often and abandoning other keys? Is that the root cause of the issue? If so, we probably need to come up with a solution for this sooner rather than later. Can you confirm this is the actual issue?
Just realized I never responded here, only on Gitter. I think the abandoning is not currently the issue. I'd expect that growth to be much slower if not for the growing
Can you dump the ets table (or even just a single table entry) so we can inspect the stored state and identify where the bloating is coming from?
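For reference, something along these lines from a remote shell on the node would do it; the table name here is a guess, so check ets:all() or the output of ets:i() for the real one:

    %% One compound shell command; the table name lasp_ets is hypothetical.
    Tab = lasp_ets,
    ets:info(Tab, size),      %% number of stored objects
    ets:info(Tab, memory),    %% memory used by the table, in words
    Key = ets:first(Tab),     %% pick an arbitrary key...
    ets:lookup(Tab, Key).     %% ...and dump its full stored entry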
FWIW: the previous fix on master should have removed the issue with
Oh, I'll try master. And from the ets table the other day:
I'll be pushing the use of master tomorrow to see how it does. It looks like there were more commits after the
In some cases, unneeded variables could be removed to save space.