Adding multiple buckets with the same ID causes CPU to be pegged #13
Here's a sketch somewhat like what I'm thinking of. Note that since this is nesting swaps it should almost certainly replace the existing `add-bucket!`:

```clojure
(defn stop-bucket! [bucket]
  ;; Stop every trigger that knows how to stop itself.
  (doseq [{:keys [stop-fn]} (vals (:triggers bucket))
          :when stop-fn]
    (stop-fn)))

;; Dispatch on the trigger kind so each kind can define its own merge.
(defmulti merge-triggers (fn [kind _trigger1 _trigger2] kind))

;; By default the newer trigger wins.
(defmethod merge-triggers :default [_ _ trigger2] trigger2)

;; Queue-size triggers merge by summing their thresholds.
(defmethod merge-triggers :queue-size [_ trigger1 trigger2]
  (update trigger1 :threshold + (:threshold trigger2)))

(defn merge-buckets [bucket1 bucket2]
  (when (:queue bucket2)
    (swap! (:queue bucket1) into @(:queue bucket2)))
  (update bucket1 :triggers
          (fn [triggers]
            (reduce-kv
             (fn [ts kind trigger]
               (if (contains? ts kind)
                 (update ts kind #(merge-triggers kind % trigger))
                 (assoc ts kind trigger)))
             triggers
             (:triggers bucket2)))))

(defn add-bucket! [ctx id opts]
  (swap!
   (:buckets ctx)
   (fn [buckets]
     (update buckets id
             (fn [bucket]
               (if bucket
                 ;; Same id already registered: stop its triggers, then merge.
                 (do
                   (stop-bucket! bucket)
                   (merge-buckets bucket opts))
                 ;; New id: start a fresh bucket.
                 (#'sl.core/start-bucket! ctx id opts))))))
  ctx)
```
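For illustration, here's how that sketch would behave for sibling resolvers sharing an id (a hypothetical usage; the `ctx` shape and ids are made up for the example):

```clojure
;; Hypothetical usage of the sketch above. Assumes the definitions
;; above are loaded and superlifter's start-bucket! builds the bucket.
(def ctx {:buckets (atom {})})

;; First sibling registers the bucket: created with threshold 3.
(add-bucket! ctx :children {:triggers {:queue-size {:threshold 3}}})

;; Second sibling registers the same id: instead of silently
;; overwriting (and leaking the old triggers), the old bucket is
;; stopped and the queue-size thresholds merge, 3 + 3 = 6.
(add-bucket! ctx :children {:triggers {:queue-size {:threshold 3}}})
```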
Hi, thank you for this very detailed issue. My original intention for buckets is for them to either be perpetual and constant (i.e. something like "fetch every 100ms") or to be created for a specific purpose (this node has six children, create a sized bucket for fetching those six). If I understand your particular use case, you have a list of nodes whose resolvers each try to register the same bucket. In this case I would consider adding a new function for updating an existing bucket rather than overwriting it. What do you think?
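To make those two styles concrete, a minimal sketch (the trigger option shapes follow the ones used elsewhere in this thread; treat the exact keys as assumptions):

```clojure
;; Style 1: perpetual and constant - drain whatever has accumulated
;; on a fixed interval, e.g. every 100ms.
(def perpetual-bucket-opts {:triggers {:interval {:interval 100}}})

;; Style 2: created for a specific purpose - this node has six
;; children, so size the bucket to trigger once all six are enqueued.
(def sized-bucket-opts {:triggers {:queue-size {:threshold 6}}})
```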
Yeah, I'll have to look into what Lacinia does - maybe I'll send them a patch if they don't do breadth-first resolution.
The individual resolver doesn't know if it has any siblings, though - so in that case it wouldn't know whether to call update or add. Maybe the update semantics could nil-pun.
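A minimal sketch of what nil-punning update semantics could look like - `->bucket` and `merge-bucket-opts` here are hypothetical placeholders, not superlifter functions:

```clojure
;; Hypothetical helpers: a real implementation would start/stop
;; triggers rather than just manipulate maps.
(defn ->bucket [opts] (assoc opts :queue (atom [])))
(defn merge-bucket-opts [bucket opts] (merge-with merge bucket opts))

;; clojure.core/update passes nil when the id is absent, so a single
;; upsert call can either create the bucket or merge into it - the
;; resolver doesn't need to know whether it has siblings.
(defn upsert-bucket [buckets id opts]
  (update buckets id #(if % (merge-bucket-opts % opts) (->bucket opts))))

(upsert-bucket {} :children {:triggers {:queue-size {:threshold 3}}})
```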
I still think there's an underlying bug here, which is that buckets that get (potentially accidentally) overwritten never get their triggers stopped. I think, separate to the merging question, that's worth fixing on its own.
When adding a bucket with an ID that's already used, currently the previous bucket is just silently overwritten. This causes issues if the bucket has, for example, an interval trigger, since that continues to run unchecked indefinitely, which was the original issue described in oliyh#13. This change stops any overwritten bucket and logs a warning.
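A minimal sketch of that change, reusing `stop-bucket!` and `start-bucket!` from the sketch above and `clojure.tools.logging` (how the real patch does it may differ):

```clojure
(require '[clojure.tools.logging :as log])

;; Stop and warn instead of silently overwriting. swap-vals! returns
;; [old new], so the side effects (warning, stopping triggers) happen
;; outside the swap function and stay safe under retries.
(defn add-bucket! [ctx id opts]
  (let [new-bucket (#'sl.core/start-bucket! ctx id opts)
        [old _] (swap-vals! (:buckets ctx) assoc id new-bucket)]
    (when-let [existing (get old id)]
      (log/warn "Overwriting existing bucket" id)
      (stop-bucket! existing))
    ctx))
```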
Agree that siblings aren't aware of each other, so an update/upsert behaviour would create the bucket if it didn't already exist, but I would like the developer to make the decision on how to merge buckets. Superlifter could of course provide some common strategies, such as those you have suggested above. Lacinia does do breadth-first resolution, and I'm fairly certain that most other GraphQL implementations do the same, but I thought it worth noting as superlifter can be used in any context, not just GraphQL.
I have rewritten quite a lot of superlifter to avoid the nested atoms (there's only one at the top level now, for the buckets) and to also ensure that enqueuing and fetching are atomic, and I have changed the implementation accordingly. One of the bugs was that if you have a bucket of size 3, you need to ensure that all 3 muses destined for it end up there, otherwise it will never trigger. This means you can't do a partial fetch early when there is, say, 1 muse, because its 2 siblings will be added later and never reach the threshold of 3, which is why I changed it. Let me know what you think? Cheers,
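A toy model of that threshold bug, independent of superlifter's internals:

```clojure
;; Toy model: a queue-size trigger fires only when the queue reaches
;; the threshold, so draining early strands the remaining siblings.
(defn enqueue [bucket muse]
  (let [bucket (update bucket :queue conj muse)]
    (if (>= (count (:queue bucket)) (:threshold bucket))
      (do (println "fetching" (:queue bucket))
          (assoc bucket :queue []))
      bucket)))

(reduce enqueue {:queue [] :threshold 3} [:a :b :c])
;; Fetches [:a :b :c] once all three muses have arrived. Had [:a]
;; been fetched early, [:b :c] alone would never reach the threshold
;; and the trigger would never fire.
```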
See #12 (comment)
I'll hopefully get a chance to look at this this weekend. |
@oliyh did something about the wrapping API change? Because now my tests are failing with an NPE.
Hello, nothing changed intentionally; I will add `with-context` to the example project and work out what has happened. Regarding your NPE: you need to ensure the bucket exists before calling `update-trigger!` - either by making it part of your initial superlifter options (preferred, and how the example is now configured) or by calling `add-bucket!` first.
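For example (signatures assumed from this thread, not checked against the current API):

```clojure
;; Create the bucket before touching its trigger; without the
;; add-bucket! call there is nothing for update-trigger! to act on,
;; hence the NPE.
(add-bucket! ctx :children {:triggers {:queue-size {:threshold 1}}})
(update-trigger! ctx :children :queue-size
                 (fn [trigger] (update trigger :threshold + 5)))
```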
The NPE was just due to a missing bucket.
Feel okay with closing this for now.
At least I think that's what's happening - it looks like since superlifter `assoc`s in the bucket directly with no regard to what's already there, and then only stops the buckets it knows about on shutdown, any overwritten buckets just loop forever. This is relatively easy to hit by running any three-level request, since all the buckets in the second level just overwrite each other. What I'd love to see is some sort of intelligent merging of bucket options, to e.g. sum together thresholds or take the max of all intervals.
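Following the `merge-triggers` sketch from earlier in the thread, the interval case could look like this (a sketch, assuming the same multimethod):

```clojure
;; Interval triggers merge by taking the max of the two intervals,
;; mirroring how the queue-size thresholds sum.
(defmethod merge-triggers :interval [_ trigger1 trigger2]
  (update trigger1 :interval max (:interval trigger2)))
```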