I entered Symbolic Wilds 1 and noticed that after all of the boards were loaded, there was a GetAsync queue warning. I think this is because Persistence uses the budget down to 0 (but never below it), and then some other module (metaadmin, metaportal, Avatar?) calls `GetAsync` while the budget is still zero.

All `GetAsync` calls in Persistence are guarded by a check that the budget is non-zero:
```lua
local function get(dataStore: DataStore, key: string)
	while DataStoreService:GetRequestBudgetForRequestType(Enum.DataStoreRequestType.GetAsync) <= 0 do
		-- TODO maybe make this longer?
		task.wait()
	end

	--[[
		No pcall because we catch the error in `restoreAll`
		Note we are not storing the 2nd, 3rd, 4th return values of getAsync
	--]]
	-- print("getting key", key)
	local result = dataStore:GetAsync(key)
	return result
end
```
So I see two options here:

1. Set a lower limit for the `GetAsync` budget in Persistence (so the guard reads `while budget <= LOWER_LIMIT` instead).
2. Make every other use of `GetAsync` responsible for not requesting when there's no budget (and likewise for other DataStore calls).
I guess doing (2) doesn't mean we can't also do (1). Perhaps every module can use some small lower limit like 5, just in case some additional rogue code is too greedy (see the sketch below). If we were to just do (1), I'd have to set the lower limit much higher, because I'd be babysitting demanding children that query the datastore whenever the heck they want.
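For concreteness, here's a minimal sketch of what that shared guard could look like (the helper name `waitForBudget` and the limit of 5 are illustrative, not existing metauni code):

```lua
local DataStoreService = game:GetService("DataStoreService")

-- Hypothetical "in case" margin; any small value like 5 or 10 would do
local BUDGET_LOWER_LIMIT = 5

-- Block until the budget for the given request type is above the margin
local function waitForBudget(requestType: Enum.DataStoreRequestType)
	while DataStoreService:GetRequestBudgetForRequestType(requestType) <= BUDGET_LOWER_LIMIT do
		task.wait()
	end
end
```

Every module would then call `waitForBudget(Enum.DataStoreRequestType.GetAsync)` immediately before `dataStore:GetAsync(key)`, and similarly for the other request types.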
I'm curious how other developers handle DataStore budgets. The way it's documented, the suggestion seems to be "don't make requests too often, and you'll be fine", so you go and tune the frequency of your DataStore requests according to the rate at which the budget refills, with some generous margins to avoid tempting fate. This is how old Persistence worked.
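For comparison, that timing-based style looks roughly like this (the store name, refill interval, and safety factor are made-up numbers, not the old Persistence values):

```lua
local DataStoreService = game:GetService("DataStoreService")
local dataStore = DataStoreService:GetDataStore("Boards") -- hypothetical store name

-- Assumed pacing: wait long enough between requests that the budget
-- refills faster than we consume it, with a generous safety factor
local SECONDS_PER_REQUEST = 1
local SAFETY_FACTOR = 2

local function getAllTimed(keys: { string })
	local results = {}
	for _, key in ipairs(keys) do
		results[key] = dataStore:GetAsync(key)
		task.wait(SECONDS_PER_REQUEST * SAFETY_FACTOR)
	end
	return results
end
```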
But it seems inevitable to me that when you combine multiple independent systems that all do this, the timing logic fails and you accidentally send too many requests. Even for a single system, if your timing logic is too cautious you retrieve data much more slowly than you could, and if you try to time things as close to the limit as possible without exceeding it, you'll probably exceed it by accident from time to time.
I haven't seen the technique I implemented suggested anywhere. Is there some reason we shouldn't just do it in all metauni code?
From metauni-dev meeting: We should do this budget check for every async call in all code, and just set some small "in case" lower limit, like 5 or 10.
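A sketch of what that could look like across call types, assuming the `waitForBudget` helper from the earlier sketch (wrapper names are hypothetical):

```lua
-- Assumes the waitForBudget helper sketched earlier in this thread.
-- Every module would call these wrappers instead of the raw methods.
local function guardedGet(dataStore: DataStore, key: string)
	waitForBudget(Enum.DataStoreRequestType.GetAsync)
	return dataStore:GetAsync(key)
end

local function guardedSet(dataStore: DataStore, key: string, value: any)
	-- SetAsync draws from the SetIncrementAsync budget
	waitForBudget(Enum.DataStoreRequestType.SetIncrementAsync)
	return dataStore:SetAsync(key, value)
end
```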