At the moment, a request runs for its whole lifetime at the capacity it saw on arrival. In reality, when other requests finish and free up CPU, the remaining requests should speed up.
We need to apply this logic to the current CPU model.
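A minimal Python sketch of the proposed behavior, assuming an egalitarian processor-sharing model (the function `simulate_ps` and the equal-share assumption are illustrative, not the project's actual CPU model). The key difference from the current model is that per-request rates are recomputed on every departure instead of being frozen at arrival:

```python
def simulate_ps(jobs, capacity=1.0):
    """Egalitarian processor-sharing sketch: every active request gets an
    equal share of `capacity`, and shares are recomputed whenever a
    request arrives or finishes.

    `jobs`: list of (arrival_time, work) pairs.
    Returns {job_index: finish_time}.
    """
    jobs = sorted(jobs)
    active = {}                      # job index -> remaining work
    finish = {}
    now, i = 0.0, 0
    while i < len(jobs) or active:
        rate = capacity / len(active) if active else 0.0
        next_arrival = jobs[i][0] if i < len(jobs) else float("inf")
        # Earliest completion, assuming no further arrivals before it.
        next_done, who = min(
            ((now + w / rate, j) for j, w in active.items()),
            default=(float("inf"), None),
        )
        if next_arrival <= next_done:
            # Advance to the arrival, draining work at the old rate,
            # then admit the new request (everyone slows down).
            for j in active:
                active[j] -= rate * (next_arrival - now)
            now = next_arrival
            active[i] = jobs[i][1]
            i += 1
        else:
            # Advance to the completion; the departure frees capacity,
            # so the survivors speed up on the next loop iteration.
            for j in active:
                active[j] -= rate * (next_done - now)
            now = next_done
            del active[who]
            finish[who] = now
    return finish


# Two requests start together, a short one joins at t=0.5:
print(simulate_ps([(0.0, 1.0), (0.0, 1.0), (0.5, 0.2)]))
```

The current frozen-at-arrival model corresponds to computing `rate` once per request at admission and never revisiting it; the sketch above instead re-derives it at every event.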
jchesterpivotal commented on Jun 6:
I ignored this in the original version and relied on the Sakasegawa approximation as a placeholder, putting off a deeper simulation like this one because I felt it would become a serious rabbit hole. There's a queueing-theory result called the "arrival theorem": loosely, an arriving request sees the same time averages as an outside observer would. In that sense, the figure we calculate at arrival time already encodes the future evolution of the queue.
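For reference, the commonly cited form of Sakasegawa's approximation (1977) estimates the mean queue length of a multi-server queue; for an M/M/c system with arrival rate $\lambda$, per-server service rate $\mu$, and $c$ servers it reads:

$$
L_q \approx \frac{\rho^{\sqrt{2(c+1)}}}{1-\rho}, \qquad \rho = \frac{\lambda}{c\mu} < 1,
$$

with the expected wait then following from Little's law, $W_q = L_q / \lambda$.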
But some day I would definitely like to increase model fidelity. I don't feel mathematically mature enough to state confidently whether the arrival theorem applies in this case.
To what extent do we need to handle this? Let's look at it more closely.
At first glance, it seems entirely reasonable to speed up Request 5. But let's look at the situations where we would actually benefit from doing so.
As the picture shows, injecting speed-up logic into the model only pays off in this particular kind of situation, and this situation is highly unlikely.
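To put rough numbers on it (purely hypothetical figures, chosen to bound the effect): suppose the CPU is one core shared equally, Request 5 arrives needing 100 ms of work while four other requests are running, and one of them finishes 100 ms later with no further departures. Then:

$$
t_{\text{frozen}} = \frac{100\text{ ms}}{1/5} = 500\text{ ms},
\qquad
t_{\text{dynamic}} = 100\text{ ms} + \frac{100\text{ ms} - \tfrac{1}{5}\cdot 100\text{ ms}}{1/4} = 420\text{ ms},
$$

so freezing the rate at arrival overstates this request's latency by 80 ms, about 16%, even in a case engineered to favor the speed-up.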