Proof of concept for MaxDispatches implementation #2598
Conversation
I opened PR #2603 to just add the MaxDispatches property.
How do you plan to handle batch requests?
With the current code (and a few fixes), I believe you would accept the batch request and stop reading when the dispatch count is >= max dispatches. So with MaxDispatches = 10, I could dispatch a batch request with 20 elements. Is this a desirable/expected behavior?
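The overshoot described above can be illustrated with a minimal sketch (hypothetical names and values, not the actual ConnectionI code): the dispatch-count check happens between messages, so a single batch message can push the count well past the limit.

```csharp
using System;

// Simulated read loop: MaxDispatches is checked only *before* reading
// the next message, so one batch message can overshoot the limit.
int maxDispatches = 10;
int dispatchCount = 0;

// Each entry is one incoming message; the value is the number of
// requests it carries (a batch message carries several at once).
int[] incomingMessages = { 1, 1, 20, 1 };

foreach (int requestsInMessage in incomingMessages)
{
    if (dispatchCount >= maxDispatches)
    {
        Console.WriteLine("stopped reading before this message");
        break;
    }
    dispatchCount += requestsInMessage; // a batch increments by its size
}

Console.WriteLine($"dispatchCount = {dispatchCount}"); // 22, above MaxDispatches = 10
```

With MaxDispatches = 10, the 20-element batch is accepted whole before the connection stops reading, matching the behavior questioned above.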
@@ -115,4 +119,19 @@ internal static async Task allTests(global::Test.TestHelper helper)
    }
    output.WriteLine("ok");
}

private static async Task testMaxDispatches(Test.TestIntfPrx p, TextWriter output)
We should naturally test 'max dispatches' in its own test fixture, not in the idle timeout test fixture.
csharp/src/Ice/ConnectionI.cs
Outdated
@@ -2516,6 +2532,12 @@ private void sendResponse(OutgoingResponse response, bool isTwoWay, byte compres
    sendMessage(new OutgoingMessage(response.outputStream, compress > 0, adopt: true));
}

if (_maxDispatches > 0 && _state != StateHolding && _dispatchCount == _maxDispatches) |
I am not following here. _maxDispatches is "active" only when state != StateHolding? Should it be only when state == StateActive?
Did you expect _state == StateActive instead? It's pretty much the same.
csharp/src/Ice/ConnectionI.cs
Outdated
@@ -969,6 +969,12 @@ public override void message(ThreadPoolCurrent current)
    }
}

if (_maxDispatches > 0 && _dispatchCount == _maxDispatches) |
Should this be _dispatchCount >= _maxDispatches, since _dispatchCount can be incremented by more than 1 when we receive a batch request?
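The concern can be shown with a toy example (the values are made up): when a batch increments the counter by more than 1, an equality test can jump over the threshold and never fire, while >= still detects that the limit was reached.

```csharp
using System;

int maxDispatches = 10;
int dispatchCount = 9;

// A batch of 3 requests arrives: the count jumps from 9 straight to 12.
dispatchCount += 3;

// The equality test misses the threshold entirely...
Console.WriteLine(dispatchCount == maxDispatches); // False

// ...while >= still reports that the limit was reached.
Console.WriteLine(dispatchCount >= maxDispatches); // True
```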
This doesn't solve however the handling of thread starvation for the idle timeout implementation when still using the thread pool configuration...
In my view, once we implement MaxDispatches, we should not worry about the idle check aborting the connection when the thread pool is starved. If you want to apply back pressure, use MaxDispatches, not thread pool starvation.
The relationship between MaxDispatches and the thread pool max size isn't obvious with this new property.
I don't think there is any relationship between them.
Say you use AMD, and you have a single thread in your thread pool. You could still easily write a test with 100 outstanding dispatches.
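This scenario is easy to reproduce outside Ice. A rough sketch (hypothetical code, not Ice's dispatch machinery): with asynchronous dispatch, a single loop — think of it as a single thread pool thread — can start 100 dispatches that all await an incomplete task, so the number of outstanding dispatches is unrelated to the thread pool size.

```csharp
using System;
using System.Threading.Tasks;

// Sketch of AMD-style dispatch: each dispatch awaits an external
// completion, so it does not hold on to the dispatching thread.
var amdCallbackCompleted = new TaskCompletionSource();
int started = 0;

async Task DispatchAsync()
{
    started++;                        // runs synchronously, up to the await
    await amdCallbackCompleted.Task;  // the dispatch stays outstanding here
}

// A single loop (one "thread pool thread") starts 100 dispatches.
var dispatches = new Task[100];
for (int i = 0; i < 100; i++)
{
    dispatches[i] = DispatchAsync();
}

// All 100 dispatches are outstanding, yet only one thread was used so far.
Console.WriteLine($"outstanding = {started}");

amdCallbackCompleted.SetResult();
await Task.WhenAll(dispatches);
Console.WriteLine("all dispatches completed");
```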
It's for sure the simplest to implement and I don't think it's worth implementing something complicated for a corner-case scenario like this one.
True, but the goal is to ensure that the idle timeout works, and for the idle timeout to work you need a thread pool large enough to accommodate the case where you only have sync dispatches. I wouldn't keep the default single-threaded thread pool if …
That's not how I see it. In a healthy application, you can occasionally hit the thread pool limit for a short amount of time (like a few seconds), and this won't trigger an idle check abort since the default (and typical) idle timeout is 60 seconds.

As of Ice 3.8, an application should not use thread-pool exhaustion to apply back pressure (because the correct tool for that is MaxDispatches). And as a result, the idle check logic doesn't need to account for a legitimate use of thread-pool exhaustion, since there isn't one anymore.

Say you upgrade from 3.7 to 3.8 and notice your upgraded application (that doesn't worry about flow control/back pressure) sees idle-check aborts. It's probably a sign you have long, unplanned thread pool exhaustions and you want to increase your thread pool size.
I believe we need to handle this limit properly. Say you set MaxDispatches=10 as a "security" measure, to make sure your client doesn't bombard you with concurrent requests. But then a client can easily bypass this limit by crafting a batch with 100 oneway requests? This doesn't make sense.
On the wire, 100 oneway requests aren't much heavier than a batch of 100 oneway requests. Batch oneway requests are also serialized for dispatching. The main issue is for asynchronous oneway dispatches, since the lifetime of a dispatch is bounded by the completion of the AMD callback. Or do you see any other scenario where it's a problem?
The whole point of batching is to make this 100-batch message smaller/more efficient than 100 individual oneway requests.
That's actually not necessarily true. Take our latest Swift code, where we launch a Task for each incoming request. Unless we put some limit, all the requests in a batch can be dispatched concurrently.
Yes, the issue is with AMD dispatches. In Swift (on main) this is the only kind of dispatch. I think we can add some batch-specific code in "dispatchAll" to make sure we don't dispatch more than _maxDispatches concurrently when _dispatchCount > _maxDispatches.
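One way to sketch that batch-specific code (hypothetical helper, not the actual "dispatchAll" implementation): gate each dispatch on a SemaphoreSlim sized to the limit, so no more than _maxDispatches requests from one batch run concurrently.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical sketch: cap how many requests from one batch are
// dispatched concurrently, using a semaphore sized to the limit.
int maxDispatches = 4;
var gate = new SemaphoreSlim(maxDispatches, maxDispatches);
int concurrent = 0;
int peak = 0;
object mutex = new object();

async Task DispatchOneAsync()
{
    lock (mutex)
    {
        concurrent++;
        peak = Math.Max(peak, concurrent); // record peak concurrency
    }
    await Task.Delay(10); // simulated asynchronous (AMD-style) dispatch
    lock (mutex)
    {
        concurrent--;
    }
    gate.Release(); // let the next batched request start
}

// "dispatchAll" over a batch of 20 requests.
var tasks = new Task[20];
for (int i = 0; i < 20; i++)
{
    await gate.WaitAsync(); // pauses once maxDispatches are in flight
    tasks[i] = DispatchOneAsync();
}
await Task.WhenAll(tasks);

Console.WriteLine($"dispatched 20, peak within limit: {peak <= maxDispatches}");
```

Note that this caps concurrency but does not serialize the batch, so it would not by itself preserve the ordering guarantee discussed below.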
I don't think size on the wire was the only reason for supporting batch messages. A "send" socket call was considered an expensive call at the time Ice was written. Batching oneway messages made it possible to issue a single "send" call instead of many. See https://doc.zeroc.com/ice/3.7/client-side-features/batched-invocations
Serializing the dispatch of oneways from a batch is a documented guarantee (on the page mentioned above): "On the server side, batched messages are dispatched by a single thread, in the order in which they were written into the batch. This means that messages from a single batch cannot appear to be reordered in the server." What was the reason for not providing this guarantee in Swift?
I'll look into it.
I don't see how we can implement this and keep the guarantee that oneway requests from a batch are dispatched with the same thread pool thread... Is your idea to hold on to the thread with a sync semaphore wait? To me, the ordering guarantee is more important than using the same thread, so one option could be to await the … I don't think the comment about the continuation running on the .NET thread pool (from ice/csharp/src/Ice/Internal/ThreadPool.cs, line 796 in 1ae9e22) …
Changing this will change the AMD dispatch of oneways from a batch, however... since the next oneway won't be dispatched before the AMD callback is completed.
No, I don't think we should block the dispatch thread to implement MaxDispatches with batches.
I think you're reading too much into this guarantee, and how important it is to keep it. We documented an implementation detail and now we're stuck with it? How would you provide this guarantee in Swift? As a reminder, in Swift, each dispatch corresponds to the creation of a Task executed by the Swift global concurrent executor. And there is no limit / serialization other than the future MaxDispatches. So N oneway requests in a batch are dispatched exactly like N oneway requests in separate requests. |
I don't think we should change this behavior. And yes, we should fix the comment (in ConnectionI.cs) since it's inaccurate.
To me, we documented a guarantee that we put forward in the past 🙂. Batching is useful to ensure multiple client invocations are executed in the same order on the server. Without it, you need a single-threaded server or serialized twoway invocations. Do we have clients relying on this? Probably not... I doubt many clients actually use batching. Glacier2/IceStorm do use it however (to reduce the number of oneway messages to be forwarded, by batching them).
That's an implementation detail 🙂.
Actually, I don't think my previous comment about the synchronization context is correct, since we call …
I still don't see how to limit the number of dispatches with batch requests. The servant needs to unmarshal the input parameters of a oneway message before we can dispatch the next one. Waiting for each batched request to be totally dispatched (i.e.: once the AMD callback is called) sounds like the simplest (only?) solution to me. Wouldn't it also provide the ordering guarantee for Swift?
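A sketch of that approach (hypothetical code, not the actual Ice batch loop): the loop awaits each dispatch before starting the next one, which both bounds the number of concurrent dispatches from a batch and preserves ordering.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical: await each AMD dispatch before dispatching the next
// batched request, so batched oneways run strictly in batch order.
var order = new List<int>();

async Task DispatchAsync(int requestId)
{
    await Task.Yield();   // simulated asynchronous AMD dispatch
    order.Add(requestId); // record completion order
}

// A batch of 5 oneway requests, dispatched one at a time.
for (int i = 0; i < 5; i++)
{
    await DispatchAsync(i); // next request waits for this one's AMD callback
}

Console.WriteLine(string.Join(",", order)); // 0,1,2,3,4
```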
After discussing this issue with Joe, we agree with you:
This PR shows how `MaxDispatches` can be implemented in C#. The implementation for other language mappings would pretty much be the same. Relying on `MaxDispatches` for back pressure also requires implementing `MaxConnections`.

The relationship between `MaxDispatches` and the thread pool max size isn't obvious with this new property. The optimal thread pool maximum size would need to be something like `MaxDispatches * MaxConnections + CPU_Count_Thread_ForIO`.

With IceRPC, we don't have dedicated thread pools, so we basically assume that the .NET thread pool has an infinite size.

One possibility could be to make `MaxDispatches`/`MaxConnections` incompatible with the thread pool properties: you can't set both. If `MaxDispatches`/`MaxConnections` are set, we configure the thread pool size with a large enough number of threads to allow both IO and dispatches to work (with C# we could even use the .NET thread pool).

This doesn't solve however the handling of thread starvation for the idle timeout implementation when still using the thread pool configuration...