
Introduce reactive support #289

Open
hutchig opened this issue Jun 19, 2018 · 6 comments
hutchig commented Jun 19, 2018

Fault Tolerance @Asynchronous has a means of queueing, and hooks that control managed execution.

It would be a natural follow-on to add a feature where we submit work in response to
org.reactivestreams.Subscription.request(long n)
to support https://github.com/eclipse/microprofile-reactive.

It could perhaps have a spec-defined way of handling too much backpressure.
This might usefully serve as an 'adapter' to make an upstream function 'more' reactive
using FT semantics.

This issue can be a place to discuss this support.
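
To make that concrete, here is a rough sketch (illustrative only, not an existing FT API or spec text) of the kind of plumbing an implementation might manage internally: submitted work queues up, much as it would behind an @Asynchronous @Bulkhead, and is only handed to an executor once the downstream has signalled demand through Subscription.request(n). The class and method names are made up.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.atomic.AtomicLong;

import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

// Illustrative only: queues submitted work and releases it to the Subscriber
// only when demand has been signalled via request(n).
public class DemandDrivenSubscription<T> implements Subscription {

    private final Queue<T> pending = new ConcurrentLinkedQueue<>();
    private final AtomicLong demand = new AtomicLong();
    private final ExecutorService executor;
    private final Subscriber<? super T> subscriber;

    public DemandDrivenSubscription(ExecutorService executor, Subscriber<? super T> subscriber) {
        this.executor = executor;
        this.subscriber = subscriber;
    }

    /** Called from the non-reactive side; the item waits until there is demand. */
    public void submit(T item) {
        pending.add(item);
        drain();
    }

    @Override
    public void request(long n) {
        demand.addAndGet(n);   // downstream signals capacity for n more items
        drain();
    }

    @Override
    public void cancel() {
        // Stop delivering queued work once the downstream cancels.
        pending.clear();
        demand.set(0);
    }

    // Serialised so that items are only delivered while demand remains.
    private synchronized void drain() {
        T item;
        while (demand.get() > 0 && (item = pending.poll()) != null) {
            demand.decrementAndGet();
            final T next = item;
            executor.submit(() -> subscriber.onNext(next));
        }
    }
}
```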


hutchig commented Jun 19, 2018

@jroper ^^^


hutchig commented Jun 19, 2018

pre-reqs #110 :-)


hutchig commented Jun 19, 2018

When one has a microservice that is not fully reactive and one wishes to plumb it upstream into
a system that is reactive, there is a possibility that one will not only have to handle the
backpressure but also fail in a controlled manner when there is 'too much' backpressure.
This failure is a form of 'fault tolerance' (use a fallback etc.), which is another reason why it
fits here.
As well as this, there is a good fit where a reactive upstream system's rate of calls can be controlled via synthetic 'requests', in order to keep a @Bulkheaded business method appropriately busy but not overwhelmed based on the queue length, with backpressure (requests for new data) being automatically managed by the MP FT implementation.

#1 traditional ----> mpFT(reactive)
#2 reactive ----> mpFT(traditional)

hutchig changed the title from "Introduce reactive support into @Asynchronous" to "Introduce reactive support" on Jun 20, 2018

hutchig commented Jun 20, 2018

OK, here is a strawman to elicit comment.

We are trying to help solve three problems that might exist in JEE/MicroProfile systems over the next few years:

1 - Improve the ease of plumbing between reactive components and non-reactive components.
2 - Have defined semantics for controlling and consuming backpressure at the 'edges' between reactive components (or graphs of components) and traditional systems.
3 - Pure reactive does end-to-end backpressure; a hybrid system model must define semantics for when there is too much backpressure 'at the edge'. These failure semantics are naturally handled using MP Fault Tolerance spec/annotations/semantics, similar to when @Bulkheads are full (or perhaps even rate limiters like https://github.com/resilience4j/resilience4j/tree/master/resilience4j-ratelimiter).
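
For point 3, today's annotations already show what the failure side could look like. A minimal sketch using the existing FT API (annotation values picked arbitrarily), where bulkhead overflow fails fast into a fallback, which is the same shape being proposed for 'too much backpressure' at the edge:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

import org.eclipse.microprofile.faulttolerance.Asynchronous;
import org.eclipse.microprofile.faulttolerance.Bulkhead;
import org.eclipse.microprofile.faulttolerance.Fallback;

public class ReactiveEdgeBean {

    // When the waiting queue is full, the call fails fast (BulkheadException)
    // and the fallback runs; the suggestion above is that "too much
    // backpressure" at a reactive edge could be reported the same way.
    @Asynchronous
    @Bulkhead(value = 5, waitingTaskQueue = 10)
    @Fallback(fallbackMethod = "overloaded")
    public CompletionStage<String> handle(String item) {
        return CompletableFuture.completedFuture("processed " + item);
    }

    public CompletionStage<String> overloaded(String item) {
        return CompletableFuture.completedFuture("dropped " + item);
    }
}
```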


hutchig commented Jun 27, 2018

Use cases:
#UC1 We have a non-reactive data source that we want to connect to a reactive system that
should not be overloaded with calls, i.e. the data receiver uses Reactive Streams semantics to indicate what capacity it has for more work, but the feeding component is not a reactive stream and thus does not 'speak' reactive backpressure.

A call to the method that is being mapped to onNext (it could either be "onNext" if the bean is a Subscriber, or a method annotated @OnNext) will be allowed to proceed if the receiving system has requested data (using Subscription.request(n)) one or more times more than the method has been called. If more data has not been requested, the call will queue with similar semantics to an @Asynchronous @Bulkhead, but will not 'enter' the bulkheaded method (onNext/@OnNext) until enough requests for data have been received.

Any call to the onNext method, matched via its signature or a class or method annotation, will cause FT to generate a synthetic equivalent of a Reactive Streams Publisher instance if one does not already exist, and to 'subscribe' to it using the FT proxy of the underlying Subscriber bean.

A call to the method will be passed on if there is 'request' capacity, equivalent to there being capacity in a @Bulkhead. A call to the method will be queued if there are not enough requests for data. If the queue becomes full, the caller receives similar results to a full queue on an @Asynchronous @Bulkhead.

If the Subscriber calls cancel, the semantics are similar to a Circuit Breaker that
is opened.

[TO BE DISCUSSED]
The 'build' and 'start' of the stream occur when the annotated onNext() method is called, and so are Publisher-initiated. How do we differentiate different 'instances' of the Publisher, and thus 'different' subscriptions? We can use the 'scope' of the bean of the business method, with one bean instance (in a particular scope) causing a single subscription and thus a single stream connection. Any callers to this same bean instance consume a 'request' of the initial synthetic subscription. This can be reset by the bean using Subscription.cancel(), which will cause a new synthetic subscription on the next call to the annotated business method. The equivalent of the Publisher.subscribe() call occurs with the equivalent behaviour taken from https://github.com/eclipse/microprofile-reactive/tree/master/srteams/messaging for 'managed' streams (that are plumbed together by the container).
FT will call Subscriber.onSubscribe(subscription) on the user's downstream @OnSubscribe/onSubscribe method, passing the FT-generated Subscription object.
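
To illustrate UC1, here is a sketch of what such a bean might look like. @OnNext and @OnSubscribe are the hypothetical annotations floated above (not part of any released FT API, so they are declared inline here purely so the sketch compiles), and the bean and method names are made up:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import javax.enterprise.context.ApplicationScoped;

import org.reactivestreams.Subscription;

// Hypothetical annotations from the proposal above; they do not exist in any
// released MicroProfile Fault Tolerance API.
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD) @interface OnNext { }
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD) @interface OnSubscribe { }

@ApplicationScoped
public class PriceFeedBean {

    private volatile Subscription subscription;

    // FT would call this with the synthetic Subscription it generates when the
    // stream is built, mirroring Subscriber.onSubscribe(subscription).
    @OnSubscribe
    public void onSubscribe(Subscription s) {
        this.subscription = s;
    }

    // A caller's invocation only 'enters' this method once the downstream has
    // request()ed more items than have been delivered; otherwise it queues,
    // with semantics like an @Asynchronous @Bulkhead waiting queue.
    @OnNext
    public void onNext(String price) {
        // hand the item to the reactive downstream
    }

    // Subscription.cancel() is likened above to an opened circuit breaker; the
    // next call to the annotated method would trigger a new synthetic
    // subscription.
    public void reset() {
        Subscription s = subscription;
        if (s != null) {
            s.cancel();
        }
    }
}
```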


hutchig commented Jun 27, 2018

#UC2 is plugging a reactive upstream component into a non-reactive downstream component, with FT semantics (e.g. bulkhead size/queue length) used to control the 'request' of data flow.
request() calls for data will be made up to the size of the bulkhead.

It may improve system performance to have a 'target queue length' to keep some requests 'in hand', ready to go into the @Bulkhead (without having to wait for a Subscription.request() -> Subscriber.onNext() call round trip each time a method call exits the @Bulkhead), but this should be controlled via an annotation parameter.
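
A rough sketch (again illustrative, not spec text) of the demand management an implementation might perform for UC2: the initial request() covers the bulkhead plus a 'target queue length' of items kept in hand, and one more item is requested each time a business-method invocation completes. The class name, field names, and numbers are all assumptions:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

public class BulkheadDrivenSubscriber implements Subscriber<String> {

    private final int bulkheadSize = 10;        // e.g. @Bulkhead value
    private final int targetQueueLength = 5;    // hypothetical annotation parameter
    private final ExecutorService bulkhead = Executors.newFixedThreadPool(bulkheadSize);
    private volatile Subscription subscription;

    @Override
    public void onSubscribe(Subscription s) {
        this.subscription = s;
        // Initial demand covers the bulkhead plus the items kept 'in hand'.
        s.request(bulkheadSize + targetQueueLength);
    }

    @Override
    public void onNext(String item) {
        bulkhead.submit(() -> {
            try {
                businessMethod(item);            // the traditional, non-reactive method
            } finally {
                // Top demand back up as each invocation completes, avoiding a
                // request()/onNext() round trip per freed bulkhead slot.
                subscription.request(1);
            }
        });
    }

    @Override
    public void onError(Throwable t) {
        // could be routed to FT handling, e.g. a fallback
    }

    @Override
    public void onComplete() {
        bulkhead.shutdown();
    }

    private void businessMethod(String item) {
        // plain business logic
    }
}
```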

@Emily-Jiang Emily-Jiang added this to the Future milestone Dec 20, 2018