Using a client inside a web handler #1164
Yes, some of the issues you mentioned are the reason why.
In other words, my personal preference for resolving these issues is to wait for async Rust to progress and possibly utilize some cutting-edge features to improve the API, namely features like … As for making the client more modular, I'm all for it. Making the connection pool generic is a good idea if it contributes to solving these issues.
BTW, we have this utility type called FakeSend to help bridge the !Send request body over to the client's Send bound.
#1168 removed the lifetime param from the response body type. For example:

```rust
use std::{
    io,
    pin::Pin,
    sync::Arc,
    task::{Context, Poll},
};

use futures::stream::Stream;
use xitca_client::Client;
use xitca_web::{
    App,
    body::{RequestBody, ResponseBody},
    error::{Error, ErrorStatus},
    handler::{body::Body, handler_service, state::StateRef},
    http::{WebRequest, WebResponse},
};

#[tokio::main]
async fn main() -> io::Result<()> {
    App::new()
        .at("/*", handler_service(handler))
        .with_state(Arc::new(Client::new()))
        .serve()
        .bind("0.0.0.0:8080")?
        .run()
        .await
}

async fn handler(
    StateRef(cli): StateRef<'_, Arc<Client>>,
    mut req: WebRequest<()>,
    Body(body): Body<RequestBody>,
) -> Result<WebResponse, Error> {
    // reject requests that do not carry the proxy-connection header
    if req.headers_mut().remove("proxy-connection").is_none() {
        return Err(ErrorStatus::bad_request().into());
    }

    // forward the request, swapping the !Send web body for the FakeSend wrapper
    let res = cli
        .request(req.map(|_| FakeSend::new(body)))
        .send()
        .await
        .map_err(|_| ErrorStatus::internal())?;

    // box the client response body so it can be returned as the web response body
    Ok(res.into_inner().map(ResponseBody::box_stream))
}

// stream wrapper asserting Send on a body that never actually leaves its thread
struct FakeSend<B>(xitca_unsafe_collection::fake::FakeSend<B>);

impl<B> FakeSend<B> {
    fn new(body: B) -> Self {
        Self(xitca_unsafe_collection::fake::FakeSend::new(body))
    }
}

impl<B> Stream for FakeSend<B>
where
    B: Stream + Unpin,
{
    type Item = B::Item;

    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        Pin::new(&mut *self.get_mut().0).poll_next(cx)
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        (0, Some(0))
    }
}
```
Thanks a lot for your insight and for removing the lifetime. I updated my draft PR related to this change (I already used FakeSend before, but your way of doing it seems simpler, so I will look into it), and everything works nicely. I will certainly open other PRs to allow better extensibility of some points in the client, though I'm not sure if this is wanted. I plan to do the following changes:
Thanks for your PRs; all contributions are welcome. I'll look into and review them later, as it's pretty busy here during the holiday. Hope you don't mind some delay.
Don't worry, take your time. It's open source, and it's normal to have other priorities. Thanks anyway for your quick response time 🫶
I can close this; all PRs "needed" to build a reverse proxy have been made, and we can discuss each subject on its own PR. I will certainly open more depending on the bugs / features I encounter. Thanks for your time.
I was looking into creating a reverse proxy with only the xitca libs. I did manage to get a "working" thing, but I encountered several problems during the process, and I'm wondering if some changes to make this work properly would be accepted / wanted?
Streaming the request body
The first problem comes from streaming the request body from the web request into the client request. The web request is bound to the local thread, which makes it impossible for the body to be Send, as the client requires. I did manage to relax this constraint, but that breaks the middleware (since they also require Send). There may be a way to relax the constraint on the middleware as well, but that would need some sort of factory for them (like the one that exists on the web side).
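To make the mismatch concrete, here is a minimal sketch with made-up types (nothing from xitca): a body that holds a thread-local handle simply cannot be handed to an API that requires Send.

```rust
use std::rc::Rc;

// hypothetical stand-in for a request body tied to its thread,
// e.g. because it holds an Rc internally
struct LocalBody {
    _chunk: Rc<Vec<u8>>,
}

// hypothetical stand-in for the client-side send path, which requires Send
fn forward_to_client<B: Send>(_body: B) {}

fn main() {
    let body = LocalBody { _chunk: Rc::new(vec![1, 2, 3]) };
    // does not compile: `Rc<Vec<u8>>` cannot be sent between threads safely
    // forward_to_client(body);
    drop(body);
}
```

The FakeSend wrapper from the comment above bridges exactly this gap by asserting Send on a body that never actually leaves its thread.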
Streaming the response body
The second main problem is that the response is linked to the client (due to the shared pool), so streaming it back directly is not possible (we don't know whether the client will still be alive while the response is being streamed back). The current workaround was to turn the body into an owned version, but I believe that makes it impossible for the connection to go back to the pool once it is owned.
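To illustrate the borrow issue with a minimal model (made-up types, not the xitca API): when the response body holds a reference back into the client's pool, it cannot outlive the client, which is roughly the shape of the problem before #1168 removed the lifetime parameter.

```rust
// minimal model of a response body that borrows the client's pool
struct Pool;

struct Response<'c> {
    // the body keeps a reference back into the client's pool
    conn: &'c Pool,
}

struct Client {
    pool: Pool,
}

impl Client {
    fn request(&self) -> Response<'_> {
        Response { conn: &self.pool }
    }
}

fn main() {
    let client = Client { pool: Pool };
    let res = client.request();
    // the response cannot be kept (or streamed back) past the client's lifetime:
    // drop(client); // error: cannot move out of `client` because it is borrowed
    let _body = res.conn;
}
```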
Also, there would be a client per thread, which means each would have a different pool of connections, which is not "ideal" (but not that much of a problem).
I'm not sure how this can be resolved, however. I'm wondering if it's possible to split the pool from the client (maybe even have no connection pool by default, and add an API on the builder to choose whether the pool is shared across threads or not). Or maybe there is a possibility to have a shared pool, and when we acquire a connection we move it to the local thread, remove it from the pool, and do the inverse operation once it's no longer needed (I need to look into the low-level API to be sure this is doable).
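For what it's worth, here is a rough sketch of what a pool abstraction could look like if it were split out of the client; all of the names below are hypothetical and nothing like this exists in xitca_client today.

```rust
// hypothetical pool abstraction; the trait, types and builder calls below
// are illustrative only and not part of xitca_client
trait ConnectionPool: Send + Sync {
    type Conn: Send;

    // try to reuse a pooled connection for the given authority
    async fn acquire(&self, authority: &str) -> Option<Self::Conn>;

    // hand a finished connection back so another request can reuse it
    async fn release(&self, authority: String, conn: Self::Conn);
}

// "no pooling by default": every request opens a fresh connection
struct NoPool;

impl ConnectionPool for NoPool {
    type Conn = std::net::TcpStream;

    async fn acquire(&self, _authority: &str) -> Option<Self::Conn> {
        None
    }

    async fn release(&self, _authority: String, _conn: Self::Conn) {
        // connection is dropped immediately, nothing is kept around
    }
}

// a builder could then choose the strategy, e.g. (hypothetical API):
// Client::builder().pool(NoPool).finish()
// Client::builder().pool(SharedPool::default()).finish()
```

Whether the pool is per thread or shared across threads would then become a property of the pool implementation rather than of the client itself.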
You can close this if it's not wanted; I just wanted to share some problems I encountered while building a reverse proxy.