
Stop relying on isLast #2195

Merged: 1 commit merged into develop on Aug 26, 2024

Conversation

StylianosGakis (Member)

After feedback by Martin in apollographql/apollo-kotlin#6129, replace the implementation that relied on isLast with a much more robust implementation that covers what the previous implementation did, plus more.
@StylianosGakis StylianosGakis requested a review from a team as a code owner August 26, 2024 11:09
.map { either { parseResponse(it) } }
val errorResponseValue = errorResponse
if (!hasEmitted && errorResponseValue != null) {
// Flow has terminated without a valid response, emit the error one if it exists.
Contributor

But we're not logging the first errorResponse this way; only the last one will be emitted, right? I guess we don't really need it if it's a cache error?

Member Author

On an app-wide scale, the logs will always happen due to this interceptor:

internal class LoggingInterceptor : ApolloInterceptor { ... }

That interceptor is added at the top of the list of interceptors, notably even before the normalizedCache interceptors are added. That means this one sees everything, and will in turn log everything as it needs to.

Regardless, yes, this one will "save" the error in a var until we know whether we need to emit it to the caller or not. If the caller receives a proper response afterwards, the error that happened wasn't important; we got the data we needed anyway.
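The pattern discussed here can be sketched in plain Kotlin. This is a simplified stand-in for illustration (invented toy types, lists instead of a Flow), not the actual PR code or the Apollo API: error responses are held aside in a variable and only surfaced if the stream terminates without any valid response, which is also why only the last error survives, as noted in the review comment.

```kotlin
// Toy stand-in types for illustration; not the real ApolloResponse API.
sealed interface ToyResponse {
    data class Data(val value: String) : ToyResponse
    data class Error(val message: String) : ToyResponse
}

// Hold errors back; emit the last one only if no valid response ever arrived.
fun filterResponses(upstream: List<ToyResponse>): List<ToyResponse> {
    val out = mutableListOf<ToyResponse>()
    var errorResponse: ToyResponse.Error? = null
    var hasEmitted = false
    for (response in upstream) {
        when (response) {
            is ToyResponse.Data -> {
                hasEmitted = true
                out += response
            }
            // Save the error in a var instead of emitting it right away.
            is ToyResponse.Error -> errorResponse = response
        }
    }
    // Stream terminated without a valid response: emit the saved error, if any.
    val errorResponseValue = errorResponse
    if (!hasEmitted && errorResponseValue != null) {
        out += errorResponseValue
    }
    return out
}
```

Under this model, a cache error followed by a successful network response yields only the network response, while a stream of only errors yields just the last error.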

Btw I would love to link you to the discussion inside https://slack-chats.kotlinlang.org/c/apollo-kotlin but it seems like it's having some trouble syncing the latest discussions. I will just paste it here, though I understand the readability of this is not optimal. If they fix it I will link you to it properly.

Slack conversation follows:


Stylianos Gakis
Thursday at 19:18
I got an interceptor like this:

internal class SomeLoggingInterceptor : ApolloInterceptor {
    override fun <D : Operation.Data> intercept(request: ApolloRequest<D>, chain: ApolloInterceptorChain): Flow<ApolloResponse<D>> {
        return chain.proceed(request).onEach { response ->
            logResponse(response)
        }
    }
}

Which I've attached to my entire ApolloClient.
Then in a query with fetchPolicy(FetchPolicy.CacheAndNetwork), judging from the internals of the CacheAndNetworkInterceptor, I see two emits here, one for the cache emit, and one for the network emit.
I expect my logResponse(response) to be hit twice, since I am doing an onEach on the incoming flow.
However, it looks like I am only receiving the network hit, since the cache fetch is failing.
Thing is, at the place where I do the query itself I get both emissions perfectly fine: I get one response with the CacheMiss exception as expected, and then the data response. But my interceptor does not get to see the cache miss emission at all.
What am I missing here? I can't figure out at all what exactly is going on. Any ideas of something obvious I am missing, or what I can do to debug this?


Benoit 'BoD' Lubek
Thursday at 19:25
Did you set up your interceptor after the normalized cache? If yes: since the FetchPolicyInterceptor doesn't call chain.proceed when it's getting the value from the cache, your interceptor is not called in that case.


Stylianos Gakis
Thursday at 19:29
You know what, I did in fact set up my interceptor after I did .normalizedCache(...). So I suppose I want my normalized cache to go at the end of whatever interceptor chain I've got; never thought of that, to be honest!
So how does the value actually end up being received at the caller, which should be at the end of this entire chain? Or am I misunderstanding how this chain is set up?
I will try to move my interceptor above now btw to see how that works.
19:30
Yup, switching them around I properly get both emissions there, and they even have the right isLast: the cache says isLast = false and the network request then returns isLast = true, as I was expecting in the first place! Quite interesting, thanks for this!


Benoit 'BoD' Lubek
Thursday at 19:36
So, at the end of the chain there is always the network call, which is added after all the other interceptors.
Since each interceptor can call the next one down the chain or not, the order in which you add them can impact whether they are called or not.
19:38
(Notice we already have a special one, retryOnErrorInterceptor, which should always be in front of the network.)


Stylianos Gakis
Thursday at 19:41
Yup, debugged inside the proceed and I see the chain now:

0 = {LoggingInterceptor@44117} com.hedvig.android.app.apollo.LoggingInterceptor@4ca57ea
1 = {LogoutOnUnauthenticatedInterceptor@44118} com.hedvig.android.app.apollo.LogoutOnUnauthenticatedInterceptor@9eceadb
2 = {WatcherInterceptor@44119} com.apollographql.apollo.cache.normalized.internal.WatcherInterceptor@9d0e978
3 = {FetchPolicyRouterInterceptor@44120} com.apollographql.apollo.cache.normalized.FetchPolicyRouterInterceptor@fe62851
4 = {ApolloCacheInterceptor@44121} com.apollographql.apollo.cache.normalized.internal.ApolloCacheInterceptor@4ee08b6
5 = {DefaultRetryOnErrorInterceptorImpl@44122} com.apollographql.apollo.interceptor.DefaultRetryOnErrorInterceptorImpl@2c559b7
6 = {NetworkInterceptor@44123} com.apollographql.apollo.interceptor.NetworkInterceptor@9e72524

So this may be my Flow knowledge failing me, but how does the result end up at the call site then? I imagine this tree as a long flow with many chained operations, where at the end of it is the caller, which does .collect and gets the emissions. How can one interceptor in this chain not receive an emission, while the caller does in fact receive that emission with the exception in it? I feel like I must have a faulty understanding of this somehow.


Benoit 'BoD' Lubek
Thursday at 19:50
It's not exactly that, but in essence:
you call toFlow()
toFlow calls the first interceptor
1st interceptor calls the 2nd interceptor
2nd interceptor calls the network which returns a Flow
2nd interceptor returns the Flow
1st interceptor returns the Flow
toFlow returns the Flow
you have a Flow, you call collect() on it
And at each step the interceptor can call e.g. onEach and add its own stuff. Also, at each step an interceptor may not call the next one (returning its own Flow instead).


Stylianos Gakis
Thursday at 19:52
Aha wait: any interceptor can return a flow, but it can abruptly "end the chain" for that one emission by not letting things proceed further down first; instead it emits something of its own and can just end the chain there. If it wants, as a follow-up it can do an emitAll(...proceed) to "continue" the chain for the subsequent emissions.
So in a way, the higher up you are in the chain, the more power you have to allow or not allow things to fall down to the rest of the interceptors; is that an okay understanding?


Benoit 'BoD' Lubek
:apollo: Thursday at 19:52
exactly ✅
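The mechanics agreed on above can be modeled in a few lines of plain Kotlin. Everything here is a toy stand-in for illustration (lists instead of Flow, invented names like ToyInterceptor and execute), not the real Apollo Kotlin API: each interceptor either forwards down the chain and decorates what comes back, or prepends its own emissions before continuing, so moving a logging interceptor above or below a cache-style interceptor changes what it sees.

```kotlin
// Toy stand-ins for illustration only; not the real Apollo Kotlin API.
// An interceptor receives a `proceed` continuation for the rest of the chain
// and returns the emissions it wants to pass upward.
fun interface ToyInterceptor {
    fun intercept(proceed: () -> List<String>): List<String>
}

// Walks the chain top-down; the network call sits at the very end.
fun execute(interceptors: List<ToyInterceptor>, network: () -> List<String>): List<String> {
    fun proceed(i: Int): List<String> =
        if (i == interceptors.size) network()
        else interceptors[i].intercept { proceed(i + 1) }
    return proceed(0)
}

// Pass-through interceptor that records each emission, like onEach { log(it) }.
fun loggingInterceptor(log: MutableList<String>) = ToyInterceptor { proceed ->
    proceed().onEach { log += "logged: $it" }
}

// Emits the cache value itself, then continues down the chain,
// mimicking emit(cached) followed by emitAll(chain.proceed(request)).
val cacheInterceptor = ToyInterceptor { proceed ->
    listOf("cache") + proceed()
}
```

Running `execute(listOf(loggingInterceptor(log), cacheInterceptor)) { listOf("network") }` records both the "cache" and "network" emissions in the log, while swapping the two interceptors records only "network": the cache emission is produced above the logger in the chain, so it never flows through it, which matches the behavior observed in the thread.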

@StylianosGakis StylianosGakis merged commit 16d3194 into develop Aug 26, 2024
4 checks passed
@StylianosGakis StylianosGakis deleted the apollo/islast-deprecation-changes branch August 26, 2024 11:50