Replies: 1 comment 1 reply
-
I'm also interested in seeing what the recommended approach for this case would be. I think it matters for any app that displays the same data in multiple places (feed, details screen, etc.) and allows modifying it (liking, editing, and so on). There are tons of examples where data shown in different places needs to be kept in sync: changing it in one place should update it everywhere. For instance, you see a post in the feed, tap through to its details screen, which shows what it already has from the feed while fetching extra data, and the like count changes. Going back, the feed still shows the old count, because it comes from a different source and was never updated.

I actually solved this by using hive as the source of truth and having all providers listen to it and display data from it. Each entity is stored as a small normalized json object, e.g. a post only references its author by id:

```json
{
  "post_id": "id",
  "author": "author_id"
}
```

The author then gets its own normalized json, which could reference tons of posts by their ids alone. As I mentioned, the stored data is super small and can live in memory for instant access, and the providers parse these jsons into models and serve them. With this approach I get up-to-date data everywhere in my app: whenever new data comes in, no matter how deeply nested it is, it gets updated in the cache and then displayed wherever it's needed. This is only a high-level overview and there's quite a lot of code involved, but if anyone is interested I may write an article with the details or create a package.
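To make the idea concrete, here's a minimal sketch of a normalized in-memory cache that Riverpod providers watch. It uses a plain map instead of hive for brevity, and all the names (`EntityCache`, `entityCacheProvider`, `postProvider`) are illustrative, not from the actual implementation:

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

/// Illustrative model; a real app would have full fields and toJson.
class Post {
  Post({required this.id, required this.authorId});
  final String id;
  final String authorId;

  factory Post.fromJson(Map<String, dynamic> json) => Post(
        id: json['post_id'] as String,
        authorId: json['author'] as String,
      );
}

/// Holds normalized json maps keyed by "type:id", e.g. "post:42".
/// In the approach described above, hive would back this instead of memory.
class EntityCache extends StateNotifier<Map<String, Map<String, dynamic>>> {
  EntityCache() : super(const {});

  /// Merging new data here notifies every provider watching that key,
  /// no matter which request (feed, profile, details) delivered it.
  void upsert(String key, Map<String, dynamic> json) {
    state = {...state, key: {...?state[key], ...json}};
  }
}

final entityCacheProvider =
    StateNotifierProvider<EntityCache, Map<String, Map<String, dynamic>>>(
        (ref) => EntityCache());

/// A family provider that parses the cached json into a model. Every
/// listener converges on the latest cached version of the post.
final postProvider = Provider.autoDispose.family<Post?, String>((ref, id) {
  final json =
      ref.watch(entityCacheProvider.select((cache) => cache['post:$id']));
  return json == null ? null : Post.fromJson(json);
});
```

Any widget can then `ref.watch(postProvider('42'))`, and a call like `ref.read(entityCacheProvider.notifier).upsert('post:42', freshJson)` from any data source updates all of them at once.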
-
The problem is that sometimes the same models/data need to come from different sources, which inevitably results in duplicated requests and duplicated managed state. For example, consider the requirements for a social media app. To greatly simplify things, let's say there exist only user feeds and user profiles. A user feed consists of a list of posts aggregated from the most recent posts of all the user's friends. A profile consists of user data (name, handle, etc.), but it also contains a list of that user's posts. This is where the problems begin to become apparent.
With the way Riverpod works in terms of reactive programming, there is a unidirectional flow of data. This is honestly preferable, except that the implementation is somewhat restrictive in terms of the data's ultimate source. If I have an autodisposable PostFamilyProvider which takes a post id as the family input and returns a Future<Post>, I can create the provider to automatically get the exact post from the API. However, in the example of a feed, I wouldn't want it to work like that at all. It would be hugely redundant to have one API request that gets a list of post ids and then a separate request per post, when I could just treat it like a paginator and get a List<Post> from a single request. To accomplish this with Riverpod, the only way I can see is to use ProviderScopes and override PostFamilyProvider to instead get the post from the paginator.
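For readers unfamiliar with the override approach being described, here's a rough sketch. The provider names and the `Api` stub are assumed for illustration; only the general pattern (per-tile `ProviderScope` with an overridden family member) is what the text refers to:

```dart
import 'package:flutter/widgets.dart';
import 'package:flutter_riverpod/flutter_riverpod.dart';

// Placeholder API surface; not a real client.
class Api {
  Future<List<Post>> fetchFeedPage(int page) async => [];
  Future<Post> fetchPost(String id) async => Post(id: id);
}

class Post {
  Post({required this.id});
  final String id;
}

final api = Api();

/// One request returns a whole page of posts.
final feedProvider = FutureProvider.autoDispose<List<Post>>(
    (ref) => api.fetchFeedPage(0));

/// Default implementation: fetch a single post by id from the API.
final postFamilyProvider =
    FutureProvider.autoDispose.family<Post, String>(
        (ref, id) => api.fetchPost(id));

/// Inside the feed, each tile overrides the provider so it reads the
/// already-fetched post instead of issuing a second request.
Widget buildTile(Post post, Widget tile) {
  return ProviderScope(
    overrides: [
      postFamilyProvider(post.id).overrideWith((ref) async => post),
    ],
    child: tile,
  );
}
```

This works for a single screen, but as the next paragraph explains, repeating it per source is exactly what lets the copies drift apart.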
The problem with overriding PostFamilyProvider for the feed is that I then also have to do it for the profile. When a profile is retrieved from the API, the ENTIRE profile comes back in one request, including all of the profile's posts (or at least the most recent ones). Now I have to override the scope there as well to get the post from the profile. This means the profile could hold a more recently updated version of a post that is never propagated back to the feed's version, resulting in an inconsistent state.
How can this be accomplished? I would love to have an autodisposable family provider for a model that is flexible in terms of its source of data, but caches the data for all possible sources and has the same provider implementation across the entire application so that all versions of it are kept in sync and the data is never duplicated.
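One pattern that fits what's being asked for here, and roughly matches the normalized-cache approach described earlier in the thread, is a cache-first family provider: a single shared cache that every data source writes into, with the family provider reading the cache and only hitting the API on a miss. All names here are hypothetical, and `api.fetchPost` is a placeholder:

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

class Post {
  Post({required this.id});
  final String id;
}

// Placeholder single-post fetch; stands in for a real API client.
Future<Post> fetchPost(String id) async => Post(id: id);

/// The single shared cache. Every source of posts writes into it.
final postCacheProvider = StateProvider<Map<String, Post>>((ref) => const {});

/// Cache-first: if the feed or profile already delivered this post,
/// no request is made; otherwise fetch it and write it back so every
/// other listener converges on the same version.
final postProvider =
    FutureProvider.autoDispose.family<Post, String>((ref, id) async {
  final cached = ref.watch(postCacheProvider.select((cache) => cache[id]));
  if (cached != null) return cached;
  final post = await fetchPost(id);
  ref.read(postCacheProvider.notifier).state = {
    ...ref.read(postCacheProvider),
    id: post,
  };
  return post;
});

/// Any bulk source (a feed page, a profile payload) ingests its posts
/// here, which updates every postProvider(id) currently being watched.
void ingestPosts(Ref ref, List<Post> posts) {
  ref.read(postCacheProvider.notifier).state = {
    ...ref.read(postCacheProvider),
    for (final p in posts) p.id: p,
  };
}
```

With this shape there is one provider implementation for the whole app, no scope overrides, and the per-id request only happens when no other source has supplied the post yet.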