
feat: Updated Feast model Inference Architecture #4570

Status: Open. Wants to merge 3 commits into base: master.
Conversation

@franciscojavierarceo (Member) commented on Sep 24, 2024:

What this PR does / why we need it:

This PR updates the documentation to provide a diagram of how an ML Platform can work with Feast.

Which issue(s) this PR fixes:

#4455

Misc

@franciscojavierarceo changed the title from "Feast model Inference Architecture" to "feat: Updated Feast model Inference Architecture" on Sep 24, 2024
@franciscojavierarceo marked this pull request as ready for review on September 24, 2024 at 20:22
@HaoXuAI (Collaborator) left a comment:

Looks good.
I think the application -> online store path could also be an async write; it's just not supported yet.
That's something I think we can explore more, since it gives users more flexibility to write to Feast directly.

@franciscojavierarceo (Member, Author) replied:

> Looks good. I think the application -> online store path could also be an async write; it's just not supported yet. That's something I think we can explore more, since it gives users more flexibility to write to Feast directly.

I actually covered some of that in the new architecture section under Communication Patterns.

Communication Patterns

There are two ways a client (or Data Producer) can send data to the online store:

  1. Synchronously
  2. Asynchronously

Note that in some contexts, developers may "batch" a group of entities together and write them to the online store in a single API call. This is a common pattern for reducing write load, but we would not qualify it as a batch job.
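
For reference, here is a minimal sketch of that synchronous, batched write path using Feast's `write_to_online_store`; the repo path, feature view name, and entity/feature columns are assumptions for illustration only:

```python
import pandas as pd
from feast import FeatureStore

# Point Feast at an existing feature repository (path is an assumption).
store = FeatureStore(repo_path=".")

# Batch several entities' freshly computed feature values into one DataFrame
# so they reach the online store in a single API call.
rows = pd.DataFrame(
    {
        "driver_id": [1001, 1002, 1003],
        "conv_rate": [0.85, 0.91, 0.78],
        "acc_rate": [0.97, 0.99, 0.95],
        "event_timestamp": pd.to_datetime(["2024-09-24"] * 3, utc=True),
    }
)

# Synchronous write: the call returns once the online store has accepted the rows.
store.write_to_online_store(feature_view_name="driver_hourly_stats", df=rows)
```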

Let me know if you think I should add some info there. In general, if users are going to do an async write, that comes with different tradeoffs we want people to be thoughtful about, which is what I tried to convey more broadly in the Write Patterns architecture document.
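
Since the application -> online store path has no native asynchronous write today, one hypothetical workaround (a sketch only, not a supported Feast API) is to offload the existing synchronous call to a background task. The tradeoffs noted above still apply: there is no back-pressure, and a write can fail after the request has already returned.

```python
import asyncio

import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")  # repo path is an assumption


async def write_features_async(feature_view_name: str, rows: pd.DataFrame) -> None:
    # Run the blocking, synchronous Feast write in a worker thread so the
    # caller's event loop is not blocked. Errors are only logged here, which
    # illustrates the tradeoff: the producer gets no confirmation or retry.
    try:
        await asyncio.to_thread(
            store.write_to_online_store,
            feature_view_name=feature_view_name,
            df=rows,
        )
    except Exception as exc:
        print(f"online store write failed: {exc}")


# Usage from an async request handler (fire-and-forget):
# asyncio.create_task(write_features_async("driver_hourly_stats", rows))
```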
