This repository has been archived by the owner on Dec 2, 2021. It is now read-only.

Longer term roadmap for kubeflow/metadata #166

Open
karlschriek opened this issue Nov 13, 2019 · 3 comments

Comments

@karlschriek

/kind feature

Describe the solution you'd like
In the circles I move in - i.e. developers and product owners who need to build productive Machine Learning solutions - one of the key issues everyone is trying to solve is Model Management (which really comes down to proper logging of Metadata, tracking of Lineage, etc.). There are a few solutions out there gaining popularity (see MLFlow from Databricks), but there is nothing (aside from the TFX MLDB, which KF Metastore seems to be based on) that offers real integration with a project of the size and ambition of Kubeflow.

I realise that metastore is currently still in early alpha, and don't get me wrong, I'm really looking forward to seeing where you guys take it. But given the above, I do find it quite surprising that metastore is (compared to pipelines, for example) currently still such a small project.

We are currently evaluating whether Kubeflow is the right fit for our organisation, and one of the pivotal points is how we will do model management on the platform. We would find it very useful to know what the longer-term vision is here. What sort of functionality are you planning to build? How will metastore integrate with other components, such as pipelines? How do you plan to get community/user input? When do you plan to make examples available for the user community to get started with? Kubeflow is targeting a v1.0 release by early 2020; where do you see metastore at that point? Etc.

Anything else you would like to add:
Not meant as criticism. I think the whole Kubeflow community is awesome!

@jtfogarty

/area engprod
/priority p2

@alexlatchford

alexlatchford commented Feb 17, 2020

Hi @karlschriek, when you mention "model management", are you referring to being able to easily see which experiments have been run, or are you thinking more deeply about understanding which models are deployed, similar to the MLFlow model registry?


I too am at the evaluation stage and wondering the same thing. I'm loving pipelines and discovered the kfserving pipeline component, which seems like a convenient way to deploy models to different environments once they've passed some validation. Hook that up with Argo's suspend template functionality (for manual approvals, if you need them) and you've got a pretty nice-looking deployment pipeline. The next problem then becomes neatly tracking what is actually running in each environment and how it got there (i.e. which pipeline) via some UI + datastore.
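To make the "what is running where, and how did it get there" part concrete, here is a minimal sketch of the datastore side of that idea. All names here (`DeploymentLog`, `promote`, etc.) are hypothetical illustrations, not part of any Kubeflow or Argo API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Dict, List

@dataclass(frozen=True)
class Deployment:
    model_uri: str      # the model artifact the serving component points at
    pipeline_run: str   # which pipeline run promoted it to this environment
    deployed_at: str    # UTC timestamp of the promotion

class DeploymentLog:
    """Append-only record of what ran in each environment and how it got there."""

    def __init__(self) -> None:
        self._history: Dict[str, List[Deployment]] = {}

    def promote(self, env: str, model_uri: str, pipeline_run: str) -> None:
        # Record a promotion, e.g. after an Argo suspend step is approved.
        self._history.setdefault(env, []).append(
            Deployment(model_uri, pipeline_run,
                       datetime.now(timezone.utc).isoformat()))

    def current(self, env: str) -> Deployment:
        # What is running in this environment right now.
        return self._history[env][-1]

    def history(self, env: str) -> List[Deployment]:
        # Full lineage of deployments to this environment.
        return list(self._history.get(env, []))
```

A UI over this would just render `current` per environment and `history` for the audit trail.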

@karlschriek
Author

karlschriek commented Feb 18, 2020

Yes, in fact at the moment we are rolling out MLFlow within our Kubeflow cluster, as it seems to be the most mature solution currently out there. However, MLFlow doesn't nearly cover what we need.

When I talk about model management, I broadly mean a system that abstracts away the nitty-gritty of:

  • Uniquely naming, versioning and describing models and saving them in a remote location
  • Doing the same for datasets, and then defining relationships between datasets and models (for example, 1-to-1 relationship between model and training datasets, 1-to-n relationship between model and test datasets).
  • Capturing metrics for various combinations of model and dataset, both on aggregate level as well as on more granular levels (e.g. per label)
  • A process that facilitates selecting the best model. For example, I choose a test dataset, a model purpose and a metric, and based on that choose which model we will use in production. The metastore tracks these selections. It also tracks when models become "deactivated".
  • Directly serving models from a model registry is less interesting for our use case, but I can see the appeal of something like that

For the moment we need to rely on a lot of bespoke code to do this.
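The bullet points above can be sketched as a small, self-contained data model. This is purely illustrative bespoke code of the kind described, assuming nothing about the kubeflow/metadata API; every name in it is hypothetical:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass(frozen=True)
class Dataset:
    name: str      # unique name
    version: str
    uri: str       # remote storage location

@dataclass
class Model:
    name: str
    version: str
    uri: str
    training_dataset: Dataset                  # 1-to-1 with the training set
    test_datasets: List[Dataset] = field(default_factory=list)  # 1-to-n
    active: bool = True                        # flipped when "deactivated"
    # Metrics keyed by (test dataset name, metric name); each value holds an
    # aggregate score plus an optional per-label breakdown.
    metrics: Dict[Tuple[str, str], dict] = field(default_factory=dict)

    def log_metric(self, dataset: Dataset, metric: str, aggregate: float,
                   per_label: Optional[Dict[str, float]] = None) -> None:
        self.metrics[(dataset.name, metric)] = {
            "aggregate": aggregate,
            "per_label": per_label or {},
        }

def select_best(models: List[Model], dataset: Dataset, metric: str) -> Model:
    """Pick the active model with the best aggregate score on the given test
    dataset and metric; a real metastore would also record this selection."""
    return max((m for m in models if m.active),
               key=lambda m: m.metrics[(dataset.name, metric)]["aggregate"])
```

The selection step itself would be persisted alongside the models, so lineage from dataset to metric to production choice stays queryable.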
