Title: Add support for new types which store hyperparameter tuning related metadata
Introduction
The current Model Registry (MR) is designed to save and manage model metadata efficiently, allowing users to track and maintain the lifecycle of their machine learning models. This service is essential for ensuring that model information is easily accessible and utilizable.
However, users often require more detailed information about a model, such as the hyperparameters and values used during its creation or training. This is especially critical for those performing hyperparameter tuning, where multiple trials with varying hyperparameter configurations are conducted to optimize model performance.
Goal
To address this need, we propose enhancing the Model Registry to support the storage of detailed experiment, trial, and Hyperparameter Optimization (HPO) configuration information. Specifically, we aim to introduce new data types: HPO Experiment, HPO Trial, and HPO Config into the Model Registry.
Run an HPO experiment
Using a Jupyter notebook, where the user must keep their laptop or server running for the entire duration of the experiment, which can take days to weeks depending on the number of trials. This can be done in two ways: running the HPO experiment on a notebook server or on Ray clusters.
Using Data Science Pipelines which is automated.
Hyper Parameter Optimization (HPO) Model Registry Integration
Overview
In order to integrate Hyper Parameter Optimization (HPO) functionality into OpenShift AI, we need to include additional types in the Model Registry (MR) to save the metadata of each HPO experiment. This work is detailed in the ADR: ODH-ADR-0011-hpo-raytune. We have minimized the number of new types and unnecessary additions by reusing existing ML Metadata (MLMD) and Kubeflow (KF) types.
New Types for HPO Integration
kf.HPOConfig (Artifact)
Captures the hyperparameter configuration for all trials of HPO experiments. Links one-to-one with trials.
| Field | Type | Description |
| --- | --- | --- |
| hpoconfig_id | string | ID of the artifact |
| name | string | Name of the artifact |
| state | ArtifactState | State of the artifact |
| description | string | Description of the configuration |
| configuration | map[string, string] | Metadata of the configuration |
| trial_id | string | ID of the trial the config is linked to |
| trial_name | string | Name of the trial the config is linked to |
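As an illustration only, the proposed `kf.HPOConfig` fields could be sketched as a Python dataclass. The class and enum names here are hypothetical and mirror the table above; this is not the actual Model Registry or MLMD implementation:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict


class ArtifactState(Enum):
    """Illustrative subset of MLMD artifact lifecycle states."""
    UNKNOWN = 0
    PENDING = 1
    LIVE = 2


@dataclass
class HPOConfig:
    """Sketch of the proposed kf.HPOConfig artifact type."""
    hpoconfig_id: str
    name: str
    state: ArtifactState
    description: str
    # Hyperparameter name -> serialized value, e.g. {"lr": "0.01"}
    configuration: Dict[str, str] = field(default_factory=dict)
    # One-to-one link back to the trial that used this configuration
    trial_id: str = ""
    trial_name: str = ""
```

A consumer could then record one such artifact per trial, with `configuration` holding the exact hyperparameter values that trial evaluated.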
kf.HPOTrial (Context)
Provides information about individual trials conducted within an experiment, aiding in experiment management.
| Field | Type | Description |
| --- | --- | --- |
| trial_id | string | ID of the context |
| context_state | ContextState | State of the context |
| trial_name | string | Name of the trial |
kf.HPOExperiment (Context)
The parent context of the trials, having a one-to-many relationship with them. Serves as a parent context for trials, facilitating the organization and comparison of experiments.
| Field | Type | Description |
| --- | --- | --- |
| experiment_name | string | Name of the experiment |
| description | string | Description of the experiment, optional field |
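To make the relationships between the three proposed types concrete, here is a minimal Python sketch (all class and field names are illustrative, not the real Model Registry API) of one `kf.HPOExperiment` holding many `kf.HPOTrial` contexts, each linked one-to-one to a `kf.HPOConfig` artifact:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class HPOConfig:
    hpoconfig_id: str
    configuration: Dict[str, str]
    trial_id: str  # one-to-one link back to the owning trial


@dataclass
class HPOTrial:
    trial_id: str
    trial_name: str
    config: HPOConfig  # each trial carries exactly one HPOConfig


@dataclass
class HPOExperiment:
    experiment_name: str
    description: str = ""
    trials: List[HPOTrial] = field(default_factory=list)  # one-to-many


# Assemble an experiment with two trials, each with its own config.
exp = HPOExperiment(experiment_name="mnist-tuning")
for i, lr in enumerate(["0.01", "0.001"]):
    cfg = HPOConfig(
        hpoconfig_id=f"cfg-{i}",
        configuration={"lr": lr},
        trial_id=f"trial-{i}",
    )
    exp.trials.append(
        HPOTrial(trial_id=f"trial-{i}", trial_name=f"trial-{i}", config=cfg)
    )

# With the parent context in place, trials can be compared across the
# experiment, e.g. selecting the trial with the smallest learning rate.
best = min(exp.trials, key=lambda t: float(t.config.configuration["lr"]))
```

The parent-context pattern shown here is what lets a registry client enumerate all trials of an experiment and compare their configurations side by side.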