Implement the independent standard error as a metric #255

Open
mastoffel opened this issue Oct 9, 2024 · 0 comments

The ISE is a metric for models whose predict method has a return_std argument. It calculates the percentage of true values that fall within 2 standard deviations of the predicted means, so something like this:

import numpy as np

def ise(estimator, X, y_true):
    """
    Scorer: the Independent Standard Error (ISE).

    Parameters:
    estimator: fitted model whose predict method supports return_std=True
    X (array-like): Input features
    y_true (array-like): True output values

    Returns:
    float: ISE value as a percentage, or np.nan if the estimator
        cannot return a predictive standard deviation
    """
    if not has_predict_with_std(estimator):
        return np.nan
    y_pred_mean, y_pred_std = estimator.predict(X, return_std=True)
    # fraction of true values within 2 std of the predicted mean
    within_2std = np.abs(y_pred_mean - y_true) < 2 * y_pred_std
    return np.mean(within_2std) * 100
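
For reference, a minimal sketch of how this could be exercised. The inspect-based has_predict_with_std here is only a hypothetical stand-in for whatever check the codebase already has, and GaussianProcessRegressor just stands in for any model exposing return_std:

import inspect
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def has_predict_with_std(estimator):
    # hypothetical stand-in for the project's own capability check
    return "return_std" in inspect.signature(estimator.predict).parameters

rng = np.random.default_rng(0)
X = rng.random((60, 2))
y = X.sum(axis=1) + 0.1 * rng.standard_normal(60)
X_train, X_test, y_train, y_test = X[:50], X[50:], y[:50], y[50:]

gp = GaussianProcessRegressor(alpha=0.01).fit(X_train, y_train)
print(ise(gp, X_test, y_test))  # percentage of test targets inside the 2-std band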

The problem is that this can't just be wrapped with make_scorer in cross_validate.py, because make_scorer only hands the metric y_pred, while ISE needs predict to be called with return_std=True (the default is False); one possible workaround is sketched below. It also needs to work with multioutput.
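
One possible workaround, since scikit-learn's cross_validate accepts a plain callable with the signature (estimator, X, y) as a scorer, is to pass ise directly without make_scorer (whether that fits how cross_validate.py builds its scorers is an open question). A sketch, reusing X, y and ise from above:

from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_validate

# a callable with signature (estimator, X, y) is accepted directly as a
# scorer, so predict can be called with return_std=True inside the metric
model = GaussianProcessRegressor(alpha=0.01)
results = cross_validate(model, X, y, cv=5, scoring={"ise": ise, "r2": "r2"})
print(results["test_ise"])  # one ISE value per fold

Multioutput is a separate wrinkle: assuming predict returns per-target standard deviations of shape (n_samples, n_targets), the comparison in ise broadcasts element-wise and np.mean averages across all targets; per-target percentages would need np.mean(within_2std, axis=0) instead.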
