What else do we need for postprocessing? #68
Hey @topepo! I'm not entirely sure I understand how this is meant to work, but I've poked around the package some, so I'll do my best. In part I'll reference the API of the postprocessor that we've been building. Proceeding along the lines of the Cubist example and checking my understanding of the tailor API, it seems like:
Sorry for such a high-level comment in what is a thread about details. Broadly speaking, for traditional "batch" prediction problems (i.e., NOT sequential prediction, as in time series), I think it makes sense to allow the post-processor to access what it needs in order to make new predictions on a calibration set. It sounds like what you're describing accommodates this. (P.S. Thanks for the pointer to the Quinlan 1993a paper, which I learned about from your blog post --- vaguely, it seems like this may be seen as a batch analog of the online calibration/debiasing methods that others and I have been working on recently.)

For "online" prediction problems (i.e., sequential prediction, as in time series), there's a different set of post-processing methods which recalibrate based on the past data and the past predictions themselves. Minimally, for some methods, you just need the single most recent data point and the most recent corresponding prediction (along with, say, the most recent parameter value in whatever recalibration model you're using). So it's really pretty minimal, and you don't need all of the framework you're describing. That said, if you're also looking to accommodate online prediction here, then it would be nice to allow this to fit in somehow: I just need to pass in y_t (data), \hat{y}_t (prediction), and \theta_t (parameter) to the post-processor at time t.
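For concreteness, here is a minimal sketch (plain R, with hypothetical function names, not part of tailor's actual API) of the kind of online update being described, where the recalibration model is just an additive bias term:

```r
# At time t, the post-processor only needs the newest observation y_t, the
# newest prediction yhat_t, and the current parameter theta_t. The learning
# rate and the additive-bias form of the recalibration model are assumptions
# for illustration.
update_online_calibration <- function(theta_t, y_t, yhat_t, learning_rate = 0.1) {
  # shift the bias estimate toward the most recent residual
  theta_t + learning_rate * ((y_t - yhat_t) - theta_t)
}

# applying the current parameter to a new prediction
calibrate_online <- function(yhat_new, theta_t) {
  yhat_new + theta_t
}
```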
I have a specific argument to make regarding two potential adjustments. However, it would also be good to get a broader set of opinions from others. Maybe @ryantibs and/or @dajmcdon have thoughts.
My thought: there are three things that we might consider as optional arguments to the tailor (or to an individual adjustment):
Why? Two similar calibration tools prompted these ideas. One is what Cubist does to postprocess its predictions, discussed and illustrated in this blog post. The other is discussed in #67 and has requirements similar to those of the Cubist adjustment.
After the supervised model predicts, Cubist finds the new sample's nearest neighbors in the training set. It then adjusts the prediction based on the distances to those neighbors and the training set predictions for the neighbors.
We don't have to use the training set; it could conceivably be a calibration set. To generalize, I'll call it the reference data set.
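As a rough sketch of the adjustment just described (hypothetical function and argument names, assuming the reference set and the new sample have already been preprocessed into numeric predictor matrices and that the reference outcomes and reference predictions are available):

```r
adjust_by_neighbors <- function(yhat_new, x_new, ref_x, ref_y, ref_yhat,
                                neighbors = 5) {
  # Euclidean distances from the new sample to every reference sample
  dists <- sqrt(colSums((t(ref_x) - x_new)^2))

  # indices of the closest reference samples
  nn <- order(dists)[seq_len(neighbors)]

  # each neighbor "votes" with its observed outcome, shifted by the gap
  # between the new prediction and the neighbor's own prediction
  adjusted <- ref_y[nn] + (yhat_new - ref_yhat[nn])

  # simple inverse-distance weighting of the neighbor votes
  w <- 1 / (dists[nn] + 0.5)
  sum(w * adjusted) / sum(w)
}
```

The exact weighting scheme isn't the point; the point is what the adjustment needs access to: the new prediction, the processed reference predictors and outcomes, and the reference set predictions.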
To do this with a tailor, we would already have the current prediction from the model (which may have already been adjusted by other postprocessors) and perhaps the reference set predictions if we are properly prepared.
To find the neighbors, we need to process both the reference set and the new predictors in the same way that the data were prepared for the supervised model. For this, we'd need the mold from the workflow.
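To sketch why the mold matters here (assuming wflow_fit is a fitted workflow and ref_data / new_data are raw data frames; the hardhat/workflows calls below are how I'd expect this to look, but the details may differ):

```r
library(workflows)
library(hardhat)

# the mold records how the training data were preprocessed for the model
mold <- extract_mold(wflow_fit)

# forge() applies the blueprint stored in the mold to new data frames, so the
# reference set and the new data end up with the same encodings/scales the
# model saw
ref_processed <- forge(ref_data, mold$blueprint)$predictors
new_processed <- forge(new_data, mold$blueprint)$predictors
```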
When making the tailor, we could specify the number of neighbors and pass in the reference data set and the mold. We could require the predictions for the reference set to be included in the reference set data frame, avoiding the need for the workflow.
The presence of the workflow is a little dangerous; it would likely include the tailor itself. Apart from the infinite recursion of adding a workflow to a tailor adjustment that is itself contained in that workflow, we would want to avoid people accidentally misapplying the workflow. Let's exclude the workflow as an input to a tailor adjustment but keep the idea of adding a data set of predictors and/or the workflow's mold.
Where would we specify the mold or data? In the main tailor() call or in the adjustments? The mold is independent of the data set and would not vary from adjustment to adjustment, so an option to tailor() would be my suggestion. The data set probably belongs in the adjustments. Unfortunately, multiple data sets could be included, depending on what is computed after the model prediction (#4 is relevant here).
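Something along these lines is what I have in mind. Neither the mold argument to tailor() nor an adjust_nearest_neighbors() adjustment exists today; this is only a sketch of where each piece of information would live:

```r
# hypothetical API: the mold is supplied once in tailor(), while the
# reference data (with its predictions already attached, e.g. in a .pred
# column) and the tuning details go to the individual adjustment
post <- tailor(mold = extract_mold(wflow_fit)) |>
  adjust_nearest_neighbors(
    ref_data = ref_data_with_predictions,
    neighbors = 5
  )
```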