Experimental command line interface UX #6135
base: branch-24.12
Conversation
Good job! Just have a few comments. I focused my review on the estimator proxy and the interception function.
@@ -471,6 +477,45 @@ class Base(TagsMixin,
            func = nvtx_annotate(message=msg, domain="cuml_python")(func)
            setattr(self, func_name, func)

    @classmethod
    def _hyperparam_translator(cls, **kwargs):
I think that the function could be simplified. Also an edge case, but what happens when the result of the translation is supposed to be an "accept" or "dispatch" string?
I also think we could improve this a bit.
Before looking at how it was done in this PR my expectation was to find something like this:

class Base:
    def _translate_hyperparameters(self, foo="some value", **hyper_parameters):
        if foo != "some value":
            hyper_parameters["foo"] = "some value"
        return hyper_parameters

class A(Base):
    def _translate_hyperparameters(self, bar=True, **hyper_parameters):
        hyper_parameters = super()._translate_hyperparameters(**hyper_parameters)
        if not bar:
            raise NotImplementedError(f"Can't dispatch with '{bar=}', has to be True.")
        return hyper_parameters
So that we can accumulate translations from Base upwards and save on having the extra layer of the translation dictionary.
(I need to think a bit more/try out the code, so bugs ahead, but right now I'm channeling my best inner Raymond Hettinger: "there must be a better way" :D)
@betatim the reason to avoid a method per class like that is to avoid a ton of extra code per algorithm; the dictionary is actually based on scikit-learn's parameter constraints, just simplified for now.
@viclafargue not sure I understand the edge case? What happens when the result of the translation is supposed to be an "accept" or "dispatch" string?
Let's say there's a hyperparameter called `nan_values_handling`. It can take the value "allow" in cuML, but its equivalent in sklearn is "accept". Is there a way to write a dictionary that does the translation?
Can we just use an `Enum` for `accept` and `dispatch`? To me it feels brittle if there's some estimator in the future that uses the "accept" string as a valid hyperparam value.
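For what it's worth, a minimal sketch of the `Enum` idea (the `Action` enum, the `translator` name, and the `"irls"` value are all hypothetical, not part of this PR):

```python
from enum import Enum, auto

class Action(Enum):
    """Sentinel markers that can never collide with a real string value."""
    ACCEPT = auto()
    DISPATCH = auto()

# A translation table can then mix real translations with sentinels;
# the literal string "accept" remains usable as an ordinary value.
translator = {
    "solver": {
        "lbfgs": "qn",            # translate to the cuML equivalent
        "accept": Action.ACCEPT,  # the *string* "accept" is still a valid value
        "irls": Action.DISPATCH,  # hypothetical value with no GPU support
    },
}
```

Because `Action.ACCEPT` is an enum member and not the string `"accept"`, no future estimator's string-valued hyperparameter can shadow it.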
I'm not sure we have to worry about the case where the value is "accept" or "dispatch". The way I understand a dictionary like

_hyperparam_interop_translator = {
    "solver": {
        "lbfgs": "qn",
        "liblinear": "qn",
        "newton-cg": "qn",
        "newton-cholesky": "qn",
        "sag": "qn",
        "saga": "qn",
    },
}
is that it lists those hyper-parameters that need translating, and for each of those it lists the values and their translations. For example `solver="lbfgs"` needs translating to `solver="qn"`, but say `foobar=42` won't need translating because `foobar` isn't listed. Similarly `solver="dazzle"` doesn't need translating because it isn't listed. This makes me think we don't really need an entry with a value of `"accept"`; we could just have no entry (`"accept"` is the default assumption).
That is the `"accept"` magic value dealt with. Then there is the case of `"dispatch"`, which is used for parameter values that can't be translated. It means "use scikit-learn". I think we should replace it with something like `NotImplemented`, or some other exception or similar. This would deal with the case where a cuml value for a parameter should be "dispatch": here we aren't able to tell whether we should use scikit-learn or use cuml with this value. Hence let's use something like `NotImplemented`.
The parameter values used in scikit-learn don't matter because they are keys in the dictionaries.
Having slept on my comment I've changed my mind. I still think having a method on each class that modifies the parameters as it sees fit could be nice. Not sure if it will create more typing/work than using a `dict` ¯\_(ツ)_/¯.
I also pondered the `_hyperparam_translator` function and I think this is how I'd write it. It doesn't use my suggestion from above (`NotImplemented`) but it could. The main changes are that it isn't a classmethod anymore (I couldn't work out why it was one), it merges the base class's translations with those of the derived class (I assume those are the only two interesting ones; we don't need to merge them together from the whole inheritance tree), and I removed the `if` cases that were for `"accept"` (I think we don't do anything in those cases other than 👍).
def _hyperparam_translator(self, **kwargs):
    """
    This method is meant to do checks and translations of hyperparameters
    at estimator creation time.

    Each child estimator can override the method, returning modified
    **kwargs with equivalent options together with a flag indicating
    whether the estimator can be GPU accelerated.
    """
    gpuaccel = True
    # Copy it so we can modify it
    translations = dict(super()._hyperparam_interop_translator)
    # Allow the derived class to overwrite the base class
    translations.update(self._hyperparam_interop_translator)
    for parameter_name, value in kwargs.items():
        # maybe clean up using: translations.get(parameter_name, {}).get(value, None)?
        if parameter_name in translations:
            if value in translations[parameter_name]:
                if translations[parameter_name][value] == "dispatch":
                    gpuaccel = False
                else:
                    kwargs[parameter_name] = translations[parameter_name][value]
    return kwargs, gpuaccel
One more thought on `"dispatch"`: if we don't replace it with `NotImplemented` or similar, can we use `use_cpu` or something? I can't keep it straight in my head what "dispatch" means in the various libraries (most mean "use a GPU" when they talk about dispatching; in cuml it means "use a CPU"). Maybe something explicit like `"use_cpu"` makes it easier to reason about what is happening (though my preference is still `NotImplemented` or similar).
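A hedged sketch of what the exception-based variant could look like (all names here — `UnsupportedOnGPU`, `TRANSLATIONS`, `translate` — are mine, not the PR's; `NotImplementedError` is used because `NotImplemented` itself is not raisable):

```python
class UnsupportedOnGPU(NotImplementedError):
    """Raised when a hyperparameter value has no cuML equivalent."""

# Hypothetical table: a value maps either to its cuML equivalent or to
# the exception class, which marks "this must fall back to scikit-learn".
TRANSLATIONS = {
    "solver": {"lbfgs": "qn", "liblinear": "qn", "sag": UnsupportedOnGPU},
}

def translate(**kwargs):
    for name, value in kwargs.items():
        target = TRANSLATIONS.get(name, {}).get(value)
        if target is UnsupportedOnGPU:
            raise UnsupportedOnGPU(f"{name}={value!r} must run on CPU")
        if target is not None:
            kwargs[name] = target
    return kwargs
```

The caller then catches `UnsupportedOnGPU` and dispatches to scikit-learn, instead of threading a boolean flag through every return value.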
        # GPU case
-       if device_type == DeviceType.device:
+       if device_type == DeviceType.device or func_name not in ['fit', 'fit_transform', 'fit_predict']:
Could you explain the change?
@@ -234,7 +234,7 @@ class UMAP(UniversalBase,
         are returned when transform is called on the same data upon
         which the model was trained. This enables consistent
         behavior between calling ``model.fit_transform(X)`` and
-        calling ``model.fit(X).transform(X)``. Not that the CPU-based
+        calling ``model.fit(X).transform(X)``. Note that the CPU-based
Thanks for fixing this one ^^
    )
    super().__init__()
    self.import_cpu_model()
    self._cpu_model = self._cpu_model_class()
It would be preferable to instantiate with hyperparameters if available. Aren't they stored as attributes? Why not call `build_cpu_model`? Maybe we should add `_full_kwargs` to the `state` dict before serialization?
self.output_type = "numpy"
self.output_mem_type = MemoryType.host
Could you explain?
# currently we just use this dictionary for debugging purposes
patched_classes = {}
Do we plan to keep this for merge?
If so, we should make it threadsafe.
with disable_module_accelerator():
    filename = self.__class__.__name__ + "_sklearn"
    with open(filename, "wb") as f:
        pickle.dump(self._cpu_model_class, f)
Could you explain this part?
Also, can `pickle.dumps` be used instead?
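For reference, `pickle.dumps`/`pickle.loads` would keep the round trip in memory; a toy sketch with a stand-in class, since `self._cpu_model_class` isn't available here:

```python
import pickle

class FakeCPUModel:
    """Stand-in for the real _cpu_model_class (illustration only)."""

# Serialize the class to bytes instead of writing a file to disk;
# classes are pickled by reference, so this stays cheap.
payload = pickle.dumps(FakeCPUModel)
restored = pickle.loads(payload)
```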
I'm also interested in understanding why this is here/what it does. If we really have to write to a file it should probably be a `tempfile`. But ideally we don't have to do that?
The whole story around pickling warrants some developer documentation, I think.
from ..estimator_proxy import intercept


UMAP = intercept(
HDBSCAN
Finished first half of the review but would like to continue going over the rest more carefully from here.
from .magics import load_ipython_extension

# from .profiler import Profiler
Let's remove the comment.
__all__ = ["load_ipython_extension", "install"]


LOADED = False
Is this threadsafe? My initial instinct is that this requires protection by a lock, but maybe there is some reason why this wouldn't be an issue?
    original_class_name="DBSCAN",
)

# HDBSCAN = intercept(
Could we add a comment explaining why the commented-out algorithms are disabled here?
I'd be in favour of the proposal from your other comment: remove commented out code and code that does nothing. We can easily add it back (from the git commit history) if we want to
self._cpu_model_class = (
    original_class_a  # Store a reference to the original class
)
# print("HYPPPPP")
# print("HYPPPPP") |
)

def __setstate__(self, state):
    print(f"state: {state}")
print(f"state: {state}") |
# limitations under the License.
#

from __future__ import annotations
Let's remove any unused code from this file.
frame = sys._getframe()
# We cannot possibly be at the top level.
assert frame.f_back
calling_module = pathlib.PurePath(frame.f_back.f_code.co_filename)
Have we profiled to understand the performance implications of this mechanism? Inspecting the frame seems like something we should do only if we really have to. In the context of cuML, we're already tracking whether or not we're internal to the cuML API, so do we need to use this?
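For concreteness, this is roughly what the frame inspection in the snippet does, factored into a helper (the helper name is mine, not the PR's):

```python
import pathlib
import sys

def calling_module_path():
    """Return the file of whoever called this function, via frame inspection."""
    frame = sys._getframe()
    # There is always a caller: this function cannot be the top-level frame.
    assert frame.f_back is not None
    return pathlib.PurePath(frame.f_back.f_code.co_filename)
```

`sys._getframe` is CPython-specific and each call walks live interpreter state, which is why it is worth profiling if it sits on a hot path.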
    f".._wrappers.{mode.slow_lib}", __name__
)
try:
    (self,) = (
Why the tuple unpacking here?
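Presumably it enforces "exactly one element": single-element tuple unpacking raises `ValueError` for zero or multiple items. A quick illustration:

```python
# Exactly one element in the list: unpacking succeeds
(only,) = [x for x in ("a", "b") if x == "b"]

# Two elements match here, so unpacking raises ValueError
try:
    (match,) = [s for s in ("a", "b", "ab") if "b" in s]
except ValueError:
    match = None
```

So the unpacking doubles as a built-in assertion that the search produced a unique result.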
return self


def disable_module_accelerator() -> contextlib.ExitStack:
As a follow-on, we should be able to make this much faster using our global settings object. Not critical for the initial merge though.
@click.command()
@click.option("-m", "module", required=False, help="Module to run")
@click.option(
    "--profile",
Can we add a warning here that this is an unused parameter at the moment?
How about removing parameters that do nothing at the moment? If we want to add them (with functionality) it is easy enough to do, and it keeps things tidy, both for the reader of the help message and the reader of the code.
    help="Perform per-function profiling of this script.",
)
@click.option(
    "--line-profile",
Same as above
# from .profiler import Profiler, lines_with_profiling

# @magics_class
# class CumlAccelMagic(Magics):
#     @cell_magic("cuml.accelerator.profile")
#     def profile(self, _, cell):
#         with Profiler() as profiler:
#             get_ipython().run_cell(cell)  # noqa: F821
#         profiler.print_per_function_stats()

#     @cell_magic("cuml.accelerator.line_profile")
#     def line_profile(self, _, cell):
#         new_cell = lines_with_profiling(cell.split("\n"))
#         get_ipython().run_cell(new_cell)  # noqa: F821
Remove?
No major blockers
# else:
#     gpuaccel = False
Why is this line commented out?
)


def pytest_load_initial_conftests(early_config, parser, args):
So the no-code change magic kicks in when running the pytest suite? Very cool.
By the way, does it affect the other pytests that are outside `cuml.experimental.accel`? Many of our existing tests assert that a cuML algorithm matches the output of sklearn's counterpart.
I ran the following small snippet to see things in action, but I'm now puzzled about whether or not cuml was used. Is there an easy way to tell (assume I'm a simple-minded user who isn't going to dig into the cuml codebase)?

import cuml.experimental.accel
cuml.experimental.accel.install()

from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, y = make_blobs()
km = KMeans()
km.fit(X, y)
print(f"{km.cluster_centers_=}")
print(km.score(X, y))

This outputs the following:
I was expecting to see either a log message saying "This was run on the GPU!" (or something similarly positive and simple), or as an alternative something like what I proposed in scikit-image where we issue a […]. The second thing I thought might tell me if it was dispatched was inspecting a fitted attribute, though I guess cuml array works hard to make that hard :-/
    original_class_a  # Store a reference to the original class
)
# print("HYPPPPP")
kwargs, self._gpuaccel = self._hyperparam_translator(**kwargs)
We need to handle things like `KMeans(8)` as well. So not just translating keyword arguments, but also the (few?) occasions where positional arguments are allowed for the scikit-learn estimator.
Right now `kwargs = {'args': 8}` when you instantiate `KMeans(8)` like this. That seems definitely not what we want :D
I think we can use the result of `inspect.signature(self._cpu_model_class).bind(*args, **kwargs).arguments` to get a dictionary that contains everything the user passed.
To get all arguments, including defaults that the user didn't pass:

bound = inspect.signature(self._cpu_model_class).bind(*args, **kwargs)
bound.apply_defaults()
print(bound.arguments)

I think this is what we need to pass to the translator.
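To make that concrete with a runnable toy (a hypothetical `ToyKMeans`, not scikit-learn's actual class):

```python
import inspect

class ToyKMeans:
    def __init__(self, n_clusters=8, max_iter=300):
        self.n_clusters = n_clusters
        self.max_iter = max_iter

# Positional call like ToyKMeans(5): bind() recovers the parameter name
bound = inspect.signature(ToyKMeans).bind(5)
print(dict(bound.arguments))  # {'n_clusters': 5}

# apply_defaults() fills in everything the user didn't pass
bound.apply_defaults()
print(dict(bound.arguments))  # {'n_clusters': 5, 'max_iter': 300}
```

Note that `inspect.signature` on a class reflects its `__init__` (minus `self`), so the positional `8` in `KMeans(8)` would land under its real name instead of an `'args'` catch-all.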
I had never seen this `KMeans(8)`; what is the expected behavior here? I thought positional arguments (except for X and y) were not allowed?
def __repr__(cls):
    return repr(original_class_a)


class ProxyEstimator(class_b, metaclass=ProxyEstimatorMeta):
What was the thinking around making this inherit from `class_b` (the cuml class, right?)? Naively I'd have made this a class that doesn't inherit from either class A or class B, but has an instance of each stored as an attribute, and then proxies to the appropriate one.
The next thing I'd have tried is inheriting from the sklearn class, in the hopes of making `isinstance` checks "just work". Though I think I'd probably have gone back to using attributes because the asymmetry of proxying A and B would have confused me.
Looking at this class, it mostly does hyper-parameter translation and then deals with serialisation. I think this is because the cuml class that we inherit from already has all the methods we need and they get called directly. Which might be the answer to my original question?
The machinery of dispatching lives in the cuML class; otherwise we would need to recreate dispatching here when we already have it in cuML, as you mention at the end indeed. So essentially the ProxyEstimator is just an extension of the existing cuML estimators that have functionality for cuml-cpu, so there could be an argument in the future that we could just upstream the functionality here to `Base` as we refactor our Python codebase.

def reconstruct_proxy(original_module, new_module, class_name_a, args, kwargs):
    "Function needed to pickle since ProxyEstimator is"
Needs proper triple quotes, and it seems the docstring ends mid-sentence.
In general I think we can fix/change most things here after people start trying it. These are things I'd fix before: