Clean up `get_all_devices` API to clear up confusion over OpenVINO "GPU" and Tensorflow "GPU" #35
Comments
Instead of returning … I'm also open to major overhauls, if anyone has any clever ideas.
If it's an enum, would that prevent users from implementing their own DeviceMappers that map to their own hardware, since they can't add to the enum at runtime? I'm not sure if that's something anyone could do in practice, though.
That was a typo, oops! I actually meant … The current device mapper filters expect a list of … Another option would be:

```python
class Provider(Enum):
    TENSORFLOW = "tensorflow"
    OPENVINO = "openvino"
    ...  # future providers

@dataclass
class DeviceType:
    provider: Provider
    name: str

def get_all_devices() -> List[DeviceType]:
    ...
```

Thoughts on this?
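As a rough sketch of how this proposal could resolve the "GPU" collision in practice (the stubbed device list and `frozen=True` are assumptions for illustration, not part of the actual vcap implementation):

```python
from dataclasses import dataclass
from enum import Enum
from typing import List


class Provider(Enum):
    """Hypothetical enum of inference backends, per the proposal above."""
    TENSORFLOW = "tensorflow"
    OPENVINO = "openvino"


@dataclass(frozen=True)
class DeviceType:
    provider: Provider
    name: str


def get_all_devices() -> List[DeviceType]:
    # Stubbed discovery results for illustration; a real implementation
    # would query each backend for its devices.
    return [
        DeviceType(Provider.TENSORFLOW, "GPU"),
        DeviceType(Provider.OPENVINO, "GPU"),
        DeviceType(Provider.OPENVINO, "CPU"),
    ]


# The provider field disambiguates the two "GPU" entries:
openvino_gpus = [d for d in get_all_devices()
                 if d.provider is Provider.OPENVINO and d.name == "GPU"]
```

With the provider attached to each device, a filter can select exactly one of the two "GPU" devices instead of matching both by name.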
My vote is for the dataclass 👍. Could (should?) …
Technically … Order doesn't matter.
Alex and I talked about a possible design for this where …
Why key them by string name instead of something higher-level?
Something higher level would be fine, I just don't think it should be an enum because that would prevent application developers from adding their own providers.
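A minimal sketch of the non-enum alternative raised here: keeping `provider` as a plain string lets third-party DeviceMapper implementations introduce providers the library has never heard of (the `"my_accelerator"` provider and `BUILTIN_PROVIDERS` set below are hypothetical):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DeviceType:
    # A plain string instead of an Enum, so application developers can
    # register their own provider names at runtime.
    provider: str
    name: str


# Providers the library ships with (hypothetical constant for illustration).
BUILTIN_PROVIDERS = {"tensorflow", "openvino"}

# A third-party mapper can describe its own hardware without any
# changes to the library itself:
custom_device = DeviceType(provider="my_accelerator", name="NPU:0")
```

The trade-off is losing the exhaustiveness and typo-checking an `Enum` provides, in exchange for open extensibility.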
Now that some implementations of the OpenVisionCapsules spec are integrating support for loading OpenVINO models onto integrated GPUs, there's a problem of name collisions from the `device_mapping.get_all_devices()` function. Currently it returns all devices discovered by Tensorflow and OpenVINO combined.