
Update/readme #157

Merged
merged 6 commits into from Nov 21, 2024
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -24,6 +24,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
- PyMilo exception types added in `pymilo/exceptions/__init__.py`
- PyMilo exception types added in `pymilo/__init__.py`
### Changed
- `README.md` updated
- `communication_protocol` parameter added to `PyMiloClient` class
- `communication_protocol` parameter added to `PyMiloServer` class
- `ML Streaming` testcases updated to support protocol selection
22 changes: 16 additions & 6 deletions README.md
@@ -157,27 +157,37 @@ You can easily serve your ML model from a remote server using `ML streaming` feature

⚠️ In order to use the `ML streaming` feature, make sure you've installed the `streaming` mode of PyMilo.

You can choose either `REST` or `WebSocket` as the communication medium protocol.
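As a rough illustration of how the protocol choice maps to the server address scheme (`http` for `REST`, `ws` for `WebSocket`), here is a minimal sketch; `build_server_url` and the `SCHEMES` table are illustrative helpers written for this example, not part of PyMilo's API:

```python
# Illustrative only: maps the chosen communication protocol to the URL scheme
# the client would use to reach the server (http for REST, ws for WebSocket).
SCHEMES = {"REST": "http", "WEBSOCKET": "ws"}

def build_server_url(protocol: str, host: str, port: int) -> str:
    # Look up the scheme for the given protocol name (case-insensitive).
    scheme = SCHEMES[protocol.upper()]
    return f"{scheme}://{host}:{port}"
```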

#### Server
Let's assume you are on the remote server and want to import the exported JSON file and start serving your model through the `REST` protocol!
```pycon
>>> from pymilo import Import
>>> from pymilo.streaming import PymiloServer, CommunicationProtocol
>>> my_model = Import("model.json").to_model()
>>> communicator = PymiloServer(
...     model=my_model,
...     port=8000,
...     communication_protocol=CommunicationProtocol["REST"],
... ).communicator
>>> communicator.run()
```
Now `PymiloServer` runs on port `8000` and exposes a REST API to `upload`, `download`, and retrieve **attributes**, whether **data attributes** like `model._coef` or **method attributes** like `model.predict(x_test)`.
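The idea of serving both data attributes and method attributes through one interface can be sketched with a simplified, self-contained stand-in; this is not PyMilo's actual implementation, and `FakeRemoteModel`/`DelegatingClient` are invented names for illustration:

```python
# Illustrative only: a proxy that forwards both data-attribute access
# (model._coef) and method calls (model.predict) to a "remote" model,
# faked here with a plain local object.
class FakeRemoteModel:
    _coef = [1.0, 2.0]

    def predict(self, x):
        return [v * 2 for v in x]

class DelegatingClient:
    def __init__(self, remote):
        self._remote = remote

    def __getattr__(self, name):
        # Both data attributes and methods resolve through the same
        # delegation path; a real client would issue a network request here.
        return getattr(self._remote, name)

client = DelegatingClient(FakeRemoteModel())
```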

#### Client
By using `PymiloClient` you can easily connect to the remote `PymiloServer` and execute any functionality the given ML model provides. Say you want to run the `predict` function on your remote ML model and get the result:
```pycon
>>> from pymilo.streaming import PymiloClient, CommunicationProtocol
>>> pymilo_client = PymiloClient(
...     mode=PymiloClient.Mode.LOCAL,
...     server_url="SERVER_URL",
...     communication_protocol=CommunicationProtocol["REST"],
... )
>>> pymilo_client.toggle_mode(PymiloClient.Mode.DELEGATE)
>>> result = pymilo_client.predict(x_test)
```

ℹ️ If you've deployed `PymiloServer` locally (on port `8000`, for instance), then `SERVER_URL` would be `http://127.0.0.1:8000` or `ws://127.0.0.1:8000`, depending on the protocol selected for the communication medium.

You can also download the remote ML model into your local and execute functions locally on your model.
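The `LOCAL`/`DELEGATE` mode switch described above can be illustrated with a small self-contained sketch, where the "remote" model is faked locally; `SketchClient` is a hypothetical class for this example, not PyMilo's `PymiloClient`:

```python
from enum import Enum

# Illustrative only: in LOCAL mode, calls run on the locally held model;
# in DELEGATE mode they are forwarded to the remote one (faked here).
class Mode(Enum):
    LOCAL = "local"
    DELEGATE = "delegate"

class SketchClient:
    def __init__(self, local_model, remote_model):
        self._mode = Mode.LOCAL
        self._local = local_model
        self._remote = remote_model

    def toggle_mode(self, mode):
        # Switch where subsequent calls are executed.
        self._mode = mode

    def predict(self, x):
        target = self._local if self._mode is Mode.LOCAL else self._remote
        return target.predict(x)
```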
