This server was generated by the swagger-codegen project. By using the OpenAPI-Spec from a remote server, you can easily generate a server stub.
This example uses the Connexion library on top of Flask.
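Connexion routes each HTTP operation in the OpenAPI spec to the Python function named by its operationId. A hypothetical fragment of such a spec (the path, module path, and response shown here are illustrative, not taken from this project):

```yaml
paths:
  /predict:
    post:
      # Connexion imports this dotted module path and calls the
      # function on every request to POST /predict
      operationId: swagger_server.controllers.default_controller.predict
      responses:
        200:
          description: prediction result
```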
Requirements:
- Python 3.7
- Docker
To build the microservice image:
docker build -t wml-service .
To run the microservice, you will need to provide the following parameters:
- WML_API_KEY : your Watson Machine Learning service API key
- WML_URL : your Watson Machine Learning service URL
- WML_SPACE_ID : your Watson Machine Learning service space ID
docker run \
-p 8080:8080 \
-e WML_API_KEY=<WML_API_KEY> \
-e WML_URL=<WML_URL> \
-e WML_SPACE_ID=<WML_SPACE_ID> \
--name wml-service \
wml-service
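At startup the service needs the three variables passed via -e above; a minimal sketch of that lookup in Python (the function name is our own, not part of the generated code):

```python
import os

REQUIRED_VARS = ("WML_API_KEY", "WML_URL", "WML_SPACE_ID")

def load_wml_config(env=os.environ):
    """Return the WML settings as a dict, failing fast if any are missing."""
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))
    return {name: env[name] for name in REQUIRED_VARS}
```

Failing fast like this surfaces a misconfigured container at startup instead of on the first prediction request.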
If you want to run the microservice on another port:
docker run \
-p <PORT>:8080 \
-e WML_API_KEY=<WML_API_KEY> \
-e WML_URL=<WML_URL> \
-e WML_SPACE_ID=<WML_SPACE_ID> \
--name wml-service \
wml-service
To check that you have a running container:
docker ps -f name=wml-service
Your predictive service is available at http://localhost:8080/.
Swagger UI documentation is available at http://localhost:8080/ui
If you chose another port, they are available at http://localhost:<PORT>/
and http://localhost:<PORT>/ui respectively.
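The two endpoints differ only in the ui suffix; a small helper to build both URLs for whichever port you mapped (the helper name is ours, purely illustrative):

```python
def service_urls(port=8080, host="localhost"):
    """Return (base URL, Swagger UI URL) for the mapped host port."""
    base = f"http://{host}:{port}/"
    return base, base + "ui"
```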
To stop the container:
docker stop wml-service
Alternatively, to run the server locally without Docker, execute the following from the project root:
pip3 install -r requirements.txt
python3 setup.py install
python3 -m swagger_server
To launch unit tests, use tox:
pip3 install tox
tox
To launch tests on your WML instance:
First, make sure you have a working and configured environment with the following environment variables set:
- WML_API_KEY : your Watson Machine Learning service API key
- WML_URL : your Watson Machine Learning service URL
- WML_SPACE_ID : your Watson Machine Learning service space ID
Then, run tests using tox:
pip3 install tox
tox -- swagger_server/test