@@ -101,7 +101,7 @@ You can skip dependencies that are [pre-installed](../../../docs/deployments/pyt
 ## Configure your API
 
-Create a `cortex.yaml` file and add the configuration below. An `api` provides a runtime for inference and makes your `predictor.py` implementation available as a web service that can serve real-time predictions:
+Create a `cortex.yaml` file and add the configuration below, replacing `cortex-examples` with your S3 bucket. An `api` provides a runtime for inference and makes your `predictor.py` implementation available as a web service that can serve real-time predictions:
 
 ```yaml
 # cortex.yaml
@@ -133,7 +133,7 @@ Track the status of your api using `cortex get`:
 $ cortex get iris-classifier --watch
 
 status   up-to-date   requested   last update   avg inference   2XX
@@ -194,7 +194,7 @@ After making more predictions, your `cortex get` command will show information a
 $ cortex get iris-classifier --watch
 
 status   up-to-date   requested   last update   avg inference   2XX
-live     1            1           10m           28ms            14
+live     1            1           1m            1.1 ms          14
 
 class       count
 setosa      8
@@ -209,6 +209,8 @@ virginica 4
 This model is fairly small but larger models may require more compute resources. You can configure this in your `cortex.yaml`:
 
 ```yaml
+# cortex.yaml
+
 - name: iris-classifier
   predictor:
     type: python
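The hunk above introduces the compute-resources note but the extraction cuts off before the resource fields. As a hedged sketch of what such a section might contain, with the `compute` block and its values being assumptions rather than content from this diff:

```yaml
# cortex.yaml -- the compute block below is an illustrative assumption
- name: iris-classifier
  predictor:
    type: python
    path: predictor.py
  compute:
    cpu: 1    # vCPUs reserved per replica
    mem: 2G   # memory reserved per replica
```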
@@ -237,7 +239,7 @@ Run `cortex get` again:
 $ cortex get iris-classifier --watch
 
 status   up-to-date   requested   last update   avg inference   2XX
-live     1            1           10m           24ms            14
+live     1            1           1m            1.1 ms          14
 
 class       count
 setosa      8
@@ -252,6 +254,8 @@ virginica 4
 If you trained another model and want to A/B test it with your previous model, simply add another `api` to your configuration and specify the new model:
 
 ```yaml
+# cortex.yaml
+
 - name: iris-classifier
  predictor:
    type: python
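The hunk above shows only the first `api` entry before the extraction cuts off. A sketch of the two-entry configuration the sentence describes might look like the following; the second entry's `path` is a hypothetical illustration, while `another-iris-classifier` matches the name shown in the `cortex get` output later in this diff:

```yaml
# cortex.yaml -- illustrative sketch of two apis for A/B testing
- name: iris-classifier
  predictor:
    type: python
    path: predictor.py

- name: another-iris-classifier
  predictor:
    type: python
    path: another_predictor.py   # hypothetical predictor for the new model
```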
@@ -295,7 +299,7 @@ $ cortex get --watch
 api                       status   up-to-date   requested   last update
 iris-classifier           live     1            1           5m
-another-iris-classifier   live     1            1           8s
+another-iris-classifier   live     1            1           1m
 ```
 
 ## Add a batch API
@@ -335,6 +339,8 @@ class PythonPredictor:
 Next, add the `api` to `cortex.yaml`:
 
 ```yaml
+# cortex.yaml
+
 - name: iris-classifier
  predictor:
    type: python
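The hunk header above references a `PythonPredictor` class, whose body is not shown in this diff. As a rough, self-contained sketch of the interface Cortex's Python predictor type expects (an `__init__(self, config)` constructor and a `predict(self, payload)` method), with hard-coded petal-length thresholds standing in for a real trained model:

```python
# predictor.py -- minimal sketch of the Python predictor interface.
# The class labels and thresholds are illustrative, not from the tutorial.

LABELS = ["setosa", "versicolor", "virginica"]

class PythonPredictor:
    def __init__(self, config):
        # `config` holds values from the api's config section in cortex.yaml;
        # a real predictor would download and load its model here.
        self.config = config

    def predict(self, payload):
        # `payload` is the parsed JSON request body; a real predictor would
        # run model inference. Here we classify by petal length alone.
        petal_length = payload["petal_length"]
        if petal_length < 2.5:
            return LABELS[0]
        elif petal_length < 5.0:
            return LABELS[1]
        return LABELS[2]
```

Cortex instantiates the class once per replica and calls `predict` for each request, so expensive setup belongs in `__init__`.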
@@ -391,9 +397,9 @@ Since a new file was added to the directory, and all files in the directory cont