-status   up-to-date   requested   last update   avg inference
-live     1            1           8s            24ms
+status   up-to-date   requested   last update   avg inference   2XX
+live     1            1           8s            24ms            12

 class count
 positive 8
@@ -133,8 +125,8 @@ The CLI sends configuration and code to the cluster every time you run `cortex d

 ## Examples of Cortex deployments

 <!-- CORTEX_VERSION_README_MINOR x5 -->
-* [Sentiment analysis](https://github.com/cortexlabs/cortex/tree/0.12/examples/tensorflow/sentiment-analyzer): deploy a BERT model for sentiment analysis.
-* [Image classification](https://github.com/cortexlabs/cortex/tree/0.12/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
-* [Search completion](https://github.com/cortexlabs/cortex/tree/0.12/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
-* [Text generation](https://github.com/cortexlabs/cortex/tree/0.12/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
-* [Iris classification](https://github.com/cortexlabs/cortex/tree/0.12/examples/sklearn/iris-classifier): deploy a scikit-learn model to classify iris flowers.
+* [Sentiment analysis](https://github.com/cortexlabs/cortex/tree/0.13/examples/tensorflow/sentiment-analyzer): deploy a BERT model for sentiment analysis.
+* [Image classification](https://github.com/cortexlabs/cortex/tree/0.13/examples/tensorflow/image-classifier): deploy an Inception model to classify images.
+* [Search completion](https://github.com/cortexlabs/cortex/tree/0.13/examples/pytorch/search-completer): deploy Facebook's RoBERTa model to complete search terms.
+* [Text generation](https://github.com/cortexlabs/cortex/tree/0.13/examples/pytorch/text-generator): deploy Hugging Face's DistilGPT2 model to generate text.
+* [Iris classification](https://github.com/cortexlabs/cortex/tree/0.13/examples/sklearn/iris-classifier): deploy a scikit-learn model to classify iris flowers.
docs/deployments/onnx.md (+1 -1)
@@ -55,7 +55,7 @@ You can log information about each request by adding a `?debug=true` parameter t

 An ONNX Predictor is a Python class that describes how to serve your ONNX model to make predictions.

 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides an `onnx_client` and a config object to initialize your implementation of the ONNX Predictor class. The `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session and helps make predictions using your model. Once your implementation of the ONNX Predictor class has been initialized, the replica is available to serve requests. Upon receiving a request, your implementation's `predict()` function is called with the JSON payload and is responsible for returning a prediction or batch of predictions. Your `predict()` function should call `onnx_client.predict()` to make an inference against your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides an `onnx_client` and a config object to initialize your implementation of the ONNX Predictor class. The `onnx_client` is an instance of [ONNXClient](https://github.com/cortexlabs/cortex/tree/0.13/pkg/workloads/cortex/lib/client/onnx.py) that manages an ONNX Runtime session and helps make predictions using your model. Once your implementation of the ONNX Predictor class has been initialized, the replica is available to serve requests. Upon receiving a request, your implementation's `predict()` function is called with the JSON payload and is responsible for returning a prediction or batch of predictions. Your `predict()` function should call `onnx_client.predict()` to make an inference against your exported ONNX model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
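The Predictor interface described in this paragraph can be sketched as follows. This is an illustrative outline, not code from the diff; the payload field name `"input"` and the pre/postprocessing steps are assumptions for the sake of the example:

```python
class ONNXPredictor:
    def __init__(self, onnx_client, config):
        # onnx_client manages the ONNX Runtime session for the exported model;
        # config carries any user-supplied settings from the API configuration
        self.client = onnx_client
        self.config = config

    def predict(self, payload):
        # preprocessing of the JSON payload would go here
        model_input = payload["input"]  # "input" is a hypothetical field name
        # delegate the actual inference to the client
        prediction = self.client.predict(model_input)
        # postprocessing of the prediction would go here
        return prediction
```

The replica calls `__init__` once at startup and `predict` once per request, so expensive setup belongs in the constructor.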
docs/deployments/tensorflow.md (+1 -1)
@@ -56,7 +56,7 @@ You can log information about each request by adding a `?debug=true` parameter t

 A TensorFlow Predictor is a Python class that describes how to serve your TensorFlow model to make predictions.

 <!-- CORTEX_VERSION_MINOR -->
-Cortex provides a `tensorflow_client` and a config object to initialize your implementation of the TensorFlow Predictor class. The `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/master/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container via gRPC to make predictions using your model. Once your implementation of the TensorFlow Predictor class has been initialized, the replica is available to serve requests. Upon receiving a request, your implementation's `predict()` function is called with the JSON payload and is responsible for returning a prediction or batch of predictions. Your `predict()` function should call `tensorflow_client.predict()` to make an inference against your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
+Cortex provides a `tensorflow_client` and a config object to initialize your implementation of the TensorFlow Predictor class. The `tensorflow_client` is an instance of [TensorFlowClient](https://github.com/cortexlabs/cortex/tree/0.13/pkg/workloads/cortex/lib/client/tensorflow.py) that manages a connection to a TensorFlow Serving container via gRPC to make predictions using your model. Once your implementation of the TensorFlow Predictor class has been initialized, the replica is available to serve requests. Upon receiving a request, your implementation's `predict()` function is called with the JSON payload and is responsible for returning a prediction or batch of predictions. Your `predict()` function should call `tensorflow_client.predict()` to make an inference against your exported TensorFlow model. Preprocessing of the JSON payload and postprocessing of predictions can be implemented in your `predict()` function as well.
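The TensorFlow variant follows the same shape; the only difference described above is that `tensorflow_client.predict()` forwards the request over gRPC to a TensorFlow Serving container rather than running an in-process ONNX Runtime session. A minimal sketch (illustrative only; passing the raw payload straight through is an assumption):

```python
class TensorFlowPredictor:
    def __init__(self, tensorflow_client, config):
        # tensorflow_client holds the gRPC connection to the
        # TensorFlow Serving container that hosts the exported model
        self.client = tensorflow_client
        self.config = config

    def predict(self, payload):
        # preprocessing could transform the payload before this call,
        # and postprocessing could reshape the result after it
        return self.client.predict(payload)
```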
docs/packaging-models/tensorflow.md (+1 -1)
@@ -1,7 +1,7 @@

 # TensorFlow

 <!-- CORTEX_VERSION_MINOR -->
-Export your trained model and upload the export directory, or a checkpoint directory containing the export directory (which is usually the case if you used `estimator.train_and_evaluate`). An example is shown below (here is the [complete example](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/sentiment-analyzer)):
+Export your trained model and upload the export directory, or a checkpoint directory containing the export directory (which is usually the case if you used `estimator.train_and_evaluate`). An example is shown below (here is the [complete example](https://github.com/cortexlabs/cortex/blob/0.13/examples/tensorflow/sentiment-analyzer)):
examples/tensorflow/image-classifier/inception.ipynb (+1 -1)
@@ -204,7 +204,7 @@
 },
 "source": [
 "<!-- CORTEX_VERSION_MINOR -->\n",
-"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/image-classifier) for how to deploy the model as an API."
+"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/0.13/examples/tensorflow/image-classifier) for how to deploy the model as an API."
examples/tensorflow/iris-classifier/tensorflow.ipynb (+1 -1)
@@ -289,7 +289,7 @@
 },
 "source": [
 "<!-- CORTEX_VERSION_MINOR -->\n",
-"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/iris-classifier) for how to deploy the model as an API."
+"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/0.13/examples/tensorflow/iris-classifier) for how to deploy the model as an API."
examples/tensorflow/sentiment-analyzer/bert.ipynb (+1 -1)
@@ -1000,7 +1000,7 @@
 },
 "source": [
 "<!-- CORTEX_VERSION_MINOR -->\n",
-"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/sentiment-analyzer) for how to deploy the model as an API."
+"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/0.13/examples/tensorflow/sentiment-analyzer) for how to deploy the model as an API."
examples/tensorflow/text-generator/gpt-2.ipynb (+2 -2)
@@ -346,7 +346,7 @@
 },
 "source": [
 "<!-- CORTEX_VERSION_MINOR x2 -->\n",
-"We also need to upload `vocab.bpe` and `encoder.json`, so that the [encoder](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/text-generator/encoder.py) in the [Predictor](https://github.com/cortexlabs/cortex/blob/master/examples/tensorflow/text-generator/predictor.py) can encode the input text before making a request to the model."
+"We also need to upload `vocab.bpe` and `encoder.json`, so that the [encoder](https://github.com/cortexlabs/cortex/blob/0.13/examples/tensorflow/text-generator/encoder.py) in the [Predictor](https://github.com/cortexlabs/cortex/blob/0.13/examples/tensorflow/text-generator/predictor.py) can encode the input text before making a request to the model."
 ]
 },
 {
@@ -376,7 +376,7 @@
 },
 "source": [
 "<!-- CORTEX_VERSION_MINOR -->\n",
-"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/text-generator) for how to deploy the model as an API."
+"That's it! See the [example on GitHub](https://github.com/cortexlabs/cortex/tree/0.13/examples/tensorflow/text-generator) for how to deploy the model as an API."
examples/xgboost/iris-classifier/xgboost.ipynb (+1 -1)
@@ -237,7 +237,7 @@
 },
 "source": [
 "<!-- CORTEX_VERSION_MINOR -->\n",
-"That's it! See the [example](https://github.com/cortexlabs/cortex/tree/master/examples/xgboost/iris-classifier) for how to deploy the model as an API."
+"That's it! See the [example](https://github.com/cortexlabs/cortex/tree/0.13/examples/xgboost/iris-classifier) for how to deploy the model as an API."