8 files changed: +21 −203

**deploy-model-with-fastapi**

````diff
-# getting-started-examples
-Examples to get started with using TrueFoundry
+# Deploy Scikit-Learn Iris flower classification model with FastAPI
 
-Deployment
 ---
-This example runs a simple iris app for inferring using a iris classifier.
-Mainly this example shows how to deploy to TrueFoundry using a Pythonfile and TrueFoundry Python SDK.
 
-## Run Locally
+### Install requirements
 
 1. Install requirements
 
 ```shell
 python -m pip install -r requirements.txt
 ```
 
-2. Start the iris app
+### Start the server
 
 ```shell
 export MODEL_DIR="$(pwd)"
 gunicorn -k uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000 server:app
 ```
 
-## Deploy with TrueFoundry
-
-1. Install `truefoundry`
-
-```shell
-python -m pip install -U "truefoundry>=0.10.0,<0.11.0"
-```
-
-2. Login
-
-```shell
-tfy login --host "<Host name of TrueFoundry UI. e.g. https://company.truefoundry.cloud>"
-```
-
-3. Deploy!
-
-> Please refer to following docs
-> - [Getting workspace FQN](https://docs.truefoundry.com/docs/key-concepts#get-workspace-fqn)
-> - [Get host and path for deploying applications](https://docs.truefoundry.com/docs/define-ports-and-domains#identifying-available-domains)
+### Example Inference Call
 
 ```shell
-python deploy.py --name iris --workspace-fqn <Workspace FQN> --host <Ingress Host for the cluster> --path <optional path>
+curl -X 'POST' \
+  'http://0.0.0.0:8000/predict?sepal_length=1&sepal_width=1&petal_length=1&petal_width=1' \
+  -H 'accept: application/json'
 ```
````
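The new inference call can also be exercised from Python. Below is a minimal sketch using only the standard library; the `/predict` route and the four query parameter names come from the curl example, while the placeholder values and the commented-out request are assumptions:

```python
from urllib.parse import urlencode, urlunsplit

# Measurements for one flower; the parameter names mirror the curl
# example, the values are placeholders.
params = {
    "sepal_length": 1,
    "sepal_width": 1,
    "petal_length": 1,
    "petal_width": 1,
}

# Build the same URL the curl command targets.
url = urlunsplit(("http", "0.0.0.0:8000", "/predict", urlencode(params), ""))
print(url)

# With the gunicorn server from "Start the server" running, the actual
# POST would look like this (commented out so the snippet runs without
# a live server):
# import json, urllib.request
# req = urllib.request.Request(url, method="POST",
#                              headers={"accept": "application/json"})
# print(json.load(urllib.request.urlopen(req)))
```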
*This file was deleted.*

```diff
 model/
 logs/
+model_store/
```
New file:

```diff
+logs/
```
```diff
@@ -33,10 +33,6 @@ WORKDIR /home/model-server
 COPY requirements.txt requirements.txt
 RUN python -m pip install --no-cache-dir -r requirements.txt
 
-# Here we are copying the models to the image. But for large models it is recommended to cache the models in a volume.
-# See: https://docs.truefoundry.com/docs/download-and-cache-models#download-models-and-artifacts
-COPY model_store/ model_store/
-
 COPY config.properties config.properties
 
 ENTRYPOINT ["multi-model-server"]
```
````diff
 # Deploy MNIST Model with MMS
+
 ---
 
+### Install requirements
+
+```bash
+python -m pip install -r requirements.txt
+```
+
 ### Package the model
 
 ```bash
 model-archiver --model-name mnist --model-path model/ --handler mnist_handler.py:handle --export-path model_store/ --runtime python --force
 ```
 
-### Deploy
+### Start the server
 
-```shell
-python deploy.py --workspace-fqn ... --host ... --path ...
+```bash
+export MODEL_DIR="$(pwd)/model_store"
+multi-model-server --foreground --model-store $MODEL_DIR --start --mms-config config.properties
 ```
 
 ### Example Inference Call
 
 ```bash
-curl -X POST -H "Content-Type: application/json" https://<endpoint>/predictions/mnist -T 0.png
+curl -X POST -H "Content-Type: application/json" http://0.0.0.0:8080/predictions/mnist -T 0.png
 ```
````
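`model-archiver` points MMS at `mnist_handler.py:handle`. The repo's actual handler is not shown in this diff; below is a hedged sketch of the general multi-model-server handler shape such a module follows, with a placeholder in place of the real MNIST model:

```python
# Hypothetical sketch of an MMS-style handler module; the real
# mnist_handler.py in the repo may differ. multi-model-server invokes
# handle(data, context) for every request batch, and once with
# data=None at model load time.

def handle(data, context):
    if data is None:
        # Initialization call: load the model here, return nothing.
        return None

    predictions = []
    for request in data:
        # Each request dict carries the raw payload under "body" or "data".
        payload = request.get("body") or request.get("data") or b""
        # Placeholder "inference": report payload size instead of running
        # an actual MNIST model.
        predictions.append({"input_bytes": len(payload)})
    # MMS expects one response entry per request in the batch.
    return predictions
```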
```diff
@@ -11,5 +11,4 @@ load_models=ALL
 management_address=http://0.0.0.0:8081
 max_request_size=6553500
 max_response_size=6553500
-model_store=/home/model-server/model_store
 number_of_netty_threads=8
```
*This file was deleted.*