Container build not installing/finding dependencies contained in model.tar file #167
Hi @taylorsweet, it looks like your archive isn't laid out the way the serving container expects. Therefore, in your case, the directory structure should look like this:
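(The structure itself was stripped from this page; a sketch of the layout the TensorFlow Serving container expects, with code/ at the top level of model.tar.gz next to the numbered model directory, file names illustrative:)

model.tar.gz
+-- 1
|   +-- saved_model.pb
|   +-- variables
|   |   +-- variables.data-00000-of-00001
|   |   +-- variables.index
+-- code
    +-- inference.py
    +-- requirements.txt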
Thanks Chuyang! The directory that I'm tarring now looks like this, but I'm getting the same error.
@taylorsweet @chuyang-deng I have a similar issue! I previously managed to deploy the model, invoke it, and then do the post-processing in a Jupyter notebook with the following code:
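(The code block was stripped here; presumably it invoked the endpoint and post-processed the response with numpy, something like this entirely illustrative reconstruction:)

import numpy as np

# Hypothetical reconstruction: invoke the deployed endpoint, then
# post-process the response locally. `predictor` is assumed to be the
# object returned by model.deploy(); the payload shape is made up.
payload = [[0.1, 0.2, 0.3]]
result = predictor.predict(payload)
predictions = np.array(result['predictions'])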
Now I want to do the post-processing by providing an inference.py file, so I followed the docs here: and used this snippet:
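(The snippet was stripped as well; presumably it resembled the example in the SDK docs, something like the sketch below, where model_data, role, and framework_version are placeholders for the poster's actual values:)

from sagemaker.tensorflow.serving import Model

# entry_point names the post-processing script; the other arguments are
# placeholders standing in for the poster's actual values.
model = Model(entry_point='inference.py',
              model_data='s3://my-bucket/model/model.tar.gz',
              role=role,
              framework_version='2.3')
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m5.large')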
The dependencies I added:
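(The list itself didn't survive on this page; given the "No module named 'numpy'" error quoted below, it presumably contained at least:)

numpy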
My problem is that the invocation doesn't complete, and when I checked CloudWatch I found the following:
and
which led me to believe that my inference.py was used by the container, but not the requirements.txt file I provided, hence "No module named 'numpy'"! I've also tarred my model file as follows:
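(The exact command was stripped; a typical invocation, run from the directory containing the model files, would be something like this, with directory names illustrative:)

tar -czvf model.tar.gz 1 code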
and when I use the same deploy call, it does deploy my model successfully but completely ignores my code/ directory.
I'm facing exactly the same issue; were you able to resolve it?
Hi, I am having the same problem. Can someone help, please?
Issue: inference.py dependencies aren't installed in the SageMaker TensorFlow Serving container.
Resulting error: ModuleNotFoundError: No module named 'nltk'
Versioning details
SageMaker env: conda_python3
TensorFlow version: 2.3.0
TensorFlow Serving container versions: 2.0 (also tried 2.1, 2.2, 2.3)
Directory structure containing model & dependencies (prior to tarring)
+-- 1
|   +-- variables
|   |   +-- variables.data-00000-of-00001
|   |   +-- variables.index
|   +-- saved_model.pb
|   +-- code
|   |   +-- inference.py
|   |   +-- requirements.txt
|   |   +-- word_vectors.txt
|   |   +-- bigram.pkl
I have also tried deploying from a separate directory that has a code/lib/external_module structure (sketched below), where lib contains the nltk module itself rather than a requirements file. Neither approach works; both return the same ModuleNotFoundError.
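(A sketch of that alternative layout, assuming the code/lib convention for bundling Python modules directly; placement and contents are illustrative, since the original post doesn't show the full tree:)

+-- 1
|   +-- saved_model.pb
|   +-- variables
+-- code
|   +-- inference.py
|   +-- lib
|   |   +-- nltk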
Deployment from SageMaker notebook using the Python SDK:
tensorflow_serving_model = Model(model_data = model_data
,role=role
,framework_version='2.0'
,entry_point='inference.py') #running without the entry point works as expected
tensorflow_serving_model.deploy(initial_instance_count=1,
instance_type='ml.c4.xlarge')
requirements.txt
nltk==3.4.5
more_itertools==8.2.0
gensim==3.8.3
Note: there are no issues with the model file. When I instantiate the Model without specifying the inference.py entry point, my model runs successfully and I get predictions back after passing an ndarray.
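(For reference, a minimal sketch of the invocation that note describes, assuming predictor is the object returned by deploy(); the input shape is illustrative, not from the original post:)

import numpy as np

# Hypothetical smoke test against the deployed endpoint; the TensorFlow
# Serving REST response is a dict with a 'predictions' key.
payload = np.random.rand(1, 10).tolist()
result = predictor.predict(payload)
print(result['predictions'])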
Thoughts on how to get nltk (and other dependencies) loaded on the serving container? Thank you!!