This repository has been archived by the owner on May 23, 2024. It is now read-only.

how to handle application/x-image ? #138

Open
Adblu opened this issue Jun 8, 2020 · 9 comments

Adblu commented Jun 8, 2020

I'm getting:

"2020-06-05T11:05:57.016:[sagemaker logs]: data/Third_Set_Case_12774_Bilde_18076231_17.jpg: {"error": "Unsupported Media Type: application/x-image"}" 

It's documented in the official documentation:

tensorflow_serving_transformer.transform(input_path, content_type='application/x-image')

How do I get rid of this error?

nadiaya (Contributor) commented Jun 8, 2020

It is not a content type that is supported by default.
Here is an example of how to write a pre- and post-processing script so that it works: https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker_batch_transform/tensorflow_open-images_jpg/tensorflow-serving-jpg-python-sdk.ipynb
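
For reference, a minimal inference.py following the pattern in that notebook might look like the sketch below. The base64 "b64" wrapping assumes your model's serving signature accepts encoded image bytes, so adapt the pre-processing to your own model.

import base64
import json


def input_handler(data, context):
    """Convert a raw image payload into a TensorFlow Serving JSON request."""
    if context.request_content_type == "application/x-image":
        # assumption: the serving signature accepts base64-encoded image bytes
        payload = base64.b64encode(data.read()).decode("utf-8")
        return json.dumps({"instances": [{"b64": payload}]})
    raise ValueError('{"error": "unsupported content type %s"}'
                     % (context.request_content_type or "unknown"))


def output_handler(data, context):
    """Return the TensorFlow Serving response to the client as-is."""
    if data.status_code != 200:
        raise ValueError(data.content.decode("utf-8"))
    return data.content, context.accept_header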

Could you share a link to the official documentation you are referring to, so we can clarify it?

Adblu (Author) commented Jun 9, 2020

It is here: link 1, link 2

laurenyu (Contributor) commented

Sorry for the delayed response here. In the links you provided, I'm seeing "application/x-image" used only as an example.

Documentation for supported content types: https://github.com/aws/sagemaker-tensorflow-serving-container/blob/master/README.md#prepost-processing

kafka399 commented

@laurenyu I would suggest adding an explicit note that 'application/x-image' is not supported by default.

laurenyu (Contributor) commented

@kafka399 thanks for the suggestion! I'm no longer a maintainer of this repository, so I probably won't get to this anytime soon - feel free to submit a PR yourself if you'd like.

cc @ChuyangDeng

asmaier commented Dec 22, 2021

What about the example for inference.py given here: https://docs.aws.amazon.com/sagemaker/latest/dg/neo-deployment-hosting-services-prerequisites.html

It says:

The SageMaker client sends the image file as an application/x-image content type to the input_handler function, where it is converted to JSON.

Is this obsolete?

asmaier commented Dec 22, 2021

Here is another example showing that one can send application/x-image to a SageMaker endpoint:

https://sagemaker-examples.readthedocs.io/en/latest/introduction_to_amazon_algorithms/imageclassification_caltech/Image-classification-fulltraining.html#Evaluation

Given all these examples, people might well expect this to work, but it doesn't.

rbavery commented Jul 21, 2022

Can we get clarity on this? If this is not supported, why are there examples in the AWS docs that use application/x-image? It seems like this should be supported by default, since the easiest thing to do is send a PUT request with curl http://localhost:8080/invocations/ -T img.jpg

ShoSoejima commented Feb 24, 2023

In a transform job, you can use application/x-image as the content type.
Once you've saved your model and created your model.tar.gz (which doesn't have to include inference.py), you can use your own custom input/output handlers. Please take a look at the following code.

(I verified that the code works with boto3==1.26.31 and sagemaker==2.124.0.)

import os

import boto3
import sagemaker
from sagemaker.tensorflow import TensorFlowModel

sess = boto3.session.Session()
sagemaker_session = sagemaker.Session(sess)

role = "ARN_OF_YOUR_SAGEMAKER_ROLE"
bucket = "YOUR_BUCKET_NAME"
prefix = "YOUR_OBJECT_PREFIX"
s3_path = f"s3://{bucket}/{prefix}"
input_path = os.path.join(s3_path, "input")
output_path = os.path.join(s3_path, "output")

model_data = sagemaker_session.upload_data(
    "path/to/your/model.tar.gz",
    bucket,
    os.path.join(prefix, "models"),
)
tensorflow_serving_model = TensorFlowModel(
    model_data=model_data,
    role=role,
    framework_version="2.8.0",
    sagemaker_session=sagemaker_session,
    entry_point="path/to/your/inference.py",  # specify your inference.py in your local file system
    name="YOUR_MODEL_NAME",
)
transformer = tensorflow_serving_model.transformer(
    instance_count=1,
    instance_type="ml.m5.xlarge",
    max_concurrent_transforms=16,  # optional
    max_payload=4,  # optional
    output_path=output_path,
)
transformer.transform(
    data=input_path,
    data_type="S3Prefix",
    job_name="YOUR_JOB_NAME",
    content_type="application/x-image",
    wait=False,
)

You can include your inference.py in your model.tar.gz. In that case, you need to place the module under a directory named code, and you don't need to specify entry_point or source_dir in the constructor of TensorFlowModel.
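
For illustration, here is a rough sketch of how such an archive could be assembled with Python's tarfile module; the local paths ("export/1", "my_code/") are placeholders, so substitute your own layout.

import tarfile

with tarfile.open("model.tar.gz", "w:gz") as tar:
    # the SavedModel export goes under a numeric version directory, e.g. "1/"
    tar.add("export/1", arcname="1")
    # custom handlers live under a top-level "code/" directory
    tar.add("my_code/inference.py", arcname="code/inference.py")
    # optional: extra pip dependencies for the handlers
    tar.add("my_code/requirements.txt", arcname="code/requirements.txt")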

Please remember that your custom handlers can check for this content type as follows.

def input_handler(data, context):
    if context.request_content_type == "application/x-image":
        ...

def output_handler(data, context):
    ...

Maybe I'm too late, but I hope this will help you.
