I'd like to quantize models with large input sizes, such as 1x1024x1024x3.
However, when I follow the method in RyzenAI_quant_tutorial/onnx_example/onnx_model_ptq, I run into out-of-memory errors.
As the documentation states, calibration needs around 100~1000 images,
but a single image consumes ~3 GB during calibration. Is there any way to quantize models with such large input resolutions?
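Since the OOM tends to occur when the whole calibration set is held in memory at once, one possible workaround is to stream images one at a time. Below is a minimal sketch of a data reader following the `get_next()` interface used by onnxruntime-style quantizers; the directory layout, input name, and the `_load` stub are hypothetical placeholders for your real preprocessing:

```python
import os
import numpy as np

class StreamingCalibReader:
    """Yields one preprocessed 1x1024x1024x3 batch per get_next() call,
    so only a single image is resident in memory during calibration."""

    def __init__(self, image_dir, input_name="input"):
        # image_dir and input_name are hypothetical; adapt to your model
        self.image_dir = image_dir
        self.input_name = input_name
        self._paths = iter(sorted(os.listdir(image_dir)))

    def _load(self, path):
        # Stand-in for real decode/resize/normalize of one image
        return np.zeros((1, 1024, 1024, 3), dtype=np.float32)

    def get_next(self):
        path = next(self._paths, None)
        if path is None:
            return None  # signals the quantizer that calibration data is exhausted
        return {self.input_name: self._load(os.path.join(self.image_dir, path))}
```

A reader like this could be passed wherever the tutorial's own data reader goes; calibrating on a smaller subset of images (toward the low end of the 100~1000 range) would reduce peak memory further.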
PS: If the model is transformer-based, is there any recommended preprocessing step before quantization?
Thanks