Handle meta tensors in FX quantization (pytorch#2622)
Summary:
Pull Request resolved: pytorch#2622
X-link: pytorch/pytorch#142262

If the module being quantized contains a mix of meta tensors and tensors on an actual device, quantization should not fail. Quantization should also not fail if the new quantized module is created on a meta device. When devices include meta, the copy can be skipped: copying from meta to meta is unnecessary, and copying from another device to meta can also be skipped.

Reviewed By: emlin

Differential Revision: D66895899

fbshipit-source-id: bba8de9ddc5f86292521985dc588f9dbe14b4b4c
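A minimal sketch of the skip logic described above, not the actual diff; the helper name `copy_param_if_materialized` and the surrounding example are hypothetical. It assumes the relevant check is the destination device, since both cases the commit names (meta to meta, and another device to meta) have a meta destination:

```python
import torch

def copy_param_if_materialized(src: torch.Tensor, dst: torch.Tensor) -> None:
    # Hypothetical helper illustrating the commit's behavior: meta tensors
    # carry only shape/dtype metadata, so a copy whose destination is meta
    # (meta -> meta, or real device -> meta) materializes nothing and can
    # be skipped instead of raising an error.
    if dst.device.type == "meta":
        return
    dst.copy_(src)

# A module holding a mix of meta and real-device tensors should not make
# quantization fail; only copies with materialized destinations are performed.
meta_w = torch.empty(4, 4, device="meta")
cpu_w = torch.randn(4, 4)
copy_param_if_materialized(cpu_w, meta_w)            # skipped: dst is meta
copy_param_if_materialized(torch.ones(4, 4), cpu_w)  # performed normally
```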