CoreML quantizer does not work if model has nn.Embedding #9969

Open
@metascroy

Description

🚀 The feature, motivation and pitch

Lowering to CoreML fails when the model contains an embedding op (nn.Embedding) and the graph has been prepared with the CoreML quantizer:

# Imports assumed for this snippet (not shown in the original report):
import torch
from coremltools.optimize.torch.quantization import LinearQuantizerConfig, QuantizationScheme
from executorch.backends.apple.coreml.quantizer import CoreMLQuantizer
from torch.ao.quantization.quantize_pt2e import prepare_pt2e

# 8-bit symmetric quantization: quint8 activations, per-channel qint8 weights
quantization_config = LinearQuantizerConfig.from_dict(
    {
        "global_config": {
            "quantization_scheme": QuantizationScheme.symmetric,
            "activation_dtype": torch.quint8,
            "weight_dtype": torch.qint8,
            "weight_per_channel": True,
        }
    }
)
quantizer = CoreMLQuantizer(quantization_config)
# pre_autograd_aten_dialect: the exported (pre-autograd ATen dialect) model
prepared_graph = prepare_pt2e(pre_autograd_aten_dialect, quantizer)
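
For context, a minimal sketch of a model that hits this path might look like the following. This is not from the original report: ToyEmbeddingModel and the torch.export.export_for_training capture step are illustrative assumptions, and the snippet reuses the quantizer built above.

import torch
import torch.nn as nn
from torch.export import export_for_training

# Hypothetical toy module containing nn.Embedding, for illustration only
class ToyEmbeddingModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(num_embeddings=100, embedding_dim=16)
        self.linear = nn.Linear(16, 4)

    def forward(self, ids):
        return self.linear(self.emb(ids))

example_inputs = (torch.randint(0, 100, (1, 8)),)
# Capture the pre-autograd ATen-dialect graph, then prepare it with the CoreML quantizer
pre_autograd_aten_dialect = export_for_training(ToyEmbeddingModel(), example_inputs).module()
prepared_graph = prepare_pt2e(pre_autograd_aten_dialect, quantizer)

With a model like this, the failure then surfaces during the subsequent lowering to CoreML, as described above.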

Alternatives

No response

Additional context

No response

RFC (Optional)

No response

Metadata

Labels

actionable: Items in the backlog waiting for an appropriate impl/fix
triaged: This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

