Executorch initial support #28425

Merged

Changes from 10 commits (68 commits total)
af9d22d
Support for executorch OV backend
ynimmaga Nov 13, 2024
41109cb
Merge branch 'openvinotoolkit:master' into executorch_ov_backend
ynimmaga Dec 19, 2024
f393338
Added missing executorch copy ops
ynimmaga Dec 19, 2024
01e7517
ExecuTorchPythonDecoder added
cavusmustafa Jan 14, 2025
d94df45
Merge branch 'master' into executorch_initial_support
cavusmustafa Jan 14, 2025
b0e0e29
Fix op conversion issues due to merging master
cavusmustafa Jan 14, 2025
c369635
Use argument to enable executorch instead of setting option
cavusmustafa Jan 14, 2025
fa160e6
Merge branch 'master' into executorch_initial_support
cavusmustafa Jan 14, 2025
291379d
Temporary solution for ExecuTorch and GPTQ conflict
cavusmustafa Jan 16, 2025
d2de31f
Merge branch 'master' into executorch_initial_support
cavusmustafa Jan 16, 2025
3f56967
Update src/bindings/python/src/openvino/frontend/pytorch/torchdynamo/…
cavusmustafa Jan 24, 2025
8843ce3
Disable model caching for executorch
cavusmustafa Jan 24, 2025
0973735
Added tests for new _copy ops
cavusmustafa Jan 25, 2025
8e70bc8
Merge branch 'master' into executorch_initial_support
cavusmustafa Jan 25, 2025
db5feec
Constructor updates for TorchFX and ExecuTorch decoders
cavusmustafa Feb 5, 2025
6de5476
Merge branch 'master' into executorch_initial_support
cavusmustafa Feb 5, 2025
ee71caf
Merged ExecutorchPythonDecoder into TorchFXPythonDecoder
cavusmustafa Feb 6, 2025
f978196
New tests enabled only for torch fx
cavusmustafa Feb 7, 2025
0bbfb3e
executorch test workflow file added
cavusmustafa Feb 7, 2025
586e9a0
executorch test workflow updated
cavusmustafa Feb 7, 2025
1971705
executorch test workflow updated
cavusmustafa Feb 7, 2025
39315f6
executorch test workflow updated
cavusmustafa Feb 7, 2025
d3e5ef9
executorch test workflow updated
cavusmustafa Feb 7, 2025
719b027
executorch test workflow updated
cavusmustafa Feb 7, 2025
497e286
executorch test workflow updated
cavusmustafa Feb 7, 2025
681ba8f
executorch test workflow updated
cavusmustafa Feb 7, 2025
5f0cc92
executorch test workflow updated
cavusmustafa Feb 7, 2025
040a858
Merge pull request #10 from cavusmustafa/executorch_tests
cavusmustafa Feb 11, 2025
89df4f4
updated executorch test workflow
cavusmustafa Feb 11, 2025
75d6451
updated executorch test workflow
cavusmustafa Feb 11, 2025
438bb6d
updated executorch test workflow
cavusmustafa Feb 11, 2025
66b4ee5
Merge pull request #11 from cavusmustafa/executorch_tests
cavusmustafa Feb 11, 2025
4b4d686
updated executorch test workflow
cavusmustafa Feb 11, 2025
032d03e
updated executorch test workflow
cavusmustafa Feb 11, 2025
5252c73
Update src/bindings/python/src/openvino/frontend/pytorch/fx_decoder.py
cavusmustafa Feb 11, 2025
709099f
Update src/bindings/python/src/openvino/frontend/pytorch/torchdynamo/…
cavusmustafa Feb 11, 2025
151cf2e
Updated executorch test workflow
cavusmustafa Feb 11, 2025
e9c1979
Update src/bindings/python/src/openvino/frontend/pytorch/fx_decoder.py
cavusmustafa Feb 11, 2025
539cb90
Updated executorch test workflow
cavusmustafa Feb 11, 2025
ef2ca00
Updated executorch test workflow
cavusmustafa Feb 11, 2025
b974ba9
Updated executorch test workflow
cavusmustafa Feb 11, 2025
3b425db
Updated executorch test workflow
cavusmustafa Feb 11, 2025
c31dc3e
Updated executorch test workflow
cavusmustafa Feb 11, 2025
b60e4e1
Updated executorch test workflow
cavusmustafa Feb 11, 2025
2296d0c
Updated executorch test workflow
cavusmustafa Feb 12, 2025
fd96d44
Updated executorch test workflow
cavusmustafa Feb 12, 2025
dbc46ba
Updated executorch test workflow
cavusmustafa Feb 12, 2025
217c005
Updated executorch test workflow
cavusmustafa Feb 12, 2025
b8a3184
Updated executorch test workflow
cavusmustafa Feb 12, 2025
7fcffc0
Updated executorch test workflow
cavusmustafa Feb 12, 2025
aaa5e29
Updated executorch test workflow
cavusmustafa Feb 12, 2025
b91f2d2
Updated executorch test workflow
cavusmustafa Feb 12, 2025
8cb1752
Updated executorch workflow
cavusmustafa Feb 12, 2025
22df91b
Updated executorch workflow
cavusmustafa Feb 12, 2025
773827f
Updated executorch workflow
cavusmustafa Feb 12, 2025
3ef968b
Updated executorch workflow
cavusmustafa Feb 12, 2025
491cf03
Updated executorch workflow
cavusmustafa Feb 12, 2025
9588ec5
Updated executorch workflow
cavusmustafa Feb 13, 2025
478f91b
Updated executorch workflow
cavusmustafa Feb 13, 2025
97d1e0f
Updated executorch workflow
cavusmustafa Feb 13, 2025
378e7dc
Updated executorch workflow
cavusmustafa Feb 13, 2025
c58cbbd
Updated executorch workflow
cavusmustafa Feb 13, 2025
fd6bb06
Updated executorch workflow
cavusmustafa Feb 13, 2025
fc38083
Updated executorch workflow
cavusmustafa Feb 13, 2025
9189a3b
Updated executorch workflow
cavusmustafa Feb 13, 2025
f0ea5c2
Updated executorch workflow
cavusmustafa Feb 13, 2025
db9c0ab
Remove executorch test workflow
cavusmustafa Feb 13, 2025
e7bbf25
Remove env setting used for debugging
cavusmustafa Feb 13, 2025
101 changes: 101 additions & 0 deletions src/bindings/python/src/openvino/frontend/pytorch/fx_decoder.py
@@ -453,3 +453,104 @@ def mark_node(self, node):
        node.set_friendly_name(name)
        super().mark_node(node)
        return node


class ExecuTorchPythonDecoder(TorchFXPythonDecoder):

    # TODO: The constructor of ExecuTorchPythonDecoder is mostly similar to
    # the constructor of TorchFXPythonDecoder. Update this to utilize a
    # common implementation.
    def __init__(self, pt_module, fx_gm=None, nodes=None,
                 mark_node_callback=None, input_shapes=[], input_types=[], dynamic_shapes=False):
        super().__init__(mark_node_callback)
        self.pt_module = pt_module
        self.fx_gm = fx_gm if fx_gm is not None else pt_module
        self.input_types = [OVAny(pt_to_ov_type_map[str(t)])
                            for t in input_types]
        self.input_shapes = input_shapes

        self._input_signature = []
        self._example_input = None

        if issubclass(type(pt_module), torch.fx.graph_module.GraphModule):
            self._input_is_list = None
            self._nodes = list(pt_module.graph.nodes)
            found_types = []
            found_shapes = []
            for i, value in enumerate(self._nodes):
                if value.op == 'placeholder':
                    self._inputs.append(i)
                    self._input_signature.append(value.name)
                    if hasattr(value, "meta") and ('tensor_meta' in value.meta.keys()) and value.meta['tensor_meta']:
                        found_shapes.append(value.meta['tensor_meta'].shape)
                        found_types.append(
                            OVAny(pt_to_ov_type_map[str(value.meta['tensor_meta'].dtype)]))
                    else:
                        if hasattr(value, "meta") and ('val' in value.meta.keys()):
                            found_shapes.append(value.meta["val"].shape)
                            found_types.append(None)
                        else:
                            found_shapes.append(None)
                            found_types.append(None)
                elif value.op == 'output':
                    # Instead of putting output index, refer to its target
                    uargs = self.unpack_containers(value.args)
                    self._outputs = [(arg[0], self._nodes.index(arg[1]))
                                     for arg in uargs if arg[1] is not None]
            # Symbolic dims (SymInt) are marked dynamic (-1); with
            # dynamic_shapes set, every dim becomes dynamic.
            for idx, shape in enumerate(found_shapes):
                if shape is not None:
                    new_shape = []
                    for dim in shape:
                        if (dynamic_shapes or type(dim).__name__ == "SymInt"):
                            new_shape.append(-1)
                        else:
                            new_shape.append(dim)
                    found_shapes[idx] = torch.Size(new_shape)

            if not input_shapes or len(input_shapes) == 0:
                self.input_shapes = found_shapes
            if not input_types or len(input_types) == 0:
                self.input_types = found_types

            if hasattr(pt_module, "forward"):
                input_params = inspect.signature(pt_module.forward).parameters
                self._input_signature = list(input_params)

        elif issubclass(type(pt_module), torch.fx.Node):
            self._nodes = nodes  # passed from outer context

            # FIXME: Quadratic complexity nodes*nodes considering the outer loop over all nodes
            self._outputs = [("", self._nodes.index(pt_module))]

            self.input_types = []
            for arg in pt_module.args:
                if isinstance(arg, torch.fx.Node):
                    self._inputs.append(self._nodes.index(arg))
                else:
                    # Not a node, consider it inlined
                    self._inputs.append(InlinedInput(arg))
                self.input_types.append(
                    BaseFXDecoder.get_type_for_value(arg))

    def visit_subgraph(self, node_visitor):
        # make sure topological order is satisfied
        for node in self._nodes:
            if node.op == 'placeholder' or node.op == 'output':
                continue  # skipping non-operational nodes
            if node.op == 'call_function' and str(node.target) in ["aten._assert_async.msg"]:
                continue
            decoder = ExecuTorchPythonDecoder(
                node, self.fx_gm, self._nodes, mark_node_callback=self.mark_node_callback)
            self.m_decoders.append(decoder)
            node_visitor(decoder)

    def get_op_type(self):
        if "getitem" in str(self.pt_module.target):
            return str(self.pt_module.target)
        elif self.pt_module.op == 'call_function':
            return self.pt_module.target.__name__
        elif self.pt_module.op == 'get_attr':
            return 'get_attr'  # FIXME should be aligned with get_attr from TS implementation
        else:
            return 'UNKNOWN_TYPE_' + str(self.pt_module.op)
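For orientation, here is a minimal sketch of how the new decoder could be driven by hand, mirroring the `fe.load(decoder)` flow that `openvino_compile` uses further down in this PR. The toy module, the `executorch.exir.to_edge` lowering step, and all variable names are illustrative assumptions, not part of the change itself:

```python
# Hypothetical usage sketch (assumes torch and the executorch package).
import torch
from executorch.exir import to_edge
from openvino.frontend import FrontEndManager
from openvino.frontend.pytorch.fx_decoder import ExecuTorchPythonDecoder


class Toy(torch.nn.Module):
    def forward(self, x):
        return x.permute(1, 0) + 1.0


ep = torch.export.export(Toy(), (torch.randn(2, 3),))
# to_edge() functionalizes the graph, rewriting view-like ops such as
# aten.permute into the out-of-place *_copy variants this PR registers.
gm = to_edge(ep).exported_program().graph_module

decoder = ExecuTorchPythonDecoder(gm)
fe = FrontEndManager().load_by_framework("pytorch")
ov_model = fe.convert(fe.load(decoder))
```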

@@ -13,8 +13,8 @@
from torch.fx import GraphModule

from openvino.frontend import FrontEndManager
-from openvino.frontend.pytorch.fx_decoder import TorchFXPythonDecoder
-from openvino import Core, Type, PartialShape, serialize
+from openvino.frontend.pytorch.fx_decoder import TorchFXPythonDecoder, ExecuTorchPythonDecoder
+from openvino.runtime import Core, Type, PartialShape, serialize
from openvino.frontend.pytorch.torchdynamo.backend_utils import _get_cache_dir, _get_device, _get_config, _is_cache_dir_in_config

from typing import Callable, Optional
@@ -78,7 +78,7 @@ def openvino_compile_cached_model(cached_model_path, options, *example_inputs):

    return compiled_model

-def openvino_compile(gm: GraphModule, *args, model_hash_str: str = None, options=None):
+def openvino_compile(gm: GraphModule, *args, model_hash_str: str = None, options=None, executorch=False):
    core = Core()

    device = _get_device(options)
@@ -101,7 +101,10 @@ def openvino_compile(gm: GraphModule, *args, model_hash_str: str = None, options
        input_types.append(input_data.type())
        input_shapes.append(input_data.size())

-    decoder = TorchFXPythonDecoder(gm)
+    if executorch:
+        decoder = ExecuTorchPythonDecoder(gm)
+    else:
+        decoder = TorchFXPythonDecoder(gm)

    im = fe.load(decoder)

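The new `executorch` keyword is the only switch between the two decoders. Below is a hedged sketch of a direct call, assuming `gm` is a decomposed FX `GraphModule` like the one produced in the sketch above; in-tree, the ExecuTorch integration rather than the end user is expected to pass `executorch=True`:

```python
import torch

example_input = torch.randn(2, 3)

# Sketch: compile the same graph both ways. With executorch=True the
# branch above selects ExecuTorchPythonDecoder; the default keeps the
# existing TorchFXPythonDecoder path untouched.
compiled_fx = openvino_compile(gm, example_input)
compiled_et = openvino_compile(gm, example_input, executorch=True)
```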
@@ -75,6 +75,7 @@ def __init__(self, options):
"torch.ops.aten.argmin.default": None,
"torch.ops.aten.as_strided.default": None,
"torch.ops.aten.as_strided_.default": None,
"torch.ops.aten.as_strided_copy.default": None,
"torch.ops.aten.asin.default": None,
"torch.ops.aten.asinh.default": None,
"torch.ops.aten.asinh.default": None,
@@ -118,6 +119,7 @@
"torch.ops.aten.erf.default": None,
"torch.ops.aten.exp.default": None,
"torch.ops.aten.expand.default": None,
"torch.ops.aten.expand_copy.default": None,
"torch.ops.aten.fake_quantize_per_channel_affine_cachemask.default": None,
"torch.ops.aten.fill.Scalar": None,
"torch.ops.aten.fill_.Scalar": None,
@@ -196,6 +198,7 @@
"torch.ops.aten.new_zeros.default": None,
"torch.ops.aten.ones.default": None,
"torch.ops.aten.permute.default": None,
"torch.ops.aten.permute_copy.default": None,
"torch.ops.aten.pow.Scalar": None,
"torch.ops.aten.pow.Tensor_Scalar": None,
"torch.ops.aten.pow.Tensor_Tensor": None,
@@ -213,6 +216,7 @@
"torch.ops.aten.scatter.src": None,
"torch.ops.aten.scatter.value": None,
"torch.ops.aten.select.int": None,
"torch.ops.aten.select_copy.int": None,
"torch.ops.aten.select_scatter.default": None,
"torch.ops.aten.sigmoid.default": None,
"torch.ops.aten.sigmoid_.default": None,
@@ -222,13 +226,16 @@
"torch.ops.aten.sin.default": None,
"torch.ops.aten.sinh.default": None,
"torch.ops.aten.slice.Tensor": None,
"torch.ops.aten.slice_copy.Tensor": None,
"torch.ops.aten.slice_scatter.default": None,
"torch.ops.aten.sort.default": None,
"torch.ops.aten.split.Tensor": None,
"torch.ops.aten.split_with_sizes.default": None,
"torch.ops.aten.split_with_sizes_copy.default": None,
"torch.ops.aten.sqrt.default": None,
"torch.ops.aten.squeeze.dim": None,
"torch.ops.aten.squeeze.dims": None,
"torch.ops.aten.squeeze_copy.dims": None,
"torch.ops.aten.stack.default": None,
"torch.ops.aten.std.correction": None,
"torch.ops.aten.sub.default": None,
@@ -246,10 +253,12 @@
"torch.ops.aten.unbind.int": None,
"torch.ops.aten.unfold.default": None,
"torch.ops.aten.unsqueeze.default": None,
"torch.ops.aten.unsqueeze_copy.default": None,
"torch.ops.aten.upsample_nearest2d.default": None,
"torch.ops.aten.var.correction": None,
"torch.ops.aten.var_mean.correction": None,
"torch.ops.aten.view.default": None,
"torch.ops.aten.view_copy.default": None,
"torch.ops.aten.where.self": None,
"torch.ops.aten.zeros.default": None,
"torch.ops.aten.zeros_like.default": None,
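The `_copy` entries added above are the edge-dialect spellings of existing view ops: ExecuTorch functionalizes graphs, so aliasing ops like `aten.view` are rewritten as out-of-place copies with identical numerics. A quick way to see the renaming (a sketch assuming the `executorch` package is installed):

```python
import torch
from executorch.exir import to_edge  # assumption: executorch installed


class M(torch.nn.Module):
    def forward(self, x):
        return x.view(6).unsqueeze(0)


ep = torch.export.export(M(), (torch.randn(2, 3),))
# The printed graph should contain aten.view_copy.default and
# aten.unsqueeze_copy.default instead of aten.view / aten.unsqueeze.
to_edge(ep).exported_program().graph_module.print_readable()
```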
9 changes: 9 additions & 0 deletions src/frontends/pytorch/src/op_table.cpp
@@ -789,6 +789,7 @@ const std::unordered_map<std::string, CreatorFunction> get_supported_ops_fx() {
{"aten.argmin.default", op::translate_argmin},
{"aten.as_strided.default", op::translate_as_strided},
{"aten.as_strided_.default", op::translate_as_strided},
{"aten.as_strided_copy.default", op::translate_as_strided},
{"aten.asin.default", op::translate_1to1_match_1_inputs_with_fp32_type_alignment<opset10::Asin>},
{"aten.asinh.default", op::translate_1to1_match_1_inputs_with_fp32_type_alignment<opset10::Asinh>},
{"aten.atan.default", op::translate_1to1_match_1_inputs_with_fp32_type_alignment<opset10::Atan>},
@@ -839,6 +840,7 @@
{"aten.exp.default", op::translate_1to1_match_1_inputs_with_fp32_type_alignment<opset10::Exp>},
{"aten.expm1.default", op::translate_expm1},
{"aten.expand.default", op::translate_expand},
{"aten.expand_copy.default", op::translate_expand},
{"aten.eye.m", op::translate_eye_fx},
{"aten.fake_quantize_per_channel_affine_cachemask.default", op::translate_fake_quantize_per_channel_affine_fx},
{"aten.fill.Scalar", op::translate_fill},
@@ -920,6 +922,7 @@
{"aten.ones.names", op::translate_ones_fx},
{"aten.ones_like.default", op::translate_ones_like_fx},
{"aten.permute.default", op::translate_permute},
{"aten.permute_copy.default", op::translate_1to1_match_2_inputs<opset10::Transpose>},
{"aten.pow.Scalar", op::translate_pow},
{"aten.pow.Tensor_Scalar", op::translate_pow},
{"aten.pow.Tensor_Tensor", op::translate_pow},
@@ -942,6 +945,7 @@
{"aten.scatter.value", op::translate_scatter},
{"aten.scatter_add.default", op::translate_scatter_add},
{"aten.select.int", op::translate_select},
{"aten.select_copy.int", op::translate_select},
{"aten.select_scatter.default", op::translate_select_scatter_fx},
{"aten.sigmoid.default", op::translate_1to1_match_1_inputs_with_fp32_type_alignment<opset10::Sigmoid>},
{"aten.sigmoid_.default", op::translate_1to1_match_1_inputs_with_fp32_type_alignment<opset10::Sigmoid>},
@@ -951,13 +955,16 @@
{"aten.sin.default", op::translate_1to1_match_1_inputs_with_fp32_type_alignment<opset10::Sin>},
{"aten.sinh.default", op::translate_1to1_match_1_inputs_with_fp32_type_alignment<opset10::Sinh>},
{"aten.slice.Tensor", op::translate_slice_fx},
{"aten.slice_copy.Tensor", op::translate_slice_fx},
{"aten.slice_scatter.default", op::translate_slice_scatter_fx},
{"aten.sort.default", op::translate_sort_fx},
{"aten.split.Tensor", op::translate_chunk_fx},
{"aten.split_with_sizes.default", op::translate_split_with_sizes_fx},
{"aten.split_with_sizes_copy.default", op::translate_split_with_sizes_fx},
{"aten.sqrt.default", op::translate_1to1_match_1_inputs_with_fp32_type_alignment<opset10::Sqrt>},
{"aten.squeeze.dim", op::translate_squeeze},
{"aten.squeeze.dims", op::translate_squeeze},
{"aten.squeeze_copy.dims", op::translate_squeeze},
{"aten.stack.default", op::translate_stack_fx},
{"aten.std.correction", op::translate_std_fx},
{"aten.sub.default", op::translate_sub_fx},
@@ -975,10 +982,12 @@
{"aten.unbind.int", op::translate_unbind_int_fx},
{"aten.unfold.default", op::translate_unfold},
{"aten.unsqueeze.default", op::translate_1to1_match_2_inputs<opset10::Unsqueeze>},
{"aten.unsqueeze_copy.default", op::translate_1to1_match_2_inputs<opset10::Unsqueeze>},
{"aten.upsample_nearest2d.default", op::translate_upsample_nearest2d},
{"aten.var.correction", op::translate_var_fx},
{"aten.var_mean.correction", op::translate_var_mean_fx},
{"aten.view.default", op::translate_reshape},
{"aten.view_copy.default", op::translate_reshape},
{"aten.where.self", op::translate_where},
{"aten.zeros.default", op::translate_zeros_fx},
{"aten.zeros.names", op::translate_zeros_fx},