[PyTorch FE] Support aten::_to_copy operation #34028
### [PyTorch FE] Support `aten::_to_copy` operation

This PR teaches the PyTorch Frontend (TorchScript path) how to convert graphs that contain `aten::_to_copy`. Recent PyTorch versions increasingly lower `Tensor.to(...)` into `_to_copy` when a real copy and/or dtype conversion is needed, and without this support the conversion fails with an “unsupported op” error.

Registered ops:

- `aten::_to_copy` (TorchScript) → `op::translate_to`

Closes #29687
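For readers less familiar with the op, a minimal eager-mode PyTorch snippet (nothing OpenVINO-specific) showing how `_to_copy` relates to the user-facing `Tensor.to(...)`:

```python
import torch

x = torch.randn(2, 3)

# _to_copy is the "always copy, possibly convert" primitive behind Tensor.to(...).
y = torch.ops.aten._to_copy(x, dtype=torch.float16)

assert y.dtype == torch.float16
# Same values as the user-facing API would produce:
assert torch.equal(y, x.to(torch.float16))
```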
#### Details

**PyTorch reference**
PyTorch schema (kwarg-only args):
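As a reference point, this is what a recent PyTorch build reports (read via the internal `_schema` attribute, so treat the exact defaults as version-dependent):

```python
import torch

# self + 6 kwarg-only args = the 7 inputs the TorchScript frontend sees.
print(torch.ops.aten._to_copy.default._schema)
# _to_copy(Tensor self, *, ScalarType? dtype=None, Layout? layout=None,
#          Device? device=None, bool? pin_memory=None, bool non_blocking=False,
#          MemoryFormat? memory_format=None) -> Tensor
```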
`aten::_to_copy` is the internal “do the work” op used by `at::to()` when something must change (copy and/or dtype). See: Deep dive into `at::to()`, `at::copy_` and memory format.
**What was broken**

We already had `translate_to` handling several `aten::to*` variants, but:

- `_to_copy` wasn’t registered in the TorchScript op table, so the frontend rejected it as unsupported.
- `_to_copy` shows up as a 7-input schema in TorchScript graphs, and `translate_to` didn’t have a branch for that arity.
**Why we reuse `translate_to` (instead of adding a new file)**

It’s tempting to add a standalone `translate_to_copy`, but that tends to start “simple” and then slowly re-implements the same tricky logic we already have in `translate_to`. Reusing the existing translator keeps behavior consistent and avoids duplication.

In particular, `translate_to` already correctly handles:

- a dtype taken dynamically from another tensor via `prim::dtype` (ConvertLike)
- a constant target dtype (Convert)
- complex inputs (ComplexTypeMark)
- the shared quirks of the to-family ops (non-functional kwargs like `non_blocking`/`memory_format` are ignored in OpenVINO IR)

So the smallest and safest fix is: teach `translate_to` about the 7-input `_to_copy` schema and register `_to_copy` to that translator.
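To make the `prim::dtype` case above concrete, here is a small sketch (module name is hypothetical, and whether the direct op call scripts cleanly may vary across PyTorch versions) of the pattern that produces it when a model is scripted rather than traced:

```python
import torch


class ToCopyLikeOther(torch.nn.Module):
    def forward(self, x, y):
        # Taking the dtype from another tensor: under torch.jit.script this shows up
        # as a prim::dtype(%y) node feeding aten::_to_copy, which translate_to lowers
        # with ConvertLike instead of a constant Convert.
        return torch.ops.aten._to_copy(x, dtype=y.dtype)


scripted = torch.jit.script(ToCopyLikeOther())
print(scripted.graph)  # look for prim::dtype and aten::_to_copy in the printed graph
```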
**Semantics (what we model)**

OpenVINO IR is functional and doesn’t model storage aliasing, so `_to_copy` is represented as:

- a conversion to the requested dtype when one is given
- an identity (pass-through) of the input when `dtype=None` (no dtype change requested)

Other kwargs (layout/device/pin_memory/non_blocking/memory_format) are handled the same way we already handle them for `aten::to`.
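A minimal eager-mode illustration of why modelling the no-dtype case as identity is sound (the copy itself has no observable effect in a functional IR):

```python
import torch

x = torch.randn(2, 3)

# dtype=None: PyTorch still materialises a real copy...
y = torch.ops.aten._to_copy(x)
assert y.data_ptr() != x.data_ptr()
assert torch.equal(y, x)
# ...but OpenVINO IR has no aliasing or in-place semantics, so passing the input
# through unchanged is an equivalent representation of this case.
```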
**Implementation**

- `src/frontends/pytorch/src/op/to.cpp`
  - `input_size == 7` branch for `_to_copy`
  - `dtype=None` handled as an early identity return
- `src/frontends/pytorch/src/op_table.cpp`
  - `aten::_to_copy` → `op::translate_to` in the TorchScript op map
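As a quick end-to-end check of the new mapping, a sketch using the public `openvino.convert_model` API (the model class here is illustrative, not part of this PR):

```python
import openvino as ov
import torch


class ToCopyModel(torch.nn.Module):
    def forward(self, x):
        # Emit the op this PR registers, the same way the layer tests do.
        return torch.ops.aten._to_copy(x, dtype=torch.float16)


# Before this change the PyTorch frontend rejected aten::_to_copy as unsupported;
# with the new op_table entry the conversion should succeed and contain a Convert to f16.
ov_model = ov.convert_model(ToCopyModel(), example_input=torch.randn(2, 3))
print(ov_model)
```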
**Tests**

Adds deterministic TorchScript layer tests that directly emit `_to_copy` using `torch.ops.aten._to_copy(...)`:

- `TestAtenToCopy`: covers common target dtypes (u8/i*/f16/f32/f64)
- `TestAtenToCopyNoDtype`: verifies the `dtype=None` case
- `TestAtenToCopyPrimDtype`: uses `dtype=ref.dtype` with `trace_model=False` to exercise the `prim::dtype` path

Tests are added to `tests/layer_tests/pytorch_tests/test_to.py`.
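For reviewers who want the shape of these tests without opening the diff, a condensed sketch of the dtype-parameterised class, assuming the `PytorchLayerTest` harness used elsewhere in `test_to.py` (the class in this PR may differ in details):

```python
import numpy as np
import pytest
import torch

from pytorch_layer_test_class import PytorchLayerTest


class TestAtenToCopy(PytorchLayerTest):
    def _prepare_input(self):
        return (np.random.randn(2, 3).astype(np.float32),)

    def create_model(self, dtype):
        class aten_to_copy(torch.nn.Module):
            def __init__(self, dtype):
                super().__init__()
                self.dtype = dtype

            def forward(self, x):
                # Emit the op directly so the graph deterministically contains aten::_to_copy.
                return torch.ops.aten._to_copy(x, dtype=self.dtype)

        return aten_to_copy(dtype), None, "aten::_to_copy"

    @pytest.mark.parametrize("dtype", [torch.uint8, torch.int32, torch.float16,
                                       torch.float32, torch.float64])
    @pytest.mark.nightly
    @pytest.mark.precommit
    def test_to_copy(self, dtype, ie_device, precision, ir_version):
        self._test(*self.create_model(dtype), ie_device, precision, ir_version)
```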