Removed deprecated ov::affinity property #28247

Merged
@@ -349,7 +349,7 @@ following usage message:
[-api {sync,async}] [-nireq NUMBER_INFER_REQUESTS] [-nstreams NUMBER_STREAMS] [-inference_only [INFERENCE_ONLY]]
[-infer_precision INFER_PRECISION] [-ip {bool,f16,f32,f64,i8,i16,i32,i64,u8,u16,u32,u64}]
[-op {bool,f16,f32,f64,i8,i16,i32,i64,u8,u16,u32,u64}] [-iop INPUT_OUTPUT_PRECISION] [--mean_values [R,G,B]] [--scale_values [R,G,B]]
[-nthreads NUMBER_THREADS] [-pin {YES,NO,NUMA,HYBRID_AWARE}] [-latency_percentile LATENCY_PERCENTILE]
[-nthreads NUMBER_THREADS] [-pin {YES,NO}] [-latency_percentile LATENCY_PERCENTILE]
[-report_type {no_counters,average_counters,detailed_counters}] [-report_folder REPORT_FOLDER] [-pc [PERF_COUNTS]]
[-pcsort {no_sort,sort,simple_sort}] [-pcseq [PCSEQ]] [-exec_graph_path EXEC_GRAPH_PATH] [-dump_config DUMP_CONFIG] [-load_config LOAD_CONFIG]

@@ -462,10 +462,8 @@ following usage message:
-nthreads NUMBER_THREADS, --number_threads NUMBER_THREADS
Number of threads to use for inference on the CPU (including HETERO and MULTI cases).

-pin {YES,NO,NUMA,HYBRID_AWARE}, --infer_threads_pinning {YES,NO,NUMA,HYBRID_AWARE}
Optional. Enable threads->cores ('YES' which is OpenVINO runtime's default for conventional CPUs), threads->(NUMA)nodes ('NUMA'),
threads->appropriate core types ('HYBRID_AWARE', which is OpenVINO runtime's default for Hybrid CPUs) or completely disable ('NO') CPU threads
pinning for CPU-involved inference.
-pin {YES,NO}, --infer_threads_pinning {YES,NO}
Optional. Enable threads->cores pinning for CPU-involved inference.


Statistics dumping options:
@@ -577,11 +575,7 @@ following usage message:

Device-specific performance options:
-nthreads <integer> Optional. Number of threads to use for inference on the CPU (including HETERO and MULTI cases).
-pin <string> ("YES"|"CORE") / "HYBRID_AWARE" / ("NO"|"NONE") / "NUMA" Optional. Explicit inference threads binding options (leave empty to let the OpenVINO make a choice):
enabling threads->cores pinning("YES", which is already default for any conventional CPU),
letting the runtime to decide on the threads->different core types("HYBRID_AWARE", which is default on the hybrid CPUs)
threads->(NUMA)nodes("NUMA") or
completely disable("NO") CPU inference threads pinning
-pin <string> "YES" / "NO" Optional. Explicit threads->cores pinning for CPU inference tasks (leave empty to let the OpenVINO make a choice).

Statistics dumping options:
-latency_percentile Optional. Defines the percentile to be reported in latency metric. The valid range is [1, 100]. The default value is 50 (median).
@@ -357,7 +357,6 @@ All parameters must be set before calling ``ov::Core::compile_model()`` in order
- ``ov::hint::enable_hyper_threading``
- ``ov::hint::enable_cpu_pinning``
- ``ov::num_streams``
- ``ov::affinity``
- ``ov::inference_num_threads``
- ``ov::cache_dir``
- ``ov::intel_cpu::denormals_optimization``
@@ -373,8 +372,6 @@ Read-only properties
- ``ov::device::full_name``
- ``ov::device::capabilities``

.. note::
``ov::affinity`` is replaced by ``ov::hint::enable_cpu_pinning``. As such, it is deprecated in the 2024.0 release and will be removed in the 2025 release.

External Dependencies
###########################################################
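The documentation change above states that ``ov::affinity`` is superseded by the boolean ``ov::hint::enable_cpu_pinning`` hint, but not how existing configurations translate. Below is a minimal migration sketch, assuming the legacy string values of the removed ``AFFINITY`` property (``NONE``, ``CORE``, ``NUMA``, ``HYBRID_AWARE``) and the ``ENABLE_CPU_PINNING`` key that replaces it; ``migrate_affinity`` is a hypothetical helper written for illustration, not part of the OpenVINO API:

```python
def migrate_affinity(config: dict) -> dict:
    """Rewrite a legacy AFFINITY entry to the ENABLE_CPU_PINNING hint.

    CORE pinned threads to cores, so it maps to pinning enabled; NONE maps
    to pinning disabled. NUMA and HYBRID_AWARE have no boolean equivalent,
    so the key is dropped and the runtime default applies (an assumption,
    not something the PR states).
    """
    migrated = {k: v for k, v in config.items() if k != "AFFINITY"}
    affinity = config.get("AFFINITY")
    if affinity == "CORE":
        migrated["ENABLE_CPU_PINNING"] = True
    elif affinity == "NONE":
        migrated["ENABLE_CPU_PINNING"] = False
    return migrated
```

With OpenVINO installed, the resulting dictionary can be passed unchanged to `core.compile_model(model, "CPU", migrated)`.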
12 changes: 3 additions & 9 deletions samples/cpp/benchmark_app/benchmark_app.hpp
@@ -179,13 +179,8 @@ static const char infer_num_threads_message[] = "Optional. Number of threads to
"(including HETERO and MULTI cases).";

// @brief message for CPU threads pinning option
static const char infer_threads_pinning_message[] =
"Optional. Explicit inference threads binding options (leave empty to let the OpenVINO make a choice):\n"
"\t\t\t\tenabling threads->cores pinning(\"YES\", which is already default for any conventional CPU), \n"
"\t\t\t\tletting the runtime to decide on the threads->different core types(\"HYBRID_AWARE\", which is default on "
"the hybrid CPUs) \n"
"\t\t\t\tthreads->(NUMA)nodes(\"NUMA\") or \n"
"\t\t\t\tcompletely disable(\"NO\") CPU inference threads pinning";
static const char infer_threads_pinning_message[] = "Optional. Explicit threads->cores pinning for CPU inference tasks "
"(leave empty to let the OpenVINO make a choice).";

// @brief message for switching memory allocation type option
static const char use_device_mem_message[] =
@@ -426,8 +421,7 @@ static void show_usage() {
std::cout << std::endl;
std::cout << "Device-specific performance options:" << std::endl;
std::cout << " -nthreads <integer> " << infer_num_threads_message << std::endl;
std::cout << " -pin <string> (\"YES\"|\"CORE\") / \"HYBRID_AWARE\" / (\"NO\"|\"NONE\") / \"NUMA\" "
<< infer_threads_pinning_message << std::endl;
std::cout << " -pin <string> \"YES\" / \"NO\" " << infer_threads_pinning_message << std::endl;
std::cout << " -use_device_mem " << use_device_mem_message << std::endl;
std::cout << std::endl;
std::cout << "Statistics dumping options:" << std::endl;
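The hunk above shrinks the documented `-pin` choices from four values to `{YES, NO}`. A small illustrative parser sketch of the reduced option surface — this is a Python analogue for demonstration only (the real benchmark_app parser is C++/gflags), and the option/destination names are taken from the usage text above:

```python
import argparse

# Illustrative analogue of the simplified benchmark_app CLI surface.
parser = argparse.ArgumentParser(prog="benchmark_app")
parser.add_argument("-nthreads", type=int, dest="number_threads",
                    help="Number of threads to use for inference on the CPU.")
parser.add_argument("-pin", choices=["YES", "NO"], default=None,
                    dest="infer_threads_pinning",
                    help="Explicit threads->cores pinning for CPU inference "
                         "(omit to let OpenVINO choose).")

args = parser.parse_args(["-nthreads", "4", "-pin", "YES"])
```

A value such as `-pin NUMA` now fails argument validation instead of selecting a binding mode, which matches the removal of the `NUMA`/`HYBRID_AWARE` choices in this PR.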
16 changes: 3 additions & 13 deletions samples/cpp/benchmark_app/main.cpp
@@ -490,21 +490,11 @@ int main(int argc, char* argv[]) {
}
};

auto fix_pin_option = [](const std::string& str) -> std::string {
if (str == "NO")
return "NONE";
else if (str == "YES")
return "CORE";
else
return str;
};

auto set_nthreads_pin = [&](const std::string& str) {
OPENVINO_SUPPRESS_DEPRECATED_START
auto property_name = str == "nthreads" ? ov::inference_num_threads.name() : ov::affinity.name();
auto property_name =
str == "nthreads" ? ov::inference_num_threads.name() : ov::hint::enable_cpu_pinning.name();
auto property = str == "nthreads" ? ov::inference_num_threads(int(FLAGS_nthreads))
: ov::affinity(fix_pin_option(FLAGS_pin));
OPENVINO_SUPPRESS_DEPRECATED_END
: ov::hint::enable_cpu_pinning(FLAGS_pin);
if (supported(property_name) || device_name == "AUTO") {
// create nthreads/pin primary property for HW device or AUTO if -d is AUTO directly.
device_config[property.first] = property.second;
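For context on the deleted `fix_pin_option` lambda: it translated the CLI flag strings into legacy `ov::Affinity` enum names, a shim that becomes unnecessary once the flag feeds the boolean hint directly. A hedged Python rendering of the old shim next to the new path (that the `"YES"`/`"NO"` flag reduces to a boolean is inferred from `ov::hint::enable_cpu_pinning(FLAGS_pin)` in the hunk above, not stated elsewhere):

```python
def old_fix_pin_option(value: str) -> str:
    # Former shim: CLI "YES"/"NO" -> legacy ov::Affinity names.
    if value == "NO":
        return "NONE"
    if value == "YES":
        return "CORE"
    return value  # NUMA / HYBRID_AWARE passed through unchanged


def new_pin_property(value: str) -> tuple:
    # New path: the flag value becomes the ENABLE_CPU_PINNING hint directly.
    return ("ENABLE_CPU_PINNING", value == "YES")
```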
7 changes: 0 additions & 7 deletions src/bindings/c/include/openvino/c/ov_property.h
@@ -123,13 +123,6 @@ ov_property_key_cache_encryption_callbacks;
OPENVINO_C_VAR(const char*)
ov_property_key_num_streams;

/**
* @brief Read-write property to set/get the name for setting CPU affinity per thread option.
* @ingroup ov_property_c_api
*/
OPENVINO_C_VAR(const char*)
ov_property_key_affinity;

/**
* @brief Read-write property<int32_t string> to set/get the maximum number of threads that can be used
* for inference tasks.
1 change: 0 additions & 1 deletion src/bindings/c/src/ov_property.cpp
@@ -21,7 +21,6 @@ const char* ov_property_key_max_batch_size = "MAX_BATCH_SIZE";
const char* ov_property_key_cache_dir = "CACHE_DIR";
const char* ov_property_key_cache_mode = "CACHE_MODE";
const char* ov_property_key_num_streams = "NUM_STREAMS";
const char* ov_property_key_affinity = "AFFINITY";
const char* ov_property_key_inference_num_threads = "INFERENCE_NUM_THREADS";
const char* ov_property_key_hint_performance_mode = "PERFORMANCE_HINT";
const char* ov_property_key_hint_enable_cpu_pinning = "ENABLE_CPU_PINNING";
4 changes: 0 additions & 4 deletions src/bindings/js/node/src/helper.cpp
@@ -414,10 +414,6 @@ Napi::Value any_to_js(const Napi::CallbackInfo& info, ov::Any value) {
else if (value.is<int>()) {
return Napi::Number::New(info.Env(), value.as<int>());
}
// Check for ov::Affinity
else if (value.is<ov::Affinity>()) {
return Napi::String::New(info.Env(), value.as<std::string>());
}
// Check for ov::element::Type
else if (value.is<ov::element::Type>()) {
return Napi::String::New(info.Env(), value.as<std::string>());
1 change: 0 additions & 1 deletion src/bindings/python/src/openvino/properties/__init__.py
@@ -3,7 +3,6 @@
# SPDX-License-Identifier: Apache-2.0

# Enums
from openvino._pyopenvino.properties import Affinity
from openvino._pyopenvino.properties import CacheMode
from openvino._pyopenvino.properties import WorkloadType

@@ -3,7 +3,6 @@
# SPDX-License-Identifier: Apache-2.0

# Enums
from openvino._pyopenvino.properties import Affinity
from openvino._pyopenvino.properties import CacheMode
from openvino._pyopenvino.properties import WorkloadType

@@ -15,7 +14,6 @@
from openvino._pyopenvino.properties import num_streams
from openvino._pyopenvino.properties import inference_num_threads
from openvino._pyopenvino.properties import compilation_num_threads
from openvino._pyopenvino.properties import affinity
from openvino._pyopenvino.properties import force_tbb_terminate
from openvino._pyopenvino.properties import enable_mmap
from openvino._pyopenvino.properties import supported_properties
@@ -14,13 +14,6 @@ void regmodule_properties(py::module m) {
// Top submodule
py::module m_properties = m.def_submodule("properties", "openvino.properties submodule");

// Submodule properties - enums
py::enum_<ov::Affinity>(m_properties, "Affinity", py::arithmetic())
.value("NONE", ov::Affinity::NONE)
.value("CORE", ov::Affinity::CORE)
.value("NUMA", ov::Affinity::NUMA)
.value("HYBRID_AWARE", ov::Affinity::HYBRID_AWARE);

py::enum_<ov::WorkloadType>(m_properties, "WorkloadType", py::arithmetic())
.value("DEFAULT", ov::WorkloadType::DEFAULT)
.value("EFFICIENT", ov::WorkloadType::EFFICIENT);
@@ -38,9 +31,6 @@ void regmodule_properties(py::module m) {
wrap_property_RW(m_properties, ov::num_streams, "num_streams");
wrap_property_RW(m_properties, ov::inference_num_threads, "inference_num_threads");
wrap_property_RW(m_properties, ov::compilation_num_threads, "compilation_num_threads");
OPENVINO_SUPPRESS_DEPRECATED_START
wrap_property_RW(m_properties, ov::affinity, "affinity");
OPENVINO_SUPPRESS_DEPRECATED_END
wrap_property_RW(m_properties, ov::force_tbb_terminate, "force_tbb_terminate");
wrap_property_RW(m_properties, ov::enable_mmap, "enable_mmap");
wrap_property_RW(m_properties, ov::weights_path, "weights_path");
8 changes: 1 addition & 7 deletions src/bindings/python/src/pyopenvino/utils/utils.cpp
@@ -217,8 +217,6 @@ py::object from_ov_any(const ov::Any& any) {
return py::cast(any.as<ov::device::Type>());
} else if (any.is<ov::streams::Num>()) {
return py::cast(any.as<ov::streams::Num>());
} else if (any.is<ov::Affinity>()) {
return py::cast(any.as<ov::Affinity>());
} else if (any.is<ov::WorkloadType>()) {
return py::cast(any.as<ov::WorkloadType>());
} else if (any.is<ov::CacheMode>()) {
@@ -372,9 +370,7 @@ ov::AnyMap py_object_to_any_map(const py::object& py_obj) {
for (auto& item : py::cast<py::dict>(py_obj)) {
std::string key = py::cast<std::string>(item.first);
py::object value = py::cast<py::object>(item.second);
if (py::isinstance<ov::Affinity>(value)) {
return_value[key] = py::cast<ov::Affinity>(value);
} else if (py_object_is_any_map(value)) {
if (py_object_is_any_map(value)) {
return_value[key] = Common::utils::py_object_to_any_map(value);
} else {
return_value[key] = Common::utils::py_object_to_any(value);
@@ -449,8 +445,6 @@ ov::Any py_object_to_any(const py::object& py_obj) {
return py::cast<ov::device::Type>(py_obj);
} else if (py::isinstance<ov::streams::Num>(py_obj)) {
return py::cast<ov::streams::Num>(py_obj);
} else if (py::isinstance<ov::Affinity>(py_obj)) {
return py::cast<ov::Affinity>(py_obj);
} else if (py::isinstance<ov::WorkloadType>(py_obj)) {
return py::cast<ov::WorkloadType>(py_obj);
} else if (py::isinstance<ov::Tensor>(py_obj)) {
18 changes: 0 additions & 18 deletions src/bindings/python/tests/test_runtime/test_properties.py
@@ -45,15 +45,6 @@ def test_properties_rw_base():
@pytest.mark.parametrize(
("ov_enum", "expected_values"),
[
(
props.Affinity,
(
(props.Affinity.NONE, "Affinity.NONE", -1),
(props.Affinity.CORE, "Affinity.CORE", 0),
(props.Affinity.NUMA, "Affinity.NUMA", 1),
(props.Affinity.HYBRID_AWARE, "Affinity.HYBRID_AWARE", 2),
),
),
(
props.CacheMode,
(
@@ -259,11 +250,6 @@ def test_properties_ro(ov_property_ro, expected_value):
"COMPILATION_NUM_THREADS",
((44, 44),),
),
(
props.affinity,
"AFFINITY",
((props.Affinity.NONE, props.Affinity.NONE),),
),
(props.force_tbb_terminate, "FORCE_TBB_TERMINATE", ((True, True), (False, False))),
(props.enable_mmap, "ENABLE_MMAP", ((True, True), (False, False))),
(
@@ -539,7 +525,6 @@ def test_single_property_setting(device):
props.enable_profiling(True),
props.cache_dir("./"),
props.inference_num_threads(9),
props.affinity(props.Affinity.NONE),
hints.inference_precision(Type.f32),
hints.performance_mode(hints.PerformanceMode.LATENCY),
hints.enable_cpu_pinning(True),
@@ -554,7 +539,6 @@
props.enable_profiling: True,
props.cache_dir: "./",
props.inference_num_threads: 9,
props.affinity: props.Affinity.NONE,
hints.inference_precision: Type.f32,
hints.performance_mode: hints.PerformanceMode.LATENCY,
hints.enable_cpu_pinning: True,
@@ -568,7 +552,6 @@
props.enable_profiling: True,
"CACHE_DIR": "./",
props.inference_num_threads: 9,
props.affinity: "NONE",
"INFERENCE_PRECISION_HINT": Type.f32,
hints.performance_mode: hints.PerformanceMode.LATENCY,
hints.scheduling_core_type: hints.SchedulingCoreType.PCORE_ONLY,
@@ -589,7 +572,6 @@
assert core.get_property("CPU", props.enable_profiling) is True
assert core.get_property("CPU", props.cache_dir) == "./"
assert core.get_property("CPU", props.inference_num_threads) == 9
assert core.get_property("CPU", props.affinity) == props.Affinity.NONE
assert core.get_property("CPU", streams.num) == 5

# RO properties
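After the removal, the test file's CPU configurations look like the following. This is a sketch in the plain string/value form so it runs without OpenVINO installed; with the package available the same keys are spelled `props.inference_num_threads`, `hints.enable_cpu_pinning`, etc., and the key names here are the property string names used throughout this PR's diff:

```python
# Post-removal CPU configuration: no AFFINITY key; thread pinning is now
# expressed only through the boolean ENABLE_CPU_PINNING hint.
cpu_config = {
    "PERF_COUNT": True,            # props.enable_profiling
    "CACHE_DIR": "./",
    "INFERENCE_NUM_THREADS": 9,
    "ENABLE_CPU_PINNING": True,    # replaces AFFINITY entirely
    "PERFORMANCE_HINT": "LATENCY",
    "NUM_STREAMS": 5,
}

REMOVED_KEYS = {"AFFINITY"}


def uses_removed_keys(config: dict) -> bool:
    """Return True if the config still references a property removed in 2025."""
    return bool(REMOVED_KEYS & config.keys())
```

With OpenVINO installed, such a dictionary would be passed to `core.set_property("CPU", cpu_config)` exactly as in the tests above, minus the deleted `affinity` entries.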
@@ -51,19 +51,6 @@ class OPENVINO_RUNTIME_API IStreamsExecutor : virtual public ITaskExecutor {
Task task;
};

/**
* @brief Defines inference thread binding type
*/
enum ThreadBindingType : std::uint8_t {
NONE, //!< Don't bind the inference threads
CORES, //!< Bind inference threads to the CPU cores (round-robin)
// the following modes are implemented only for the TBB code-path:
NUMA, //!< Bind to the NUMA nodes (default mode for the non-hybrid CPUs on the Win/MacOS, where the 'CORES' is
//!< not implemeneted)
HYBRID_AWARE //!< Let the runtime bind the inference threads depending on the cores type (default mode for the
//!< hybrid CPUs)
};

/**
* @brief Defines IStreamsExecutor configuration
*/
58 changes: 0 additions & 58 deletions src/inference/include/openvino/runtime/properties.hpp
@@ -1289,64 +1289,6 @@ static constexpr Property<int32_t, PropertyMutability::RW> inference_num_threads
*/
static constexpr Property<int32_t, PropertyMutability::RW> compilation_num_threads{"COMPILATION_NUM_THREADS"};

/**
* @brief Enum to define possible affinity patterns
* @ingroup ov_runtime_cpp_prop_api
*/
enum class Affinity {
NONE = -1, //!< Disable threads affinity pinning
CORE = 0, //!< Pin threads to cores, best for static benchmarks
NUMA = 1, //!< Pin threads to NUMA nodes, best for real-life, contented cases. On the Windows and MacOS* this
//!< option behaves as CORE
HYBRID_AWARE = 2, //!< Let the runtime to do pinning to the cores types, e.g. prefer the "big" cores for latency
//!< tasks. On the hybrid CPUs this option is default
};

/** @cond INTERNAL */
inline std::ostream& operator<<(std::ostream& os, const Affinity& affinity) {
switch (affinity) {
case Affinity::NONE:
return os << "NONE";
case Affinity::CORE:
return os << "CORE";
case Affinity::NUMA:
return os << "NUMA";
case Affinity::HYBRID_AWARE:
return os << "HYBRID_AWARE";
default:
OPENVINO_THROW("Unsupported affinity pattern");
}
}

inline std::istream& operator>>(std::istream& is, Affinity& affinity) {
std::string str;
is >> str;
if (str == "NONE") {
affinity = Affinity::NONE;
} else if (str == "CORE") {
affinity = Affinity::CORE;
} else if (str == "NUMA") {
affinity = Affinity::NUMA;
} else if (str == "HYBRID_AWARE") {
affinity = Affinity::HYBRID_AWARE;
} else {
OPENVINO_THROW("Unsupported affinity pattern: ", str);
}
return is;
}
/** @endcond */

/**
* @deprecated Use ov::hint::enable_cpu_pinning
* @brief The name for setting CPU affinity per thread option.
* @ingroup ov_runtime_cpp_prop_api
* @note The setting is ignored, if the OpenVINO compiled with OpenMP and any affinity-related OpenMP's
* environment variable is set (as affinity is configured explicitly)
*/
OPENVINO_DEPRECATED(
"This property is deprecated and will be removed soon. Use ov::hint::enable_cpu_pinning instead of it.")
static constexpr Property<Affinity> affinity{"AFFINITY"};

/**
* @brief The devices that the inference task been executed.
* @ingroup ov_runtime_cpp_prop_api
5 changes: 1 addition & 4 deletions src/plugins/auto/src/cumulative_schedule.cpp
@@ -73,10 +73,7 @@ void CumuSchedule::init() {
idx++;
} else {
cpu_device_information = device;
OPENVINO_SUPPRESS_DEPRECATED_START
cpu_device_information.config.insert(
{ov::affinity.name(), ov::Any(ov::Affinity::CORE).as<std::string>()});
OPENVINO_SUPPRESS_DEPRECATED_END
cpu_device_information.config.insert({ov::hint::enable_cpu_pinning.name(), "YES"});
}
}
if (!cpu_device_information.device_name.empty())