The OpenVINO backend enables optimized execution of deep learning models on Intel hardware, leveraging Intel's OpenVINO toolkit for inference acceleration.
The OpenVINO backend supports the following hardware:
- Intel CPUs
- Intel integrated GPUs
- Intel discrete GPUs
- Intel NPUs
For more information on supported hardware, refer to the OpenVINO System Requirements page.
The OpenVINO backend is structured as follows:

```
executorch
├── backends
│   └── openvino
│       ├── quantizer
│       │   ├── observers
│       │   │   └── nncf_observers.py
│       │   ├── __init__.py
│       │   └── quantizer.py
│       ├── runtime
│       │   ├── OpenvinoBackend.cpp
│       │   └── OpenvinoBackend.h
│       ├── scripts
│       │   └── openvino_build.sh
│       ├── tests
│       ├── CMakeLists.txt
│       ├── README.md
│       ├── __init__.py
│       ├── partitioner.py
│       ├── preprocess.py
│       └── requirements.txt
└── examples
    └── openvino
        ├── aot_optimize_and_infer.py
        └── README.md
```
Before you begin, ensure you have OpenVINO installed and configured on your system.

- Download the OpenVINO release package from the official download page. Make sure to select your configuration and click on **OpenVINO Archives** under the distribution section to download the appropriate archive for your platform.
- Extract the release package from the archive and set the environment variables:

  ```shell
  tar -zxf openvino_toolkit_<your_release_configuration>.tgz
  cd openvino_toolkit_<your_release_configuration>
  source setupvars.sh
  ```
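To confirm the environment is configured, you can check that `setupvars.sh` exported the expected variables. This is a minimal sketch; `INTEL_OPENVINO_DIR` is the variable OpenVINO's `setupvars.sh` conventionally sets, but the exact set may differ between releases:

```python
import os

def openvino_env_status():
    """Report whether setupvars.sh appears to have been sourced in this shell."""
    # INTEL_OPENVINO_DIR is set by OpenVINO's setupvars.sh; treat its absence
    # as a hint that the script has not been sourced in the current shell.
    root = os.environ.get("INTEL_OPENVINO_DIR")
    return f"OpenVINO root: {root}" if root else "setupvars.sh not sourced"

print(openvino_env_status())
```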
Alternatively, you can build OpenVINO from source:

```shell
git clone https://github.com/openvinotoolkit/openvino.git
cd openvino
git submodule update --init --recursive
sudo ./install_build_dependencies.sh
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DENABLE_PYTHON=ON
make -j$(nproc)
cd ..
cmake --install build --prefix <your_preferred_install_location>
cd <your_preferred_install_location>
source setupvars.sh
```

For more information about building OpenVINO, refer to the OpenVINO Build Instructions.
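Since the build above enables the Python bindings (`-DENABLE_PYTHON=ON`), a quick sanity check is to import OpenVINO and list the devices it can see. This sketch uses the standard `openvino.Core` API and falls back gracefully if the bindings are not on the path:

```python
def check_openvino_bindings():
    """Try to import the OpenVINO Python bindings and enumerate devices."""
    try:
        # Available once ENABLE_PYTHON=ON and setupvars.sh has been sourced.
        from openvino import Core
    except ImportError:
        return "OpenVINO Python bindings not found; re-run setupvars.sh"
    # Core().available_devices lists device names such as "CPU", "GPU", "NPU".
    return "Available devices: " + ", ".join(Core().available_devices)

print(check_openvino_bindings())
```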
Follow the steps below to set up your build environment:

- **Create a virtual environment**: Create a virtual environment and activate it by executing the commands below.

  ```shell
  python -m venv env
  source env/bin/activate
  ```

- **Clone the ExecuTorch repository from GitHub**:

  ```shell
  git clone --recurse-submodules https://github.com/pytorch/executorch.git
  ```
- **Build ExecuTorch with the OpenVINO backend**: Ensure that you are inside the `executorch/backends/openvino/scripts` directory. The following command builds and installs ExecuTorch with the OpenVINO backend, and also compiles the C++ runtime libraries and binaries into `<executorch_root>/cmake-out` for quick inference testing.

  ```shell
  ./openvino_build.sh
  ```

  Optionally, the `openvino_build.sh` script can be used to build the Python package or the C++ libraries/binaries separately:

  - **Build the OpenVINO backend Python package with pybindings**: Run the `openvino_build.sh` script with the `--enable_python` argument as shown below. This compiles and installs the ExecuTorch Python package with the OpenVINO backend into your Python environment. It also enables the Python bindings required to execute the OpenVINO backend tests and the `aot_optimize_and_infer.py` script inside the `executorch/examples/openvino` folder.

    ```shell
    ./openvino_build.sh --enable_python
    ```

  - **Build C++ runtime libraries for the OpenVINO backend**: Run the `openvino_build.sh` script with the `--cpp_runtime` flag. The compiled libraries and binaries can be found in the `<executorch_root>/cmake-out` directory; the binary located at `<executorch_root>/cmake-out/executor_runner` can be used to run inference with vision models.

    ```shell
    ./openvino_build.sh --cpp_runtime
    ```

  - **Build C++ runtime libraries with LLM extension**: Run the `openvino_build.sh` script with the `--cpp_runtime_llm` flag. Use this option instead of `--cpp_runtime` for LLM extension support, which is required by the LLM examples.

    ```shell
    ./openvino_build.sh --cpp_runtime_llm
    ```
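Once the Python package is installed, an ahead-of-time export to the OpenVINO backend follows the usual ExecuTorch lowering flow. The sketch below is an outline under stated assumptions: the `OpenvinoPartitioner` import path is inferred from the directory layout above, and its no-argument constructor is a guess; treat `executorch/examples/openvino/aot_optimize_and_infer.py` as the authoritative reference.

```python
def export_linear_to_openvino(pte_path="model.pte"):
    """Sketch: export a toy model and lower it for the OpenVINO backend."""
    try:
        import torch
        from executorch.exir import to_edge_transform_and_lower
        # Import path inferred from the backend's directory layout; verify locally.
        from executorch.backends.openvino.partitioner import OpenvinoPartitioner
    except ImportError as exc:
        return f"missing dependency: {exc.name}"
    model = torch.nn.Linear(4, 2).eval()
    exported = torch.export.export(model, (torch.randn(1, 4),))
    # Partition OpenVINO-supported subgraphs and serialize to a .pte file.
    program = to_edge_transform_and_lower(
        exported, partitioner=[OpenvinoPartitioner()]
    ).to_executorch()
    with open(pte_path, "wb") as f:
        f.write(program.buffer)
    return pte_path

print(export_linear_to_openvino())
```

The resulting `.pte` file can then be passed to the `executor_runner` binary built above.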
For more information about ExecuTorch environment setup, refer to the Environment Setup guide.
Please refer to README.md for instructions on running examples of various models with the OpenVINO backend.