TIM-VX - Tensor Interface Module

TIM-VX is a software integration module provided by VeriSilicon to facilitate deployment of neural networks on VeriSilicon ML accelerators. It serves as the backend binding for runtime frameworks such as Android NN, Tensorflow-Lite, MLIR, TVM, and more.

Main Features

  • Over 150 operators with rich format support for both quantized and floating-point data
  • Simplified C++ binding API for creating tensors and operations (see the sketch after this list)
  • Dynamic graph construction with support for shape inference and layout inference
  • Built-in custom layer extensions
  • A set of utility functions for debugging
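
The C++ binding can be illustrated with a minimal sketch that builds and runs a single element-wise Add on two float tensors. It follows the pattern used by the public headers under include/tim/vx and the bundled samples; exact header paths and signatures may differ between releases, so treat it as a sketch rather than a verbatim sample.

#include <iostream>
#include <vector>

#include "tim/vx/context.h"
#include "tim/vx/graph.h"
#include "tim/vx/ops/elementwise.h"
#include "tim/vx/tensor.h"

int main() {
  // A graph always lives inside a context.
  auto context = tim::vx::Context::Create();
  auto graph = context->CreateGraph();

  // Two float input tensors and one output tensor, each holding 4 elements.
  tim::vx::ShapeType shape({4});
  tim::vx::TensorSpec input_spec(tim::vx::DataType::FLOAT32, shape,
                                 tim::vx::TensorAttribute::INPUT);
  tim::vx::TensorSpec output_spec(tim::vx::DataType::FLOAT32, shape,
                                  tim::vx::TensorAttribute::OUTPUT);
  auto a = graph->CreateTensor(input_spec);
  auto b = graph->CreateTensor(input_spec);
  auto out = graph->CreateTensor(output_spec);

  // Create an element-wise Add operation and wire it up.
  auto add = graph->CreateOperation<tim::vx::ops::Add>();
  (*add).BindInputs({a, b}).BindOutputs({out});

  // Compile for the target (or the x86_64 simulator), feed data, run.
  if (!graph->Compile()) return -1;

  std::vector<float> a_data = {1, 2, 3, 4};
  std::vector<float> b_data = {10, 20, 30, 40};
  a->CopyDataToTensor(a_data.data(), a_data.size() * sizeof(float));
  b->CopyDataToTensor(b_data.data(), b_data.size() * sizeof(float));

  if (!graph->Run()) return -1;

  std::vector<float> result(4);
  out->CopyDataFromTensor(result.data());
  for (float v : result) std::cout << v << " ";  // expect: 11 22 33 44
  std::cout << std::endl;
  return 0;
}

Link the resulting binary against libtim-vx.so and make sure VIVANTE_SDK_DIR points at an OpenVX SDK (for example the prebuilt x86_64 simulator SDK used in the LeNet sample below) so the driver can be found at runtime.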

Framework Support

Feel free to raise a GitHub issue if you wish to add TIM-VX support for other frameworks.

Get started

Build and Run

TIM-VX supports both Bazel and CMake builds. Install Bazel to get started.

TIM-VX needs to be compiled and linked against the VeriSilicon OpenVX SDK, which provides the required header files and pre-compiled libraries. A default linux-x86_64 SDK containing the PC simulation environment is included; platform-specific SDKs can be obtained from the respective SoC vendors.

To build TIM-VX

bazel build libtim-vx.so

To run the LeNet sample

# set VIVANTE_SDK_DIR for runtime compilation environment
export VIVANTE_SDK_DIR=`pwd`/prebuilt-sdk/x86_64_linux

bazel build //samples/lenet:lenet_asymu8_cc
bazel run //samples/lenet:lenet_asymu8_cc

To build and run Tensorflow-Lite with TIM-VX, please see the Tensorflow-Lite README

To build and run TVM with TIM-VX, please see TVM README
