Update Overview.modified #1611

Merged 4 commits on Apr 1, 2022.
26 changes: 26 additions & 0 deletions docs/platform/what-is-tizen/overview.md
@@ -47,6 +47,32 @@ The most important strengths that enable Tizen to move to the next level are IoT

![IoTivity for Connectivity](media/about_tizen_6.png)

## AI and Machine Learning
### NN Runtime
NN Runtime serves as a backend for machine learning APIs, accelerating neural network inference on Tizen devices. It supports heterogeneous computing by combining the CPU and GPU, and we plan to expand support to NPUs in the near future. It is based on the independent open-source project [ONE (On-device Neural Engine)](https://github.com/Samsung/ONE), which consists of a runtime virtual machine running on the Tizen device and a compiler toolchain running on the developer's host computer.
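Applications typically reach NN Runtime indirectly, through Tizen's Machine Learning Inference API. The following is a minimal sketch, assuming the Machine Learning Single API (`ml_single_*`) and a TensorFlow Lite model path; error handling and tensor setup are abbreviated:

```c
#include <nnstreamer.h>
#include <nnstreamer-single.h>

int
run_inference (const char *model_path)
{
  ml_single_h single = NULL;
  ml_tensors_info_h in_info = NULL;
  ml_tensors_data_h input = NULL, output = NULL;
  int status;

  /* Open the model; the framework/hardware hints let the platform
     choose an available backend, such as NN Runtime. */
  status = ml_single_open (&single, model_path, NULL, NULL,
                           ML_NNFW_TYPE_TENSORFLOW_LITE, ML_NNFW_HW_ANY);
  if (status != ML_ERROR_NONE)
    return status;

  /* Allocate an input buffer that matches the model's input tensors. */
  ml_single_get_input_info (single, &in_info);
  ml_tensors_data_create (in_info, &input);

  /* ... fill `input` with tensor data here ... */

  /* Run one synchronous inference. */
  status = ml_single_invoke (single, input, &output);

  ml_tensors_data_destroy (output);
  ml_tensors_data_destroy (input);
  ml_tensors_info_destroy (in_info);
  ml_single_close (single);
  return status;
}
```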

#### Runtime

- Provides optimized execution by combining various open kernels and private kernels, selected by a proprietary algorithm developed in-house.
  - Kernels come from open-source projects such as the ARM Compute Library (ACL), Ruy, and XNNPACK, with customized improvements as needed.
- Support for dynamic tensors, whose shapes change during inference.
- Support for models with control-flow operators (IF, WHILE).
- Provides various executors and is extensible (a backend-selection sketch follows this list):
  - Linear executor.
  - Parallel executor using the CPU and GPU together.
  - Partitioning and multithreading of neural network models at runtime to improve overall inference throughput.
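To illustrate how an application can ask the runtime to spread a model across backends, here is a minimal sketch assuming the ONE runtime C API (`nnfw.h`); the backend names and the input/output setup are illustrative, and availability of the `acl_cl` (GPU) backend depends on the device:

```c
#include <nnfw.h>

int
run_with_mixed_backends (const char *nnpackage_path)
{
  nnfw_session *session = NULL;

  nnfw_create_session (&session);
  nnfw_load_model_from_file (session, nnpackage_path);

  /* Listing more than one backend lets the runtime partition the model,
     e.g. GPU (acl_cl) kernels with a CPU fallback. */
  nnfw_set_available_backends (session, "acl_cl;cpu");

  /* Compile and schedule kernels across the selected backends. */
  nnfw_prepare (session);

  /* ... bind buffers with nnfw_set_input() / nnfw_set_output() here ... */

  /* One synchronous inference. */
  nnfw_run (session);

  nnfw_close_session (session);
  return 0;
}
```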

#### Compiler Toolchain

- Support for interworking with various neural network frameworks and their models:
  - TensorFlow and TensorFlow Lite v1.x and v2.x.
  - PyTorch and ONNX v1.10.
- Defines and serves an extensible universal container called the ‘NN package’ (a layout sketch follows this list).
  - Accommodates a circle (ONE) or tflite (TensorFlow Lite) model, plus metadata in JSON format, under a directory structure.
- Includes various development tools that use the common IR (circle) as the standard input/output format:
  - Graph-level neural network model optimizer.
  - Neural network model quantizer.
  - Various profiling and test scripts to evaluate performance on the target.
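As a concrete illustration of the NN package layout, a package is a directory that bundles the model file with its JSON metadata. The sketch below is a plausible example; the directory name, model name, and the exact `MANIFEST` field names are assumptions based on the nnpackage format, not taken from this document:

```
mymodel/                  # hypothetical NN package directory
├── mymodel.circle        # compiled circle model
└── metadata/
    └── MANIFEST          # JSON metadata, e.g.:

{
  "major-version": "1",
  "minor-version": "0",
  "patch-version": "0",
  "models": ["mymodel.circle"],
  "model-types": ["circle"]
}
```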

## Convergence Platform for the Emerging Era
