Add support to export XNNPACK-based static_llama
Summary: Add support to export XNNPACK-based static_llama.

- static_llama is the hybrid (prefill + decode) Llama model from the QNN backend that takes the KV cache as an inference input.
- https://www.internalfb.com/code/fbsource/fbcode/executorch/examples/qualcomm/oss_scripts/llama2/model/static_llama.py

Differential Revision: D67867190
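
The diff itself is not included here. As a rough sketch only, the snippet below shows the standard ExecuTorch XNNPACK lowering path (torch.export capture, to_edge, XnnpackPartitioner delegation, to_executorch serialization) that an export script for a static_llama-style model would typically follow. The TinyStaticModel class, its example inputs, and the output filename are hypothetical placeholders, not taken from this commit.

```python
# Hypothetical sketch: lowering a static_llama-style model (KV cache as an
# explicit graph input) to the XNNPACK backend with ExecuTorch.
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge


class TinyStaticModel(torch.nn.Module):
    """Placeholder stand-in for static_llama: takes tokens plus a KV-cache tensor."""

    def __init__(self):
        super().__init__()
        self.proj = torch.nn.Linear(16, 16)

    def forward(self, x, kv_cache):
        # The real static_llama threads the KV cache through attention; this
        # placeholder just consumes it so the cache appears as a graph input.
        return self.proj(x) + kv_cache


model = TinyStaticModel().eval()
example_inputs = (torch.randn(1, 16), torch.randn(1, 16))

# 1. Capture the model with torch.export.
# 2. Convert the captured graph to an Edge-dialect program.
# 3. Delegate XNNPACK-supported subgraphs to the XNNPACK backend.
exported = torch.export.export(model, example_inputs)
edge = to_edge(exported)
edge = edge.to_backend(XnnpackPartitioner())

# Serialize to a .pte file that the ExecuTorch runtime can load.
et_program = edge.to_executorch()
with open("static_llama_xnnpack.pte", "wb") as f:
    f.write(et_program.buffer)
```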