Qualcomm AI Engine Direct - Enable AR-N model for prompt processing in hybrid mode (#8210)
* Qualcomm AI Engine Direct - Enable AR-N mode to process prompts in hybrid mode
Summary:
- Add `max_seq_len` to specify the maximum number of tokens the model can process and consider at once when generating predictions/responses.
- Add `prefill_ar_n` to determine the number of tokens to consume and the number of logits to produce for the prompt processor in hybrid mode.
- Remove the standalone prefill mode.
* Fixed CI
* Added the figure to the README and fixed an unused variable
* Fixed linting
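To make the relationship between the two new parameters concrete, here is a minimal Python sketch. The values and the chunking arithmetic are illustrative assumptions derived from the summary above, not code from this PR:

```python
import math

# Illustrative values only; real values come from the export configuration.
max_seq_len = 512   # maximum number of tokens the model can process and consider at once
prefill_ar_n = 32   # tokens consumed / logits produced per prompt-processor pass

# The hybrid-mode prompt processor walks the prompt in blocks of
# `prefill_ar_n` tokens, so a prompt of length L takes ceil(L / prefill_ar_n)
# forward passes, and L must not exceed max_seq_len.
prompt_len = 100
assert prompt_len <= max_seq_len
num_blocks = math.ceil(prompt_len / prefill_ar_n)
print(num_blocks)  # 4 passes for this example
```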
examples/qualcomm/oss_scripts/llama/README.md (+13 −8)
@@ -8,11 +8,16 @@ This file provides you the instructions to run LLAMA model with different parame

We offer the following modes to execute the model:

- Prefill Mode: This is also known as batch prefill mode, where the model takes in a list of tokens as input and generates the next token along with the key-value (KV) cache for all tokens. This mode is efficient for encoding the user's prompt.
-
KV Cache Mode: In KV Cache mode, the model takes in a single previous token and generates the next predicted token along with its KV cache. It is efficient for generating subsequent tokens after the initial prompt.

- Hybrid Mode: Hybrid mode leverages the strengths of both batch prefill and KV cache modes to optimize token generation speed. Initially, it uses prefill mode to efficiently generate the prompt's key-value (KV) cache. Then, the mode switches to KV cache mode, which excels at generating subsequent tokens.
+ Hybrid Mode: Hybrid mode leverages the strengths of both the AR-N model and KV cache mode to optimize token generation speed. Initially, it uses the AR-N model to efficiently generate the prompt's key-value (KV) cache. Then, the mode switches to KV cache mode, which excels at generating subsequent tokens.
+   - AR-N model: The auto-regression (AR) length determines the number of tokens to consume and the number of logits to produce. Use it to process the prompt and generate the key-value (KV) cache, which serves as a prompt processor in hybrid mode.
+   - Prompt processing with AR-N model:
+ <figure>
+   <img src="./assets/PromptProcessingWithARN.png" alt="Prompt Processing With AR-N Model">
+   <figcaption>Prompt processing is done in a for-loop: an N-token block is taken and the KV cache is updated for that block, repeating until all tokens are consumed, with the last block potentially requiring padding. For flexibility, the AR-N model can handle any input length less than the maximum sequence length, so the time to first token (TTFT) depends on the actual prompt length (the number of blocks processed) rather than always being the same.</figcaption>
+ </figure>

## Instructions
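The figure caption above describes the block-wise loop in prose; the following sketch restates it in Python for clarity. It is a simplified model of the described behavior, not the runner's actual implementation: `model_forward`, `pad_id`, and the cache handling are assumed placeholders.

```python
def process_prompt(tokens, ar_n, pad_id, model_forward, kv_cache):
    """Consume the prompt in N-token blocks (N = ar_n), updating the KV
    cache after each block, as the figure describes."""
    logits = None
    for start in range(0, len(tokens), ar_n):
        block = tokens[start:start + ar_n]
        if len(block) < ar_n:
            # The last block may be shorter than N and is padded up to N.
            block = block + [pad_id] * (ar_n - len(block))
        # One pass of the AR-N model: produces ar_n logits and fills the
        # KV cache entries for this block.
        logits, kv_cache = model_forward(block, kv_cache)
    # The logits of the last real prompt token seed KV cache mode decoding.
    return logits, kv_cache
```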
@@ -50,13 +55,13 @@ At the end of this step, users should have the following files ready: `consolida

### Step 3: Run default examples using hybrid mode.

On the other hand, if you already have a pre-compiled .pte model, you can perform inference by providing the flag `--pre_gen_pte` and specifying the folder that contains the .pte model. Taking LLAMA3.2 as an example:

You can select the KV cache update mechanism at runtime by setting the `KV_UPDATER` variable to either "shift_pointer" or "smart_mask". By default, it is set to "smart_mask".
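For intuition, the sketch below contrasts the two update styles on a toy NumPy cache. This is an assumption-laden illustration, not the runner's implementation: real KV caches are per-layer, per-head device buffers, and "shift_pointer" advances a pointer rather than copying data as `np.roll` does here.

```python
import numpy as np

def smart_mask_update(cache, mask, new_kv, pos):
    """'smart_mask' intuition (assumed): write the new entry into its slot
    and mark that position as attendable; no existing data moves."""
    cache[pos] = new_kv
    mask[pos] = True
    return cache, mask

def shift_pointer_update(cache, new_kv):
    """'shift_pointer' intuition (assumed): advance the window so the valid
    region always ends at the buffer tail, then append the new entry."""
    cache = np.roll(cache, -1, axis=0)  # stand-in for moving a pointer
    cache[-1] = new_kv
    return cache
```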