feat: Support sending additional outputs from vLLM inference #70
`additional_outputs_test.py` (new file, 185 lines):

```python
# Copyright 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above copyright
#    notice, this list of conditions and the following disclaimer in the
#    documentation and/or other materials provided with the distribution.
#  * Neither the name of NVIDIA CORPORATION nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

import json

import numpy as np
import pytest
import tritonclient.grpc as grpcclient


class TestAdditionalOutputs:
    _grpc_url = "localhost:8001"
    _model_name = "vllm_opt"
    _sampling_parameters = {"temperature": "0", "top_p": "1"}
    _prompt = "In this example,"

    def _get_inputs(
        self,
        prompt,
        stream=True,
        sampling_parameters=None,
        output_finish_reason=None,
        output_cumulative_logprob=None,
        output_num_token_ids=None,
    ):
        inputs = []

        inputs.append(grpcclient.InferInput("text_input", [1], "BYTES"))
        inputs[-1].set_data_from_numpy(
            np.array([prompt.encode("utf-8")], dtype=np.object_)
        )

        inputs.append(grpcclient.InferInput("stream", [1], "BOOL"))
        inputs[-1].set_data_from_numpy(np.array([stream], dtype=bool))

        if sampling_parameters is not None:
            inputs.append(grpcclient.InferInput("sampling_parameters", [1], "BYTES"))
            inputs[-1].set_data_from_numpy(
                np.array(
                    [json.dumps(sampling_parameters).encode("utf-8")], dtype=np.object_
                )
            )

        if output_finish_reason is not None:
            inputs.append(grpcclient.InferInput("output_finish_reason", [1], "BOOL"))
            inputs[-1].set_data_from_numpy(np.array([output_finish_reason], dtype=bool))

        if output_cumulative_logprob is not None:
            inputs.append(
                grpcclient.InferInput("output_cumulative_logprob", [1], "BOOL")
            )
            inputs[-1].set_data_from_numpy(
                np.array([output_cumulative_logprob], dtype=bool)
            )

        if output_num_token_ids is not None:
            inputs.append(grpcclient.InferInput("output_num_token_ids", [1], "BOOL"))
            inputs[-1].set_data_from_numpy(np.array([output_num_token_ids], dtype=bool))

        return inputs

    def _callback(self, result, error):
        self._responses.append({"result": result, "error": error})

    def _llm_infer(self, inputs):
        self._responses = []
        with grpcclient.InferenceServerClient(self._grpc_url) as client:
            client.start_stream(self._callback)
            client.async_stream_infer(
                self._model_name, inputs=inputs, parameters=self._sampling_parameters
            )
            client.stop_stream()
        assert len(self._responses) > 0

    def _assert_text_output_valid(self):
        text_output = ""
        for response in self._responses:
            result, error = response["result"], response["error"]
            assert error is None
            text_output += result.as_numpy(name="text_output")[0].decode("utf-8")
        assert len(text_output) > 0, "output is empty"
        assert text_output.count(" ") > 4, "output is not a sentence"

    def _assert_finish_reason(self, output_finish_reason):
        for i in range(len(self._responses)):
            result, error = self._responses[i]["result"], self._responses[i]["error"]
            assert error is None
            finish_reason_np = result.as_numpy(name="finish_reason")
            if output_finish_reason is None or output_finish_reason == False:
                assert finish_reason_np is None
                continue
            finish_reason = finish_reason_np[0].decode("utf-8")
            if i < len(self._responses) - 1:
                assert finish_reason == "None"
            else:
                assert finish_reason == "length"

    def _assert_cumulative_logprob(self, output_cumulative_logprob):
        prev_cumulative_logprob = 0.0
        for response in self._responses:
            result, error = response["result"], response["error"]
            assert error is None
            cumulative_logprob_np = result.as_numpy(name="cumulative_logprob")
            if output_cumulative_logprob is None or output_cumulative_logprob == False:
                assert cumulative_logprob_np is None
                continue
            cumulative_logprob = cumulative_logprob_np[0].astype(float)
            assert cumulative_logprob != prev_cumulative_logprob
            prev_cumulative_logprob = cumulative_logprob

    def _assert_num_token_ids(self, output_num_token_ids):
        for response in self._responses:
            result, error = response["result"], response["error"]
            assert error is None
            num_token_ids_np = result.as_numpy(name="num_token_ids")
            if output_num_token_ids is None or output_num_token_ids == False:
                assert num_token_ids_np is None
                continue
            num_token_ids = num_token_ids_np[0].astype(int)
            # TODO: vLLM may return token ids identical to the previous one when
            # streaming, for example:
            #
            # prev: None
            # curr: text=' the', token_ids=array('l', [5])
            #
            # prev: text=' the', token_ids=array('l', [5, 1385])
            # curr: text=' the term', token_ids=array('l', [5, 1385])
            #
            # prev: text=' the term', token_ids=array('l', [5, 1385, 44])
            # curr: text=' the term', token_ids=array('l', [5, 1385, 44])
            #
            # prev: text=' the term', token_ids=array('l', [5, 1385, 44, 48])
            # curr: text=' the term “', token_ids=array('l', [5, 1385, 44, 48])
            #
            # If this is no longer the case in a future release, change the assert
            # to assert num_token_ids > 0.
            assert num_token_ids >= 0

    @pytest.mark.parametrize("stream", [True, False])
    @pytest.mark.parametrize("output_finish_reason", [None, True, False])
    @pytest.mark.parametrize("output_cumulative_logprob", [None, True, False])
    @pytest.mark.parametrize("output_num_token_ids", [None, True, False])
    def test_additional_outputs(
        self,
        stream,
        output_finish_reason,
        output_cumulative_logprob,
        output_num_token_ids,
    ):
        inputs = self._get_inputs(
            self._prompt,
            stream=stream,
            sampling_parameters=self._sampling_parameters,
            output_finish_reason=output_finish_reason,
            output_cumulative_logprob=output_cumulative_logprob,
            output_num_token_ids=output_num_token_ids,
        )
        self._llm_infer(inputs)
        self._assert_text_output_valid()
        self._assert_finish_reason(output_finish_reason)
        self._assert_cumulative_logprob(output_cumulative_logprob)
        self._assert_num_token_ids(output_num_token_ids)
```
CI test script (new file, 66 lines):

```bash
#!/bin/bash
# Copyright 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above copyright
#    notice, this list of conditions and the following disclaimer in the
#    documentation and/or other materials provided with the distribution.
#  * Neither the name of NVIDIA CORPORATION nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

export CUDA_VISIBLE_DEVICES=0
source ../common/util.sh

pip3 install pytest==8.1.1
pip3 install "tritonclient[grpc]"

# Prepare Model
rm -rf models vllm_baseline_output.pkl && mkdir -p models
SAMPLE_MODELS_REPO="../../samples/model_repository"
cp -r $SAMPLE_MODELS_REPO/vllm_model models/vllm_opt
sed -i 's/"gpu_memory_utilization": 0.5/"gpu_memory_utilization": 0.3/' models/vllm_opt/1/model.json

RET=0

# Test
SERVER_LOG="vllm_opt.server.log"
SERVER_ARGS="--model-repository=models"
run_server
if [ "$SERVER_PID" == "0" ]; then
    echo -e "\n***\n*** Failed to start $SERVER\n***"
    cat $SERVER_LOG
    exit 1
fi
set +e
python3 -m pytest --junitxml=test_additional_outputs.xml -s -v additional_outputs_test.py
if [ $? -ne 0 ]; then
    echo -e "\n***\n*** additional_outputs_test FAILED.\n***"
    RET=1
fi
set -e
kill $SERVER_PID
wait $SERVER_PID

if [ $RET -eq 0 ]; then
    echo -e "\n***\n*** Test Passed\n***"
else
    echo -e "\n***\n*** Test FAILED\n***"
fi
exit $RET
```
Documentation (new file, 107 lines):

<!--
# Copyright 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above copyright
#    notice, this list of conditions and the following disclaimer in the
#    documentation and/or other materials provided with the distribution.
#  * Neither the name of NVIDIA CORPORATION nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-->

# Additional Outputs from vLLM

> **Review comment:** Excellent documentation and concise reference from top-level README!
The vLLM backend supports sending additional outputs from vLLM on top of the
usual `text_output` when requested.

All additional outputs are disabled by default and need to be enabled on a
per-request basis. If enabled, the corresponding output tensor will be set on
all responses from the request.
## Supported Additional Outputs

### Finish Reason

The reason why the sequence is finished. See
[here](https://github.com/vllm-project/vllm/blob/v0.6.3.post1/vllm/outputs.py#L26)
for more details.

To enable, set the `output_finish_reason` input tensor to `True`. The reason
will be sent as a string on the `finish_reason` output tensor.

> **Review comment:** FYI - in the TRT-LLM backend, the input tensors that enable optional outputs follow a "return_XXX" naming convention. I don't think we must align on the naming, though; I'd prefer to use what vLLM users are used to.
>
> **Reply:** This is our first attempt to send optional outputs from vLLM; we can name the input tensor switches "return_*". @rmccorm4 what do you think?
>
> **Reply:** The model.py on the CI pipeline has been replaced with the version on this commit.
>
> **Reply:** I'm in favor of being as similar across both backends as possible, unless there's a clear framework-specific idiom like Kris mentioned (e.g. something vLLM users are already comfortable with).

Supported since r24.11.
### Cumulative Log Probabilities

The cumulative log probability of the generated output text. See
[here](https://github.com/vllm-project/vllm/blob/v0.6.3.post1/vllm/outputs.py#L22)
for more details.

To enable, set the `output_cumulative_logprob` input tensor to `True`. The
floating point value will be sent on the `cumulative_logprob` output tensor.

Supported since r24.11.
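
For illustration, here is a minimal client sketch for this output. It mirrors
the finish-reason example in the Examples section below and assumes the same
placeholder model name `vLLM_model_name` and the default gRPC endpoint
`localhost:8001`:

```python
import numpy as np
import tritonclient.grpc as grpcclient

inputs = []

inputs.append(grpcclient.InferInput("text_input", [1], "BYTES"))
inputs[-1].set_data_from_numpy(
    np.array(["example prompt".encode("utf-8")], dtype=np.object_)
)

# Opt in to the additional output for this request only.
inputs.append(grpcclient.InferInput("output_cumulative_logprob", [1], "BOOL"))
inputs[-1].set_data_from_numpy(np.array([True], dtype=bool))

def callback(result, error):
    if error is None:
        # A single floating point value is set on every response.
        print(result.as_numpy(name="cumulative_logprob")[0])

with grpcclient.InferenceServerClient("localhost:8001") as client:
    client.start_stream(callback)
    client.async_stream_infer("vLLM_model_name", inputs=inputs)
    client.stop_stream()
```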
|
||
### Number of token IDs | ||
|
||
The number of token IDs of the generated output text sent on this response. It | ||
is the difference in length of the token IDs generated from the last response to | ||
this response. If this is the first response, the last response length is | ||
presumed to be zero. See | ||
[here](https://github.com/vllm-project/vllm/blob/v0.6.3.post1/vllm/outputs.py#L21) | ||
for more details on the token IDs of the generated output text. | ||
|
||
To enable, set `output_num_token_ids` input tensor to `True`. The unsigned | ||
integer value will be sent on the `num_token_ids` output tensor. | ||
|
||
Supported since r24.11. | ||
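
Because each response reports only the delta, a client can recover the total
number of generated tokens by summing the values across responses. A minimal
sketch, again assuming the placeholder model name `vLLM_model_name` and a
streaming request (the accumulation logic here is illustrative, not part of
the backend API):

```python
import numpy as np
import tritonclient.grpc as grpcclient

inputs = []

inputs.append(grpcclient.InferInput("text_input", [1], "BYTES"))
inputs[-1].set_data_from_numpy(
    np.array(["example prompt".encode("utf-8")], dtype=np.object_)
)

inputs.append(grpcclient.InferInput("stream", [1], "BOOL"))
inputs[-1].set_data_from_numpy(np.array([True], dtype=bool))

inputs.append(grpcclient.InferInput("output_num_token_ids", [1], "BOOL"))
inputs[-1].set_data_from_numpy(np.array([True], dtype=bool))

total_output_tokens = 0

def callback(result, error):
    global total_output_tokens
    if error is None:
        # Each response reports only the tokens generated since the previous
        # response, so the running sum is the total count.
        total_output_tokens += int(result.as_numpy(name="num_token_ids")[0])

with grpcclient.InferenceServerClient("localhost:8001") as client:
    client.start_stream(callback)
    client.async_stream_infer("vLLM_model_name", inputs=inputs)
    client.stop_stream()

print(f"total output tokens: {total_output_tokens}")
```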

## Examples

### Add Finish Reason to Outputs

```python
import numpy as np
import tritonclient.grpc as grpcclient

inputs = []

inputs.append(grpcclient.InferInput("text_input", [1], "BYTES"))
inputs[-1].set_data_from_numpy(
    np.array(["example prompt".encode("utf-8")], dtype=np.object_)
)

inputs.append(grpcclient.InferInput("output_finish_reason", [1], "BOOL"))
inputs[-1].set_data_from_numpy(np.array([True], dtype=bool))

def callback(result, error):
    ...
    print(result.as_numpy(name="finish_reason"))

with grpcclient.InferenceServerClient("localhost:8001") as client:
    client.start_stream(callback)
    client.async_stream_infer("vLLM_model_name", inputs=inputs, ...)
    client.stop_stream()
```

## Notes

* Enabling additional outputs may impact performance; only request additional
  outputs when necessary.
Review discussion on the "Number of Token IDs" section:

> **Comment:** I'm not totally following this section. Can you show me an
> example output from calling `/generate_stream` on a vLLM model with
> `output_num_token_ids=True` and `stream=True`?
>
> **Reply:** On 0.5.3.post1: which is expected, but on 0.5.5:
>
> **Reply:** It appears the previous output (the `token_ids` field) is
> overwritten by the current output when the engine is streaming outputs for a
> request.
>
> **Reply:** The issue appears fixed on a later release, i.e. 0.6.3.post1:
>
> **Reply:** I think `num_token_ids` could probably be better named to reflect
> that it is the `output` tokens. OpenAI APIs return information about both
> `input` (prompt/context) and `output` (decode, generation) tokens, so we
> should leave room for that with clear naming, even if we only implement the
> `output` tokens in this PR. How about `num_output_tokens`, and in the future
> if requested, `num_input_tokens`?
>
> **Reply:** The original ask is for "number of tokens", but I agree that we
> could return `token_ids` instead, because from a feature perspective we make
> the "return_*" space less crowded if outputting `token_ids` is requested in
> the future, in addition to `num_token_ids`.
>
> **Reply:** Sorry, edited my responses above. While returning `token_ids` is
> probably more generalized, for large sequences it could actually be more
> costly than returning a single number if all they want is the count (for
> non-streaming), and it would introduce some postprocessing logic if all they
> want is the count, so I removed that part for now. I'm a little torn on it.
> We can probably just stick to the ask of returning the token counts, and
> consider either verbose logging (debugging) or further outputs for the actual
> token IDs if requested later.
>
> **Commit:** Return token ids instead of number of token ids. The model.py on
> the CI pipeline has been replaced with the version on this commit.
>
> **Reply:** Yes, we can just add `token_ids` if requested later. Will update
> the name `num_token_ids` to `num_output_tokens`.
>
> **Commit:** Rename num_token_ids to num_output_tokens. The model.py on the CI
> pipeline has been replaced with the version on this commit.
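
For context on the per-response count discussed above: it reduces to a length
difference between consecutive vLLM outputs. A hypothetical sketch of that
calculation (the function name and structure are illustrative, not the
backend's actual model.py):

```python
from typing import Optional

from vllm.outputs import RequestOutput


def num_new_tokens(prev: Optional[RequestOutput], curr: RequestOutput) -> int:
    """Count tokens added between two consecutive streamed outputs."""
    prev_len = len(prev.outputs[0].token_ids) if prev is not None else 0
    curr_len = len(curr.outputs[0].token_ids)
    # On some releases (e.g. vLLM 0.5.5) the current output may repeat the
    # previous token_ids, so the difference can legitimately be zero.
    return curr_len - prev_len
```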