impl of computing a message's token count for providers other than OpenAI #406

Open
Intex32 opened this issue Sep 8, 2023 · 0 comments
Intex32 commented Sep 8, 2023

In LLM there is a function called tokensFromMessages. The current default implementation uses the model's encoding (from ModelType) to compute the token count locally.

Problem: as far as I know, the encoding is not made publicly available by Google. Thus we have to make an API call to GCP (https://cloud.google.com/vertex-ai/docs/generative-ai/get-token-count).

TODO: the default implementation of tokensFromMessages has to be removed and replaced by provider-specific implementations (for OpenAI based on the encoding, and for GCP based on an external API call).

depends on #393
depends on #405
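The split described above could look roughly like the following sketch (in Python for illustration; the class and parameter names `OpenAIModel`, `GcpModel`, `encode`, and `count_tokens_api` are hypothetical stand-ins, not the project's actual API):

```python
from abc import ABC, abstractmethod


class LLM(ABC):
    """Base interface: no default tokensFromMessages; each provider must supply one."""

    @abstractmethod
    def tokens_from_messages(self, messages: list[str]) -> int: ...


class OpenAIModel(LLM):
    # OpenAI publishes its encodings, so the count can be computed locally.
    # `encode` is a stand-in for the model's tokenizer, which maps a string
    # to a sequence of token ids.
    def __init__(self, encode):
        self.encode = encode

    def tokens_from_messages(self, messages: list[str]) -> int:
        return sum(len(self.encode(m)) for m in messages)


class GcpModel(LLM):
    # Google does not publish the encoding, so the count has to come from
    # the Vertex AI countTokens endpoint; `count_tokens_api` is a stand-in
    # for that remote call (one request per message here).
    def __init__(self, count_tokens_api):
        self.count_tokens_api = count_tokens_api

    def tokens_from_messages(self, messages: list[str]) -> int:
        return sum(self.count_tokens_api(m) for m in messages)
```

The key design point is that the base interface carries no default: a new provider cannot silently fall back to an encoding it does not have, which is exactly the bug this issue is about.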

Intex32 added the help wanted (Extra attention is needed), HuggingFace, and Vertex AI labels on Sep 8, 2023.