I am trying to use lm-eval-harness to evaluate Claude Sonnet, Claude Haiku, Mistral, Meta Llama, and Titan models, accessed through AWS Bedrock. In anthropic_llms.py, the code calls self.client.get_tokenizer() to obtain the Anthropic tokenizer. However, I am using an AWS Bedrock client rather than an Anthropic client. How can I get access to this tokenizer without an Anthropic API key? The tokenizer does not appear to be available through tiktoken or HuggingFace. Is there another way to run lm-eval-harness against Amazon Bedrock models?
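For context, one possible direction (a sketch only, not a confirmed lm-eval-harness feature) is to bypass the Anthropic client entirely and call Bedrock through boto3's `bedrock-runtime` `invoke_model` API, wrapping it in a custom harness model class. The helper below only builds the request body in the Messages-API format that Bedrock documents for Anthropic models; the function name `build_claude_bedrock_request` is hypothetical, and the commented usage requires AWS credentials.

```python
import json

def build_claude_bedrock_request(prompt, max_tokens=256):
    """Build the JSON request body for a Claude model on AWS Bedrock.

    The "anthropic_version" value and field names follow the Bedrock
    Messages-API format documented by AWS; verify against your model's
    docs before relying on this.
    """
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)

# Usage sketch (needs AWS credentials and Bedrock model access):
#
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# response = client.invoke_model(
#     modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model ID
#     body=build_claude_bedrock_request("Hello"),
# )
# result = json.loads(response["body"].read())
```

Note this sidesteps the tokenizer question rather than answering it: completion-style generation works this way, but any harness task that needs exact token counts or logprobs would still lack an official tokenizer for these models through Bedrock.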