
Cannot bypass the OPENAI_API_KEY requirement when using local server #6

Closed
18a93a664c opened this issue Dec 12, 2023 · 8 comments

@18a93a664c

This is a continuation from my previous issue...

I'm using a local server setup with the LM Studio app. This dev environment does not require an OpenAI API key; they instruct you to leave that field blank.

I created a .env file and didn't assign a value to OPENAI_API_KEY, but I still get an error message stating it needs to be set. I have also commented out every line of code that references api_key, and I still get the same error.

How do I bypass the need for an api key?

@brainlid
Owner

brainlid commented Dec 14, 2023

I'm guessing the .env file you created isn't being read. Nothing in the project reads that file; it relies on something else, like direnv, to load the values into the environment.
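As a quick check (a sketch; the `:langchain, :openai_key` config key is the one used elsewhere in this thread), you can confirm what the running app actually sees from an `iex -S mix` session:

```elixir
# Run inside `iex -S mix`. If the env var is nil, your .env file was
# never loaded into the process environment; if the app config is also
# nil or empty, the config.exs fallback isn't set either.
IO.inspect(System.get_env("OPENAI_API_KEY"), label: "env var")
IO.inspect(Application.get_env(:langchain, :openai_key), label: "app config")
```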

Try this:

Edit config/config.exs line 54:

config :langchain, :openai_key, "UNUSED"

That should do it.
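If you want the same config to keep working when a real key is exported, one option is a fallback (a sketch; `System.get_env/2` with a default is standard Elixir, but whether the demo resolves the key this way is an assumption):

```elixir
# config/config.exs — use the real key when one is exported, otherwise a
# dummy value so a local OpenAI-compatible server (e.g. LM Studio) works.
# The local server ignores the key, but it must be a non-empty string.
config :langchain, :openai_key, System.get_env("OPENAI_API_KEY", "UNUSED")
```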

My other guess is that it was trying to connect to the actual OpenAI API and the errors requiring the key may have been coming from there.

@brainlid
Owner

Also, make sure you're using the latest version of the langchain library and that you are providing the custom local endpoint:

https://github.com/brainlid/langchain?tab=readme-ov-file#alternative-openai-compatible-apis

@18a93a664c
Author

18a93a664c commented Dec 15, 2023

Adding "UNUSED" to the config.exs file still doesn't bypass the check.

This is so odd. The error message simply states:

Incorrect API key provided: UNUSED. You can find your API key at https://platform.openai.com/account/api-keys.

There is some code forcing an API key to be included. In the get_api_key function, I see your comment:

if no API key is set default to "" which will raise a Stripe API error

Could this be why a blank value is rejected?

By the way, I'm adding the endpoint to the chat_open_ai.ex as follows:

    field :endpoint, :string, default: "http://localhost:1234/v1/chat/completions"
    # field :model, :string, default: "gpt-4"
    field :model, :string, default: "mistral-7b-instruct-v0.2.Q5_K_M"

I opted to do it this way because I don't understand which file I'm supposed to modify to add alternate endpoints per your updated langchain v0.15:

{:ok, updated_chain, %Message{} = message} =
  LLMChain.new!(%{
    llm: ChatOpenAI.new!(%{endpoint: "http://localhost:1234/v1/chat/completions"})
  })
  |> LLMChain.add_message(Message.new_user!("Hello!"))
  |> LLMChain.run()

@brainlid
Owner

Ha! The "Stripe API error" message was because I copied the initial API key management code from the Elixir Stripe client library. I updated the comment to remove the Stripe reference.

From what I can tell, the error message is coming from OpenAI. Meaning the langchain library is not raising the error. Instead, an API request is being made to OpenAI and they are rejecting the call because an API key is required.

Depending on how you're doing the field overrides, it probably isn't working.

The demo project has been updated to use a newer version of the langchain library. Ensure you are using v0.1.6 or later. A recent fix was made to allow the endpoint setting to be overridden.

With that done, you can test out the override in two places in the demo project:

In those code locations, the LLMChain is set up and the OpenAI model is configured. You'd add the endpoint override there and change the model.
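A sketch of what that override might look like at those call sites, using the endpoint and model values quoted earlier in this thread (the exact chain-building code in the demo may differ):

```elixir
# Configure the model at the call site instead of editing the library's
# chat_open_ai.ex, pointing it at the local LM Studio server.
llm =
  ChatOpenAI.new!(%{
    endpoint: "http://localhost:1234/v1/chat/completions",
    model: "mistral-7b-instruct-v0.2.Q5_K_M"
  })

{:ok, _updated_chain, response} =
  LLMChain.new!(%{llm: llm})
  |> LLMChain.add_message(Message.new_user!("Hello!"))
  |> LLMChain.run()
```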

Hope that helps!

@18a93a664c
Author

SUCCESS!

Updating to v0.1.6 and placing the overrides in those two locations resolved the error message. I'm now having conversations locally using Mistral 7B Instruct.

@brainlid
Owner

Awesome @18a93a664c! I'd love to hear how well it does at running functions. The physical fitness agent has three functions it uses.

  • records your fitness plan
  • gets/reads fitness logs
  • creates new fitness logs

Glad it's working for you!

@18a93a664c
Author

The fitness agent doesn't record anything with this setup. I'm entering the same responses as you did in your video and I don't receive the "recorded" responses from the agent.

@brainlid
Owner

Thanks for trying that out and letting me know! Functions are really hard (like impossible) to get working right when the LLM doesn't specifically support them. I'm not surprised they don't work, but I am a little bummed. I was hoping they had figured out a way to do it with a general model like Mistral.
