
[Feature]: Access OpenAI ChatGPT via a reverse proxy #900

Closed
hongyi-zhao opened this issue Jun 22, 2023 · 53 comments

Comments

@hongyi-zhao (Collaborator) commented Jun 22, 2023

Class

Large language model

Feature Request

The following two features would be very nice to have:

  1. Cloudflare-bypass proxy support, e.g. https://github.com/acheong08/ChatGPT-Proxy-V4
  2. A reverse proxy built on top of item 1, accessed with an Access token, thereby bypassing the OpenAI API expiration problem.
@binary-husky (Owner) commented Jun 24, 2023

Thanks, but given that the API is already quite cheap, I don't want to introduce any more reverse-engineered components; the maintenance burden of reverse-engineered code is too heavy...
Discussion is welcome.


July 5 addendum:

Adding the following setting to config.py enables the reverse-proxy projects provided by @acheong08:

https://github.com/acheong08/ChatGPTProxy
https://github.com/acheong08/ChatGPT-to-API/

API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions":"https://reverse-proxy-url/v1/chat/completions"}
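For illustration, API_URL_REDIRECT is just a URL-to-URL mapping consulted before each request. A minimal Python sketch of how such a mapping could be applied (illustrative only, not gpt_academic's actual code; the reverse-proxy URL is a placeholder):

```python
# Sketch: resolve an endpoint through an API_URL_REDIRECT-style mapping.
# Hypothetical stand-in for the project's internal redirect handling.
API_URL_REDIRECT = {
    "https://api.openai.com/v1/chat/completions":
        "https://reverse-proxy-url/v1/chat/completions",
}

def resolve_endpoint(url: str) -> str:
    """Return the redirected URL if one is configured, else the original."""
    return API_URL_REDIRECT.get(url, url)

# Usage: resolve_endpoint("https://api.openai.com/v1/chat/completions")
# yields the reverse-proxy URL; unmapped URLs pass through unchanged.
```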

@hongyi-zhao (Collaborator, Author) commented Jun 24, 2023

V1.py is a concrete implementation based on ChatGPT-Proxy-V4; it could perhaps be ported as a plugin and integrated into gpt_academic.

@acheong08 commented Jul 1, 2023

There is no need to use ChatGPT-Proxy-V4 or add a dependency on revChatGPT.

To use ChatGPT for free, just swap the endpoint here:

openai_endpoint = "https://api.openai.com/v1/chat/completions"

To one hosted via https://github.com/acheong08/ChatGPT-to-API/, which serves as a clone of the official API but works via chat.openai.com.

A free endpoint to try out: https://free.churchless.tech/v1/chat/completions

===================================================

Edit by binary-husky:
For simplicity, adding this line to config.py or config_private.py will work:

API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions":"https://A-free-endpoint-to-try-out/v1/chat/completions"}

@hongyi-zhao (Collaborator, Author) commented Jul 1, 2023

Nice, it does the trick.

BTW, I still have a couple of questions:

  1. Does this method rely on the access token?
  2. Is the access token obtained from the API key?

@acheong08

Does this method rely on the access token?

yes

Is the access token obtained from the API key?

No. Cycled through

@hongyi-zhao (Collaborator, Author)

No. Cycled through

Could you please describe the detailed steps?

@acheong08

I upload a list of access tokens and it loops through them for each request. No API key is used/accepted.
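The rotation described above can be sketched as a simple round-robin over a token list (a hypothetical stand-in for the proxy's internal logic; the token values are placeholders):

```python
# Sketch: round-robin rotation over a pool of access tokens,
# picking the next one for each outgoing request.
from itertools import cycle

tokens = ["token-a", "token-b", "token-c"]  # placeholder access tokens
rotation = cycle(tokens)

def next_token() -> str:
    """Return the next access token in round-robin order."""
    return next(rotation)
```

With this scheme each request spreads load across the pool, and an expired token only affects a fraction of requests until it is replaced.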

@hongyi-zhao (Collaborator, Author)

I upload a list of access tokens and it loops through them for each request. No API key is used/accepted.

This means that you must periodically check the expiration status of these access tokens and update them dynamically.

@acheong08

I have a bash script running in a for loop to update every week. It's all automated.

@hongyi-zhao (Collaborator, Author) commented Jul 3, 2023

Some additional related questions:

  1. One access token needs one account, so you must have many accounts.
  2. If I use the free endpoint you provide, i.e. https://free.churchless.tech/v1/chat/completions, then no API key of mine is needed. But in my tests, if I don't set the API_KEY variable when using this free endpoint with https://github.com/binary-husky/gpt_academic, it doesn't work at all, as shown below:
werner@X10DAi:~$ gpt_academic 
 [PROXY] 网络代理状态:未配置。无代理状态下很可能无法访问OpenAI家族的模型。建议:检查USE_PROXY选项是否修改。 
 [API_KEY] 本项目现已支持OpenAI和API2D的api-key。也支持同时填写多个api-key,如API_KEY="openai-key1,openai-key2,api2d-key3" 
 [API_KEY] 您既可以在config.py中修改api-key(s),也可以在问题输入区输入临时的api-key(s),然后回车键提交后即可生效。 
 [API_KEY] 正确的 API_KEY 是'sk'开头的51位密钥(OpenAI),或者 'fk'开头的41位密钥,请在config文件中修改API密钥之后再运行。 
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
[ENV_VAR] 尝试加载API_URL_REDIRECT,默认值:{} --> 修正值:{"https://api.openai.com/v1/chat/completions": "https://free.churchless.tech/v1/chat/completions"}
 [ENV_VAR] 成功读取环境变量API_URL_REDIRECT 
[ENV_VAR] 尝试加载LLM_MODEL,默认值:gpt-3.5-turbo --> 修正值:gpt-3.5-turbo-16k
 [ENV_VAR] 成功读取环境变量LLM_MODEL 
所有问询记录将自动保存在本地目录./gpt_log/chat_secrets.log, 请注意自我隐私保护哦!
查询代理的地理位置,返回的结果是{}
代理配置 无, 代理所在地:China
如果浏览器没有自动打开,请复制并转到以下URL:
	(亮色主题): http://localhost:55087
	(暗色主题): http://localhost:55087/?__theme=dark
正在执行一些模块的预热...
正在加载tokenizer,如果是第一次运行,可能需要一点时间下载参数
自动更新程序:已禁用
加载tokenizer完毕
正在加载tokenizer,如果是第一次运行,可能需要一点时间下载参数
加载tokenizer完毕
Running on local URL:  http://0.0.0.0:55087

To create a public link, set `share=True` in `launch()`.
[GFX1-]: glxtest: VA-API test failed: no supported VAAPI profile found.
ATTENTION: default value of option mesa_glthread overridden by environment.
ATTENTION: default value of option mesa_glthread overridden by environment.
ATTENTION: default value of option mesa_glthread overridden by environment.

(screenshot attached)

@acheong08

One access token needs one account, so you must have many accounts.

500 for public use and 500 for private use

@acheong08

If I use the free endpoint you provide, i.e. https://free.churchless.tech/v1/chat/completions, then no API key of mine is needed. But in my tests, if I don't set the API_KEY variable when using this free endpoint with https://github.com/binary-husky/gpt_academic, it doesn't work at all, as shown below:

This is a front-end thing. Set it to any random value.

@hongyi-zhao (Collaborator, Author)

I still don't understand what you mean by "set randomly". More precisely, is it really necessary to set API_KEY when working via your free endpoint?

On the other hand, I also tried to call your revChatGPT.V3 as follows:

This way works:

$ API_URL="https://free.churchless.tech/v1/chat/completions" python -m revChatGPT.V3 --api_key sk-xxx --truncate_limit $(( 16 * 1024 )) --model gpt-3.5-turbo-16k

This way fails:

$ API_URL="https://free.churchless.tech/v1/chat/completions" python -m revChatGPT.V3 --truncate_limit $(( 16 * 1024 )) --model gpt-3.5-turbo-16k

The error message is as follows:

V3.py: error: the following arguments are required: --api_key

@hongyi-zhao (Collaborator, Author)

I have a bash script running in a for loop to update every week. It's all automated.

Does this script call https://github.com/acheong08/OpenAIAuth under the hood?

500 for public use and 500 for private use

  1. How to create so many accounts automatically?
  2. What's the purpose of the 500 for private use?

@acheong08

On the other hand, I also tried to call your revChatGPT.V3 as follows:

V3 uses the official API. It is not relevant here

@acheong08

I still don't understand what you mean by "set randomly". More precisely, is it really necessary to set API_KEY when working via your free endpoint?

It is necessary to set an API key to use this specific repository, but you can set it to something like blahblahblah and it will still work with the free endpoint.

@acheong08

How to create so many accounts automatically?

Browser automation with SMS verification from smspool

What's the purpose of the 500 for private use?

Automation stuff. Closed source

@hongyi-zhao (Collaborator, Author) commented Jul 4, 2023

V3 uses the official API. It is not relevant here

If so, why do you still read the environment variable as follows?

os.environ.get("API_URL") or "https://api.openai.com/v1/chat/completions",

@hongyi-zhao (Collaborator, Author)

I upload a list of access tokens and it loops through them for each request. No API key is used/accepted.

This also means that each of these access tokens is bound to exactly one instance that provides the service, am I right?

@hongyi-zhao (Collaborator, Author)

Browser automation with SMS verification from smspool

Are there any free SMS-pool providers for this purpose? BTW, I noticed the following website, but I'm not sure whether it's truly free:

(screenshot attached)

@acheong08

If so, why do you still read the environment variable as follows:

V3 uses the official API. Some people require proxies

This also means that each of these access tokens is bound to exactly one instance that provides the service, am I right?

Yes

Are there any free SMS-pool providers for this purpose? BTW, I noticed the following website, but I'm not sure whether it's truly free:

None that work

@hongyi-zhao (Collaborator, Author) commented Jul 4, 2023

I still don't understand what you mean by "set randomly". More precisely, is it really necessary to set API_KEY when working via your free endpoint?

It is necessary to set an API key to use this specific repository, but you can set it to something like blahblahblah and it will still work with the free endpoint.

If so, why do you still read the environment variable as follows:

V3 uses the official API.

Based on my further tests, both gpt_academic and your revChatGPT.V3 work with an arbitrary fake but well-formed API key, generated as follows, when calling your free endpoint:

werner@X10DAi:~$ echo sk-$(tr -dc A-Za-z0-9 </dev/urandom | head -c 48)
sk-JF7HaOK6K01wTNxR6pjoH1VB2uT58xdrDMFn6xAdlioOGmET

Some people require proxies

So I come to the following question: with a customized API_URL, can I tweak V3 to use email/password-based authentication, just like the one used by V1?

@hongyi-zhao (Collaborator, Author) commented Jul 4, 2023

None that work

Then, what's your solution for such a tedious job?

@acheong08

None that work

Then, what's your solution for such a tedious job?

I pay $0.1 per account for sms verification

@acheong08

So I come to the following question: with a customized API_URL, can I tweak V3 to use email/password-based authentication, just like the one used by V1?

Yes. If you include an access token as the API key in a request to ChatGPT-to-API, it will use your access token instead of the built-in ones.
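Passing your own access token in place of an API key presumably means putting it in the standard OpenAI-style Authorization header; a minimal sketch under that assumption (whether the bearer value is an API key or an access token is then decided server-side by ChatGPT-to-API):

```python
# Sketch: build OpenAI-compatible request headers where the bearer value
# may be either a real API key or a ChatGPT access token (assumption based
# on the comment above; not taken from ChatGPT-to-API's source).
def build_headers(key_or_token: str) -> dict:
    """Return headers for a chat-completions request to a
    ChatGPT-to-API instance."""
    return {
        "Authorization": f"Bearer {key_or_token}",
        "Content-Type": "application/json",
    }

# Usage: pass an access token (e.g. the long "eyJhbGciOi..." JWT)
# exactly where an "sk-..." API key would normally go.
headers = build_headers("eyJhbGciOi...your-access-token")
```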

@hongyi-zhao (Collaborator, Author) commented Jul 6, 2023

My original intention in opening this issue was not to advocate or suggest using free endpoints provided by others, because such an influx of traffic can easily overwhelm those endpoints. Instead, the goal is to encourage people to build their own free endpoints with a tool like ChatGPT-to-API, which is worth considering for its superior safety, efficiency, and stability.

See acheong08/ChatGPT-to-API#81 for the related discussion.

@acheong08

Replace the demo with your own endpoint. The endpoint provided is an instance of ChatGPT-to-API

@binary-husky (Owner) commented Jul 6, 2023 via email

@binary-husky (Owner) commented Jul 6, 2023 via email

@binary-husky changed the title from "[Feature]: Access OpenAI ChatGPT via a free reverse proxy" to "[Feature]: Access OpenAI ChatGPT via a ChatGPT-to-API reverse proxy" Jul 6, 2023
@binary-husky changed the title from "[Feature]: Access OpenAI ChatGPT via a ChatGPT-to-API reverse proxy" to "[Feature]: Access OpenAI ChatGPT via a ChatGPT-to-API & ChatGPTProxy reverse proxy" Jul 6, 2023
@binary-husky (Owner)

@acheong08 Hello, I'm trying to deploy ChatGPT-to-API,
but I'm getting 500 or 404 errors. This is my docker-compose configuration; is it correct?

version: '3'

services:
  app:
    image: acheong08/chatgpt-to-api # always use latest; re-pull this tag's image when updating
    container_name: chatgpttoapi
    restart: unless-stopped
    ports:
      - '8080:8080'
    environment:
      SERVER_HOST: 0.0.0.0
      SERVER_PORT: 8080
      ADMIN_PASSWORD: TotallySecurePassword
      PUID: user-DwkoYzRgkApoWn2Yxxxxxxxxx
      http_proxy: socks5h://localhost:11284
      Access_Token: eyJhbGciOiJSUzI1Nxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

(screenshot attached)

@hongyi-zhao (Collaborator, Author) commented Jul 7, 2023

  1. Do you really have a Plus account?
  2. Remove PUID and try again; see here for the related comment.
  3. If you really want to use PUID, it should be obtained from your browser's cookie cache, because PUID is the _puid cookie; the description here, shown below, is wrong:

(screenshot attached)

@binary-husky (Owner)

  1. Do you really have a Plus account?
  2. Remove PUID and try again; see here for the related comment.
  3. If you really want to use PUID, it should be obtained from your browser's cookie cache, because PUID is the _puid cookie; the description here, shown below, is wrong:

(screenshot attached)

  1. No, PUID = Plus_User_ID rather than Personal_User_ID?
  2. What is the alternative if I do not have Plus? (If I had Plus and had bound a credit card, I could simply have used the APIs.)

@acheong08

No, PUID = Plus_User_ID rather than Personal_User_ID?

yes

What is the alternative if I do not have Plus? (If I had Plus and had bound a credit card, I could simply have used the APIs.)

Plus is not required; you just have to deal with rate limits. The alternative is proxies, since the limit is IP-based.

@hongyi-zhao (Collaborator, Author)

@binary-husky

2. What is the alternative if I do not have Plus? (If I had Plus and had bound a credit card, I could simply have used the APIs.)

I don't think so, because OpenAI doesn't provide API access even for Plus accounts. Instead, API access is invited and managed by OpenAI separately and doesn't ship with the Plus subscription.

@hongyi-zhao (Collaborator, Author) commented Jul 7, 2023

@acheong08

Plus is not required; you just have to deal with rate limits. The alternative is proxies, since the limit is IP-based.

The key is to use a proxy pool managed by, say, HAProxy, which also answers the question I filed.

@hongyi-zhao (Collaborator, Author) commented Jul 7, 2023

@acheong08

So it seems that the following comment in the template docker-compose.yml is not quite accurate:

https://github.com/acheong08/ChatGPT-to-API/blob/091f2b4851aba597a5f47e1d0532ad3cf071b32d/docker-compose.yml#L16-L18

      # If the parameter API_REVERSE_PROXY is empty, the default request URL is https://chat.openai.com/backend-api/conversation, and the PUID is required.
      # You can get your PUID for Plus account from the following link: https://chat.openai.com/api/auth/session.
      PUID: xxx

@binary-husky (Owner) commented Jul 7, 2023

I still have some problems. Without PUID, should I use an access token?

Should I pass Access_Token, AccessToken, or accessToken?

The following configuration still returns a 500 error; the proxy connection to the US has been tested and is fine, and the docker port mapping has been added as well:

version: '3'

services:
  app:
    image: acheong08/chatgpt-to-api # always use latest; re-pull this tag's image when updating
    container_name: chatgpttoapi
    restart: unless-stopped
    ports:
      - '8080:8080'
    environment:
      SERVER_HOST: 0.0.0.0
      SERVER_PORT: 8080
      ADMIN_PASSWORD: TotallySecurePassword
      http_proxy:  socks5h://docker.for.win.localhost:11284
      https_proxy: socks5h://docker.for.win.localhost:11284
      Access_Token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImtpxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

@hongyi-zhao (Collaborator, Author) commented Jul 7, 2023

@acheong08
Just to confirm, is the following bypass endpoint built with ChatGPTProxy?

https://github.com/acheong08/ChatGPT-to-API/blob/091f2b4851aba597a5f47e1d0532ad3cf071b32d/docker-compose.yml#L15
API_REVERSE_PROXY: https://bypass.churchless.tech/api/conversation

@binary-husky (Owner)

@binary-husky As described here, the access token retrieval has been automated. So, I think you should instead use OPENAI_EMAIL and OPENAI_PASSWORD, as documented here.

Still no luck; it gives me some bad errors this time:

(screenshot attached)

@acheong08

Just to confirm, is the following bypass endpoint built with ChatGPTProxy?

Oops, it should have been removed. It works standalone.

@acheong08

The docker version is unmaintained; the binary is very lightweight.

@acheong08

A binary built via GitHub Actions is available in the releases: https://github.com/acheong08/ChatGPT-to-API/releases/tag/1.5.2

@hongyi-zhao (Collaborator, Author) commented Jul 31, 2023

Based on the discussions in "GPT Academic Developers #chat2" (QQ group 610599535), the following patch does the trick:

$ git log -1
commit 27f65c251a83c9b19ea5707938ae51683f1f2d8a (HEAD -> master, origin/master, origin/HEAD)
Author: binary-husky <[email protected]>
Date:   Mon Jul 31 15:57:18 2023 +0800

    Update 图片生成.py
$ git diff
diff --git a/request_llm/bridge_chatgpt.py b/request_llm/bridge_chatgpt.py
index ea48fba..96af833 100644
--- a/request_llm/bridge_chatgpt.py
+++ b/request_llm/bridge_chatgpt.py
@@ -186,15 +186,16 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
                 try:
                     chunk_decoded = chunk.decode()
                     # 前者是API2D的结束条件,后者是OPENAI的结束条件
-                    if ('data: [DONE]' in chunk_decoded) or (len(json.loads(chunk_decoded[6:])['choices'][0]["delta"]) == 0):
+                    if 'data: [DONE]' in chunk_decoded:
                         # 判定为数据流的结束,gpt_replying_buffer也写完了
                         logging.info(f'[response] {gpt_replying_buffer}')
                         break
                     # 处理数据流的主体
                     chunkjson = json.loads(chunk_decoded[6:])
                     status_text = f"finish_reason: {chunkjson['choices'][0]['finish_reason']}"
-                    # 如果这里抛出异常,一般是文本过长,详情见get_full_error的输出
-                    gpt_replying_buffer = gpt_replying_buffer + json.loads(chunk_decoded[6:])['choices'][0]["delta"]["content"]
+                    delta = chunkjson['choices'][0]["delta"]
+                    if "content" in delta:
+                        gpt_replying_buffer = gpt_replying_buffer + delta["content"]
                     history[-1] = gpt_replying_buffer
                     chatbot[-1] = (history[-2], history[-1])
                     yield from update_ui(chatbot=chatbot, history=history, msg=status_text) # 刷新界面
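The patched logic above can be sketched as a standalone chunk handler (a simplified, hypothetical reimplementation, not the project's actual code): treat `data: [DONE]` as the sole stream terminator, and append `delta["content"]` only when present, since reverse-proxy backends may emit deltas without a `content` field.

```python
import json

def handle_chunk(chunk_decoded: str, buffer: str):
    """Process one decoded SSE chunk; return (buffer, done).

    Mirrors the patch: 'data: [DONE]' ends the stream, and a delta
    without a 'content' key is skipped instead of raising KeyError.
    """
    if "data: [DONE]" in chunk_decoded:
        return buffer, True
    chunkjson = json.loads(chunk_decoded[6:])  # strip the 'data: ' prefix
    delta = chunkjson["choices"][0]["delta"]
    if "content" in delta:
        buffer += delta["content"]
    return buffer, False
```

The key difference from the original code is that a content-free delta (e.g. the initial role-only delta some backends send) no longer terminates or crashes the stream.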

Build, configure, start, and test ChatGPT-to-API as follows:

$ git clone https://github.com/acheong08/ChatGPT-to-API.git && cd ChatGPT-to-API && go build
# Create the following configuration files and adjust their content according to your environment:
$ cat accounts.txt 
username:password
$ cat proxies.txt 
socks5://127.0.0.1:18890
$ SERVER_PORT=18080 ./freechatgpt

Then, tell gpt_academic the corresponding endpoint as follows:

API_URL_REDIRECT='{"https://api.openai.com/v1/chat/completions": "http://127.0.0.1:18080/v1/chat/completions"}'

See below for the related discussions:

acheong08/ChatGPT-to-API#104
linweiyuan/go-chatgpt-api#236

@binary-husky (Owner)

(screenshot attached)
