[Feature]: Access OpenAI ChatGPT via a reverse proxy #900
Comments
Thanks, but given that the API is already quite cheap, I'd rather not pull in more reverse-engineered components; the maintenance burden of reverse-engineered code is too heavy... Update (July 5): by adding the following setting to config.py, you can use the reverse-proxy project provided by @acheong08, https://github.com/acheong08/ChatGPTProxy:
API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "https://reverse-proxy-url/v1/chat/completions"}
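For illustration, the effect of this setting can be sketched in a few lines of Python. The helper name `redirect_url` is made up here; gpt_academic's actual request code differs, but the idea is the same dictionary lookup:

```python
# Mirrors the API_URL_REDIRECT entry shown above: official endpoint -> reverse proxy.
API_URL_REDIRECT = {
    "https://api.openai.com/v1/chat/completions":
        "https://reverse-proxy-url/v1/chat/completions",
}

def redirect_url(url: str) -> str:
    """Return the redirected endpoint if one is configured, else the original URL."""
    return API_URL_REDIRECT.get(url, url)
```

Any endpoint not listed in the table passes through unchanged, so the redirect is opt-in per URL.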
V1.py is a concrete implementation based on ChatGPT-Proxy-V4; perhaps it could be ported as a plugin and integrated into gpt_academic.
There is no need to use ChatGPT-Proxy-V4 or to add a dependency on revChatGPT. To use ChatGPT for free, just swap the endpoint here (gpt_academic/request_llm/bridge_all.py, line 55 at commit 59877dd) to one hosted via https://github.com/acheong08/ChatGPT-to-API/, which serves as a clone of the official API but works through chat.openai.com. A free endpoint to try out: https://free.churchless.tech/v1/chat/completions

(Edited by binary-husky.)
Nice, it does the trick. BTW, I still have some puzzles:
yes
No. Cycled through.
Could you please describe the detailed steps?
I upload a list of access tokens and it loops through them for each request. No API is used/accepted.
This means that you must periodically check the expiration status of these access tokens and dynamically update them.
I have a bash script running in a for loop to update every week. It's all automated.
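The rotation described above (one token handed out per request, looping over the list) can be sketched as follows. This is an illustrative Python sketch with made-up names, not ChatGPT-to-API's actual Go implementation:

```python
import itertools
import threading

class TokenPool:
    """Round-robin pool of access tokens, one per request."""

    def __init__(self, tokens):
        self._cycle = itertools.cycle(tokens)
        # itertools.cycle is not thread-safe on its own, so guard it with a lock.
        self._lock = threading.Lock()

    def next_token(self):
        with self._lock:
            return next(self._cycle)
```

A periodic refresh job (like the weekly bash script mentioned above) would simply rebuild the pool with a fresh token list.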
Some additional related questions:
werner@X10DAi:~$ gpt_academic
[PROXY] Network proxy status: not configured. Without a proxy, the OpenAI family of models is most likely unreachable. Suggestion: check whether the USE_PROXY option has been modified.
[API_KEY] This project now supports OpenAI and API2D api-keys. Multiple api-keys may be given at once, e.g. API_KEY="openai-key1,openai-key2,api2d-key3"
[API_KEY] You can either modify the api-key(s) in config.py, or enter temporary api-key(s) in the question input area and press Enter to apply them.
[API_KEY] A correct API_KEY is a 51-character key starting with 'sk' (OpenAI), or a 41-character key starting with 'fk'. Please update the API key in the config file before running.
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
[ENV_VAR] Trying to load API_URL_REDIRECT, default: {} --> corrected: {"https://api.openai.com/v1/chat/completions": "https://free.churchless.tech/v1/chat/completions"}
[ENV_VAR] Successfully read the environment variable API_URL_REDIRECT
[ENV_VAR] Trying to load LLM_MODEL, default: gpt-3.5-turbo --> corrected: gpt-3.5-turbo-16k
[ENV_VAR] Successfully read the environment variable LLM_MODEL
All queries will be saved automatically in the local directory ./gpt_log/chat_secrets.log; please mind your privacy!
Querying the proxy's geolocation; the returned result is {}
Proxy configuration: none; proxy location: China
If the browser does not open automatically, please copy and visit the following URL:
(light theme): http://localhost:55087
(dark theme): http://localhost:55087/?__theme=dark
Warming up some modules...
Loading the tokenizer; if this is the first run, downloading parameters may take a moment
Auto-update: disabled
Tokenizer loaded
Loading the tokenizer; if this is the first run, downloading parameters may take a moment
Tokenizer loaded
Running on local URL: http://0.0.0.0:55087
To create a public link, set `share=True` in `launch()`.
[GFX1-]: glxtest: VA-API test failed: no supported VAAPI profile found.
ATTENTION: default value of option mesa_glthread overridden by environment.
ATTENTION: default value of option mesa_glthread overridden by environment.
ATTENTION: default value of option mesa_glthread overridden by environment.
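As an aside, the [API_KEY] startup log describes the expected key shapes: a 51-character key starting with "sk" (OpenAI) or a 41-character key starting with "fk" (API2D). A hedged sketch of such a format check follows; the exact allowed character sets are assumptions here, not something the log guarantees:

```python
import re

# 51 chars total: "sk-" prefix plus 48 alphanumerics (matching the random-key
# bash one-liner later in this thread). Character set is an assumption.
OPENAI_KEY_RE = re.compile(r"^sk-[A-Za-z0-9]{48}$")
# 41 chars total starting with "fk"; the tail's character set is an assumption.
API2D_KEY_RE = re.compile(r"^fk[A-Za-z0-9-]{39}$")

def looks_like_valid_key(key: str) -> bool:
    """Cheap shape check only; a well-formed key may still be revoked or fake."""
    return bool(OPENAI_KEY_RE.match(key) or API2D_KEY_RE.match(key))
```

This only validates the shape, which is exactly why a randomly generated "sk-..." string can pass such a check while carrying no real credentials.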
500 for public use and 500 for private use.
This is a front-end thing. Set randomly.
I still cannot understand what you mean by that. On the other hand, I also tried to call your endpoint with revChatGPT.V3. This way works:

$ API_URL="https://free.churchless.tech/v1/chat/completions" python -m revChatGPT.V3 --api_key sk-xxx --truncate_limit $(( 16 * 1024 )) --model gpt-3.5-turbo-16k

This way fails:

$ API_URL="https://free.churchless.tech/v1/chat/completions" python -m revChatGPT.V3 --truncate_limit $(( 16 * 1024 )) --model gpt-3.5-turbo-16k

The error message is as follows:
Does this script call https://github.com/acheong08/OpenAIAuth under the hood?
V3 uses the official API. It is not relevant here.
It is necessary to set an API key to use this specific repository. You can set something like
Browser automation with SMS verification from smspool
Automation stuff. Closed source.
If so, why do you still read the environment variable as follows?
This also means that each of these access tokens is precisely bound to one instance that provides the service, am I right?
Are there any free SMS pool providers for this purpose? BTW, I noticed the following website, but I'm not sure if it's truly free:
V3 uses the official API. Some people require proxies
Yes
None that work.
Based on my further tries, both

werner@X10DAi:~$ echo sk-$(tr -dc A-Za-z0-9 </dev/urandom | head -c 48)
sk-JF7HaOK6K01wTNxR6pjoH1VB2uT58xdrDMFn6xAdlioOGmET

So, I come to the following question: with a customized
Then, what's your solution for such a tedious job?
I pay $0.1 per account for SMS verification.
Yes. If you include an access token as the API key in a request to ChatGPT-to-API, it will use your access token instead of the built-in ones.
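The selection rule described here, prefer a caller-supplied token over the built-in pool, can be sketched as a small helper. All names below are hypothetical; ChatGPT-to-API's real logic is written in Go:

```python
from typing import Optional

# Stand-in for the pool of built-in access tokens loaded from accounts.
DEFAULT_TOKENS = ["built-in-token-1", "built-in-token-2"]

def pick_access_token(authorization_header: Optional[str], pool_index: int) -> str:
    """Use the caller's bearer token if one was sent; otherwise rotate the pool."""
    if authorization_header:
        token = authorization_header.removeprefix("Bearer ").strip()
        if token:
            return token
    return DEFAULT_TOKENS[pool_index % len(DEFAULT_TOKENS)]
```

With this rule, clients who bring their own account bypass the shared pool entirely, while anonymous requests cycle through the built-in tokens.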
My original intention in filing this issue is not to advocate or suggest the use of free endpoints provided by others, because such an influx of traffic can easily overwhelm them. Instead, the goal is to inform people that they can build their own free endpoints using a tool like ChatGPT-to-API, which is worth considering due to its superior safety, efficiency, and stability. See acheong08/ChatGPT-to-API#81 for the related discussion.
Replace the demo with your own endpoint. The endpoint provided is an instance of ChatGPT-to-API.
Sorry, I misunderstood the intention of this issue.
…________________________________
From: Antonio Cheong
Sent: July 6, 2023, 09:17
To: binary-husky/gpt_academic
Cc: binary-husky; State change
Subject: Re: [binary-husky/gpt_academic] [Feature]: Access OpenAI ChatGPT via a free reverse proxy (Issue #900)
Replace the demo with your own endpoint. The endpoint provided is an instance of ChatGPT-to-API <https://github.com/acheong08/ChatGPT-to-API/>
Sorry, I did not read this issue carefully and previously misunderstood its intention.
I hope I'm not causing unnecessary trouble, and I apologize if there was any offense.
As a remedy, I will replace the demo with a fake URL and try to write a document on how to use ChatGPT-to-API <https://github.com/acheong08/ChatGPT-to-API/>.
@acheong08 hello, I'm trying to deploy ChatGPT-to-API
yes
Plus is not required. You just have to deal with rate limits. The alternative is proxies, since limiting is IP-based.
I don't think so, given that OpenAI doesn't provide API access even for Plus accounts. Instead, API access for a Plus account is invited and managed by OpenAI separately and doesn't ship with the Plus subscription.
The key is to use a proxy pool managed by, say, haproxy, which also answers the question I filed.
So, it seems that the following comment in the template docker-compose.yml is not so accurate:

# If the parameter API_REVERSE_PROXY is empty, the default request URL is https://chat.openai.com/backend-api/conversation, and the PUID is required.
# You can get your PUID for Plus account from the following link: https://chat.openai.com/api/auth/session.
PUID: xxx
I still have some problems. Without PUID, should I use an access token? Should I pass

The following configuration still returns a 500 error; the proxy connection to the US has been tested and is fine, and the docker port mapping has been added as well:
@acheong08 https://github.com/acheong08/ChatGPT-to-API/blob/091f2b4851aba597a5f47e1d0532ad3cf071b32d/docker-compose.yml#L15
Still no luck; it gives me some bad errors this time.
Oops, it should've been removed. It works standalone.
The docker version is unmaintained; the binary is very lightweight.
A binary built via GitHub Actions is available in releases: https://github.com/acheong08/ChatGPT-to-API/releases/tag/1.5.2
Based on the discussions in "GPT Academic Developers #chat2" (group ID 610599535), patching as follows does the trick:

$ git log -1
commit 27f65c251a83c9b19ea5707938ae51683f1f2d8a (HEAD -> master, origin/master, origin/HEAD)
Author: binary-husky <[email protected]>
Date: Mon Jul 31 15:57:18 2023 +0800
Update 图片生成.py
$ git diff
diff --git a/request_llm/bridge_chatgpt.py b/request_llm/bridge_chatgpt.py
index ea48fba..96af833 100644
--- a/request_llm/bridge_chatgpt.py
+++ b/request_llm/bridge_chatgpt.py
@@ -186,15 +186,16 @@ def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_promp
try:
chunk_decoded = chunk.decode()
# The former is API2D's end condition; the latter is OpenAI's
- if ('data: [DONE]' in chunk_decoded) or (len(json.loads(chunk_decoded[6:])['choices'][0]["delta"]) == 0):
+ if 'data: [DONE]' in chunk_decoded:
# The data stream has ended, and gpt_replying_buffer is fully written
logging.info(f'[response] {gpt_replying_buffer}')
break
# 处理数据流的主体
chunkjson = json.loads(chunk_decoded[6:])
status_text = f"finish_reason: {chunkjson['choices'][0]['finish_reason']}"
- # If an exception is raised here, the text is usually too long; see get_full_error's output for details
- gpt_replying_buffer = gpt_replying_buffer + json.loads(chunk_decoded[6:])['choices'][0]["delta"]["content"]
+ delta = chunkjson['choices'][0]["delta"]
+ if "content" in delta:
+ gpt_replying_buffer = gpt_replying_buffer + delta["content"]
history[-1] = gpt_replying_buffer
chatbot[-1] = (history[-2], history[-1])
yield from update_ui(chatbot=chatbot, history=history, msg=status_text) # refresh the UI
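The idea of the patch in isolation: treat 'data: [DONE]' as the sole stream terminator, and skip deltas that carry no "content" key (e.g. role-only deltas some proxies emit). A standalone sketch of that logic, not the project's actual code:

```python
import json

def consume_stream(chunks):
    """Accumulate assistant text from OpenAI-style SSE chunks.

    Mirrors the patched behavior: stop only on 'data: [DONE]', and
    ignore deltas without a 'content' field instead of raising KeyError.
    """
    buffer = ""
    for chunk in chunks:
        decoded = chunk.decode()
        if "data: [DONE]" in decoded:
            break
        # Strip the leading "data: " (6 characters) before parsing JSON.
        delta = json.loads(decoded[6:])["choices"][0]["delta"]
        if "content" in delta:
            buffer += delta["content"]
    return buffer
```

This is why the original pre-patch condition failed against ChatGPT-to-API: an empty or content-less delta is not necessarily the end of the stream.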
Build, configure, start, and test ChatGPT-to-API as follows:

$ git clone https://github.com/acheong08/ChatGPT-to-API.git && cd ChatGPT-to-API && go build
# Create the following configuration files and adjust their content according to your environment:
$ cat accounts.txt
username:password
$ cat proxies.txt
socks5://127.0.0.1:18890
$ SERVER_PORT=18080 ./freechatgpt

Then, tell gpt_academic the corresponding endpoint as follows:

API_URL_REDIRECT='{"https://api.openai.com/v1/chat/completions": "http://127.0.0.1:18080/v1/chat/completions"}'

See below for the related discussions:
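Since API_URL_REDIRECT is parsed as JSON, it can help to generate the value programmatically instead of hand-quoting it in the shell; a small sketch:

```python
import json
import os

# Build the redirect table as a plain dict, then serialize it to the exact
# JSON string gpt_academic expects in the API_URL_REDIRECT environment variable.
redirect = {
    "https://api.openai.com/v1/chat/completions":
        "http://127.0.0.1:18080/v1/chat/completions",
}
os.environ["API_URL_REDIRECT"] = json.dumps(redirect)
```

json.dumps guarantees valid double-quoted JSON, which avoids the single-vs-double quoting mistakes that are easy to make when exporting the variable by hand.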
Class: Large Language Model
Feature Request
The following two features are very nice: