Merge branch 'xtekky:main' into main

pull/1914/head
hdsz25 committed 1 month ago via GitHub
commit 6a2e4f28bb

.gitignore

@@ -55,6 +55,7 @@ local.py
image.py
.buildozer
hardir
har_and_cookies
node_modules
models
projects/windows/g4f

@@ -1,4 +1,5 @@
recursive-include g4f/gui/server *
recursive-include g4f/gui/client *
recursive-include g4f/Provider/npm *
recursive-include g4f/Provider/gigachat_crt *
recursive-include g4f/Provider/you *

@@ -6,7 +6,7 @@ Written by [@xtekky](https://github.com/xtekky) & maintained by [@hlohaus](https://github.com/hlohaus)
<div id="top"></div>
> By using this repository or any code related to it, you agree to the [legal notice](LEGAL_NOTICE.md). The author is **not responsible for the usage of this repository nor endorses it**, nor is the author responsible for any copies, forks, re-uploads made by other users, or anything else related to GPT4Free. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license this Repository uses.
> By using this repository or any code related to it, you agree to the [legal notice](https://github.com/xtekky/gpt4free/blob/main/LEGAL_NOTICE.md). The author is **not responsible for the usage of this repository nor endorses it**, nor is the author responsible for any copies, forks, re-uploads made by other users, or anything else related to GPT4Free. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license this Repository uses.
> [!Warning]
*"gpt4free"* serves as a **PoC** (proof of concept), demonstrating the development of an API package with multi-provider requests, with features like timeouts, load balancing and flow control.
@@ -91,7 +91,7 @@ As per the survey, here is a list of improvements to come
```sh
docker pull hlohaus789/g4f
docker run -p 8080:8080 -p 1337:1337 -p 7900:7900 --shm-size="2g" -v ${PWD}/hardir:/app/hardir hlohaus789/g4f:latest
docker run -p 8080:8080 -p 1337:1337 -p 7900:7900 --shm-size="2g" -v ${PWD}/har_and_cookies:/app/har_and_cookies hlohaus789/g4f:latest
```
3. **Access the Client:**
@@ -114,12 +114,12 @@ To ensure the seamless operation of our application, please follow the instructions below.
By following these steps, you should be able to successfully install and run the application on your Windows system. If you encounter any issues during the installation process, please refer to our Issue Tracker or reach out on Discord for assistance.
Run the **Webview UI** on other platforms:
- [/docs/guides/webview](/docs/webview.md)
- [/docs/guides/webview](https://github.com/xtekky/gpt4free/blob/main/docs/webview.md)
##### Use your smartphone:
Run the Web UI on Your Smartphone:
- [/docs/guides/phone](/docs/guides/phone.md)
- [/docs/guides/phone](https://github.com/xtekky/gpt4free/blob/main/docs/guides/phone.md)
#### Use python
@@ -135,18 +135,18 @@ pip install -U g4f[all]
```
How do I install only parts or disable parts?
Use partial requirements: [/docs/requirements](/docs/requirements.md)
Use partial requirements: [/docs/requirements](https://github.com/xtekky/gpt4free/blob/main/docs/requirements.md)
##### Install from source:
How do I load the project using git and installing the project requirements?
Read this tutorial and follow it step by step: [/docs/git](/docs/git.md)
Read this tutorial and follow it step by step: [/docs/git](https://github.com/xtekky/gpt4free/blob/main/docs/git.md)
##### Install using Docker:
How do I build and run composer image from source?
Use docker-compose: [/docs/docker](/docs/docker.md)
Use docker-compose: [/docs/docker](https://github.com/xtekky/gpt4free/blob/main/docs/docker.md)
## 💡 Usage
@@ -184,13 +184,13 @@ image_url = response.data[0].url
```
[![Image with cat](/docs/cat.jpeg)](/docs/client.md)
[![Image with cat](/docs/cat.jpeg)](https://github.com/xtekky/gpt4free/blob/main/docs/client.md)
**Full Documentation for Python API**
- New AsyncClient API from G4F: [/docs/async_client](/docs/async_client.md)
- Client API like the OpenAI Python library: [/docs/client](/docs/client.md)
- Legacy API with python modules: [/docs/legacy](/docs/legacy.md)
- New AsyncClient API from G4F: [/docs/async_client](https://github.com/xtekky/gpt4free/blob/main/docs/async_client.md)
- Client API like the OpenAI Python library: [/docs/client](https://github.com/xtekky/gpt4free/blob/main/docs/client.md)
- Legacy API with python modules: [/docs/legacy](https://github.com/xtekky/gpt4free/blob/main/docs/legacy.md)
#### Web UI
@@ -209,7 +209,7 @@ python -m g4f.cli gui -port 8080 -debug
You can use the Interference API to serve other OpenAI integrations with G4F.
See docs: [/docs/interference](/docs/interference.md)
See docs: [/docs/interference](https://github.com/xtekky/gpt4free/blob/main/docs/interference.md)
Access with: http://localhost:1337/v1
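Any OpenAI-compatible client can then talk to this endpoint. A minimal sketch using the official `openai` Python package (the package and the placeholder key are assumptions, not part of g4f; it presumes the API is already running locally):

```python
from openai import OpenAI

# Point the official OpenAI client at the local Interference API.
client = OpenAI(
    api_key="secret",  # placeholder; the local server does not validate it
    base_url="http://localhost:1337/v1"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a poem about a tree"}]
)
print(response.choices[0].message.content)
```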
@@ -217,10 +217,11 @@ Access with: http://localhost:1337/v1
#### Cookies
You need cookies for BingCreateImages and the Gemini Provider.
From Bing you need the "_U" cookie and from Gemini you need the "__Secure-1PSID" cookie.
Sometimes you don't need the "__Secure-1PSID" cookie, but some other auth cookies.
You can pass the cookies in the create function or you use the `set_cookies` setter before you run G4F:
Cookies are essential for using Meta AI and Microsoft Designer to create images.
Additionally, cookies are required for the Google Gemini and WhiteRabbitNeo Provider.
From Bing, ensure you have the "_U" cookie, and from Google, all cookies starting with "__Secure-1PSID" are needed.
You can pass these cookies directly to the create function or set them using the `set_cookies` method before running G4F:
```python
from g4f.cookies import set_cookies
@@ -228,10 +229,25 @@ from g4f.cookies import set_cookies
set_cookies(".bing.com", {
    "_U": "cookie value"
})
set_cookies(".google.com", {
    "__Secure-1PSID": "cookie value"
})
...
```
Alternatively, you can place your .har and cookie files in the `/har_and_cookies` directory. To export a cookie file, use the EditThisCookie extension available on the Chrome Web Store: [EditThisCookie Extension](https://chromewebstore.google.com/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg).
You can also create .har files to capture cookies. If you need further assistance, refer to the next section.
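For example, starting the API in debug mode logs which .har and cookie files were picked up: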
```bash
python -m g4f.cli api --debug
```
```
Read .har file: ./har_and_cookies/you.com.har
Cookies added: 10 from .you.com
Read cookie file: ./har_and_cookies/google.json
Cookies added: 16 from .google.com
Starting server... [g4f v-0.0.0] (debug)
```
#### .HAR File for OpenaiChat Provider
@@ -249,7 +265,7 @@ To utilize the OpenaiChat provider, a .har file is required from https://chat.openai.com
##### Storing the .HAR File
- Place the exported .har file in the `./hardir` directory if you are using Docker. Alternatively, you can store it in any preferred location within your current working directory.
- Place the exported .har file in the `./har_and_cookies` directory if you are using Docker. Alternatively, you can store it in any preferred location within your current working directory.
Note: Ensure that your .har file is stored securely, as it may contain sensitive information.
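Before relying on a capture, you can also check which cookies it actually contains. A small standard-library sketch (the file path is only an example; HAR 1.2 stores request cookies under `log.entries[].request.cookies`):

```python
import json

# List the cookie names recorded in a .har capture.
with open("./har_and_cookies/chat.openai.com.har", encoding="utf-8") as f:
    har = json.load(f)

names = set()
for entry in har["log"]["entries"]:
    for cookie in entry["request"].get("cookies", []):
        names.add(cookie["name"])
print(sorted(names))
```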
@@ -273,42 +289,50 @@ set G4F_PROXY=http://host:port
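Besides the environment variable, many providers also accept a per-request proxy. A sketch (the `proxy` keyword is forwarded to the provider; treat its support as an assumption for any given provider):

```python
from g4f.client import Client

client = Client()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    proxy="http://host:port"  # assumed per-request override of G4F_PROXY
)
print(response.choices[0].message.content)
```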
| Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
| ------ | ------- | ------- | ----- | ------ | ------ | ---- |
| [bing.com](https://bing.com/chat) | `g4f.Provider.Bing` | ❌ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chatgpt.ai](https://chatgpt.ai) | `g4f.Provider.ChatgptAi` | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [liaobots.site](https://liaobots.site) | `g4f.Provider.Liaobots` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat.openai.com](https://chat.openai.com) | `g4f.Provider.OpenaiChat` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [bing.com](https://bing.com/chat) | `g4f.Provider.Bing` | ❌ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatgpt.ai](https://chatgpt.ai) | `g4f.Provider.ChatgptAi` | ❌ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [liaobots.site](https://liaobots.site) | `g4f.Provider.Liaobots` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chat.openai.com](https://chat.openai.com) | `g4f.Provider.OpenaiChat` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌+✔️ |
| [raycast.com](https://raycast.com) | `g4f.Provider.Raycast` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [beta.theb.ai](https://beta.theb.ai) | `g4f.Provider.Theb` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [you.com](https://you.com) | `g4f.Provider.You` | ✔️ | ✔️ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [you.com](https://you.com) | `g4f.Provider.You` | ✔️ | ✔️ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
## Best OpenSource Models
While we wait for gpt-5, here is a list of new models that are at least better than gpt-3.5-turbo. **Some are better than gpt-4**. Expect this list to grow.
| Website | Provider | parameters | better than |
| ------ | ------- | ------ | ------ |
| [mixtral-8x22b](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) | `g4f.Provider.DeepInfra` | 176B / 44b active | gpt-3.5-turbo |
| [claude-3-opus](https://anthropic.com/) | `g4f.Provider.You` | ?B | gpt-4-0125-preview |
| [command-r+](https://txt.cohere.com/command-r-plus-microsoft-azure/) | `g4f.Provider.HuggingChat` | 104B | gpt-4-0314 |
| [llama-3-70b](https://meta.ai/) | `g4f.Provider.Llama` or `DeepInfra` | 70B | gpt-4-0314 |
| [claude-3-sonnet](https://anthropic.com/) | `g4f.Provider.You` | ?B | gpt-4-0314 |
| [reka-core](https://chat.reka.ai/) | `g4f.Provider.Reka` | 21B | gpt-4-vision |
| [dbrx-instruct](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm) | `g4f.Provider.DeepInfra` | 132B / 36B active| gpt-3.5-turbo |
| [command-r+](https://txt.cohere.com/command-r-plus-microsoft-azure/) | `g4f.Provider.HuggingChat` | 104B | gpt-4-0613 |
| [mixtral-8x22b](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) | `g4f.Provider.DeepInfra` | 176B / 44b active | gpt-3.5-turbo |
### GPT-3.5
| Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
| ------ | ------- | ------- | ----- | ------ | ------ | ---- |
| [chat3.aiyunos.top](https://chat3.aiyunos.top/) | `g4f.Provider.AItianhuSpace` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chatforai.store](https://chatforai.store) | `g4f.Provider.ChatForAi` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatgpt4online.org](https://chatgpt4online.org) | `g4f.Provider.Chatgpt4Online` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatgpt-free.cc](https://www.chatgpt-free.cc) | `g4f.Provider.ChatgptNext` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatgptx.de](https://chatgptx.de) | `g4f.Provider.ChatgptX` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [flowgpt.com](https://flowgpt.com/chat) | `g4f.Provider.FlowGpt` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [freegptsnav.aifree.site](https://freegptsnav.aifree.site) | `g4f.Provider.FreeGpt` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [gpttalk.ru](https://gpttalk.ru) | `g4f.Provider.GptTalkRu` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [koala.sh](https://koala.sh) | `g4f.Provider.Koala` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat10.aichatos.xyz](https://chat10.aichatos.xyz) | `g4f.Provider.Aichatos` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chatforai.store](https://chatforai.store) | `g4f.Provider.ChatForAi` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chatgpt4online.org](https://chatgpt4online.org) | `g4f.Provider.Chatgpt4Online` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chatgpt-free.cc](https://www.chatgpt-free.cc) | `g4f.Provider.ChatgptNext` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chatgptx.de](https://chatgptx.de) | `g4f.Provider.ChatgptX` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [f1.cnote.top](https://f1.cnote.top) | `g4f.Provider.Cnote` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [duckduckgo.com](https://duckduckgo.com/duckchat) | `g4f.Provider.DuckDuckGo` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [ecosia.org](https://www.ecosia.org) | `g4f.Provider.Ecosia` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [feedough.com](https://www.feedough.com) | `g4f.Provider.Feedough` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [flowgpt.com](https://flowgpt.com/chat) | `g4f.Provider.FlowGpt` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [freegptsnav.aifree.site](https://freegptsnav.aifree.site) | `g4f.Provider.FreeGpt` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [gpttalk.ru](https://gpttalk.ru) | `g4f.Provider.GptTalkRu` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [koala.sh](https://koala.sh) | `g4f.Provider.Koala` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [app.myshell.ai](https://app.myshell.ai/chat) | `g4f.Provider.MyShell` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [perplexity.ai](https://www.perplexity.ai) | `g4f.Provider.PerplexityAi` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [poe.com](https://poe.com) | `g4f.Provider.Poe` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [talkai.info](https://talkai.info) | `g4f.Provider.TalkAi` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [chat.vercel.ai](https://chat.vercel.ai) | `g4f.Provider.Vercel` | ✔️ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [chat.vercel.ai](https://chat.vercel.ai) | `g4f.Provider.Vercel` | ✔️ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [aitianhu.com](https://www.aitianhu.com) | `g4f.Provider.AItianhu` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chatgpt.bestim.org](https://chatgpt.bestim.org) | `g4f.Provider.Bestim` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
| [chatbase.co](https://www.chatbase.co) | `g4f.Provider.ChatBase` | ✔️ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ❌ |
@@ -326,50 +350,86 @@ While we wait for gpt-5, here is a list of new models that are at least better than gpt-3.5-turbo.
### Other
| Website | Provider | GPT-3.5 | GPT-4 | Stream | Status | Auth |
| ------ | ------- | ------- | ----- | ------ | ------ | ---- |
| [openchat.team](https://openchat.team) | `g4f.Provider.Aura` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [bard.google.com](https://bard.google.com) | `g4f.Provider.Bard` | ❌ | ❌ | ❌ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [deepinfra.com](https://deepinfra.com) | `g4f.Provider.DeepInfra` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [free.chatgpt.org.uk](https://free.chatgpt.org.uk) | `g4f.Provider.FreeChatgpt` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [gemini.google.com](https://gemini.google.com) | `g4f.Provider.Gemini` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [ai.google.dev](https://ai.google.dev) | `g4f.Provider.GeminiPro` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [gemini-chatbot-sigma.vercel.app](https://gemini-chatbot-sigma.vercel.app) | `g4f.Provider.GeminiProChat` | ❌ | ❌ | ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [huggingface.co](https://huggingface.co/chat) | `g4f.Provider.HuggingChat` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [huggingface.co](https://huggingface.co/chat) | `g4f.Provider.HuggingFace` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [llama2.ai](https://www.llama2.ai) | `g4f.Provider.Llama2` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [labs.perplexity.ai](https://labs.perplexity.ai) | `g4f.Provider.PerplexityLabs` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [pi.ai](https://pi.ai/talk) | `g4f.Provider.Pi` | ❌ | ❌ | ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [theb.ai](https://theb.ai) | `g4f.Provider.ThebApi` | ❌ | ❌ | ❌ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [open-assistant.io](https://open-assistant.io/chat) | `g4f.Provider.OpenAssistant` | ❌ | ❌ | ✔️ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ✔️ |
| Website | Provider | Stream | Status | Auth |
| ------ | ------- | ------ | ------ | ---- |
| [openchat.team](https://openchat.team) | `g4f.Provider.Aura`| ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [blackbox.ai](https://www.blackbox.ai) | `g4f.Provider.Blackbox`| ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [cohereforai-c4ai-command-r-plus.hf.space](https://cohereforai-c4ai-command-r-plus.hf.space) | `g4f.Provider.Cohere`| ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [deepinfra.com](https://deepinfra.com) | `g4f.Provider.DeepInfra`| ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [free.chatgpt.org.uk](https://free.chatgpt.org.uk) | `g4f.Provider.FreeChatgpt`| ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [gemini.google.com](https://gemini.google.com) | `g4f.Provider.Gemini`| ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [ai.google.dev](https://ai.google.dev) | `g4f.Provider.GeminiPro`| ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [gemini-chatbot-sigma.vercel.app](https://gemini-chatbot-sigma.vercel.app) | `g4f.Provider.GeminiProChat`| ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [developers.sber.ru](https://developers.sber.ru/gigachat) | `g4f.Provider.GigaChat`| ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [console.groq.com](https://console.groq.com/playground) | `g4f.Provider.Groq`| ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [huggingface.co](https://huggingface.co/chat) | `g4f.Provider.HuggingChat`| ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [huggingface.co](https://huggingface.co/chat) | `g4f.Provider.HuggingFace`| ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [llama2.ai](https://www.llama2.ai) | `g4f.Provider.Llama`| ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [meta.ai](https://www.meta.ai) | `g4f.Provider.MetaAI`| ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [openrouter.ai](https://openrouter.ai) | `g4f.Provider.OpenRouter`| ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ✔️ |
| [labs.perplexity.ai](https://labs.perplexity.ai) | `g4f.Provider.PerplexityLabs`| ✔️ | ![Active](https://img.shields.io/badge/Active-brightgreen) | ❌ |
| [pi.ai](https://pi.ai/talk) | `g4f.Provider.Pi`| ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [replicate.com](https://replicate.com) | `g4f.Provider.Replicate`| ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ❌ |
| [theb.ai](https://theb.ai) | `g4f.Provider.ThebApi`| ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [whiterabbitneo.com](https://www.whiterabbitneo.com) | `g4f.Provider.WhiteRabbitNeo`| ✔️ | ![Unknown](https://img.shields.io/badge/Unknown-grey) | ✔️ |
| [bard.google.com](https://bard.google.com) | `g4f.Provider.Bard`| ❌ | ![Inactive](https://img.shields.io/badge/Inactive-red) | ✔️ |
### Models
| Model | Base Provider | Provider | Website |
|-----------------------------| ------------- | -------- | ------- |
| gpt-3.5-turbo | OpenAI | 5+ Providers | [openai.com](https://openai.com/) |
| gpt-4 | OpenAI | 2+ Providers | [openai.com](https://openai.com/) |
| gpt-4-turbo | OpenAI | g4f.Provider.Bing | [openai.com](https://openai.com/) |
| Llama-2-7b-chat-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Llama-2-13b-chat-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Llama-2-70b-chat-hf | Meta | 3+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Meta-Llama-3-8b | Meta | 3+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Meta-Llama-3-70b | Meta | 3+ Providers | [llama.meta.com](https://llama.meta.com/) |
| CodeLlama-34b-Instruct-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
| CodeLlama-70b-Instruct-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Mixtral-8x7B-Instruct-v0.1 | Huggingface | 4+ Providers | [huggingface.co](https://huggingface.co/) |
| Mistral-7B-Instruct-v0.1 | Huggingface | 4+ Providers | [huggingface.co](https://huggingface.co/) |
| dolphin-2.6-mixtral-8x7b | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
| lzlv_70b_fp16_hf | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
| airoboros-70b | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
| airoboros-l2-70b-gpt4-1.4.1 | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
| openchat_3.5 | Huggingface | 2+ Providers | [huggingface.co](https://huggingface.co/) |
| gemini | Google | g4f.Provider.Gemini | [gemini.google.com](https://gemini.google.com/) |
| gemini-pro | Google | 2+ Providers | [gemini.google.com](https://gemini.google.com/) |
| claude-v2 | Anthropic | 1+ Providers | [anthropic.com](https://www.anthropic.com/) |
| claude-3-opus | Anthropic | g4f.Provider.You | [anthropic.com](https://www.anthropic.com/) |
| claude-3-sonnet | Anthropic | g4f.Provider.You | [anthropic.com](https://www.anthropic.com/) |
| pi | Inflection | g4f.Provider.Pi | [inflection.ai](https://inflection.ai/) |
| Model | Base Provider | Provider | Website |
| ----- | ------------- | -------- | ------- |
| gpt-3.5-turbo | OpenAI | 8+ Providers | [openai.com](https://openai.com/) |
| gpt-4 | OpenAI | 2+ Providers | [openai.com](https://openai.com/) |
| gpt-4-turbo | OpenAI | g4f.Provider.Bing | [openai.com](https://openai.com/) |
| Llama-2-7b-chat-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Llama-2-13b-chat-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Llama-2-70b-chat-hf | Meta | 3+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Meta-Llama-3-8b-instruct | Meta | 1+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Meta-Llama-3-70b-instruct | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
| CodeLlama-34b-Instruct-hf | Meta | g4f.Provider.HuggingChat | [llama.meta.com](https://llama.meta.com/) |
| CodeLlama-70b-Instruct-hf | Meta | 2+ Providers | [llama.meta.com](https://llama.meta.com/) |
| Mixtral-8x7B-Instruct-v0.1 | Huggingface | 4+ Providers | [huggingface.co](https://huggingface.co/) |
| Mistral-7B-Instruct-v0.1 | Huggingface | 3+ Providers | [huggingface.co](https://huggingface.co/) |
| Mistral-7B-Instruct-v0.2 | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
| zephyr-orpo-141b-A35b-v0.1 | Huggingface | 2+ Providers | [huggingface.co](https://huggingface.co/) |
| dolphin-2.6-mixtral-8x7b | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
| gemini | Google | g4f.Provider.Gemini | [gemini.google.com](https://gemini.google.com/) |
| gemini-pro | Google | 2+ Providers | [gemini.google.com](https://gemini.google.com/) |
| claude-v2 | Anthropic | 1+ Providers | [anthropic.com](https://www.anthropic.com/) |
| claude-3-opus | Anthropic | g4f.Provider.You | [anthropic.com](https://www.anthropic.com/) |
| claude-3-sonnet | Anthropic | g4f.Provider.You | [anthropic.com](https://www.anthropic.com/) |
| lzlv_70b_fp16_hf | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
| airoboros-70b | Huggingface | g4f.Provider.DeepInfra | [huggingface.co](https://huggingface.co/) |
| openchat_3.5 | Huggingface | 2+ Providers | [huggingface.co](https://huggingface.co/) |
| pi | Inflection | g4f.Provider.Pi | [inflection.ai](https://inflection.ai/) |
### Image and Vision Models
| Label | Provider | Image Model | Vision Model | Website |
| ----- | -------- | ----------- | ------------ | ------- |
| Microsoft Copilot in Bing | `g4f.Provider.Bing` | dall-e-3 | gpt-4-vision | [bing.com](https://bing.com/chat) |
| DeepInfra | `g4f.Provider.DeepInfra` | stability-ai/sdxl | llava-1.5-7b-hf | [deepinfra.com](https://deepinfra.com) |
| Gemini | `g4f.Provider.Gemini` | ✔️ | ✔️ | [gemini.google.com](https://gemini.google.com) |
| Gemini API | `g4f.Provider.GeminiPro` | ❌ | gemini-1.5-pro | [ai.google.dev](https://ai.google.dev) |
| Meta AI | `g4f.Provider.MetaAI` | ✔️ | ❌ | [meta.ai](https://www.meta.ai) |
| OpenAI ChatGPT | `g4f.Provider.OpenaiChat` | dall-e-3 | gpt-4-vision | [chat.openai.com](https://chat.openai.com) |
| Reka | `g4f.Provider.Reka` | ❌ | ✔️ | [chat.reka.ai](https://chat.reka.ai/) |
| Replicate | `g4f.Provider.Replicate` | stability-ai/sdxl| llava-v1.6-34b | [replicate.com](https://replicate.com) |
| You.com | `g4f.Provider.You` | dall-e-3| ✔️ | [you.com](https://you.com) |
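The vision-capable providers above can be queried through the client by passing an image stream or file object, for example: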
```python
import requests
from g4f.client import Client
client = Client()
image = requests.get("https://change_me.jpg", stream=True).raw
response = client.chat.completions.create(
    model="",
    messages=[{"role": "user", "content": "what is in this picture?"}],
    image=image
)
print(response.choices[0].message.content)
```
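The same call also accepts a local file object instead of a stream, e.g. `image=open("docs/cat.jpeg", "rb")` (the path is only an example).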
## 🔗 Powered by gpt4free
@@ -781,11 +841,11 @@ We welcome contributions from the community. Whether you're adding new providers
###### Guide: How do I create a new Provider?
- Read: [/docs/guides/create_provider](/docs/guides/create_provider.md)
- Read: [/docs/guides/create_provider](https://github.com/xtekky/gpt4free/blob/main/docs/guides/create_provider.md)
###### Guide: How can AI help me with writing code?
- Read: [/docs/guides/help_me](/docs/guides/help_me.md)
- Read: [/docs/guides/help_me](https://github.com/xtekky/gpt4free/blob/main/docs/guides/help_me.md)
## 🙌 Contributors
@@ -799,8 +859,8 @@ A list of all contributors is available [here](https://github.com/xtekky/gpt4free/graphs/contributors)
<a href="https://github.com/Commenter123321" target="_blank"><img src="https://avatars.githubusercontent.com/u/36051603?v=4&s=45" width="45" title="Commenter123321"></a>
<a href="https://github.com/DanielShemesh" target="_blank"><img src="https://avatars.githubusercontent.com/u/20585236?v=4&s=45" width="45" title="DanielShemesh"></a>
<a href="https://github.com/Luneye" target="_blank"><img src="https://avatars.githubusercontent.com/u/73485421?v=4&s=45" width="45" title="Luneye"></a>
<a href="https://github.com/enganese" target="_blank"><img src="https://avatars.githubusercontent.com/u/69082498?v=4&s=45" width="45" title="enganese"></a>
<a href="https://github.com/ezerinz" target="_blank"><img src="https://avatars.githubusercontent.com/u/100193740?v=4&s=45" width="45" title="ezerinz"></a>
<a href="https://github.com/enganese" target="_blank"><img src="https://avatars.githubusercontent.com/u/69082498?v=4&s=45" width="45" title="enganese"></a>
<a href="https://github.com/Lin-jun-xiang" target="_blank"><img src="https://avatars.githubusercontent.com/u/63782903?v=4&s=45" width="45" title="Lin-jun-xiang"></a>
<a href="https://github.com/nullstreak" target="_blank"><img src="https://avatars.githubusercontent.com/u/139914347?v=4&s=45" width="45" title="nullstreak"></a>
<a href="https://github.com/valerii-chirkov" target="_blank"><img src="https://avatars.githubusercontent.com/u/81074936?v=4&s=45" width="45" title="valerii-chirkov"></a>
@@ -808,16 +868,16 @@ A list of all contributors is available [here](https://github.com/xtekky/gpt4free/graphs/contributors)
<a href="https://github.com/repollo" target="_blank"><img src="https://avatars.githubusercontent.com/u/2671466?v=4&s=45" width="45" title="repollo"></a>
<a href="https://github.com/hpsj" target="_blank"><img src="https://avatars.githubusercontent.com/u/54535414?v=4&s=45" width="45" title="hpsj"></a>
<a href="https://github.com/taiyi747" target="_blank"><img src="https://avatars.githubusercontent.com/u/63543716?v=4&s=45" width="45" title="taiyi747"></a>
<a href="https://github.com/ostix360" target="_blank"><img src="https://avatars.githubusercontent.com/u/55257054?v=4&s=45" width="45" title="ostix360"></a>
<a href="https://github.com/WdR-Tech" target="_blank"><img src="https://avatars.githubusercontent.com/u/143020293?v=4&s=45" width="45" title="WdR-Tech"></a>
<a href="https://github.com/HexyeDEV" target="_blank"><img src="https://avatars.githubusercontent.com/u/65314629?v=4&s=45" width="45" title="HexyeDEV"></a>
<a href="https://github.com/9fo" target="_blank"><img src="https://avatars.githubusercontent.com/u/71867245?v=4&s=45" width="45" title="9fo"></a>
<a href="https://github.com/eltociear" target="_blank"><img src="https://avatars.githubusercontent.com/u/22633385?v=4&s=45" width="45" title="eltociear"></a>
<a href="https://github.com/ramonvc" target="_blank"><img src="https://avatars.githubusercontent.com/u/13617054?v=4&s=45" width="45" title="ramonvc"></a>
<a href="https://github.com/naa7" target="_blank"><img src="https://avatars.githubusercontent.com/u/44613678?v=4&s=45" width="45" title="naa7"></a>
<a href="https://github.com/zeng-rr" target="_blank"><img src="https://avatars.githubusercontent.com/u/47846202?v=4&s=45" width="45" title="zeng-rr"></a>
<a href="https://github.com/editor-syntax" target="_blank"><img src="https://avatars.githubusercontent.com/u/109844019?v=4&s=45" width="45" title="editor-syntax"></a>
<a href="https://github.com/HexyeDEV" target="_blank"><img src="https://avatars.githubusercontent.com/u/65314629?v=4&s=45" width="45" title="HexyeDEV"></a>
<a href="https://github.com/WdR-Tech" target="_blank"><img src="https://avatars.githubusercontent.com/u/143020293?v=4&s=45" width="45" title="WdR-Tech"></a>
<a href="https://github.com/ostix360" target="_blank"><img src="https://avatars.githubusercontent.com/u/55257054?v=4&s=45" width="45" title="ostix360"></a>
<a href="https://github.com/devAdityaa" target="_blank"><img src="https://avatars.githubusercontent.com/u/77636021?v=4&s=45" width="45" title="devAdityaa"></a>
<a href="https://github.com/editor-syntax" target="_blank"><img src="https://avatars.githubusercontent.com/u/109844019?v=4&s=45" width="45" title="editor-syntax"></a>
<a href="https://github.com/zeng-rr" target="_blank"><img src="https://avatars.githubusercontent.com/u/47846202?v=4&s=45" width="45" title="zeng-rr"></a>
<a href="https://github.com/naa7" target="_blank"><img src="https://avatars.githubusercontent.com/u/44613678?v=4&s=45" width="45" title="naa7"></a>
<a href="https://github.com/ramonvc" target="_blank"><img src="https://avatars.githubusercontent.com/u/13617054?v=4&s=45" width="45" title="ramonvc"></a>
<a href="https://github.com/eltociear" target="_blank"><img src="https://avatars.githubusercontent.com/u/22633385?v=4&s=45" width="45" title="eltociear"></a>
<a href="https://github.com/kggn" target="_blank"><img src="https://avatars.githubusercontent.com/u/95663228?v=4&s=45" width="45" title="kggn"></a>
<a href="https://github.com/xiangsx" target="_blank"><img src="https://avatars.githubusercontent.com/u/29322721?v=4&s=45" width="45" title="xiangsx"></a>
<a href="https://github.com/ggindinson" target="_blank"><img src="https://avatars.githubusercontent.com/u/97807772?v=4&s=45" width="45" title="ggindinson"></a>
@@ -826,11 +886,14 @@ A list of all contributors is available [here](https://github.com/xtekky/gpt4free/graphs/contributors)
<img src="https://avatars.githubusercontent.com/u/12299238?s=45&v=4" width="45" title="xqdoo00o">
<img src="https://avatars.githubusercontent.com/u/97126670?s=45&v=4" width="45" title="nathanrchn">
<img src="https://avatars.githubusercontent.com/u/81407603?v=4&s=45" width="45" title="dsdanielpark">
<img src="https://avatars.githubusercontent.com/u/55200481?v=4&s=45" width="45" title="missuo">
- The [`Vercel.py`](g4f/Provider/Vercel.py) file contains code from [vercel-llm-api](https://github.com/ading2210/vercel-llm-api) by [@ading2210](https://github.com/ading2210)
- The [`har_file.py`](g4f/Provider/openai/har_file.py) has input from [xqdoo00o/ChatGPT-to-API](https://github.com/xqdoo00o/ChatGPT-to-API)
- The [`PerplexityLabs.py`](g4f/Provider/openai/har_file.py) has input from [nathanrchn/perplexityai](https://github.com/nathanrchn/perplexityai)
- The [`Gemini.py`](g4f/Provider/needs_auth/Gemini.py) has input from [dsdanielpark/Gemini-API](https://github.com/dsdanielpark/Gemini-API)
- The [`Vercel.py`](https://github.com/xtekky/gpt4free/blob/main/g4f/Provider/Vercel.py) file contains code from [vercel-llm-api](https://github.com/ading2210/vercel-llm-api) by [@ading2210](https://github.com/ading2210)
- The [`har_file.py`](https://github.com/xtekky/gpt4free/blob/main/g4f/Provider/openai/har_file.py) has input from [xqdoo00o/ChatGPT-to-API](https://github.com/xqdoo00o/ChatGPT-to-API)
- The [`PerplexityLabs.py`](https://github.com/xtekky/gpt4free/blob/main/g4f/Provider/openai/har_file.py) has input from [nathanrchn/perplexityai](https://github.com/nathanrchn/perplexityai)
- The [`Gemini.py`](https://github.com/xtekky/gpt4free/blob/main/g4f/Provider/needs_auth/Gemini.py) has input from [dsdanielpark/Gemini-API](https://github.com/dsdanielpark/Gemini-API)
- The [`MetaAI.py`](https://github.com/xtekky/gpt4free/blob/main/g4f/Provider/MetaAI.py) file contains code from [meta-ai-api](https://github.com/Strvm/meta-ai-api) by [@Strvm](https://github.com/Strvm)
- The [`proofofwork.py`](https://github.com/xtekky/gpt4free/blob/main/g4f/Provider/openai/proofofwork.py) has input from [missuo/FreeGPT35](https://github.com/missuo/FreeGPT35)
*Having input implies that the AI's code generation utilized it as one of many sources.*
@@ -870,7 +933,7 @@ along with this program. If not, see <https://www.gnu.org/licenses/>.
</td>
<td>
<img src="https://img.shields.io/badge/License-GNU_GPL_v3.0-red.svg"/> <br>
This project is licensed under <a href="./LICENSE">GNU_GPL_v3.0</a>.
This project is licensed under <a href="https://github.com/xtekky/gpt4free/blob/main/LICENSE">GNU_GPL_v3.0</a>.
</td>
</tr>
</table>

@@ -30,20 +30,13 @@ RUN if [ "$G4F_VERSION" = "" ] ; then \
apt-get -qqy install git \
; fi
# Python packages
# Install Python3, pip, remove OpenJDK 11, clean up
RUN apt-get -qqy update \
&& apt-get -qqy install \
python3 \
python-is-python3 \
pip
# Remove java
RUN apt-get -qyy remove openjdk-11-jre-headless
# Cleanup
RUN rm -rf /var/lib/apt/lists/* /var/cache/apt/* \
&& apt-get -qqy install python3 python-is-python3 pip \
&& apt-get -qyy remove openjdk-11-jre-headless \
&& apt-get -qyy autoremove \
&& apt-get -qyy clean
&& apt-get -qyy clean \
&& rm -rf /var/lib/apt/lists/* /var/cache/apt/*
# Update entrypoint
COPY docker/supervisor.conf /etc/supervisor/conf.d/selenium.conf
@@ -57,15 +50,13 @@ RUN if [ "$G4F_NO_GUI" ] ; then \
# Change background image
COPY docker/background.png /usr/share/images/fluxbox/ubuntu-light.png
# Add user
# Add user, fix permissions
RUN groupadd -g $G4F_USER_ID $G4F_USER \
&& useradd -rm -G sudo -u $G4F_USER_ID -g $G4F_USER_ID $G4F_USER \
&& echo "${G4F_USER}:${G4F_PASS}" | chpasswd
# Fix permissions
RUN mkdir "${SE_DOWNLOAD_DIR}"
RUN chown "${G4F_USER_ID}:${G4F_USER_ID}" $SE_DOWNLOAD_DIR /var/run/supervisor /var/log/supervisor
RUN chown "${G4F_USER_ID}:${G4F_USER_ID}" -R /opt/bin/ /usr/bin/chromedriver /opt/selenium/
&& echo "${G4F_USER}:${G4F_PASS}" | chpasswd \
&& mkdir "${SE_DOWNLOAD_DIR}" \
&& chown "${G4F_USER_ID}:${G4F_USER_ID}" $SE_DOWNLOAD_DIR /var/run/supervisor /var/log/supervisor \
&& chown "${G4F_USER_ID}:${G4F_USER_ID}" -R /opt/bin/ /usr/bin/chromedriver /opt/selenium/
# Switch user
USER $G4F_USER_ID
@@ -82,13 +73,11 @@ COPY requirements.txt $G4F_DIR
# Upgrade pip for the latest features and install the project's Python dependencies.
RUN pip install --break-system-packages --upgrade pip \
&& pip install --break-system-packages -r requirements.txt
# Install selenium driver and uninstall webdriver
RUN pip install --break-system-packages \
&& pip install --break-system-packages -r requirements.txt \
&& pip install --break-system-packages \
undetected-chromedriver selenium-wire \
&& pip uninstall -y --break-system-packages \
webdriver plyer nodriver
pywebview plyer
# Copy the entire package into the container.
ADD --chown=$G4F_USER:$G4F_USER g4f $G4F_DIR/g4f

@@ -0,0 +1,19 @@
import requests
import json

url = "http://localhost:1337/v1/chat/completions"
body = {
    "model": "",
    "provider": "MetaAI",
    "stream": True,
    "messages": [
        {"role": "assistant", "content": "What can you do? Who are you?"}
    ]
}
lines = requests.post(url, json=body, stream=True).iter_lines()
for line in lines:
    if line.startswith(b"data: "):
        try:
            print(json.loads(line[6:]).get("choices", [{"delta": {}}])[0]["delta"].get("content", ""), end="")
        except json.JSONDecodeError:
            pass
print()
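Each streamed line is a server-sent event prefixed with `data: `; the loop strips that prefix before JSON-decoding the chunk and printing the delta content.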

@@ -0,0 +1,27 @@
# Image Chat with Reka
# !! YOU NEED COOKIES / BE LOGGED IN TO chat.reka.ai
# download an image and save it as test.png in the same folder
from g4f.client import Client
from g4f.Provider import Reka

client = Client(
    provider = Reka # Optional if you set model name to reka-core
)

completion = client.chat.completions.create(
    model = "reka-core",
    messages = [
        {
            "role": "user",
            "content": "What can you see in the image?"
        }
    ],
    stream = True,
    image = open("test.png", "rb") # open("path", "rb"), do not use .read(), etc. it must be a file object
)

for message in completion:
    print(message.choices[0].delta.content or "")

# >>> In the image there is ...

@@ -14,6 +14,8 @@ async def test_async(provider: ProviderType):
        return False
    messages = [{"role": "user", "content": "Hello Assistant!"}]
    try:
        if "webdriver" in provider.get_parameters():
            return False
        response = await asyncio.wait_for(ChatCompletion.create_async(
            model=models.default,
            messages=messages,
@@ -88,7 +90,7 @@ def print_models():
"huggingface": "Huggingface",
"anthropic": "Anthropic",
"inflection": "Inflection",
"meta": "Meta"
"meta": "Meta",
}
provider_urls = {
"google": "https://gemini.google.com/",
@@ -96,7 +98,7 @@ def print_models():
"huggingface": "https://huggingface.co/",
"anthropic": "https://www.anthropic.com/",
"inflection": "https://inflection.ai/",
"meta": "https://llama.meta.com/"
"meta": "https://llama.meta.com/",
}
lines = [
@@ -108,6 +110,8 @@ def print_models():
if name not in ("gpt-3.5-turbo", "gpt-4", "gpt-4-turbo"):
continue
name = re.split(r":|/", model.name)[-1]
if model.base_provider not in base_provider_names:
continue
base_provider = base_provider_names[model.base_provider]
if not isinstance(model.best_provider, BaseRetryProvider):
provider_name = f"g4f.Provider.{model.best_provider.__name__}"
@@ -121,7 +125,28 @@
    print("\n".join(lines))

def print_image_models():
    lines = [
        "| Label | Provider | Image Model | Vision Model | Website |",
        "| ----- | -------- | ----------- | ------------ | ------- |",
    ]
    from g4f.gui.server.api import Api
    for image_model in Api.get_image_models():
        provider_url = image_model["url"]
        netloc = urlparse(provider_url).netloc.replace("www.", "")
        website = f"[{netloc}]({provider_url})"
        label = image_model["provider"] if image_model["label"] is None else image_model["label"]
        if image_model["image_model"] is None:
            image_model["image_model"] = ""
        if image_model["vision_model"] is None:
            image_model["vision_model"] = ""
        lines.append(f'| {label} | `g4f.Provider.{image_model["provider"]}` | {image_model["image_model"]}| {image_model["vision_model"]} | {website} |')
    print("\n".join(lines))

if __name__ == "__main__":
    print_providers()
    #print_providers()
    #print("\n", "-" * 50, "\n")
    #print_models()
    print("\n", "-" * 50, "\n")
    print_models()
    print_image_models()

@@ -10,7 +10,7 @@ except ImportError:
from g4f.client import Client, ChatCompletion
from g4f.Provider import Bing, OpenaiChat, DuckDuckGo
DEFAULT_MESSAGES = [{"role": "system", "content": 'Response in json, Example: {"success: true"}'},
DEFAULT_MESSAGES = [{"role": "system", "content": 'Response in json, Example: {"success": false}'},
{"role": "user", "content": "Say success true in json"}]
class TestProviderIntegration(unittest.TestCase):
@@ -19,6 +19,7 @@ class TestProviderIntegration(unittest.TestCase):
self.skipTest("nest_asyncio is not installed")
    def test_bing(self):
        self.skipTest("Not working")
        client = Client(provider=Bing)
        response = client.chat.completions.create(DEFAULT_MESSAGES, "", response_format={"type": "json_object"})
        self.assertIsInstance(response, ChatCompletion)

@@ -7,13 +7,13 @@ import time
import asyncio
from urllib import parse
from datetime import datetime, date
from aiohttp import ClientSession, ClientTimeout, BaseConnector, WSMsgType
from ..typing import AsyncResult, Messages, ImageType, Cookies
from ..image import ImageRequest
from ..errors import ResponseStatusError, RateLimitError
from ..errors import ResponseError, ResponseStatusError, RateLimitError
from ..requests import StreamSession, DEFAULT_HEADERS
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import get_connector, get_random_hex
from .helper import get_random_hex
from .bing.upload_image import upload_image
from .bing.conversation import Conversation, create_conversation, delete_conversation
from .BingCreateImages import BingCreateImages
@@ -38,8 +38,9 @@ class Bing(AsyncGeneratorProvider, ProviderModelMixin):
supports_message_history = True
supports_gpt_4 = True
default_model = "Balanced"
default_vision_model = "gpt-4-vision"
models = [getattr(Tones, key) for key in Tones.__dict__ if not key.startswith("__")]
@classmethod
def create_async_generator(
cls,
@@ -49,7 +50,6 @@ class Bing(AsyncGeneratorProvider, ProviderModelMixin):
timeout: int = 900,
api_key: str = None,
cookies: Cookies = None,
connector: BaseConnector = None,
tone: str = None,
image: ImageType = None,
web_search: bool = False,
@@ -79,7 +79,6 @@ class Bing(AsyncGeneratorProvider, ProviderModelMixin):
return stream_generate(
prompt, tone, image, context, cookies, api_key,
get_connector(connector, proxy, True),
proxy, web_search, gpt4_turbo, timeout,
**kwargs
)
@@ -102,25 +101,53 @@ def get_ip_address() -> str:
return f"13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}"
def get_default_cookies():
#muid = get_random_hex().upper()
sid = get_random_hex().upper()
guid = get_random_hex().upper()
isodate = date.today().isoformat()
timestamp = int(time.time())
zdate = "0001-01-01T00:00:00.0000000"
return {
'SRCHD' : 'AF=NOFORM',
'PPLState' : '1',
'KievRPSSecAuth': '',
'SUID' : '',
'SRCHUSR' : f'DOB={date.today().strftime("%Y%m%d")}&T={int(time.time())}',
'SRCHHPGUSR' : f'HV={int(time.time())}',
'BCP' : 'AD=1&AL=1&SM=1',
'_Rwho' : f'u=d&ts={date.today().isoformat()}',
"_C_Auth": "",
#"MUID": muid,
#"MUIDB": muid,
"_EDGE_S": f"F=1&SID={sid}",
"_EDGE_V": "1",
"SRCHD": "AF=hpcodx",
"SRCHUID": f"V=2&GUID={guid}&dmnchg=1",
"_RwBf": (
f"r=0&ilt=1&ihpd=0&ispd=0&rc=3&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid="
f"&clo=0&v=1&l={isodate}&lft={zdate}&aof=0&ard={zdate}"
f"&rwdbt={zdate}&rwflt={zdate}&o=2&p=&c=&t=0&s={zdate}"
f"&ts={isodate}&rwred=0&wls=&wlb="
"&wle=&ccp=&cpt=&lka=0&lkt=0&aad=0&TH="
),
'_Rwho': f'u=d&ts={isodate}',
"_SS": f"SID={sid}&R=3&RB=0&GB=0&RG=200&RP=0",
"SRCHUSR": f"DOB={date.today().strftime('%Y%m%d')}&T={timestamp}",
"SRCHHPGUSR": f"HV={int(time.time())}",
"BCP": "AD=1&AL=1&SM=1",
"ipv6": f"hit={timestamp}",
'_C_ETH' : '1',
}
def create_headers(cookies: Cookies = None, api_key: str = None) -> dict:
async def create_headers(cookies: Cookies = None, api_key: str = None) -> dict:
if cookies is None:
# import nodriver as uc
# browser = await uc.start(headless=False)
# page = await browser.get(Defaults.home)
# await asyncio.sleep(10)
# cookies = {}
# for c in await page.browser.cookies.get_all():
# if c.domain.endswith(".bing.com"):
# cookies[c.name] = c.value
# user_agent = await page.evaluate("window.navigator.userAgent")
# await page.close()
cookies = get_default_cookies()
if api_key is not None:
cookies["_U"] = api_key
headers = Defaults.headers.copy()
headers["cookie"] = "; ".join(f"{k}={v}" for k, v in cookies.items())
headers["x-forwarded-for"] = get_ip_address()
return headers
class Defaults:
@@ -246,25 +273,13 @@
}
# Default headers for requests
home = 'https://www.bing.com/chat?q=Bing+AI&FORM=hpcodx'
home = "https://www.bing.com/chat?q=Microsoft+Copilot&FORM=hpcodx"
headers = {
'sec-ch-ua': '"Chromium";v="122", "Not(A:Brand";v="24", "Google Chrome";v="122"',
'sec-ch-ua-mobile': '?0',
'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36',
'sec-ch-ua-arch': '"x86"',
'sec-ch-ua-full-version': '"122.0.6261.69"',
'accept': 'application/json',
'sec-ch-ua-platform-version': '"15.0.0"',
**DEFAULT_HEADERS,
"accept": "application/json",
"referer": home,
"x-ms-client-request-id": str(uuid.uuid4()),
'sec-ch-ua-full-version-list': '"Chromium";v="122.0.6261.69", "Not(A:Brand";v="24.0.0.0", "Google Chrome";v="122.0.6261.69"',
'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.12.3 OS/Windows',
'sec-ch-ua-model': '""',
'sec-ch-ua-platform': '"Windows"',
'sec-fetch-site': 'same-origin',
'sec-fetch-mode': 'cors',
'sec-fetch-dest': 'empty',
'referer': home,
'accept-language': 'en-US,en;q=0.9',
"x-ms-useragent": "azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.15.1 OS/Windows",
}
def format_message(msg: dict) -> str:
@@ -368,7 +383,6 @@ async def stream_generate(
context: str = None,
cookies: dict = None,
api_key: str = None,
connector: BaseConnector = None,
proxy: str = None,
web_search: bool = False,
gpt4_turbo: bool = False,
@@ -393,14 +407,12 @@
:param timeout: Timeout for the request.
:return: An asynchronous generator yielding responses.
"""
headers = create_headers(cookies, api_key)
headers = await create_headers(cookies, api_key)
new_conversation = conversation is None
max_retries = (5 if new_conversation else 0) if max_retries is None else max_retries
async with ClientSession(
timeout=ClientTimeout(total=timeout), connector=connector
) as session:
first = True
while first or conversation is None:
first = True
while first or conversation is None:
async with StreamSession(timeout=timeout, proxy=proxy) as session:
first = False
do_read = True
try:
@@ -408,13 +420,13 @@
conversation = await create_conversation(session, headers, tone)
if return_conversation:
yield conversation
except ResponseStatusError as e:
except (ResponseStatusError, RateLimitError) as e:
max_retries -= 1
if max_retries < 1:
raise e
if debug.logging:
print(f"Bing: Retry: {e}")
headers = create_headers()
headers = await create_headers()
await asyncio.sleep(sleep_retry)
continue
@@ -434,7 +446,7 @@
) as wss:
await wss.send_str(format_message({'protocol': 'json', 'version': 1}))
await wss.send_str(format_message({"type": 6}))
await wss.receive(timeout=timeout)
await wss.receive_str()
await wss.send_str(create_message(
conversation, prompt, tone,
context if new_conversation else None,
@@ -445,16 +457,15 @@
returned_text = ''
message_id = None
while do_read:
msg = await wss.receive(timeout=timeout)
if msg.type == WSMsgType.CLOSED:
break
if msg.type != WSMsgType.TEXT or not msg.data:
continue
objects = msg.data.split(Defaults.delimiter)
msg = await wss.receive_str()
objects = msg.split(Defaults.delimiter)
for obj in objects:
if obj is None or not obj:
continue
response = json.loads(obj)
try:
response = json.loads(obj)
except json.JSONDecodeError:
continue
if response and response.get('type') == 1 and response['arguments'][0].get('messages'):
message = response['arguments'][0]['messages'][0]
if message_id is not None and message_id != message["messageId"]:
@@ -462,7 +473,7 @@
message_id = message["messageId"]
image_response = None
if (raise_apology and message['contentOrigin'] == 'Apology'):
raise RuntimeError("Apology Response Error")
raise ResponseError("Apology Response Error")
if 'adaptiveCards' in message:
card = message['adaptiveCards'][0]['body'][0]
if "text" in card:
@@ -473,7 +484,7 @@
elif message.get('contentType') == "IMAGE":
prompt = message.get('text')
try:
image_client = BingCreateImages(cookies, proxy)
image_client = BingCreateImages(cookies, proxy, api_key)
image_response = await image_client.create_async(prompt)
except Exception as e:
if debug.logging:
@@ -488,6 +499,7 @@
yield image_response
elif response.get('type') == 2:
result = response['item']['result']
do_read = False
if result.get('error'):
max_retries -= 1
if max_retries < 1:
@@ -497,10 +509,12 @@
raise RuntimeError(f"{result['value']}: {result['message']}")
if debug.logging:
print(f"Bing: Retry: {result['value']}: {result['message']}")
headers = create_headers()
do_read = False
headers = await create_headers()
conversation = None
await asyncio.sleep(sleep_retry)
break
return
await delete_conversation(session, conversation, headers)
break
elif response.get('type') == 3:
do_read = False
break
if conversation is not None:
await delete_conversation(session, conversation, headers)

@@ -13,12 +13,19 @@ from .bing.create_images import create_images, create_session, get_cookies_from_browser
class BingCreateImages(AsyncGeneratorProvider, ProviderModelMixin):
label = "Microsoft Designer"
parent = "Bing"
url = "https://www.bing.com/images/create"
working = True
needs_auth = True
image_models = ["dall-e"]
def __init__(self, cookies: Cookies = None, proxy: str = None) -> None:
self.cookies: Cookies = cookies
self.proxy: str = proxy
def __init__(self, cookies: Cookies = None, proxy: str = None, api_key: str = None) -> None:
if api_key is not None:
if cookies is None:
cookies = {}
cookies["_U"] = api_key
self.cookies = cookies
self.proxy = proxy
@classmethod
async def create_async_generator(
@@ -30,9 +37,7 @@ class BingCreateImages(AsyncGeneratorProvider, ProviderModelMixin):
proxy: str = None,
**kwargs
) -> AsyncResult:
if api_key is not None:
cookies = {"_U": api_key}
session = BingCreateImages(cookies, proxy)
session = BingCreateImages(cookies, proxy, api_key)
yield await session.create_async(messages[-1]["content"])
def create(self, prompt: str) -> Iterator[Union[ImageResponse, str]]:

@@ -1,7 +1,8 @@
from __future__ import annotations
import requests
from ..typing import AsyncResult, Messages
from ..typing import AsyncResult, Messages, ImageType
from ..image import to_data_uri
from .needs_auth.Openai import Openai
class DeepInfra(Openai):
@@ -9,9 +10,14 @@ class DeepInfra(Openai):
url = "https://deepinfra.com"
working = True
needs_auth = False
has_auth = True
supports_stream = True
supports_message_history = True
default_model = 'HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1'
default_model = "meta-llama/Meta-Llama-3-70b-instruct"
default_vision_model = "llava-hf/llava-1.5-7b-hf"
model_aliases = {
'dbrx-instruct': 'databricks/dbrx-instruct',
}
@classmethod
def get_models(cls):
@@ -27,19 +33,12 @@
model: str,
messages: Messages,
stream: bool,
image: ImageType = None,
api_base: str = "https://api.deepinfra.com/v1/openai",
temperature: float = 0.7,
max_tokens: int = 1028,
**kwargs
) -> AsyncResult:
if not '/' in model:
models = {
'mixtral-8x22b': 'HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1',
'dbrx-instruct': 'databricks/dbrx-instruct',
}
model = models.get(model, model)
headers = {
'Accept-Encoding': 'gzip, deflate, br',
'Accept-Language': 'en-US',
@@ -55,6 +54,19 @@ class DeepInfra(Openai):
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"macOS"',
}
if image is not None:
if not model:
model = cls.default_vision_model
messages[-1]["content"] = [
{
"type": "image_url",
"image_url": {"url": to_data_uri(image)}
},
{
"type": "text",
"text": messages[-1]["content"]
}
]
return super().create_async_generator(
model, messages,
stream=stream,

@@ -9,8 +9,10 @@ from ..image import ImageResponse
class DeepInfraImage(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://deepinfra.com"
parent = "DeepInfra"
working = True
default_model = 'stability-ai/sdxl'
image_models = [default_model]
@classmethod
def get_models(cls):
@@ -18,6 +20,7 @@ class DeepInfraImage(AsyncGeneratorProvider, ProviderModelMixin):
url = 'https://api.deepinfra.com/models/featured'
models = requests.get(url).json()
cls.models = [model['model_name'] for model in models if model["reported_type"] == "text-to-image"]
cls.image_models = cls.models
return cls.models
@classmethod

@@ -15,7 +15,8 @@ class Ecosia(AsyncGeneratorProvider, ProviderModelMixin):
working = True
supports_gpt_35_turbo = True
default_model = "gpt-3.5-turbo-0125"
model_aliases = {"gpt-3.5-turbo": "gpt-3.5-turbo-0125"}
models = [default_model, "green"]
model_aliases = {"gpt-3.5-turbo": default_model}
@classmethod
async def create_async_generator(
@@ -23,11 +24,10 @@ class Ecosia(AsyncGeneratorProvider, ProviderModelMixin):
model: str,
messages: Messages,
connector: BaseConnector = None,
green: bool = False,
proxy: str = None,
**kwargs
) -> AsyncResult:
cls.get_model(model)
model = cls.get_model(model)
headers = {
"authority": "api.ecosia.org",
"accept": "*/*",
@@ -39,7 +39,7 @@
data = {
"messages": base64.b64encode(json.dumps(messages).encode()).decode()
}
api_url = f"https://api.ecosia.org/v2/chat/?sp={'eco' if green else 'productivity'}"
api_url = f"https://api.ecosia.org/v2/chat/?sp={'eco' if model == 'green' else 'productivity'}"
async with session.post(api_url, json=data) as response:
await raise_for_status(response)
async for chunk in response.content.iter_any():

@@ -11,12 +11,14 @@ from ..errors import MissingAuthError
from .helper import get_connector
class GeminiPro(AsyncGeneratorProvider, ProviderModelMixin):
label = "Gemini API"
url = "https://ai.google.dev"
working = True
supports_message_history = True
needs_auth = True
default_model = "gemini-pro"
models = ["gemini-pro", "gemini-pro-vision"]
default_model = "gemini-1.5-pro-latest"
default_vision_model = default_model
models = [default_model, "gemini-pro", "gemini-pro-vision"]
@classmethod
async def create_async_generator(
@@ -32,11 +34,10 @@ class GeminiPro(AsyncGeneratorProvider, ProviderModelMixin):
connector: BaseConnector = None,
**kwargs
) -> AsyncResult:
model = "gemini-pro-vision" if not model and image is not None else model
model = cls.get_model(model)
if not api_key:
raise MissingAuthError('Missing "api_key"')
raise MissingAuthError('Add a "api_key"')
headers = params = None
if use_auth_header:

@@ -6,12 +6,14 @@ from aiohttp import ClientSession, BaseConnector
from ..typing import AsyncResult, Messages
from ..requests.raise_for_status import raise_for_status
from ..providers.conversation import BaseConversation
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt, get_connector
from .helper import format_prompt, get_connector, get_cookies
class HuggingChat(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://huggingface.co/chat"
working = True
needs_auth = True
default_model = "mistralai/Mixtral-8x7B-Instruct-v0.1"
models = [
"HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1",
@@ -20,10 +22,11 @@ class HuggingChat(AsyncGeneratorProvider, ProviderModelMixin):
'google/gemma-1.1-7b-it',
'NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO',
'mistralai/Mistral-7B-Instruct-v0.2',
'meta-llama/Meta-Llama-3-70B-Instruct'
'meta-llama/Meta-Llama-3-70B-Instruct',
'microsoft/Phi-3-mini-4k-instruct'
]
model_aliases = {
"openchat/openchat_3.5": "openchat/openchat-3.5-0106",
"mistralai/Mistral-7B-Instruct-v0.1": "mistralai/Mistral-7B-Instruct-v0.2"
}
@classmethod
@ -45,9 +48,16 @@ class HuggingChat(AsyncGeneratorProvider, ProviderModelMixin):
connector: BaseConnector = None,
web_search: bool = False,
cookies: dict = None,
conversation: Conversation = None,
return_conversation: bool = False,
delete_conversation: bool = True,
**kwargs
) -> AsyncResult:
options = {"model": cls.get_model(model)}
if cookies is None:
cookies = get_cookies("huggingface.co", False)
if return_conversation:
delete_conversation = False
system_prompt = "\n".join([message["content"] for message in messages if message["role"] == "system"])
if system_prompt:
@ -61,9 +71,14 @@ class HuggingChat(AsyncGeneratorProvider, ProviderModelMixin):
headers=headers,
connector=get_connector(connector, proxy)
) as session:
async with session.post(f"{cls.url}/conversation", json=options) as response:
await raise_for_status(response)
conversation_id = (await response.json())["conversationId"]
if conversation is None:
async with session.post(f"{cls.url}/conversation", json=options) as response:
await raise_for_status(response)
conversation_id = (await response.json())["conversationId"]
if return_conversation:
yield Conversation(conversation_id)
else:
conversation_id = conversation.conversation_id
async with session.get(f"{cls.url}/conversation/{conversation_id}/__data.json") as response:
await raise_for_status(response)
data: list = (await response.json())["nodes"][1]["data"]
@ -72,7 +87,7 @@ class HuggingChat(AsyncGeneratorProvider, ProviderModelMixin):
message_id: str = data[message_keys["id"]]
options = {
"id": message_id,
"inputs": format_prompt(messages),
"inputs": format_prompt(messages) if conversation is None else messages[-1]["content"],
"is_continue": False,
"is_retry": False,
"web_search": web_search
@ -92,5 +107,10 @@ class HuggingChat(AsyncGeneratorProvider, ProviderModelMixin):
yield token
elif line["type"] == "finalAnswer":
break
async with session.delete(f"{cls.url}/conversation/{conversation_id}") as response:
await raise_for_status(response)
if delete_conversation:
async with session.delete(f"{cls.url}/conversation/{conversation_id}") as response:
await raise_for_status(response)
class Conversation(BaseConversation):
def __init__(self, conversation_id: str) -> None:
self.conversation_id = conversation_id

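The HuggingChat provider can now reuse a server-side conversation: with `return_conversation=True` the generator yields a `Conversation` object first (and skips deletion), and passing that object back via `conversation` sends only the last message. A hedged usage sketch, assuming valid huggingface.co cookies are available locally:

```python
import asyncio
from g4f.Provider import HuggingChat
from g4f.Provider.HuggingChat import Conversation

async def chat_twice():
    conversation = None
    # First turn: request the conversation object so it can be reused.
    async for chunk in HuggingChat.create_async_generator(
        model="", messages=[{"role": "user", "content": "Hello!"}],
        return_conversation=True,
    ):
        if isinstance(chunk, Conversation):
            conversation = chunk  # holds the server-side conversation id
        else:
            print(chunk, end="")
    # Second turn: reuse it; only the last message is sent upstream.
    async for chunk in HuggingChat.create_async_generator(
        model="", messages=[{"role": "user", "content": "And a follow-up?"}],
        conversation=conversation,
    ):
        print(chunk, end="")

asyncio.run(chat_twice())
```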
@ -11,7 +11,7 @@ class Llama(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://www.llama2.ai"
working = True
supports_message_history = True
default_model = "meta/llama-3-70b-chat"
default_model = "meta/meta-llama-3-70b-instruct"
models = [
"meta/llama-2-7b-chat",
"meta/llama-2-13b-chat",
@ -20,8 +20,8 @@ class Llama(AsyncGeneratorProvider, ProviderModelMixin):
"meta/meta-llama-3-70b-instruct",
]
model_aliases = {
"meta-llama/Meta-Llama-3-8b-instruct": "meta/meta-llama-3-8b-instruct",
"meta-llama/Meta-Llama-3-70b-instruct": "meta/meta-llama-3-70b-instruct",
"meta-llama/Meta-Llama-3-8B-Instruct": "meta/meta-llama-3-8b-instruct",
"meta-llama/Meta-Llama-3-70B-Instruct": "meta/meta-llama-3-70b-instruct",
"meta-llama/Llama-2-7b-chat-hf": "meta/llama-2-7b-chat",
"meta-llama/Llama-2-13b-chat-hf": "meta/llama-2-13b-chat",
"meta-llama/Llama-2-70b-chat-hf": "meta/llama-2-70b-chat",

@ -0,0 +1,237 @@
from __future__ import annotations
import json
import uuid
import random
import time
from typing import Dict, List
from aiohttp import ClientSession, BaseConnector
from ..typing import AsyncResult, Messages, Cookies
from ..requests import raise_for_status, DEFAULT_HEADERS
from ..image import ImageResponse, ImagePreview
from ..errors import ResponseError
from .base_provider import AsyncGeneratorProvider
from .helper import format_prompt, get_connector, format_cookies
class Sources():
def __init__(self, link_list: List[Dict[str, str]]) -> None:
self.list = link_list
def __str__(self) -> str:
return "\n\n" + ("\n".join([f"[{link['title']}]({link['link']})" for link in self.list]))
class AbraGeoBlockedError(Exception):
pass
class MetaAI(AsyncGeneratorProvider):
label = "Meta AI"
url = "https://www.meta.ai"
working = True
def __init__(self, proxy: str = None, connector: BaseConnector = None):
self.session = ClientSession(connector=get_connector(connector, proxy), headers=DEFAULT_HEADERS)
self.cookies: Cookies = None
self.access_token: str = None
@classmethod
async def create_async_generator(
cls,
model: str,
messages: Messages,
proxy: str = None,
**kwargs
) -> AsyncResult:
async for chunk in cls(proxy).prompt(format_prompt(messages)):
yield chunk
async def update_access_token(self, birthday: str = "1999-01-01"):
url = "https://www.meta.ai/api/graphql/"
payload = {
"lsd": self.lsd,
"fb_api_caller_class": "RelayModern",
"fb_api_req_friendly_name": "useAbraAcceptTOSForTempUserMutation",
"variables": json.dumps({
"dob": birthday,
"icebreaker_type": "TEXT",
"__relay_internal__pv__WebPixelRatiorelayprovider": 1,
}),
"doc_id": "7604648749596940",
}
headers = {
"x-fb-friendly-name": "useAbraAcceptTOSForTempUserMutation",
"x-fb-lsd": self.lsd,
"x-asbd-id": "129477",
"alt-used": "www.meta.ai",
"sec-fetch-site": "same-origin"
}
async with self.session.post(url, headers=headers, cookies=self.cookies, data=payload) as response:
await raise_for_status(response, "Fetch access_token failed")
auth_json = await response.json(content_type=None)
self.access_token = auth_json["data"]["xab_abra_accept_terms_of_service"]["new_temp_user_auth"]["access_token"]
async def prompt(self, message: str, cookies: Cookies = None) -> AsyncResult:
if self.cookies is None:
await self.update_cookies(cookies)
if cookies is not None:
self.access_token = None
if self.access_token is None and cookies is None:
await self.update_access_token()
if self.access_token is None:
url = "https://www.meta.ai/api/graphql/"
payload = {"lsd": self.lsd, 'fb_dtsg': self.dtsg}
headers = {'x-fb-lsd': self.lsd}
else:
url = "https://graph.meta.ai/graphql?locale=user"
payload = {"access_token": self.access_token}
headers = {}
headers = {
'content-type': 'application/x-www-form-urlencoded',
'cookie': format_cookies(self.cookies),
'origin': 'https://www.meta.ai',
'referer': 'https://www.meta.ai/',
'x-asbd-id': '129477',
'x-fb-friendly-name': 'useAbraSendMessageMutation',
**headers
}
payload = {
**payload,
'fb_api_caller_class': 'RelayModern',
'fb_api_req_friendly_name': 'useAbraSendMessageMutation',
"variables": json.dumps({
"message": {"sensitive_string_value": message},
"externalConversationId": str(uuid.uuid4()),
"offlineThreadingId": generate_offline_threading_id(),
"suggestedPromptIndex": None,
"flashVideoRecapInput": {"images": []},
"flashPreviewInput": None,
"promptPrefix": None,
"entrypoint": "ABRA__CHAT__TEXT",
"icebreaker_type": "TEXT",
"__relay_internal__pv__AbraDebugDevOnlyrelayprovider": False,
"__relay_internal__pv__WebPixelRatiorelayprovider": 1,
}),
'server_timestamps': 'true',
'doc_id': '7783822248314888'
}
async with self.session.post(url, headers=headers, data=payload) as response:
await raise_for_status(response, "Fetch response failed")
last_snippet_len = 0
fetch_id = None
async for line in response.content:
if b"<h1>Something Went Wrong</h1>" in line:
raise ResponseError("Response: Something Went Wrong")
try:
json_line = json.loads(line)
except json.JSONDecodeError:
continue
bot_response_message = json_line.get("data", {}).get("node", {}).get("bot_response_message", {})
streaming_state = bot_response_message.get("streaming_state")
fetch_id = bot_response_message.get("fetch_id") or fetch_id
if streaming_state in ("STREAMING", "OVERALL_DONE"):
imagine_card = bot_response_message.get("imagine_card")
if imagine_card is not None:
imagine_session = imagine_card.get("session")
if imagine_session is not None:
imagine_medias = imagine_session.get("media_sets", [{}]).pop().get("imagine_media")
if imagine_medias is not None:
image_class = ImageResponse if streaming_state == "OVERALL_DONE" else ImagePreview
yield image_class([media["uri"] for media in imagine_medias], imagine_medias[0]["prompt"])
snippet = bot_response_message["snippet"]
new_snippet_len = len(snippet)
if new_snippet_len > last_snippet_len:
yield snippet[last_snippet_len:]
last_snippet_len = new_snippet_len
#if last_streamed_response is None:
# if attempts > 3:
# raise Exception("MetaAI is having issues and was not able to respond (Server Error)")
# access_token = await self.get_access_token()
# return await self.prompt(message=message, attempts=attempts + 1)
if fetch_id is not None:
sources = await self.fetch_sources(fetch_id)
if sources is not None:
yield sources
async def update_cookies(self, cookies: Cookies = None):
async with self.session.get("https://www.meta.ai/", cookies=cookies) as response:
await raise_for_status(response, "Fetch home failed")
text = await response.text()
if "AbraGeoBlockedError" in text:
raise AbraGeoBlockedError("Meta AI isn't available yet in your country")
if cookies is None:
cookies = {
"_js_datr": self.extract_value(text, "_js_datr"),
"abra_csrf": self.extract_value(text, "abra_csrf"),
"datr": self.extract_value(text, "datr"),
}
self.lsd = self.extract_value(text, start_str='"LSD",[],{"token":"', end_str='"}')
self.dtsg = self.extract_value(text, start_str='"DTSGInitialData",[],{"token":"', end_str='"}')
self.cookies = cookies
async def fetch_sources(self, fetch_id: str) -> Sources:
if self.access_token is None:
url = "https://www.meta.ai/api/graphql/"
payload = {"lsd": self.lsd, 'fb_dtsg': self.dtsg}
headers = {'x-fb-lsd': self.lsd}
else:
url = "https://graph.meta.ai/graphql?locale=user"
payload = {"access_token": self.access_token}
headers = {}
payload = {
**payload,
"fb_api_caller_class": "RelayModern",
"fb_api_req_friendly_name": "AbraSearchPluginDialogQuery",
"variables": json.dumps({"abraMessageFetchID": fetch_id}),
"server_timestamps": "true",
"doc_id": "6946734308765963",
}
headers = {
"authority": "graph.meta.ai",
"x-fb-friendly-name": "AbraSearchPluginDialogQuery",
**headers
}
async with self.session.post(url, headers=headers, cookies=self.cookies, data=payload) as response:
await raise_for_status(response, "Fetch sources failed")
text = await response.text()
if "<h1>Something Went Wrong</h1>" in text:
raise ResponseError("Response: Something Went Wrong")
try:
response_json = json.loads(text)
message = response_json["data"]["message"]
if message is not None:
searchResults = message["searchResults"]
if searchResults is not None:
return Sources(searchResults["references"])
except (KeyError, TypeError, json.JSONDecodeError):
raise RuntimeError(f"Response: {text}")
@staticmethod
def extract_value(text: str, key: str = None, start_str = None, end_str = '",') -> str:
if start_str is None:
start_str = f'{key}":{{"value":"'
start = text.find(start_str)
if start >= 0:
start += len(start_str)
end = text.find(end_str, start)
if end >= 0:
return text[start:end]
def generate_offline_threading_id() -> str:
"""
Generates an offline threading ID.
Returns:
str: The generated offline threading ID.
"""
# Generate a random 64-bit integer
random_value = random.getrandbits(64)
# Get the current timestamp in milliseconds
timestamp = int(time.time() * 1000)
# Combine timestamp and random value
threading_id = (timestamp << 22) | (random_value & ((1 << 22) - 1))
return str(threading_id)

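`generate_offline_threading_id` packs a millisecond timestamp into the high bits and 22 random bits into the low bits, so the timestamp is recoverable with a right shift. A small round-trip check of that encoding:

```python
import random
import time

def offline_threading_id() -> int:
    # Same packing as generate_offline_threading_id above.
    timestamp = int(time.time() * 1000)
    random_value = random.getrandbits(64)
    return (timestamp << 22) | (random_value & ((1 << 22) - 1))

tid = offline_threading_id()
recovered_ms = tid >> 22  # dropping the 22 random bits restores the timestamp
assert abs(recovered_ms - int(time.time() * 1000)) < 1000
```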
@ -0,0 +1,23 @@
from __future__ import annotations
from ..typing import AsyncResult, Messages, Cookies
from .helper import format_prompt, get_cookies
from .MetaAI import MetaAI
class MetaAIAccount(MetaAI):
needs_auth = True
parent = "MetaAI"
image_models = ["meta"]
@classmethod
async def create_async_generator(
cls,
model: str,
messages: Messages,
proxy: str = None,
cookies: Cookies = None,
**kwargs
) -> AsyncResult:
cookies = get_cookies(".meta.ai", True, True) if cookies is None else cookies
async for chunk in cls(proxy).prompt(format_prompt(messages), cookies):
yield chunk

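MetaAIAccount only differs from MetaAI in that it is marked `needs_auth` and pulls `.meta.ai` cookies from a local browser when none are supplied. A hedged usage sketch, assuming a logged-in browser profile is available:

```python
import asyncio
from g4f.Provider import MetaAIAccount

async def main():
    # Cookies are read from the local browser because none are passed in.
    async for chunk in MetaAIAccount.create_async_generator(
        model="meta", messages=[{"role": "user", "content": "Hi"}]
    ):
        print(chunk, end="")

asyncio.run(main())
```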
@ -0,0 +1,148 @@
from __future__ import annotations
import os, requests, time, json
from ..typing import CreateResult, Messages, ImageType
from .base_provider import AbstractProvider
from ..cookies import get_cookies
from ..image import to_bytes
class Reka(AbstractProvider):
url = "https://chat.reka.ai/"
working = True
supports_stream = True
default_vision_model = "reka"
cookies = {}
@classmethod
def create_completion(
cls,
model: str,
messages: Messages,
stream: bool,
proxy: str = None,
timeout: int = 180,
api_key: str = None,
image: ImageType = None,
**kwargs
) -> CreateResult:
cls.proxy = {"http": proxy, "https": proxy} if proxy else None
if not api_key:
cls.cookies = get_cookies("chat.reka.ai")
if not cls.cookies:
raise ValueError("No cookies found for chat.reka.ai")
elif "appSession" not in cls.cookies:
raise ValueError("No appSession found in cookies for chat.reka.ai, log in or provide an api_key")
api_key = cls.get_access_token(cls)
conversation = []
for message in messages:
conversation.append({
"type": "human",
"text": message["content"],
})
if image:
image_url = cls.upload_image(cls, api_key, image)
conversation[-1]["image_url"] = image_url
conversation[-1]["media_type"] = "image"
headers = {
'accept': '*/*',
'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
'authorization': f'Bearer {api_key}',
'cache-control': 'no-cache',
'content-type': 'application/json',
'origin': 'https://chat.reka.ai',
'pragma': 'no-cache',
'priority': 'u=1, i',
'sec-ch-ua': '"Chromium";v="124", "Google Chrome";v="124", "Not-A.Brand";v="99"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"macOS"',
'sec-fetch-dest': 'empty',
'sec-fetch-mode': 'cors',
'sec-fetch-site': 'same-origin',
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36',
}
json_data = {
'conversation_history': conversation,
'stream': True,
'use_search_engine': False,
'use_code_interpreter': False,
'model_name': 'reka-core',
'random_seed': int(time.time() * 1000),
}
tokens = ''
response = requests.post('https://chat.reka.ai/api/chat',
cookies=cls.cookies, headers=headers, json=json_data, proxies=cls.proxy, stream=True)
for completion in response.iter_lines():
if b'data' in completion:
token_data = json.loads(completion.decode('utf-8')[5:])['text']
yield (token_data.replace(tokens, ''))
tokens = token_data
def upload_image(cls, access_token, image: ImageType) -> str:
boundary_token = os.urandom(8).hex()
headers = {
'accept': '*/*',
'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
'cache-control': 'no-cache',
'authorization': f'Bearer {access_token}',
'content-type': f'multipart/form-data; boundary=----WebKitFormBoundary{boundary_token}',
'origin': 'https://chat.reka.ai',
'pragma': 'no-cache',
'priority': 'u=1, i',
'referer': 'https://chat.reka.ai/chat/hPReZExtDOPvUfF8vCPC',
'sec-ch-ua': '"Chromium";v="124", "Google Chrome";v="124", "Not-A.Brand";v="99"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"macOS"',
'sec-fetch-dest': 'empty',
'sec-fetch-mode': 'cors',
'sec-fetch-site': 'same-origin',
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36',
}
image_data = to_bytes(image)
boundary = f'----WebKitFormBoundary{boundary_token}'
data = f'--{boundary}\r\nContent-Disposition: form-data; name="image"; filename="image.png"\r\nContent-Type: image/png\r\n\r\n'
data += image_data.decode('latin-1')
data += f'\r\n--{boundary}--\r\n'
response = requests.post('https://chat.reka.ai/api/upload-image',
cookies=cls.cookies, headers=headers, proxies=cls.proxy, data=data.encode('latin-1'))
return response.json()['media_url']
def get_access_token(cls):
headers = {
'accept': '*/*',
'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
'cache-control': 'no-cache',
'pragma': 'no-cache',
'priority': 'u=1, i',
'referer': 'https://chat.reka.ai/chat',
'sec-ch-ua': '"Chromium";v="124", "Google Chrome";v="124", "Not-A.Brand";v="99"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"macOS"',
'sec-fetch-dest': 'empty',
'sec-fetch-mode': 'cors',
'sec-fetch-site': 'same-origin',
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36',
}
try:
response = requests.get('https://chat.reka.ai/bff/auth/access_token',
cookies=cls.cookies, headers=headers, proxies=cls.proxy)
return response.json()['accessToken']
except Exception as e:
raise ValueError(f"Failed to get access token: {e}, refresh your cookies or log in to chat.reka.ai")

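Reka assembles the multipart upload body by hand, so the boundary token in the `content-type` header has to match the one framing the body. A minimal standalone sketch of the same framing (the helper name is illustrative):

```python
import os

def build_multipart_image(payload: bytes, filename: str = "image.png") -> tuple[bytes, str]:
    # One form part, closed by the trailing boundary, as in upload_image above.
    boundary = f"----WebKitFormBoundary{os.urandom(8).hex()}"
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="image"; filename="{filename}"\r\n'
        f"Content-Type: image/png\r\n\r\n"
    ).encode("latin-1")
    tail = f"\r\n--{boundary}--\r\n".encode("latin-1")
    return head + payload + tail, f"multipart/form-data; boundary={boundary}"

body, content_type = build_multipart_image(b"\x89PNG...")  # payload is a stub
```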
@ -0,0 +1,87 @@
from __future__ import annotations
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt, filter_none
from ..typing import AsyncResult, Messages
from ..requests import raise_for_status
from ..requests.aiohttp import StreamSession
from ..errors import ResponseError, MissingAuthError
class Replicate(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://replicate.com"
working = True
default_model = "meta/meta-llama-3-70b-instruct"
model_aliases = {
"meta-llama/Meta-Llama-3-70B-Instruct": default_model
}
@classmethod
async def create_async_generator(
cls,
model: str,
messages: Messages,
api_key: str = None,
proxy: str = None,
timeout: int = 180,
system_prompt: str = None,
max_new_tokens: int = None,
temperature: float = None,
top_p: float = None,
top_k: float = None,
stop: list = None,
extra_data: dict = {},
headers: dict = {
"accept": "application/json",
},
**kwargs
) -> AsyncResult:
model = cls.get_model(model)
if cls.needs_auth and api_key is None:
raise MissingAuthError("api_key is missing")
if api_key is not None:
headers["Authorization"] = f"Bearer {api_key}"
api_base = "https://api.replicate.com/v1/models/"
else:
api_base = "https://replicate.com/api/models/"
async with StreamSession(
proxy=proxy,
headers=headers,
timeout=timeout
) as session:
data = {
"stream": True,
"input": {
"prompt": format_prompt(messages),
**filter_none(
system_prompt=system_prompt,
max_new_tokens=max_new_tokens,
temperature=temperature,
top_p=top_p,
top_k=top_k,
stop_sequences=",".join(stop) if stop else None
),
**extra_data
},
}
url = f"{api_base.rstrip('/')}/{model}/predictions"
async with session.post(url, json=data) as response:
message = "Model not found" if response.status == 404 else None
await raise_for_status(response, message)
result = await response.json()
if "id" not in result:
raise ResponseError(f"Invalid response: {result}")
async with session.get(result["urls"]["stream"], headers={"Accept": "text/event-stream"}) as response:
await raise_for_status(response)
event = None
async for line in response.iter_lines():
if line.startswith(b"event: "):
event = line[7:]
if event == b"done":
break
elif event == b"output":
if line.startswith(b"data: "):
new_text = line[6:].decode()
if new_text:
yield new_text
else:
yield "\n"

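The rewritten Replicate provider keeps the request body minimal by passing sampling options through `filter_none`, which drops anything the caller left unset. An illustrative re-implementation of that helper (the real one lives in `g4f.Provider.helper`):

```python
def filter_none(**kwargs) -> dict:
    # Drop unset options so they never appear in the API payload.
    return {key: value for key, value in kwargs.items() if value is not None}

payload = {
    "prompt": "Hello",
    **filter_none(temperature=0.7, top_p=None, top_k=None, stop_sequences=None),
}
assert payload == {"prompt": "Hello", "temperature": 0.7}
```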
@ -11,12 +11,14 @@ from ..errors import ResponseError
class ReplicateImage(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://replicate.com"
parent = "Replicate"
working = True
default_model = 'stability-ai/sdxl'
default_versions = [
"39ed52f2a78e934b3ba6e2a89f5b1c712de7dfea535525255b1aa35c5565e08b",
"2b017d9b67edd2ee1401238df49d75da53c523f36e363881e057f5dc3ed3c5b2"
]
image_models = [default_model]
@classmethod
async def create_async_generator(

@ -8,18 +8,22 @@ import uuid
from ..typing import AsyncResult, Messages, ImageType, Cookies
from .base_provider import AsyncGeneratorProvider, ProviderModelMixin
from .helper import format_prompt
from ..image import ImageResponse, to_bytes, is_accepted_format
from ..image import ImageResponse, ImagePreview, to_bytes, is_accepted_format
from ..requests import StreamSession, FormData, raise_for_status
from .you.har_file import get_dfp_telemetry_id
from .you.har_file import get_telemetry_ids
from .. import debug
class You(AsyncGeneratorProvider, ProviderModelMixin):
label = "You.com"
url = "https://you.com"
working = True
supports_gpt_35_turbo = True
supports_gpt_4 = True
default_model = "gpt-3.5-turbo"
default_vision_model = "agent"
image_models = ["dall-e"]
models = [
"gpt-3.5-turbo",
default_model,
"gpt-4",
"gpt-4-turbo",
"claude-instant",
@ -28,13 +32,15 @@ class You(AsyncGeneratorProvider, ProviderModelMixin):
"claude-3-sonnet",
"gemini-pro",
"zephyr",
"dall-e",
default_vision_model,
*image_models
]
model_aliases = {
"claude-v2": "claude-2"
}
_cookies = None
_cookies_used = 0
_telemetry_ids = []
@classmethod
async def create_async_generator(
@ -49,7 +55,7 @@ class You(AsyncGeneratorProvider, ProviderModelMixin):
chat_mode: str = "default",
**kwargs,
) -> AsyncResult:
if image is not None:
if image is not None or model == cls.default_vision_model:
chat_mode = "agent"
elif not model or model == cls.default_model:
...
@ -60,13 +66,18 @@ class You(AsyncGeneratorProvider, ProviderModelMixin):
chat_mode = "custom"
model = cls.get_model(model)
async with StreamSession(
proxies={"all": proxy},
proxy=proxy,
impersonate="chrome",
timeout=(30, timeout)
) as session:
cookies = await cls.get_cookies(session) if chat_mode != "default" else None
upload = json.dumps([await cls.upload_file(session, cookies, to_bytes(image), image_name)]) if image else ""
upload = ""
if image is not None:
upload_file = await cls.upload_file(
session, cookies,
to_bytes(image), image_name
)
upload = json.dumps([upload_file])
headers = {
"Accept": "text/event-stream",
"Referer": f"{cls.url}/search?fromSearchBar=true&tbm=youchat",
@ -100,11 +111,17 @@ class You(AsyncGeneratorProvider, ProviderModelMixin):
if event == "youChatToken" and event in data:
yield data[event]
elif event == "youChatUpdate" and "t" in data and data["t"] is not None:
match = re.search(r"!\[fig\]\((.+?)\)", data["t"])
if match:
yield ImageResponse(match.group(1), messages[-1]["content"])
if chat_mode == "create":
match = re.search(r"!\[(.+?)\]\((.+?)\)", data["t"])
if match:
if match.group(1) == "fig":
yield ImagePreview(match.group(2), messages[-1]["content"])
else:
yield ImageResponse(match.group(2), match.group(1))
else:
yield data["t"]
else:
yield data["t"]
yield data["t"]
@classmethod
async def upload_file(cls, client: StreamSession, cookies: Cookies, file: bytes, filename: str = None) -> dict:
@ -159,7 +176,12 @@ class You(AsyncGeneratorProvider, ProviderModelMixin):
@classmethod
async def create_cookies(cls, client: StreamSession) -> Cookies:
if not cls._telemetry_ids:
cls._telemetry_ids = await get_telemetry_ids()
user_uuid = str(uuid.uuid4())
telemetry_id = cls._telemetry_ids.pop()
if debug.logging:
print(f"Use telemetry_id: {telemetry_id}")
async with client.post(
"https://web.stytch.com/sdk/v1/passwords",
headers={
@ -170,7 +192,7 @@ class You(AsyncGeneratorProvider, ProviderModelMixin):
"Referer": "https://you.com/"
},
json={
"dfp_telemetry_id": await get_dfp_telemetry_id(),
"dfp_telemetry_id": telemetry_id,
"email": f"{user_uuid}@gmail.com",
"password": f"{user_uuid}#{user_uuid}",
"session_duration_minutes": 129600

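In "create" mode the You.com stream embeds images as markdown, and the new parsing distinguishes in-progress previews (alt text `fig`) from finished images (alt text carries the title). A minimal sketch of that dispatch, assuming the same markdown shapes:

```python
import re

def classify_chunk(text: str) -> tuple[str, str]:
    # Returns ("preview" | "image" | "text", payload) for a youChatUpdate chunk.
    match = re.search(r"!\[(.+?)\]\((.+?)\)", text)
    if match:
        kind = "preview" if match.group(1) == "fig" else "image"
        return kind, match.group(2)
    return "text", text

assert classify_chunk("![fig](https://example.com/a.png)")[0] == "preview"
assert classify_chunk("![a cat](https://example.com/a.png)")[0] == "image"
assert classify_chunk("plain token")[0] == "text"
```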
@ -9,7 +9,6 @@ from .deprecated import *
from .not_working import *
from .selenium import *
from .needs_auth import *
from .unfinished import *
from .Aichatos import Aichatos
from .Aura import Aura
@ -42,12 +41,16 @@ from .Koala import Koala
from .Liaobots import Liaobots
from .Llama import Llama
from .Local import Local
from .MetaAI import MetaAI
from .MetaAIAccount import MetaAIAccount
from .PerplexityLabs import PerplexityLabs
from .Pi import Pi
from .Replicate import Replicate
from .ReplicateImage import ReplicateImage
from .Vercel import Vercel
from .WhiteRabbitNeo import WhiteRabbitNeo
from .You import You
from .Reka import Reka
import sys

@ -1,3 +1,3 @@
from ..providers.base_provider import *
from ..providers.types import FinishReason
from ..providers.types import FinishReason, Streaming
from .helper import get_cookies, format_prompt

@ -1,7 +1,6 @@
from __future__ import annotations
from aiohttp import ClientSession
from ...requests import raise_for_status
from ...requests import StreamSession, raise_for_status
from ...errors import RateLimitError
from ...providers.conversation import BaseConversation
@ -22,7 +21,7 @@ class Conversation(BaseConversation):
self.clientId = clientId
self.conversationSignature = conversationSignature
async def create_conversation(session: ClientSession, headers: dict, tone: str) -> Conversation:
async def create_conversation(session: StreamSession, headers: dict, tone: str) -> Conversation:
"""
Create a new conversation asynchronously.
@ -42,6 +41,8 @@ async def create_conversation(session: ClientSession, headers: dict, tone: str)
raise RateLimitError("Response 404: Do fewer requests and reuse conversations")
await raise_for_status(response, "Failed to create conversation")
data = await response.json()
if not data:
raise RuntimeError('Empty response: Failed to create conversation')
conversationId = data.get('conversationId')
clientId = data.get('clientId')
conversationSignature = response.headers.get('X-Sydney-Encryptedconversationsignature')
@ -49,7 +50,7 @@ async def create_conversation(session: ClientSession, headers: dict, tone: str)
raise RuntimeError('Empty fields: Failed to create conversation')
return Conversation(conversationId, clientId, conversationSignature)
async def list_conversations(session: ClientSession) -> list:
async def list_conversations(session: StreamSession) -> list:
"""
List all conversations asynchronously.
@ -64,7 +65,7 @@ async def list_conversations(session: ClientSession) -> list:
response = await response.json()
return response["chats"]
async def delete_conversation(session: ClientSession, conversation: Conversation, headers: dict) -> bool:
async def delete_conversation(session: StreamSession, conversation: Conversation, headers: dict) -> bool:
"""
Delete a conversation asynchronously.

@ -16,6 +16,7 @@ try:
except ImportError:
pass
from ... import debug
from ...typing import Messages, Cookies, ImageType, AsyncResult
from ..base_provider import AsyncGeneratorProvider
from ..helper import format_prompt, get_cookies
@ -31,7 +32,7 @@ REQUEST_HEADERS = {
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36',
'x-same-domain': '1',
}
REQUEST_BL_PARAM = "boq_assistant-bard-web-server_20240201.08_p8"
REQUEST_BL_PARAM = "boq_assistant-bard-web-server_20240421.18_p0"
REQUEST_URL = "https://gemini.google.com/_/BardChatUi/data/assistant.lamda.BardFrontendService/StreamGenerate"
UPLOAD_IMAGE_URL = "https://content-push.googleapis.com/upload/"
UPLOAD_IMAGE_HEADERS = {
@ -53,6 +54,56 @@ class Gemini(AsyncGeneratorProvider):
url = "https://gemini.google.com"
needs_auth = True
working = True
image_models = ["gemini"]
default_vision_model = "gemini"
_cookies: Cookies = None
@classmethod
async def nodriver_login(cls) -> Cookies:
try:
import nodriver as uc
except ImportError:
return
try:
from platformdirs import user_config_dir
user_data_dir = user_config_dir("g4f-nodriver")
except:
user_data_dir = None
if debug.logging:
print(f"Open nodriver with user_dir: {user_data_dir}")
browser = await uc.start(user_data_dir=user_data_dir)
page = await browser.get(f"{cls.url}/app")
await page.select("div.ql-editor.textarea", 240)
cookies = {}
for c in await page.browser.cookies.get_all():
if c.domain.endswith(".google.com"):
cookies[c.name] = c.value
await page.close()
return cookies
@classmethod
async def webdriver_login(cls, proxy: str):
driver = None
try:
driver = get_browser(proxy=proxy)
try:
driver.get(f"{cls.url}/app")
WebDriverWait(driver, 5).until(
EC.visibility_of_element_located((By.CSS_SELECTOR, "div.ql-editor.textarea"))
)
except:
login_url = os.environ.get("G4F_LOGIN_URL")
if login_url:
yield f"Please login: [Google Gemini]({login_url})\n\n"
WebDriverWait(driver, 240).until(
EC.visibility_of_element_located((By.CSS_SELECTOR, "div.ql-editor.textarea"))
)
cls._cookies = get_driver_cookies(driver)
except MissingRequirementsError:
pass
finally:
if driver:
driver.close()
@classmethod
async def create_async_generator(
@ -72,47 +123,30 @@ class Gemini(AsyncGeneratorProvider):
if cookies is None:
cookies = {}
cookies["__Secure-1PSID"] = api_key
cookies = cookies if cookies else get_cookies(".google.com", False, True)
cls._cookies = cookies or cls._cookies or get_cookies(".google.com", False, True)
base_connector = get_connector(connector, proxy)
async with ClientSession(
headers=REQUEST_HEADERS,
connector=base_connector
) as session:
snlm0e = await cls.fetch_snlm0e(session, cookies) if cookies else None
snlm0e = await cls.fetch_snlm0e(session, cls._cookies) if cls._cookies else None
if not snlm0e:
driver = None
try:
driver = get_browser(proxy=proxy)
try:
driver.get(f"{cls.url}/app")
WebDriverWait(driver, 5).until(
EC.visibility_of_element_located((By.CSS_SELECTOR, "div.ql-editor.textarea"))
)
except:
login_url = os.environ.get("G4F_LOGIN_URL")
if login_url:
yield f"Please login: [Google Gemini]({login_url})\n\n"
WebDriverWait(driver, 240).until(
EC.visibility_of_element_located((By.CSS_SELECTOR, "div.ql-editor.textarea"))
)
cookies = get_driver_cookies(driver)
except MissingRequirementsError:
pass
finally:
if driver:
driver.close()
cls._cookies = await cls.nodriver_login()
if cls._cookies is None:
async for chunk in cls.webdriver_login(proxy):
yield chunk
if not snlm0e:
if "__Secure-1PSID" not in cookies:
if "__Secure-1PSID" not in cls._cookies:
raise MissingAuthError('Missing "__Secure-1PSID" cookie')
snlm0e = await cls.fetch_snlm0e(session, cookies)
snlm0e = await cls.fetch_snlm0e(session, cls._cookies)
if not snlm0e:
raise RuntimeError("Invalid auth. SNlM0e not found")
raise RuntimeError("Invalid cookies. SNlM0e not found")
image_url = await cls.upload_image(base_connector, to_bytes(image), image_name) if image else None
async with ClientSession(
cookies=cookies,
cookies=cls._cookies,
headers=REQUEST_HEADERS,
connector=base_connector,
) as client:

@ -4,7 +4,7 @@ from .Openai import Openai
from ...typing import AsyncResult, Messages
class Groq(Openai):
lebel = "Groq"
label = "Groq"
url = "https://console.groq.com/playground"
working = True
default_model = "mixtral-8x7b-32768"

@ -3,5 +3,6 @@ from __future__ import annotations
from .OpenaiChat import OpenaiChat
class OpenaiAccount(OpenaiChat):
label = "OpenAI ChatGPT with Account"
needs_auth = True
needs_auth = True
parent = "OpenaiChat"
image_models = ["dall-e"]

@ -29,9 +29,26 @@ from ...requests.aiohttp import StreamSession
from ...image import to_image, to_bytes, ImageResponse, ImageRequest
from ...errors import MissingAuthError, ResponseError
from ...providers.conversation import BaseConversation
from ..helper import format_cookies
from ..openai.har_file import getArkoseAndAccessToken, NoValidHarFileError
from ..openai.proofofwork import generate_proof_token
from ... import debug
DEFAULT_HEADERS = {
"accept": "*/*",
"accept-encoding": "gzip, deflate, br, zstd",
"accept-language": "en-US,en;q=0.5",
"referer": "https://chat.openai.com/",
"sec-ch-ua": "\"Brave\";v=\"123\", \"Not:A-Brand\";v=\"8\", \"Chromium\";v=\"123\"",
"sec-ch-ua-mobile": "?0",
"sec-ch-ua-platform": "\"Windows\"",
"sec-fetch-dest": "empty",
"sec-fetch-mode": "cors",
"sec-fetch-site": "same-origin",
"sec-gpc": "1",
"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36"
}
class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
"""A class for creating and managing conversations with OpenAI chat service"""
@ -43,8 +60,14 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
supports_message_history = True
supports_system_message = True
default_model = None
default_vision_model = "gpt-4-vision"
models = ["gpt-3.5-turbo", "gpt-4", "gpt-4-gizmo"]
model_aliases = {"text-davinci-002-render-sha": "gpt-3.5-turbo", "": "gpt-3.5-turbo", "gpt-4-turbo-preview": "gpt-4"}
model_aliases = {
"text-davinci-002-render-sha": "gpt-3.5-turbo",
"": "gpt-3.5-turbo",
"gpt-4-turbo-preview": "gpt-4",
"dall-e": "gpt-4",
}
_api_key: str = None
_headers: dict = None
_cookies: Cookies = None
@ -334,9 +357,8 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
Raises:
RuntimeError: If an error occurs during processing.
"""
async with StreamSession(
proxies={"all": proxy},
proxy=proxy,
impersonate="chrome",
timeout=timeout
) as session:
@ -349,35 +371,46 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
cls._set_api_key(api_key)
if cls.default_model is None and (not cls.needs_auth or cls._api_key is not None):
if cls._api_key is None:
cls._create_request_args(cookies)
async with session.get(
f"{cls.url}/",
headers=DEFAULT_HEADERS
) as response:
cls._update_request_args(session)
await raise_for_status(response)
try:
if not model:
cls.default_model = cls.get_model(await cls.get_default_model(session, cls._headers))
else:
cls.default_model = cls.get_model(model)
except MissingAuthError:
pass
except Exception as e:
api_key = cls._api_key = None
cls._create_request_args()
if debug.logging:
print("OpenaiChat: Load default_model failed")
print("OpenaiChat: Load default model failed")
print(f"{e.__class__.__name__}: {e}")
arkose_token = None
if cls.default_model is None:
error = None
try:
arkose_token, api_key, cookies = await getArkoseAndAccessToken(proxy)
cls._create_request_args(cookies)
arkose_token, api_key, cookies, headers = await getArkoseAndAccessToken(proxy)
cls._create_request_args(cookies, headers)
cls._set_api_key(api_key)
except NoValidHarFileError as e:
...
error = e
if cls._api_key is None:
await cls.nodriver_access_token()
if cls._api_key is None and cls.needs_auth:
raise e
raise error
cls.default_model = cls.get_model(await cls.get_default_model(session, cls._headers))
async with session.post(
f"{cls.url}/backend-anon/sentinel/chat-requirements" if not cls._api_key else
f"{cls.url}/backend-anon/sentinel/chat-requirements"
if cls._api_key is None else
f"{cls.url}/backend-api/sentinel/chat-requirements",
json={"conversation_mode_kind": "primary_assistant"},
headers=cls._headers
@ -388,16 +421,23 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
blob = data["arkose"]["dx"]
need_arkose = data["arkose"]["required"]
chat_token = data["token"]
if debug.logging:
print(f'Arkose: {need_arkose} Turnstile: {data["turnstile"]["required"]}')
proofofwork = None
if "proofofwork" in data:
proofofwork = generate_proof_token(**data["proofofwork"], user_agent=cls._headers["user-agent"])
if need_arkose and arkose_token is None:
arkose_token, api_key, cookies = await getArkoseAndAccessToken(proxy)
cls._create_request_args(cookies)
arkose_token, api_key, cookies, headers = await getArkoseAndAccessToken(proxy)
cls._create_request_args(cookies, headers)
cls._set_api_key(api_key)
if arkose_token is None:
raise MissingAuthError("No arkose token found in .har file")
if debug.logging:
print(
'Arkose:', False if not need_arkose else arkose_token[:12]+"...",
'Turnstile:', data["turnstile"]["required"],
'Proofofwork:', False if proofofwork is None else proofofwork[:12]+"...",
)
try:
image_request = await cls.upload_image(session, cls._headers, image, image_name) if image else None
@ -406,7 +446,8 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
print("OpenaiChat: Upload image failed")
print(f"{e.__class__.__name__}: {e}")
model = cls.get_model(model).replace("gpt-3.5-turbo", "text-davinci-002-render-sha")
model = cls.get_model(model)
model = "text-davinci-002-render-sha" if model == "gpt-3.5-turbo" else model
if conversation is None:
conversation = Conversation(conversation_id, str(uuid.uuid4()) if parent_id is None else parent_id)
else:
@ -431,12 +472,14 @@ class OpenaiChat(AsyncGeneratorProvider, ProviderModelMixin):
messages = messages if conversation_id is None else [messages[-1]]
data["messages"] = cls.create_messages(messages, image_request)
headers = {
"Accept": "text/event-stream",
"OpenAI-Sentinel-Chat-Requirements-Token": chat_token,
"accept": "text/event-stream",
"Openai-Sentinel-Chat-Requirements-Token": chat_token,
**cls._headers
}
if need_arkose:
headers["OpenAI-Sentinel-Arkose-Token"] = arkose_token
headers["Openai-Sentinel-Arkose-Token"] = arkose_token
if proofofwork is not None:
headers["Openai-Sentinel-Proof-Token"] = proofofwork
async with session.post(
f"{cls.url}/backend-anon/conversation" if cls._api_key is None else
f"{cls.url}/backend-api/conversation",
@ -595,8 +638,7 @@ this.fetch = async (url, options) => {
print(f"Open nodriver with user_dir: {user_data_dir}")
browser = await uc.start(user_data_dir=user_data_dir)
page = await browser.get("https://chat.openai.com/")
while await page.find("[id^=headlessui-menu-button-]") is None:
await asyncio.sleep(1)
await page.select("[id^=headlessui-menu-button-]", 240)
api_key = await page.evaluate(
"(async () => {"
"let session = await fetch('/api/auth/session');"
@ -614,7 +656,7 @@ this.fetch = async (url, options) => {
cookies[c.name] = c.value
user_agent = await page.evaluate("window.navigator.userAgent")
await page.close()
cls._create_request_args(cookies, user_agent)
cls._create_request_args(cookies, user_agent=user_agent)
cls._set_api_key(api_key)
@classmethod
@ -662,28 +704,16 @@ this.fetch = async (url, options) => {
@staticmethod
def get_default_headers() -> dict:
return {
"accept-language": "en-US",
**DEFAULT_HEADERS,
"content-type": "application/json",
"oai-device-id": str(uuid.uuid4()),
"oai-language": "en-US",
"sec-ch-ua": "\"Google Chrome\";v=\"123\", \"Not:A-Brand\";v=\"8\", \"Chromium\";v=\"123\"",
"sec-ch-ua-mobile": "?0",
"sec-ch-ua-platform": "\"Linux\"",
"sec-fetch-dest": "empty",
"sec-fetch-mode": "cors",
"sec-fetch-site": "same-origin"
}
@staticmethod
def _format_cookies(cookies: Cookies):
return "; ".join(f"{k}={v}" for k, v in cookies.items() if k != "access_token")
@classmethod
def _create_request_args(cls, cookies: Cookies = None, user_agent: str = None):
cls._headers = cls.get_default_headers()
def _create_request_args(cls, cookies: Cookies = None, headers: dict = None, user_agent: str = None):
cls._headers = cls.get_default_headers() if headers is None else headers
if user_agent is not None:
cls._headers["user-agent"] = user_agent
cls._cookies = {} if cookies is None else cookies
cls._cookies = {} if cookies is None else {k: v for k, v in cookies.items() if k != "access_token"}
cls._update_cookie_header()
@classmethod
@ -696,11 +726,13 @@ this.fetch = async (url, options) => {
def _set_api_key(cls, api_key: str):
cls._api_key = api_key
cls._expires = int(time.time()) + 60 * 60 * 4
cls._headers["Authorization"] = f"Bearer {api_key}"
cls._headers["authorization"] = f"Bearer {api_key}"
@classmethod
def _update_cookie_header(cls):
cls._headers["Cookie"] = cls._format_cookies(cls._cookies)
cls._headers["cookie"] = format_cookies(cls._cookies)
if "oai-did" in cls._cookies:
cls._headers["oai-device-id"] = cls._cookies["oai-did"]
class Conversation(BaseConversation):
"""

@ -1,3 +1,5 @@
from __future__ import annotations
import json
import base64
import hashlib

@ -1,3 +1,5 @@
from __future__ import annotations
import base64
import json
import os
@ -28,6 +30,7 @@ sessionUrl = "https://chat.openai.com/api/auth/session"
chatArk: arkReq = None
accessToken: str = None
cookies: dict = None
headers: dict = None
def readHAR():
dirPath = "./"
@ -59,17 +62,21 @@ def readHAR():
except KeyError:
continue
cookies = {c['name']: c['value'] for c in v['request']['cookies']}
headers = get_headers(v)
if not accessToken:
raise NoValidHarFileError("No accessToken found in .har files")
if not chatArks:
return None, accessToken, cookies
return chatArks.pop(), accessToken, cookies
return None, accessToken, cookies, headers
return chatArks.pop(), accessToken, cookies, headers
def get_headers(entry) -> dict:
return {h['name'].lower(): h['value'] for h in entry['request']['headers'] if h['name'].lower() not in ['content-length', 'cookie'] and not h['name'].startswith(':')}
def parseHAREntry(entry) -> arkReq:
tmpArk = arkReq(
arkURL=entry['request']['url'],
arkBx="",
arkHeader={h['name'].lower(): h['value'] for h in entry['request']['headers'] if h['name'].lower() not in ['content-length', 'cookie'] and not h['name'].startswith(':')},
arkHeader=get_headers(entry),
arkBody={p['name']: unquote(p['value']) for p in entry['request']['postData']['params'] if p['name'] not in ['rnd']},
arkCookies={c['name']: c['value'] for c in entry['request']['cookies']},
userAgent=""
@ -123,11 +130,11 @@ def getN() -> str:
timestamp = str(int(time.time()))
return base64.b64encode(timestamp.encode()).decode()
async def getArkoseAndAccessToken(proxy: str):
global chatArk, accessToken, cookies
async def getArkoseAndAccessToken(proxy: str) -> tuple[str, str, dict, dict]:
global chatArk, accessToken, cookies, headers
if chatArk is None or accessToken is None:
chatArk, accessToken, cookies = readHAR()
chatArk, accessToken, cookies, headers = readHAR()
if chatArk is None:
return None, accessToken, cookies
return None, accessToken, cookies, headers
newReq = genArkReq(chatArk)
return await sendRequest(newReq, proxy), accessToken, cookies
return await sendRequest(newReq, proxy), accessToken, cookies, headers

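The new `get_headers` helper centralizes header extraction from .har entries: it lowercases names and drops cookies, `content-length`, and HTTP/2 pseudo-headers. A worked example against a toy entry (the entry itself is made up):

```python
def get_headers(entry: dict) -> dict:
    # Same filtering as in har_file.py above.
    return {
        h["name"].lower(): h["value"]
        for h in entry["request"]["headers"]
        if h["name"].lower() not in ("content-length", "cookie")
        and not h["name"].startswith(":")
    }

entry = {"request": {"headers": [
    {"name": ":authority", "value": "chat.openai.com"},  # pseudo-header: dropped
    {"name": "Cookie", "value": "secret"},               # dropped
    {"name": "User-Agent", "value": "Mozilla/5.0"},      # kept, lowercased
]}}
assert get_headers(entry) == {"user-agent": "Mozilla/5.0"}
```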
@ -0,0 +1,39 @@
import random
import hashlib
import json
import base64
from datetime import datetime, timedelta, timezone
def generate_proof_token(required: bool, seed: str, difficulty: str, user_agent: str):
if not required:
return
cores = [8, 12, 16, 24]
screens = [3000, 4000, 6000]
core = random.choice(cores)
screen = random.choice(screens)
# Get current UTC time
now_utc = datetime.now(timezone.utc)
# Convert UTC time to Eastern Time
now_et = now_utc.astimezone(timezone(timedelta(hours=-5)))
parse_time = now_et.strftime('%a, %d %b %Y %H:%M:%S GMT')
config = [core + screen, parse_time, 4294705152, 0, user_agent]
diff_len = len(difficulty) // 2
for i in range(100000):
config[3] = i
json_data = json.dumps(config)
base = base64.b64encode(json_data.encode()).decode()
hash_value = hashlib.sha3_512((seed + base).encode()).digest()
if hash_value.hex()[:diff_len] <= difficulty:
result = "gAAAAAB" + base
return result
fallback_base = base64.b64encode(f'"{seed}"'.encode()).decode()
return "gAAAAABwQ8Lk5FbGpA2NcR9dShT6gYjU7VxZ4D" + fallback_base

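The proof-of-work loop brute-forces the counter in `config[3]` until the SHA3-512 of `seed + base64(config)` starts with a hex prefix that compares less than or equal to the difficulty string. A small verifier for tokens produced this way, assuming the fixed `"gAAAAAB"` prefix:

```python
import hashlib

def check_proof_token(token: str, seed: str, difficulty: str) -> bool:
    # Re-apply the acceptance test from generate_proof_token above.
    prefix = "gAAAAAB"
    if not token.startswith(prefix):
        return False
    base = token[len(prefix):]
    digest = hashlib.sha3_512((seed + base).encode()).hexdigest()
    return digest[: len(difficulty) // 2] <= difficulty
```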
@ -1,78 +0,0 @@
from __future__ import annotations
import asyncio
from ..base_provider import AsyncGeneratorProvider, ProviderModelMixin
from ..helper import format_prompt, filter_none
from ...typing import AsyncResult, Messages
from ...requests import StreamSession, raise_for_status
from ...image import ImageResponse
from ...errors import ResponseError, MissingAuthError
class Replicate(AsyncGeneratorProvider, ProviderModelMixin):
url = "https://replicate.com"
working = True
default_model = "mistralai/mixtral-8x7b-instruct-v0.1"
api_base = "https://api.replicate.com/v1/models/"
@classmethod
async def create_async_generator(
cls,
model: str,
messages: Messages,
api_key: str = None,
proxy: str = None,
timeout: int = 180,
system_prompt: str = None,
max_new_tokens: int = None,
temperature: float = None,
top_p: float = None,
top_k: float = None,
stop: list = None,
extra_data: dict = {},
headers: dict = {},
**kwargs
) -> AsyncResult:
model = cls.get_model(model)
if api_key is None:
raise MissingAuthError("api_key is missing")
headers["Authorization"] = f"Bearer {api_key}"
async with StreamSession(
proxies={"all": proxy},
headers=headers,
timeout=timeout
) as session:
data = {
"stream": True,
"input": {
"prompt": format_prompt(messages),
**filter_none(
system_prompt=system_prompt,
max_new_tokens=max_new_tokens,
temperature=temperature,
top_p=top_p,
top_k=top_k,
stop_sequences=",".join(stop) if stop else None
),
**extra_data
},
}
url = f"{cls.api_base.rstrip('/')}/{model}/predictions"
async with session.post(url, json=data) as response:
await raise_for_status(response)
result = await response.json()
if "id" not in result:
raise ResponseError(f"Invalid response: {result}")
async with session.get(result["urls"]["stream"], headers={"Accept": "text/event-stream"}) as response:
await raise_for_status(response)
event = None
async for line in response.iter_lines():
if line.startswith(b"event: "):
event = line[7:]
elif event == b"output":
if line.startswith(b"data: "):
yield line[6:].decode()
elif not line.startswith(b"id: "):
continue#yield "+"+line.decode()
elif event == b"done":
break

@ -2,12 +2,12 @@ from __future__ import annotations
import json
import os
import os.path
import random
import uuid
import asyncio
import requests
from ...requests import StreamSession, raise_for_status
from ...errors import MissingRequirementsError
from ... import debug
class NoValidHarFileError(Exception):
...
@ -20,7 +20,8 @@ class arkReq:
self.arkCookies = arkCookies
self.userAgent = userAgent
arkPreURL = "https://telemetry.stytch.com/submit"
telemetry_url = "https://telemetry.stytch.com/submit"
public_token = "public-token-live-507a52ad-7e69-496b-aee0-1c9863c7c819"
chatArks: list = None
def readHAR():
@ -43,7 +44,7 @@ def readHAR():
# Error: not a HAR file!
continue
for v in harFile['log']['entries']:
if arkPreURL in v['request']['url']:
if v['request']['url'] == telemetry_url:
chatArks.append(parseHAREntry(v))
if not chatArks:
raise NoValidHarFileError("No telemetry in .har files found")
@ -61,95 +62,44 @@ def parseHAREntry(entry) -> arkReq:
return tmpArk
async def sendRequest(tmpArk: arkReq, proxy: str = None):
async with StreamSession(headers=tmpArk.arkHeaders, cookies=tmpArk.arkCookies, proxies={"all": proxy}) as session:
async with StreamSession(headers=tmpArk.arkHeaders, cookies=tmpArk.arkCookies, proxy=proxy) as session:
async with session.post(tmpArk.arkURL, data=tmpArk.arkBody) as response:
await raise_for_status(response)
return await response.text()
async def get_dfp_telemetry_id(proxy: str = None):
return await telemetry_id_with_driver(proxy)
async def create_telemetry_id(proxy: str = None):
global chatArks
if chatArks is None:
chatArks = readHAR()
return await sendRequest(random.choice(chatArks), proxy)
async def telemetry_id_with_driver(proxy: str = None):
from ...debug import logging
if logging:
print('getting telemetry_id for you.com with nodriver')
async def get_telemetry_ids(proxy: str = None) -> list:
try:
import nodriver as uc
from nodriver import start, cdp, loop
return [await create_telemetry_id(proxy)]
except NoValidHarFileError as e:
if debug.logging:
print(e)
if debug.logging:
print('Getting telemetry_id for you.com with nodriver')
try:
from nodriver import start
except ImportError:
if logging:
print('nodriver not found, random uuid (may fail)')
return str(uuid.uuid4())
CAN_EVAL = False
payload_received = False
payload = None
raise MissingRequirementsError('Add .har file from you.com or install "nodriver" package | pip install -U nodriver')
page = None
try:
browser = await start()
tab = browser.main_tab
async def send_handler(event: cdp.network.RequestWillBeSent):
nonlocal CAN_EVAL, payload_received, payload
if 'telemetry.js' in event.request.url:
CAN_EVAL = True
if "/submit" in event.request.url:
payload = event.request.post_data
payload_received = True
tab.add_handler(cdp.network.RequestWillBeSent, send_handler)
await browser.get("https://you.com")
while not CAN_EVAL:
await tab.sleep(1)
page = await browser.get("https://you.com")
await tab.evaluate('window.GetTelemetryID("public-token-live-507a52ad-7e69-496b-aee0-1c9863c7c819", "https://telemetry.stytch.com/submit");')
while not await page.evaluate('"GetTelemetryID" in this'):
await page.sleep(1)
while not payload_received:
await tab.sleep(.1)
except Exception as e:
print(f"Error occurred: {str(e)}")
async def get_telemetry_id():
return await page.evaluate(
f'this.GetTelemetryID("{public_token}", "{telemetry_url}");',
await_promise=True
)
return [await get_telemetry_id()]
finally:
try:
await tab.close()
except Exception as e:
print(f"Error occurred while closing tab: {str(e)}")
try:
await browser.stop()
except Exception as e:
pass
headers = {
'Accept': '*/*',
'Accept-Language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
'Connection': 'keep-alive',
'Content-type': 'application/x-www-form-urlencoded',
'Origin': 'https://you.com',
'Referer': 'https://you.com/',
'Sec-Fetch-Dest': 'empty',
'Sec-Fetch-Mode': 'cors',
'Sec-Fetch-Site': 'cross-site',
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36',
'sec-ch-ua': '"Google Chrome";v="123", "Not:A-Brand";v="8", "Chromium";v="123"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"macOS"',
}
proxies = {
'http': proxy,
'https': proxy} if proxy else None
response = requests.post('https://telemetry.stytch.com/submit',
headers=headers, data=payload, proxies=proxies)
if '-' in response.text:
print(f'telemetry generated: {response.text}')
return (response.text)
if page is not None:
await page.close()

@ -1,21 +1,41 @@
from __future__ import annotations
import logging
import json
import uvicorn
import secrets
from fastapi import FastAPI, Response, Request
from fastapi.responses import StreamingResponse, RedirectResponse, HTMLResponse, JSONResponse
from fastapi.exceptions import RequestValidationError
from starlette.status import HTTP_422_UNPROCESSABLE_ENTITY
from fastapi.security import APIKeyHeader
from starlette.exceptions import HTTPException
from starlette.status import HTTP_422_UNPROCESSABLE_ENTITY, HTTP_401_UNAUTHORIZED, HTTP_403_FORBIDDEN
from fastapi.encoders import jsonable_encoder
from pydantic import BaseModel
from typing import List, Union, Optional
from typing import Union, Optional
import g4f
import g4f.debug
from g4f.client import AsyncClient
from g4f.typing import Messages
class ChatCompletionsConfig(BaseModel):
from g4f.cookies import read_cookie_files
def create_app():
app = FastAPI()
api = Api(app)
api.register_routes()
api.register_authorization()
api.register_validation_exception_handler()
if not AppConfig.ignore_cookie_files:
read_cookie_files()
return app
def create_app_debug():
g4f.debug.logging = True
return create_app()
class ChatCompletionsForm(BaseModel):
messages: Messages
model: str
provider: Optional[str] = None
@ -25,42 +45,65 @@ class ChatCompletionsConfig(BaseModel):
stop: Union[list[str], str, None] = None
api_key: Optional[str] = None
web_search: Optional[bool] = None
proxy: Optional[str] = None
class AppConfig():
list_ignored_providers: Optional[list[str]] = None
g4f_api_key: Optional[str] = None
ignore_cookie_files: bool = False
@classmethod
def set_list_ignored_providers(cls, ignored: list[str]):
cls.list_ignored_providers = ignored
@classmethod
def set_g4f_api_key(cls, key: str = None):
cls.g4f_api_key = key
@classmethod
def set_ignore_cookie_files(cls, value: bool):
cls.ignore_cookie_files = value
class Api:
def __init__(self, engine: g4f, debug: bool = True, sentry: bool = False,
list_ignored_providers: List[str] = None) -> None:
self.engine = engine
self.debug = debug
self.sentry = sentry
self.list_ignored_providers = list_ignored_providers
if debug:
g4f.debug.logging = True
def __init__(self, app: FastAPI) -> None:
self.app = app
self.client = AsyncClient()
self.app = FastAPI()
self.get_g4f_api_key = APIKeyHeader(name="g4f-api-key")
self.routes()
self.register_validation_exception_handler()
def register_authorization(self):
@self.app.middleware("http")
async def authorization(request: Request, call_next):
if AppConfig.g4f_api_key and request.url.path in ["/v1/chat/completions", "/v1/completions"]:
try:
user_g4f_api_key = await self.get_g4f_api_key(request)
except HTTPException as e:
if e.status_code == 403:
return JSONResponse(
status_code=HTTP_401_UNAUTHORIZED,
content=jsonable_encoder({"detail": "G4F API key required"}),
)
if not secrets.compare_digest(AppConfig.g4f_api_key, user_g4f_api_key):
return JSONResponse(
status_code=HTTP_403_FORBIDDEN,
content=jsonable_encoder({"detail": "Invalid G4F API key"}),
)
return await call_next(request)
def register_validation_exception_handler(self):
@self.app.exception_handler(RequestValidationError)
async def validation_exception_handler(request: Request, exc: RequestValidationError):
details = exc.errors()
modified_details = []
for error in details:
modified_details.append(
{
"loc": error["loc"],
"message": error["msg"],
"type": error["type"],
}
)
modified_details = [{
"loc": error["loc"],
"message": error["msg"],
"type": error["type"],
} for error in details]
return JSONResponse(
status_code=HTTP_422_UNPROCESSABLE_ENTITY,
content=jsonable_encoder({"detail": modified_details}),
)
def routes(self):
def register_routes(self):
@self.app.get("/")
async def read_root():
return RedirectResponse("/v1", 302)
@ -73,10 +116,10 @@ class Api:
@self.app.get("/v1/models")
async def models():
model_list = dict(
(model, g4f.models.ModelUtils.convert[model])
model_list = {
model: g4f.models.ModelUtils.convert[model]
for model in g4f.Model.__all__()
)
}
model_list = [{
'id': model_id,
'object': 'model',
@ -99,7 +142,7 @@ class Api:
return JSONResponse({"error": "The model does not exist."})
@self.app.post("/v1/chat/completions")
async def chat_completions(config: ChatCompletionsConfig = None, request: Request = None, provider: str = None):
async def chat_completions(config: ChatCompletionsForm, request: Request = None, provider: str = None):
try:
config.provider = provider if config.provider is None else config.provider
if config.api_key is None and request is not None:
@ -110,7 +153,7 @@ class Api:
config.api_key = auth_header
response = self.client.chat.completions.create(
**config.dict(exclude_none=True),
ignored=self.list_ignored_providers
ignored=AppConfig.list_ignored_providers
)
except Exception as e:
logging.exception(e)
@ -136,11 +179,7 @@ class Api:
async def completions():
return Response(content=json.dumps({'info': 'Not working yet.'}, indent=4), media_type="application/json")
def run(self, ip, use_colors : bool = False):
split_ip = ip.split(":")
uvicorn.run(app=self.app, host=split_ip[0], port=int(split_ip[1]), use_colors=use_colors)
def format_exception(e: Exception, config: ChatCompletionsConfig) -> str:
def format_exception(e: Exception, config: ChatCompletionsForm) -> str:
last_provider = g4f.get_last_provider(True)
return json.dumps({
"error": {"message": f"{e.__class__.__name__}: {e}"},
@ -148,7 +187,24 @@ def format_exception(e: Exception, config: ChatCompletionsConfig) -> str:
"provider": last_provider.get("name") if last_provider else config.provider
})
def run_api(host: str = '0.0.0.0', port: int = 1337, debug: bool = False, use_colors=True) -> None:
print(f'Starting server... [g4f v-{g4f.version.utils.current_version}]')
app = Api(engine=g4f, debug=debug)
app.run(f"{host}:{port}", use_colors=use_colors)
def run_api(
host: str = '0.0.0.0',
port: int = 1337,
bind: str = None,
debug: bool = False,
workers: int = None,
use_colors: bool = None
) -> None:
print(f'Starting server... [g4f v-{g4f.version.utils.current_version}]' + (" (debug)" if debug else ""))
if use_colors is None:
use_colors = debug
if bind is not None:
host, port = bind.split(":")
uvicorn.run(
f"g4f.api:{'create_app_debug' if debug else 'create_app'}",
host=host, port=int(port),
workers=workers,
use_colors=use_colors,
factory=True,
reload=debug
)

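With the uvicorn factory startup, the API can also be launched programmatically; the new middleware then guards `/v1/chat/completions` and `/v1/completions` with the configured key. A hedged sketch using the `AppConfig` setters added above:

```python
from g4f.api import AppConfig, run_api

# Require a g4f-api-key header on the completion routes and skip cookie loading.
AppConfig.set_g4f_api_key("my-secret-key")
AppConfig.set_ignore_cookie_files(True)

# Equivalent to the CLI's `api --bind 0.0.0.0:1337` mode.
run_api(bind="0.0.0.0:1337")
```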
@ -1,6 +1,4 @@
import g4f
import g4f.api
if __name__ == "__main__":
print(f'Starting server... [g4f v-{g4f.version.utils.current_version}]')
g4f.api.Api(engine = g4f, debug = True).run(ip = "0.0.0.0:10000")
g4f.api.run_api(debug=True)

@ -1,35 +1,51 @@
from __future__ import annotations
import argparse
from enum import Enum
import g4f
from g4f import Provider
from g4f.gui.run import gui_parser, run_gui_args
def run_gui(args):
print("Running GUI...")
def main():
IgnoredProviders = Enum("ignore_providers", {key: key for key in Provider.__all__})
parser = argparse.ArgumentParser(description="Run gpt4free")
subparsers = parser.add_subparsers(dest="mode", help="Mode to run the g4f in.")
api_parser=subparsers.add_parser("api")
api_parser = subparsers.add_parser("api")
api_parser.add_argument("--bind", default="0.0.0.0:1337", help="The bind string.")
api_parser.add_argument("--debug", type=bool, default=False, help="Enable verbose logging")
api_parser.add_argument("--ignored-providers", nargs="+", choices=[provider.name for provider in IgnoredProviders],
default=[], help="List of providers to ignore when processing request.")
api_parser.add_argument("--debug", action="store_true", help="Enable verbose logging.")
api_parser.add_argument("--workers", type=int, default=None, help="Number of workers.")
api_parser.add_argument("--disable-colors", action="store_true", help="Don't use colors.")
api_parser.add_argument("--ignore-cookie-files", action="store_true", help="Don't read .har and cookie files.")
api_parser.add_argument("--g4f-api-key", type=str, default=None, help="Sets an authentication key for your API. (incompatible with --debug and --workers)")
api_parser.add_argument("--ignored-providers", nargs="+", choices=[provider.__name__ for provider in Provider.__providers__ if provider.working],
default=[], help="List of providers to ignore when processing request. (incompatible with --debug and --workers)")
subparsers.add_parser("gui", parents=[gui_parser()], add_help=False)
args = parser.parse_args()
if args.mode == "api":
from g4f.api import Api
controller=Api(engine=g4f, debug=args.debug, list_ignored_providers=args.ignored_providers)
controller.run(args.bind)
run_api_args(args)
elif args.mode == "gui":
run_gui_args(args)
else:
parser.print_help()
exit(1)
def run_api_args(args):
from g4f.api import AppConfig, run_api
AppConfig.set_ignore_cookie_files(
args.ignore_cookie_files
)
AppConfig.set_list_ignored_providers(
args.ignored_providers
)
AppConfig.set_g4f_api_key(
args.g4f_api_key
)
run_api(
bind=args.bind,
debug=args.debug,
workers=args.workers,
use_colors=not args.disable_colors
)
if __name__ == "__main__":
main()

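The new flags map one-to-one onto the `AppConfig` setters invoked in `run_api_args`, so the same setup can be scripted without argparse. A sketch under that assumption (argument values are illustrative):

```python
from g4f.api import AppConfig, run_api

AppConfig.set_ignore_cookie_files(False)        # still read .har/cookie files
AppConfig.set_list_ignored_providers(["Bing"])  # skip these providers
AppConfig.set_g4f_api_key(None)                 # or a key string to require auth

run_api(bind="0.0.0.0:1337", debug=False, workers=None, use_colors=True)
```
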
@ -111,5 +111,6 @@ def get_last_provider(as_dict: bool = False) -> Union[ProviderType, dict[str, st
"name": last.__name__,
"url": last.url,
"model": debug.last_model,
"label": last.label if hasattr(last, "label") else None
}
return last

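Callers of `get_last_provider(as_dict=True)` now receive a `label` entry alongside `name`, `url`, and `model`, falling back to `None` when the provider class defines no `label`. A quick illustration (model choice is arbitrary):

```python
import g4f
from g4f import get_last_provider

g4f.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
)
info = get_last_provider(as_dict=True)
print(info.get("label") or info["name"])  # prefer the display label if set
```
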
@ -2,6 +2,7 @@ from __future__ import annotations
import os
import time
import json
try:
from platformdirs import user_config_dir
@ -25,6 +26,15 @@ from . import debug
# Global variable to store cookies
_cookies: Dict[str, Cookies] = {}
DOMAINS = [
".bing.com",
".meta.ai",
".google.com",
"www.whiterabbitneo.com",
"huggingface.co",
"chat.reka.ai",
]
if has_browser_cookie3 and os.environ.get('DBUS_SESSION_BUS_ADDRESS') == "/dev/null":
_LinuxPasswordManager.get_password = lambda a, b: b"secret"
@ -38,6 +48,7 @@ def get_cookies(domain_name: str = '', raise_requirements_error: bool = True, si
Returns:
Dict[str, str]: A dictionary of cookie names and values.
"""
global _cookies
if domain_name in _cookies:
return _cookies[domain_name]
@ -46,6 +57,7 @@ def get_cookies(domain_name: str = '', raise_requirements_error: bool = True, si
return cookies
def set_cookies(domain_name: str, cookies: Cookies = None) -> None:
global _cookies
if cookies:
_cookies[domain_name] = cookies
elif domain_name in _cookies:
@ -84,6 +96,71 @@ def load_cookies_from_browsers(domain_name: str, raise_requirements_error: bool
print(f"Error reading cookies from {cookie_fn.__name__} for {domain_name}: {e}")
return cookies
def read_cookie_files(dirPath: str = "./har_and_cookies"):
def get_domain(v: dict) -> str:
host = [h["value"] for h in v['request']['headers'] if h["name"].lower() in ("host", ":authority")]
if not host:
return
host = host.pop()
for d in DOMAINS:
if d in host:
return d
global _cookies
harFiles = []
cookieFiles = []
for root, dirs, files in os.walk(dirPath):
for file in files:
if file.endswith(".har"):
harFiles.append(os.path.join(root, file))
elif file.endswith(".json"):
cookieFiles.append(os.path.join(root, file))
_cookies = {}
for path in harFiles:
with open(path, 'rb') as file:
try:
harFile = json.load(file)
except json.JSONDecodeError:
# Error: not a HAR file!
continue
if debug.logging:
print("Read .har file:", path)
new_cookies = {}
for v in harFile['log']['entries']:
domain = get_domain(v)
if domain is None:
continue
v_cookies = {}
for c in v['request']['cookies']:
v_cookies[c['name']] = c['value']
if len(v_cookies) > 0:
_cookies[domain] = v_cookies
new_cookies[domain] = len(v_cookies)
if debug.logging:
for domain, new_values in new_cookies.items():
print(f"Cookies added: {new_values} from {domain}")
for path in cookieFiles:
with open(path, 'rb') as file:
try:
cookieFile = json.load(file)
except json.JSONDecodeError:
# Error: not a json file!
continue
if not isinstance(cookieFile, list):
continue
if debug.logging:
print("Read cookie file:", path)
new_cookies = {}
for c in cookieFile:
if isinstance(c, dict) and "domain" in c:
if c["domain"] not in new_cookies:
new_cookies[c["domain"]] = {}
new_cookies[c["domain"]][c["name"]] = c["value"]
for domain, new_values in new_cookies.items():
if debug.logging:
print(f"Cookies added: {len(new_values)} from {domain}")
_cookies[domain] = new_values
def _g4f(domain_name: str) -> list:
"""
Load cookies from the 'g4f' browser (if exists).

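`read_cookie_files` walks the given directory, parses `.har` and `.json` exports, keeps only cookies whose request host matches an entry in `DOMAINS`, and rebuilds the module-level `_cookies` cache that `get_cookies` serves from. A short usage sketch (the default path matches the renamed `har_and_cookies` folder):

```python
from g4f.cookies import read_cookie_files, get_cookies

read_cookie_files("./har_and_cookies")   # scan HAR and cookie-JSON files
bing_cookies = get_cookies(".bing.com")  # now answered from the refreshed cache
print(f"{len(bing_cookies)} cookies loaded for .bing.com")
```
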
@ -12,9 +12,6 @@ def run_gui(host: str = '0.0.0.0', port: int = 8080, debug: bool = False) -> Non
if import_error is not None:
raise MissingRequirementsError(f'Install "gui" requirements | pip install -U g4f[gui]\n{import_error}')
if debug:
from g4f import debug
debug.logging = True
config = {
'host' : host,
'port' : port,

@ -130,11 +130,7 @@
<textarea id="DeepInfra-api_key" name="DeepInfra[api_key]" class="DeepInfraImage-api_key" placeholder="api_key"></textarea>
</div>
<div class="field box">
<label for="Gemini-api_key" class="label" title="">Gemini:</label>
<textarea id="Gemini-api_key" name="Gemini[api_key]" placeholder="&quot;__Secure-1PSID&quot; cookie"></textarea>
</div>
<div class="field box">
<label for="GeminiPro-api_key" class="label" title="">GeminiPro API:</label>
<label for="GeminiPro-api_key" class="label" title="">Gemini API:</label>
<textarea id="GeminiPro-api_key" name="GeminiPro[api_key]" placeholder="api_key"></textarea>
</div>
<div class="field box">
@ -151,12 +147,16 @@
</div>
<div class="field box">
<label for="OpenaiAccount-api_key" class="label" title="">OpenAI ChatGPT:</label>
<textarea id="OpenaiAccount-api_key" name="OpenaiAccount[api_key]" placeholder="access_key"></textarea>
<textarea id="OpenaiAccount-api_key" name="OpenaiAccount[api_key]" class="OpenaiChat-api_key" placeholder="access_key"></textarea>
</div>
<div class="field box">
<label for="OpenRouter-api_key" class="label" title="">OpenRouter:</label>
<textarea id="OpenRouter-api_key" name="OpenRouter[api_key]" placeholder="api_key"></textarea>
</div>
<div class="field box">
<label for="Replicate-api_key" class="label" title="">Replicate:</label>
<textarea id="Replicate-api_key" name="Replicate[api_key]" class="ReplicateImage-api_key" placeholder="api_key"></textarea>
</div>
</div>
<div class="bottom_buttons">
<button onclick="delete_conversations()">
@ -230,9 +230,10 @@
<select name="provider" id="provider">
<option value="">Provider: Auto</option>
<option value="Bing">Bing</option>
<option value="OpenaiChat">OpenaiChat</option>
<option value="OpenaiChat">OpenAI ChatGPT</option>
<option value="Gemini">Gemini</option>
<option value="Liaobots">Liaobots</option>
<option value="MetaAI">Meta AI</option>
<option value="You">You</option>
<option value="">----</option>
</select>

@ -210,7 +210,9 @@ body {
.conversations .convo .fa-ellipsis-vertical {
position: absolute;
right: 14px;
right: 8px;
width: 14px;
text-align: center;
}
.conversations .convo .choise {
@ -224,6 +226,10 @@ body {
cursor: pointer;
}
.bottom_buttons i {
width: 14px;
}
.convo-title {
color: var(--colour-3);
font-size: 14px;
@ -232,9 +238,17 @@ body {
overflow: hidden;
white-space: nowrap;
margin-right: 10px;
background-color: transparent;
border: 0;
width: 100%;
}
.convo-title:focus {
outline: 1px solid var(--colour-3) !important;
}
.convo-title .datetime {
.convo .datetime {
white-space: nowrap;
font-size: 10px;
}
@ -890,7 +904,7 @@ a:-webkit-any-link {
resize: vertical;
max-height: 200px;
min-height: 80px;
min-height: 100px;
}
/* style for hljs copy */

@ -41,7 +41,9 @@ appStorage = window.localStorage || {
length: 0
}
const markdown = window.markdownit();
const markdown = window.markdownit({
html: true,
});
const markdown_render = (content) => {
return markdown.render(content
.replaceAll(/<!-- generated images start -->|<!-- generated images end -->/gm, "")
@ -302,7 +304,7 @@ async function add_message_chunk(message) {
window.provider_result = message.provider;
content.querySelector('.provider').innerHTML = `
<a href="${message.provider.url}" target="_blank">
${message.provider.name}
${message.provider.label ? message.provider.label : message.provider.name}
</a>
${message.provider.model ? ' with ' + message.provider.model : ''}
`
@ -312,6 +314,8 @@ async function add_message_chunk(message) {
window.error = message.error
console.error(message.error);
content_inner.innerHTML += `<p><strong>An error occurred:</strong> ${message.error}</p>`;
} else if (message.type == "preview") {
content_inner.innerHTML = markdown_render(message.preview);
} else if (message.type == "content") {
window.text += message.content;
html = markdown_render(window.text);
@ -478,12 +482,35 @@ const clear_conversation = async () => {
}
};
async function set_conversation_title(conversation_id, title) {
const conversation = await get_conversation(conversation_id);
conversation.new_title = title;
appStorage.setItem(
`conversation:${conversation.id}`,
JSON.stringify(conversation)
);
}
const show_option = async (conversation_id) => {
const conv = document.getElementById(`conv-${conversation_id}`);
const choi = document.getElementById(`cho-${conversation_id}`);
conv.style.display = "none";
choi.style.display = "block";
const el = document.getElementById(`convo-${conversation_id}`);
const trash_el = el.querySelector(".fa-trash");
const title_el = el.querySelector("span.convo-title");
if (title_el) {
const left_el = el.querySelector(".left");
const input_el = document.createElement("input");
input_el.value = title_el.innerText;
input_el.classList.add("convo-title");
input_el.onfocus = () => trash_el.style.display = "none";
input_el.onchange = () => set_conversation_title(conversation_id, input_el.value);
left_el.removeChild(title_el);
left_el.appendChild(input_el);
}
};
const hide_option = async (conversation_id) => {
@ -492,6 +519,18 @@ const hide_option = async (conversation_id) => {
conv.style.display = "block";
choi.style.display = "none";
const el = document.getElementById(`convo-${conversation_id}`);
el.querySelector(".fa-trash").style.display = "";
const input_el = el.querySelector("input.convo-title");
if (input_el) {
const left_el = el.querySelector(".left");
const span_el = document.createElement("span");
span_el.innerText = input_el.value;
span_el.classList.add("convo-title");
left_el.removeChild(input_el);
left_el.appendChild(span_el);
}
};
const delete_conversation = async (conversation_id) => {
@ -545,7 +584,8 @@ const load_conversation = async (conversation_id, scroll=true) => {
last_model = item.provider?.model;
let next_i = parseInt(i) + 1;
let next_provider = item.provider ? item.provider : (messages.length > next_i ? messages[next_i].provider : null);
let provider_link = item.provider?.name ? `<a href="${item.provider.url}" target="_blank">${item.provider.name}</a>` : "";
let provider_label = item.provider?.label ? item.provider.label : item.provider?.name;
let provider_link = item.provider?.name ? `<a href="${item.provider.url}" target="_blank">${provider_label}</a>` : "";
let provider = provider_link ? `
<div class="provider">
${provider_link}
@ -704,18 +744,15 @@ const load_conversations = async () => {
let html = "";
conversations.forEach((conversation) => {
if (conversation?.items.length > 0) {
let old_value = conversation.title;
if (conversation?.items.length > 0 && !conversation.new_title) {
let new_value = (conversation.items[0]["content"]).trim();
let new_length = new_value.indexOf("\n");
new_length = new_length > 200 || new_length < 0 ? 200 : new_length;
conversation.title = new_value.substring(0, new_length);
if (conversation.title != old_value) {
appStorage.setItem(
`conversation:${conversation.id}`,
JSON.stringify(conversation)
);
}
conversation.new_title = new_value.substring(0, new_length);
appStorage.setItem(
`conversation:${conversation.id}`,
JSON.stringify(conversation)
);
}
let updated = "";
if (conversation.updated) {
@ -725,9 +762,10 @@ const load_conversations = async () => {
}
html += `
<div class="convo" id="convo-${conversation.id}">
<div class="left" onclick="set_conversation('${conversation.id}')">
<div class="left">
<i class="fa-regular fa-comments"></i>
<span class="convo-title"><span class="datetime">${updated}</span> ${conversation.title}</span>
<span class="datetime" onclick="set_conversation('${conversation.id}')">${updated}</span>
<span class="convo-title" onclick="set_conversation('${conversation.id}')">${conversation.new_title}</span>
</div>
<i onclick="show_option('${conversation.id}')" class="fa-solid fa-ellipsis-vertical" id="conv-${conversation.id}"></i>
<div id="cho-${conversation.id}" class="choise" style="display:none;">
@ -1208,6 +1246,8 @@ async function load_provider_models(providerIndex=null) {
}
const provider = providerSelect.options[providerIndex].value;
if (!provider) {
modelProvider.classList.add("hidden");
modelSelect.classList.remove("hidden");
return;
}
const models = await api('models', provider);

@ -5,4 +5,5 @@ def gui_parser():
parser.add_argument("-host", type=str, default="0.0.0.0", help="hostname")
parser.add_argument("-port", type=int, default=8080, help="port")
parser.add_argument("-debug", action="store_true", help="debug mode")
parser.add_argument("--ignore-cookie-files", action="store_true", help="Don't read .har and cookie files.")
return parser

@ -1,6 +1,12 @@
from .gui_parser import gui_parser
from ..cookies import read_cookie_files
import g4f.debug
def run_gui_args(args):
if args.debug:
g4f.debug.logging = True
if not args.ignore_cookie_files:
read_cookie_files()
from g4f.gui import run_gui
host = args.host
port = args.port

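Because `run_gui_args` now also toggles debug logging and cookie loading, launching the GUI programmatically only needs the parser and one call. A minimal sketch (port value illustrative):

```python
from g4f.gui.run import gui_parser, run_gui_args

args = gui_parser().parse_args(["-port", "8080", "--ignore-cookie-files"])
run_gui_args(args)  # sets g4f.debug.logging only when -debug is passed
```
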
@ -7,6 +7,7 @@ from typing import Iterator
from g4f import version, models
from g4f import get_last_provider, ChatCompletion
from g4f.errors import VersionNotFoundError
from g4f.image import ImagePreview
from g4f.Provider import ProviderType, __providers__, __map__
from g4f.providers.base_provider import ProviderModelMixin, FinishReason
from g4f.providers.conversation import BaseConversation
@ -15,7 +16,8 @@ conversations: dict[dict[str, BaseConversation]] = {}
class Api():
def get_models(self) -> list[str]:
@staticmethod
def get_models() -> list[str]:
"""
Return a list of all models.
@ -26,7 +28,8 @@ class Api():
"""
return models._all_models
def get_provider_models(self, provider: str) -> list[dict]:
@staticmethod
def get_provider_models(provider: str) -> list[dict]:
if provider in __map__:
provider: ProviderType = __map__[provider]
if issubclass(provider, ProviderModelMixin):
@ -39,7 +42,40 @@ class Api():
else:
return []
def get_providers(self) -> list[str]:
@staticmethod
def get_image_models() -> list[dict]:
image_models = []
index = []
for provider in __providers__:
if hasattr(provider, "image_models"):
if hasattr(provider, "get_models"):
provider.get_models()
parent = provider
if hasattr(provider, "parent"):
parent = __map__[provider.parent]
if parent.__name__ not in index:
for model in provider.image_models:
image_models.append({
"provider": parent.__name__,
"url": parent.url,
"label": parent.label if hasattr(parent, "label") else None,
"image_model": model,
"vision_model": parent.default_vision_model if hasattr(parent, "default_vision_model") else None
})
index.append(parent.__name__)
elif hasattr(provider, "default_vision_model") and provider.__name__ not in index:
image_models.append({
"provider": provider.__name__,
"url": provider.url,
"label": provider.label if hasattr(provider, "label") else None,
"image_model": None,
"vision_model": provider.default_vision_model
})
index.append(provider.__name__)
return image_models
@staticmethod
def get_providers() -> list[str]:
"""
Return a list of all working providers.
"""
@ -57,7 +93,8 @@ class Api():
if provider.working
}
def get_version(self):
@staticmethod
def get_version():
"""
Returns the current and latest version of the application.
@ -99,7 +136,7 @@ class Api():
if api_key is not None:
kwargs["api_key"] = api_key
if json_data.get('web_search'):
if provider == "Bing":
if provider in ("Bing", "HuggingChat"):
kwargs['web_search'] = True
else:
from .internet import get_search_message
@ -146,6 +183,8 @@ class Api():
elif isinstance(chunk, Exception):
logging.exception(chunk)
yield self._format_json("message", get_error_message(chunk))
elif isinstance(chunk, ImagePreview):
yield self._format_json("preview", chunk.to_string())
elif not isinstance(chunk, FinishReason):
yield self._format_json("content", str(chunk))
except Exception as e:

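An `ImagePreview` chunk is serialized as a `"preview"` message, which the new frontend branch renders in place, while the final `ImageResponse` still flows through the regular content path (`ImagePreview.__str__` returns an empty string, so the preview never leaks into the markdown output). A hypothetical provider generator showing the two shapes, assuming an `(images, alt)` constructor:

```python
from g4f.image import ImagePreview, ImageResponse

async def create_async_generator(model, messages, **kwargs):
    # hypothetical provider body: push a low-res preview first ...
    yield ImagePreview("https://example.com/preview.jpg", "a cat")
    # ... then the final image, which is what ends up in the chat markdown
    yield ImageResponse("https://example.com/full.jpg", "a cat")
```
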
@ -31,6 +31,10 @@ class Backend_Api(Api):
'function': self.get_provider_models,
'methods': ['GET']
},
'/backend-api/v2/image_models': {
'function': self.get_image_models,
'methods': ['GET']
},
'/backend-api/v2/providers': {
'function': self.get_providers,
'methods': ['GET']

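The added route simply exposes `Api.get_image_models`; one way to inspect it against a locally running GUI backend (address assumed):

```python
import requests

# assumes the web UI backend is listening on localhost:8080
models = requests.get("http://localhost:8080/backend-api/v2/image_models").json()
for entry in models:
    print(entry["provider"], entry["image_model"], entry["vision_model"])
```
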
@ -86,7 +86,7 @@ def is_data_uri_an_image(data_uri: str) -> bool:
if image_format not in ALLOWED_EXTENSIONS and image_format != "svg+xml":
raise ValueError("Invalid image format (from mime file type).")
def is_accepted_format(binary_data: bytes) -> bool:
def is_accepted_format(binary_data: bytes) -> str:
"""
Checks if the given binary data represents an image with an accepted format.
@ -210,7 +210,9 @@ def format_images_markdown(images: Union[str, list], alt: str, preview: Union[st
if not isinstance(preview, list):
preview = [preview.replace('{image}', image) if preview else image for image in images]
result = "\n".join(
f"[![#{idx+1} {alt}]({preview[idx]})]({image})" for idx, image in enumerate(images)
#f"[![#{idx+1} {alt}]({preview[idx]})]({image})"
f'[<img src="{preview[idx]}" width="200" alt="#{idx+1} {alt}">]({image})'
for idx, image in enumerate(images)
)
start_flag = "<!-- generated images start -->\n"
end_flag = "<!-- generated images end -->\n"
@ -239,6 +241,13 @@ def to_bytes(image: ImageType) -> bytes:
else:
return image.read()
def to_data_uri(image: ImageType) -> str:
if not isinstance(image, str):
data = to_bytes(image)
data_base64 = base64.b64encode(data).decode()
return f"data:{is_accepted_format(data)};base64,{data_base64}"
return image
class ImageResponse:
def __init__(
self,
@ -259,6 +268,13 @@ class ImageResponse:
def get_list(self) -> list[str]:
return [self.images] if isinstance(self.images, str) else self.images
class ImagePreview(ImageResponse):
def __str__(self):
return ""
def to_string(self):
return super().__str__()
class ImageRequest:
def __init__(
self,

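`to_data_uri` complements `to_bytes`: anything that is not already a string is converted to bytes, sniffed via `is_accepted_format` (whose corrected annotation reflects that it returns a MIME type string), and base64-encoded. For example (file name hypothetical):

```python
from g4f.image import to_data_uri

with open("photo.jpeg", "rb") as image_file:
    uri = to_data_uri(image_file.read())
print(uri[:30])  # e.g. "data:image/jpeg;base64,/9j/..."
```
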
@ -25,9 +25,11 @@ from .Provider import (
Llama,
OpenaiChat,
PerplexityLabs,
Replicate,
Pi,
Vercel,
You,
Reka
)
@ -137,19 +139,19 @@ llama2_13b = Model(
llama2_70b = Model(
name = "meta-llama/Llama-2-70b-chat-hf",
base_provider = "meta",
best_provider = RetryProvider([Llama, DeepInfra, HuggingChat])
best_provider = RetryProvider([Llama, DeepInfra])
)
llama3_8b_instruct = Model(
name = "meta-llama/Meta-Llama-3-8b-instruct",
name = "meta-llama/Meta-Llama-3-8B-Instruct",
base_provider = "meta",
best_provider = RetryProvider([Llama])
best_provider = RetryProvider([Llama, DeepInfra, Replicate])
)
llama3_70b_instruct = Model(
name = "meta-llama/Meta-Llama-3-70b-instruct",
name = "meta-llama/Meta-Llama-3-70B-Instruct",
base_provider = "meta",
best_provider = RetryProvider([Llama, HuggingChat])
best_provider = RetryProvider([Llama, DeepInfra])
)
codellama_34b_instruct = Model(
@ -168,7 +170,7 @@ codellama_70b_instruct = Model(
mixtral_8x7b = Model(
name = "mistralai/Mixtral-8x7B-Instruct-v0.1",
base_provider = "huggingface",
best_provider = RetryProvider([DeepInfra, HuggingChat, HuggingFace, PerplexityLabs])
best_provider = RetryProvider([DeepInfra, HuggingFace, PerplexityLabs])
)
mistral_7b = Model(
@ -186,7 +188,7 @@ mistral_7b_v02 = Model(
mixtral_8x22b = Model(
name = "HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1",
base_provider = "huggingface",
best_provider = RetryProvider([HuggingChat, DeepInfra])
best_provider = DeepInfra
)
# Misc models
@ -211,7 +213,7 @@ airoboros_70b = Model(
openchat_35 = Model(
name = "openchat/openchat_3.5",
base_provider = "huggingface",
best_provider = RetryProvider([DeepInfra, HuggingChat])
best_provider = DeepInfra
)
# Bard
@ -305,6 +307,12 @@ blackbox = Model(
best_provider = Blackbox
)
reka_core = Model(
name = 'reka-core',
base_provider = 'Reka AI',
best_provider = Reka
)
class ModelUtils:
"""
Utility class for mapping string identifiers to Model instances.
@ -332,8 +340,12 @@ class ModelUtils:
'llama2-7b' : llama2_7b,
'llama2-13b': llama2_13b,
'llama2-70b': llama2_70b,
'llama3-8b' : llama3_8b_instruct, # alias
'llama3-70b': llama3_70b_instruct, # alias
'llama3-8b-instruct' : llama3_8b_instruct,
'llama3-70b-instruct': llama3_70b_instruct,
'codellama-34b-instruct': codellama_34b_instruct,
'codellama-70b-instruct': codellama_70b_instruct,
@ -358,6 +370,11 @@ class ModelUtils:
'claude-3-opus': claude_3_opus,
'claude-3-sonnet': claude_3_sonnet,
# reka core
'reka-core': reka_core,
'reka': reka_core,
'Reka Core': reka_core,
# other
'blackbox': blackbox,
'command-r+': command_r_plus,

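With these registry changes, the Llama 3 models resolve through both the full `-instruct` keys and the short aliases, and Reka Core answers to three spellings. Illustrative usage (provider availability varies by session):

```python
import g4f

response = g4f.ChatCompletion.create(
    model="llama3-70b",  # alias for llama3_70b_instruct
    messages=[{"role": "user", "content": "Hello"}],
)
print(response)
```
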
@ -271,13 +271,13 @@ class AsyncGeneratorProvider(AsyncProvider):
raise NotImplementedError()
class ProviderModelMixin:
default_model: str
default_model: str = None
models: list[str] = []
model_aliases: dict[str, str] = {}
@classmethod
def get_models(cls) -> list[str]:
if not cls.models:
if not cls.models and cls.default_model is not None:
return [cls.default_model]
return cls.models

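The added guard means `get_models()` falls back to `[cls.default_model]` only when a default is actually defined; a provider without one now just returns its (possibly empty) `models` list instead of raising on the missing attribute. A minimal sketch with a hypothetical provider:

```python
from g4f.providers.base_provider import ProviderModelMixin

class ExampleProvider(ProviderModelMixin):  # hypothetical provider
    default_model = "example-model"

print(ExampleProvider.get_models())  # ["example-model"]
```
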
@ -3,7 +3,7 @@ from __future__ import annotations
import random
import string
from ..typing import Messages
from ..typing import Messages, Cookies
def format_prompt(messages: Messages, add_special_tokens=False) -> str:
"""
@ -56,4 +56,7 @@ def filter_none(**kwargs) -> dict:
key: value
for key, value in kwargs.items()
if value is not None
}
}
def format_cookies(cookies: Cookies) -> str:
return "; ".join([f"{k}={v}" for k, v in cookies.items()])

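The new `format_cookies` helper joins a cookies dict into a `Cookie` header value. For instance:

```python
from g4f.providers.helper import format_cookies  # module path assumed

print(format_cookies({"session": "abc", "theme": "dark"}))
# session=abc; theme=dark
```
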
@ -102,4 +102,11 @@ ProviderType = Union[Type[BaseProvider], BaseRetryProvider]
class FinishReason():
def __init__(self, reason: str):
self.reason = reason
self.reason = reason
class Streaming():
def __init__(self, data: str) -> None:
self.data = data
def __str__(self) -> str:
return self.data

@ -24,6 +24,7 @@ class StreamSession(ClientSession):
headers: dict = {},
timeout: int = None,
connector: BaseConnector = None,
proxy: str = None,
proxies: dict = {},
impersonate = None,
**kwargs
@ -38,11 +39,13 @@ class StreamSession(ClientSession):
connect, timeout = timeout
if timeout is not None:
timeout = ClientTimeout(timeout, connect)
if proxy is None:
proxy = proxies.get("all", proxies.get("https"))
super().__init__(
**kwargs,
timeout=timeout,
response_class=StreamResponse,
connector=get_connector(connector, proxies.get("all", proxies.get("https"))),
connector=get_connector(connector, proxy),
headers=headers
)

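The aiohttp-based `StreamSession` now accepts a single `proxy` argument directly, falling back to the old `proxies` mapping (`all`, then `https`) when it is omitted. A sketch (module path and proxy URL assumed):

```python
import asyncio
from g4f.requests.aiohttp import StreamSession  # module path assumed

async def main():
    async with StreamSession(proxy="http://127.0.0.1:8118") as session:
        async with session.get("https://example.com") as response:
            print(response.status)

asyncio.run(main())
```
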
@ -79,10 +79,10 @@ class StreamSession(AsyncSession):
return StreamResponse(super().request(method, url, stream=True, **kwargs))
def ws_connect(self, url, *args, **kwargs):
return WebSocket(self, url)
return WebSocket(self, url, **kwargs)
def _ws_connect(self, url):
return super().ws_connect(url)
def _ws_connect(self, url, **kwargs):
return super().ws_connect(url, **kwargs)
# Defining HTTP methods as partial methods of the request method.
head = partialmethod(request, "HEAD")
@ -102,20 +102,22 @@ else:
raise RuntimeError("CurlMimi in curl_cffi is missing | pip install -U g4f[curl_cffi]")
class WebSocket():
def __init__(self, session, url) -> None:
def __init__(self, session, url, **kwargs) -> None:
if not has_curl_ws:
raise RuntimeError("CurlWsFlag in curl_cffi is missing | pip install -U g4f[curl_cffi]")
self.session: StreamSession = session
self.url: str = url
del kwargs["autoping"]
self.options: dict = kwargs
async def __aenter__(self):
self.inner = await self.session._ws_connect(self.url)
self.inner = await self.session._ws_connect(self.url, **self.options)
return self
async def __aexit__(self, *args):
self.inner.aclose()
await self.inner.aclose()
async def receive_str(self) -> str:
async def receive_str(self, **kwargs) -> str:
data, _ = await self.inner.arecv()
return data.decode(errors="ignore")

@ -1,21 +1,27 @@
try:
import brotli
has_brotli = True
except ImportError:
has_brotli = False
DEFAULT_HEADERS = {
"sec-ch-ua": '"Chromium";v="122", "Not(A:Brand";v="24", "Google Chrome";v="122"',
"sec-ch-ua-mobile": "?0",
"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36",
"ec-ch-ua-arch": '"x86"',
"sec-ch-ua-full-version": '"122.0.6261.69"',
"accept": "*/*",
"sec-ch-ua-platform-version:": '"6.5.0"',
"sec-ch-ua-full-version-list": '"Chromium";v="122.0.6261.69", "Not(A:Brand";v="24.0.0.0", "Google Chrome";v="122.0.6261.69"',
"sec-ch-ua-bitness": '"64"',
"sec-ch-ua-model": '""',
"sec-ch-ua-platform": '"Windows"',
"sec-fetch-site": "same-site",
"sec-fetch-mode": "cors",
"sec-fetch-dest": "empty",
"referer": "",
"accept-encoding": "gzip, deflate, br",
"accept-encoding": "gzip, deflate" + (", br" if has_brotli else ""),
"accept-language": "en-US",
"referer": "",
"sec-ch-ua": "\"Google Chrome\";v=\"123\", \"Not:A-Brand\";v=\"8\", \"Chromium\";v=\"123\"",
"sec-ch-ua-arch": "\"x86\"",
"sec-ch-ua-bitness": "\"64\"",
"sec-ch-ua-full-version": "\"123.0.6312.122\"",
"sec-ch-ua-full-version-list": "\"Google Chrome\";v=\"123.0.6312.122\", \"Not:A-Brand\";v=\"8.0.0.0\", \"Chromium\";v=\"123.0.6312.122\"",
"sec-ch-ua-mobile": "?0",
"sec-ch-ua-model": "\"\"",
"sec-ch-ua-platform": "\"Windows\"",
"sec-ch-ua-platform-version": '"15.0.0"',
"sec-fetch-dest": "empty",
"sec-fetch-mode": "cors",
"sec-fetch-site": "same-origin",
"user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36",
}
WEBVIEW_HAEDERS = {
"Accept": "*/*",

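The rewritten defaults advertise `br` only when the optional `brotli` package imports cleanly, so the client never requests an encoding it cannot decode. The guard pattern in isolation:

```python
try:
    import brotli  # optional dependency
    has_brotli = True
except ImportError:
    has_brotli = False

accept_encoding = "gzip, deflate" + (", br" if has_brotli else "")
print(accept_encoding)  # "gzip, deflate, br" when brotli is available
```
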
@ -65,7 +65,7 @@ def get_browser(
WebDriver: An instance of WebDriver configured with the specified options.
"""
if not has_requirements:
raise MissingRequirementsError('Webdriver packages are not installed | pip install -U g4f[webdriver]')
raise MissingRequirementsError('Install Webdriver packages | pip install -U g4f[webdriver]')
browser = find_chrome_executable()
if browser is None:
raise MissingRequirementsError('Install "Google Chrome" browser')
