mirror of https://github.com/xtekky/gpt4free
Update docs / readme, Improve Gemini auth
### G4F - Installation Guide

Follow these steps to install G4F from the source code:

1. **Clone the Repository:**

   ```bash
   git clone https://github.com/xtekky/gpt4free.git
   ```

2. **Navigate to the Project Directory:**

   ```bash
   cd gpt4free
   ```

3. **(Optional) Create a Python Virtual Environment:**

   It's recommended to isolate your project dependencies. You can follow the [Python official documentation](https://docs.python.org/3/tutorial/venv.html) for virtual environments.

   ```bash
   python3 -m venv venv
   ```

4. **Activate the Virtual Environment:**

   - On Windows:

     ```bash
     .\venv\Scripts\activate
     ```

   - On macOS and Linux:

     ```bash
     source venv/bin/activate
     ```

5. **Install Minimum Requirements:**

   Install the minimum required packages:

   ```bash
   pip install -r requirements-min.txt
   ```

6. **Or Install All Packages from `requirements.txt`:**

   If you prefer, you can install all packages listed in `requirements.txt`:

   ```bash
   pip install -r requirements.txt
   ```

7. **Start Using the Repository:**

   You can now create Python scripts and use the G4F functionality. Here's a basic example:

   Create a `test.py` file in the root folder and start using the repository:

   ```python
   import g4f

   # Minimal sketch: request a chat completion (model name is illustrative)
   print(g4f.ChatCompletion.create(model="gpt-3.5-turbo",
                                   messages=[{"role": "user", "content": "Hello"}]))
   ```

[Return to Home](/)
### Interference openai-proxy API
#### Run interference API from PyPi package

```python
from g4f.api import run_api

run_api()
```

#### Run interference API from repo

Run server:

```sh
g4f api
```

or

```sh
python -m g4f.api.run
```

```python
from openai import OpenAI

client = OpenAI(
    api_key="",
    # Change the API base URL to the local interference API
    base_url="http://localhost:1337/v1"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "write a poem about a tree"}],
    stream=True,
)

if isinstance(response, dict):
    # Not streaming
    print(response.choices[0].message.content)
else:
    # Streaming
    for token in response:
        content = token.choices[0].delta.content
        if content is not None:
            print(content, end="", flush=True)
```

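Under the hood, the streaming branch above consumes server-sent events. As a minimal sketch of how such `data:` chunks decode, here is the same logic run against canned lines (hypothetical payloads, no server involved; the `openai` client normally does this for you):

```python
import json

# Canned SSE lines in the shape a chat-completions stream is expected to
# emit; a live response would arrive incrementally over HTTP.
lines = [
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": ", world"}}]}',
    "data: [DONE]",
]

text = ""
for line in lines:
    payload = line[len("data: "):]
    if payload == "[DONE]":  # sentinel marking the end of the stream
        break
    delta = json.loads(payload)["choices"][0]["delta"]
    text += delta.get("content", "")

print(text)  # → Hello, world
```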
#### API usage (POST)

Send a POST request to `/v1/chat/completions` with a JSON body that includes the `model` field. This example uses Python with the `requests` library:

```python
import requests

url = "http://localhost:1337/v1/chat/completions"
body = {
    "model": "gpt-3.5-turbo-16k",
    "stream": False,
    "messages": [
        {"role": "assistant", "content": "What can you do?"}
    ]
}
json_response = requests.post(url, json=body).json().get('choices', [])

for choice in json_response:
    print(choice.get('message', {}).get('content', ''))
```

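The extraction loop above assumes the standard OpenAI chat-completion response schema. As a quick sanity check, the same parsing can be exercised against a canned, hypothetical payload without running a server:

```python
import json

# Hypothetical response body shaped like an OpenAI /v1/chat/completions
# reply; canned here for illustration only.
raw = '{"choices": [{"message": {"role": "assistant", "content": "I can chat."}}]}'

choices = json.loads(raw).get("choices", [])
contents = [c.get("message", {}).get("content", "") for c in choices]
print(contents[0])  # → I can chat.
```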
[Return to Home](/)