Update backend.py, index.html, requirements.txt (#1180)

* Update backend.py

Use the model received from the user's interactive model selection in the web interface, instead of a hard-coded one.

* Update index.html

Added Llama2 as a provider option, along with its model selections: llama2-70b, llama2-13b, llama2-7b.

* Update requirements.txt

Added asgiref to enable async views in the Flask API, fixing:
"RuntimeError: Install Flask with the 'async' extra in order to use async views"
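Flask executes an async view by driving the coroutine to completion from synchronous code, which is the job asgiref's `async_to_sync` helper performs; that is why the extra dependency is needed. A minimal standard-library sketch of that wrapping idea (the names `async_to_sync_sketch`, `fetch_greeting`, and `sync_view` are illustrative, not part of Flask's or asgiref's API):

```python
import asyncio

def async_to_sync_sketch(coro_func):
    # Rough stand-in for what asgiref's async_to_sync does for Flask:
    # let synchronous calling code run a coroutine to completion.
    def wrapper(*args, **kwargs):
        # Run an event loop, await the coroutine, return its result
        # to the synchronous caller.
        return asyncio.run(coro_func(*args, **kwargs))
    return wrapper

async def fetch_greeting(name):
    # Pretend async work, e.g. awaiting a model response.
    await asyncio.sleep(0)
    return f"hello, {name}"

sync_view = async_to_sync_sketch(fetch_greeting)
print(sync_view("world"))  # hello, world
```

The real helper is more involved (it reuses running loops and thread-sensitive executors), but the contract is the same: a coroutine function in, a plain callable out.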
hdsz25 committed 7 months ago (via GitHub)
parent 1dc8e6d528
commit b8a3db526c

@@ -130,9 +130,9 @@
<option value="google-bard">google-bard</option>
<option value="google-palm">google-palm</option>
<option value="bard">bard</option>
-<option value="falcon-40b">falcon-40b</option>
-<option value="falcon-7b">falcon-7b</option>
-<option value="llama-13b">llama-13b</option>
+<option value="llama2-7b">llama2-7b</option>
+<option value="llama2-13b">llama2-13b</option>
+<option value="llama2-70b">llama2-70b</option>
<option value="command-nightly">command-nightly</option>
<option value="gpt-neox-20b">gpt-neox-20b</option>
<option value="santacoder">santacoder</option>
@@ -188,7 +188,7 @@
<option value="g4f.Provider.Aibn">Aibn</option>
<option value="g4f.Provider.Bing">Bing</option>
<option value="g4f.Provider.You">You</option>
-<option value="g4f.Provider.H2o">H2o</option>
+<option value="g4f.Provider.Llama2">Llama2</option>
<option value="g4f.Provider.Aivvm">Aivvm</option>
</select>
</div>
@@ -203,4 +203,4 @@
</script>
</body>
-</html>
+</html>

@@ -56,7 +56,7 @@ class Backend_Api:
def stream():
yield from g4f.ChatCompletion.create(
-model=g4f.models.gpt_35_long,
+model=model,
provider=get_provider(provider),
messages=messages,
stream=True,
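The hunk above swaps the hard-coded `g4f.models.gpt_35_long` for the `model` value taken from the request, so the choice made in the web UI is what the backend actually streams from. A self-contained sketch of that flow, with the `g4f.ChatCompletion.create` call replaced by a hypothetical `fake_completion` generator (`model`, `provider`, and `messages` mirror the diff; everything else is illustrative):

```python
def fake_completion(model, provider, messages, stream=True):
    # Hypothetical stand-in for g4f.ChatCompletion.create:
    # echo which provider/model would handle the chat, chunk by chunk.
    for chunk in (f"[{provider}/{model}] ", "ok"):
        yield chunk

def stream(request_json):
    # Pull the user's choices out of the request body -- the values
    # the index.html <select> elements post to the backend.
    model = request_json["model"]
    provider = request_json["provider"]
    messages = request_json["messages"]
    yield from fake_completion(
        model=model,          # user-selected, no longer hard-coded
        provider=provider,
        messages=messages,
        stream=True,
    )

body = {"model": "llama2-70b", "provider": "Llama2", "messages": []}
print("".join(stream(body)))  # [Llama2/llama2-70b] ok
```

Because `stream()` is a generator, Flask can hand it straight to a streaming response and forward chunks to the browser as they arrive.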

@@ -18,4 +18,5 @@ loguru
tiktoken
pillow
platformdirs
-numpy
+numpy
+asgiref
