<a href="https://gpt4all.io">GPT4All Website and Models</a> • <a href="https://docs.gpt4all.io">GPT4All Documentation</a> • <a href="https://discord.gg/mGZE39AS3e">Discord</a>
</p>
<p align="center">
<a href="https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html">🦜️🔗 Official Langchain Backend</a>
</p>
Run on an M1 macOS Device (not sped up!)
## GPT4All: An ecosystem of open-source on-edge large language models.
> [!IMPORTANT]
> GPT4All v2.5.0 and newer only supports models in GGUF format (.gguf). Models used with a previous version of GPT4All (.bin extension) will no longer work.
GPT4All is an ecosystem for running **powerful** and **customized** large language models locally on consumer-grade CPUs and any GPU. Note that your CPU needs to support [AVX or AVX2 instructions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions).
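To check whether your CPU advertises the required AVX/AVX2 flags, you can inspect `/proc/cpuinfo`. This is a minimal sketch that assumes a Linux system; on macOS you would use `sysctl machdep.cpu` instead, and the `cpu_supports_avx` helper name is illustrative, not part of GPT4All:

```python
def cpu_supports_avx() -> dict:
    """Return which AVX feature flags the CPU reports (Linux-only sketch)."""
    flags = set()
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                # The "flags" line lists all CPU feature flags, space-separated.
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
                    break
    except OSError:
        pass  # not Linux, or /proc is unavailable
    return {"avx": "avx" in flags, "avx2": "avx2" in flags}

if __name__ == "__main__":
    print(cpu_supports_avx())
```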
Learn more in the [documentation](https://docs.gpt4all.io).