@@ -11,7 +11,7 @@
     "\n",
     "LangChain has [integrations](https://integrations.langchain.com/) with many open-source LLMs that can be run locally.\n",
     "\n",
-    "See [here](/docs/tutorials/local_rag) for setup instructions for these LLMs. \n",
+    "See [here](/docs/how_to/local_llms) for setup instructions for these LLMs. \n",
     "\n",
     "For example, here we show how to run `GPT4All` or `LLaMA2` locally (e.g., on your laptop) using local embeddings and a local LLM.\n",
     "\n",
@@ -145,7 +145,7 @@
     " \n",
     "And / or, you can download a GGUF converted model (e.g., [here](https://huggingface.co/TheBloke)).\n",
     "\n",
-    "Finally, as noted in detail [here](/docs/tutorials/local_rag) install `llama-cpp-python`"
+    "Finally, as noted in detail [here](/docs/how_to/local_llms) install `llama-cpp-python`"
    ]
   },
   {