Merge branch 'master' into erick/docs-baseurl-for-ganalytics

pull/21455/head
commit 1ca983e69a by Erick Friis, committed via GitHub

@@ -48,7 +48,7 @@
- [by Rabbitmetrics](https://youtu.be/aywZrzNaKjs)
- [by Ivan Reznikov](https://medium.com/@ivanreznikov/langchain-101-course-updated-668f7b41d6cb)
-## [Documentation: Use cases](/docs/use_cases)
+## [Documentation: Use cases](/docs/how_to#use-cases)
---------------------

@@ -185,7 +185,7 @@ Tool calling allows a model to respond to a given prompt by generating output th
matches a user-defined schema. While the name implies that the model is performing
some action, this is actually not the case! The model is coming up with the
arguments to a tool, and actually running the tool (or not) is up to the user -
-for example, if you want to [extract output matching some schema](/docs/tutorial/extraction/)
+for example, if you want to [extract output matching some schema](/docs/tutorials/extraction)
from unstructured text, you could give the model an "extraction" tool that takes
parameters matching the desired schema, then treat the generated output as your final
result.
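For illustration, a minimal sketch of this extraction pattern (the `Person` schema, model name, and `langchain-openai` setup are assumptions, not part of the diff):

```python
# Sketch: "extraction" via tool calling. Assumes langchain-openai is installed
# and OPENAI_API_KEY is set; the schema and model name are illustrative.
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI

class Person(BaseModel):
    """Information about a person mentioned in the text."""
    name: str = Field(description="The person's name")
    age: int = Field(description="The person's age in years")

llm_with_tools = ChatOpenAI(model="gpt-3.5-turbo-0125").bind_tools([Person])
ai_msg = llm_with_tools.invoke("Alice is 29 and lives in Paris.")

# Nothing is executed: we simply read the generated arguments as the result.
print(ai_msg.tool_calls)
# e.g. [{'name': 'Person', 'args': {'name': 'Alice', 'age': 29}, 'id': '...'}]
```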

@@ -16,15 +16,15 @@ LangChain's documentation aspires to follow the [Diataxis framework](https://dia
Under this framework, all documentation falls under one of four categories:
- **Tutorials**: Lessons that take the reader by the hand through a series of conceptual steps to complete a project.
-- An example of this is our [LCEL streaming guide](/docs/expression_language/streaming).
-- Our guides on [custom components](/docs/modules/model_io/chat/custom_chat_model) is another one.
+- An example of this is our [LCEL streaming guide](/docs/how_to/streaming).
+- Our guide on [custom components](/docs/how_to/custom_chat_model) is another one.
- **How-to guides**: Guides that take the reader through the steps required to solve a real-world problem.
-- The clearest examples of this are our [Use case](/docs/use_cases/) quickstart pages.
+- The clearest examples of this are our [Use case](/docs/how_to#use-cases) quickstart pages.
- **Reference**: Technical descriptions of the machinery and how to operate it.
-- Our [Runnable interface](/docs/expression_language/interface) page is an example of this.
+- Our [Runnable interface](/docs/concepts#interface) page is an example of this.
- The [API reference pages](https://api.python.langchain.com/) are another.
- **Explanation**: Explanations that clarify and illuminate a particular topic.
-- The [LCEL primitives pages](/docs/expression_language/primitives/sequence) are an example of this.
+- The [LCEL primitives pages](/docs/how_to/sequence) are an example of this.
Each category serves a distinct purpose and requires a specific approach to writing and structuring the content.
@@ -35,14 +35,14 @@ when contributing new documentation:
### Getting started
-The [getting started section](/docs/get_started/introduction) includes a high-level introduction to LangChain, a quickstart that
+The [getting started section](/docs/introduction) includes a high-level introduction to LangChain, a quickstart that
tours LangChain's various features, and logistical instructions around installation and project setup.
It contains elements of **How-to guides** and **Explanations**.
### Use cases
-[Use cases](/docs/use_cases/) are guides that are meant to show how to use LangChain to accomplish a specific task (RAG, information extraction, etc.).
+[Use cases](/docs/how_to#use-cases) are guides that are meant to show how to use LangChain to accomplish a specific task (RAG, information extraction, etc.).
The quickstarts should be good entrypoints for first-time LangChain developers who prefer to learn by getting something practical prototyped,
then taking the pieces apart retrospectively. These should mirror what LangChain is good at.
@@ -55,7 +55,7 @@ The below sections are listed roughly in order of increasing level of abstractio
### Expression Language
-[LangChain Expression Language (LCEL)](/docs/expression_language/) is the fundamental way that most LangChain components fit together, and this section is designed to teach
+[LangChain Expression Language (LCEL)](/docs/concepts#langchain-expression-language) is the fundamental way that most LangChain components fit together, and this section is designed to teach
developers how to use it to build with LangChain's primitives effectively.
This section should contain **Tutorials** that teach how to stream and use LCEL primitives for more abstract tasks, **Explanations** of specific behaviors,
@@ -63,7 +63,7 @@ and some **References** for how to use different methods in the Runnable interfa
### Components
-The [components section](/docs/modules) covers concepts one level of abstraction higher than LCEL.
+The [components section](/docs/concepts) covers concepts one level of abstraction higher than LCEL.
Abstract base classes like `BaseChatModel` and `BaseRetriever` should be covered here, as well as core implementations of these base classes,
such as `ChatPromptTemplate` and `RecursiveCharacterTextSplitter`. Customization guides belong here too.
@@ -88,7 +88,7 @@ Concepts covered in `Integrations` should generally exist in `langchain_communit
### Guides and Ecosystem
-The [Guides](/docs/guides) and [Ecosystem](/docs/langsmith/) sections should contain guides that address higher-level problems than the sections above.
+The [Guides](/docs/tutorials) and [Ecosystem](/docs/langsmith/) sections should contain guides that address higher-level problems than the sections above.
This includes, but is not limited to, considerations around productionization and development workflows.
These should contain mostly **How-to guides**, **Explanations**, and **Tutorials**.
@@ -102,7 +102,7 @@ LangChain's API references. Should act as **References** (as the name implies) w
We have set up our docs to assist a developer new to LangChain. Let's walk through the intended path:
- The developer lands on https://python.langchain.com, and reads through the introduction and the diagram.
-- If they are just curious, they may be drawn to the [Quickstart](/docs/get_started/quickstart) to get a high-level tour of what LangChain contains.
+- If they are just curious, they may be drawn to the [Quickstart](/docs/tutorials/llm_chain) to get a high-level tour of what LangChain contains.
- If they have a specific task in mind that they want to accomplish, they will be drawn to the Use-Case section. The use-case should provide a good, concrete hook that shows the value LangChain can provide them and be a good entrypoint to the framework.
- They can then move to learn more about the fundamentals of LangChain through the Expression Language sections.
- Next, they can learn about LangChain's various components and integrations.

@@ -25,7 +25,7 @@
"source": [
"## Using AIMessage.response_metadata\n",
"\n",
-"A number of model providers return token usage information as part of the chat generation response. When available, this is included in the [`AIMessage.response_metadata`](/docs/modules/model_io/chat/response_metadata/) field. Here's an example with OpenAI:"
+"A number of model providers return token usage information as part of the chat generation response. When available, this is included in the [`AIMessage.response_metadata`](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.response_metadata) field. Here's an example with OpenAI:"
]
},
{
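A hedged sketch of reading that field (the model name and `langchain-openai` setup are assumptions):

```python
from langchain_openai import ChatOpenAI  # assumes OPENAI_API_KEY is set

llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
msg = llm.invoke("Tell me a joke about otters.")

# Provider-specific metadata; includes token usage when the provider returns it.
print(msg.response_metadata.get("token_usage"))
# e.g. {'completion_tokens': 28, 'prompt_tokens': 13, 'total_tokens': 41}
```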

@@ -142,7 +142,7 @@
"\n",
"## Chat history\n",
"\n",
-"It's perfectly fine to store and pass messages directly as an array, but we can use LangChain's built-in [message history class](/docs/modules/memory/chat_messages/) to store and load messages as well. Instances of this class are responsible for storing and loading chat messages from persistent storage. LangChain integrates with many providers - you can see a [list of integrations here](/docs/integrations/memory) - but for this demo we will use an ephemeral demo class.\n",
+"It's perfectly fine to store and pass messages directly as an array, but we can use LangChain's built-in [message history class](https://api.python.langchain.com/en/latest/langchain_api_reference.html#module-langchain.memory) to store and load messages as well. Instances of this class are responsible for storing and loading chat messages from persistent storage. LangChain integrates with many providers - you can see a [list of integrations here](/docs/integrations/memory) - but for this demo we will use an ephemeral demo class.\n",
"\n",
"Here's an example of the API:"
]
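A minimal sketch of that API using the ephemeral `ChatMessageHistory` class (the messages themselves are illustrative):

```python
from langchain_community.chat_message_histories import ChatMessageHistory

history = ChatMessageHistory()  # in-memory, non-persistent storage
history.add_user_message("hi!")
history.add_ai_message("hello, how can I help?")

print(history.messages)  # [HumanMessage(content='hi!'), AIMessage(content='hello, how can I help?')]
```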

@@ -15,7 +15,7 @@
"source": [
"# How to add retrieval to chatbots\n",
"\n",
-"Retrieval is a common technique chatbots use to augment their responses with data outside a chat model's training data. This section will cover how to implement retrieval in the context of chatbots, but it's worth noting that retrieval is a very subtle and deep topic - we encourage you to explore [other parts of the documentation](/docs/use_cases/question_answering/) that go into greater depth!\n",
+"Retrieval is a common technique chatbots use to augment their responses with data outside a chat model's training data. This section will cover how to implement retrieval in the context of chatbots, but it's worth noting that retrieval is a very subtle and deep topic - we encourage you to explore [other parts of the documentation](/docs/how_to#qa-with-rag) that go into greater depth!\n",
"\n",
"## Setup\n",
"\n",
@@ -80,7 +80,7 @@
"source": [
"## Creating a retriever\n",
"\n",
-"We'll use [the LangSmith documentation](https://docs.smith.langchain.com/overview) as source material and store the content in a vectorstore for later retrieval. Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more [in-depth documentation on creating retrieval systems here](/docs/use_cases/question_answering/).\n",
+"We'll use [the LangSmith documentation](https://docs.smith.langchain.com/overview) as source material and store the content in a vectorstore for later retrieval. Note that this example will gloss over some of the specifics around parsing and storing a data source - you can see more [in-depth documentation on creating retrieval systems here](/docs/how_to#qa-with-rag).\n",
"\n",
"Let's use a document loader to pull text from the docs:"
]
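A hedged sketch of the loader-to-retriever pipeline this cell begins (chunk size, embedding model, and vector store choice are assumptions):

```python
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Pull the page, split it into chunks, embed the chunks, and index them.
raw_docs = WebBaseLoader("https://docs.smith.langchain.com/overview").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0).split_documents(raw_docs)
vectorstore = FAISS.from_documents(chunks, OpenAIEmbeddings())
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
```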
@@ -737,7 +737,7 @@
"source": [
"## Further reading\n",
"\n",
-"This guide only scratches the surface of retrieval techniques. For more on different ways of ingesting, preparing, and retrieving the most relevant data, check out [this section](/docs/modules/data_connection/) of the docs."
+"This guide only scratches the surface of retrieval techniques. For more on different ways of ingesting, preparing, and retrieving the most relevant data, check out the relevant how-to guides [here](/docs/how_to#document-loaders)."
]
}
],

@@ -17,11 +17,11 @@
"\n",
"This section will cover how to create conversational agents: chatbots that can interact with other systems and APIs using tools.\n",
"\n",
-"Before reading this guide, we recommend you read both [the chatbot quickstart](/docs/use_cases/chatbots/quickstart) in this section and be familiar with [the documentation on agents](/docs/tutorials/agents).\n",
+"Before reading this guide, we recommend you read [the chatbot quickstart](/docs/tutorials/chatbot) in this section and be familiar with [the documentation on agents](/docs/tutorials/agents).\n",
"\n",
"## Setup\n",
"\n",
-"For this guide, we'll be using an [OpenAI tools agent](/docs/modules/agents/agent_types/openai_tools) with a single tool for searching the web. The default will be powered by [Tavily](/docs/integrations/tools/tavily_search), but you can switch it out for any similar tool. The rest of this section will assume you're using Tavily.\n",
+"For this guide, we'll be using an [OpenAI tools agent](/docs/how_to/agent_executor) with a single tool for searching the web. The default will be powered by [Tavily](/docs/integrations/tools/tavily_search), but you can switch it out for any similar tool. The rest of this section will assume you're using Tavily.\n",
"\n",
"You'll need to [sign up for an account](https://tavily.com/) on the Tavily website, and install the following packages:"
]
@@ -437,7 +437,7 @@
"\n",
"Other types of agents can support conversational responses too - for more, check out the [agents section](/docs/tutorials/agents).\n",
"\n",
-"For more on tool usage, you can also check out [this use case section](/docs/use_cases/tool_use/)."
+"For more on tool usage, you can also check out [this use case section](/docs/how_to#tools)."
]
}
],

@@ -38,7 +38,7 @@
"The logic inside of `_get_relevant_documents` can involve arbitrary calls to a database or to the web using requests.\n",
"\n",
":::{.callout-tip}\n",
-"By inherting from `BaseRetriever`, your retriever automatically becomes a LangChain [Runnable](/docs/expression_language/interface) and will gain the standard `Runnable` functionality out of the box!\n",
+"By inheriting from `BaseRetriever`, your retriever automatically becomes a LangChain [Runnable](/docs/concepts#interface) and will gain the standard `Runnable` functionality out of the box!\n",
":::\n",
"\n",
"\n",

@@ -11,7 +11,7 @@
"\n",
"This covers how to load `HTML` documents into LangChain [Document](https://api.python.langchain.com/en/latest/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects that we can use downstream.\n",
"\n",
-"Parsing HTML files often requires specialized tools. Here we demonstrate parsing via [Unstructured](https://unstructured-io.github.io/unstructured/) and [BeautifulSoup4](https://beautiful-soup-4.readthedocs.io/en/latest/), which can be installed via pip. Head over to the integrations page to find integrations with additional services, such as [Azure AI Document Intelligence](/docs/0.2.x/integrations/document_loaders/azure_document_intelligence) or [FireCrawl](/docs/0.2.x/integrations/document_loaders/firecrawl).\n",
+"Parsing HTML files often requires specialized tools. Here we demonstrate parsing via [Unstructured](https://unstructured-io.github.io/unstructured/) and [BeautifulSoup4](https://beautiful-soup-4.readthedocs.io/en/latest/), which can be installed via pip. Head over to the integrations page to find integrations with additional services, such as [Azure AI Document Intelligence](/docs/integrations/document_loaders/azure_document_intelligence) or [FireCrawl](/docs/integrations/document_loaders/firecrawl).\n",
"\n",
"## Loading HTML with Unstructured"
]
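Both loaders follow the same call pattern; a hedged sketch (the file name is illustrative):

```python
from langchain_community.document_loaders import BSHTMLLoader, UnstructuredHTMLLoader

docs = UnstructuredHTMLLoader("example.html").load()  # needs `pip install unstructured`
docs = BSHTMLLoader("example.html").load()            # needs `pip install beautifulsoup4`
print(docs[0].page_content[:100], docs[0].metadata)
```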

@@ -48,7 +48,7 @@
"receive the tool call, execute it, and return the output to the LLM to inform its \n",
"response. LangChain includes a suite of [built-in tools](/docs/integrations/tools/) \n",
"and supports several methods for defining your own [custom tools](/docs/how_to/custom_tools). \n",
-"Tool-calling is extremely useful for building [tool-using chains and agents](/docs/use_cases/tool_use), \n",
+"Tool-calling is extremely useful for building [tool-using chains and agents](/docs/how_to#tools), \n",
"and for getting structured outputs from models more generally.\n",
"\n",
"Providers adopt different conventions for formatting tool schemas and tool calls. \n",
@@ -262,7 +262,7 @@
"are populated in the `.invalid_tool_calls` attribute. An `InvalidToolCall` can have \n",
"a name, string arguments, identifier, and error message.\n",
"\n",
-"If desired, [output parsers](/docs/modules/model_io/output_parsers) can further \n",
+"If desired, [output parsers](/docs/how_to#output-parsers) can further \n",
"process the output. For example, we can convert back to the original Pydantic class:"
]
},
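For instance, a hedged sketch of that conversion (the `Multiply` schema and the tool-bound model are assumed from earlier cells):

```python
from langchain_core.output_parsers.openai_tools import PydanticToolsParser
from langchain_core.pydantic_v1 import BaseModel, Field

class Multiply(BaseModel):
    """Multiply two integers."""
    a: int = Field(description="first number")
    b: int = Field(description="second number")

# llm_with_tools is assumed to be a chat model with Multiply bound as a tool.
chain = llm_with_tools | PydanticToolsParser(tools=[Multiply])
chain.invoke("what is 3 * 12?")  # -> [Multiply(a=3, b=12)]
```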
@@ -351,7 +351,7 @@
"id": "55046320-3466-4ec1-a1f8-336234ba9019",
"metadata": {},
"source": [
-"Note that adding message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain's various [tool output parsers](/docs/modules/model_io/output_parsers/types/openai_tools/) support streaming.\n",
+"Note that adding message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain's various [tool output parsers](/docs/how_to/output_parser_structured) support streaming.\n",
"\n",
"For example, below we accumulate tool call chunks:"
]
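The accumulation loop looks roughly like this (`llm_with_tools` and `query` are assumed from earlier cells):

```python
first = True
for chunk in llm_with_tools.stream(query):
    # Adding message chunks merges their tool_call_chunks field by field.
    gathered = chunk if first else gathered + chunk
    first = False
    print(gathered.tool_call_chunks)  # the args strings grow as chunks arrive
```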
@@ -669,7 +669,7 @@
"## Next steps\n",
"\n",
"- **Output parsing**: See [OpenAI Tools output\n",
-"  parsers](/docs/modules/model_io/output_parsers/types/openai_tools/)\n",
+"  parsers](/docs/how_to/output_parser_structured)\n",
" and [OpenAI Functions output\n",
" parsers](/docs/modules/model_io/output_parsers/types/openai_functions/)\n",
" to learn about extracting the function calling API responses into\n",
@@ -678,7 +678,7 @@
" handle creating a structured output chain for you.\n",
"- **Tool use**: See how to construct chains and agents that\n",
" call the invoked tools in [these\n",
-" guides](/docs/use_cases/tool_use/)."
+" guides](/docs/how_to#tools)."
]
}
],

@@ -94,7 +94,7 @@
"source": [
"## LCEL\n",
"\n",
-"Output parsers implement the [Runnable interface](/docs/expression_language/interface), the basic building block of the [LangChain Expression Language (LCEL)](/docs/expression_language/). This means they support `invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, `astream_log` calls.\n",
+"Output parsers implement the [Runnable interface](/docs/concepts#interface), the basic building block of the [LangChain Expression Language (LCEL)](/docs/concepts#langchain-expression-language). This means they support `invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, `astream_log` calls.\n",
"\n",
"Output parsers accept a string or `BaseMessage` as input and can return an arbitrary type."
]
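A quick sketch of that Runnable behavior with the simplest parser (inputs are illustrative):

```python
from langchain_core.messages import AIMessage
from langchain_core.output_parsers import StrOutputParser

parser = StrOutputParser()
parser.invoke(AIMessage(content="hello"))                       # 'hello'
parser.batch([AIMessage(content="a"), AIMessage(content="b")])  # ['a', 'b']
```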

@@ -19,7 +19,7 @@
"\n",
"As our query analysis becomes more complex, the LLM may struggle to understand how exactly it should respond in certain scenarios. In order to improve performance here, we can add examples to the prompt to guide the LLM.\n",
"\n",
-"Let's take a look at how we can add examples for the LangChain YouTube video query analyzer we built in the [Quickstart](/docs/use_cases/query_analysis/quickstart)."
+"Let's take a look at how we can add examples for the LangChain YouTube video query analyzer we built in the [Quickstart](/docs/tutorials/query_analysis)."
]
},
{

@@ -33,7 +33,7 @@
"\n",
"## The pipe operator\n",
"\n",
-"To show off how this works, let's go through an example. We'll walk through a common pattern in LangChain: using a [prompt template](/docs/modules/model_io/prompts/) to format input into a [chat model](/docs/modules/model_io/chat/), and finally converting the chat message output into a string with an [output parser](/docs/modules/model_io/output_parsers/).\n",
+"To show off how this works, let's go through an example. We'll walk through a common pattern in LangChain: using a [prompt template](/docs/how_to#prompt-templates) to format input into a [chat model](/docs/how_to#chat-models), and finally converting the chat message output into a string with an [output parser](/docs/how_to#output-parsers).\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",

@@ -19,7 +19,7 @@
"\n",
"Streaming is critical in making applications based on LLMs feel responsive to end-users.\n",
"\n",
-"Important LangChain primitives like [chat models](/docs/concepts/#chat-models), [output parsers](/docs/concepts/#output-parsers), [prompts](/docs/concepts/#prompt-templates), [retrievers](/docs/concepts/#retrievers), and [agents](/docs/concepts/#agents) implement the LangChain [Runnable Interface](/docs/expression_language/interface).\n",
+"Important LangChain primitives like [chat models](/docs/concepts/#chat-models), [output parsers](/docs/concepts/#output-parsers), [prompts](/docs/concepts/#prompt-templates), [retrievers](/docs/concepts/#retrievers), and [agents](/docs/concepts/#agents) implement the LangChain [Runnable Interface](/docs/concepts#interface).\n",
"\n",
"This interface provides two general approaches to stream content:\n",
"\n",
@@ -246,9 +246,9 @@
"id": "868bc412",
"metadata": {},
"source": [
-"You might notice above that `parser` actually doesn't block the streaming output from the model, and instead processes each chunk individually. Many of the [LCEL primitives](/docs/expression_language/primitives) also support this kind of transform-style passthrough streaming, which can be very convenient when constructing apps.\n",
+"You might notice above that `parser` actually doesn't block the streaming output from the model, and instead processes each chunk individually. Many of the [LCEL primitives](/docs/how_to#langchain-expression-language-lcel) also support this kind of transform-style passthrough streaming, which can be very convenient when constructing apps.\n",
"\n",
-"Certain runnables, like [prompt templates](/docs/modules/model_io/prompts) and [chat models](/docs/modules/model_io/chat), cannot process individual chunks and instead aggregate all previous steps. This will interrupt the streaming process. Custom functions can be [designed to return generators](/docs/expression_language/primitives/functions#streaming), which"
+"Certain runnables, like [prompt templates](/docs/how_to#prompt-templates) and [chat models](/docs/how_to#chat-models), cannot process individual chunks and instead aggregate all previous steps. This will interrupt the streaming process. Custom functions can be [designed to return generators](/docs/how_to/functions#streaming), which"
]
},
{
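A minimal sketch of such a generator-based transform (`chain` is assumed to be a streaming prompt | model | parser chain):

```python
from typing import Iterator

def to_upper(chunks: Iterator[str]) -> Iterator[str]:
    # A generator keeps the stream flowing chunk by chunk instead of
    # aggregating; piping it coerces it into a streaming Runnable.
    for chunk in chunks:
        yield chunk.upper()

streaming_chain = chain | to_upper
for piece in streaming_chain.stream({"topic": "parrots"}):
    print(piece, end="|", flush=True)
```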

@@ -226,7 +226,7 @@
"are populated in the `.invalid_tool_calls` attribute. An `InvalidToolCall` can have \n",
"a name, string arguments, identifier, and error message.\n",
"\n",
-"If desired, [output parsers](/docs/modules/model_io/output_parsers) can further \n",
+"If desired, [output parsers](/docs/how_to#output-parsers) can further \n",
"process the output. For example, we can convert back to the original Pydantic class:"
]
},
@@ -309,7 +309,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"Note that adding message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain's various [tool output parsers](/docs/modules/model_io/output_parsers/types/openai_tools/) support streaming.\n",
+"Note that adding message chunks will merge their corresponding tool call chunks. This is the principle by which LangChain's various [tool output parsers](/docs/how_to/output_parser_structured) support streaming.\n",
"\n",
"For example, below we accumulate tool call chunks:"
]
@@ -685,7 +685,7 @@
"\n",
"Now you've learned how to bind tool schemas to a chat model and to call those tools. Next, check out some more specific uses of tool calling:\n",
"\n",
-"- Building [tool-using chains and agents](/docs/use_cases/tool_use/)\n",
+"- Building [tool-using chains and agents](/docs/how_to#tools)\n",
"- Getting [structured outputs](/docs/how_to/structured_output/) from models"
]
}

@@ -278,7 +278,7 @@
"\n",
"Chains are great when we know the specific sequence of tool usage needed for any user input. But for certain use cases, how many times we use tools depends on the input. In these cases, we want to let the model itself decide how many times to use tools and in what order. [Agents](/docs/tutorials/agents) let us do just this.\n",
"\n",
-"LangChain comes with a number of built-in agents that are optimized for different use cases. Read about all the [agent types here](/docs/modules/agents/agent_types/).\n",
+"LangChain comes with a number of built-in agents that are optimized for different use cases. Read about all the [agent types here](/docs/concepts#agents).\n",
"\n",
"We'll use the [tool calling agent](https://api.python.langchain.com/en/latest/agents/langchain.agents.tool_calling_agent.base.create_tool_calling_agent.html), which is generally the most reliable kind and the recommended one for most use cases.\n",
"\n",
@@ -335,7 +335,7 @@
"id": "616f9714-5b18-4eed-b88a-d38e4cb1de99",
"metadata": {},
"source": [
-"Agents are also great because they make it easy to use multiple tools. To learn how to build Chains that use multiple tools, check out the [Chains with multiple tools](/docs/use_cases/tool_use/multiple_tools) page."
+"Agents are also great because they make it easy to use multiple tools. To learn how to build Chains that use multiple tools, check out the [Chains with multiple tools](/docs/how_to/tools_multiple) page."
]
},
{
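A hedged sketch of the agent assembly this guide describes (the hub prompt id and Tavily tool mirror the guide but are assumptions here):

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_openai import ChatOpenAI

tools = [TavilySearchResults(max_results=1)]  # assumes TAVILY_API_KEY is set
prompt = hub.pull("hwchase17/openai-tools-agent")

agent = create_tool_calling_agent(ChatOpenAI(model="gpt-3.5-turbo-0125"), tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools)  # this part runs the tools
agent_executor.invoke({"input": "what is the weather in SF?"})
```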

@@ -17,7 +17,7 @@
"source": [
"# How to use an LLM to choose between multiple tools\n",
"\n",
-"In our [Quickstart](/docs/use_cases/tool_use/quickstart) we went over how to build a Chain that calls a single `multiply` tool. Now let's take a look at how we might augment this chain so that it can pick from a number of tools to call. We'll focus on Chains since [Agents](/docs/tutorials/agents) can route between multiple tools by default."
+"In our [Quickstart](/docs/how_to/tool_calling) we went over how to build a Chain that calls a single `multiply` tool. Now let's take a look at how we might augment this chain so that it can pick from a number of tools to call. We'll focus on Chains since [Agents](/docs/tutorials/agents) can route between multiple tools by default."
]
},
{
@@ -120,7 +120,7 @@
"id": "bbea4555-ed10-4a18-b802-e9a3071f132b",
"metadata": {},
"source": [
-"The main difference between using one Tool and many is that we can't be sure which Tool the model will invoke upfront, so we cannot hardcode, like we did in the [Quickstart](/docs/use_cases/tool_use/quickstart), a specific tool into our chain. Instead we'll add `call_tools`, a `RunnableLambda` that takes the output AI message with tools calls and routes to the correct tools.\n",
+"The main difference between using one Tool and many is that we can't be sure which Tool the model will invoke upfront, so we cannot hardcode, like we did in the [Quickstart](/docs/how_to/tool_calling), a specific tool into our chain. Instead we'll add `call_tools`, a `RunnableLambda` that takes the output AI message with tool calls and routes to the correct tools.\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",

@@ -7,7 +7,7 @@
"source": [
"# How to call tools in parallel\n",
"\n",
-"In the [Chains with multiple tools](/docs/use_cases/tool_use/multiple_tools) guide we saw how to build function-calling chains that select between multiple tools. Some models, like the OpenAI models released in Fall 2023, also support parallel function calling, which allows you to invoke multiple functions (or the same function multiple times) in a single model call. Our previous chain from the multiple tools guides actually already supports this."
+"In the [Chains with multiple tools](/docs/how_to/tools_multiple) guide we saw how to build function-calling chains that select between multiple tools. Some models, like the OpenAI models released in Fall 2023, also support parallel function calling, which allows you to invoke multiple functions (or the same function multiple times) in a single model call. Our previous chain from the multiple tools guide actually already supports this."
]
},
{

@@ -17,7 +17,7 @@
"source": [
"# How to use tools without function calling\n",
"\n",
-"In this guide we'll build a Chain that does not rely on any special model APIs (like tool calling, which we showed in the [Quickstart](/docs/use_cases/tool_use/quickstart)) and instead just prompts the model directly to invoke tools."
+"In this guide we'll build a Chain that does not rely on any special model APIs (like tool calling, which we showed in the [Quickstart](/docs/how_to/tool_calling)) and instead just prompts the model directly to invoke tools."
]
},
{

@@ -124,7 +124,7 @@
"tags": []
},
"source": [
-"Here are two examples of how to use the `TrubricsCallbackHandler` with Langchain [LLMs](/docs/modules/model_io/llms/) or [Chat Models](/docs/modules/model_io/chat/). We will use OpenAI models, so set your `OPENAI_API_KEY` key here:"
+"Here are two examples of how to use the `TrubricsCallbackHandler` with LangChain [LLMs](/docs/how_to#llms) or [Chat Models](/docs/how_to#chat-models). We will use OpenAI models, so set your `OPENAI_API_KEY` here:"
]
},
{

@@ -77,7 +77,7 @@
"source": [
"## Usage\n",
"\n",
-"ChatCohere supports all [ChatModel](/docs/modules/model_io/chat/) functionality:"
+"ChatCohere supports all [ChatModel](/docs/how_to#chat-models) functionality:"
]
},
{
@@ -201,7 +201,7 @@
"source": [
"## Chaining\n",
"\n",
-"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/expression_language)"
+"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language)"
]
},
{

@@ -71,7 +71,7 @@
"source": [
"## Usage\n",
"\n",
-"`FrienliChat` supports all methods of [`ChatModel`](/docs/modules/model_io/chat/) including async APIs."
+"`ChatFriendli` supports all methods of [`ChatModel`](/docs/how_to#chat-models) including async APIs."
]
},
{

@@ -509,7 +509,7 @@
"source": [
"## Asynchronous calls\n",
"\n",
-"We can make asynchronous calls via the Runnables [Async Interface](/docs/expression_language/interface)."
+"We can make asynchronous calls via the Runnables [Async Interface](/docs/concepts#interface)."
]
},
{

@@ -10,7 +10,7 @@
"\n",
"In particular, we will:\n",
"1. Utilize the [HuggingFaceTextGenInference](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_text_gen_inference.py), [HuggingFaceEndpoint](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_endpoint.py), or [HuggingFaceHub](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/huggingface_hub.py) integrations to instantiate an `LLM`.\n",
-"2. Utilize the `ChatHuggingFace` class to enable any of these LLMs to interface with LangChain's [Chat Messages](/docs/modules/model_io/chat/#messages) abstraction.\n",
+"2. Utilize the `ChatHuggingFace` class to enable any of these LLMs to interface with LangChain's [Chat Messages](/docs/concepts#chat-models) abstraction.\n",
"3. Demonstrate how to use an open-source LLM to power a `ChatAgent` pipeline\n",
"\n",
"\n",
@@ -280,7 +280,7 @@
"source": [
"## 3. Take it for a spin as an agent!\n",
"\n",
-"Here we'll test out `Zephyr-7B-beta` as a zero-shot `ReAct` Agent. The example below is taken from [here](/docs/modules/agents/agent_types/react#using-chat-models).\n",
+"Here we'll test out `Zephyr-7B-beta` as a zero-shot `ReAct` Agent. The example below is taken from [here](https://python.langchain.com/v0.1/docs/modules/agents/agent_types/react/#using-chat-models).\n",
"\n",
"> Note: To run this section, you'll need to have a [SerpAPI Token](https://serpapi.com/) saved as an environment variable: `SERPAPI_API_KEY`"
]

@@ -17,9 +17,9 @@
"source": [
"# Llama2Chat\n",
"\n",
-"This notebook shows how to augment Llama-2 `LLM`s with the `Llama2Chat` wrapper to support the [Llama-2 chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). Several `LLM` implementations in LangChain can be used as interface to Llama-2 chat models. These include [ChatHuggingFace](/docs/integrations/chat/huggingface), [LlamaCpp](/docs/use_cases/question_answering/local_retrieval_qa), [GPT4All](/docs/integrations/llms/gpt4all), ..., to mention a few examples. \n",
+"This notebook shows how to augment Llama-2 `LLM`s with the `Llama2Chat` wrapper to support the [Llama-2 chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). Several `LLM` implementations in LangChain can be used as an interface to Llama-2 chat models. These include [ChatHuggingFace](/docs/integrations/chat/huggingface), [LlamaCpp](/docs/tutorials/local_rag), [GPT4All](/docs/integrations/llms/gpt4all), ..., to mention a few examples. \n",
"\n",
-"`Llama2Chat` is a generic wrapper that implements `BaseChatModel` and can therefore be used in applications as [chat model](/docs/modules/model_io/chat/). `Llama2Chat` converts a list of Messages into the [required chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) and forwards the formatted prompt as `str` to the wrapped `LLM`."
+"`Llama2Chat` is a generic wrapper that implements `BaseChatModel` and can therefore be used in applications as a [chat model](/docs/how_to#chat-models). `Llama2Chat` converts a list of Messages into the [required chat prompt format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2) and forwards the formatted prompt as `str` to the wrapped `LLM`."
]
},
{

@@ -225,7 +225,7 @@
"source": [
"## Chaining\n",
"\n",
-"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/expression_language)"
+"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language)"
]
},
{

@@ -1005,7 +1005,7 @@
"id": "79efa62d"
},
"source": [
-"Like any other integration, ChatNVIDIA is fine to support chat utilities like conversation buffers by default. Below, we show the [LangChain ConversationBufferMemory](/docs/modules/memory/types/buffer) example applied to the `mixtral_8x7b` model."
+"Like any other integration, ChatNVIDIA supports chat utilities like conversation buffers by default. Below, we show the [LangChain ConversationBufferMemory](https://api.python.langchain.com/en/latest/memory/langchain.memory.buffer.ConversationBufferMemory.html) example applied to the `mixtral_8x7b` model."
]
},
{

@@ -185,7 +185,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"Take a look at the [LangChain Expressive Language (LCEL) Interface](/docs/expression_language/interface) for the other available interfaces for use when a chain is created.\n",
+"Take a look at the [LangChain Expression Language (LCEL) Interface](/docs/concepts#interface) for the other available interfaces for use when a chain is created.\n",
"\n",
"## Building from source\n",
"\n",

@@ -8,7 +8,7 @@
"\n",
"> [Bigtable](https://cloud.google.com/bigtable) is a key-value and wide-column store, ideal for fast access to structured, semi-structured, or unstructured data. Extend your database application to build AI-powered experiences leveraging Bigtable's Langchain integrations.\n",
"\n",
-"This notebook goes over how to use [Bigtable](https://cloud.google.com/bigtable) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `BigtableLoader` and `BigtableSaver`.\n",
+"This notebook goes over how to use [Bigtable](https://cloud.google.com/bigtable) to [save, load and delete langchain documents](/docs/how_to#document-loaders) with `BigtableLoader` and `BigtableSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-bigtable-python/).\n",
"\n",

@@ -8,7 +8,7 @@
"\n",
"> [Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers [MySQL](https://cloud.google.com/sql/mysql), [PostgreSQL](https://cloud.google.com/sql/postgres), and [SQL Server](https://cloud.google.com/sql/sqlserver) database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's Langchain integrations.\n",
"\n",
-"This notebook goes over how to use [Cloud SQL for SQL server](https://cloud.google.com/sql/sqlserver) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `MSSQLLoader` and `MSSQLDocumentSaver`.\n",
+"This notebook goes over how to use [Cloud SQL for SQL server](https://cloud.google.com/sql/sqlserver) to [save, load and delete langchain documents](/docs/how_to#document-loaders) with `MSSQLLoader` and `MSSQLDocumentSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-mssql-python/).\n",
"\n",

@@ -8,7 +8,7 @@
"\n",
"> [Cloud SQL](https://cloud.google.com/sql) is a fully managed relational database service that offers high performance, seamless integration, and impressive scalability. It offers [MySQL](https://cloud.google.com/sql/mysql), [PostgreSQL](https://cloud.google.com/sql/postgresql), and [SQL Server](https://cloud.google.com/sql/sqlserver) database engines. Extend your database application to build AI-powered experiences leveraging Cloud SQL's Langchain integrations.\n",
"\n",
-"This notebook goes over how to use [Cloud SQL for MySQL](https://cloud.google.com/sql/mysql) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `MySQLLoader` and `MySQLDocumentSaver`.\n",
+"This notebook goes over how to use [Cloud SQL for MySQL](https://cloud.google.com/sql/mysql) to [save, load and delete langchain documents](/docs/how_to#document-loaders) with `MySQLLoader` and `MySQLDocumentSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-cloud-sql-mysql-python/).\n",
"\n",

@@ -8,7 +8,7 @@
"\n",
"> [Firestore in Datastore Mode](https://cloud.google.com/datastore) is a NoSQL document database built for automatic scaling, high performance and ease of application development. Extend your database application to build AI-powered experiences leveraging Datastore's Langchain integrations.\n",
"\n",
-"This notebook goes over how to use [Firestore in Datastore Mode](https://cloud.google.com/datastore) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `DatastoreLoader` and `DatastoreSaver`.\n",
+"This notebook goes over how to use [Firestore in Datastore Mode](https://cloud.google.com/datastore) to [save, load and delete langchain documents](/docs/how_to#document-loaders) with `DatastoreLoader` and `DatastoreSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-datastore-python/).\n",
"\n",

@@ -18,7 +18,7 @@
"by leveraging the El Carro Langchain integration.\n",
"\n",
"This guide goes over how to use the El Carro Langchain integration to\n",
-"[save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/)\n",
+"[save, load and delete langchain documents](/docs/how_to#document-loaders)\n",
"with `ElCarroLoader` and `ElCarroDocumentSaver`. This integration works for any Oracle database, regardless of where it is running.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-el-carro-python/).\n",

@@ -8,7 +8,7 @@
"\n",
"> [Firestore](https://cloud.google.com/firestore) is a serverless document-oriented database that scales to meet any demand. Extend your database application to build AI-powered experiences leveraging Firestore's Langchain integrations.\n",
"\n",
-"This notebook goes over how to use [Firestore](https://cloud.google.com/firestore) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `FirestoreLoader` and `FirestoreSaver`.\n",
+"This notebook goes over how to use [Firestore](https://cloud.google.com/firestore) to [save, load and delete langchain documents](/docs/how_to#document-loaders) with `FirestoreLoader` and `FirestoreSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-firestore-python/).\n",
"\n",

@@ -10,7 +10,7 @@
"\n",
"> [Google Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) is a fully-managed service that is powered by the Redis in-memory data store to build application caches that provide sub-millisecond data access. Extend your database application to build AI-powered experiences leveraging Memorystore for Redis's Langchain integrations.\n",
"\n",
-"This notebook goes over how to use [Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `MemorystoreDocumentLoader` and `MemorystoreDocumentSaver`.\n",
+"This notebook goes over how to use [Memorystore for Redis](https://cloud.google.com/memorystore/docs/redis/memorystore-for-redis-overview) to [save, load and delete langchain documents](/docs/how_to#document-loaders) with `MemorystoreDocumentLoader` and `MemorystoreDocumentSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-memorystore-redis-python/).\n",
"\n",

@@ -8,7 +8,7 @@
"\n",
"> [Spanner](https://cloud.google.com/spanner) is a highly scalable database that combines unlimited scalability with relational semantics, such as secondary indexes, strong consistency, schemas, and SQL providing 99.999% availability in one easy solution.\n",
"\n",
-"This notebook goes over how to use [Spanner](https://cloud.google.com/spanner) to [save, load and delete langchain documents](/docs/modules/data_connection/document_loaders/) with `SpannerLoader` and `SpannerDocumentSaver`.\n",
+"This notebook goes over how to use [Spanner](https://cloud.google.com/spanner) to [save, load and delete langchain documents](/docs/how_to#document-loaders) with `SpannerLoader` and `SpannerDocumentSaver`.\n",
"\n",
"Learn more about the package on [GitHub](https://github.com/googleapis/langchain-google-spanner-python/).\n",
"\n",

@@ -99,9 +99,9 @@
"\n",
"## Get started [](\#get-started "Direct link to Get started")\n",
"\n",
-"[Heres](/docs/get_started/installation) how to install LangChain, set up your environment, and start building.\n",
+"[Here's](/docs/installation) how to install LangChain, set up your environment, and start building.\n",
"\n",
-"We recommend following our [Quickstart](/docs/get_started/quickstart) guide to familiarize yourself with the framework by building your first LangChain application.\n",
+"We recommend following our [Quickstart](/docs/tutorials/llm_chain) guide to familiarize yourself with the framework by building your first LangChain application.\n",
"\n",
"Read up on our [Security](/docs/security) best practices to make sure you're developing safely with LangChain.\n",
"\n",
@@ -113,8 +113,8 @@
"\n",
"LCEL is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.\n",
"\n",
-"- **[Overview](/docs/expression_language/)**: LCEL and its benefits\n",
-"- **[Interface](/docs/expression_language/interface)**: The standard interface for LCEL objects\n",
+"- **[Overview](/docs/concepts#langchain-expression-language)**: LCEL and its benefits\n",
+"- **[Interface](/docs/concepts#interface)**: The standard interface for LCEL objects\n",
"- **[How-to](/docs/expression_language/how_to)**: Key features of LCEL\n",
"- **[Cookbook](/docs/expression_language/cookbook)**: Example code for accomplishing common tasks\n",
"\n",
@@ -136,13 +136,13 @@
"\n",
"## Examples, ecosystem, and resources [](\#examples-ecosystem-and-resources "Direct link to Examples, ecosystem, and resources")\n",
"\n",
-"### [Use cases](/docs/use_cases/question_answering/) [](\#use-cases "Direct link to use-cases")\n",
+"### [Use cases](/docs/how_to#qa-with-rag) [](\#use-cases "Direct link to use-cases")\n",
"\n",
"Walkthroughs and techniques for common end-to-end use cases, like:\n",
"\n",
-"- [Document question answering](/docs/use_cases/question_answering/)\n",
+"- [Document question answering](/docs/how_to#qa-with-rag)\n",
"- [Chatbots](/docs/use_cases/chatbots/)\n",
-"- [Analyzing structured data](/docs/use_cases/sql/)\n",
+"- [Analyzing structured data](/docs/how_to#qa-over-sql--csv)\n",
"- and much more...\n",
"\n",
"### [Integrations](/docs/integrations/providers/) [](\\#integrations \"Direct link to integrations\")\n",

@@ -584,7 +584,7 @@
"id": "8edb9976",
"metadata": {},
"source": [
-"To address this, we can adjust the initial Cypher prompt of the QA chain. This involves adding guidance to the LLM on how users can refer to specific platforms, such as PS5 in our case. We achieve this using the LangChain [PromptTemplate](/docs/modules/model_io/prompts/), creating a modified initial prompt. This modified prompt is then supplied as an argument to our refined Memgraph-LangChain instance."
+"To address this, we can adjust the initial Cypher prompt of the QA chain. This involves adding guidance to the LLM on how users can refer to specific platforms, such as PS5 in our case. We achieve this using the LangChain [PromptTemplate](/docs/how_to#prompt-templates), creating a modified initial prompt. This modified prompt is then supplied as an argument to our refined Memgraph-LangChain instance."
]
},
{

@@ -79,7 +79,7 @@
"source": [
"## Usage\n",
"\n",
-"Cohere supports all [LLM](/docs/modules/model_io/llms/) functionality:"
+"Cohere supports all [LLM](/docs/how_to#llms) functionality:"
]
},
{
@@ -193,7 +193,7 @@
"id": "39198f7d-6fc8-4662-954a-37ad38c4bec4",
"metadata": {},
"source": [
-"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/expression_language)"
+"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language)"
]
},
{

@@ -71,7 +71,7 @@
"source": [
"## Usage\n",
"\n",
-"`Frienli` supports all methods of [`LLM`](/docs/modules/model_io/llms/) including async APIs."
+"`Friendli` supports all methods of [`LLM`](/docs/how_to#llms) including async APIs."
]
},
{

@@ -72,7 +72,7 @@
"source": [
"## Usage\n",
"\n",
-"VertexAI supports all [LLM](/docs/modules/model_io/llms/) functionality."
+"VertexAI supports all [LLM](/docs/how_to#llms) functionality."
]
},
{
@@ -326,7 +326,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/expression_language)"
+"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language)"
]
},
{

@@ -105,7 +105,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"To learn more about the LangChain Expressive Language and the available methods on an LLM, see the [LCEL Interface](/docs/expression_language/interface)"
+"To learn more about the LangChain Expression Language and the available methods on an LLM, see the [LCEL Interface](/docs/concepts#interface)"
]
}
],

@@ -175,7 +175,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"To learn more about the LangChain Expressive Language and the available methods on an LLM, see the [LCEL Interface](/docs/expression_language/interface)"
+"To learn more about the LangChain Expression Language and the available methods on an LLM, see the [LCEL Interface](/docs/concepts#interface)"
]
},
{

@@ -305,7 +305,7 @@ We need to install the `boto3` and `nltk` libraries.
pip install boto3 nltk
```
-See a [usage example](/docs/guides/productionization/safety/amazon_comprehend_chain).
+See a [usage example](https://python.langchain.com/v0.1/docs/guides/productionization/safety/amazon_comprehend_chain/).
```python
from langchain_experimental.comprehend_moderation import AmazonComprehendModerationChain

@@ -7,7 +7,7 @@ sidebar_class_name: hidden
:::info
-If you'd like to write your own integration, see [Extending LangChain](/docs/guides/development/extending_langchain/).
+If you'd like to write your own integration, see [Extending LangChain](/docs/how_to/#custom).
If you'd like to contribute an integration, see [Contributing integrations](/docs/contributing/integrations/).
:::

@@ -346,7 +346,7 @@ pip install langchain-experimental openai presidio-analyzer presidio-anonymizer
python -m spacy download en_core_web_lg
```
-See [usage examples](/docs/guides/productionization/safety/presidio_data_anonymization/).
+See [usage examples](https://python.langchain.com/v0.1/docs/guides/productionization/safety/presidio_data_anonymization).
```python
from langchain_experimental.data_anonymizer import PresidioAnonymizer, PresidioReversibleAnonymizer

@@ -107,11 +107,11 @@ You can also use it to count tokens when splitting documents with
from langchain.text_splitter import CharacterTextSplitter
CharacterTextSplitter.from_tiktoken_encoder(...)
```
-For a more detailed walkthrough of this, see [this notebook](/docs/modules/data_connection/document_transformers/split_by_token#tiktoken)
+For a more detailed walkthrough of this, see [this notebook](/docs/how_to/split_by_token/#tiktoken)
## Chain
-See a [usage example](/docs/guides/productionization/safety/moderation).
+See a [usage example](https://python.langchain.com/v0.1/docs/guides/productionization/safety/moderation).
```python
from langchain.chains import OpenAIModerationChain

@@ -33,7 +33,7 @@ db = SQLDatabase.from_uri(conn_str)
db_chain = SQLDatabaseChain.from_llm(OpenAI(temperature=0), db, verbose=True)
```
-From here, see the [SQL Chain](/docs/use_cases/sql/) documentation on how to use.
+From here, see the [SQL Chain](/docs/how_to#qa-over-sql--csv) documentation on how to use.
## LLMCache

@@ -7,7 +7,7 @@
>It optimizes setup and configuration details, including GPU usage.
>For a complete list of supported models and model variants, see the [Ollama model library](https://ollama.ai/library).
-See [this guide](/docs/guides/development/local_llms#quickstart) for more details
+See [this guide](/docs/tutorials/local_rag) for more details
on how to use `Ollama` with LangChain.
## Installation and Setup

@@ -132,7 +132,7 @@ Redis can be used to persist LLM conversations.
### Vector Store Retriever Memory
-For a more detailed walkthrough of the `VectorStoreRetrieverMemory` wrapper, see [this notebook](/docs/modules/memory/types/vectorstore_retriever_memory).
+For a more detailed walkthrough of the `VectorStoreRetrieverMemory` wrapper, see [this notebook](https://api.python.langchain.com/en/latest/memory/langchain.memory.vectorstore.VectorStoreRetrieverMemory.html).
### Chat Message History Memory
For a detailed example of Redis to cache conversation message history, see [this notebook](/docs/integrations/memory/redis_chat_message_history).

@@ -13,7 +13,7 @@ pip install spacy
## Text Splitter
-See a [usage example](/docs/modules/data_connection/document_transformers/split_by_token#spacy).
+See a [usage example](/docs/how_to/split_by_token/#spacy).
```python
from langchain_text_splitters import SpacyTextSplitter

@@ -126,7 +126,7 @@ from langchain_community.document_loaders import UnstructuredFileLoader
### UnstructuredHTMLLoader
-See a [usage example](/docs/modules/data_connection/document_loaders/html).
+See a [usage example](/docs/how_to/document_loader_html).
```python
from langchain_community.document_loaders import UnstructuredHTMLLoader
@@ -173,7 +173,7 @@ from langchain_community.document_loaders import UnstructuredOrgModeLoader
### UnstructuredPDFLoader
-See a [usage example](/docs/modules/data_connection/document_loaders/pdf#using-unstructured).
+See a [usage example](/docs/how_to/document_loader_pdf#using-unstructured).
```python
from langchain_community.document_loaders import UnstructuredPDFLoader

@@ -22,7 +22,7 @@
"See the [Vectara API documentation](https://docs.vectara.com/docs/) for more information on how to use the API.\n",
"\n",
"This notebook shows how to use functionality related to `Vectara`'s integration with LangChain.\n",
-"Specificaly we will demonstrate how to use chaining with [LangChain's Expression Language](/docs/expression_language/) and using Vectara's integrated summarization capability."
+"Specifically, we will demonstrate how to use chaining with [LangChain's Expression Language](/docs/concepts#langchain-expression-language) and Vectara's integrated summarization capability."
]
},
{

@@ -9,7 +9,7 @@
"\n",
">[Fleet AI Context](https://www.fleet.so/context) is a dataset of high-quality embeddings of the top 1200 most popular & permissive Python Libraries & their documentation.\n",
">\n",
-">The `Fleet AI` team is on a mission to embed the world's most important data. They've started by embedding the top 1200 Python libraries to enable code generation with up-to-date knowledge. They've been kind enough to share their embeddings of the [LangChain docs](/docs/get_started/introduction) and [API reference](https://api.python.langchain.com/en/latest/api_reference.html).\n",
+">The `Fleet AI` team is on a mission to embed the world's most important data. They've started by embedding the top 1200 Python libraries to enable code generation with up-to-date knowledge. They've been kind enough to share their embeddings of the [LangChain docs](/docs/introduction) and [API reference](https://api.python.langchain.com/en/latest/api_reference.html).\n",
"\n",
"Let's take a look at how we can use these embeddings to power a docs retrieval system and ultimately a simple code-generating chain!"
]

@@ -12,7 +12,7 @@
">\n",
">[ColBERT](https://github.com/stanford-futuredata/ColBERT) is a fast and accurate retrieval model, enabling scalable BERT-based search over large text collections in tens of milliseconds.\n",
"\n",
-"We can use this as a [retriever](/docs/modules/data_connection/retrievers). It will show functionality specific to this integration. After going through, it may be useful to explore [relevant use-case pages](/docs/use_cases/question_answering) to learn how to use this vector store as part of a larger chain.\n",
+"We can use this as a [retriever](/docs/how_to#retrievers). It will show functionality specific to this integration. After going through, it may be useful to explore [relevant use-case pages](/docs/how_to#qa-with-rag) to learn how to use this vector store as part of a larger chain.\n",
"\n",
"This page covers how to use [RAGatouille](https://github.com/bclavie/RAGatouille) as a retriever in a LangChain chain. \n",
"\n",

@@ -8,7 +8,7 @@
"\n",
">[Tavily's Search API](https://tavily.com) is a search engine built specifically for AI agents (LLMs), delivering real-time, accurate, and factual results at speed.\n",
"\n",
-"We can use this as a [retriever](/docs/modules/data_connection/retrievers). It will show functionality specific to this integration. After going through, it may be useful to explore [relevant use-case pages](/docs/use_cases/question_answering) to learn how to use this vectorstore as part of a larger chain.\n",
+"We can use this as a [retriever](/docs/how_to#retrievers). It will show functionality specific to this integration. After going through, it may be useful to explore [relevant use-case pages](/docs/how_to#qa-with-rag) to learn how to use this vectorstore as part of a larger chain.\n",
"\n",
"## Setup\n",
"\n",

@@ -118,7 +118,7 @@
"source": [
"## Create the agent\n",
"\n",
-"Now that we have defined the tools, we can create the agent. We will be using an OpenAI Functions agent - for more information on this type of agent, as well as other options, see [this guide](/docs/modules/agents/agent_types/)\n",
+"Now that we have defined the tools, we can create the agent. We will be using an OpenAI Functions agent - for more information on this type of agent, as well as other options, see [this guide](/docs/concepts#agents)\n",
"\n",
"First, we choose the LLM we want to be guiding the agent."
]
@@ -176,7 +176,7 @@
"id": "f8014c9d",
"metadata": {},
"source": [
-"Now, we can initalize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/docs/modules/agents/concepts)"
+"Now, we can initialize the agent with the LLM, the prompt, and the tools. The agent is responsible for taking in input and deciding what actions to take. Crucially, the Agent does not execute those actions - that is done by the AgentExecutor (next step). For more information about how to think about these components, see our [conceptual guide](/docs/concepts#agents)"
]
},
{
@@ -196,7 +196,7 @@
"id": "1a58c9f8",
"metadata": {},
"source": [
-"Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools). For more information about how to think about these components, see our [conceptual guide](/docs/modules/agents/concepts)"
+"Finally, we combine the agent (the brains) with the tools inside the AgentExecutor (which will repeatedly call the agent and execute tools). For more information about how to think about these components, see our [conceptual guide](/docs/concepts#agents)"
]
},
{

@@ -156,7 +156,7 @@
"source": [
"## Using tool with an agent chain\n",
"\n",
-"Reddit search functionality is also provided as a multi-input tool. In this example, we adapt [existing code from the docs](/docs/modules/memory/agent_with_memory), and use ChatOpenAI to create an agent chain with memory. This agent chain is able to pull information from Reddit and use these posts to respond to subsequent input. \n",
+"Reddit search functionality is also provided as a multi-input tool. In this example, we adapt [existing code from the docs](https://python.langchain.com/v0.1/docs/modules/memory/agent_with_memory/), and use ChatOpenAI to create an agent chain with memory. This agent chain is able to pull information from Reddit and use these posts to respond to subsequent input. \n",
"\n",
"To run the example, add your reddit API access information and also get an OpenAI key from the [OpenAI API](https://help.openai.com/en/articles/4936850-where-do-i-find-my-api-key)."
]
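A rough sketch of the tool setup only (credentials are placeholders; the memory wiring follows the linked page):

```python
from langchain_community.tools.reddit_search.tool import RedditSearchRun
from langchain_community.utilities.reddit_search import RedditSearchAPIWrapper

# placeholder credentials; create these in Reddit's developer portal
search = RedditSearchRun(
    api_wrapper=RedditSearchAPIWrapper(
        reddit_client_id="client-id",
        reddit_client_secret="client-secret",
        reddit_user_agent="my-app/0.1",
    )
)
```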

@ -11,7 +11,7 @@
"\n",
"[Faiss documentation](https://faiss.ai/).\n",
"\n",
"This notebook shows how to use functionality related to the `FAISS` vector database. It will show functionality specific to this integration. After going through, it may be useful to explore [relevant use-case pages](/docs/use_cases/question_answering) to learn how to use this vectorstore as part of a larger chain."
"This notebook shows how to use functionality related to the `FAISS` vector database. It will show functionality specific to this integration. After going through, it may be useful to explore [relevant use-case pages](/docs/how_to#qa-with-rag) to learn how to use this vectorstore as part of a larger chain."
]
},
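A minimal end-to-end sketch, assuming `faiss-cpu` is installed; `FakeEmbeddings` stands in for a real embedding model:

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import FAISS

db = FAISS.from_texts(["foo", "bar", "baz"], FakeEmbeddings(size=16))
docs = db.similarity_search("foo", k=1)
```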
{
@ -169,7 +169,7 @@
"source": [
"## As a Retriever\n",
"\n",
"We can also convert the vectorstore into a [Retriever](/docs/modules/data_connection/retrievers) class. This allows us to easily use it in other LangChain methods, which largely work with retrievers"
"We can also convert the vectorstore into a [Retriever](/docs/how_to#retrievers) class. This allows us to easily use it in other LangChain methods, which largely work with retrievers"
]
},
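Continuing from the `db` built above, for example:

```python
# search_kwargs is optional; here it caps the number of returned documents
retriever = db.as_retriever(search_kwargs={"k": 2})
docs = retriever.invoke("foo")
```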
{

@ -307,7 +307,7 @@
"metadata": {},
"source": [
"### Using a Timescale Vector as a Retriever\n",
"After initializing a TimescaleVector store, you can use it as a [retriever](/docs/modules/data_connection/retrievers/)."
"After initializing a TimescaleVector store, you can use it as a [retriever](/docs/how_to#retrievers)."
]
},
{
@ -477,7 +477,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we'll define a metadata function to extract the relevant metadata from the JSON record. We'll pass this function to the JSONLoader. See the [JSON document loader docs](/docs/modules/data_connection/document_loaders/json) for more details."
"Next, we'll define a metadata function to extract the relevant metadata from the JSON record. We'll pass this function to the JSONLoader. See the [JSON document loader docs](/docs/how_to/document_loader_json) for more details."
]
},
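A sketch of the pattern with a hypothetical file layout and field names (the loader needs the `jq` package):

```python
from langchain_community.document_loaders import JSONLoader

def metadata_func(record: dict, metadata: dict) -> dict:
    # copy fields of interest from the raw JSON record onto the Document
    metadata["author"] = record.get("author")
    return metadata

loader = JSONLoader(
    file_path="data.json",      # hypothetical file
    jq_schema=".messages[]",    # hypothetical structure
    content_key="text",
    metadata_func=metadata_func,
)
docs = loader.load()
```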
{

@ -388,7 +388,7 @@
"### As retriever\n",
"\n",
"To use this vector store as a\n",
"[LangChain retriever](/docs/modules/data_connection/retrievers/)\n",
"[LangChain retriever](/docs/how_to#retrievers)\n",
"simply call the `as_retriever` function, which is a standard vector store\n",
"method:"
]

@ -8,7 +8,7 @@ sidebar_class_name: hidden
**LangChain** is a framework for developing applications powered by large language models (LLMs).
LangChain simplifies every stage of the LLM application lifecycle:
- **Development**: Build your applications using LangChain's open-source [building blocks](/docs/expression_language/) and [components](/docs/modules/). Hit the ground running using [third-party integrations](/docs/integrations/platforms/) and [Templates](/docs/templates).
- **Development**: Build your applications using LangChain's open-source [building blocks](/docs/concepts#langchain-expression-language) and [components](/docs/concepts). Hit the ground running using [third-party integrations](/docs/integrations/platforms/) and [Templates](/docs/templates).
- **Productionization**: Use [LangSmith](/docs/langsmith/) to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence.
- **Deployment**: Turn any chain into an API with [LangServe](/docs/langserve).

@ -299,10 +299,10 @@
"\n",
"For more complex query-generation, we may want to create few-shot prompts or add query-checking steps. For advanced techniques like this and more check out:\n",
"\n",
"* [Prompting strategies](/docs/use_cases/graph/prompting): Advanced prompt engineering techniques.\n",
"* [Mapping values](/docs/use_cases/graph/mapping): Techniques for mapping values from questions to database.\n",
"* [Semantic layer](/docs/use_cases/graph/semantic): Techniques for implementing semantic layers.\n",
"* [Constructing graphs](/docs/use_cases/graph/constructing): Techniques for constructing knowledge graphs."
"* [Prompting strategies](/docs/how_to/graph_prompting): Advanced prompt engineering techniques.\n",
"* [Mapping values](docs/how_to/graph_mapping): Techniques for mapping values from questions to database.\n",
"* [Semantic layer](/docs/how_to/graph_semantic): Techniques for implementing semantic layers.\n",
"* [Constructing graphs](/docs/how_to/graph_constructing): Techniques for constructing knowledge graphs."
]
},
{

@ -541,7 +541,7 @@
"\n",
"### Client\n",
"\n",
"Now let's set up a client for programmatically interacting with our service. We can easily do this with the `[langserve.RemoteRunnable](/docs/langserve#client)`.\n",
"Now let's set up a client for programmatically interacting with our service. We can easily do this with the `[langserve.RemoteRunnable](/docs/langserve/#client)`.\n",
"Using this, we can interact with the served chain as if it were running client-side."
]
},
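A minimal sketch, assuming the server from the previous step is running locally on port 8000:

```python
from langserve import RemoteRunnable

remote_chain = RemoteRunnable("http://localhost:8000/chain/")
remote_chain.invoke({"input": "hello"})  # input shape depends on the served chain
```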

@ -11,7 +11,7 @@
"\n",
"LangChain has [integrations](https://integrations.langchain.com/) with many open-source LLMs that can be run locally.\n",
"\n",
"See [here](/docs/guides/development/local_llms) for setup instructions for these LLMs. \n",
"See [here](/docs/tutorials/local_rag) for setup instructions for these LLMs. \n",
"\n",
"For example, here we show how to run `GPT4All` or `LLaMA2` locally (e.g., on your laptop) using local embeddings and a local LLM.\n",
"\n",
@ -145,7 +145,7 @@
" \n",
"And / or, you can download a GGUF converted model (e.g., [here](https://huggingface.co/TheBloke)).\n",
"\n",
"Finally, as noted in detail [here](/docs/guides/development/local_llms) install `llama-cpp-python`"
"Finally, as noted in detail [here](/docs/tutorials/local_rag) install `llama-cpp-python`"
]
},
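An illustrative sketch of running such a model through LangChain (the GGUF path is a placeholder):

```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="/path/to/model.gguf",  # the file downloaded above
    n_ctx=2048,
    temperature=0.0,
)
llm.invoke("Q: Name a famous computer scientist. A:")
```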
{

@ -409,7 +409,7 @@
"\n",
"For this we can use:\n",
"\n",
"- [BaseChatMessageHistory](/docs/modules/memory/chat_messages/): Store chat history.\n",
"- [BaseChatMessageHistory](https://api.python.langchain.com/en/latest/langchain_api_reference.html#module-langchain.memory): Store chat history.\n",
"- [RunnableWithMessageHistory](/docs/how_to/message_history): Wrapper for an LCEL chain and a `BaseChatMessageHistory` that handles injecting chat history into inputs and updating it after each invocation.\n",
"\n",
"For a detailed walkthrough of how to use these classes together to create a stateful conversational chain, head to the [How to add message history (memory)](/docs/how_to/message_history) LCEL page.\n",
@ -744,7 +744,7 @@
"id": "07dcb968-ed9a-458a-85e1-528cd28c6965",
"metadata": {},
"source": [
"Tools are LangChain [Runnables](/docs/expression_language/), and implement the usual interface:"
"Tools are LangChain [Runnables](/docs/concepts#langchain-expression-language), and implement the usual interface:"
]
},
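For instance, with `tool` standing in for any tool defined above (the exact input shape depends on the tool's args schema):

```python
print(tool.name, tool.description)
tool.invoke("langchain")          # synchronous call
# tool.batch([...]), tool.stream(...), and await tool.ainvoke(...) also work
```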
{
@ -1048,7 +1048,7 @@
"- We used chains to build a predictable application that generates search queries for each user input;\n",
"- We used agents to build an application that \"decides\" when and how to generate search queries.\n",
"\n",
"To explore different types of retrievers and retrieval strategies, visit the [retrievers](/docs/0.2.x/how_to/#retrievers) section of the how-to guides.\n",
"To explore different types of retrievers and retrieval strategies, visit the [retrievers](/docs/how_to/#retrievers) section of the how-to guides.\n",
"\n",
"For a detailed walkthrough of LangChain's conversation memory abstractions, visit the [How to add message history (memory)](/docs/how_to/message_history) LCEL page.\n",
"\n",

@ -78,7 +78,7 @@
"```\n",
"\n",
"\n",
"For more details, see our [Installation guide](/docs/get_started/installation).\n",
"For more details, see our [Installation guide](/docs/installation).\n",
"\n",
"### LangSmith\n",
"\n",

@ -103,6 +103,9 @@ const config = {
// eslint-disable-next-line no-param-reassign
subItem.label = subItem.label.replace(/\//g, "/\u200B");
}
if (args.item.className) {
subItem.className = args.item.className;
}
});
return sidebarItems;
},

@ -29,14 +29,26 @@ module.exports = {
items: ["installation"],
},
{
type: "doc",
id: "tutorials/index",
type: "category",
link: {type: 'doc', id: 'tutorials/index'},
label: "Tutorials",
collapsible: false,
items: [{
type: 'autogenerated',
dirName: 'tutorials',
className: 'hidden',
}],
},
{
type: "doc",
id: "how_to/index",
type: "category",
link: {type: 'doc', id: 'how_to/index'},
label: "How-To Guides",
collapsible: false,
items: [{
type: 'autogenerated',
dirName: 'how_to',
className: 'hidden',
}],
},
"concepts",
{

@ -1,7 +1,7 @@
import logging
import os
import time
from typing import Dict, Iterator, Optional, Tuple
from typing import Any, Dict, Iterator, Literal, Optional, Tuple, Union
from langchain_core.documents import Document
@ -31,12 +31,32 @@ class OpenAIWhisperParser(BaseBlobParser):
*,
chunk_duration_threshold: float = 0.1,
base_url: Optional[str] = None,
language: Union[str, None] = None,
prompt: Union[str, None] = None,
response_format: Union[
Literal["json", "text", "srt", "verbose_json", "vtt"], None
] = None,
temperature: Union[float, None] = None,
):
self.api_key = api_key
self.chunk_duration_threshold = chunk_duration_threshold
self.base_url = (
base_url if base_url is not None else os.environ.get("OPENAI_API_BASE")
)
self.language = language
self.prompt = prompt
self.response_format = response_format
self.temperature = temperature
@property
def _create_params(self) -> Dict[str, Any]:
params = {
"language": self.language,
"prompt": self.prompt,
"response_format": self.response_format,
"temperature": self.temperature,
}
return {k: v for k, v in params.items() if v is not None}
def lazy_parse(self, blob: Blob) -> Iterator[Document]:
"""Lazily parse the blob."""
@ -95,7 +115,7 @@ class OpenAIWhisperParser(BaseBlobParser):
try:
if is_openai_v1():
transcript = client.audio.transcriptions.create(
model="whisper-1", file=file_obj
model="whisper-1", file=file_obj, **self._create_params
)
else:
transcript = openai.Audio.transcribe("whisper-1", file_obj)
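An illustrative construction using the options this diff introduces (the API key is a placeholder):

```python
from langchain_community.document_loaders.parsers.audio import OpenAIWhisperParser

parser = OpenAIWhisperParser(
    api_key="sk-...",        # placeholder
    language="en",
    response_format="text",
    temperature=0.0,
)
# unset options are dropped by _create_params, so the request stays minimal
```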

@ -115,13 +115,16 @@ class AmazonKnowledgeBasesRetriever(BaseRetriever):
results = response["retrievalResults"]
documents = []
for result in results:
content = result["content"]["text"]
result.pop("content")
if "score" not in result:
result["score"] = 0
if "metadata" in result:
result["source_metadata"] = result.pop("metadata")
documents.append(
Document(
page_content=result["content"]["text"],
metadata={
"location": result["location"],
"score": result["score"] if "score" in result else 0,
},
page_content=content,
metadata=result,
)
)
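Judging from the test fixtures further down, usage after this change might look like (the knowledge base ID is illustrative):

```python
from langchain_community.retrievers import AmazonKnowledgeBasesRetriever

retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id="EXAMPLEKBID",
    retrieval_config={"vectorSearchConfiguration": {"numberOfResults": 4}},
)
docs = retriever.invoke("What did the president say?")
# each Document now carries score, location, and source_metadata in .metadata
```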

@ -1,4 +1,4 @@
# This file is automatically @generated by Poetry 1.6.1 and should not be changed by hand.
# This file is automatically @generated by Poetry 1.7.1 and should not be changed by hand.
[[package]]
name = "aenum"
@ -3454,7 +3454,6 @@ files = [
{file = "jq-1.6.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:227b178b22a7f91ae88525810441791b1ca1fc71c86f03190911793be15cec3d"},
{file = "jq-1.6.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:780eb6383fbae12afa819ef676fc93e1548ae4b076c004a393af26a04b460742"},
{file = "jq-1.6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:08ded6467f4ef89fec35b2bf310f210f8cd13fbd9d80e521500889edf8d22441"},
{file = "jq-1.6.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:49e44ed677713f4115bd5bf2dbae23baa4cd503be350e12a1c1f506b0687848f"},
{file = "jq-1.6.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:984f33862af285ad3e41e23179ac4795f1701822473e1a26bf87ff023e5a89ea"},
{file = "jq-1.6.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f42264fafc6166efb5611b5d4cb01058887d050a6c19334f6a3f8a13bb369df5"},
{file = "jq-1.6.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a67154f150aaf76cc1294032ed588436eb002097dd4fd1e283824bf753a05080"},
@ -6107,6 +6106,8 @@ files = [
{file = "psycopg2-2.9.9-cp310-cp310-win_amd64.whl", hash = "sha256:426f9f29bde126913a20a96ff8ce7d73fd8a216cfb323b1f04da402d452853c3"},
{file = "psycopg2-2.9.9-cp311-cp311-win32.whl", hash = "sha256:ade01303ccf7ae12c356a5e10911c9e1c51136003a9a1d92f7aa9d010fb98372"},
{file = "psycopg2-2.9.9-cp311-cp311-win_amd64.whl", hash = "sha256:121081ea2e76729acfb0673ff33755e8703d45e926e416cb59bae3a86c6a4981"},
{file = "psycopg2-2.9.9-cp312-cp312-win32.whl", hash = "sha256:d735786acc7dd25815e89cc4ad529a43af779db2e25aa7c626de864127e5a024"},
{file = "psycopg2-2.9.9-cp312-cp312-win_amd64.whl", hash = "sha256:a7653d00b732afb6fc597e29c50ad28087dcb4fbfb28e86092277a559ae4e693"},
{file = "psycopg2-2.9.9-cp37-cp37m-win32.whl", hash = "sha256:5e0d98cade4f0e0304d7d6f25bbfbc5bd186e07b38eac65379309c4ca3193efa"},
{file = "psycopg2-2.9.9-cp37-cp37m-win_amd64.whl", hash = "sha256:7e2dacf8b009a1c1e843b5213a87f7c544b2b042476ed7755be813eaf4e8347a"},
{file = "psycopg2-2.9.9-cp38-cp38-win32.whl", hash = "sha256:ff432630e510709564c01dafdbe996cb552e0b9f3f065eb89bdce5bd31fabf4c"},
@ -6149,6 +6150,7 @@ files = [
{file = "psycopg2_binary-2.9.9-cp311-cp311-win32.whl", hash = "sha256:dc4926288b2a3e9fd7b50dc6a1909a13bbdadfc67d93f3374d984e56f885579d"},
{file = "psycopg2_binary-2.9.9-cp311-cp311-win_amd64.whl", hash = "sha256:b76bedd166805480ab069612119ea636f5ab8f8771e640ae103e05a4aae3e417"},
{file = "psycopg2_binary-2.9.9-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:8532fd6e6e2dc57bcb3bc90b079c60de896d2128c5d9d6f24a63875a95a088cf"},
{file = "psycopg2_binary-2.9.9-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:b0605eaed3eb239e87df0d5e3c6489daae3f7388d455d0c0b4df899519c6a38d"},
{file = "psycopg2_binary-2.9.9-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8f8544b092a29a6ddd72f3556a9fcf249ec412e10ad28be6a0c0d948924f2212"},
{file = "psycopg2_binary-2.9.9-cp312-cp312-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2d423c8d8a3c82d08fe8af900ad5b613ce3632a1249fd6a223941d0735fce493"},
{file = "psycopg2_binary-2.9.9-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2e5afae772c00980525f6d6ecf7cbca55676296b580c0e6abb407f15f3706996"},
@ -6157,6 +6159,8 @@ files = [
{file = "psycopg2_binary-2.9.9-cp312-cp312-musllinux_1_1_i686.whl", hash = "sha256:cb16c65dcb648d0a43a2521f2f0a2300f40639f6f8c1ecbc662141e4e3e1ee07"},
{file = "psycopg2_binary-2.9.9-cp312-cp312-musllinux_1_1_ppc64le.whl", hash = "sha256:911dda9c487075abd54e644ccdf5e5c16773470a6a5d3826fda76699410066fb"},
{file = "psycopg2_binary-2.9.9-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:57fede879f08d23c85140a360c6a77709113efd1c993923c59fde17aa27599fe"},
{file = "psycopg2_binary-2.9.9-cp312-cp312-win32.whl", hash = "sha256:64cf30263844fa208851ebb13b0732ce674d8ec6a0c86a4e160495d299ba3c93"},
{file = "psycopg2_binary-2.9.9-cp312-cp312-win_amd64.whl", hash = "sha256:81ff62668af011f9a48787564ab7eded4e9fb17a4a6a74af5ffa6a457400d2ab"},
{file = "psycopg2_binary-2.9.9-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:2293b001e319ab0d869d660a704942c9e2cce19745262a8aba2115ef41a0a42a"},
{file = "psycopg2_binary-2.9.9-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:03ef7df18daf2c4c07e2695e8cfd5ee7f748a1d54d802330985a78d2a5a6dca9"},
{file = "psycopg2_binary-2.9.9-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0a602ea5aff39bb9fac6308e9c9d82b9a35c2bf288e184a816002c9fae930b77"},
@ -6689,31 +6693,26 @@ python-versions = ">=3.8"
files = [
{file = "PyMuPDF-1.23.26-cp310-none-macosx_10_9_x86_64.whl", hash = "sha256:645a05321aecc8c45739f71f0eb574ce33138d19189582ffa5241fea3a8e2549"},
{file = "PyMuPDF-1.23.26-cp310-none-macosx_11_0_arm64.whl", hash = "sha256:2dfc9e010669ae92fade6fb72aaea49ebe3b8dcd7ee4dcbbe50115abcaa4d3fe"},
{file = "PyMuPDF-1.23.26-cp310-none-manylinux2014_aarch64.whl", hash = "sha256:734ee380b3abd038602be79114194a3cb74ac102b7c943bcb333104575922c50"},
{file = "PyMuPDF-1.23.26-cp310-none-manylinux2014_x86_64.whl", hash = "sha256:b22f8d854f8196ad5b20308c1cebad3d5189ed9f0988acbafa043947ea7e6c55"},
{file = "PyMuPDF-1.23.26-cp310-none-win32.whl", hash = "sha256:cc0f794e3466bc96b5bf79d42fbc1551428751e3fef38ebc10ac70396b676144"},
{file = "PyMuPDF-1.23.26-cp310-none-win_amd64.whl", hash = "sha256:2eb701247d8e685a24e45899d1175f01a3ce5fc792a4431c91fbb68633b29298"},
{file = "PyMuPDF-1.23.26-cp311-none-macosx_10_9_x86_64.whl", hash = "sha256:e2804a64bb57da414781e312fb0561f6be67658ad57ed4a73dce008b23fc70a6"},
{file = "PyMuPDF-1.23.26-cp311-none-macosx_11_0_arm64.whl", hash = "sha256:97b40bb22e3056874634617a90e0ed24a5172cf71791b9e25d1d91c6743bc567"},
{file = "PyMuPDF-1.23.26-cp311-none-manylinux2014_aarch64.whl", hash = "sha256:fab8833559bc47ab26ce736f915b8fc1dd37c108049b90396f7cd5e1004d7593"},
{file = "PyMuPDF-1.23.26-cp311-none-manylinux2014_x86_64.whl", hash = "sha256:f25aafd3e7fb9d7761a22acf2b67d704f04cc36d4dc33a3773f0eb3f4ec3606f"},
{file = "PyMuPDF-1.23.26-cp311-none-win32.whl", hash = "sha256:05e672ed3e82caca7ef02a88ace30130b1dd392a1190f03b2b58ffe7aa331400"},
{file = "PyMuPDF-1.23.26-cp311-none-win_amd64.whl", hash = "sha256:92b3c4dd4d0491d495f333be2d41f4e1c155a409bc9d04b5ff29655dccbf4655"},
{file = "PyMuPDF-1.23.26-cp312-none-macosx_10_9_x86_64.whl", hash = "sha256:a217689ede18cc6991b4e6a78afee8a440b3075d53b9dec4ba5ef7487d4547e9"},
{file = "PyMuPDF-1.23.26-cp312-none-macosx_11_0_arm64.whl", hash = "sha256:42ad2b819b90ce1947e11b90ec5085889df0a2e3aa0207bc97ecacfc6157cabc"},
{file = "PyMuPDF-1.23.26-cp312-none-manylinux2014_aarch64.whl", hash = "sha256:99607649f89a02bba7d8ebe96e2410664316adc95e9337f7dfeff6a154f93049"},
{file = "PyMuPDF-1.23.26-cp312-none-manylinux2014_x86_64.whl", hash = "sha256:bb42d4b8407b4de7cb58c28f01449f16f32a6daed88afb41108f1aeb3552bdd4"},
{file = "PyMuPDF-1.23.26-cp312-none-win32.whl", hash = "sha256:c40d044411615e6f0baa7d3d933b3032cf97e168c7fa77d1be8a46008c109aee"},
{file = "PyMuPDF-1.23.26-cp312-none-win_amd64.whl", hash = "sha256:3f876533aa7f9a94bcd9a0225ce72571b7808260903fec1d95c120bc842fb52d"},
{file = "PyMuPDF-1.23.26-cp38-none-macosx_10_9_x86_64.whl", hash = "sha256:52df831d46beb9ff494f5fba3e5d069af6d81f49abf6b6e799ee01f4f8fa6799"},
{file = "PyMuPDF-1.23.26-cp38-none-macosx_11_0_arm64.whl", hash = "sha256:0bbb0cf6593e53524f3fc26fb5e6ead17c02c64791caec7c4afe61b677dedf80"},
{file = "PyMuPDF-1.23.26-cp38-none-manylinux2014_aarch64.whl", hash = "sha256:5ef4360f20015673c20cf59b7e19afc97168795188c584254ed3778cde43ce77"},
{file = "PyMuPDF-1.23.26-cp38-none-manylinux2014_x86_64.whl", hash = "sha256:d7cd88842b2e7f4c71eef4d87c98c35646b80b60e6375392d7ce40e519261f59"},
{file = "PyMuPDF-1.23.26-cp38-none-win32.whl", hash = "sha256:6577e2f473625e2d0df5f5a3bf1e4519e94ae749733cc9937994d1b256687bfa"},
{file = "PyMuPDF-1.23.26-cp38-none-win_amd64.whl", hash = "sha256:fbe1a3255b2cd0d769b2da2c4efdd0c0f30d4961a1aac02c0f75cf951b337aa4"},
{file = "PyMuPDF-1.23.26-cp39-none-macosx_10_9_x86_64.whl", hash = "sha256:73fce034f2afea886a59ead2d0caedf27e2b2a8558b5da16d0286882e0b1eb82"},
{file = "PyMuPDF-1.23.26-cp39-none-macosx_11_0_arm64.whl", hash = "sha256:b3de8618b7cb5b36db611083840b3bcf09b11a893e2d8262f4e042102c7e65de"},
{file = "PyMuPDF-1.23.26-cp39-none-manylinux2014_aarch64.whl", hash = "sha256:879e7f5ad35709d8760ab6103c3d5dac8ab8043a856ab3653fd324af7358ee87"},
{file = "PyMuPDF-1.23.26-cp39-none-manylinux2014_x86_64.whl", hash = "sha256:deee96c2fd415ded7b5070d8d5b2c60679aee6ed0e28ac0d2cb998060d835c2c"},
{file = "PyMuPDF-1.23.26-cp39-none-win32.whl", hash = "sha256:9f7f4ef99dd8ac97fb0b852efa3dcbee515798078b6c79a6a13c7b1e7c5d41a4"},
{file = "PyMuPDF-1.23.26-cp39-none-win_amd64.whl", hash = "sha256:ba9a54552c7afb9ec85432c765e2fa9a81413acfaa7d70db7c9b528297749e5b"},
@ -7154,6 +7153,7 @@ files = [
{file = "PyYAML-6.0.1-cp311-cp311-win_amd64.whl", hash = "sha256:bf07ee2fef7014951eeb99f56f39c9bb4af143d8aa3c21b1677805985307da34"},
{file = "PyYAML-6.0.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:855fb52b0dc35af121542a76b9a84f8d1cd886ea97c84703eaa6d88e37a2ad28"},
{file = "PyYAML-6.0.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:40df9b996c2b73138957fe23a16a4f0ba614f4c0efce1e9406a184b6d07fa3a9"},
{file = "PyYAML-6.0.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a08c6f0fe150303c1c6b71ebcd7213c2858041a7e01975da3a99aed1e7a378ef"},
{file = "PyYAML-6.0.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6c22bec3fbe2524cde73d7ada88f6566758a8f7227bfbf93a408a9d86bcc12a0"},
{file = "PyYAML-6.0.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8d4e9c88387b0f5c7d5f281e55304de64cf7f9c0021a3525bd3b1c542da3b0e4"},
{file = "PyYAML-6.0.1-cp312-cp312-win32.whl", hash = "sha256:d483d2cdf104e7c9fa60c544d92981f12ad66a457afae824d146093b8c294c54"},
@ -10086,4 +10086,4 @@ extended-testing = ["aiosqlite", "aleph-alpha-client", "anthropic", "arxiv", "as
[metadata]
lock-version = "2.0"
python-versions = ">=3.8.1,<4.0"
content-hash = "9ad37aae2905701ec099c1f9cdec59692de43e8d047ceb2ce25898b4c873b190"
content-hash = "5ccaf74b84c7ea44bb6a726e8fad100ec0b149394f05f28ede9a81208d12fca0"

@ -1,6 +1,6 @@
[tool.poetry]
name = "langchain-community"
version = "0.0.38rc1"
version = "0.2.0rc1"
description = "Community contributed LangChain integrations."
authors = []
license = "MIT"
@ -10,7 +10,7 @@ repository = "https://github.com/langchain-ai/langchain"
[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"
langchain-core = "^0.1.51"
langchain = "~0.2.0rc1"
langchain = "^0.2.0rc1"
SQLAlchemy = ">=1.4,<3"
requests = "^2"
PyYAML = ">=5.3"

@ -0,0 +1,68 @@
from typing import List
from unittest.mock import MagicMock
import pytest
from langchain_core.documents import Document
from langchain_community.retrievers import AmazonKnowledgeBasesRetriever
@pytest.fixture
def mock_client() -> MagicMock:
return MagicMock()
@pytest.fixture
def mock_retriever_config() -> dict:
return {"vectorSearchConfiguration": {"numberOfResults": 4}}
@pytest.fixture
def amazon_retriever(
mock_client: MagicMock, mock_retriever_config: dict
) -> AmazonKnowledgeBasesRetriever:
return AmazonKnowledgeBasesRetriever(
knowledge_base_id="test_kb_id",
retrieval_config=mock_retriever_config,
client=mock_client,
)
def test_create_client(amazon_retriever: AmazonKnowledgeBasesRetriever) -> None:
with pytest.raises(ImportError):
amazon_retriever.create_client({})
def test_get_relevant_documents(
amazon_retriever: AmazonKnowledgeBasesRetriever, mock_client: MagicMock
) -> None:
query: str = "test query"
mock_client.retrieve.return_value = {
"retrievalResults": [
{"content": {"text": "result1"}, "metadata": {"key": "value1"}},
{
"content": {"text": "result2"},
"metadata": {"key": "value2"},
"score": 1,
"location": "testLocation",
},
{"content": {"text": "result3"}},
]
}
documents: List[Document] = amazon_retriever._get_relevant_documents(
query,
run_manager=None, # type: ignore
)
assert len(documents) == 3
assert isinstance(documents[0], Document)
assert documents[0].page_content == "result1"
assert documents[0].metadata == {"score": 0, "source_metadata": {"key": "value1"}}
assert documents[1].page_content == "result2"
assert documents[1].metadata == {
"score": 1,
"source_metadata": {"key": "value2"},
"location": "testLocation",
}
assert documents[2].page_content == "result3"
assert documents[2].metadata == {"score": 0}

@ -2835,8 +2835,8 @@ files = [
[package.dependencies]
numpy = [
{version = ">=1.20.3", markers = "python_version < \"3.10\""},
{version = ">=1.21.0", markers = "python_version >= \"3.10\" and python_version < \"3.11\""},
{version = ">=1.23.2", markers = "python_version >= \"3.11\""},
{version = ">=1.21.0", markers = "python_version >= \"3.10\" and python_version < \"3.11\""},
]
python-dateutil = ">=2.8.2"
pytz = ">=2020.1"
@ -5556,4 +5556,4 @@ extended-testing = ["faker", "jinja2", "pandas", "presidio-analyzer", "presidio-
[metadata]
lock-version = "2.0"
python-versions = ">=3.8.1,<4.0"
content-hash = "8094eaac737aa33963b2c9c73e61bc0b366def7556f0ab081d8055fa96b342a5"
content-hash = "bf477c36b49f96245b3161e5ea1f9a291a05093a86a63957b0d474d80735d1da"

@ -1,6 +1,6 @@
[tool.poetry]
name = "langchain-experimental"
version = "0.0.58"
version = "0.2.0rc1"
description = "Building applications with LLMs through composability"
authors = []
license = "MIT"
@ -11,7 +11,7 @@ repository = "https://github.com/langchain-ai/langchain"
[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"
langchain-core = "^0.1.52"
langchain-community = "^0.0.38rc1"
langchain-community = ">=0.0.38rc1,<0.3"
presidio-anonymizer = {version = "^2.2.352", optional = true}
presidio-analyzer = {version = "^2.2.352", optional = true}
faker = {version = "^19.3.1", optional = true}

@ -1,4 +1,5 @@
"""Chain that takes in an input and produces an action and action input."""
from __future__ import annotations
import asyncio
@ -346,11 +347,11 @@ class RunnableAgent(BaseSingleActionAgent):
input_keys_arg: List[str] = []
return_keys_arg: List[str] = []
stream_runnable: bool = True
"""Whether to stream from the runnable or not.
"""Whether to stream from the runnable or not.
If True then underlying LLM is invoked in a streaming fashion to make it possible
to get access to the individual LLM tokens when using stream_log with the Agent
Executor. If False then LLM is invoked in a non-streaming fashion and
If True then underlying LLM is invoked in a streaming fashion to make it possible
to get access to the individual LLM tokens when using stream_log with the Agent
Executor. If False then LLM is invoked in a non-streaming fashion and
individual LLM tokens will not be available in stream_log.
"""
@ -455,11 +456,11 @@ class RunnableMultiActionAgent(BaseMultiActionAgent):
input_keys_arg: List[str] = []
return_keys_arg: List[str] = []
stream_runnable: bool = True
"""Whether to stream from the runnable or not.
If True then underlying LLM is invoked in a streaming fashion to make it possible
to get access to the individual LLM tokens when using stream_log with the Agent
Executor. If False then LLM is invoked in a non-streaming fashion and
"""Whether to stream from the runnable or not.
If True then underlying LLM is invoked in a streaming fashion to make it possible
to get access to the individual LLM tokens when using stream_log with the Agent
Executor. If False then LLM is invoked in a non-streaming fashion and
individual LLM tokens will not be available in stream_log.
"""
@ -926,7 +927,7 @@ class AgentExecutor(Chain):
max_iterations: Optional[int] = 15
"""The maximum number of steps to take before ending the execution
loop.
Setting to 'None' could lead to an infinite loop."""
max_execution_time: Optional[float] = None
"""The maximum amount of wall clock time to spend in the execution
@ -938,7 +939,7 @@ class AgentExecutor(Chain):
`"force"` returns a string saying that it stopped because it met a
time or iteration limit.
`"generate"` calls the agent's LLM Chain one final time to generate
a final answer based on the previous steps.
"""
@ -1565,6 +1566,7 @@ class AgentExecutor(Chain):
tags=config.get("tags"),
metadata=config.get("metadata"),
run_name=config.get("run_name"),
run_id=config.get("run_id"),
yield_actions=True,
**kwargs,
)
@ -1586,6 +1588,7 @@ class AgentExecutor(Chain):
tags=config.get("tags"),
metadata=config.get("metadata"),
run_name=config.get("run_name"),
run_id=config.get("run_id"),
yield_actions=True,
**kwargs,
)

@ -14,6 +14,7 @@ from typing import (
Tuple,
Union,
)
from uuid import UUID
from langchain_core.agents import (
AgentAction,
@ -54,6 +55,7 @@ class AgentExecutorIterator:
tags: Optional[list[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
run_name: Optional[str] = None,
run_id: Optional[UUID] = None,
include_run_info: bool = False,
yield_actions: bool = False,
):
@ -67,6 +69,7 @@ class AgentExecutorIterator:
self.tags = tags
self.metadata = metadata
self.run_name = run_name
self.run_id = run_id
self.include_run_info = include_run_info
self.yield_actions = yield_actions
self.reset()
@ -76,6 +79,7 @@ class AgentExecutorIterator:
tags: Optional[list[str]]
metadata: Optional[Dict[str, Any]]
run_name: Optional[str]
run_id: Optional[UUID]
include_run_info: bool
yield_actions: bool
@ -162,6 +166,7 @@ class AgentExecutorIterator:
run_manager = callback_manager.on_chain_start(
dumpd(self.agent_executor),
self.inputs,
self.run_id,
name=self.run_name,
)
try:
@ -227,6 +232,7 @@ class AgentExecutorIterator:
run_manager = await callback_manager.on_chain_start(
dumpd(self.agent_executor),
self.inputs,
self.run_id,
name=self.run_name,
)
try:

@ -1,4 +1,5 @@
"""Base interface that all chains should implement."""
import inspect
import json
import logging
@ -127,6 +128,7 @@ class Chain(RunnableSerializable[Dict[str, Any], Dict[str, Any]], ABC):
tags = config.get("tags")
metadata = config.get("metadata")
run_name = config.get("run_name") or self.get_name()
run_id = config.get("run_id")
include_run_info = kwargs.get("include_run_info", False)
return_only_outputs = kwargs.get("return_only_outputs", False)
@ -145,6 +147,7 @@ class Chain(RunnableSerializable[Dict[str, Any], Dict[str, Any]], ABC):
run_manager = callback_manager.on_chain_start(
dumpd(self),
inputs,
run_id,
name=run_name,
)
try:
@ -178,6 +181,7 @@ class Chain(RunnableSerializable[Dict[str, Any], Dict[str, Any]], ABC):
tags = config.get("tags")
metadata = config.get("metadata")
run_name = config.get("run_name") or self.get_name()
run_id = config.get("run_id")
include_run_info = kwargs.get("include_run_info", False)
return_only_outputs = kwargs.get("return_only_outputs", False)
@ -195,6 +199,7 @@ class Chain(RunnableSerializable[Dict[str, Any], Dict[str, Any]], ABC):
run_manager = await callback_manager.on_chain_start(
dumpd(self),
inputs,
run_id,
name=run_name,
)
try:
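Together with the `AgentExecutorIterator` change above, this lets callers pin a run's id through the config dict; a sketch, with `chain` standing in for any `Chain` instance:

```python
import uuid

run_id = uuid.uuid4()
# the tracer records the run under exactly this id (see the tests below)
chain.invoke({"foo": "bar"}, config={"run_id": run_id})
```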

@ -3,6 +3,7 @@ from uuid import UUID
import pytest
from langchain_core.language_models import FakeListLLM
from langchain_core.tools import Tool
from langchain_core.tracers.context import collect_runs
from langchain.agents import (
AgentExecutor,
@ -251,6 +252,28 @@ def test_agent_iterator_properties_and_setters() -> None:
assert isinstance(agent_iter.agent_executor, AgentExecutor)
def test_agent_iterator_manual_run_id() -> None:
"""Test react chain iterator with manually specified run_id."""
agent = _get_agent()
run_id = UUID("f47ac10b-58cc-4372-a567-0e02b2c3d479")
with collect_runs() as cb:
agent_iter = agent.stream("when was langchain made", {"run_id": run_id})
list(agent_iter)
run = cb.traced_runs[0]
assert run.id == run_id
async def test_manually_specify_rid_async() -> None:
agent = _get_agent()
run_id = UUID("f47ac10b-58cc-4372-a567-0e02b2c3d479")
with collect_runs() as cb:
res = agent.astream("bar", {"run_id": run_id})
async for _ in res:
pass
run = cb.traced_runs[0]
assert run.id == run_id
def test_agent_iterator_reset() -> None:
"""Test reset functionality of AgentExecutorIterator."""
agent = _get_agent()

@ -1,9 +1,12 @@
"""Test logic on base chain class."""
import uuid
from typing import Any, Dict, List, Optional
import pytest
from langchain_core.callbacks.manager import CallbackManagerForChainRun
from langchain_core.memory import BaseMemory
from langchain_core.tracers.context import collect_runs
from langchain.chains.base import Chain
from langchain.schema import RUN_KEY
@ -180,6 +183,37 @@ def test_run_with_callback_and_input_error() -> None:
assert handler.errors == 1
def test_manually_specify_rid() -> None:
chain = FakeChain()
run_id = uuid.uuid4()
with collect_runs() as cb:
chain.invoke({"foo": "bar"}, {"run_id": run_id})
run = cb.traced_runs[0]
assert run.id == run_id
run_id2 = uuid.uuid4()
with collect_runs() as cb:
list(chain.stream({"foo": "bar"}, {"run_id": run_id2}))
run = cb.traced_runs[0]
assert run.id == run_id2
async def test_manually_specify_rid_async() -> None:
chain = FakeChain()
run_id = uuid.uuid4()
with collect_runs() as cb:
await chain.ainvoke({"foo": "bar"}, {"run_id": run_id})
run = cb.traced_runs[0]
assert run.id == run_id
run_id2 = uuid.uuid4()
with collect_runs() as cb:
res = chain.astream({"foo": "bar"}, {"run_id": run_id2})
async for _ in res:
pass
run = cb.traced_runs[0]
assert run.id == run_id2
def test_run_with_callback_and_output_error() -> None:
"""Test callback manager catches run validation output error."""
handler = FakeCallbackHandler()

@ -60,7 +60,8 @@ def cosine_similarity(X: Matrix, Y: Matrix) -> np.ndarray:
Y = np.array(Y)
if X.shape[1] != Y.shape[1]:
raise ValueError(
f"Number of columns in X and Y must be the same. X has shape {X.shape} "
"Number of columns in X and Y must be the same. X has shape"
f"{X.shape} "
f"and Y has shape {Y.shape}."
)
@ -133,6 +134,7 @@ class Chroma(VectorStore):
collection_metadata: Optional[Dict] = None,
client: Optional[chromadb.ClientAPI] = None,
relevance_score_fn: Optional[Callable[[float], float]] = None,
create_collection_if_not_exists: Optional[bool] = True,
) -> None:
"""Initialize with a Chroma client."""
@ -161,11 +163,14 @@ class Chroma(VectorStore):
)
self._embedding_function = embedding_function
self._collection = self._client.get_or_create_collection(
name=collection_name,
embedding_function=None,
metadata=collection_metadata,
)
if create_collection_if_not_exists:
self._collection = self._client.get_or_create_collection(
name=collection_name,
embedding_function=None,
metadata=collection_metadata,
)
else:
self._collection = self._client.get_collection(name=collection_name)
self.override_relevance_score_fn = relevance_score_fn
@property
@ -650,7 +655,8 @@ class Chroma(VectorStore):
"""
return self.update_documents([document_id], [document])
def update_documents(self, ids: List[str], documents: List[Document]) -> None: # type: ignore
def update_documents(self, ids: List[str], documents: List[Document]) -> None:  # type: ignore
"""Update a document in the collection.
Args:

@ -1,10 +1,12 @@
"""Test Chroma functionality."""
import uuid
from typing import Generator
import chromadb
import pytest
import requests
from chromadb.api.client import SharedSystemClient
from langchain_core.documents import Document
from langchain_core.embeddings.fake import FakeEmbeddings as Fak
@ -15,6 +17,13 @@ from tests.integration_tests.fake_embeddings import (
)
@pytest.fixture()
def client() -> Generator[chromadb.ClientAPI, None, None]:
SharedSystemClient.clear_system_cache()
client = chromadb.Client(chromadb.config.Settings())
yield client
def test_chroma() -> None:
"""Test end to end construction and search."""
texts = ["foo", "bar", "baz"]
@ -271,10 +280,7 @@ def test_chroma_with_relevance_score_custom_normalization_fn() -> None:
]
def test_init_from_client() -> None:
import chromadb
client = chromadb.Client(chromadb.config.Settings())
def test_init_from_client(client: chromadb.ClientAPI) -> None:
Chroma(client=client)
@ -414,3 +420,72 @@ def test_chroma_legacy_batching() -> None:
)
db.delete_collection()
def test_create_collection_if_not_exist_default() -> None:
"""Tests existing behaviour without the new create_collection_if_not_exists flag."""
texts = ["foo", "bar", "baz"]
docsearch = Chroma.from_texts(
collection_name="test_collection", texts=texts, embedding=FakeEmbeddings()
)
assert docsearch._client.get_collection("test_collection") is not None
docsearch.delete_collection()
def test_create_collection_if_not_exist_true_existing(
client: chromadb.ClientAPI,
) -> None:
"""Tests create_collection_if_not_exists=True and collection already existing."""
client.create_collection("test_collection")
vectorstore = Chroma(
client=client,
collection_name="test_collection",
embedding_function=FakeEmbeddings(),
create_collection_if_not_exists=True,
)
assert vectorstore._client.get_collection("test_collection") is not None
vectorstore.delete_collection()
def test_create_collection_if_not_exist_false_existing(
client: chromadb.ClientAPI,
) -> None:
"""Tests create_collection_if_not_exists=False and collection already existing."""
client.create_collection("test_collection")
vectorstore = Chroma(
client=client,
collection_name="test_collection",
embedding_function=FakeEmbeddings(),
create_collection_if_not_exists=False,
)
assert vectorstore._client.get_collection("test_collection") is not None
vectorstore.delete_collection()
def test_create_collection_if_not_exist_false_non_existing(
client: chromadb.ClientAPI,
) -> None:
"""Tests create_collection_if_not_exists=False and collection not-existing,
should raise."""
with pytest.raises(Exception, match="does not exist"):
Chroma(
client=client,
collection_name="test_collection",
embedding_function=FakeEmbeddings(),
create_collection_if_not_exists=False,
)
def test_create_collection_if_not_exist_true_non_existing(
client: chromadb.ClientAPI,
) -> None:
"""Tests create_collection_if_not_exists=True and collection non-existing. ."""
vectorstore = Chroma(
client=client,
collection_name="test_collection",
embedding_function=FakeEmbeddings(),
create_collection_if_not_exists=True,
)
assert vectorstore._client.get_collection("test_collection") is not None
vectorstore.delete_collection()
