docs: lcel how to and cheatsheet (#21851)

pull/21859/head
Bagatur 2 weeks ago committed by GitHub
parent c3caec5aaf
commit 8b3c5f93f5

@@ -69,9 +69,9 @@ md-sync:
generate-references:
$(PYTHON) scripts/generate_api_reference_links.py --docs_dir $(OUTPUT_NEW_DOCS_DIR)
build: install-py-deps generate-files copy-infra render md-sync generate-references
build: install-py-deps generate-files copy-infra render md-sync
vercel-build: install-vercel-deps build
vercel-build: install-vercel-deps build generate-references
rm -rf docs
mv $(OUTPUT_NEW_DOCS_DIR) docs
rm -rf build

@@ -16,7 +16,7 @@
"id": "711752cb-4f15-42a3-9838-a0c67f397771",
"metadata": {},
"source": [
"# How to attach runtime arguments to a Runnable\n",
"# How to add default invocation args to a Runnable\n",
"\n",
":::info Prerequisites\n",
"\n",

@@ -0,0 +1,200 @@
{
"cells": [
{
"cell_type": "raw",
"id": "77bf57fb-e990-45f2-8b5f-c76388b05966",
"metadata": {},
"source": [
"---\n",
"keywords: [LCEL]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "50d57bf2-7104-4570-b3e5-90fd71e1bea1",
"metadata": {},
"source": [
"# How to create a dynamic (self-constructing) chain\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following:\n",
"- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n",
"- [How to turn any function into a runnable](/docs/how_to/functions)\n",
"\n",
":::\n",
"\n",
"Sometimes we want to construct parts of a chain at runtime, depending on the chain inputs ([routing](/docs/how_to/routing/) is the most common example of this). We can create dynamic chains like this using a very useful property of RunnableLambda's, which is that if a RunnableLambda returns a Runnable, that Runnable is itself invoked. Let's see an example.\n",
"\n",
"```{=mdx}\n",
"import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
"\n",
"<ChatModelTabs\n",
" customVarName=\"llm\"\n",
"/>\n",
"```"
]
},
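{
"cell_type": "markdown",
"id": "dynamic-chain-sketch-md",
"metadata": {},
"source": [
"As a minimal sketch of that property (toy functions for illustration, not part of the main example below): the outer `RunnableLambda` returns one of two Runnables, and whichever is returned is then invoked with the same input."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dynamic-chain-sketch-code",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnableLambda\n",
"\n",
"shout = RunnableLambda(lambda x: x[\"text\"].upper())\n",
"whisper = RunnableLambda(lambda x: x[\"text\"].lower())\n",
"\n",
"# Because `pick` returns a Runnable, that Runnable is itself invoked\n",
"# with the same input when `pick` runs.\n",
"pick = RunnableLambda(lambda x: shout if x.get(\"loud\") else whisper)\n",
"\n",
"pick.invoke({\"text\": \"Hello\", \"loud\": True})  # -> 'HELLO'"
]
},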
{
"cell_type": "code",
"execution_count": 4,
"id": "406bffc2-86d0-4cb9-9262-5c1e3442397a",
"metadata": {},
"outputs": [],
"source": [
"# | echo: false\n",
"\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "0ae6692b-983e-40b8-aa2a-6c078d945b9e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"According to the context provided, Egypt's population in 2024 is estimated to be about 111 million.\""
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import Runnable, RunnablePassthrough, chain\n",
"\n",
"contextualize_instructions = \"\"\"Convert the latest user question into a standalone question given the chat history. Don't answer the question, return the question and nothing else (no descriptive text).\"\"\"\n",
"contextualize_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", contextualize_instructions),\n",
" (\"placeholder\", \"{chat_history}\"),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"contextualize_question = contextualize_prompt | llm | StrOutputParser()\n",
"\n",
"qa_instructions = (\n",
" \"\"\"Answer the user question given the following context:\\n\\n{context}.\"\"\"\n",
")\n",
"qa_prompt = ChatPromptTemplate.from_messages(\n",
" [(\"system\", qa_instructions), (\"human\", \"{question}\")]\n",
")\n",
"\n",
"\n",
"@chain\n",
"def contextualize_if_needed(input_: dict) -> Runnable:\n",
" if input_.get(\"chat_history\"):\n",
" # NOTE: This is returning another Runnable, not an actual output.\n",
" return contextualize_question\n",
" else:\n",
" return RunnablePassthrough()\n",
"\n",
"\n",
"@chain\n",
"def fake_retriever(input_: dict) -> str:\n",
" return \"egypt's population in 2024 is about 111 million\"\n",
"\n",
"\n",
"full_chain = (\n",
" RunnablePassthrough.assign(question=contextualize_if_needed).assign(\n",
" context=fake_retriever\n",
" )\n",
" | qa_prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")\n",
"\n",
"full_chain.invoke(\n",
" {\n",
" \"question\": \"what about egypt\",\n",
" \"chat_history\": [\n",
" (\"human\", \"what's the population of indonesia\"),\n",
" (\"ai\", \"about 276 million\"),\n",
" ],\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "5076ddb4-4a99-47ad-b549-8ac27ca3e2c6",
"metadata": {},
"source": [
"The key here is that `contextualize_if_needed` returns another Runnable and not an actual output. This returned Runnable is itself run when the full chain is executed.\n",
"\n",
"Looking at the trace we can see that, since we passed in chat_history, we executed the contextualize_question chain as part of the full chain: https://smith.langchain.com/public/9e0ae34c-4082-4f3f-beed-34a2a2f4c991/r"
]
},
{
"cell_type": "markdown",
"id": "4fe6ca44-a643-4859-a290-be68403f51f0",
"metadata": {},
"source": [
"Note that the streaming, batching, etc. capabilities of the returned Runnable are all preserved"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "6def37fa-5105-4090-9b07-77cb488ecd9c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"What\n",
" is\n",
" the\n",
" population\n",
" of\n",
" Egypt\n",
"?\n"
]
}
],
"source": [
"for chunk in contextualize_if_needed.stream(\n",
" {\n",
" \"question\": \"what about egypt\",\n",
" \"chat_history\": [\n",
" (\"human\", \"what's the population of indonesia\"),\n",
" (\"ai\", \"about 276 million\"),\n",
" ],\n",
" }\n",
"):\n",
" print(chunk)"
]
}
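,
{
"cell_type": "markdown",
"id": "batch-sketch-md",
"metadata": {},
"source": [
"And a quick `batch` sketch (same input as above plus one without history; output omitted since this cell is not executed here):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "batch-sketch-code",
"metadata": {},
"outputs": [],
"source": [
"# One input with chat history (routed through contextualize_question)\n",
"# and one without (passed through unchanged by RunnablePassthrough).\n",
"contextualize_if_needed.batch(\n",
"    [\n",
"        {\n",
"            \"question\": \"what about egypt\",\n",
"            \"chat_history\": [\n",
"                (\"human\", \"what's the population of indonesia\"),\n",
"                (\"ai\", \"about 276 million\"),\n",
"            ],\n",
"        },\n",
"        {\"question\": \"what's the population of india?\", \"chat_history\": []},\n",
"    ]\n",
")"
]
}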
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"language": "python",
"name": "poetry-venv-2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -1,11 +1,21 @@
{
"cells": [
{
"cell_type": "raw",
"id": "018f3868-e60d-4db6-a1c6-c6633c66b1f4",
"metadata": {},
"source": [
"---\n",
"keywords: [LCEL, fallbacks]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "19c9cbd6",
"metadata": {},
"source": [
"# Fallbacks\n",
"# How to add fallbacks to a runnable\n",
"\n",
"When working with language models, you may often encounter issues from the underlying APIs, whether these be rate limiting or downtime. Therefore, as you go to move your LLM applications into production it becomes more and more important to safeguard against these. That's why we've introduced the concept of fallbacks. \n",
"\n",
@@ -447,7 +457,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
"version": "3.9.1"
}
},
"nbformat": 4,

@@ -19,27 +19,29 @@ For comprehensive descriptions of every class and function see the [API Referenc
This highlights functionality that is core to using LangChain.
- [How to: return structured data from an LLM](/docs/how_to/structured_output/)
- [How to: use a chat model to call tools](/docs/how_to/tool_calling/)
- [How to: return structured data from a model](/docs/how_to/structured_output/)
- [How to: use a model to call tools](/docs/how_to/tool_calling/)
- [How to: stream runnables](/docs/how_to/streaming)
- [How to: debug your LLM apps](/docs/how_to/debugging/)
## LangChain Expression Language (LCEL)
LangChain Expression Language is a way to create arbitrary custom chains. It is built on the [Runnable](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html) protocol.
[LangChain Expression Language](/docs/concepts/#langchain-expression-language-lcel) is a way to create arbitrary custom chains. It is built on the [Runnable](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.Runnable.html) protocol.
[**LCEL cheatsheet**](/docs/how_to/lcel_cheatsheet/): For a quick overview of how to use the main LCEL primitives.
- [How to: chain runnables](/docs/how_to/sequence)
- [How to: stream runnables](/docs/how_to/streaming)
- [How to: invoke runnables in parallel](/docs/how_to/parallel/)
- [How to: attach runtime arguments to a runnable](/docs/how_to/binding/)
- [How to: run custom functions](/docs/how_to/functions)
- [How to: pass through arguments from one step to the next](/docs/how_to/passthrough)
- [How to: add values to a chain's state](/docs/how_to/assign)
- [How to: configure a chain at runtime](/docs/how_to/configure)
- [How to: add message history](/docs/how_to/message_history)
- [How to: route execution within a chain](/docs/how_to/routing)
- [How to: add default invocation args to runnables](/docs/how_to/binding/)
- [How to: turn any function into a runnable](/docs/how_to/functions)
- [How to: pass through inputs from one chain step to the next](/docs/how_to/passthrough)
- [How to: configure runnable behavior at runtime](/docs/how_to/configure)
- [How to: add message history (memory) to a chain](/docs/how_to/message_history)
- [How to: route between sub-chains](/docs/how_to/routing)
- [How to: create a dynamic (self-constructing) chain](/docs/how_to/dynamic_chain/)
- [How to: inspect runnables](/docs/how_to/inspect)
- [How to: add fallbacks](/docs/how_to/fallbacks)
- [How to: add fallbacks to a runnable](/docs/how_to/fallbacks)
## Components

File diff suppressed because it is too large

@@ -941,7 +941,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.9.1"
}
},
"nbformat": 4,

@@ -16,7 +16,7 @@
"id": "4b47436a",
"metadata": {},
"source": [
"# How to route execution within a chain\n",
"# How to route between sub-chains\n",
"\n",
":::info Prerequisites\n",
"\n",

@@ -30,7 +30,7 @@
"\n",
"The resulting [`RunnableSequence`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableSequence.html) is itself a runnable, which means it can be invoked, streamed, or further chained just like any other runnable. Advantages of chaining runnables in this way are efficient streaming (the sequence will stream output as soon as it is available), and debugging and tracing with tools like [LangSmith](/docs/how_to/debugging).\n",
"\n",
"## The pipe operator\n",
"## The pipe operator: `|`\n",
"\n",
"To show off how this works, let's go through an example. We'll walk through a common pattern in LangChain: using a [prompt template](/docs/how_to#prompt-templates) to format input into a [chat model](/docs/how_to#chat-models), and finally converting the chat message output into a string with an [output parser](/docs/how_to#output-parsers).\n",
"\n",
@@ -230,11 +230,28 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"You now know some ways to chain two runnables together.\n",
"Or the abbreviated:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"composed_chain_with_pipe = RunnableParallel({\"joke\": chain}).pipe(\n",
" analysis_prompt, model, StrOutputParser()\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Related\n",
"\n",
"To learn more, see the other how-to guides on runnables in this section."
"- [Streaming](/docs/how_to/streaming/): Check out the streaming guide to understand the streaming behavior of a chain\n",
"- "
]
}
],

@@ -1524,7 +1524,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.9.1"
}
},
"nbformat": 4,

@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to use a chat model to call tools\n",
"# How to use a model to call tools\n",
"\n",
":::info Prerequisites\n",
"\n",

@@ -95,7 +95,7 @@ if TYPE_CHECKING:
RunLog,
RunLogPatch,
)
from langchain_core.tracers.root_listeners import Listener
from langchain_core.tracers.schemas import Run
Other = TypeVar("Other")
@@ -1258,9 +1258,15 @@ class Runnable(Generic[Input, Output], ABC):
def with_listeners(
self,
*,
on_start: Optional[Listener] = None,
on_end: Optional[Listener] = None,
on_error: Optional[Listener] = None,
on_start: Optional[
Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]
] = None,
on_end: Optional[
Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]
] = None,
on_error: Optional[
Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]
] = None,
) -> Runnable[Input, Output]:
"""
Bind lifecycle listeners to a Runnable, returning a new Runnable.
@@ -1276,22 +1282,26 @@
Example:
.. code-block:: python
from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run
import time
def test_runnable(time_to_sleep: int):
time.sleep(time_to_sleep)
def fn_start(run_obj : Runnable):
def fn_start(run_obj: Run):
print("start_time:", run_obj.start_time)
def fn_end(run_obj : Runnable):
def fn_end(run_obj: Run):
print("end_time:", run_obj.end_time)
RunnableLambda(test_runnable).with_listeners(
chain = RunnableLambda(test_runnable).with_listeners(
on_start=fn_start,
on_end=fn_end
).invoke(2)
)
chain.invoke(2)
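
# Listeners may also accept the RunnableConfig as a second
# argument (an illustrative addition enabled by the Union
# signature above, not part of the original docstring):
from langchain_core.runnables import RunnableConfig

def fn_start_with_config(run_obj: Run, config: RunnableConfig):
    print("run tags:", config.get("tags"))

RunnableLambda(test_runnable).with_listeners(
    on_start=fn_start_with_config
).invoke(2)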
"""
from langchain_core.tracers.root_listeners import RootListenersTracer
@@ -1339,6 +1349,7 @@ class Runnable(Generic[Input, Output], ABC):
Example:
.. code-block:: python
from langchain_core.runnables import RunnableLambda
count = 0
@@ -4239,9 +4250,15 @@ class RunnableEach(RunnableEachBase[Input, Output]):
def with_listeners(
self,
*,
on_start: Optional[Listener] = None,
on_end: Optional[Listener] = None,
on_error: Optional[Listener] = None,
on_start: Optional[
Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]
] = None,
on_end: Optional[
Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]
] = None,
on_error: Optional[
Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]
] = None,
) -> RunnableEach[Input, Output]:
"""
Bind lifecycle listeners to a Runnable, returning a new Runnable.
@@ -4729,9 +4746,15 @@ class RunnableBinding(RunnableBindingBase[Input, Output]):
def with_listeners(
self,
*,
on_start: Optional[Listener] = None,
on_end: Optional[Listener] = None,
on_error: Optional[Listener] = None,
on_start: Optional[
Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]
] = None,
on_end: Optional[
Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]
] = None,
on_error: Optional[
Union[Callable[[Run], None], Callable[[Run, RunnableConfig], None]]
] = None,
) -> Runnable[Input, Output]:
"""Bind lifecycle listeners to a Runnable, returning a new Runnable.
