Start cookbook and move stuff from use cases (#11636)

pull/11675/head
Bagatur committed 8 months ago via GitHub
parent 99adcdb1c9
commit cf86447623

@@ -15,10 +15,10 @@ docs_build:
	docs/.local_build.sh
docs_clean:
	rm -r docs/_dist
	rm -r _dist
docs_linkcheck:
	poetry run linkchecker docs/_dist/docs/ --ignore-url node_modules
	poetry run linkchecker _dist/docs/ --ignore-url node_modules
api_docs_build:
	poetry run python docs/api_reference/create_api_rst.py
@@ -53,4 +53,4 @@ help:
	@echo 'api_docs_linkcheck - run linkchecker on the API Reference documentation'
	@echo 'spell_check - run codespell on the project'
	@echo 'spell_fix - run codespell on the project and fix the errors'
	@echo '-- TEST and LINT tasks are within libs/*/ per-package --'
	@echo '-- TEST and LINT tasks are within libs/*/ per-package --'

@@ -567,7 +567,7 @@
"\n",
"Given an `llm` created from one of the models above, you can use it for [many use cases](docs/use_cases).\n",
"\n",
"For example, here is a guide to [RAG](docs/use_cases/question_answering/how_to/local_retrieval_qa) with local LLMs.\n",
"For example, here is a guide to [RAG](docs/use_cases/question_answering/local_retrieval_qa) with local LLMs.\n",
"\n",
"In general, use cases for local LLMs can be driven by at least two factors:\n",
"\n",

@@ -124,7 +124,7 @@
"source": [
"## RAG\n",
"\n",
"We can use Olama with RAG, [just as shown here](https://python.langchain.com/docs/use_cases/question_answering/how_to/local_retrieval_qa).\n",
"We can use Olama with RAG, [just as shown here](https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa).\n",
"\n",
"Let's use the 13b model:\n",
"\n",

@@ -102,7 +102,7 @@
"source": [
"## RAG\n",
"\n",
"We can use Olama with RAG, [just as shown here](https://python.langchain.com/docs/use_cases/question_answering/how_to/local_retrieval_qa).\n",
"We can use Olama with RAG, [just as shown here](https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa).\n",
"\n",
"Let's use the 13b model:\n",
"\n",

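As a minimal sketch of that pattern (hedged: it assumes a local Ollama server with `ollama pull llama2:13b` already run; the tiny in-memory corpus exists only to make the example self-contained):

```python
from langchain.chains import RetrievalQA
from langchain.embeddings import GPT4AllEmbeddings
from langchain.llms import Ollama
from langchain.vectorstores import Chroma

# Local 13b model served by Ollama (assumes the server is running)
llm = Ollama(model="llama2:13b")

# Tiny stand-in corpus so the sketch runs end to end
vectorstore = Chroma.from_texts(
    ["Task decomposition can be done by LLM prompting, task-specific instructions, or human input."],
    embedding=GPT4AllEmbeddings(),
)

# Retrieve relevant chunks and let the local LLM answer from them
qa_chain = RetrievalQA.from_chain_type(llm, retriever=vectorstore.as_retriever())
qa_chain({"query": "What are the approaches to task decomposition?"})
```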
@@ -13,7 +13,7 @@ Activeloop Deep Lake supports SelfQuery Retrieval:
## More Resources
1. [Ultimate Guide to LangChain & Deep Lake: Build ChatGPT to Answer Questions on Your Financial Data](https://www.activeloop.ai/resources/ultimate-guide-to-lang-chain-deep-lake-build-chat-gpt-to-answer-questions-on-your-financial-data/)
2. [Twitter the-algorithm codebase analysis with Deep Lake](/docs/use_cases/question_answering/how_to/code/twitter-the-algorithm-analysis-deeplake)
2. [Twitter the-algorithm codebase analysis with Deep Lake](/docs/use_cases/question_answering/code/twitter-the-algorithm-analysis-deeplake)
3. [Code Understanding](/docs/modules/data_connection/retrievers/self_query/activeloop_deeplake_self_query)
4. Here are the [whitepaper](https://www.deeplake.ai/whitepaper) and [academic paper](https://arxiv.org/pdf/2209.10785.pdf) for Deep Lake
5. Here is a set of additional resources available for review: [Deep Lake](https://github.com/activeloopai/deeplake), [Get started](https://docs.activeloop.ai/getting-started) and [Tutorials](https://docs.activeloop.ai/hub-tutorials)

@@ -13,7 +13,7 @@ pip install python-arango
Connect your ArangoDB Database with a chat model to get insights on your data.
See the notebook example [here](/docs/use_cases/more/graph/graph_arangodb_qa.html).
See the notebook example [here](/docs/use_cases/graph/graph_arangodb_qa.html).
```python
from arango import ArangoClient

@@ -41,4 +41,4 @@ from langchain.graphs import Neo4jGraph
from langchain.chains import GraphCypherQAChain
```
For a more detailed walkthrough of the Cypher-generating chain, see [this notebook](/docs/use_cases/more/graph/graph_cypher_qa.html)
For a more detailed walkthrough of the Cypher-generating chain, see [this notebook](/docs/use_cases/graph/graph_cypher_qa.html)

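A minimal sketch of wiring these together (the connection details below are placeholders for a local Neo4j instance):

```python
from langchain.chat_models import ChatOpenAI
from langchain.graphs import Neo4jGraph
from langchain.chains import GraphCypherQAChain

# Placeholder credentials for a local Neo4j instance
graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")

# The chain asks the LLM to generate Cypher, runs it, and answers from the results
chain = GraphCypherQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
chain.run("How many actors played in Top Gun?")
```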
@@ -338,7 +338,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's look at an example of using Timescale Vector as a retriever with the [RetrievalQA chain](https://python.langchain.com/docs/use_cases/question_answering/how_to/vector_db_qa) and the [stuff chain](https://python.langchain.com/docs/modules/chains/document/stuff).\n",
"Let's look at an example of using Timescale Vector as a retriever with the [RetrievalQA chain](https://python.langchain.com/docs/use_cases/question_answering/vector_db_qa) and the [stuff chain](https://python.langchain.com/docs/modules/chains/document/stuff).\n",
"\n",
"In this example, we'll ask the same query as above, but this time we'll pass the relevant documents returned from Timescale Vector to an LLM to use as context to answer our question.\n",
"\n",

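A minimal sketch of that setup (hedged: `db` is assumed to be the Timescale Vector store built earlier in the notebook, and the query string is a stand-in for the one used above):

```python
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

# `db` is assumed to be the Timescale Vector store created earlier
retriever = db.as_retriever()

qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0),
    chain_type="stuff",  # stuff all retrieved documents into a single prompt
    retriever=retriever,
)
qa_chain.run("...")  # stand-in for the same query as above
```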
@@ -9,7 +9,7 @@
"\n",
"LangChain provides async support for Chains by leveraging the [asyncio](https://docs.python.org/3/library/asyncio.html) library.\n",
"\n",
"Async methods are currently supported in `LLMChain` (through `arun`, `apredict`, `acall`) and `LLMMathChain` (through `arun` and `acall`), `ChatVectorDBChain`, and [QA chains](/docs/use_cases/question_answering/how_to/question_answering.html). Async support for other chains is on the roadmap."
"Async methods are currently supported in `LLMChain` (through `arun`, `apredict`, `acall`) and `LLMMathChain` (through `arun` and `acall`), `ChatVectorDBChain`, and [QA chains](/docs/use_cases/question_answering/question_answering.html). Async support for other chains is on the roadmap."
]
},
{

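For example, here is a minimal sketch of running several `LLMChain` calls concurrently with `arun` (it assumes `OPENAI_API_KEY` is set; in a notebook you would `await` the coroutine directly instead of calling `asyncio.run`):

```python
import asyncio

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

chain = LLMChain(
    llm=OpenAI(temperature=0.9),
    prompt=PromptTemplate.from_template("What is a good name for a company that makes {product}?"),
)

async def generate_concurrently():
    # The three calls run concurrently rather than one after another
    return await asyncio.gather(*(chain.arun(product=p) for p in ["socks", "tea", "shoes"]))

print(asyncio.run(generate_concurrently()))
```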
@@ -476,7 +476,7 @@
"- [Extraction](/docs/modules/chains/additional/extraction): very similar to structured output chain, intended for information/entity extraction specifically.\n",
"- [Tagging](/docs/use_cases/tagging): tag inputs.\n",
"- [OpenAPI](/docs/use_cases/apis/openapi_openai): take an OpenAPI spec and create + execute valid requests against the API, using OpenAI functions under the hood.\n",
"- [QA with citations](/docs/use_cases/question_answering/how_to/qa_citations): use OpenAI functions ability to extract citations from text."
"- [QA with citations](/docs/use_cases/question_answering/qa_citations): use OpenAI functions ability to extract citations from text."
]
}
],

@@ -700,7 +700,7 @@
"\n",
"### Going deeper \n",
"\n",
"* Agents, such as the [conversational retrieval agent](/docs/use_cases/question_answering/how_to/conversational_retrieval_agents), can be used for retrieval when necessary while also holding a conversation.\n"
"* Agents, such as the [conversational retrieval agent](/docs/use_cases/question_answering/conversational_retrieval_agents), can be used for retrieval when necessary while also holding a conversation.\n"
]
},
{

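A minimal sketch of such an agent (hedged: `retriever` is assumed to already exist, e.g. from `vectorstore.as_retriever()`):

```python
from langchain.agents.agent_toolkits import (
    create_conversational_retrieval_agent,
    create_retriever_tool,
)
from langchain.chat_models import ChatOpenAI

# Wrap an existing retriever as a tool the agent can decide to call
tool = create_retriever_tool(
    retriever,
    "search_docs",
    "Searches and returns relevant documents.",
)

# The agent holds a conversation and only retrieves when it judges it necessary
agent_executor = create_conversational_retrieval_agent(
    ChatOpenAI(temperature=0), [tool], verbose=True
)
agent_executor({"input": "Hi, I'm Bob. What do the docs say about memory?"})
```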
@@ -1,13 +1,22 @@
{
"cells": [
{
"cell_type": "raw",
"id": "1302a608-4b4d-46bf-bd0c-b4f13eff2e5e",
"metadata": {},
"source": [
"---\n",
"sidebar-position: 1\n",
"title: Synthetic data generation\n",
"---"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "aa3571cc",
"metadata": {},
"source": [
"# Synthetic Data generation\n",
"\n",
"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/data_generation.ipynb)\n",
"\n",
"## Use case\n",
@@ -612,7 +621,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.9.1"
}
},
"nbformat": 4,

@@ -7,7 +7,7 @@
"source": [
"# Diffbot Graph Transformer\n",
"\n",
"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/more/graph/diffbot_graphtransformer.ipynb)\n",
"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/graph/diffbot_graphtransformer.ipynb)\n",
"\n",
"## Use case\n",
"\n",

@@ -5,7 +5,7 @@
"id": "a6850189",
"metadata": {},
"source": [
"# Graph QA\n",
"# NetworkX Graph QA\n",
"\n",
"This notebook goes over how to do question answering over a graph data structure."
]
@@ -296,7 +296,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.13"
"version": "3.9.1"
}
},
"nbformat": 4,

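A minimal sketch of the flow (assuming `OPENAI_API_KEY` is set): build an entity-triple graph from free text, then query it with `GraphQAChain`:

```python
from langchain.chains import GraphQAChain
from langchain.indexes import GraphIndexCreator
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# Extract entity triples from free text into a NetworkX-backed graph
index_creator = GraphIndexCreator(llm=llm)
graph = index_creator.from_text("Marie Curie won the Nobel Prize in Physics in 1903.")

# Answer questions by traversing the extracted triples
chain = GraphQAChain.from_llm(llm, graph=graph, verbose=True)
chain.run("What prize did Marie Curie win?")
```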
@@ -1,4 +1,8 @@
# Analyzing graph data
---
sidebar_position: 1
---
# Graph querying
Graph databases give us a powerful way to represent and query real-world relationships. There are a number of chains that make it easy to use LLMs to interact with various graph DBs.

@@ -1,2 +0,0 @@
label: 'More'
position: 2

@@ -1,28 +0,0 @@
---
sidebar_position: 0
---
# Agent simulations
Agent simulations involve one or more agents interacting with each other.
Agent simulations generally involve two main components:
- Long Term Memory
- Simulation Environment
Specific implementations of agent simulations (or parts of agent simulations) include:
## Simulations with One Agent
- [Simulated Environment: Gymnasium](./gymnasium.html): an example of how to create a simple agent-environment interaction loop with [Gymnasium](https://gymnasium.farama.org/) (formerly [OpenAI Gym](https://github.com/openai/gym)).
## Simulations with Two Agents
- [CAMEL](./camel_role_playing.html): an implementation of the CAMEL (Communicative Agents for “Mind” Exploration of Large Scale Language Model Society) paper, where two agents communicate with each other.
- [Two Player D&D](./two_player_dnd.html): an example of how to use a generic simulator for two agents to implement a variant of the popular Dungeons & Dragons role playing game.
- [Agent Debates with Tools](./two_agent_debate_tools.html): an example of how to enable Dialogue Agents to use tools to inform their responses.
## Simulations with Multiple Agents
- [Multi-Player D&D](./multi_player_dnd.html): an example of how to use a generic dialogue simulator for multiple dialogue agents with a custom speaker-ordering, illustrated with a variant of the popular Dungeons & Dragons role playing game.
- [Decentralized Speaker Selection](./multiagent_bidding.html): an example of how to implement a multi-agent dialogue without a fixed schedule for who speaks when. Instead, the agents decide for themselves who speaks by outputting bids to speak. This example shows how to do this in the context of a fictitious presidential debate.
- [Authoritarian Speaker Selection](./multiagent_authoritarian.html): an example of how to implement a multi-agent dialogue, where a privileged agent directs who speaks what. This example also showcases how to enable the privileged agent to determine when the conversation terminates. This example shows how to do this in the context of a fictitious news show.
- [Simulated Environment: PettingZoo](./petting_zoo.html): an example of how to create an agent-environment interaction loop for multiple agents with [PettingZoo](https://pettingzoo.farama.org/) (a multi-agent version of [Gymnasium](https://gymnasium.farama.org/)).
- [Generative Agents](./characters.html): This notebook implements a generative agent based on the paper [Generative Agents: Interactive Simulacra of Human Behavior](https://arxiv.org/abs/2304.03442) by Park et al.

@@ -1,718 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "842dd272",
"metadata": {},
"source": [
"# Agents\n",
"\n",
"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/more/agents/agents.ipynb)\n",
"\n",
"## Use case \n",
"\n",
"LLM-based agents are powerful general problem solvers.\n",
"\n",
"The [primary LLM agent components](https://lilianweng.github.io/posts/2023-06-23-agent/) include at least 3 things:\n",
"\n",
"* `Planning`: The ability to break down tasks into smaller sub-goals\n",
"* `Memory`: The ability to retain and recall information\n",
"* `Tools`: The ability to get information from external sources (e.g., APIs)\n",
"\n",
"Unlike LLMs simply connected to [APIs](/docs/use_cases/apis/apis), agents [can](https://www.youtube.com/watch?v=DWUdGhRrv2c):\n",
"\n",
"* Self-correct\n",
"* Handle multi-hop tasks (several intermediate \"hops\" or steps to arrive at a conclusion)\n",
"* Tackle long time horizon tasks (that require access to long-term memory)\n",
"\n",
"![Image description](/img/agents_use_case_1.png)\n",
"\n",
"## Overview \n",
"\n",
"LangChain has [many agent types](/docs/modules/agents/agent_types/).\n",
"\n",
"Nearly all agents will use the following components:\n",
" \n",
"**Planning**\n",
" \n",
"* `Prompt`: Can given the LLM [personality](https://arxiv.org/pdf/2304.03442.pdf), context (e.g, via retrieval from memory), or strategies for learninng (e.g., [chain-of-thought](https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/#chain-of-thought-cot)).\n",
"* `Agent` Responsible for deciding what step to take next using an LLM with the `Prompt`\n",
"\n",
"**Memory**\n",
"\n",
"* This can be short or long-term, allowing the agent to persist information.\n",
"\n",
"**Tools**\n",
"\n",
"* Tools are functions that an agent can call.\n",
"\n",
"But, there are some taxonomic differences:\n",
"\n",
"* `Action agents`: Designed to decide the sequence of actions (tool use) (e.g., OpenAI functions agents, ReAct agents).\n",
"* `Simulation agents`: Designed for role-play often in simulated enviorment (e.g., Generative Agents, CAMEL).\n",
"* `Autonomous agents`: Designed for indepdent execution towards long term goals (e.g., BabyAGI, Auto-GPT).\n",
"\n",
"This will focus on `Action agents`.\n",
"\n",
"\n",
"## Quickstart "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3a704c7a",
"metadata": {},
"outputs": [],
"source": [
"! pip install langchain openai google-search-results\n",
"\n",
"# Set env var OPENAI_API_KEY and SERPAPI_API_KEY or load from a .env file\n",
"# import dotenv\n",
"\n",
"# dotenv.load_dotenv()"
]
},
{
"cell_type": "markdown",
"id": "639d41ad",
"metadata": {},
"source": [
"`Tools`\n",
"\n",
"LangChain has [many tools](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/load_tools.py) for Agents that we can load easily.\n",
"\n",
"Let's load search and a calcultor."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c60001c9",
"metadata": {},
"outputs": [],
"source": [
"# Tool\n",
"from langchain.agents import load_tools\n",
"from langchain.chat_models import ChatOpenAI\n",
"llm = ChatOpenAI(temperature=0)\n",
"tools = load_tools([\"serpapi\", \"llm-math\"], llm=llm)"
]
},
{
"cell_type": "markdown",
"id": "431ba30b",
"metadata": {},
"source": [
"`Agent`\n",
"\n",
"The [`OPENAI_FUNCTIONS` agent](/docs/modules/agents/agent_types/openai_functions_agent) is a good action agent to start with.\n",
"\n",
"OpenAI models have been fine-tuned to recognize when function should be called."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d636395f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'As of 2023, the estimated population of Canada is approximately 39,858,480 people.'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Prompt\n",
"from langchain.agents import AgentExecutor\n",
"from langchain.schema import SystemMessage\n",
"from langchain.agents import OpenAIFunctionsAgent\n",
"system_message = SystemMessage(content=\"You are a search assistant.\")\n",
"prompt = OpenAIFunctionsAgent.create_prompt(system_message=system_message)\n",
"\n",
"# Agent\n",
"search_agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)\n",
"agent_executor = AgentExecutor(agent=search_agent, tools=tools, verbose=False)\n",
"\n",
"# Run\n",
"agent_executor.run(\"How many people live in canada as of 2023?\")"
]
},
{
"cell_type": "markdown",
"id": "27842380",
"metadata": {},
"source": [
"Great, we have created a simple search agent with a tool!\n",
"\n",
"Note that we use an agent executor, which is the runtime for an agent. \n",
"\n",
"This is what calls the agent and executes the actions it chooses. \n",
"\n",
"Pseudocode for this runtime is below:\n",
"```\n",
"next_action = agent.get_action(...)\n",
"while next_action != AgentFinish:\n",
" observation = run(next_action)\n",
" next_action = agent.get_action(..., next_action, observation)\n",
"return next_action\n",
"```\n",
"\n",
"While this may seem simple, there are several complexities this runtime handles for you, including:\n",
"\n",
"* Handling cases where the agent selects a non-existent tool\n",
"* Handling cases where the tool errors\n",
"* Handling cases where the agent produces output that cannot be parsed into a tool invocation\n",
"* Logging and observability at all levels (agent decisions, tool calls) either to stdout or LangSmith.\n"
]
},
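{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a minimal sketch (a hypothetical helper, not LangChain's actual `AgentExecutor` implementation), the pseudocode above can be written out roughly like this, assuming an agent that exposes the `plan` interface used later in this notebook:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import AgentFinish\n",
"\n",
"def run_agent_loop(agent, tools, inputs, max_iterations=10):\n",
"    # Hypothetical sketch of the plan -> act -> observe loop,\n",
"    # e.g. run_agent_loop(search_agent, tools, {\"input\": \"How many people live in Canada?\"})\n",
"    intermediate_steps = []\n",
"    name_to_tool = {t.name: t for t in tools}\n",
"    for _ in range(max_iterations):\n",
"        next_step = agent.plan(intermediate_steps, **inputs)\n",
"        if isinstance(next_step, AgentFinish):\n",
"            return next_step.return_values\n",
"        tool = name_to_tool.get(next_step.tool)\n",
"        # Cover the non-existent tool case mentioned above\n",
"        if tool is None:\n",
"            observation = f\"{next_step.tool} is not a valid tool\"\n",
"        else:\n",
"            observation = tool.run(next_step.tool_input)\n",
"        intermediate_steps.append((next_step, observation))\n",
"    return {\"output\": \"Agent stopped: max iterations reached\"}"
]
},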
{
"cell_type": "markdown",
"id": "0b93c7d0",
"metadata": {},
"source": [
"## Memory \n",
"\n",
"### Short-term memory\n",
"\n",
"Of course, `memory` is needed to enable conversation / persistence of information.\n",
"\n",
"LangChain has many options for [short-term memory](/docs/modules/memory/types/), which are frequently used in [chat](/docs/modules/memory/adding_memory.html). \n",
"\n",
"They can be [employed with agents](/docs/modules/memory/agent_with_memory) too.\n",
"\n",
"`ConversationBufferMemory` is a popular choice for short-term memory.\n",
"\n",
"We set `MEMORY_KEY`, which can be referenced by the prompt later.\n",
"\n",
"Now, let's add memory to our agent."
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "1d291015",
"metadata": {},
"outputs": [],
"source": [
"# Memory \n",
"from langchain.memory import ConversationBufferMemory\n",
"MEMORY_KEY = \"chat_history\"\n",
"memory = ConversationBufferMemory(memory_key=MEMORY_KEY, return_messages=True)\n",
"\n",
"# Prompt w/ placeholder for memory\n",
"from langchain.schema import SystemMessage\n",
"from langchain.agents import OpenAIFunctionsAgent\n",
"from langchain.prompts import MessagesPlaceholder\n",
"system_message = SystemMessage(content=\"You are a search assistant tasked with using Serpapi to answer questions.\")\n",
"prompt = OpenAIFunctionsAgent.create_prompt(\n",
" system_message=system_message,\n",
" extra_prompt_messages=[MessagesPlaceholder(variable_name=MEMORY_KEY)]\n",
")\n",
"\n",
"# Agent\n",
"search_agent_memory = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt, memory=memory)\n",
"agent_executor_memory = AgentExecutor(agent=search_agent_memory, tools=tools, memory=memory, verbose=False)"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "b4b2249a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'As of August 2023, the estimated population of Canada is approximately 38,781,291 people.'"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor_memory.run(\"How many people live in Canada as of August, 2023?\")"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "4d31b0cf",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'As of August 2023, the largest province in Canada is Ontario, with a population of over 15 million people.'"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor_memory.run(\"What is the population of its largest provence as of August, 2023?\")"
]
},
{
"cell_type": "markdown",
"id": "3606c32a",
"metadata": {},
"source": [
"Looking at the [trace](https://smith.langchain.com/public/4425a131-ec90-4aaa-acd8-5b880c7452a3/r), we can what is happening:\n",
"\n",
"* The chat history is passed to the LLMs\n",
"* This gives context to `its` in `What is the population of its largest provence as of August, 2023?`\n",
"* The LLM generates a function call to the search tool\n",
"\n",
"```\n",
"function_call:\n",
" name: Search\n",
" arguments: |-\n",
" {\n",
" \"query\": \"population of largest province in Canada as of August 2023\"\n",
" }\n",
"```\n",
"\n",
"* The search is executed\n",
"* The results from search are passed back to the LLM for synthesis into an answer\n",
"\n",
"![Image description](/img/oai_function_agent.png)"
]
},
{
"cell_type": "markdown",
"id": "384e37f8",
"metadata": {},
"source": [
"### Long-term memory \n",
"\n",
"Vectorstores are great options for long-term memory."
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "1489746c",
"metadata": {},
"outputs": [],
"source": [
"import faiss\n",
"from langchain.vectorstores import FAISS\n",
"from langchain.docstore import InMemoryDocstore\n",
"from langchain.embeddings import OpenAIEmbeddings\n",
"embedding_size = 1536\n",
"embeddings_model = OpenAIEmbeddings()\n",
"index = faiss.IndexFlatL2(embedding_size)\n",
"vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})"
]
},
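{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a sketch (assuming the `vectorstore` created above), this index can back a retriever-based long-term memory via `VectorStoreRetrieverMemory`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.memory import VectorStoreRetrieverMemory\n",
"\n",
"# Wrap the FAISS index above as retriever-backed memory\n",
"retriever = vectorstore.as_retriever(search_kwargs=dict(k=1))\n",
"long_term_memory = VectorStoreRetrieverMemory(retriever=retriever)\n",
"\n",
"# Save an exchange, then recall the most relevant one later\n",
"long_term_memory.save_context({\"input\": \"My favorite sport is hockey\"}, {\"output\": \"Noted!\"})\n",
"long_term_memory.load_memory_variables({\"prompt\": \"What sport do I like?\"})"
]
},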
{
"cell_type": "markdown",
"id": "9668ef5d",
"metadata": {},
"source": [
"### Going deeper \n",
"\n",
"* Explore projects using long-term memory, such as [autonomous agents](/docs/use_cases/autonomous_agents/autonomous_agents)."
]
},
{
"cell_type": "markdown",
"id": "43fe2bb3",
"metadata": {},
"source": [
"## Tools \n",
"\n",
"As mentioned above, LangChain has [many tools](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/load_tools.py) for Agents that we can load easily.\n",
"\n",
"We can also define [custom tools](/docs/modules/agents/tools/custom_tools). For example, here is a search tool.\n",
"\n",
"* The `Tool` dataclass wraps functions that accept a single string input and returns a string output.\n",
"* `return_direct` determines whether to return the tool's output directly. \n",
"* Setting this to `True` means that after the tool is called, the `AgentExecutor` will stop looping."
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "7357e496",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import Tool, tool\n",
"from langchain.utilities import GoogleSearchAPIWrapper\n",
"search = GoogleSearchAPIWrapper()\n",
"search_tool = [\n",
" Tool(\n",
" name=\"Search\",\n",
" func=search.run,\n",
" description=\"useful for when you need to answer questions about current events\",\n",
" return_direct=True,\n",
" )\n",
"]"
]
},
{
"cell_type": "markdown",
"id": "c6ef5bfa",
"metadata": {},
"source": [
"To make it easier to define custom tools, a `@tool` decorator is provided. \n",
"\n",
"This decorator can be used to quickly create a Tool from a simple function."
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "b6308c69",
"metadata": {},
"outputs": [],
"source": [
"# Tool\n",
"@tool\n",
"def get_word_length(word: str) -> int:\n",
" \"\"\"Returns the length of a word.\"\"\"\n",
" return len(word)\n",
"word_length_tool = [get_word_length]"
]
},
{
"cell_type": "markdown",
"id": "83c104d7",
"metadata": {},
"source": [
"### Going deeper\n",
"\n",
"**Toolkits**\n",
"\n",
"* Toolkits are groups of tools needed to accomplish specific objectives.\n",
"* [Here](/docs/integrations/toolkits/) are > 15 different agent toolkits (e.g., Gmail, Pandas, etc). \n",
"\n",
"Here is a simple way to think about agents vs the various chains covered in other docs:\n",
"\n",
"![Image description](/img/agents_vs_chains.png)"
]
},
{
"cell_type": "markdown",
"id": "5eefe4a0",
"metadata": {},
"source": [
"## Agents\n",
"\n",
"There's a number of [action agent types](docs/modules/agents/agent_types/) available in LangChain.\n",
"\n",
"* [ReAct](/docs/modules/agents/agent_types/react.html): This is the most general purpose action agent using the [ReAct framework](https://arxiv.org/pdf/2205.00445.pdf), which can work with [Docstores](/docs/modules/agents/agent_types/react_docstore.html) or [Multi-tool Inputs](/docs/modules/agents/agent_types/structured_chat.html).\n",
"* [OpenAI functions](/docs/modules/agents/agent_types/openai_functions_agent.html): Designed to work with OpenAI function-calling models.\n",
"* [Conversational](/docs/modules/agents/agent_types/chat_conversation_agent.html): This agent is designed to be used in conversational settings\n",
"* [Self-ask with search](/docs/modules/agents/agent_types/self_ask_with_search.html): Designed to lookup factual answers to questions\n",
"\n",
"### OpenAI Functions agent\n",
"\n",
"As shown in Quickstart, let's continue with [`OpenAI functions` agent](/docs/modules/agents/agent_types/).\n",
"\n",
"This uses OpenAI models, which are fine-tuned to detect when a function should to be called.\n",
"\n",
"They will respond with the inputs that should be passed to the function.\n",
"\n",
"But, we can unpack it, first with a custom prompt:"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "1c2deb4a",
"metadata": {},
"outputs": [],
"source": [
"# Memory\n",
"MEMORY_KEY = \"chat_history\"\n",
"memory = ConversationBufferMemory(memory_key=MEMORY_KEY, return_messages=True)\n",
"\n",
"# Prompt\n",
"from langchain.schema import SystemMessage\n",
"from langchain.agents import OpenAIFunctionsAgent\n",
"system_message = SystemMessage(content=\"You are very powerful assistant, but bad at calculating lengths of words.\")\n",
"prompt = OpenAIFunctionsAgent.create_prompt(\n",
" system_message=system_message,\n",
" extra_prompt_messages=[MessagesPlaceholder(variable_name=MEMORY_KEY)]\n",
")"
]
},
{
"cell_type": "markdown",
"id": "ee317a45",
"metadata": {},
"source": [
"Define agent:"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "460dab9b",
"metadata": {},
"outputs": [],
"source": [
"# Agent \n",
"from langchain.agents import OpenAIFunctionsAgent\n",
"agent = OpenAIFunctionsAgent(llm=llm, tools=word_length_tool, prompt=prompt)"
]
},
{
"cell_type": "markdown",
"id": "184e6c23",
"metadata": {},
"source": [
"Run agent:"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "f4f27d37",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'There are 5 letters in the word \"educa\".'"
]
},
"execution_count": 33,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Run the executer, including short-term memory we created\n",
"agent_executor = AgentExecutor(agent=agent, tools=word_length_tool, memory=memory, verbose=False)\n",
"agent_executor.run(\"how many letters in the word educa?\")"
]
},
{
"cell_type": "markdown",
"id": "e4d9217e",
"metadata": {},
"source": [
"### ReAct agent\n",
"\n",
"[ReAct](https://arxiv.org/abs/2210.03629) agents are another popular framework.\n",
"\n",
"There has been lots of work on [LLM reasoning](https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html), such as chain-of-thought prompting.\n",
"\n",
"There also has been work on LLM action-taking to generate obervations, such as [Say-Can](https://say-can.github.io/).\n",
"\n",
"ReAct marries these two ideas:\n",
"\n",
"![Image description](/img/ReAct.png)\n",
" \n",
"It uses a charecteristic `Thought`, `Action`, `Observation` [pattern in the output](https://lilianweng.github.io/posts/2023-06-23-agent/).\n",
" \n",
"We can use `initialize_agent` to create the ReAct agent from a list of available types [here](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/types.py):\n",
"\n",
"```\n",
"* AgentType.ZERO_SHOT_REACT_DESCRIPTION: ZeroShotAgent\n",
"* AgentType.REACT_DOCSTORE: ReActDocstoreAgent\n",
"* AgentType.SELF_ASK_WITH_SEARCH: SelfAskWithSearchAgent\n",
"* AgentType.CONVERSATIONAL_REACT_DESCRIPTION: ConversationalAgent\n",
"* AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION: ChatAgent\n",
"* AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION: ConversationalChatAgent\n",
"* AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION: StructuredChatAgent\n",
"* AgentType.OPENAI_FUNCTIONS: OpenAIFunctionsAgent\n",
"* AgentType.OPENAI_MULTI_FUNCTIONS: OpenAIMultiFunctionsAgent\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 41,
"id": "85f033d3",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentType\n",
"from langchain.agents import initialize_agent\n",
"MEMORY_KEY = \"chat_history\"\n",
"memory = ConversationBufferMemory(memory_key=MEMORY_KEY, return_messages=True)\n",
"react_agent = initialize_agent(search_tool, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=False, memory=memory)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7d05a26c",
"metadata": {},
"outputs": [],
"source": [
"react_agent(\"How many people live in Canada as of August, 2023?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9b626dc5",
"metadata": {},
"outputs": [],
"source": [
"react_agent(\"What is the population of its largest provence as of August, 2023?\")"
]
},
{
"cell_type": "markdown",
"id": "d4df0638",
"metadata": {},
"source": [
"LangSmith can help us run diagnostics on the ReAct agent:\n",
"\n",
"The [ReAct agent](https://smith.langchain.com/public/3d8d0a15-d73f-44f3-9f81-037f7031c592/r) fails to pass chat history to LLM, gets wrong answer.\n",
" \n",
"The OAI functions agent does and [gets right answer](https://smith.langchain.com/public/4425a131-ec90-4aaa-acd8-5b880c7452a3/r), as shown above.\n",
" \n",
"Also the search tool result for [ReAct](https://smith.langchain.com/public/6473e608-fc9d-47c9-a8a4-2ef7f2801d82/r) is worse than [OAI](https://smith.langchain.com/public/4425a131-ec90-4aaa-acd8-5b880c7452a3/r/26b85fa9-e33a-4028-8650-1714f8b3db96).\n",
"\n",
"Collectivly, this tells us: carefully inspect Agent traces and tool outputs. \n",
"\n",
"As we saw with the [SQL use case](/docs/use_cases/qa_structured/sql), `ReAct agents` can be work very well for specific problems. \n",
"\n",
"But, as shown here, the result is degraded relative to what we see with the OpenAI agent."
]
},
{
"cell_type": "markdown",
"id": "5cde8f9a",
"metadata": {},
"source": [
"### Custom\n",
"\n",
"Let's peel it back even further to define our own action agent.\n",
"\n",
"We can [create a custom agent](/docs/modules/agents/how_to/custom_agent.html) to unpack the central pieces:\n",
"\n",
"* `Tools`: The tools the agent has available to use\n",
"* `Agent`: decides which action to take"
]
},
{
"cell_type": "code",
"execution_count": 34,
"id": "3313f5cd",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"The current population of Canada is 38,808,843 as of Tuesday, August 1, 2023, based on Worldometer elaboration of the latest United Nations data 1. Canada 2023\\xa0... Mar 22, 2023 ... Record-high population growth in the year 2022. Canada's population was estimated at 39,566,248 on January 1, 2023, after a record population\\xa0... Jun 19, 2023 ... As of June 16, 2023, there are now 40 million Canadians! This is a historic milestone for Canada and certainly cause for celebration. It is also\\xa0... Jun 28, 2023 ... Canada's population was estimated at 39,858,480 on April 1, 2023, an increase of 292,232 people (+0.7%) from January 1, 2023. The main driver of population growth is immigration, and to a lesser extent, natural growth. Demographics of Canada · Population pyramid of Canada in 2023. May 2, 2023 ... On January 1, 2023, Canada's population was estimated to be 39,566,248, following an unprecedented increase of 1,050,110 people between January\\xa0... Canada ranks 37th by population among countries of the world, comprising about 0.5% of the world's total, with over 40.0 million Canadians as of 2023. The current population of Canada in 2023 is 38,781,291, a 0.85% increase from 2022. The population of Canada in 2022 was 38,454,327, a 0.78% increase from 2021. Whether a given sub-nation is a province or a territory depends upon how its power and authority are derived. Provinces were given their power by the\\xa0... Jun 28, 2023 ... Index to the latest information from the Census of Population. ... 2023. Census in Brief: Multilingualism of Canadian households\\xa0...\""
]
},
"execution_count": 34,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import List, Tuple, Any, Union\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain.agents import Tool, AgentExecutor, BaseSingleActionAgent\n",
"\n",
"class FakeAgent(BaseSingleActionAgent):\n",
" \"\"\"Fake Custom Agent.\"\"\"\n",
"\n",
" @property\n",
" def input_keys(self):\n",
" return [\"input\"]\n",
"\n",
" def plan(\n",
" self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any\n",
" ) -> Union[AgentAction, AgentFinish]:\n",
" \"\"\"Given input, decided what to do.\n",
"\n",
" Args:\n",
" intermediate_steps: Steps the LLM has taken to date,\n",
" along with observations\n",
" **kwargs: User inputs.\n",
"\n",
" Returns:\n",
" Action specifying what tool to use.\n",
" \"\"\"\n",
" return AgentAction(tool=\"Search\", tool_input=kwargs[\"input\"], log=\"\")\n",
"\n",
" async def aplan(\n",
" self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any\n",
" ) -> Union[AgentAction, AgentFinish]:\n",
" \"\"\"Given input, decided what to do.\n",
"\n",
" Args:\n",
" intermediate_steps: Steps the LLM has taken to date,\n",
" along with observations\n",
" **kwargs: User inputs.\n",
"\n",
" Returns:\n",
" Action specifying what tool to use.\n",
" \"\"\"\n",
" return AgentAction(tool=\"Search\", tool_input=kwargs[\"input\"], log=\"\")\n",
" \n",
"fake_agent = FakeAgent()\n",
"fake_agent_executor = AgentExecutor.from_agent_and_tools(agent=fake_agent, \n",
" tools=search_tool, \n",
" verbose=False)\n",
"\n",
"fake_agent_executor.run(\"How many people live in canada as of 2023?\")"
]
},
{
"cell_type": "markdown",
"id": "1335f0c6",
"metadata": {},
"source": [
"## Runtime\n",
"\n",
"The `AgentExecutor` class is the main agent runtime supported by LangChain. \n",
"\n",
"However, there are other, more experimental runtimes for `autonomous_agents`:\n",
" \n",
"* Plan-and-execute Agent\n",
"* Baby AGI\n",
"* Auto GPT\n",
"\n",
"Explore more about:\n",
"\n",
"* [`Simulation agents`](/docs/modules/agents/agent_use_cases/agent_simulations): Designed for role-play often in simulated enviorment (e.g., Generative Agents, CAMEL).\n",
"* [`Autonomous agents`](/docs/modules/agents/agent_use_cases/autonomous_agents): Designed for indepdent execution towards long term goals (e.g., BabyAGI, Auto-GPT).\n",
"\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

@@ -1,707 +0,0 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# CAMEL Role-Playing Autonomous Cooperative Agents\n",
"\n",
"This is a langchain implementation of paper: \"CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society\".\n",
"\n",
"Overview:\n",
"\n",
"The rapid advancement of conversational and chat-based language models has led to remarkable progress in complex task-solving. However, their success heavily relies on human input to guide the conversation, which can be challenging and time-consuming. This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents and provide insight into their \"cognitive\" processes. To address the challenges of achieving autonomous cooperation, we propose a novel communicative agent framework named role-playing. Our approach involves using inception prompting to guide chat agents toward task completion while maintaining consistency with human intentions. We showcase how role-playing can be used to generate conversational data for studying the behaviors and capabilities of chat agents, providing a valuable resource for investigating conversational language models. Our contributions include introducing a novel communicative agent framework, offering a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems, and open-sourcing our library to support research on communicative agents and beyond.\n",
"\n",
"The original implementation: https://github.com/lightaime/camel\n",
"\n",
"Project website: https://www.camel-ai.org/\n",
"\n",
"Arxiv paper: https://arxiv.org/abs/2303.17760\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Import LangChain related modules "
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"from typing import List\n",
"from langchain.chat_models import ChatOpenAI\n",
"from langchain.prompts.chat import (\n",
" SystemMessagePromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
")\n",
"from langchain.schema import (\n",
" AIMessage,\n",
" HumanMessage,\n",
" SystemMessage,\n",
" BaseMessage,\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Define a CAMEL agent helper class"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"class CAMELAgent:\n",
" def __init__(\n",
" self,\n",
" system_message: SystemMessage,\n",
" model: ChatOpenAI,\n",
" ) -> None:\n",
" self.system_message = system_message\n",
" self.model = model\n",
" self.init_messages()\n",
"\n",
" def reset(self) -> None:\n",
" self.init_messages()\n",
" return self.stored_messages\n",
"\n",
" def init_messages(self) -> None:\n",
" self.stored_messages = [self.system_message]\n",
"\n",
" def update_messages(self, message: BaseMessage) -> List[BaseMessage]:\n",
" self.stored_messages.append(message)\n",
" return self.stored_messages\n",
"\n",
" def step(\n",
" self,\n",
" input_message: HumanMessage,\n",
" ) -> AIMessage:\n",
" messages = self.update_messages(input_message)\n",
"\n",
" output_message = self.model(messages)\n",
" self.update_messages(output_message)\n",
"\n",
" return output_message"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup OpenAI API key and roles and task for role-playing"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"\"\n",
"\n",
"assistant_role_name = \"Python Programmer\"\n",
"user_role_name = \"Stock Trader\"\n",
"task = \"Develop a trading bot for the stock market\"\n",
"word_limit = 50 # word limit for task brainstorming"
]
},
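{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check of the `CAMELAgent` helper above (a sketch; it reuses the key set in the previous cell):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"demo_agent = CAMELAgent(\n",
"    SystemMessage(content=\"You are a helpful assistant.\"),\n",
"    ChatOpenAI(temperature=0),\n",
")\n",
"# step() appends the message, calls the model, and stores the reply\n",
"demo_agent.step(HumanMessage(content=\"Say hello in five words or fewer.\"))"
]
},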
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a task specify agent for brainstorming and get the specified task"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Specified task: Develop a Python-based swing trading bot that scans market trends, monitors stocks, and generates trading signals to help a stock trader to place optimal buy and sell orders with defined stop losses and profit targets.\n"
]
}
],
"source": [
"task_specifier_sys_msg = SystemMessage(content=\"You can make a task more specific.\")\n",
"task_specifier_prompt = \"\"\"Here is a task that {assistant_role_name} will help {user_role_name} to complete: {task}.\n",
"Please make it more specific. Be creative and imaginative.\n",
"Please reply with the specified task in {word_limit} words or less. Do not add anything else.\"\"\"\n",
"task_specifier_template = HumanMessagePromptTemplate.from_template(\n",
" template=task_specifier_prompt\n",
")\n",
"task_specify_agent = CAMELAgent(task_specifier_sys_msg, ChatOpenAI(temperature=1.0))\n",
"task_specifier_msg = task_specifier_template.format_messages(\n",
" assistant_role_name=assistant_role_name,\n",
" user_role_name=user_role_name,\n",
" task=task,\n",
" word_limit=word_limit,\n",
")[0]\n",
"specified_task_msg = task_specify_agent.step(task_specifier_msg)\n",
"print(f\"Specified task: {specified_task_msg.content}\")\n",
"specified_task = specified_task_msg.content"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create inception prompts for AI assistant and AI user for role-playing"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"assistant_inception_prompt = \"\"\"Never forget you are a {assistant_role_name} and I am a {user_role_name}. Never flip roles! Never instruct me!\n",
"We share a common interest in collaborating to successfully complete a task.\n",
"You must help me to complete the task.\n",
"Here is the task: {task}. Never forget our task!\n",
"I must instruct you based on your expertise and my needs to complete the task.\n",
"\n",
"I must give you one instruction at a time.\n",
"You must write a specific solution that appropriately completes the requested instruction.\n",
"You must decline my instruction honestly if you cannot perform the instruction due to physical, moral, legal reasons or your capability and explain the reasons.\n",
"Do not add anything else other than your solution to my instruction.\n",
"You are never supposed to ask me any questions you only answer questions.\n",
"You are never supposed to reply with a flake solution. Explain your solutions.\n",
"Your solution must be declarative sentences and simple present tense.\n",
"Unless I say the task is completed, you should always start with:\n",
"\n",
"Solution: <YOUR_SOLUTION>\n",
"\n",
"<YOUR_SOLUTION> should be specific and provide preferable implementations and examples for task-solving.\n",
"Always end <YOUR_SOLUTION> with: Next request.\"\"\"\n",
"\n",
"user_inception_prompt = \"\"\"Never forget you are a {user_role_name} and I am a {assistant_role_name}. Never flip roles! You will always instruct me.\n",
"We share a common interest in collaborating to successfully complete a task.\n",
"I must help you to complete the task.\n",
"Here is the task: {task}. Never forget our task!\n",
"You must instruct me based on my expertise and your needs to complete the task ONLY in the following two ways:\n",
"\n",
"1. Instruct with a necessary input:\n",
"Instruction: <YOUR_INSTRUCTION>\n",
"Input: <YOUR_INPUT>\n",
"\n",
"2. Instruct without any input:\n",
"Instruction: <YOUR_INSTRUCTION>\n",
"Input: None\n",
"\n",
"The \"Instruction\" describes a task or question. The paired \"Input\" provides further context or information for the requested \"Instruction\".\n",
"\n",
"You must give me one instruction at a time.\n",
"I must write a response that appropriately completes the requested instruction.\n",
"I must decline your instruction honestly if I cannot perform the instruction due to physical, moral, legal reasons or my capability and explain the reasons.\n",
"You should instruct me not ask me questions.\n",
"Now you must start to instruct me using the two ways described above.\n",
"Do not add anything else other than your instruction and the optional corresponding input!\n",
"Keep giving me instructions and necessary inputs until you think the task is completed.\n",
"When the task is completed, you must only reply with a single word <CAMEL_TASK_DONE>.\n",
"Never say <CAMEL_TASK_DONE> unless my responses have solved your task.\"\"\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create a helper helper to get system messages for AI assistant and AI user from role names and the task"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"def get_sys_msgs(assistant_role_name: str, user_role_name: str, task: str):\n",
" assistant_sys_template = SystemMessagePromptTemplate.from_template(\n",
" template=assistant_inception_prompt\n",
" )\n",
" assistant_sys_msg = assistant_sys_template.format_messages(\n",
" assistant_role_name=assistant_role_name,\n",
" user_role_name=user_role_name,\n",
" task=task,\n",
" )[0]\n",
"\n",
" user_sys_template = SystemMessagePromptTemplate.from_template(\n",
" template=user_inception_prompt\n",
" )\n",
" user_sys_msg = user_sys_template.format_messages(\n",
" assistant_role_name=assistant_role_name,\n",
" user_role_name=user_role_name,\n",
" task=task,\n",
" )[0]\n",
"\n",
" return assistant_sys_msg, user_sys_msg"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create AI assistant agent and AI user agent from obtained system messages"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"assistant_sys_msg, user_sys_msg = get_sys_msgs(\n",
" assistant_role_name, user_role_name, specified_task\n",
")\n",
"assistant_agent = CAMELAgent(assistant_sys_msg, ChatOpenAI(temperature=0.2))\n",
"user_agent = CAMELAgent(user_sys_msg, ChatOpenAI(temperature=0.2))\n",
"\n",
"# Reset agents\n",
"assistant_agent.reset()\n",
"user_agent.reset()\n",
"\n",
"# Initialize chats\n",
"user_msg = HumanMessage(\n",
" content=(\n",
" f\"{user_sys_msg.content}. \"\n",
" \"Now start to give me introductions one by one. \"\n",
" \"Only reply with Instruction and Input.\"\n",
" )\n",
")\n",
"\n",
"assistant_msg = HumanMessage(content=f\"{assistant_sys_msg.content}\")\n",
"assistant_msg = assistant_agent.step(user_msg)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Start role-playing session to solve the task!"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Original task prompt:\n",
"Develop a trading bot for the stock market\n",
"\n",
"Specified task prompt:\n",
"Develop a Python-based swing trading bot that scans market trends, monitors stocks, and generates trading signals to help a stock trader to place optimal buy and sell orders with defined stop losses and profit targets.\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Install the necessary Python libraries for data analysis and trading.\n",
"Input: None\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can install the necessary Python libraries using pip, a package installer for Python. We can install pandas, numpy, matplotlib, and ta-lib for data analysis and trading. We can use the following command to install these libraries:\n",
"\n",
"```\n",
"pip install pandas numpy matplotlib ta-lib\n",
"```\n",
"\n",
"Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Import the necessary libraries in the Python script.\n",
"Input: None\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can import the necessary libraries in the Python script using the import statement. We need to import pandas, numpy, matplotlib, and ta-lib for data analysis and trading. We can use the following code to import these libraries:\n",
"\n",
"```\n",
"import pandas as pd\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import talib as ta\n",
"```\n",
"\n",
"Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Load historical stock data into a pandas DataFrame.\n",
"Input: The path to the CSV file containing the historical stock data.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can load historical stock data into a pandas DataFrame using the `read_csv()` function from pandas. We need to pass the path to the CSV file containing the historical stock data as an argument to this function. We can use the following code to load the historical stock data:\n",
"\n",
"```\n",
"df = pd.read_csv('path/to/csv/file.csv')\n",
"```\n",
"\n",
"This will load the historical stock data into a pandas DataFrame called `df`. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Preprocess the historical stock data by setting the date column as the index and sorting the DataFrame in ascending order by date.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can preprocess the historical stock data by setting the date column as the index and sorting the DataFrame in ascending order by date using the `set_index()` and `sort_index()` functions from pandas. We can use the following code to preprocess the historical stock data:\n",
"\n",
"```\n",
"df = df.set_index('date')\n",
"df = df.sort_index(ascending=True)\n",
"```\n",
"\n",
"This will set the date column as the index and sort the DataFrame in ascending order by date. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Calculate the short-term and long-term moving averages for the stock data using the `ta.SMA()` function from ta-lib.\n",
"Input: The period for the short-term moving average and the period for the long-term moving average.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can calculate the short-term and long-term moving averages for the stock data using the `ta.SMA()` function from ta-lib. We need to pass the period for the short-term moving average and the period for the long-term moving average as arguments to this function. We can use the following code to calculate the short-term and long-term moving averages:\n",
"\n",
"```\n",
"short_ma = ta.SMA(df['close'], timeperiod=short_period)\n",
"long_ma = ta.SMA(df['close'], timeperiod=long_period)\n",
"```\n",
"\n",
"This will calculate the short-term and long-term moving averages for the stock data and store them in the `short_ma` and `long_ma` variables, respectively. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages using the `concat()` function from pandas. We need to pass the historical stock data, the short-term moving average, and the long-term moving average as arguments to this function. We can use the following code to create the new DataFrame:\n",
"\n",
"```\n",
"new_df = pd.concat([df, short_ma, long_ma], axis=1)\n",
"new_df.columns = ['open', 'high', 'low', 'close', 'volume', 'short_ma', 'long_ma']\n",
"```\n",
"\n",
"This will create a new DataFrame called `new_df` that combines the historical stock data with the short-term and long-term moving averages. The columns of the new DataFrame are named 'open', 'high', 'low', 'close', 'volume', 'short_ma', and 'long_ma'. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages. We can use the following code to create the new column:\n",
"\n",
"```\n",
"new_df['signal'] = np.where(new_df['short_ma'] > new_df['long_ma'], 1, -1)\n",
"```\n",
"\n",
"This will create a new column called 'signal' in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages. If the short-term moving average is greater than the long-term moving average, the signal is 1 (buy), otherwise the signal is -1 (sell). Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target.\n",
"Input: The stop loss and profit target as percentages.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target. We need to pass the stop loss and profit target as percentages as arguments to this function. We can use the following code to create the new column:\n",
"\n",
"```\n",
"stop_loss = stop_loss_percent / 100\n",
"profit_target = profit_target_percent / 100\n",
"\n",
"new_df['pnl'] = 0.0\n",
"buy_price = 0.0\n",
"for i in range(1, len(new_df)):\n",
" if new_df['signal'][i] == 1 and new_df['signal'][i-1] == -1:\n",
" buy_price = new_df['close'][i]\n",
" elif new_df['signal'][i] == -1 and new_df['signal'][i-1] == 1:\n",
" sell_price = new_df['close'][i]\n",
" if sell_price <= buy_price * (1 - stop_loss):\n",
" new_df['pnl'][i] = -stop_loss\n",
" elif sell_price >= buy_price * (1 + profit_target):\n",
" new_df['pnl'][i] = profit_target\n",
" else:\n",
" new_df['pnl'][i] = (sell_price - buy_price) / buy_price\n",
"```\n",
"\n",
"This will create a new column called 'pnl' in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target. The stop loss and profit target are calculated based on the stop_loss_percent and profit_target_percent variables, respectively. The buy and sell prices are stored in the buy_price and sell_price variables, respectively. If the sell price is less than or equal to the stop loss, the profit or loss is set to -stop_loss. If the sell price is greater than or equal to the profit target, the profit or loss is set to profit_target. Otherwise, the profit or loss is calculated as (sell_price - buy_price) / buy_price. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Calculate the total profit or loss for all trades.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can calculate the total profit or loss for all trades by summing the values in the 'pnl' column of the DataFrame. We can use the following code to calculate the total profit or loss:\n",
"\n",
"```\n",
"total_pnl = new_df['pnl'].sum()\n",
"```\n",
"\n",
"This will calculate the total profit or loss for all trades and store it in the total_pnl variable. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Visualize the stock data, short-term moving average, and long-term moving average using a line chart.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can visualize the stock data, short-term moving average, and long-term moving average using a line chart using the `plot()` function from pandas. We can use the following code to visualize the data:\n",
"\n",
"```\n",
"plt.figure(figsize=(12,6))\n",
"plt.plot(new_df.index, new_df['close'], label='Close')\n",
"plt.plot(new_df.index, new_df['short_ma'], label='Short MA')\n",
"plt.plot(new_df.index, new_df['long_ma'], label='Long MA')\n",
"plt.xlabel('Date')\n",
"plt.ylabel('Price')\n",
"plt.title('Stock Data with Moving Averages')\n",
"plt.legend()\n",
"plt.show()\n",
"```\n",
"\n",
"This will create a line chart that visualizes the stock data, short-term moving average, and long-term moving average. The x-axis represents the date and the y-axis represents the price. The chart also includes a legend that labels each line. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Visualize the buy and sell signals using a scatter plot.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can visualize the buy and sell signals using a scatter plot using the `scatter()` function from matplotlib. We can use the following code to visualize the signals:\n",
"\n",
"```\n",
"buy_signals = new_df[new_df['signal'] == 1]\n",
"sell_signals = new_df[new_df['signal'] == -1]\n",
"\n",
"plt.figure(figsize=(12,6))\n",
"plt.scatter(buy_signals.index, buy_signals['close'], label='Buy', marker='^', color='green')\n",
"plt.scatter(sell_signals.index, sell_signals['close'], label='Sell', marker='v', color='red')\n",
"plt.plot(new_df.index, new_df['close'], label='Close')\n",
"plt.xlabel('Date')\n",
"plt.ylabel('Price')\n",
"plt.title('Buy and Sell Signals')\n",
"plt.legend()\n",
"plt.show()\n",
"```\n",
"\n",
"This will create a scatter plot that visualizes the buy and sell signals. The x-axis represents the date and the y-axis represents the price. The chart also includes a legend that labels each point. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Print the total profit or loss for all trades.\n",
"Input: None.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: We can print the total profit or loss for all trades using the `print()` function. We can use the following code to print the total profit or loss:\n",
"\n",
"```\n",
"print('Total Profit/Loss: {:.2%}'.format(total_pnl))\n",
"```\n",
"\n",
"This will print the total profit or loss for all trades in percentage format with two decimal places. Next request.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"Instruction: Finalize the code by putting all the code snippets together in a single Python script.\n",
"Input: The necessary inputs for each code snippet.\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Solution: Here is the complete Python script that puts all the code snippets together:\n",
"\n",
"```\n",
"import pandas as pd\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"import talib as ta\n",
"\n",
"# Load historical stock data into a pandas DataFrame\n",
"df = pd.read_csv('path/to/csv/file.csv')\n",
"\n",
"# Preprocess the historical stock data\n",
"df = df.set_index('date')\n",
"df = df.sort_index(ascending=True)\n",
"\n",
"# Calculate the short-term and long-term moving averages\n",
"short_period = 10\n",
"long_period = 30\n",
"short_ma = ta.SMA(df['close'], timeperiod=short_period)\n",
"long_ma = ta.SMA(df['close'], timeperiod=long_period)\n",
"\n",
"# Create a new DataFrame that combines the historical stock data with the short-term and long-term moving averages\n",
"new_df = pd.concat([df, short_ma, long_ma], axis=1)\n",
"new_df.columns = ['open', 'high', 'low', 'close', 'volume', 'short_ma', 'long_ma']\n",
"\n",
"# Create a new column in the DataFrame that indicates when to buy or sell the stock based on the crossover of the short-term and long-term moving averages\n",
"new_df['signal'] = np.where(new_df['short_ma'] > new_df['long_ma'], 1, -1)\n",
"\n",
"# Create a new column in the DataFrame that indicates the profit or loss for each trade based on the buy and sell signals and the defined stop loss and profit target\n",
"stop_loss_percent = 5\n",
"profit_target_percent = 10\n",
"stop_loss = stop_loss_percent / 100\n",
"profit_target = profit_target_percent / 100\n",
"new_df['pnl'] = 0.0\n",
"buy_price = 0.0\n",
"for i in range(1, len(new_df)):\n",
" if new_df['signal'][i] == 1 and new_df['signal'][i-1] == -1:\n",
" buy_price = new_df['close'][i]\n",
" elif new_df['signal'][i] == -1 and new_df['signal'][i-1] == 1:\n",
" sell_price = new_df['close'][i]\n",
" if sell_price <= buy_price * (1 - stop_loss):\n",
" new_df['pnl'][i] = -stop_loss\n",
" elif sell_price >= buy_price * (1 + profit_target):\n",
" new_df['pnl'][i] = profit_target\n",
" else:\n",
" new_df['pnl'][i] = (sell_price - buy_price) / buy_price\n",
"\n",
"# Calculate the total profit or loss for all trades\n",
"total_pnl = new_df['pnl'].sum()\n",
"\n",
"# Visualize the stock data, short-term moving average, and long-term moving average using a line chart\n",
"plt.figure(figsize=(12,6))\n",
"plt.plot(new_df.index, new_df['close'], label='Close')\n",
"plt.plot(new_df.index, new_df['short_ma'], label='Short MA')\n",
"plt.plot(new_df.index, new_df['long_ma'], label='Long MA')\n",
"plt.xlabel('Date')\n",
"plt.ylabel('Price')\n",
"plt.title('Stock Data with Moving Averages')\n",
"plt.legend()\n",
"plt.show()\n",
"\n",
"# Visualize the buy and sell signals using a scatter plot\n",
"buy_signals = new_df[new_df['signal'] == 1]\n",
"sell_signals = new_df[new_df['signal'] == -1]\n",
"plt.figure(figsize=(12,6))\n",
"plt.scatter(buy_signals.index, buy_signals['close'], label='Buy', marker='^', color='green')\n",
"plt.scatter(sell_signals.index, sell_signals['close'], label='Sell', marker='v', color='red')\n",
"plt.plot(new_df.index, new_df['close'], label='Close')\n",
"plt.xlabel('Date')\n",
"plt.ylabel('Price')\n",
"plt.title('Buy and Sell Signals')\n",
"plt.legend()\n",
"plt.show()\n",
"\n",
"# Print the total profit or loss for all trades\n",
"print('Total Profit/Loss: {:.2%}'.format(total_pnl))\n",
"```\n",
"\n",
"You need to replace the path/to/csv/file.csv with the actual path to the CSV file containing the historical stock data. You can also adjust the short_period, long_period, stop_loss_percent, and profit_target_percent variables to suit your needs.\n",
"\n",
"\n",
"AI User (Stock Trader):\n",
"\n",
"<CAMEL_TASK_DONE>\n",
"\n",
"\n",
"AI Assistant (Python Programmer):\n",
"\n",
"Great! Let me know if you need any further assistance.\n",
"\n",
"\n"
]
}
],
"source": [
"print(f\"Original task prompt:\\n{task}\\n\")\n",
"print(f\"Specified task prompt:\\n{specified_task}\\n\")\n",
"\n",
"chat_turn_limit, n = 30, 0\n",
"while n < chat_turn_limit:\n",
" n += 1\n",
" user_ai_msg = user_agent.step(assistant_msg)\n",
" user_msg = HumanMessage(content=user_ai_msg.content)\n",
" print(f\"AI User ({user_role_name}):\\n\\n{user_msg.content}\\n\\n\")\n",
"\n",
" assistant_ai_msg = assistant_agent.step(user_msg)\n",
" assistant_msg = HumanMessage(content=assistant_ai_msg.content)\n",
" print(f\"AI Assistant ({assistant_role_name}):\\n\\n{assistant_msg.content}\\n\\n\")\n",
" if \"<CAMEL_TASK_DONE>\" in user_msg.content:\n",
" break"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "camel",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.9"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}

@ -1,50 +0,0 @@
---
sidebar_position: 0
---
# Agents
Agents can be used for a variety of tasks.
Agents combine the decision-making ability of a language model with tools in order to create a system
that can execute and implement solutions on your behalf. Before reading any further, it is highly
recommended that you read the documentation in the `agent` module to understand the concepts associated with agents.
Specifically, you should be familiar with the `agent`, `tool`, and `agent executor` abstractions.
- [Agent documentation](/docs/modules/agents.html) (for interacting with the outside world)
## Create Your Own Agent
Once you have read that documentation, you should be prepared to create your own agent.
What exactly does that involve?
Here's how we recommend getting started with creating your own agent:
### Step 1: Create Tools
Agents are largely defined by the tools they can use.
If you have a specific task you want the agent to accomplish, you have to give it access to the right tools.
We have many tools natively in LangChain, so you should first look to see if any of them meet your needs.
But we also make it easy to define a custom tool, so if you need custom tools you should absolutely do that, as in the sketch below.
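This is a minimal sketch of a custom tool using the `tool` decorator (the `word_count` tool is a hypothetical example, not something the library ships):
```python
from langchain.agents import tool

@tool
def word_count(text: str) -> str:
    """Count the number of words in a piece of text."""
    # The docstring above doubles as the tool's description, which the agent
    # uses to decide when to call it.
    return str(len(text.split()))
```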
### (Optional) Step 2: Modify Agent
The built-in LangChain agent types are designed to work well in generic situations,
but you may be able to improve performance by modifying the agent implementation.
There are several ways you could do this:
1. Modify the base prompt. This can be used to give the agent more context on how it should behave, etc.
2. Modify the output parser. This is necessary if the agent is having trouble parsing the language model output.
### (Optional) Step 3: Modify Agent Executor
This step is usually not necessary, as the executor logic is fairly general.
Possible reasons you would want to modify it include adding different stopping conditions or handling errors.
## Examples
Specific examples of agents include:
- [AI Plugins](./custom_agent_with_plugin_retrieval.html): an implementation of an agent that is designed to be able to use all AI Plugins.
- [Plug-and-PlAI (Plugins Database)](./custom_agent_with_plugin_retrieval_using_plugnplai.html): an implementation of an agent that is designed to be able to use all AI Plugins retrieved from PlugNPlAI.
- [Wikibase Agent](./wikibase_agent.html): an implementation of an agent that is designed to interact with Wikibase.
- [Sales GPT](./sales_agent_with_context.html): This notebook demonstrates an implementation of a Context-Aware AI Sales agent.
- [Multi-Modal Output Agent](./multi_modal_output_agent.html): an implementation of a multi-modal output agent that can generate text and images.

@ -1,30 +0,0 @@
---
sidebar_position: 0
---
# Autonomous (long-running) agents
Autonomous agents are agents designed to be longer-running.
You give them one or more long-term goals, and they independently execute towards those goals.
The applications combine tool usage and long-term memory.
At the moment, autonomous agents are fairly experimental and based on other open-source projects.
By implementing these open-source projects in LangChain primitives we can get the benefits of LangChain -
easy switching and experimentation with multiple LLMs, usage of different vectorstores as memory,
and usage of LangChain's collection of tools.
## Baby AGI ([Original Repo](https://github.com/yoheinakajima/babyagi))
- [Baby AGI](/docs/use_cases/autonomous_agents/baby_agi.html): a notebook implementing BabyAGI as LLM Chains
- [Baby AGI with Tools](/docs/use_cases/autonomous_agents/baby_agi_with_agent.html): building off the above notebook, this example substitutes in an agent with tools as the execution tools, allowing it to actually take actions.
## AutoGPT ([Original Repo](https://github.com/Significant-Gravitas/Auto-GPT))
- [AutoGPT](/docs/use_cases/autonomous_agents/autogpt.html): a notebook implementing AutoGPT in LangChain primitives
- [WebSearch Research Assistant](/docs/use_cases/autonomous_agents/marathon_times.html): a notebook showing how to use AutoGPT plus specific tools to act as a research assistant that can use the web.
## MetaPrompt ([Original Repo](https://github.com/ngoodman/metaprompt))
- [Meta-Prompt](/docs/use_cases/autonomous_agents/meta_prompt.html): a notebook implementing Meta-Prompt in LangChain primitives
## HuggingGPT ([Original Repo](https://github.com/microsoft/JARVIS))
- [HuggingGPT](/docs/use_cases/autonomous_agents/hugginggpt.html): a notebook implementing HuggingGPT in LangChain primitives

@ -1,236 +0,0 @@
# Plan-and-execute
Plan-and-execute agents accomplish an objective by first planning what to do, then executing the subtasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).
The planning is almost always done by an LLM.
The execution is usually done by a separate agent (equipped with tools).
## Imports
```python
from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import PlanAndExecute, load_agent_executor, load_chat_planner
from langchain.llms import OpenAI
from langchain.utilities import SerpAPIWrapper
from langchain.agents.tools import Tool
from langchain.chains import LLMMathChain
```
## Tools
```python
search = SerpAPIWrapper()
llm = OpenAI(temperature=0)
llm_math_chain = LLMMathChain.from_llm(llm=llm, verbose=True)
tools = [
Tool(
name="Search",
func=search.run,
description="useful for when you need to answer questions about current events"
),
Tool(
name="Calculator",
func=llm_math_chain.run,
description="useful for when you need to answer questions about math"
),
]
```
## Planner, Executor, and Agent
```python
model = ChatOpenAI(temperature=0)
```
```python
planner = load_chat_planner(model)
```
```python
executor = load_agent_executor(model, tools, verbose=True)
```
```python
agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
```
## Run example
```python
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
```
<CodeOutputBlock lang="python">
```
> Entering new PlanAndExecute chain...
steps=[Step(value="Search for Leo DiCaprio's girlfriend on the internet."), Step(value='Find her current age.'), Step(value='Raise her current age to the 0.43 power using a calculator or programming language.'), Step(value='Output the result.'), Step(value="Given the above steps taken, respond to the user's original question.\n\n")]
> Entering new AgentExecutor chain...
Action:
```
{
"action": "Search",
"action_input": "Who is Leo DiCaprio's girlfriend?"
}
```
Observation: DiCaprio broke up with girlfriend Camila Morrone, 25, in the summer of 2022, after dating for four years. He's since been linked to another famous supermodel Gigi Hadid. The power couple were first supposedly an item in September after being spotted getting cozy during a party at New York Fashion Week.
Thought:Based on the previous observation, I can provide the answer to the current objective.
Action:
```
{
"action": "Final Answer",
"action_input": "Leo DiCaprio is currently linked to Gigi Hadid."
}
```
> Finished chain.
*****
Step: Search for Leo DiCaprio's girlfriend on the internet.
Response: Leo DiCaprio is currently linked to Gigi Hadid.
> Entering new AgentExecutor chain...
Action:
```
{
"action": "Search",
"action_input": "What is Gigi Hadid's current age?"
}
```
Observation: 28 years
Thought:Previous steps: steps=[(Step(value="Search for Leo DiCaprio's girlfriend on the internet."), StepResponse(response='Leo DiCaprio is currently linked to Gigi Hadid.'))]
Current objective: value='Find her current age.'
Action:
```
{
"action": "Search",
"action_input": "What is Gigi Hadid's current age?"
}
```
Observation: 28 years
Thought:Previous steps: steps=[(Step(value="Search for Leo DiCaprio's girlfriend on the internet."), StepResponse(response='Leo DiCaprio is currently linked to Gigi Hadid.')), (Step(value='Find her current age.'), StepResponse(response='28 years'))]
Current objective: None
Action:
```
{
"action": "Final Answer",
"action_input": "Gigi Hadid's current age is 28 years."
}
```
> Finished chain.
*****
Step: Find her current age.
Response: Gigi Hadid's current age is 28 years.
> Entering new AgentExecutor chain...
Action:
```
{
"action": "Calculator",
"action_input": "28 ** 0.43"
}
```
> Entering new LLMMathChain chain...
28 ** 0.43
```text
28 ** 0.43
```
...numexpr.evaluate("28 ** 0.43")...
Answer: 4.1906168361987195
> Finished chain.
Observation: Answer: 4.1906168361987195
Thought:The next step is to provide the answer to the user's question.
Action:
```
{
"action": "Final Answer",
"action_input": "Gigi Hadid's current age raised to the 0.43 power is approximately 4.19."
}
```
> Finished chain.
*****
Step: Raise her current age to the 0.43 power using a calculator or programming language.
Response: Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.
> Entering new AgentExecutor chain...
Action:
```
{
"action": "Final Answer",
"action_input": "The result is approximately 4.19."
}
```
> Finished chain.
*****
Step: Output the result.
Response: The result is approximately 4.19.
> Entering new AgentExecutor chain...
Action:
```
{
"action": "Final Answer",
"action_input": "Gigi Hadid's current age raised to the 0.43 power is approximately 4.19."
}
```
> Finished chain.
*****
Step: Given the above steps taken, respond to the user's original question.
Response: Gigi Hadid's current age raised to the 0.43 power is approximately 4.19.
> Finished chain.
"Gigi Hadid's current age raised to the 0.43 power is approximately 4.19."
```
</CodeOutputBlock>

@ -1,14 +0,0 @@
# Code writing
:::warning
All program-writing chains should be treated as *VERY* experimental and should not be used in any environment where sensitive/important data is stored, as there is arbitrary code execution involved in using these.
:::
Much like humans, LLMs are great at writing out programs, but not always great at executing them. For example, they can write down complex mathematical equations far better than they can compute the results. In such cases, it is useful to combine an LLM with a program runtime, so that the LLM converts unstructured text to a program and then a simpler tool (like a calculator) actually executes the program.
In other cases, only a program can be used to access the desired information (e.g., the contents of a directory on your computer). In such cases it is again useful to let an LLM generate the code and have a separate tool execute it.
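As a minimal sketch of this pattern, `LLMMathChain` (one of the chains in this section) has the LLM write a small numeric expression and hands evaluation off to a calculator runtime:
```python
from langchain.llms import OpenAI
from langchain.chains import LLMMathChain

llm = OpenAI(temperature=0)
# The LLM translates the question into a math expression; the chain then
# evaluates that expression numerically instead of trusting the LLM's arithmetic.
llm_math = LLMMathChain.from_llm(llm=llm, verbose=True)
llm_math.run("What is 13 raised to the 0.3432 power?")
```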
import DocCardList from "@theme/DocCardList";
<DocCardList />

@ -1,8 +0,0 @@
# Self-checking
One of the main issues with using LLMs is that they can often hallucinate and make false claims. One of the surprisingly effective ways to remediate this is to use the LLM itself to check its own answers.
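As a minimal sketch, `LLMCheckerChain` implements this pattern: it drafts an answer, lists the assumptions behind it, checks each assumption, and revises the answer accordingly:
```python
from langchain.llms import OpenAI
from langchain.chains import LLMCheckerChain

llm = OpenAI(temperature=0.7)
# The chain drafts an answer, enumerates its underlying assumptions,
# checks each one, and then produces a revised final answer.
checker_chain = LLMCheckerChain.from_llm(llm, verbose=True)
checker_chain.run("What type of mammal lays the biggest eggs?")
```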
import DocCardList from "@theme/DocCardList";
<DocCardList />

@ -1,3 +1,3 @@
label: 'QA over structured data'
collapsed: false
position: 0.5
position: 0

@ -1,2 +1,2 @@
position: 0
collapsed: false
position: 0.1
collapsed: true

@ -0,0 +1,44 @@
# Analyze a single long document
The AnalyzeDocumentChain takes in a single document, splits it up, and then runs it through a CombineDocumentsChain.
```python
with open("../../state_of_the_union.txt") as f:
state_of_the_union = f.read()
```
```python
from langchain.llms import OpenAI
from langchain.chains import AnalyzeDocumentChain
llm = OpenAI(temperature=0)
```
```python
from langchain.chains.question_answering import load_qa_chain
```
```python
qa_chain = load_qa_chain(llm, chain_type="map_reduce")
```
```python
qa_document_chain = AnalyzeDocumentChain(combine_docs_chain=qa_chain)
```
```python
qa_document_chain.run(input_document=state_of_the_union, question="what did the president say about justice breyer?")
```
<CodeOutputBlock lang="python">
```
' The president thanked Justice Breyer for his service.'
```
</CodeOutputBlock>

@ -2,7 +2,7 @@
sidebar_position: 2
---
# Store and reference chat history
# Remembering chat history
The ConversationalRetrievalQA chain builds on RetrievalQAChain to provide a chat history component.
It first combines the chat history (either explicitly passed in or retrieved from the provided memory) and the question into a standalone question, then looks up relevant documents from the retriever, and finally passes those documents and the question to a question-answering chain to return a response.
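A minimal sketch (assuming a `retriever` already built over your documents, e.g. `vectorstore.as_retriever()`):
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain

# `retriever` is assumed to already exist, e.g. vectorstore.as_retriever().
qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(temperature=0), retriever)

chat_history = []
query = "What did the president say about Ketanji Brown Jackson?"
result = qa({"question": query, "chat_history": chat_history})
chat_history.append((query, result["answer"]))
```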

@ -6,7 +6,7 @@
"source": [
"---\n",
"sidebar_position: 1\n",
"title: Code understanding\n",
"title: RAG over code\n",
"---"
]
},
@ -361,7 +361,8 @@
"outputs": [],
"source": [
"from langchain.llms import LlamaCpp\n",
"from langchain.prompts import PromptTemplate\nfrom langchain.chains import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.chains import LLMChain\n",
"from langchain.callbacks.manager import CallbackManager\n",
"from langchain.memory import ConversationSummaryMemory\n",
"from langchain.chains import ConversationalRetrievalChain \n",

@ -5,7 +5,7 @@
"id": "839f3c76",
"metadata": {},
"source": [
"# Conversational Retrieval Agent\n",
"# Agent with retrieval tool\n",
"\n",
"This is an agent specifically optimized for doing retrieval when necessary and also holding a conversation.\n",
"\n",

@ -5,7 +5,7 @@
"id": "88d7cc8c",
"metadata": {},
"source": [
"# Perform context-aware text splitting\n",
"# Text splitting by header\n",
"\n",
"Text splitting for vector storage often uses sentences or other delimiters [to keep related text together](https://www.pinecone.io/learn/chunking-strategies/). \n",
"\n",
@ -327,7 +327,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.9.1"
},
"vscode": {
"interpreter": {

@ -1,7 +0,0 @@
# Analyze Document
The AnalyzeDocumentChain can be used as an end-to-end chain. This chain takes in a single document, splits it up, and then runs it through a CombineDocumentsChain.
import Example from "@snippets/modules/chains/additional/analyze_document.mdx"
<Example/>

@ -1,30 +0,0 @@
---
sidebar_position: 6
---
# Code understanding
## Overview
LangChain is a useful tool designed to parse GitHub code repositories. By leveraging VectorStores, Conversational RetrieverChain, and GPT-4, it can answer questions in the context of an entire GitHub repository or generate new code. This documentation page outlines the essential components of the system and guides you through using LangChain for better code comprehension, contextual question answering, and code generation in GitHub repositories.
## Conversational Retriever Chain
Conversational RetrieverChain is a retrieval-focused system that interacts with the data stored in a VectorStore. Utilizing advanced techniques, like context-aware filtering and ranking, it retrieves the most relevant code snippets and information for a given user query. Conversational RetrieverChain is engineered to deliver high-quality, pertinent results while considering conversation history and context.
## LangChain Workflow for Code Understanding and Generation
1. Index the code base: Clone the target repository, load all files within, chunk the files, and execute the indexing process. Optionally, you can skip this step and use an already indexed dataset.
2. Embedding and Code Store: Code snippets are embedded using a code-aware embedding model and stored in a VectorStore.
3. Query Understanding: GPT-4 processes user queries, grasping the context and extracting relevant details.
4. Construct the Retriever: Conversational RetrieverChain searches the VectorStore to identify the most relevant code snippets for a given query.
5. Build the Conversational Chain: Customize the retriever settings and define any user-defined filters as needed.
6. Ask questions: Define a list of questions to ask about the codebase, and then use the ConversationalRetrievalChain to generate context-aware answers. The LLM (GPT-4) generates comprehensive, context-aware answers based on retrieved code snippets and conversation history.
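As a condensed sketch of this workflow (assuming the repository's files have already been loaded into `docs`; the Deep Lake dataset path is a placeholder):
```python
from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import DeepLake
from langchain.chains import ConversationalRetrievalChain

# 1-2. Chunk the loaded repository files, embed them, and store them in a VectorStore.
texts = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(docs)
db = DeepLake.from_documents(texts, OpenAIEmbeddings(), dataset_path="hub://<org>/<dataset>")

# 3-5. Construct the retriever, customize its settings, and build the conversational chain.
retriever = db.as_retriever()
retriever.search_kwargs["k"] = 10
qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(model_name="gpt-4"), retriever=retriever)

# 6. Ask context-aware questions about the codebase.
result = qa({"question": "What does the main ranking function do?", "chat_history": []})
```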
The full tutorial is available below.
- [Twitter the-algorithm codebase analysis with Deep Lake](./twitter-the-algorithm-analysis-deeplake.html): A notebook walking through how to parse GitHub source code and run conversational queries over it.
- [LangChain codebase analysis with Deep Lake](./code-analysis-deeplake.html): A notebook walking through how to analyze and do question answering over THIS code base.

@ -1,4 +1,4 @@
# QA over in-memory documents
# RAG over in-memory documents
Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our [Document chains](/docs/modules/chains/document/).
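For instance, here is a minimal sketch using `load_qa_chain` (the in-memory document is a made-up example):
```python
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.schema import Document

docs = [Document(page_content="LangChain is a framework for building LLM-powered applications.")]

# chain_type="stuff" simply stuffs all documents into a single prompt.
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff")
chain.run(input_documents=docs, question="What is LangChain?")
```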

@ -5,7 +5,7 @@
"id": "5151afed",
"metadata": {},
"source": [
"# Question Answering\n",
"# Retrieval-augmented generation (RAG)\n",
"\n",
"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/question_answering/qa.ipynb)\n",
"\n",
@ -235,7 +235,7 @@
"- `DocumentSplitters` are just one type of the more generic `DocumentTransformers`.\n",
"- See further documentation on transformers [here](/docs/modules/data_connection/document_transformers/).\n",
"- `Context-aware splitters` keep the location (\"context\") of each split in the original `Document`:\n",
" - [Markdown files](/docs/use_cases/question_answering/how_to/document-context-aware-QA)\n",
" - [Markdown files](/docs/use_cases/question_answering/document-context-aware-QA)\n",
" - [Code (py or js)](docs/integrations/document_loaders/source_code)\n",
" - [Documents](/docs/integrations/document_loaders/grobid)\n",
"\n",
@ -353,7 +353,7 @@
"Some common ways to improve on vector similarity search include:\n",
"- `MultiQueryRetriever` [generates variants of the input question](/docs/modules/data_connection/retrievers/MultiQueryRetriever) to improve retrieval.\n",
"- `Max marginal relevance` selects for [relevance and diversity](https://www.cs.cmu.edu/~jgc/publication/The_Use_MMR_Diversity_Based_LTMIR_1998.pdf) among the retrieved documents.\n",
"- Documents can be filtered during retrieval using [`metadata` filters](/docs/use_cases/question_answering/how_to/document-context-aware-QA)."
"- Documents can be filtered during retrieval using [`metadata` filters](/docs/use_cases/question_answering/document-context-aware-QA)."
]
},
{
@ -447,7 +447,7 @@
"#### Choosing LLMs\n",
"- Browse the > 90 LLM and chat model integrations [here](https://integrations.langchain.com/).\n",
"- See further documentation on LLMs and chat models [here](/docs/modules/model_io/models/).\n",
"- See a guide on local LLMS [here](/docs/modules/use_cases/question_answering/how_to/local_retrieval_qa)."
"- See a guide on local LLMS [here](/docs/modules/use_cases/question_answering/local_retrieval_qa)."
]
},
{
@ -525,7 +525,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.9.1"
}
},
"nbformat": 4,

@ -5,7 +5,7 @@
"id": "3ea857b1",
"metadata": {},
"source": [
"# Use local LLMs\n",
"# RAG using local models\n",
"\n",
"The popularity of projects like [PrivateGPT](https://github.com/imartinez/privateGPT), [llama.cpp](https://github.com/ggerganov/llama.cpp), and [GPT4All](https://github.com/nomic-ai/gpt4all) underscore the importance of running LLMs locally.\n",
"\n",
@ -413,7 +413,8 @@
}
],
"source": [
"from langchain.prompts import PromptTemplate\nfrom langchain.chains import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.chains import LLMChain\n",
"\n",
"# Prompt\n",
"prompt = PromptTemplate.from_template(\n",
@ -711,7 +712,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
"version": "3.9.1"
}
},
"nbformat": 4,

@ -5,7 +5,7 @@
"id": "66398b75",
"metadata": {},
"source": [
"# Multiple Retrieval Sources\n",
"# Retrieving from multiple sources\n",
"\n",
"Often times you may want to do retrieval over multiple sources. These can be different vectorstores (where one contains information about topic X and the other contains info about topic Y). They could also be completely different databases altogether!\n",
"\n",
@ -158,7 +158,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.9.1"
}
},
"nbformat": 4,

@ -5,7 +5,7 @@
"id": "9b5c258f",
"metadata": {},
"source": [
"# Cite sources\n",
"# Citing retrieval sources\n",
"\n",
"This notebook shows how to use OpenAI functions ability to extract citations from text."
]
@ -171,7 +171,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.9.1"
}
},
"nbformat": 4,

@ -1,7 +1,7 @@
---
sidebar_position: 1
---
# QA using a Retriever
# Using a Retriever
This example showcases question answering over an index.

@ -191,9 +191,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

@ -75,10 +75,96 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "578d6a90",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: openai in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (0.27.8)\n",
"Requirement already satisfied: tiktoken in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (0.4.0)\n",
"Requirement already satisfied: chromadb in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (0.4.4)\n",
"Requirement already satisfied: langchain in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (0.0.299)\n",
"Requirement already satisfied: requests>=2.20 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from openai) (2.31.0)\n",
"Requirement already satisfied: tqdm in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from openai) (4.64.1)\n",
"Requirement already satisfied: aiohttp in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from openai) (3.8.5)\n",
"Requirement already satisfied: regex>=2022.1.18 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from tiktoken) (2023.6.3)\n",
"Requirement already satisfied: pydantic<2.0,>=1.9 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (1.10.12)\n",
"Requirement already satisfied: chroma-hnswlib==0.7.2 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.7.2)\n",
"Requirement already satisfied: fastapi<0.100.0,>=0.95.2 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.99.1)\n",
"Requirement already satisfied: uvicorn[standard]>=0.18.3 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.23.2)\n",
"Requirement already satisfied: numpy>=1.21.6 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (1.24.4)\n",
"Requirement already satisfied: posthog>=2.4.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (3.0.1)\n",
"Requirement already satisfied: typing-extensions>=4.5.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (4.7.1)\n",
"Requirement already satisfied: pulsar-client>=3.1.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (3.2.0)\n",
"Requirement already satisfied: onnxruntime>=1.14.1 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (1.15.1)\n",
"Requirement already satisfied: tokenizers>=0.13.2 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.13.3)\n",
"Requirement already satisfied: pypika>=0.48.9 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (0.48.9)\n",
"Collecting tqdm (from openai)\n",
" Obtaining dependency information for tqdm from https://files.pythonhosted.org/packages/00/e5/f12a80907d0884e6dff9c16d0c0114d81b8cd07dc3ae54c5e962cc83037e/tqdm-4.66.1-py3-none-any.whl.metadata\n",
" Downloading tqdm-4.66.1-py3-none-any.whl.metadata (57 kB)\n",
"\u001b[2K \u001b[38;2;114;156;31m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m57.6/57.6 kB\u001b[0m \u001b[31m2.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25hRequirement already satisfied: overrides>=7.3.1 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (7.4.0)\n",
"Requirement already satisfied: importlib-resources in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from chromadb) (6.0.0)\n",
"Requirement already satisfied: PyYAML>=5.3 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (6.0.1)\n",
"Requirement already satisfied: SQLAlchemy<3,>=1.4 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (2.0.20)\n",
"Requirement already satisfied: anyio<4.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (3.7.1)\n",
"Requirement already satisfied: async-timeout<5.0.0,>=4.0.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (4.0.3)\n",
"Requirement already satisfied: dataclasses-json<0.7,>=0.5.7 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (0.5.9)\n",
"Requirement already satisfied: jsonpatch<2.0,>=1.33 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (1.33)\n",
"Requirement already satisfied: langsmith<0.1.0,>=0.0.38 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (0.0.42)\n",
"Requirement already satisfied: numexpr<3.0.0,>=2.8.4 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (2.8.5)\n",
"Requirement already satisfied: tenacity<9.0.0,>=8.1.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from langchain) (8.2.3)\n",
"Requirement already satisfied: attrs>=17.3.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (23.1.0)\n",
"Requirement already satisfied: charset-normalizer<4.0,>=2.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (3.2.0)\n",
"Requirement already satisfied: multidict<7.0,>=4.5 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (6.0.4)\n",
"Requirement already satisfied: yarl<2.0,>=1.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (1.9.2)\n",
"Requirement already satisfied: frozenlist>=1.1.1 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (1.4.0)\n",
"Requirement already satisfied: aiosignal>=1.1.2 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (1.3.1)\n",
"Requirement already satisfied: idna>=2.8 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from anyio<4.0->langchain) (3.4)\n",
"Requirement already satisfied: sniffio>=1.1 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from anyio<4.0->langchain) (1.3.0)\n",
"Requirement already satisfied: exceptiongroup in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from anyio<4.0->langchain) (1.1.3)\n",
"Requirement already satisfied: marshmallow<4.0.0,>=3.3.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from dataclasses-json<0.7,>=0.5.7->langchain) (3.20.1)\n",
"Requirement already satisfied: marshmallow-enum<2.0.0,>=1.5.1 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from dataclasses-json<0.7,>=0.5.7->langchain) (1.5.1)\n",
"Requirement already satisfied: typing-inspect>=0.4.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from dataclasses-json<0.7,>=0.5.7->langchain) (0.9.0)\n",
"Requirement already satisfied: starlette<0.28.0,>=0.27.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from fastapi<0.100.0,>=0.95.2->chromadb) (0.27.0)\n",
"Requirement already satisfied: jsonpointer>=1.9 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from jsonpatch<2.0,>=1.33->langchain) (2.4)\n",
"Requirement already satisfied: coloredlogs in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from onnxruntime>=1.14.1->chromadb) (15.0.1)\n",
"Requirement already satisfied: flatbuffers in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from onnxruntime>=1.14.1->chromadb) (23.5.26)\n",
"Requirement already satisfied: packaging in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from onnxruntime>=1.14.1->chromadb) (23.1)\n",
"Requirement already satisfied: protobuf in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from onnxruntime>=1.14.1->chromadb) (4.23.4)\n",
"Requirement already satisfied: sympy in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from onnxruntime>=1.14.1->chromadb) (1.12)\n",
"Requirement already satisfied: six>=1.5 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from posthog>=2.4.0->chromadb) (1.16.0)\n",
"Requirement already satisfied: monotonic>=1.5 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from posthog>=2.4.0->chromadb) (1.6)\n",
"Requirement already satisfied: backoff>=1.10.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from posthog>=2.4.0->chromadb) (2.2.1)\n",
"Requirement already satisfied: python-dateutil>2.1 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from posthog>=2.4.0->chromadb) (2.8.2)\n",
"Requirement already satisfied: certifi in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from pulsar-client>=3.1.0->chromadb) (2023.7.22)\n",
"Requirement already satisfied: urllib3<3,>=1.21.1 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from requests>=2.20->openai) (1.26.16)\n",
"Requirement already satisfied: click>=7.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (8.1.7)\n",
"Requirement already satisfied: h11>=0.8 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.14.0)\n",
"Requirement already satisfied: httptools>=0.5.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.6.0)\n",
"Requirement already satisfied: python-dotenv>=0.13 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (1.0.0)\n",
"Requirement already satisfied: uvloop!=0.15.0,!=0.15.1,>=0.14.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.17.0)\n",
"Requirement already satisfied: watchfiles>=0.13 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (0.19.0)\n",
"Requirement already satisfied: websockets>=10.4 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from uvicorn[standard]>=0.18.3->chromadb) (11.0.3)\n",
"Requirement already satisfied: zipp>=3.1.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from importlib-resources->chromadb) (3.16.2)\n",
"Requirement already satisfied: mypy-extensions>=0.3.0 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from typing-inspect>=0.4.0->dataclasses-json<0.7,>=0.5.7->langchain) (1.0.0)\n",
"Requirement already satisfied: humanfriendly>=9.1 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from coloredlogs->onnxruntime>=1.14.1->chromadb) (10.0)\n",
"Requirement already satisfied: mpmath>=0.19 in /Users/bagatur/langchain/.venv/lib/python3.9/site-packages (from sympy->onnxruntime>=1.14.1->chromadb) (1.3.0)\n",
"Using cached tqdm-4.66.1-py3-none-any.whl (78 kB)\n",
"Installing collected packages: tqdm\n",
" Attempting uninstall: tqdm\n",
" Found existing installation: tqdm 4.64.1\n",
" Uninstalling tqdm-4.64.1:\n",
" Successfully uninstalled tqdm-4.64.1\n",
"\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
"clarifai 9.8.1 requires tqdm==4.64.1, but you have tqdm 4.66.1 which is incompatible.\u001b[0m\u001b[31m\n",
"\u001b[0mSuccessfully installed tqdm-4.66.1\n"
]
}
],
"source": [
"!pip install openai tiktoken chromadb langchain\n",
"\n",
@ -154,7 +240,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"The article discusses the concept of building autonomous agents powered by large language models (LLMs). It explores the components of such agents, including planning, memory, and tool use. The article provides case studies and examples of proof-of-concept demos, highlighting the challenges and limitations of LLM-powered agents. It also includes references to related research papers and provides a citation for the article.\n"
"The article discusses the concept of building autonomous agents powered by large language models (LLMs). It explores the components of such agents, including planning, memory, and tool use. The article provides case studies and proof-of-concept examples of LLM-powered agents in various domains, such as scientific discovery and generative agents simulation. It also highlights the challenges and limitations of using LLMs in agent systems.\n"
]
}
],
@ -243,7 +329,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 5,
"id": "ce48b805-d98b-4e0f-8b9e-3b3e72cad3d3",
"metadata": {},
"outputs": [],
@ -265,7 +351,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 6,
"id": "6a718890-99ab-439a-8f79-b9ae9c58ad24",
"metadata": {},
"outputs": [],
@ -280,7 +366,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 7,
"id": "f189184a-673e-4530-8a6b-57b091045d87",
"metadata": {},
"outputs": [],
@ -291,7 +377,28 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 9,
"id": "c9d1da97-d590-4a96-82b2-8002d27fd7f6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatPromptTemplate(input_variables=['docs'], messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['docs'], template='The following is a set of documents:\\n{docs}\\nBased on this list of docs, please identify the main themes \\nHelpful Answer:'))])"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"reduce_prompt"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "1edb1b0d",
"metadata": {},
"outputs": [],
@ -301,7 +408,7 @@
"\n",
"# Takes a list of documents, combines them into a single string, and passes this to an LLMChain\n",
"combine_documents_chain = StuffDocumentsChain(\n",
" llm_chain=reduce_chain, document_variable_name=\"doc_summaries\"\n",
" llm_chain=reduce_chain, document_variable_name=\"docs\"\n",
")\n",
"\n",
"# Combines and iteravely reduces the mapped documents\n",
@ -325,7 +432,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 11,
"id": "22f1cdc2",
"metadata": {},
"outputs": [
@ -358,7 +465,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 12,
"id": "c7afb8c3",
"metadata": {},
"outputs": [
@ -366,25 +473,25 @@
"name": "stdout",
"output_type": "stream",
"text": [
"The main themes identified in the provided set of documents are:\n",
"Based on the list of documents provided, the main themes can be identified as follows:\n",
"\n",
"1. LLM-powered autonomous agent systems: The documents discuss the concept of building autonomous agents with large language models (LLMs) as the core controller. They explore the potential of LLMs beyond content generation and present them as powerful problem solvers.\n",
"1. LLM-powered autonomous agents: The documents discuss the concept of building agents with LLM as their core controller and highlight the potential of LLM beyond generating written content. They explore the capabilities of LLM as a general problem solver.\n",
"\n",
"2. Components of the agent system: The documents outline the key components of LLM-powered agent systems, including planning, memory, and tool use. Each component is described in detail, highlighting its role in enhancing the agent's capabilities.\n",
"2. Agent system overview: The documents provide an overview of the components that make up a LLM-powered autonomous agent system, including planning, memory, and tool use. Each component is explained in detail, highlighting its role in enhancing the agent's capabilities.\n",
"\n",
"3. Planning and task decomposition: The planning component focuses on task decomposition and self-reflection. The agent breaks down complex tasks into smaller subgoals and learns from past actions to improve future results.\n",
"3. Planning: The documents discuss how the agent breaks down large tasks into smaller subgoals and utilizes self-reflection to improve the quality of its actions and results.\n",
"\n",
"4. Memory and learning: The memory component includes short-term memory for in-context learning and long-term memory for retaining and recalling information over extended periods. The use of external vector stores for fast retrieval is also mentioned.\n",
"4. Memory: The documents explain the importance of both short-term and long-term memory in an agent system. Short-term memory is utilized for in-context learning, while long-term memory allows the agent to retain and recall information over extended periods.\n",
"\n",
"5. Tool use and external APIs: The agent learns to utilize external APIs for accessing additional information, code execution, and proprietary sources. This enhances the agent's knowledge and problem-solving abilities.\n",
"5. Tool use: The documents highlight the agent's ability to call external APIs for additional information and resources that may be missing from its pre-trained model weights. This includes accessing current information, executing code, and retrieving proprietary information.\n",
"\n",
"6. Case studies and proof-of-concept examples: The documents provide case studies and examples to demonstrate the application of LLM-powered agents in scientific discovery, generative simulations, and other domains. These examples serve as proof-of-concept for the effectiveness of the agent system.\n",
"6. Case studies and proof-of-concept examples: The documents provide examples of how LLM-powered autonomous agents can be applied in various domains, such as scientific discovery and generative agent simulations. These case studies serve as examples of the capabilities and potential applications of such agents.\n",
"\n",
"7. Challenges and limitations: The documents mention challenges associated with building LLM-powered autonomous agents, such as the limitations of finite context length, difficulties in long-term planning, and reliability issues with natural language interfaces.\n",
"7. Challenges: The documents acknowledge the challenges associated with building and utilizing LLM-powered autonomous agents, although specific challenges are not mentioned in the given set of documents.\n",
"\n",
"8. Citation and references: The documents include a citation and reference section for acknowledging the sources and inspirations for the concepts discussed.\n",
"8. Citation and references: The documents include a citation and reference section, indicating that the information presented is based on existing research and sources.\n",
"\n",
"Overall, the main themes revolve around the development and capabilities of LLM-powered autonomous agent systems, including their components, planning and task decomposition, memory and learning mechanisms, tool use and external APIs, case studies and proof-of-concept examples, challenges and limitations, and the importance of proper citation and references.\n"
"Overall, the main themes in the provided documents revolve around LLM-powered autonomous agents, their components and capabilities, planning, memory, tool use, case studies, and challenges.\n"
]
}
],
@ -428,17 +535,17 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 13,
"id": "de1dc10e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The GPT-Engineer project aims to create a repository of code for specific tasks specified in natural language. It involves breaking down tasks into smaller components and seeking clarification from the user when needed. The project emphasizes the importance of implementing every detail of the architecture as code and provides guidelines for file organization, code structure, and dependencies. However, there are challenges in long-term planning and task decomposition, as well as the reliability of the natural language interface. The system has limited communication bandwidth and struggles to adjust plans when faced with unexpected errors. The reliability of model outputs is questionable, as formatting errors and rebellious behavior can occur. The conversation also includes instructions for writing the code, including laying out the core classes, functions, and methods, and providing the code in a markdown code block format. The user is reminded to ensure that the code is fully functional and follows best practices for file naming, imports, and types. The project is powered by LLM (Large Language Models) and incorporates prompting techniques from various research papers.'"
"'The article explores the concept of building autonomous agents powered by large language models (LLMs) and their potential as problem solvers. It discusses different approaches to task decomposition, the integration of self-reflection into LLM-based agents, and the use of external classical planners for long-horizon planning. The new context introduces the Chain of Hindsight (CoH) approach and Algorithm Distillation (AD) for training models to produce better outputs. It also discusses different types of memory and the use of external memory for fast retrieval. The article explores the concept of tool use and introduces the MRKL system and experiments on fine-tuning LLMs to use external tools. It introduces HuggingGPT, a framework that uses ChatGPT as a task planner, and discusses the challenges of using LLM-powered agents in real-world scenarios. The article concludes with case studies on scientific discovery agents and the use of LLM-powered agents in anticancer drug discovery. It also introduces the concept of generative agents that combine LLM with memory, planning, and reflection mechanisms. The conversation samples provided discuss the implementation of a game architecture and the challenges in building LLM-centered agents. The article provides references to related research papers and resources for further exploration.'"
]
},
"execution_count": 22,
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
@ -458,7 +565,7 @@
},
{
"cell_type": "code",
"execution_count": 28,
"execution_count": 14,
"id": "f86c8072",
"metadata": {},
"outputs": [],
@ -494,7 +601,7 @@
},
{
"cell_type": "code",
"execution_count": 29,
"execution_count": 15,
"id": "d9600b67-79d4-4f85-aba2-9fe81fa29f49",
"metadata": {},
"outputs": [
@ -502,7 +609,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"L'articolo discute il concetto di costruire agenti autonomi utilizzando LLM (large language model) come controller principale. Esplora i diversi componenti di un sistema di agenti alimentato da LLM, inclusa la pianificazione, la memoria e l'uso di strumenti. Dimostrazioni di concetto come AutoGPT mostrano la possibilità di creare agenti autonomi con LLM come controller principale. Approcci come Chain of Thought, Tree of Thoughts, LLM+P, ReAct e Reflexion consentono agli agenti autonomi di pianificare, riflettere su se stessi e migliorare iterativamente. Tuttavia, ci sono sfide legate alla lunghezza del contesto, alla pianificazione a lungo termine e alla decomposizione delle attività. Inoltre, l'affidabilità dell'interfaccia di linguaggio naturale tra LLM e componenti esterni come la memoria e gli strumenti è incerta. Nonostante ciò, l'uso di LLM come router per indirizzare le richieste ai moduli esperti più adatti è stato proposto come architettura neuro-simbolica per agenti autonomi nel sistema MRKL. L'articolo fa riferimento a diverse pubblicazioni che approfondiscono l'argomento, tra cui Chain of Thought, Tree of Thoughts, LLM+P, ReAct, Reflexion, e MRKL Systems.\n"
"Il presente articolo discute il concetto di costruire agenti autonomi utilizzando LLM (large language model) come controller principale. Esplora i diversi componenti di un sistema di agenti alimentato da LLM, tra cui la pianificazione, la memoria e l'uso degli strumenti. Dimostrazioni di concetto come AutoGPT mostrano il potenziale di LLM come risolutore generale di problemi. Approcci come Chain of Thought, Tree of Thoughts, LLM+P, ReAct e Reflexion consentono agli agenti autonomi di pianificare, riflettere su se stessi e migliorarsi iterativamente. Tuttavia, ci sono sfide da affrontare, come la limitata capacità di contesto che limita l'inclusione di informazioni storiche dettagliate e la difficoltà di pianificazione a lungo termine e decomposizione delle attività. Inoltre, l'affidabilità dell'interfaccia di linguaggio naturale tra LLM e componenti esterni come la memoria e gli strumenti è incerta, poiché i LLM possono commettere errori di formattazione e mostrare comportamenti ribelli. Nonostante ciò, il sistema AutoGPT viene menzionato come esempio di dimostrazione di concetto che utilizza LLM come controller principale per agenti autonomi. Questo articolo fa riferimento a diverse fonti che esplorano approcci e applicazioni specifiche di LLM nell'ambito degli agenti autonomi.\n"
]
}
],
@ -512,7 +619,7 @@
},
{
"cell_type": "code",
"execution_count": 32,
"execution_count": 16,
"id": "5f91a8eb-daa5-4191-ace4-01765801db3e",
"metadata": {},
"outputs": [
@ -522,9 +629,9 @@
"text": [
"This article discusses the concept of building autonomous agents using LLM (large language model) as the core controller. The article explores the different components of an LLM-powered agent system, including planning, memory, and tool use. It also provides examples of proof-of-concept demos and highlights the potential of LLM as a general problem solver.\n",
"\n",
"Questo articolo discute del concetto di costruire agenti autonomi utilizzando LLM (large language model) come controller principale. L'articolo esplora i diversi componenti di un sistema di agenti alimentato da LLM, inclusa la pianificazione, la memoria e l'uso degli strumenti. Vengono anche forniti esempi di dimostrazioni di proof-of-concept e si evidenzia il potenziale di LLM come risolutore generale di problemi. Inoltre, vengono presentati approcci come Chain of Thought, Tree of Thoughts, LLM+P, ReAct e Reflexion che consentono agli agenti autonomi di pianificare, riflettere su se stessi e migliorare iterativamente.\n",
"Questo articolo discute del concetto di costruire agenti autonomi utilizzando LLM (large language model) come controller principale. L'articolo esplora i diversi componenti di un sistema di agenti alimentato da LLM, inclusa la pianificazione, la memoria e l'uso degli strumenti. Vengono forniti anche esempi di dimostrazioni di proof-of-concept e si evidenzia il potenziale di LLM come risolutore generale di problemi. Inoltre, vengono presentati approcci come Chain of Thought, Tree of Thoughts, LLM+P, ReAct e Reflexion che consentono agli agenti autonomi di pianificare, riflettere su se stessi e migliorare iterativamente.\n",
"\n",
"Questo articolo discute del concetto di costruire agenti autonomi utilizzando LLM (large language model) come controller principale. L'articolo esplora i diversi componenti di un sistema di agenti alimentato da LLM, inclusa la pianificazione, la memoria e l'uso degli strumenti. Vengono anche forniti esempi di dimostrazioni di proof-of-concept e si evidenzia il potenziale di LLM come risolutore generale di problemi. Inoltre, vengono presentati approcci come Chain of Thought, Tree of Thoughts, LLM+P, ReAct e Reflexion che consentono agli agenti autonomi di pianificare, riflettere su se stessi e migliorare iterativamente. Il nuovo contesto riguarda l'approccio Chain of Hindsight (CoH) che permette al modello di migliorare autonomamente i propri output attraverso un processo di apprendimento supervisionato. Viene anche presentato l'approccio Algorithm Distillation (AD) che applica lo stesso concetto alle traiettorie di apprendimento per compiti di reinforcement learning.\n"
"Questo articolo discute del concetto di costruire agenti autonomi utilizzando LLM (large language model) come controller principale. L'articolo esplora i diversi componenti di un sistema di agenti alimentato da LLM, inclusa la pianificazione, la memoria e l'uso degli strumenti. Vengono forniti anche esempi di dimostrazioni di proof-of-concept e si evidenzia il potenziale di LLM come risolutore generale di problemi. Inoltre, vengono presentati approcci come Chain of Thought, Tree of Thoughts, LLM+P, ReAct e Reflexion che consentono agli agenti autonomi di pianificare, riflettere su se stessi e migliorare iterativamente. Il nuovo contesto riguarda l'approccio Chain of Hindsight (CoH) che permette al modello di migliorare autonomamente i propri output attraverso un processo di apprendimento supervisionato. Viene anche presentato l'approccio Algorithm Distillation (AD) che applica lo stesso concetto alle traiettorie di apprendimento per compiti di reinforcement learning.\n"
]
}
],
@ -532,20 +639,56 @@
"print(\"\\n\\n\".join(result[\"intermediate_steps\"][:3]))"
]
},
{
"cell_type": "markdown",
"id": "0d8a8398-a43c-4f14-933c-c0743ae6ec40",
"metadata": {},
"source": [
"## Splitting and summarizing in a single chain\n",
"For convenience, we can wrap both the text splitting of our long document and summarizing in a single `AnalyzeDocumentsChain`."
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 17,
"id": "0ddd522e-30dc-4f6a-b993-c4f97e656c4f",
"metadata": {},
"outputs": [
{
"ename": "ValueError",
"evalue": "`run` not supported when there is not exactly one output key. Got ['output_text', 'intermediate_steps'].",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mValueError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[0;32mIn[17], line 4\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mlangchain\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mchains\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m AnalyzeDocumentChain\n\u001b[1;32m 3\u001b[0m summarize_document_chain \u001b[38;5;241m=\u001b[39m AnalyzeDocumentChain(combine_docs_chain\u001b[38;5;241m=\u001b[39mchain, text_splitter\u001b[38;5;241m=\u001b[39mtext_splitter)\n\u001b[0;32m----> 4\u001b[0m \u001b[43msummarize_document_chain\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrun\u001b[49m\u001b[43m(\u001b[49m\u001b[43mdocs\u001b[49m\u001b[43m[\u001b[49m\u001b[38;5;241;43m0\u001b[39;49m\u001b[43m]\u001b[49m\u001b[43m)\u001b[49m\n",
"File \u001b[0;32m~/langchain/libs/langchain/langchain/chains/base.py:496\u001b[0m, in \u001b[0;36mChain.run\u001b[0;34m(self, callbacks, tags, metadata, *args, **kwargs)\u001b[0m\n\u001b[1;32m 459\u001b[0m \u001b[38;5;250m\u001b[39m\u001b[38;5;124;03m\"\"\"Convenience method for executing chain.\u001b[39;00m\n\u001b[1;32m 460\u001b[0m \n\u001b[1;32m 461\u001b[0m \u001b[38;5;124;03mThe main difference between this method and `Chain.__call__` is that this\u001b[39;00m\n\u001b[0;32m (...)\u001b[0m\n\u001b[1;32m 493\u001b[0m \u001b[38;5;124;03m # -> \"The temperature in Boise is...\"\u001b[39;00m\n\u001b[1;32m 494\u001b[0m \u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m 495\u001b[0m \u001b[38;5;66;03m# Run at start to make sure this is possible/defined\u001b[39;00m\n\u001b[0;32m--> 496\u001b[0m _output_key \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mself\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_run_output_key\u001b[49m\n\u001b[1;32m 498\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m args \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m kwargs:\n\u001b[1;32m 499\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mlen\u001b[39m(args) \u001b[38;5;241m!=\u001b[39m \u001b[38;5;241m1\u001b[39m:\n",
"File \u001b[0;32m~/langchain/libs/langchain/langchain/chains/base.py:445\u001b[0m, in \u001b[0;36mChain._run_output_key\u001b[0;34m(self)\u001b[0m\n\u001b[1;32m 442\u001b[0m \u001b[38;5;129m@property\u001b[39m\n\u001b[1;32m 443\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_run_output_key\u001b[39m(\u001b[38;5;28mself\u001b[39m) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m \u001b[38;5;28mstr\u001b[39m:\n\u001b[1;32m 444\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mlen\u001b[39m(\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39moutput_keys) \u001b[38;5;241m!=\u001b[39m \u001b[38;5;241m1\u001b[39m:\n\u001b[0;32m--> 445\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mValueError\u001b[39;00m(\n\u001b[1;32m 446\u001b[0m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m`run` not supported when there is not exactly \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 447\u001b[0m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mone output key. Got \u001b[39m\u001b[38;5;132;01m{\u001b[39;00m\u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39moutput_keys\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 448\u001b[0m )\n\u001b[1;32m 449\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[38;5;28mself\u001b[39m\u001b[38;5;241m.\u001b[39moutput_keys[\u001b[38;5;241m0\u001b[39m]\n",
"\u001b[0;31mValueError\u001b[0m: `run` not supported when there is not exactly one output key. Got ['output_text', 'intermediate_steps']."
]
}
],
"source": [
"from langchain.chains import AnalyzeDocumentChain\n",
"\n",
"summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=chain, text_splitter=text_splitter)\n",
"summarize_document_chain.run(docs[0])"
]
},
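{
"cell_type": "markdown",
"id": "multiple-output-keys-note",
"metadata": {},
"source": [
"The `ValueError` above is expected: `run` requires a chain with exactly one output key, and this chain returns both `output_text` and `intermediate_steps`. Calling the chain directly returns the full output dict instead. A minimal sketch, assuming the `summarize_document_chain` and `docs` defined above:\n",
"\n",
"```python\n",
"# Invoke the chain with its input key to get all output keys back.\n",
"result = summarize_document_chain({\"input_document\": docs[0].page_content})\n",
"result[\"output_text\"]\n",
"```"
]
},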
{
"cell_type": "code",
"execution_count": null,
"id": "d8df14d0-d548-4a5d-b00a-f4cfd64f1076",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "poetry-venv",
"language": "python",
"name": "python3"
"name": "poetry-venv"
},
"language_info": {
"codemirror_mode": {

@ -1,70 +0,0 @@
```python
with open("../../state_of_the_union.txt") as f:
state_of_the_union = f.read()
```
## Summarize
Let's take a look at `AnalyzeDocumentChain` in action below, using it to summarize a long document.
```python
from langchain.llms import OpenAI
from langchain.chains.summarize import load_summarize_chain
llm = OpenAI(temperature=0)
summary_chain = load_summarize_chain(llm, chain_type="map_reduce")
```
```python
from langchain.chains import AnalyzeDocumentChain
```
```python
summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=summary_chain)
```
```python
summarize_document_chain.run(state_of_the_union)
```
<CodeOutputBlock lang="python">
```
" In this speech, President Biden addresses the American people and the world, discussing the recent aggression of Russia's Vladimir Putin in Ukraine and the US response. He outlines economic sanctions and other measures taken to hold Putin accountable, and announces the US Department of Justice's task force to go after the crimes of Russian oligarchs. He also announces plans to fight inflation and lower costs for families, invest in American manufacturing, and provide military, economic, and humanitarian assistance to Ukraine. He calls for immigration reform, protecting the rights of women, and advancing the rights of LGBTQ+ Americans, and pays tribute to military families. He concludes with optimism for the future of America."
```
</CodeOutputBlock>
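By default, `AnalyzeDocumentChain` splits the input with a built-in text splitter before handing the chunks to the combine chain; you can also pass your own splitter to control chunk size. A minimal sketch, assuming the `llm` and `summary_chain` above (the splitter settings are illustrative):
```python
from langchain.chains import AnalyzeDocumentChain
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Split on natural boundaries into ~2000-character chunks with some overlap,
# so each piece fits comfortably in the model's context window.
text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)
summarize_document_chain = AnalyzeDocumentChain(
    combine_docs_chain=summary_chain,
    text_splitter=text_splitter,
)
summarize_document_chain.run(state_of_the_union)
```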
## Question Answering
Let's take a look at this using a question answering chain.
```python
from langchain.chains.question_answering import load_qa_chain
```
```python
qa_chain = load_qa_chain(llm, chain_type="map_reduce")
```
```python
qa_document_chain = AnalyzeDocumentChain(combine_docs_chain=qa_chain)
```
```python
qa_document_chain.run(input_document=state_of_the_union, question="what did the president say about justice breyer?")
```
<CodeOutputBlock lang="python">
```
' The president thanked Justice Breyer for his service.'
```
</CodeOutputBlock>
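The same wrapper works with any combine-documents chain. For example, a refine-style QA chain revises its answer chunk by chunk rather than mapping and reducing — a sketch under the same assumptions as above (`llm` and `state_of_the_union` already defined):
```python
# "refine" iteratively updates the answer as each chunk is processed.
qa_refine_chain = load_qa_chain(llm, chain_type="refine")
qa_refine_document_chain = AnalyzeDocumentChain(combine_docs_chain=qa_refine_chain)
qa_refine_document_chain.run(
    input_document=state_of_the_union,
    question="what did the president say about justice breyer?",
)
```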

@ -0,0 +1,8 @@
---
title: Cookbook
hide_table_of_contents: true
---
# Cookbook
The page you're looking for has been moved to the [cookbook section of the repo](https://github.com/langchain-ai/langchain/tree/master/cookbook) as a notebook.

@ -1,5 +1,209 @@
{
"redirects": [
{
"source": "/docs/use_cases/more/agents/autonomous_agents/:path*",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/agents/agent_simulations/:path*",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/question_answering/how_to/code/code-analysis-deeplake",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/code_writing/cpal",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/agents/agents/custom_agent_with_plugin_retrieval",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/agents/agents/custom_agent_with_plugin_retrieval_using_plugnplai",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/qa_structured/integrations/databricks",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/qa_structured/integrations/elasticsearch",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/question_answering/how_to/flare",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/question_answering/how_to/hyde",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/code_writing/",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/learned_prompt_optimization",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/code_writing/llm_bash",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/self_check/llm_checker",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/code_writing/llm_math",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/self_check/llm_summarization_checker",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/code_writing/llm_symbolic_math",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/agents/multi_modal/multi_modal_output_agent",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/qa_structured/integrations/myscale_vector_sql",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/question_answering/integrations/openai_functions_retrieval_qa",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/code_writing/pal",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/agents/agents/sales_agent_with_context",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/question_answering/integrations/semantic-search-over-chat",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/self_check/smart_llm",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/qa_structured/integrations/sqlite",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/graph/tot",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/question_answering/how_to/code/twitter-the-algorithm-analysis-deeplake",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/agents/agents/wikibase_agent",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/data_generation",
"destination": "/docs/use_cases/data_generation"
},
{
"source": "/docs/use_cases/more/graph/:path*",
"destination": "/docs/use_cases/graph/:path*"
},
{
"source": "/docs/use_cases/more/graph(/?)",
"destination": "/docs/use_cases/graph/"
},
{
"source": "/docs/use_cases/question_answering/how_to/chat_vector_db",
"destination": "/docs/use_cases/question_answering/chat_vector_db"
},
{
"source": "/docs/use_cases/code_understanding",
"destination": "/docs/use_cases/question_answering/code_understanding"
},
{
"source": "/docs/use_cases/question_answering/how_to/conversational_retrieval_agents",
"destination": "/docs/use_cases/question_answering/conversational_retrieval_agents"
},
{
"source": "/docs/use_cases/question_answering/how_to/document-context-aware-QA",
"destination": "/docs/use_cases/question_answering/document-context-aware-QA"
},
{
"source": "/docs/use_cases/question_answering/question_answering",
"destination": "/docs/use_cases/question_answering/"
},
{
"source": "/docs/use_cases/question_answering/how_to/local_retrieval_qa",
"destination": "/docs/use_cases/question_answering/local_retrieval_qa"
},
{
"source": "/docs/use_cases/question_answering/how_to/multi_retrieval_qa_router",
"destination": "/docs/use_cases/question_answering/multi_retrieval_qa_router"
},
{
"source": "/docs/use_cases/question_answering/how_to/multiple_retrieval",
"destination": "/docs/use_cases/question_answering/multiple_retrieval"
},
{
"source": "/docs/use_cases/question_answering/how_to/qa_citations",
"destination": "/docs/use_cases/question_answering/qa_citations"
},
{
"source": "/docs/use_cases/question_answering/how_to/question_answering",
"destination": "/docs/use_cases/question_answering/question_answering"
},
{
"source": "/docs/use_cases/question_answering/how_to/vector_db_qa",
"destination": "/docs/use_cases/question_answering/vector_db_qa"
},
{
"source": "/docs/use_cases/question_answering/how_to/vector_db_text_generation",
"destination": "/docs/use_cases/question_answering/vector_db_text_generation"
},
{
"source": "/docs/use_cases/more/agents/agent_simulations(/?)",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/agents/agents(/?)",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/agents/agents/camel_role_playing",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/agents(/?)",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/agents/autonomous_agents(/?)",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/more/self_check/",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/question_answering/how_to/analyze_document",
"destination": "/cookbook"
},
{
"source": "/docs/use_cases/question_answering/how_to/code/",
"destination": "/cookbook"
},
{
"source": "/docs/modules/agents/agents/examples/mrkl_chat(.html?)",
"destination": "/docs/modules/agents/"
@ -13,7 +217,7 @@
"destination": "/docs/integrations/providers/"
},
{
"source": "/docs/integrations/platforms(/?)",
"source": "/docs/integrations/platforms",
"destination": "/docs/integrations/providers/"
},
{
@ -1194,35 +1398,23 @@
},
{
"source": "/en/latest/modules/chains/examples/flare.html",
"destination": "/docs/use_cases/question_answering/how_to/flare"
},
{
"source": "/docs/use_cases/graph/graph_cypher_qa",
"destination": "/docs/use_cases/more/graph/graph_cypher_qa"
"destination": "/cookbook"
},
{
"source": "/en/latest/modules/chains/examples/graph_cypher_qa.html",
"destination": "/docs/use_cases/more/graph/graph_cypher_qa"
},
{
"source": "/docs/use_cases/graph/graph_nebula_qa",
"destination": "/docs/use_cases/more/graph/graph_nebula_qa"
"destination": "/docs/use_cases/graph/graph_cypher_qa"
},
{
"source": "/en/latest/modules/chains/examples/graph_nebula_qa.html",
"destination": "/docs/use_cases/more/graph/graph_nebula_qa"
},
{
"source": "/docs/use_cases/graph/graph_qa",
"destination": "/docs/use_cases/more/graph/graph_qa"
"destination": "/docs/use_cases/graph/graph_nebula_qa"
},
{
"source": "/en/latest/modules/chains/index_examples/graph_qa.html",
"destination": "/docs/use_cases/more/graph/graph_qa"
"destination": "/docs/use_cases/graph_qa"
},
{
"source": "/en/latest/modules/chains/index_examples/hyde.html",
"destination": "/docs/use_cases/question_answering/how_to/hyde"
"destination": "/cookbook"
},
{
"source": "/en/latest/modules/chains/examples/llm_bash.html",
@ -1258,7 +1450,7 @@
},
{
"source": "/en/latest/modules/chains/index_examples/vector_db_text_generation.html",
"destination": "/docs/use_cases/question_answering/how_to/vector_db_text_generation"
"destination": "/docs/use_cases/question_answering/vector_db_text_generation"
},
{
"source": "/en/latest/modules/chains/generic/router.html",
@ -3197,40 +3389,8 @@
"destination": "/docs/modules/model_io/prompts/prompt_templates/prompt_serialization"
},
{
"source": "/en/latest/use_cases/agent_simulations/camel_role_playing.html",
"destination": "/docs/use_cases/agent_simulations/camel_role_playing"
},
{
"source": "/en/latest/use_cases/agent_simulations/characters.html",
"destination": "/docs/use_cases/agent_simulations/characters"
},
{
"source": "/en/latest/use_cases/agent_simulations/gymnasium.html",
"destination": "/docs/use_cases/agent_simulations/gymnasium"
},
{
"source": "/en/latest/use_cases/agent_simulations/multi_player_dnd.html",
"destination": "/docs/use_cases/agent_simulations/multi_player_dnd"
},
{
"source": "/en/latest/use_cases/agent_simulations/multiagent_authoritarian.html",
"destination": "/docs/use_cases/agent_simulations/multiagent_authoritarian"
},
{
"source": "/en/latest/use_cases/agent_simulations/multiagent_bidding.html",
"destination": "/docs/use_cases/agent_simulations/multiagent_bidding"
},
{
"source": "/en/latest/use_cases/agent_simulations/petting_zoo.html",
"destination": "/docs/use_cases/agent_simulations/petting_zoo"
},
{
"source": "/en/latest/use_cases/agent_simulations/two_agent_debate_tools.html",
"destination": "/docs/use_cases/agent_simulations/two_agent_debate_tools"
},
{
"source": "/en/latest/use_cases/agent_simulations/two_player_dnd.html",
"destination": "/docs/use_cases/agent_simulations/two_player_dnd"
"source": "/en/latest/use_cases/agent_simulations/:path*",
"destination": "/cookbook"
},
{
"source": "/en/latest/use_cases/agents/baby_agi.html",
@ -3269,24 +3429,8 @@
"destination": "/docs/use_cases/apis"
},
{
"source": "/en/latest/use_cases/autonomous_agents/autogpt.html",
"destination": "/docs/use_cases/autonomous_agents/autogpt"
},
{
"source": "/en/latest/use_cases/autonomous_agents/baby_agi.html",
"destination": "/docs/use_cases/autonomous_agents/baby_agi"
},
{
"source": "/en/latest/use_cases/autonomous_agents/baby_agi_with_agent.html",
"destination": "/docs/use_cases/autonomous_agents/baby_agi_with_agent"
},
{
"source": "/en/latest/use_cases/autonomous_agents/marathon_times.html",
"destination": "/docs/use_cases/autonomous_agents/marathon_times"
},
{
"source": "/en/latest/use_cases/autonomous_agents/meta_prompt.html",
"destination": "/docs/use_cases/autonomous_agents/meta_prompt"
"source": "/en/latest/use_cases/autonomous_agents/:path*",
"destination": "/cookbook"
},
{
"source": "/en/latest/use_cases/chatbots/voice_assistant.html",
@ -3490,23 +3634,23 @@
},
{
"source": "/docs/modules/chains/additional/analyze_document",
"destination": "/docs/use_cases/question_answering/how_to/analyze_document"
"destination": "/docs/use_cases/question_answering/analyze_document"
},
{
"source": "/docs/modules/chains/popular/chat_vector_db",
"destination": "/docs/use_cases/question_answering/how_to/chat_vector_db"
"destination": "/docs/use_cases/question_answering/chat_vector_db"
},
{
"source": "/docs/modules/chains/additional/multi_retrieval_qa_router",
"destination": "/docs/use_cases/question_answering/how_to/multi_retrieval_qa_router"
"destination": "/docs/use_cases/question_answering/multi_retrieval_qa_router"
},
{
"source": "/docs/modules/chains/additional/question_answering",
"destination": "/docs/use_cases/question_answering/how_to/question_answering"
"destination": "/docs/use_cases/question_answering/question_answering"
},
{
"source": "/docs/modules/chains/popular/vector_db_qa",
"destination": "/docs/use_cases/question_answering/how_to/vector_db_qa"
"destination": "/docs/use_cases/question_answering/vector_db_qa"
},
{
"source": "/docs/modules/chains/popular/summarize",
@ -3584,113 +3728,61 @@
"source": "/docs/use_cases/code_writing(/?)",
"destination": "/docs/use_cases/more/code_writing/"
},
{
"source": "/docs/use_cases/graph(/?)",
"destination": "/docs/use_cases/more/graph/"
},
{
"source": "/docs/use_cases/graph/graph_arangodb_qa",
"destination": "/docs/use_cases/more/graph/graph_arangodb_qa"
},
{
"source": "/docs/modules/chains/additional/graph_arangodb_qa",
"destination": "/docs/use_cases/more/graph/graph_arangodb_qa"
},
{
"source": "/docs/use_cases/graph/graph_cypher_qa",
"destination": "/docs/use_cases/more/graph/graph_cypher_qa"
"destination": "/docs/use_cases/graph/graph_arangodb_qa"
},
{
"source": "/docs/modules/chains/additional/graph_cypher_qa",
"destination": "/docs/use_cases/more/graph/graph_cypher_qa"
},
{
"source": "/docs/use_cases/graph/graph_hugegraph_qa",
"destination": "/docs/use_cases/more/graph/graph_hugegraph_qa"
"destination": "/docs/use_cases/graph/graph_cypher_qa"
},
{
"source": "/docs/modules/chains/additional/graph_hugegraph_qa",
"destination": "/docs/use_cases/more/graph/graph_hugegraph_qa"
},
{
"source": "/docs/use_cases/graph/graph_kuzu_qa",
"destination": "/docs/use_cases/more/graph/graph_kuzu_qa"
"destination": "/docs/use_cases/graph/graph_hugegraph_qa"
},
{
"source": "/docs/modules/chains/additional/graph_kuzu_qa",
"destination": "/docs/use_cases/more/graph/graph_kuzu_qa"
},
{
"source": "/docs/use_cases/graph/graph_falkordb_qa",
"destination": "/docs/use_cases/more/graph/graph_falkordb_qa"
"destination": "/docs/use_cases/graph/graph_kuzu_qa"
},
{
"source": "/docs/modules/chains/additional/graph_falkordb_qa",
"destination": "/docs/use_cases/more/graph/graph_falkordb_qa"
},
{
"source": "/docs/use_cases/graph/graph_nebula_qa",
"destination": "/docs/use_cases/more/graph/graph_nebula_qa"
"destination": "/docs/use_cases/graph/graph_falkordb_qa"
},
{
"source": "/docs/modules/chains/additional/graph_nebula_qa",
"destination": "/docs/use_cases/more/graph/graph_nebula_qa"
},
{
"source": "/docs/use_cases/graph/graph_qa",
"destination": "/docs/use_cases/more/graph/graph_qa"
"destination": "/docs/use_cases/graph/graph_nebula_qa"
},
{
"source": "/docs/modules/chains/additional/graph_qa",
"destination": "/docs/use_cases/more/graph/graph_qa"
},
{
"source": "/docs/use_cases/graph/graph_sparql_qa",
"destination": "/docs/use_cases/more/graph/graph_sparql_qa"
"destination": "/docs/use_cases/graph/graph_qa"
},
{
"source": "/docs/modules/chains/additional/graph_sparql_qa",
"destination": "/docs/use_cases/more/graph/graph_sparql_qa"
},
{
"source": "/docs/use_cases/graph/neptune_cypher_qa",
"destination": "/docs/use_cases/more/graph/neptune_cypher_qa"
"destination": "/docs/use_cases/graph/graph_sparql_qa"
},
{
"source": "/docs/modules/chains/additional/neptune_cypher_qa",
"destination": "/docs/use_cases/more/graph/neptune_cypher_qa"
},
{
"source": "/docs/use_cases/graph/tot",
"destination": "/docs/use_cases/more/graph/tot"
"destination": "/docs/use_cases/graph/neptune_cypher_qa"
},
{
"source": "/docs/modules/chains/additional/tot",
"destination": "/docs/use_cases/more/graph/tot"
},
{
"source": "/docs/use_cases/question_answering//document-context-aware-QA",
"destination": "/docs/use_cases/question_answering/how_to/document-context-aware-QA"
"destination": "/docs/use_cases/graph/tot"
},
{
"source": "/docs/modules/chains/additional/flare",
"destination": "/docs/use_cases/question_answering/how_to/flare"
"destination": "/cookbook"
},
{
"source": "/docs/modules/chains/additional/hyde",
"destination": "/docs/use_cases/question_answering/how_to/hyde"
},
{
"source": "/docs/use_cases/question_answering//local_retrieval_qa",
"destination": "/docs/use_cases/question_answering/how_to/local_retrieval_qa"
"destination": "/cookbook"
},
{
"source": "/docs/modules/chains/additional/qa_citations",
"destination": "/docs/use_cases/question_answering/how_to/qa_citations"
"destination": "/docs/use_cases/question_answering/qa_citations"
},
{
"source": "/docs/modules/chains/additional/vector_db_text_generation",
"destination": "/docs/use_cases/question_answering/how_to/vector_db_text_generation"
"destination": "/docs/use_cases/question_answering/vector_db_text_generation"
},
{
"source": "/docs/modules/chains/additional/openai_functions_retrieval_qa",
@ -3723,8 +3815,7 @@
{
"source": "/docs/use_cases/self_check(/?)",
"destination": "/docs/use_cases/more/self_check/"
},
{
}, {
"source": "/docs/modules/chains/additional/elasticsearch_database",
"destination": "/docs/use_cases/qa_structured/integrations/elasticsearch"
},
