mirror of https://github.com/hwchase17/langchain
Compare commits
12 Commits
1ca983e69a
...
8c6e13fbe4
| Author | SHA1 | Date |
|---|---|---|
| Erick Friis | 8c6e13fbe4 | 2 weeks ago |
| Eugene Yurtsev | 39e9b644b9 | 2 weeks ago |
| Erick Friis | 3db85cbb5b | 2 weeks ago |
| ccurme | 9c2828aaa8 | 2 weeks ago |
| Erick Friis | 8580e350be | 2 weeks ago |
| Anthony Chu | c735849e76 | 2 weeks ago |
| Erick Friis | 02701c277f | 2 weeks ago |
| ccurme | 81ae184cc9 | 2 weeks ago |
| Erick Friis | 13b01104c9 | 2 weeks ago |
| ccurme | 375f447e58 | 2 weeks ago |
| Erick Friis | 2be4b1b2c9 | 2 weeks ago |
| Erick Friis | d1fc841b1a | 2 weeks ago |
@ -1,36 +0,0 @@
# langchain

## 0.1.0 (Jan 5, 2024)

#### Deleted

No deletions.

#### Deprecated

Deprecated classes and methods will be removed in 0.2.0.

| Deprecated | Alternative | Reason |
|---------------------------------|-----------------------------------|------------------------------------------------|
| ChatVectorDBChain | ConversationalRetrievalChain | More general to all retrievers |
| create_ernie_fn_chain | create_ernie_fn_runnable | Use LCEL under the hood |
| NatBotChain | | Not used |
| create_openai_fn_chain | create_openai_fn_runnable | Use LCEL under the hood |
| create_structured_output_chain | create_structured_output_runnable | Use LCEL under the hood |
| load_query_constructor_chain | load_query_constructor_runnable | Use LCEL under the hood |
| VectorDBQA | RetrievalQA | More general to all retrievers |
| SequentialChain | LCEL | Obviated by LCEL |
| SimpleSequentialChain | LCEL | Obviated by LCEL |
| TransformChain | LCEL/RunnableLambda | Obviated by LCEL |
| create_tagging_chain | create_structured_output_runnable | Use LCEL under the hood |
| ChatAgent | create_react_agent | Use LCEL builder over a class |
| ConversationalAgent | create_react_agent | Use LCEL builder over a class |
| ConversationalChatAgent | create_json_chat_agent | Use LCEL builder over a class |
| initialize_agent | Individual create agent methods | Individual create agent methods are more clear |
| ZeroShotAgent | create_react_agent | Use LCEL builder over a class |
| OpenAIFunctionsAgent | create_openai_functions_agent | Use LCEL builder over a class |
| OpenAIMultiFunctionsAgent | create_openai_tools_agent | Use LCEL builder over a class |
| SelfAskWithSearchAgent | create_self_ask_with_search | Use LCEL builder over a class |
| StructuredChatAgent | create_structured_chat_agent | Use LCEL builder over a class |
| XMLAgent | create_xml_agent | Use LCEL builder over a class |
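
For example, a hedged sketch of moving off the deprecated `initialize_agent` (assumes an existing `llm` and `tools` list; the hub prompt name is illustrative of the standard ReAct prompt):

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent

# Before (deprecated):
# agent_executor = initialize_agent(tools, llm, agent="zero-shot-react-description")

# After: build the agent with the LCEL-based constructor.
prompt = hub.pull("hwchase17/react")  # standard ReAct prompt from the LangChain Hub
agent = create_react_agent(llm=llm, tools=tools, prompt=prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "What is 2 + 2?"})
```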
@ -1,27 +1,10 @@
# langchain-core

## 0.1.7 (Jan 5, 2024)

#### Deleted

No deletions.

## 0.1.x

#### Deprecated

- `BaseChatModel` methods `__call__`, `call_as_llm`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.invoke` instead.
- `BaseChatModel` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.ainvoke` instead.
- `BaseLLM` methods `__call__`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseLLM.invoke` instead.
- `BaseLLM` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseLLM.ainvoke` instead.
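
A minimal sketch of the replacement calls (model choice is illustrative; any `BaseChatModel` or `BaseLLM` works the same way):

```python
import asyncio

from langchain_openai import ChatOpenAI

llm = ChatOpenAI()  # illustrative model

# Before (deprecated): llm.predict("Tell me a joke")
msg = llm.invoke("Tell me a joke")  # returns an AIMessage

async def main() -> None:
    # Before (deprecated): await llm.apredict("Tell me a joke")
    await llm.ainvoke("Tell me a joke")

asyncio.run(main())
```
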
#### Fixed

- Restrict recursive URL scraping: [#15559](https://github.com/langchain-ai/langchain/pull/15559)

#### Added

No additions.

#### Beta

- Marked `langchain_core.load.load` and `langchain_core.load.loads` as beta.
- Marked `langchain_core.beta.runnables.context.ContextGet` and `langchain_core.beta.runnables.context.ContextSet` as beta.
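
A hedged sketch of the beta serialization helpers (the API may change while in beta):

```python
from langchain_core.load import dumps, loads
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Tell me a joke about {topic}")
serialized = dumps(prompt)        # JSON string representation
roundtripped = loads(serialized)  # reconstructs the PromptTemplate
```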
@ -0,0 +1,676 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b8982428",
"metadata": {},
"source": [
"# Run LLMs locally\n",
"\n",
"## Use case\n",
"\n",
"The popularity of projects like [PrivateGPT](https://github.com/imartinez/privateGPT), [llama.cpp](https://github.com/ggerganov/llama.cpp), [Ollama](https://github.com/ollama/ollama), [GPT4All](https://github.com/nomic-ai/gpt4all), [llamafile](https://github.com/Mozilla-Ocho/llamafile), and others underscores the demand to run LLMs locally (on your own device).\n",
"\n",
"This has at least two important benefits:\n",
"\n",
"1. `Privacy`: Your data is not sent to a third party, and it is not subject to the terms of service of a commercial service\n",
"2. `Cost`: There is no inference fee, which is important for token-intensive applications (e.g., [long-running simulations](https://twitter.com/RLanceMartin/status/1691097659262820352?s=20), summarization)\n",
"\n",
"## Overview\n",
"\n",
"Running an LLM locally requires a few things:\n",
"\n",
"1. `Open-source LLM`: An open-source LLM that can be freely modified and shared\n",
"2. `Inference`: Ability to run this LLM on your device w/ acceptable latency\n",
"\n",
"### Open-source LLMs\n",
"\n",
"Users can now gain access to a rapidly growing set of [open-source LLMs](https://cameronrwolfe.substack.com/p/the-history-of-open-source-llms-better).\n",
"\n",
"These LLMs can be assessed across at least two dimensions (see figure):\n",
"\n",
"1. `Base model`: What is the base model and how was it trained?\n",
"2. `Fine-tuning approach`: Was the base model fine-tuned and, if so, what [set of instructions](https://cameronrwolfe.substack.com/p/beyond-llama-the-power-of-open-llms#%C2%A7alpaca-an-instruction-following-llama-model) was used?\n",
"\n",
"![Image description](../../static/img/OSS_LLM_overview.png)\n",
"\n",
"The relative performance of these models can be assessed using several leaderboards, including:\n",
"\n",
"1. [LmSys](https://chat.lmsys.org/?arena)\n",
"2. [GPT4All](https://gpt4all.io/index.html)\n",
"3. [HuggingFace](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard)\n",
"\n",
"### Inference\n",
"\n",
"A few frameworks have emerged to support inference of open-source LLMs on various devices:\n",
"\n",
"1. [`llama.cpp`](https://github.com/ggerganov/llama.cpp): C++ implementation of llama inference code with [weight optimization / quantization](https://finbarr.ca/how-is-llama-cpp-possible/)\n",
"2. [`gpt4all`](https://docs.gpt4all.io/index.html): Optimized C backend for inference\n",
"3. [`Ollama`](https://ollama.ai/): Bundles model weights and environment into an app that runs on device and serves the LLM\n",
"4. [`llamafile`](https://github.com/Mozilla-Ocho/llamafile): Bundles model weights and everything needed to run the model in a single file, allowing you to run the LLM locally from this file without any additional installation steps\n",
"\n",
"In general, these frameworks will do a few things:\n",
"\n",
"1. `Quantization`: Reduce the memory footprint of the raw model weights\n",
"2. `Efficient implementation for inference`: Support inference on consumer hardware (e.g., CPU or laptop GPU)\n",
"\n",
"In particular, see [this excellent post](https://finbarr.ca/how-is-llama-cpp-possible/) on the importance of quantization.\n",
"\n",
"![Image description](../../static/img/llama-memory-weights.png)\n",
"\n",
"With less precision, we radically decrease the memory needed to hold the LLM.\n",
"\n",
"In addition, this [sheet](https://docs.google.com/spreadsheets/d/1OehfHHNSn66BP2h3Bxp2NJTVX97icU0GmCXF6pK23H8/edit#gid=0) shows the importance of GPU memory bandwidth!\n",
"\n",
"A Mac M2 Max is 5-6x faster than an M1 for inference due to the larger GPU memory bandwidth.\n",
"\n",
"![Image description](../../static/img/llama_t_put.png)\n",
"\n",
"## Quickstart\n",
"\n",
"[`Ollama`](https://ollama.ai/) is one way to easily run inference on macOS.\n",
"\n",
"The instructions [here](https://github.com/jmorganca/ollama?tab=readme-ov-file#ollama) provide details, which we summarize:\n",
"\n",
"* [Download and run](https://ollama.ai/download) the app\n",
"* From the command line, fetch a model from this [list of options](https://github.com/jmorganca/ollama): e.g., `ollama pull llama2`\n",
"* When the app is running, all models are automatically served on `localhost:11434`\n"
]
},
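{
"cell_type": "markdown",
"id": "qmem-note-1",
"metadata": {},
"source": [
"To make the quantization savings concrete, here is a hedged back-of-the-envelope sketch (parameter count and bytes-per-weight are illustrative assumptions):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "qmem-calc-1",
"metadata": {},
"outputs": [],
"source": [
"# Rough memory needed just to hold the weights of a 7B-parameter model.\n",
"# Assumptions (illustrative): 7e9 parameters; fp16 = 2 bytes/weight, 4-bit = 0.5 bytes/weight.\n",
"n_params = 7e9\n",
"for name, bytes_per_weight in [(\"fp16\", 2.0), (\"int8\", 1.0), (\"4-bit\", 0.5)]:\n",
"    gb = n_params * bytes_per_weight / 1e9\n",
"    print(f\"{name}: ~{gb:.1f} GB\")"
]
},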
{
"cell_type": "code",
"execution_count": 2,
"id": "86178adb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' The first man on the moon was Neil Armstrong, who landed on the moon on July 20, 1969 as part of the Apollo 11 mission. obviously.'"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.llms import Ollama\n",
"\n",
"llm = Ollama(model=\"llama2\")\n",
"llm.invoke(\"The first man on the moon was ...\")"
]
},
{
"cell_type": "markdown",
"id": "343ab645",
"metadata": {},
"source": [
"Stream tokens as they are being generated."
]
},
{
"cell_type": "code",
"execution_count": 40,
"id": "9cd83603",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" The first man to walk on the moon was Neil Armstrong, an American astronaut who was part of the Apollo 11 mission in 1969. февруари 20, 1969, Armstrong stepped out of the lunar module Eagle and onto the moon's surface, famously declaring \"That's one small step for man, one giant leap for mankind\" as he took his first steps. He was followed by fellow astronaut Edwin \"Buzz\" Aldrin, who also walked on the moon during the mission."
]
},
{
"data": {
"text/plain": [
"' The first man to walk on the moon was Neil Armstrong, an American astronaut who was part of the Apollo 11 mission in 1969. февруари 20, 1969, Armstrong stepped out of the lunar module Eagle and onto the moon\\'s surface, famously declaring \"That\\'s one small step for man, one giant leap for mankind\" as he took his first steps. He was followed by fellow astronaut Edwin \"Buzz\" Aldrin, who also walked on the moon during the mission.'"
]
},
"execution_count": 40,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.callbacks.manager import CallbackManager\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"\n",
"llm = Ollama(\n",
"    model=\"llama2\", callback_manager=CallbackManager([StreamingStdOutCallbackHandler()])\n",
")\n",
"llm.invoke(\"The first man on the moon was ...\")"
]
},
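{
"cell_type": "markdown",
"id": "stream-note-1",
"metadata": {},
"source": [
"Alternatively, a minimal sketch using the `stream` method (assuming the same `llm` as above) avoids the callback-manager setup:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "stream-demo-1",
"metadata": {},
"outputs": [],
"source": [
"# Hedged sketch: LangChain LLMs expose `stream`, which yields text chunks as they are generated.\n",
"for chunk in llm.stream(\"The first man on the moon was ...\"):\n",
"    print(chunk, end=\"\", flush=True)"
]
},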
{
"cell_type": "markdown",
"id": "5cb27414",
"metadata": {},
"source": [
"## Environment\n",
"\n",
"Inference speed is a challenge when running models locally (see above).\n",
"\n",
"To minimize latency, it is desirable to run models locally on a GPU, which many consumer laptops now include ([e.g., Apple devices](https://www.apple.com/newsroom/2022/06/apple-unveils-m2-with-breakthrough-performance-and-capabilities/)).\n",
"\n",
"And even with a GPU, the available GPU memory bandwidth (as noted above) is important.\n",
"\n",
"### Running Apple silicon GPU\n",
"\n",
"`Ollama` and [`llamafile`](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#gpu-support) will automatically utilize the GPU on Apple devices.\n",
"\n",
"Other frameworks require the user to set up the environment to utilize the Apple GPU.\n",
"\n",
"For example, `llama.cpp` python bindings can be configured to use the GPU via [Metal](https://developer.apple.com/metal/).\n",
"\n",
"Metal is a graphics and compute API created by Apple providing near-direct access to the GPU.\n",
"\n",
"See the [`llama.cpp`](docs/integrations/llms/llamacpp) setup [here](https://github.com/abetlen/llama-cpp-python/blob/main/docs/install/macos.md) to enable this.\n",
"\n",
"In particular, ensure that conda is using the correct virtual environment that you created (`miniforge3`).\n",
"\n",
"E.g., for me:\n",
"\n",
"```\n",
"conda activate /Users/rlm/miniforge3/envs/llama\n",
"```\n",
"\n",
"With the above confirmed, then:\n",
"\n",
"```\n",
"CMAKE_ARGS=\"-DLLAMA_METAL=on\" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "c382e79a",
"metadata": {},
"source": [
"## LLMs\n",
"\n",
"There are various ways to gain access to quantized model weights.\n",
"\n",
"1. [`HuggingFace`](https://huggingface.co/TheBloke) - Many quantized models are available for download and can be run with frameworks such as [`llama.cpp`](https://github.com/ggerganov/llama.cpp). You can also download models in [`llamafile` format](https://huggingface.co/models?other=llamafile) from HuggingFace.\n",
"2. [`gpt4all`](https://gpt4all.io/index.html) - The model explorer offers a leaderboard of metrics and associated quantized models available for download\n",
"3. [`Ollama`](https://github.com/jmorganca/ollama) - Several models can be accessed directly via `pull`\n",
"\n",
"### Ollama\n",
"\n",
"With [Ollama](https://github.com/jmorganca/ollama), fetch a model via `ollama pull <model family>:<tag>`:\n",
"\n",
"* E.g., for Llama-7b: `ollama pull llama2` will download the most basic version of the model (e.g., smallest # parameters and 4-bit quantization)\n",
"* We can also specify a particular version from the [model list](https://github.com/jmorganca/ollama?tab=readme-ov-file#model-library), e.g., `ollama pull llama2:13b`\n",
"* See the full set of parameters on the [API reference page](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.ollama.Ollama.html)"
]
},
{
"cell_type": "code",
"execution_count": 42,
"id": "8ecd2f78",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' Sure! Here\\'s the answer, broken down step by step:\\n\\nThe first man on the moon was... Neil Armstrong.\\n\\nHere\\'s how I arrived at that answer:\\n\\n1. The first manned mission to land on the moon was Apollo 11.\\n2. The mission included three astronauts: Neil Armstrong, Edwin \"Buzz\" Aldrin, and Michael Collins.\\n3. Neil Armstrong was the mission commander and the first person to set foot on the moon.\\n4. On July 20, 1969, Armstrong stepped out of the lunar module Eagle and onto the moon\\'s surface, famously declaring \"That\\'s one small step for man, one giant leap for mankind.\"\\n\\nSo, the first man on the moon was Neil Armstrong!'"
]
},
"execution_count": 42,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.llms import Ollama\n",
"\n",
"llm = Ollama(model=\"llama2:13b\")\n",
"llm.invoke(\"The first man on the moon was ... think step by step\")"
]
},
{
"cell_type": "markdown",
"id": "07c8c0d1",
"metadata": {},
"source": [
"### Llama.cpp\n",
"\n",
"Llama.cpp is compatible with a [broad set of models](https://github.com/ggerganov/llama.cpp).\n",
"\n",
"For example, below we run inference on `llama2-13b` with 4-bit quantization downloaded from [HuggingFace](https://huggingface.co/TheBloke/Llama-2-13B-GGML/tree/main).\n",
"\n",
"As noted above, see the [API reference](https://api.python.langchain.com/en/latest/llms/langchain.llms.llamacpp.LlamaCpp.html?highlight=llamacpp#langchain.llms.llamacpp.LlamaCpp) for the full set of parameters.\n",
"\n",
"From the [llama.cpp API reference docs](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.llamacpp.LlamaCpp.html), a few are worth commenting on:\n",
"\n",
"`n_gpu_layers`: number of layers to be loaded into GPU memory\n",
"\n",
"* Value: 1\n",
"* Meaning: Only one layer of the model will be loaded into GPU memory (1 is often sufficient).\n",
"\n",
"`n_batch`: number of tokens the model should process in parallel\n",
"\n",
"* Value: 512 (as used below)\n",
"* Meaning: It's recommended to choose a value between 1 and n_ctx (which in this case is set to 2048)\n",
"\n",
"`n_ctx`: Token context window\n",
"\n",
"* Value: 2048\n",
"* Meaning: The model will consider a window of 2048 tokens at a time\n",
"\n",
"`f16_kv`: whether the model should use half-precision for the key/value cache\n",
"\n",
"* Value: True\n",
"* Meaning: The model will use half-precision, which can be more memory efficient; Metal only supports True."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5eba38dc",
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"%env CMAKE_ARGS=\"-DLLAMA_METAL=on\"\n",
"%env FORCE_CMAKE=1\n",
"%pip install --upgrade --quiet llama-cpp-python --no-cache-dir"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a88bf0c8-e989-4bcd-bcb7-4d7757e684f2",
"metadata": {},
"outputs": [],
"source": [
"from langchain.callbacks.manager import CallbackManager\n",
"from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler\n",
"from langchain_community.llms import LlamaCpp\n",
"\n",
"llm = LlamaCpp(\n",
"    model_path=\"/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin\",\n",
"    n_gpu_layers=1,\n",
"    n_batch=512,\n",
"    n_ctx=2048,\n",
"    f16_kv=True,\n",
"    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
"    verbose=True,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "f56f5168",
"metadata": {},
"source": [
"The console log will show the following to indicate that Metal was enabled properly by the steps above:\n",
"```\n",
"ggml_metal_init: allocating\n",
"ggml_metal_init: using MPS\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 45,
"id": "7890a077",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Llama.generate: prefix-match hit\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
" and use logical reasoning to figure out who the first man on the moon was.\n",
"\n",
"Here are some clues:\n",
"\n",
"1. The first man on the moon was an American.\n",
"2. He was part of the Apollo 11 mission.\n",
"3. He stepped out of the lunar module and became the first person to set foot on the moon's surface.\n",
"4. His last name is Armstrong.\n",
"\n",
"Now, let's use our reasoning skills to figure out who the first man on the moon was. Based on clue #1, we know that the first man on the moon was an American. Clue #2 tells us that he was part of the Apollo 11 mission. Clue #3 reveals that he was the first person to set foot on the moon's surface. And finally, clue #4 gives us his last name: Armstrong.\n",
"Therefore, the first man on the moon was Neil Armstrong!"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 9623.21 ms\n",
"llama_print_timings: sample time = 143.77 ms / 203 runs ( 0.71 ms per token, 1412.01 tokens per second)\n",
"llama_print_timings: prompt eval time = 485.94 ms / 7 tokens ( 69.42 ms per token, 14.40 tokens per second)\n",
"llama_print_timings: eval time = 6385.16 ms / 202 runs ( 31.61 ms per token, 31.64 tokens per second)\n",
"llama_print_timings: total time = 7279.28 ms\n"
]
},
{
"data": {
"text/plain": [
"\" and use logical reasoning to figure out who the first man on the moon was.\\n\\nHere are some clues:\\n\\n1. The first man on the moon was an American.\\n2. He was part of the Apollo 11 mission.\\n3. He stepped out of the lunar module and became the first person to set foot on the moon's surface.\\n4. His last name is Armstrong.\\n\\nNow, let's use our reasoning skills to figure out who the first man on the moon was. Based on clue #1, we know that the first man on the moon was an American. Clue #2 tells us that he was part of the Apollo 11 mission. Clue #3 reveals that he was the first person to set foot on the moon's surface. And finally, clue #4 gives us his last name: Armstrong.\\nTherefore, the first man on the moon was Neil Armstrong!\""
]
},
"execution_count": 45,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm.invoke(\"The first man on the moon was ... Let's think step by step\")"
]
},
{
"cell_type": "markdown",
"id": "831ddf7c",
"metadata": {},
"source": [
"### GPT4All\n",
"\n",
"We can use model weights downloaded from the [GPT4All](/docs/integrations/llms/gpt4all) model explorer.\n",
"\n",
"Similar to what is shown above, we can run inference and use [the API reference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.gpt4all.GPT4All.html) to set parameters of interest."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e27baf6e",
"metadata": {},
"outputs": [],
"source": [
"%pip install gpt4all"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "915ecd4c-8f6b-4de3-a787-b64cb7c682b4",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms import GPT4All\n",
"\n",
"llm = GPT4All(\n",
"    model=\"/Users/rlm/Desktop/Code/gpt4all/models/nous-hermes-13b.ggmlv3.q4_0.bin\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 47,
"id": "e3d4526f",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\".\\n1) The United States decides to send a manned mission to the moon.2) They choose their best astronauts and train them for this specific mission.3) They build a spacecraft that can take humans to the moon, called the Lunar Module (LM).4) They also create a larger spacecraft, called the Saturn V rocket, which will launch both the LM and the Command Service Module (CSM), which will carry the astronauts into orbit.5) The mission is planned down to the smallest detail: from the trajectory of the rockets to the exact movements of the astronauts during their moon landing.6) On July 16, 1969, the Saturn V rocket launches from Kennedy Space Center in Florida, carrying the Apollo 11 mission crew into space.7) After one and a half orbits around the Earth, the LM separates from the CSM and begins its descent to the moon's surface.8) On July 20, 1969, at 2:56 pm EDT (GMT-4), Neil Armstrong becomes the first man on the moon. He speaks these\""
]
},
"execution_count": 47,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm.invoke(\"The first man on the moon was ... Let's think step by step\")"
]
},
{
"cell_type": "markdown",
"id": "056854e2-5e4b-4a03-be7e-03192e5c4e1e",
"metadata": {},
"source": [
"### llamafile\n",
"\n",
"One of the simplest ways to run an LLM locally is using a [llamafile](https://github.com/Mozilla-Ocho/llamafile). All you need to do is:\n",
"\n",
"1) Download a llamafile from [HuggingFace](https://huggingface.co/models?other=llamafile)\n",
"2) Make the file executable\n",
"3) Run the file\n",
"\n",
"llamafiles bundle model weights and a [specially-compiled](https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#technical-details) version of [`llama.cpp`](https://github.com/ggerganov/llama.cpp) into a single file that can run on most computers without any additional dependencies. They also come with an embedded inference server that provides an [API](https://github.com/Mozilla-Ocho/llamafile/blob/main/llama.cpp/server/README.md#api-endpoints) for interacting with your model.\n",
"\n",
"Here's a simple bash script that shows all 3 setup steps:\n",
"\n",
"```bash\n",
"# Download a llamafile from HuggingFace\n",
"wget https://huggingface.co/jartine/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile\n",
"\n",
"# Make the file executable. On Windows, instead just rename the file to end in \".exe\".\n",
"chmod +x TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile\n",
"\n",
"# Start the model server. Listens at http://localhost:8080 by default.\n",
"./TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile --server --nobrowser\n",
"```\n",
"\n",
"After you run the above setup steps, you can use LangChain to interact with your model:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "002e655c-ba18-4db3-ac7b-f33e825d14b6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"\\nFirstly, let's imagine the scene where Neil Armstrong stepped onto the moon. This happened in 1969. The first man on the moon was Neil Armstrong. We already know that.\\n2nd, let's take a step back. Neil Armstrong didn't have any special powers. He had to land his spacecraft safely on the moon without injuring anyone or causing any damage. If he failed to do this, he would have been killed along with all those people who were on board the spacecraft.\\n3rd, let's imagine that Neil Armstrong successfully landed his spacecraft on the moon and made it back to Earth safely. The next step was for him to be hailed as a hero by his people back home. It took years before Neil Armstrong became an American hero.\\n4th, let's take another step back. Let's imagine that Neil Armstrong wasn't hailed as a hero, and instead, he was just forgotten. This happened in the 1970s. Neil Armstrong wasn't recognized for his remarkable achievement on the moon until after he died.\\n5th, let's take another step back. Let's imagine that Neil Armstrong didn't die in the 1970s and instead, lived to be a hundred years old. This happened in 2036. In the year 2036, Neil Armstrong would have been a centenarian.\\nNow, let's think about the present. Neil Armstrong is still alive. He turned 95 years old on July 20th, 2018. If he were to die now, his achievement of becoming the first human being to set foot on the moon would remain an unforgettable moment in history.\\nI hope this helps you understand the significance and importance of Neil Armstrong's achievement on the moon!\""
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.llms.llamafile import Llamafile\n",
"\n",
"llm = Llamafile()\n",
"\n",
"llm.invoke(\"The first man on the moon was ... Let's think step by step.\")"
]
},
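{
"cell_type": "markdown",
"id": "llamafile-http-note",
"metadata": {},
"source": [
"Because the llamafile runs a local inference server, you can also call its HTTP API directly. A hedged sketch (assumes the server above is still listening on `localhost:8080`; the `/completion` endpoint and fields follow the llama.cpp server API linked above):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "llamafile-http-demo",
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"\n",
"# POST a prompt to the embedded llama.cpp server and print the completion.\n",
"resp = requests.post(\n",
"    \"http://localhost:8080/completion\",\n",
"    json={\"prompt\": \"The first man on the moon was\", \"n_predict\": 32},\n",
")\n",
"print(resp.json()[\"content\"])"
]
},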
{
"cell_type": "markdown",
"id": "6b84e543",
"metadata": {},
"source": [
"## Prompts\n",
"\n",
"Some LLMs will benefit from specific prompts.\n",
"\n",
"For example, LLaMA will use [special tokens](https://twitter.com/RLanceMartin/status/1681879318493003776?s=20).\n",
"\n",
"We can use `ConditionalPromptSelector` to set the prompt based on the model type."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "16759b7c-7903-4269-b7b4-f83b313d8091",
"metadata": {},
"outputs": [],
"source": [
"# Set our LLM\n",
"llm = LlamaCpp(\n",
"    model_path=\"/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin\",\n",
"    n_gpu_layers=1,\n",
"    n_batch=512,\n",
"    n_ctx=2048,\n",
"    f16_kv=True,\n",
"    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
"    verbose=True,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "66656084",
"metadata": {},
"source": [
"Set the associated prompt based upon the model version."
]
},
{
"cell_type": "code",
"execution_count": 58,
"id": "8555f5bf",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='<<SYS>> \\n You are an assistant tasked with improving Google search results. \\n <</SYS>> \\n\\n [INST] Generate THREE Google search queries that are similar to this question. The output should be a numbered list of questions and each should have a question mark at the end: \\n\\n {question} [/INST]', template_format='f-string', validate_template=True)"
]
},
"execution_count": 58,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.chains.prompt_selector import ConditionalPromptSelector\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"DEFAULT_LLAMA_SEARCH_PROMPT = PromptTemplate(\n",
"    input_variables=[\"question\"],\n",
"    template=\"\"\"<<SYS>> \\n You are an assistant tasked with improving Google search \\\n",
"results. \\n <</SYS>> \\n\\n [INST] Generate THREE Google search queries that \\\n",
"are similar to this question. The output should be a numbered list of questions \\\n",
"and each should have a question mark at the end: \\n\\n {question} [/INST]\"\"\",\n",
")\n",
"\n",
"DEFAULT_SEARCH_PROMPT = PromptTemplate(\n",
"    input_variables=[\"question\"],\n",
"    template=\"\"\"You are an assistant tasked with improving Google search \\\n",
"results. Generate THREE Google search queries that are similar to \\\n",
"this question. The output should be a numbered list of questions and each \\\n",
"should have a question mark at the end: {question}\"\"\",\n",
")\n",
"\n",
"QUESTION_PROMPT_SELECTOR = ConditionalPromptSelector(\n",
"    default_prompt=DEFAULT_SEARCH_PROMPT,\n",
"    conditionals=[(lambda llm: isinstance(llm, LlamaCpp), DEFAULT_LLAMA_SEARCH_PROMPT)],\n",
")\n",
"\n",
"prompt = QUESTION_PROMPT_SELECTOR.get_prompt(llm)\n",
"prompt"
]
},
{
"cell_type": "code",
"execution_count": 59,
"id": "d0aedfd2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Sure! Here are three similar search queries with a question mark at the end:\n",
"\n",
"1. Which NBA team did LeBron James lead to a championship in the year he was drafted?\n",
"2. Who won the Grammy Awards for Best New Artist and Best Female Pop Vocal Performance in the same year that Lady Gaga was born?\n",
"3. What MLB team did Babe Ruth play for when he hit 60 home runs in a single season?"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n",
"llama_print_timings: load time = 14943.19 ms\n",
"llama_print_timings: sample time = 72.93 ms / 101 runs ( 0.72 ms per token, 1384.87 tokens per second)\n",
"llama_print_timings: prompt eval time = 14942.95 ms / 93 tokens ( 160.68 ms per token, 6.22 tokens per second)\n",
"llama_print_timings: eval time = 3430.85 ms / 100 runs ( 34.31 ms per token, 29.15 tokens per second)\n",
"llama_print_timings: total time = 18578.26 ms\n"
]
},
{
"data": {
"text/plain": [
"' Sure! Here are three similar search queries with a question mark at the end:\\n\\n1. Which NBA team did LeBron James lead to a championship in the year he was drafted?\\n2. Who won the Grammy Awards for Best New Artist and Best Female Pop Vocal Performance in the same year that Lady Gaga was born?\\n3. What MLB team did Babe Ruth play for when he hit 60 home runs in a single season?'"
]
},
"execution_count": 59,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Chain\n",
"llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
"question = \"What NFL team won the Super Bowl in the year that Justin Bieber was born?\"\n",
"llm_chain.run({\"question\": question})"
]
},
{
"cell_type": "markdown",
"id": "6e0d37e7-f1d9-4848-bf2c-c22392ee141f",
"metadata": {},
"source": [
"We can also use the LangChain Prompt Hub to fetch and/or store prompts that are model-specific.\n",
"\n",
"This will work with your [LangSmith API key](https://docs.smith.langchain.com/).\n",
"\n",
"For example, [here](https://smith.langchain.com/hub/rlm/rag-prompt-llama) is a prompt for RAG with LLaMA-specific tokens."
]
},
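{
"cell_type": "markdown",
"id": "hub-pull-note",
"metadata": {},
"source": [
"A minimal sketch of pulling that prompt from the Hub (the prompt name comes from the URL above; requires the `langchainhub` package and a LangSmith API key):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "hub-pull-demo",
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"\n",
"# Fetch the LLaMA-specific RAG prompt referenced above.\n",
"rag_prompt_llama = hub.pull(\"rlm/rag-prompt-llama\")\n",
"rag_prompt_llama"
]
},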
{
"cell_type": "markdown",
"id": "6ba66260",
"metadata": {},
"source": [
"## Use cases\n",
"\n",
"Given an `llm` created from one of the models above, you can use it for [many use cases](/docs/how_to#use-cases).\n",
"\n",
"For example, here is a guide to [RAG](/docs/tutorials/local_rag) with local LLMs.\n",
"\n",
"In general, use cases for local LLMs can be driven by at least two factors:\n",
"\n",
"* `Privacy`: private data (e.g., journals) that a user does not want to share\n",
"* `Cost`: text preprocessing (extraction/tagging), summarization, and agent simulations are token-use-intensive tasks\n",
"\n",
"In addition, [here](https://blog.langchain.dev/using-langsmith-to-support-fine-tuning-of-open-source-llms/) is an overview on fine-tuning, which can utilize open-source LLMs."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@ -0,0 +1,354 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "6bd1219b-f31c-41b0-95e6-3204ad894ac7",
"metadata": {},
"source": [
"# Response metadata\n",
"\n",
"Many model providers include some metadata in their chat generation responses. This metadata can be accessed via the `AIMessage.response_metadata: Dict` attribute. Depending on the model provider and model configuration, this can contain information like [token counts](/docs/how_to/chat_token_usage_tracking), [logprobs](/docs/how_to/logprobs), and more.\n",
"\n",
"Here's what the response metadata looks like for a few different providers:\n",
"\n",
"## OpenAI"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "161f5898-9976-4a75-943d-03eda1a40a60",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'token_usage': {'completion_tokens': 164,\n",
" 'prompt_tokens': 17,\n",
" 'total_tokens': 181},\n",
" 'model_name': 'gpt-4-turbo',\n",
" 'system_fingerprint': 'fp_76f018034d',\n",
" 'finish_reason': 'stop',\n",
" 'logprobs': None}"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4-turbo\")\n",
"msg = llm.invoke([(\"human\", \"What's the oldest known example of cuneiform\")])\n",
"msg.response_metadata"
]
},
{
"cell_type": "markdown",
"id": "98eab683-df03-44a1-a034-ebbe7c6851b6",
"metadata": {},
"source": [
"## Anthropic"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "61c43496-83b5-4d71-bd60-3e6d46c62a5e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'id': 'msg_01CzQyD7BX8nkhDNfT1QqvEp',\n",
" 'model': 'claude-3-sonnet-20240229',\n",
" 'stop_reason': 'end_turn',\n",
" 'stop_sequence': None,\n",
" 'usage': {'input_tokens': 17, 'output_tokens': 296}}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-sonnet-20240229\")\n",
"msg = llm.invoke([(\"human\", \"What's the oldest known example of cuneiform\")])\n",
"msg.response_metadata"
]
},
{
"cell_type": "markdown",
"id": "c1f24f69-18f6-43c1-8b26-3f88ec515259",
"metadata": {},
"source": [
"## Google VertexAI"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "39549336-25f5-4839-9846-f687cd77e59b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'is_blocked': False,\n",
" 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH',\n",
" 'probability_label': 'NEGLIGIBLE',\n",
" 'blocked': False},\n",
" {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT',\n",
" 'probability_label': 'NEGLIGIBLE',\n",
" 'blocked': False},\n",
" {'category': 'HARM_CATEGORY_HARASSMENT',\n",
" 'probability_label': 'NEGLIGIBLE',\n",
" 'blocked': False},\n",
" {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT',\n",
" 'probability_label': 'NEGLIGIBLE',\n",
" 'blocked': False}],\n",
" 'citation_metadata': None,\n",
" 'usage_metadata': {'prompt_token_count': 10,\n",
" 'candidates_token_count': 30,\n",
" 'total_token_count': 40}}"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_google_vertexai import ChatVertexAI\n",
"\n",
"llm = ChatVertexAI(model=\"gemini-pro\")\n",
"msg = llm.invoke([(\"human\", \"What's the oldest known example of cuneiform\")])\n",
"msg.response_metadata"
]
},
{
"cell_type": "markdown",
"id": "bc4ef8bb-eee3-4266-b530-0af9b3b79fe9",
"metadata": {},
"source": [
"## Bedrock (Anthropic)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "1e4ac668-4c6a-48ad-9a6f-7b291477b45d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'model_id': 'anthropic.claude-v2',\n",
" 'usage': {'prompt_tokens': 19, 'completion_tokens': 371, 'total_tokens': 390}}"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_aws import ChatBedrock\n",
"\n",
"llm = ChatBedrock(model_id=\"anthropic.claude-v2\")\n",
"msg = llm.invoke([(\"human\", \"What's the oldest known example of cuneiform\")])\n",
"msg.response_metadata"
]
},
{
"cell_type": "markdown",
"id": "ee040d15-5575-4309-a9e9-aed5a09c78e3",
"metadata": {},
"source": [
"## MistralAI"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "deb41321-52d0-4795-a40c-4a811a13d7b0",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'token_usage': {'prompt_tokens': 19,\n",
" 'total_tokens': 141,\n",
" 'completion_tokens': 122},\n",
" 'model': 'mistral-small',\n",
" 'finish_reason': 'stop'}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_mistralai import ChatMistralAI\n",
"\n",
"llm = ChatMistralAI()\n",
"msg = llm.invoke([(\"human\", \"What's the oldest known example of cuneiform\")])\n",
"msg.response_metadata"
]
},
{
"cell_type": "markdown",
"id": "297c7be4-9505-48ac-96c0-4dc2047cfe7f",
"metadata": {},
"source": [
"## Groq"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "744e14ec-ff50-4642-9893-ff7bdf8927ff",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'token_usage': {'completion_time': 0.243,\n",
" 'completion_tokens': 132,\n",
" 'prompt_time': 0.022,\n",
" 'prompt_tokens': 22,\n",
" 'queue_time': None,\n",
" 'total_time': 0.265,\n",
" 'total_tokens': 154},\n",
" 'model_name': 'mixtral-8x7b-32768',\n",
" 'system_fingerprint': 'fp_7b44c65f25',\n",
" 'finish_reason': 'stop',\n",
" 'logprobs': None}"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_groq import ChatGroq\n",
"\n",
"llm = ChatGroq()\n",
"msg = llm.invoke([(\"human\", \"What's the oldest known example of cuneiform\")])\n",
"msg.response_metadata"
]
},
{
"cell_type": "markdown",
"id": "7cdeec00-8a8f-422a-8819-47c646578b65",
"metadata": {},
"source": [
"## TogetherAI"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a984118e-a731-4864-bcea-7dc6c6b3d139",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'token_usage': {'completion_tokens': 208,\n",
" 'prompt_tokens': 20,\n",
" 'total_tokens': 228},\n",
" 'model_name': 'mistralai/Mixtral-8x7B-Instruct-v0.1',\n",
" 'system_fingerprint': None,\n",
" 'finish_reason': 'eos',\n",
" 'logprobs': None}"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import os\n",
"\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(\n",
"    base_url=\"https://api.together.xyz/v1\",\n",
"    api_key=os.environ[\"TOGETHER_API_KEY\"],\n",
"    model=\"mistralai/Mixtral-8x7B-Instruct-v0.1\",\n",
")\n",
"msg = llm.invoke([(\"human\", \"What's the oldest known example of cuneiform\")])\n",
"msg.response_metadata"
]
},
{
"cell_type": "markdown",
"id": "3d5e0614-8dc2-4948-a0b5-dc76c7837a5a",
"metadata": {},
"source": [
"## FireworksAI"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "6ae32a93-26db-41bb-95c2-38ddd5085fbe",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'token_usage': {'prompt_tokens': 19,\n",
" 'total_tokens': 219,\n",
" 'completion_tokens': 200},\n",
" 'model_name': 'accounts/fireworks/models/mixtral-8x7b-instruct',\n",
" 'system_fingerprint': '',\n",
" 'finish_reason': 'length',\n",
" 'logprobs': None}"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_fireworks import ChatFireworks\n",
"\n",
"llm = ChatFireworks(model=\"accounts/fireworks/models/mixtral-8x7b-instruct\")\n",
"msg = llm.invoke([(\"human\", \"What's the oldest known example of cuneiform\")])\n",
"msg.response_metadata"
]
},
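{
"cell_type": "markdown",
"id": "usage-helper-note",
"metadata": {},
"source": [
"The keys differ across providers. A hedged helper that normalizes total token counts across the formats shown above (keys taken from the example outputs; unknown providers return `None`):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "usage-helper-demo",
"metadata": {},
"outputs": [],
"source": [
"from typing import Optional\n",
"\n",
"from langchain_core.messages import AIMessage\n",
"\n",
"\n",
"def total_tokens(msg: AIMessage) -> Optional[int]:\n",
"    # Keys observed in the provider outputs above.\n",
"    md = msg.response_metadata\n",
"    if \"token_usage\" in md:  # OpenAI, Mistral, Groq, Together, Fireworks\n",
"        return md[\"token_usage\"].get(\"total_tokens\")\n",
"    if \"usage\" in md:  # Anthropic, Bedrock (Anthropic)\n",
"        usage = md[\"usage\"]\n",
"        if \"total_tokens\" in usage:\n",
"            return usage[\"total_tokens\"]\n",
"        return usage.get(\"input_tokens\", 0) + usage.get(\"output_tokens\", 0)\n",
"    if \"usage_metadata\" in md:  # Google VertexAI\n",
"        return md[\"usage_metadata\"].get(\"total_token_count\")\n",
"    return None\n",
"\n",
"\n",
"total_tokens(msg)"
]
}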
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"language": "python",
"name": "poetry-venv-2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
@ -1,45 +0,0 @@
# LangChain Over Time

Due to the rapidly evolving field, LangChain has also evolved rapidly.
This document serves to outline at a high level what has changed and why.

## 0.2

## 0.1

The 0.1 release marked a few key changes for LangChain.
By this point, the LangChain ecosystem had become large both in the breadth of what it enabled as well as the community behind it.

**Split of packages**

LangChain was split up into several packages to increase modularity and decrease bloat.
First, `langchain-core` was created as a lightweight core library containing the base abstractions,
some core implementations of those abstractions, and the generic runtime for creating chains.
Next, all third-party integrations were split into `langchain-community` or their own individual partner packages.
Higher-level chains and agents remain in `langchain`.

**`Runnables`**

Having a specific class for each chain was proving not very scalable or flexible.
Although these classes were left alone (without deprecation warnings) for this release,
in the documentation much more space was given to generic runnables.

## < 0.1

There are several key characteristics of LangChain pre-0.1.

**Singular Package**

LangChain was largely a singular package.
The only exception was `langchain-experimental`, which largely held more experimental code.
This meant that ALL integrations lived inside `langchain`.

**Chains as classes**

Most high-level chains were largely their own classes.
There was a base `Chain` class from which all chains inherited.
This meant that in order to change the logic inside a chain you basically had to modify the source code.
There were a few chains that were meant to be more generic (`SequentialChain`, `RouterChain`).
@ -1,22 +0,0 @@
---
sidebar_class_name: hidden
---

# 🦜🛠️ LangSmith

[LangSmith](https://smith.langchain.com) helps you trace and evaluate your language model applications and intelligent agents,
helping you move from prototype to production.

Check out the [interactive walkthrough](/docs/langsmith/walkthrough) to get started.

For more information, please refer to the [LangSmith documentation](https://docs.smith.langchain.com/).

For tutorials and other end-to-end examples demonstrating ways to integrate LangSmith in your workflow,
check out the [LangSmith Cookbook](https://github.com/langchain-ai/langsmith-cookbook). Some of the guides therein include:

- Leveraging user feedback in your JS application ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/nextjs/README.md)).
- Building an automated feedback pipeline ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/algorithmic-feedback/algorithmic_feedback.ipynb)).
- How to evaluate and audit your RAG workflows ([link](https://github.com/langchain-ai/langsmith-cookbook/tree/main/testing-examples/qa-correctness)).
- How to fine-tune an LLM on real usage data ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/fine-tuning-examples/export-to-openai/fine-tuning-on-chat-runs.ipynb)).
- How to use the [LangChain Hub](https://smith.langchain.com/hub) to version your prompts ([link](https://github.com/langchain-ai/langsmith-cookbook/blob/main/hub-examples/retrieval-qa-chain/retrieval-qa.ipynb)).
File diff suppressed because it is too large
@ -0,0 +1 @@
__pycache__
@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 LangChain, Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@ -0,0 +1,59 @@
.PHONY: all format lint test tests integration_tests docker_tests help extended_tests

# Default target executed when no arguments are given to make.
all: help

# Define a variable for the test file path.
TEST_FILE ?= tests/unit_tests/

test:
	poetry run pytest $(TEST_FILE)

tests:
	poetry run pytest $(TEST_FILE)


######################
# LINTING AND FORMATTING
######################

# Define a variable for Python and notebook files.
PYTHON_FILES=.
MYPY_CACHE=.mypy_cache
lint format: PYTHON_FILES=.
lint_diff format_diff: PYTHON_FILES=$(shell git diff --relative=libs/partners/azure --name-only --diff-filter=d master | grep -E '\.py$$|\.ipynb$$')
lint_package: PYTHON_FILES=langchain_azure_dynamic_sessions
lint_tests: PYTHON_FILES=tests
lint_tests: MYPY_CACHE=.mypy_cache_test

lint lint_diff lint_package lint_tests:
	poetry run ruff .
	poetry run ruff format $(PYTHON_FILES) --diff
	poetry run ruff --select I $(PYTHON_FILES)
	mkdir $(MYPY_CACHE); poetry run mypy $(PYTHON_FILES) --cache-dir $(MYPY_CACHE)

format format_diff:
	poetry run ruff format $(PYTHON_FILES)
	poetry run ruff --select I --fix $(PYTHON_FILES)

spell_check:
	poetry run codespell --toml pyproject.toml

spell_fix:
	poetry run codespell --toml pyproject.toml -w

check_imports: $(shell find langchain_azure_dynamic_sessions -name '*.py')
	poetry run python ./scripts/check_imports.py $^

######################
# HELP
######################

help:
	@echo '----'
	@echo 'check_imports                - check imports'
	@echo 'format                       - run code formatters'
	@echo 'lint                         - run linters'
	@echo 'test                         - run unit tests'
	@echo 'tests                        - run unit tests'
	@echo 'test TEST_FILE=<test_file>   - run all tests in file'
@ -0,0 +1,36 @@
# langchain-azure-dynamic-sessions

This package contains the LangChain integration for Azure Container Apps dynamic sessions. You can use it to add a secure and scalable code interpreter to your agents.

## Installation

```bash
pip install -U langchain-azure-dynamic-sessions
```

## Usage

You first need to create an Azure Container Apps session pool and obtain its management endpoint. Then you can use the `SessionsPythonREPLTool` tool to give your agent the ability to execute Python code.

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_azure_dynamic_sessions import SessionsPythonREPLTool

# Assumes an `llm` chat model is already configured; get the management
# endpoint from the session pool in the Azure portal.
tool = SessionsPythonREPLTool(pool_management_endpoint=POOL_MANAGEMENT_ENDPOINT)

prompt = hub.pull("hwchase17/react")
tools = [tool]
react_agent = create_react_agent(
    llm=llm,
    tools=tools,
    prompt=prompt,
)

react_agent_executor = AgentExecutor(agent=react_agent, tools=tools, verbose=True, handle_parsing_errors=True)

react_agent_executor.invoke({"input": "What is the current time in Vancouver, Canada?"})
```

By default, the tool uses `DefaultAzureCredential` to authenticate with Azure. If you're using a user-assigned managed identity, you must set the `AZURE_CLIENT_ID` environment variable to the ID of the managed identity.
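You can also invoke the tool directly, without an agent (assumes the same `POOL_MANAGEMENT_ENDPOINT` as above):

```python
from langchain_azure_dynamic_sessions import SessionsPythonREPLTool

tool = SessionsPythonREPLTool(pool_management_endpoint=POOL_MANAGEMENT_ENDPOINT)
# Returns a JSON string with "result", "stdout", and "stderr" fields.
print(tool.run("6 * 7"))
```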
@ -0,0 +1,169 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Azure Container Apps dynamic sessions\n",
    "\n",
    "Azure Container Apps dynamic sessions provides a secure and scalable way to run a Python code interpreter in Hyper-V isolated sandboxes. This allows your agents to run potentially untrusted code in a secure environment. The code interpreter environment includes many popular Python packages, such as NumPy, pandas, and scikit-learn.\n",
    "\n",
    "## Pre-requisites\n",
    "\n",
    "By default, the `SessionsPythonREPLTool` tool uses `DefaultAzureCredential` to authenticate with Azure. Locally, it'll use your credentials from the Azure CLI or VS Code. Install the Azure CLI and log in with `az login` to authenticate.\n",
    "\n",
    "## Using the tool\n",
    "\n",
    "Set variables:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import dotenv\n",
    "dotenv.load_dotenv()\n",
    "\n",
    "POOL_MANAGEMENT_ENDPOINT = os.getenv(\"POOL_MANAGEMENT_ENDPOINT\")\n",
    "AZURE_OPENAI_ENDPOINT = os.getenv(\"AZURE_OPENAI_ENDPOINT\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'{\\n  \"result\": 42,\\n  \"stdout\": \"\",\\n  \"stderr\": \"\"\\n}'"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_azure_dynamic_sessions import SessionsPythonREPLTool\n",
    "\n",
    "\n",
    "tool = SessionsPythonREPLTool(pool_management_endpoint=POOL_MANAGEMENT_ENDPOINT)\n",
    "tool.run(\"6 * 7\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Full agent example"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
      "\u001b[32;1m\u001b[1;3mI need to calculate the compound interest on the initial amount over 6 years.\n",
      "Action: Python_REPL\n",
      "Action Input: \n",
      "```python\n",
      "initial_amount = 500\n",
      "interest_rate = 0.05\n",
      "time_period = 6\n",
      "final_amount = initial_amount * (1 + interest_rate)**time_period\n",
      "final_amount\n",
      "```\u001b[0m\u001b[36;1m\u001b[1;3m{\n",
      "  \"result\": 670.0478203125002,\n",
      "  \"stdout\": \"\",\n",
      "  \"stderr\": \"\"\n",
      "}\u001b[0m\u001b[32;1m\u001b[1;3mThe final amount after 6 years will be $670.05\n",
      "Final Answer: $670.05\u001b[0m\n",
      "\n",
      "\u001b[1m> Finished chain.\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'input': 'If I put $500 in a bank account with a 5% interest rate, how much money will I have in the account after 6 years?',\n",
       " 'output': '$670.05'}"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import os\n",
    "from azure.identity import DefaultAzureCredential\n",
    "from langchain_azure_dynamic_sessions import SessionsPythonREPLTool\n",
    "from langchain_openai import AzureChatOpenAI\n",
    "from langchain import agents, hub\n",
    "\n",
    "\n",
    "credential = DefaultAzureCredential()\n",
    "os.environ[\"OPENAI_API_TYPE\"] = \"azure_ad\"\n",
    "os.environ[\"OPENAI_API_KEY\"] = credential.get_token(\"https://cognitiveservices.azure.com/.default\").token\n",
    "os.environ[\"AZURE_OPENAI_ENDPOINT\"] = AZURE_OPENAI_ENDPOINT\n",
    "\n",
    "llm = AzureChatOpenAI(\n",
    "    azure_deployment=\"gpt-35-turbo\",\n",
    "    openai_api_version=\"2023-09-15-preview\",\n",
    "    streaming=True,\n",
    "    temperature=0,\n",
    ")\n",
    "\n",
    "repl = SessionsPythonREPLTool(\n",
    "    pool_management_endpoint=POOL_MANAGEMENT_ENDPOINT,\n",
    ")\n",
    "\n",
    "tools = [repl]\n",
    "react_agent = agents.create_react_agent(\n",
    "    llm=llm,\n",
    "    tools=tools,\n",
    "    prompt=hub.pull(\"hwchase17/react\"),\n",
    ")\n",
    "\n",
    "react_agent_executor = agents.AgentExecutor(agent=react_agent, tools=tools, verbose=True, handle_parsing_errors=True)\n",
    "\n",
    "react_agent_executor.invoke({\"input\": \"If I put $500 in a bank account with a 5% interest rate, how much money will I have in the account after 6 years?\"})"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
@ -0,0 +1,5 @@
from langchain_azure_dynamic_sessions.tools.sessions import SessionsPythonREPLTool

__all__ = [
    "SessionsPythonREPLTool",
]
@ -0,0 +1,5 @@
from langchain_azure_dynamic_sessions.tools.sessions import SessionsPythonREPLTool

__all__ = [
    "SessionsPythonREPLTool",
]
@ -0,0 +1,273 @@
import importlib.metadata
import json
import os
import re
import urllib.parse
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from io import BytesIO
from typing import Any, BinaryIO, Callable, List, Optional
from uuid import uuid4

import requests
from azure.core.credentials import AccessToken
from azure.identity import DefaultAzureCredential
from langchain_core.tools import BaseTool

try:
    _package_version = importlib.metadata.version("langchain-azure-dynamic-sessions")
except importlib.metadata.PackageNotFoundError:
    _package_version = "0.0.0"
USER_AGENT = f"langchain-azure-dynamic-sessions/{_package_version} (Language=Python)"


def _access_token_provider_factory() -> Callable[[], Optional[str]]:
    """Factory function for creating an access token provider function.

    Returns:
        Callable[[], Optional[str]]: The access token provider function
    """

    access_token: Optional[AccessToken] = None

    def access_token_provider() -> Optional[str]:
        nonlocal access_token
        # Refresh the cached token if it is missing or expires within 5 minutes.
        if access_token is None or datetime.fromtimestamp(
            access_token.expires_on, timezone.utc
        ) < datetime.now(timezone.utc) + timedelta(minutes=5):
            credential = DefaultAzureCredential()
            access_token = credential.get_token("https://dynamicsessions.io/.default")
        return access_token.token

    return access_token_provider


def _sanitize_input(query: str) -> str:
    """Sanitize input to the Python REPL.

    Removes leading/trailing whitespace, backticks, and a leading "python"
    keyword, in case the LLM wraps the code in a markdown code fence.

    Args:
        query: The query to sanitize

    Returns:
        str: The sanitized query
    """

    # Removes `, whitespace & python from start
    query = re.sub(r"^(\s|`)*(?i:python)?\s*", "", query)
    # Removes whitespace & ` from end
    query = re.sub(r"(\s|`)*$", "", query)
    return query


@dataclass
class RemoteFileMetadata:
    """Metadata for a file in the session."""

    filename: str
    """The filename relative to `/mnt/data`."""

    size_in_bytes: int
    """The size of the file in bytes."""

    @property
    def full_path(self) -> str:
        """Get the full path of the file."""
        return f"/mnt/data/{self.filename}"

    @staticmethod
    def from_dict(data: dict) -> "RemoteFileMetadata":
        """Create a RemoteFileMetadata object from a dictionary."""
        properties = data.get("properties", {})
        return RemoteFileMetadata(
            filename=properties.get("filename"),
            size_in_bytes=properties.get("size"),
        )


class SessionsPythonREPLTool(BaseTool):
    """A tool for running Python code in an Azure Container Apps dynamic sessions
    code interpreter.

    Example:

        .. code-block:: python

            from langchain_azure_dynamic_sessions import SessionsPythonREPLTool

            tool = SessionsPythonREPLTool(pool_management_endpoint="...")
            result = tool.run("6 * 7")
    """

    name: str = "Python_REPL"
    description: str = (
        "A Python shell. Use this to execute python commands "
        "when you need to perform calculations or computations. "
        "Input should be a valid python command. "
        "Returns a JSON object with the result, stdout, and stderr. "
    )

    sanitize_input: bool = True
    """Whether to sanitize input to the python REPL."""

    pool_management_endpoint: str
    """The management endpoint of the session pool. Should end with a '/'."""

    access_token_provider: Callable[
        [], Optional[str]
    ] = _access_token_provider_factory()
    """A function that returns the access token to use for the session pool."""

    session_id: str = str(uuid4())
    """The session ID to use for the code interpreter. Defaults to a random UUID."""

    def _build_url(self, path: str) -> str:
        pool_management_endpoint = self.pool_management_endpoint
        if not pool_management_endpoint:
            raise ValueError("pool_management_endpoint is not set")
        if not pool_management_endpoint.endswith("/"):
            pool_management_endpoint += "/"
        encoded_session_id = urllib.parse.quote(self.session_id)
        query = f"identifier={encoded_session_id}&api-version=2024-02-02-preview"
        query_separator = "&" if "?" in pool_management_endpoint else "?"
        full_url = pool_management_endpoint + path + query_separator + query
        return full_url

    def execute(self, python_code: str) -> Any:
        """Execute Python code in the session."""

        if self.sanitize_input:
            python_code = _sanitize_input(python_code)

        access_token = self.access_token_provider()
        api_url = self._build_url("code/execute")
        headers = {
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
            "User-Agent": USER_AGENT,
        }
        body = {
            "properties": {
                "codeInputType": "inline",
                "executionType": "synchronous",
                "code": python_code,
            }
        }

        response = requests.post(api_url, headers=headers, json=body)
        response.raise_for_status()
        response_json = response.json()
        properties = response_json.get("properties", {})
        return properties

    def _run(self, python_code: str) -> Any:
        response = self.execute(python_code)

        # if the result is an image, remove the base64 data
        result = response.get("result")
        if isinstance(result, dict):
            if result.get("type") == "image" and "base64_data" in result:
                result.pop("base64_data")

        return json.dumps(
            {
                "result": result,
                "stdout": response.get("stdout"),
                "stderr": response.get("stderr"),
            },
            indent=2,
        )

    def upload_file(
        self,
        *,
        data: Optional[BinaryIO] = None,
        remote_file_path: Optional[str] = None,
        local_file_path: Optional[str] = None,
    ) -> RemoteFileMetadata:
        """Upload a file to the session.

        Args:
            data: The data to upload.
            remote_file_path: The path to upload the file to, relative to
                `/mnt/data`. If local_file_path is provided, this is defaulted
                to its filename.
            local_file_path: The path to the local file to upload.

        Returns:
            RemoteFileMetadata: The metadata for the uploaded file
        """

        if data and local_file_path:
            raise ValueError("data and local_file_path cannot be provided together")

        if data:
            file_data = data
        elif local_file_path:
            if not remote_file_path:
                remote_file_path = os.path.basename(local_file_path)
            file_data = open(local_file_path, "rb")
        else:
            raise ValueError("data or local_file_path must be provided")

        access_token = self.access_token_provider()
        api_url = self._build_url("files/upload")
        headers = {
            "Authorization": f"Bearer {access_token}",
            "User-Agent": USER_AGENT,
        }
        files = [("file", (remote_file_path, file_data, "application/octet-stream"))]

        response = requests.request(
            "POST", api_url, headers=headers, data={}, files=files
        )
        response.raise_for_status()

        response_json = response.json()
        return RemoteFileMetadata.from_dict(response_json["value"][0])

    def download_file(
        self, *, remote_file_path: str, local_file_path: Optional[str] = None
    ) -> BinaryIO:
        """Download a file from the session.

        Args:
            remote_file_path: The path to download the file from,
                relative to `/mnt/data`.
            local_file_path: The path to save the downloaded file to.
                If not provided, the file is returned as a BufferedReader.

        Returns:
            BinaryIO: The data of the downloaded file.
        """
        access_token = self.access_token_provider()
        encoded_remote_file_path = urllib.parse.quote(remote_file_path)
        api_url = self._build_url(f"files/content/{encoded_remote_file_path}")
        headers = {
            "Authorization": f"Bearer {access_token}",
            "User-Agent": USER_AGENT,
        }

        response = requests.get(api_url, headers=headers)
        response.raise_for_status()

        if local_file_path:
            with open(local_file_path, "wb") as f:
                f.write(response.content)

        return BytesIO(response.content)

    def list_files(self) -> List[RemoteFileMetadata]:
        """List the files in the session.

        Returns:
            list[RemoteFileMetadata]: The metadata for the files in the session
        """
        access_token = self.access_token_provider()
        api_url = self._build_url("files")
        headers = {
            "Authorization": f"Bearer {access_token}",
            "User-Agent": USER_AGENT,
        }

        response = requests.get(api_url, headers=headers)
        response.raise_for_status()

        response_json = response.json()
        return [RemoteFileMetadata.from_dict(entry) for entry in response_json["value"]]
File diff suppressed because it is too large
@ -0,0 +1,104 @@
[tool.poetry]
name = "langchain-azure-dynamic-sessions"
version = "0.1.0rc0"
description = "An integration package connecting Azure Container Apps dynamic sessions and LangChain"
authors = []
readme = "README.md"
repository = "https://github.com/langchain-ai/langchain"
license = "MIT"

[tool.poetry.urls]
"Source Code" = "https://github.com/langchain-ai/langchain/tree/master/libs/partners/azure-dynamic-sessions"

[tool.poetry.dependencies]
python = ">=3.8.1,<4.0"
langchain-core = "^0.1.52"
azure-identity = "^1.16.0"
requests = "^2.31.0"

[tool.poetry.group.test]
optional = true

[tool.poetry.group.test.dependencies]
pytest = "^7.3.0"
freezegun = "^1.2.2"
pytest-mock = "^3.10.0"
syrupy = "^4.0.2"
pytest-watcher = "^0.3.4"
pytest-asyncio = "^0.21.1"
langchain-core = {path = "../../core", develop = true}
python-dotenv = "^1.0.1"

[tool.poetry.group.test_integration]
optional = true

[tool.poetry.group.test_integration.dependencies]
pytest = "^7.3.0"
python-dotenv = "^1.0.1"

[tool.poetry.group.codespell]
optional = true

[tool.poetry.group.codespell.dependencies]
codespell = "^2.2.0"

[tool.poetry.group.lint]
optional = true

[tool.poetry.group.lint.dependencies]
ruff = "^0.1.5"
python-dotenv = "^1.0.1"
pytest = "^7.3.0"

[tool.poetry.group.typing.dependencies]
mypy = "^0.991"
langchain-core = {path = "../../core", develop = true}
types-requests = "^2.31.0.20240406"

[tool.poetry.group.dev]
optional = true

[tool.poetry.group.dev.dependencies]
langchain-core = {path = "../../core", develop = true}
ipykernel = "^6.29.4"
langchain-openai = {path = "../openai", develop = true}
langchainhub = "^0.1.15"

[tool.ruff]
select = [
  "E",  # pycodestyle
  "F",  # pyflakes
  "I",  # isort
]

[tool.mypy]
disallow_untyped_defs = "True"

[tool.coverage.run]
omit = [
    "tests/*",
]

[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"

[tool.pytest.ini_options]
# --strict-markers will raise errors on unknown marks.
# https://docs.pytest.org/en/7.1.x/how-to/mark.html#raising-errors-on-unknown-marks
#
# https://docs.pytest.org/en/7.1.x/reference/reference.html
# --strict-config: any warnings encountered while parsing the `pytest`
# section of the configuration file raise errors.
#
# https://github.com/tophat/syrupy
# --snapshot-warn-unused: prints a warning on unused snapshots rather than failing the test suite.
addopts = "--snapshot-warn-unused --strict-markers --strict-config --durations=5"
# Registering custom markers.
# https://docs.pytest.org/en/7.1.x/example/markers.html#registering-markers
markers = [
    "requires: mark tests as requiring a specific library",
    "asyncio: mark tests as requiring asyncio",
    "compile: mark placeholder test used to compile integration tests without running them",
]
asyncio_mode = "auto"
@ -0,0 +1,17 @@
import sys
import traceback
from importlib.machinery import SourceFileLoader

if __name__ == "__main__":
    files = sys.argv[1:]
    has_failure = False
    for file in files:
        try:
            SourceFileLoader("x", file).load_module()
        except Exception:
            has_failure = True
            print(file)
            traceback.print_exc()
            print()

    sys.exit(1 if has_failure else 0)
@ -0,0 +1,27 @@
#!/bin/bash
#
# This script searches for lines starting with "import pydantic" or "from pydantic"
# in tracked files within a Git repository.
#
# Usage: ./scripts/check_pydantic.sh /path/to/repository

# Check if a path argument is provided
if [ $# -ne 1 ]; then
  echo "Usage: $0 /path/to/repository"
  exit 1
fi

repository_path="$1"

# Search for lines matching the pattern within the specified repository
result=$(git -C "$repository_path" grep -E '^import pydantic|^from pydantic')

# Check if any matching lines were found
if [ -n "$result" ]; then
  echo "ERROR: The following lines need to be updated:"
  echo "$result"
  echo "Please replace the code with an import from langchain_core.pydantic_v1."
  echo "For example, replace 'from pydantic import BaseModel'"
  echo "with 'from langchain_core.pydantic_v1 import BaseModel'"
  exit 1
fi
@ -0,0 +1,17 @@
#!/bin/bash

set -eu

# Initialize a variable to keep track of errors
errors=0

# make sure not importing from langchain or langchain_experimental
git --no-pager grep '^from langchain\.' . && errors=$((errors+1))
git --no-pager grep '^from langchain_experimental\.' . && errors=$((errors+1))

# Decide on an exit status based on the errors
if [ "$errors" -gt 0 ]; then
  exit 1
else
  exit 0
fi
@ -0,0 +1 @@
test file content
@ -0,0 +1,7 @@
import pytest


@pytest.mark.compile
def test_placeholder() -> None:
    """Used for compiling integration tests without running any real tests."""
    pass
@ -0,0 +1,68 @@
import json
import os
from io import BytesIO

import dotenv

from langchain_azure_dynamic_sessions import SessionsPythonREPLTool

dotenv.load_dotenv()

POOL_MANAGEMENT_ENDPOINT = os.getenv("AZURE_DYNAMIC_SESSIONS_POOL_MANAGEMENT_ENDPOINT")
TEST_DATA_PATH = os.path.join(os.path.dirname(__file__), "data", "testdata.txt")
TEST_DATA_CONTENT = open(TEST_DATA_PATH, "rb").read()


def test_end_to_end() -> None:
    tool = SessionsPythonREPLTool(pool_management_endpoint=POOL_MANAGEMENT_ENDPOINT)
    result = tool.run("print('hello world')\n1 + 1")
    assert json.loads(result) == {
        "result": 2,
        "stdout": "hello world\n",
        "stderr": "",
    }

    # upload file content from an in-memory buffer
    uploaded_file1_metadata = tool.upload_file(
        remote_file_path="test1.txt", data=BytesIO(b"hello world!!!!!")
    )
    assert uploaded_file1_metadata.filename == "test1.txt"
    assert uploaded_file1_metadata.size_in_bytes == 16
    assert uploaded_file1_metadata.full_path == "/mnt/data/test1.txt"
    downloaded_file1 = tool.download_file(remote_file_path="test1.txt")
    assert downloaded_file1.read() == b"hello world!!!!!"

    # upload file from an open file handle
    with open(TEST_DATA_PATH, "rb") as f:
        uploaded_file2_metadata = tool.upload_file(remote_file_path="test2.txt", data=f)
    assert uploaded_file2_metadata.filename == "test2.txt"
    downloaded_file2 = tool.download_file(remote_file_path="test2.txt")
    assert downloaded_file2.read() == TEST_DATA_CONTENT

    # upload file from disk, specifying remote file path
    uploaded_file3_metadata = tool.upload_file(
        remote_file_path="test3.txt", local_file_path=TEST_DATA_PATH
    )
    assert uploaded_file3_metadata.filename == "test3.txt"
    downloaded_file3 = tool.download_file(remote_file_path="test3.txt")
    assert downloaded_file3.read() == TEST_DATA_CONTENT

    # upload file from disk, without specifying remote file path
    uploaded_file4_metadata = tool.upload_file(local_file_path=TEST_DATA_PATH)
    assert uploaded_file4_metadata.filename == os.path.basename(TEST_DATA_PATH)
    downloaded_file4 = tool.download_file(
        remote_file_path=uploaded_file4_metadata.filename
    )
    assert downloaded_file4.read() == TEST_DATA_CONTENT

    # list files
    remote_files_metadata = tool.list_files()
    assert len(remote_files_metadata) == 4
    remote_file_paths = [metadata.filename for metadata in remote_files_metadata]
    expected_filenames = [
        "test1.txt",
        "test2.txt",
        "test3.txt",
        os.path.basename(TEST_DATA_PATH),
    ]
    assert set(remote_file_paths) == set(expected_filenames)
@ -0,0 +1,9 @@
from langchain_azure_dynamic_sessions import __all__

EXPECTED_ALL = [
    "SessionsPythonREPLTool",
]


def test_all_imports() -> None:
    assert sorted(EXPECTED_ALL) == sorted(__all__)
@ -0,0 +1,208 @@
import json
import re
import time
from unittest import mock
from urllib.parse import parse_qs, urlparse

from azure.core.credentials import AccessToken

from langchain_azure_dynamic_sessions import SessionsPythonREPLTool
from langchain_azure_dynamic_sessions.tools.sessions import (
    _access_token_provider_factory,
)

POOL_MANAGEMENT_ENDPOINT = "https://westus2.dynamicsessions.io/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sessions-rg/sessionPools/my-pool"


def test_default_access_token_provider_returns_token() -> None:
    access_token_provider = _access_token_provider_factory()
    with mock.patch(
        "azure.identity.DefaultAzureCredential.get_token"
    ) as mock_get_token:
        mock_get_token.return_value = AccessToken("token_value", 0)
        access_token = access_token_provider()
        assert access_token == "token_value"


def test_default_access_token_provider_returns_cached_token() -> None:
    access_token_provider = _access_token_provider_factory()
    with mock.patch(
        "azure.identity.DefaultAzureCredential.get_token"
    ) as mock_get_token:
        mock_get_token.return_value = AccessToken(
            "token_value", int(time.time() + 1000)
        )
        access_token = access_token_provider()
        assert access_token == "token_value"
        assert mock_get_token.call_count == 1

        mock_get_token.return_value = AccessToken(
            "new_token_value", int(time.time() + 1000)
        )
        access_token = access_token_provider()
        assert access_token == "token_value"
        assert mock_get_token.call_count == 1


def test_default_access_token_provider_refreshes_expiring_token() -> None:
    access_token_provider = _access_token_provider_factory()
    with mock.patch(
        "azure.identity.DefaultAzureCredential.get_token"
    ) as mock_get_token:
        mock_get_token.return_value = AccessToken("token_value", int(time.time() - 1))
        access_token = access_token_provider()
        assert access_token == "token_value"
        assert mock_get_token.call_count == 1

        mock_get_token.return_value = AccessToken(
            "new_token_value", int(time.time() + 1000)
        )
        access_token = access_token_provider()
        assert access_token == "new_token_value"
        assert mock_get_token.call_count == 2


@mock.patch("requests.post")
@mock.patch("azure.identity.DefaultAzureCredential.get_token")
def test_code_execution_calls_api(
    mock_get_token: mock.MagicMock, mock_post: mock.MagicMock
) -> None:
    tool = SessionsPythonREPLTool(pool_management_endpoint=POOL_MANAGEMENT_ENDPOINT)
    mock_post.return_value.json.return_value = {
        "$id": "1",
        "properties": {
            "$id": "2",
            "status": "Success",
            "stdout": "hello world\n",
            "stderr": "",
            "result": "",
            "executionTimeInMilliseconds": 33,
        },
    }
    mock_get_token.return_value = AccessToken("token_value", int(time.time() + 1000))

    result = tool.run("print('hello world')")

    assert json.loads(result) == {
        "result": "",
        "stdout": "hello world\n",
        "stderr": "",
    }

    api_url = f"{POOL_MANAGEMENT_ENDPOINT}/code/execute"
    headers = {
        "Authorization": "Bearer token_value",
        "Content-Type": "application/json",
        "User-Agent": mock.ANY,
    }
    body = {
        "properties": {
            "codeInputType": "inline",
            "executionType": "synchronous",
            "code": "print('hello world')",
        }
    }
    mock_post.assert_called_once_with(mock.ANY, headers=headers, json=body)

    called_headers = mock_post.call_args.kwargs["headers"]
    assert re.match(
        r"^langchain-azure-dynamic-sessions/\d+\.\d+\.\d+.* \(Language=Python\)",
        called_headers["User-Agent"],
    )

    called_api_url = mock_post.call_args.args[0]
    assert called_api_url.startswith(api_url)


@mock.patch("requests.post")
@mock.patch("azure.identity.DefaultAzureCredential.get_token")
def test_uses_specified_session_id(
    mock_get_token: mock.MagicMock, mock_post: mock.MagicMock
) -> None:
    tool = SessionsPythonREPLTool(
        pool_management_endpoint=POOL_MANAGEMENT_ENDPOINT,
        session_id="00000000-0000-0000-0000-000000000003",
    )
    mock_post.return_value.json.return_value = {
        "$id": "1",
        "properties": {
            "$id": "2",
            "status": "Success",
            "stdout": "",
            "stderr": "",
            "result": "2",
            "executionTimeInMilliseconds": 33,
        },
    }
    mock_get_token.return_value = AccessToken("token_value", int(time.time() + 1000))
    tool.run("1 + 1")
    call_url = mock_post.call_args.args[0]
    parsed_url = urlparse(call_url)
    call_identifier = parse_qs(parsed_url.query)["identifier"][0]
    assert call_identifier == "00000000-0000-0000-0000-000000000003"


def test_sanitizes_input() -> None:
    tool = SessionsPythonREPLTool(pool_management_endpoint=POOL_MANAGEMENT_ENDPOINT)
    with mock.patch("requests.post") as mock_post:
        mock_post.return_value.json.return_value = {
            "$id": "1",
            "properties": {
                "$id": "2",
                "status": "Success",
                "stdout": "",
                "stderr": "",
                "result": "",
                "executionTimeInMilliseconds": 33,
            },
        }
        tool.run("```python\nprint('hello world')\n```")
        body = mock_post.call_args.kwargs["json"]
        assert body["properties"]["code"] == "print('hello world')"


def test_does_not_sanitize_input() -> None:
    tool = SessionsPythonREPLTool(
        pool_management_endpoint=POOL_MANAGEMENT_ENDPOINT, sanitize_input=False
    )
    with mock.patch("requests.post") as mock_post:
        mock_post.return_value.json.return_value = {
            "$id": "1",
            "properties": {
                "$id": "2",
                "status": "Success",
                "stdout": "",
                "stderr": "",
                "result": "",
                "executionTimeInMilliseconds": 33,
            },
        }
        tool.run("```python\nprint('hello world')\n```")
        body = mock_post.call_args.kwargs["json"]
        assert body["properties"]["code"] == "```python\nprint('hello world')\n```"


def test_uses_custom_access_token_provider() -> None:
    def custom_access_token_provider() -> str:
        return "custom_token"

    tool = SessionsPythonREPLTool(
        pool_management_endpoint=POOL_MANAGEMENT_ENDPOINT,
        access_token_provider=custom_access_token_provider,
    )

    with mock.patch("requests.post") as mock_post:
        mock_post.return_value.json.return_value = {
            "$id": "1",
            "properties": {
                "$id": "2",
                "status": "Success",
                "stdout": "",
                "stderr": "",
                "result": "",
                "executionTimeInMilliseconds": 33,
            },
        }
        tool.run("print('hello world')")
        headers = mock_post.call_args.kwargs["headers"]
        assert headers["Authorization"] == "Bearer custom_token"