organize pages
parent a18dcbb1d7
commit 48c081b892
Binary file not shown (new image: 45 KiB).
@ -1,3 +1,3 @@
# About

This is the about page! This page is shown on the navbar.

The Prompt Engineering Guide is a project by DAIR.AI. It aims to educate researchers and practitioners about prompt engineering.
@ -0,0 +1,21 @@
# Prompting Applications

In this guide, we cover some advanced and interesting ways to use prompt engineering to perform useful and more complex tasks.

**Note that this section is under heavy development.**

import { Card, Cards } from 'nextra-theme-docs'

<Cards num={12}>
  <Card
    arrow
    title="Generating Data"
    href="/applications/generating">
  </Card>
  <Card
    arrow
    title="Program-Aided Language Models"
    href="/applications/pal">
  </Card>
</Cards>
@ -1,5 +1,4 @@
{
  "introduction": "Prompting Applications",
  "generating": "Generating Data",
  "pal": "Program-Aided Language Models"
}
@ -1,3 +0,0 @@
In this guide we will cover some advanced and interesting ways we can use prompt engineering to perform useful and more advanced tasks.

**Note that this section is under heavy development.**
@ -0,0 +1,104 @@
# PAL (Program-Aided Language Models)

import { Callout, FileTree } from 'nextra-theme-docs'
import {Screenshot} from 'components/screenshot'
import PAL from '../../img/pal.png'

[Gao et al., (2022)](https://arxiv.org/abs/2211.10435) presents a method that uses LLMs to read natural language problems and generate programs as the intermediate reasoning steps. Coined program-aided language models (PAL), it differs from chain-of-thought prompting in that, instead of using free-form text to obtain a solution, it offloads the solution step to a programmatic runtime such as a Python interpreter.

<Screenshot src={PAL} alt="PAL" />

Let's look at an example using LangChain and OpenAI GPT-3. We are interested in developing a simple application that can interpret the question being asked and provide an answer by leveraging the Python interpreter.

Specifically, we are interested in creating a functionality that allows the use of an LLM to answer questions that require date understanding. We will provide the LLM a prompt that includes a few exemplars, adopted from [here](https://github.com/reasoning-machines/pal/blob/main/pal/prompt/date_understanding_prompt.py).

These are the imports we need:
```python
import openai
import os
from datetime import datetime
from dateutil.relativedelta import relativedelta
from dotenv import load_dotenv
from langchain.llms import OpenAI
```
Let's first configure a few things:

```python
load_dotenv()

# API configuration
openai.api_key = os.getenv("OPENAI_API_KEY")

# for LangChain
os.environ["OPENAI_API_KEY"] = os.getenv("OPENAI_API_KEY")
```
Set up the model instance:

```python
llm = OpenAI(model_name='text-davinci-003', temperature=0)
```
Set up the prompt and the question:

```python
question = "Today is 27 February 2023. I was born exactly 25 years ago. What is the date I was born in MM/DD/YYYY?"

DATE_UNDERSTANDING_PROMPT = """
# Q: 2015 is coming in 36 hours. What is the date one week from today in MM/DD/YYYY?
# If 2015 is coming in 36 hours, then today is 36 hours before.
today = datetime(2015, 1, 1) - relativedelta(hours=36)
# One week from today,
one_week_from_today = today + relativedelta(weeks=1)
# The answer formatted with %m/%d/%Y is
one_week_from_today.strftime('%m/%d/%Y')
# Q: The first day of 2019 is a Tuesday, and today is the first Monday of 2019. What is the date today in MM/DD/YYYY?
# If the first day of 2019 is a Tuesday, and today is the first Monday of 2019, then today is 6 days later.
today = datetime(2019, 1, 1) + relativedelta(days=6)
# The answer formatted with %m/%d/%Y is
today.strftime('%m/%d/%Y')
# Q: The concert was scheduled to be on 06/01/1943, but was delayed by one day to today. What is the date 10 days ago in MM/DD/YYYY?
# If the concert was scheduled to be on 06/01/1943, but was delayed by one day to today, then today is one day later.
today = datetime(1943, 6, 1) + relativedelta(days=1)
# 10 days ago,
ten_days_ago = today - relativedelta(days=10)
# The answer formatted with %m/%d/%Y is
ten_days_ago.strftime('%m/%d/%Y')
# Q: It is 4/19/1969 today. What is the date 24 hours later in MM/DD/YYYY?
# It is 4/19/1969 today.
today = datetime(1969, 4, 19)
# 24 hours later,
later = today + relativedelta(hours=24)
# The answer formatted with %m/%d/%Y is
later.strftime('%m/%d/%Y')
# Q: Jane thought today is 3/11/2002, but today is in fact Mar 12, which is 1 day later. What is the date 24 hours later in MM/DD/YYYY?
# If Jane thought today is 3/11/2002, but today is in fact Mar 12, then today is 3/12/2002.
today = datetime(2002, 3, 12)
# 24 hours later,
later = today + relativedelta(hours=24)
# The answer formatted with %m/%d/%Y is
later.strftime('%m/%d/%Y')
# Q: Jane was born on the last day of February in 2001. Today is her 16-year-old birthday. What is the date yesterday in MM/DD/YYYY?
# If Jane was born on the last day of February in 2001 and today is her 16-year-old birthday, then today is 16 years later.
today = datetime(2001, 2, 28) + relativedelta(years=16)
# Yesterday,
yesterday = today - relativedelta(days=1)
# The answer formatted with %m/%d/%Y is
yesterday.strftime('%m/%d/%Y')
# Q: {question}
""".strip() + '\n'
```
Now format the prompt with our question and run the model:

```python
llm_out = llm(DATE_UNDERSTANDING_PROMPT.format(question=question))
print(llm_out)
```

```python
# the model's completion is Python code that assigns the birth date to `born`
exec(llm_out)
print(born)
```

This will output the following: `02/27/1998`
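Since `exec` runs whatever program the model produced, a safer pattern (not part of the original guide) is to execute the generated code in an isolated namespace and read the result out of it. A minimal sketch, using a hard-coded, hypothetical stand-in for `llm_out`:

```python
from datetime import datetime

# Hypothetical stand-in for the model's generated program.
llm_out = "born = datetime(1998, 2, 27).strftime('%m/%d/%Y')"

# Run the generated code in an isolated namespace so it cannot
# overwrite our own globals; expose only the names it needs.
namespace = {"datetime": datetime}
exec(llm_out, namespace)
print(namespace["born"])  # 02/27/1998
```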
@ -1,5 +1,55 @@
# Preface
# Prompt Engineering Guide

Prompt engineering is a relatively new discipline for developing and optimizing prompts to efficiently use language models (LMs) for a wide variety of applications and research topics. Prompt engineering skills help to better understand the capabilities and limitations of large language models (LLMs). Researchers use prompt engineering to improve the capacity of LLMs on a wide range of common and complex tasks such as question answering and arithmetic reasoning. Developers use prompt engineering to design robust and effective prompting techniques that interface with LLMs and other tools.

Motivated by the high interest in developing with LLMs, we have created this new prompt engineering guide that contains all the latest papers, learning guides, lectures, references, and tools related to prompt engineering.

import { Card, Cards } from 'nextra-theme-docs'

<Cards num={9}>
  <Card
    arrow
    title="Introduction"
    href="/introduction">
  </Card>
  <Card
    arrow
    title="Techniques"
    href="/techniques">
  </Card>
  <Card
    arrow
    title="Applications"
    href="/applications">
  </Card>
  <Card
    arrow
    title="Models"
    href="/models">
  </Card>
  <Card
    arrow
    title="Risks & Misuses"
    href="/risks">
  </Card>
  <Card
    arrow
    title="Papers"
    href="/papers">
  </Card>
  <Card
    arrow
    title="Tools"
    href="/tools">
  </Card>
  <Card
    arrow
    title="Datasets"
    href="/datasets">
  </Card>
  <Card
    arrow
    title="Additional Readings"
    href="/readings">
  </Card>
</Cards>
@ -0,0 +1,13 @@
# Models

In this section, we will cover some of the capabilities of language models by applying the latest and most advanced prompt engineering techniques.

import { Card, Cards } from 'nextra-theme-docs'

<Cards num={1}>
  <Card
    arrow
    title="ChatGPT"
    href="/models/chatgpt">
  </Card>
</Cards>
@ -1,5 +1,4 @@
{
  "introduction": "Introduction",
  "chatgpt": "ChatGPT"
}
@ -1,3 +0,0 @@
# Models

In this section, we will cover some of the capabilities of language models by applying the latest and most advanced prompting engineering techniques.
@ -1,3 +1,23 @@
# Risks & Misuses

We have already seen how effective well-crafted prompts can be for various tasks using techniques like few-shot learning. As we think about building real-world applications on top of LLMs, it becomes crucial to think about the misuses, risks, and safety practices involved with language models. This section focuses on highlighting some of the risks and misuses of LLMs via techniques like prompt injections. It also highlights harmful behaviors and how to mitigate them via effective prompting techniques. Other topics of interest include generalizability, calibration, biases, social biases, and factuality, to name a few.

import { Card, Cards } from 'nextra-theme-docs'

<Cards num={3}>
  <Card
    arrow
    title="Adversarial Prompting"
    href="/risks/adversarial">
  </Card>
  <Card
    arrow
    title="Factuality"
    href="/risks/factuality">
  </Card>
  <Card
    arrow
    title="Biases"
    href="/risks/biases">
  </Card>
</Cards>
@ -0,0 +1,70 @@
# Prompting Techniques

By this point, it should be obvious that it helps to improve prompts to get better results on different tasks. That's the whole idea behind prompt engineering.

While those examples were fun, let's cover a few concepts more formally before we jump into more advanced concepts.

import { Card, Cards } from 'nextra-theme-docs'

<Cards num={12}>
  <Card
    arrow
    title="Zero-shot Prompting"
    href="/techniques/zeroshot">
  </Card>
  <Card
    arrow
    title="Few-shot Prompting"
    href="/techniques/fewshot">
  </Card>
  <Card
    arrow
    title="Chain-of-Thought Prompting"
    href="/techniques/cot">
  </Card>
  <Card
    arrow
    title="Zero-shot CoT"
    href="/techniques/zerocot">
  </Card>
  <Card
    arrow
    title="Self-Consistency"
    href="/techniques/consistency">
  </Card>
  <Card
    arrow
    title="Generate Knowledge Prompting"
    href="/techniques/knowledge">
  </Card>
  <Card
    arrow
    title="Automatic Prompt Engineer"
    href="/techniques/ape">
  </Card>
  <Card
    arrow
    title="Active-Prompt"
    href="/techniques/activeprompt">
  </Card>
  <Card
    arrow
    title="Directional Stimulus Prompting"
    href="/techniques/dsp">
  </Card>
  <Card
    arrow
    title="ReAct"
    href="/techniques/react">
  </Card>
  <Card
    arrow
    title="Multimodal CoT"
    href="/techniques/multimodalcot">
  </Card>
  <Card
    arrow
    title="Graph Prompting"
    href="/techniques/graph">
  </Card>
</Cards>
@ -1,5 +0,0 @@
# Prompting Techniques

By this point, it should be obvious that it helps to improve prompts to get better results on different tasks. That's the whole idea behind prompt engineering.

While those examples were fun, let's cover a few concepts more formally before we jump into more advanced concepts.