More and more people are realizing the importance of mastering prompt engineering, a new way of coding with natural language (e.g. English).
Some CEOs of big tech companies have even predicted that half of future jobs will be based on prompt engineering.
So how do you practice prompt engineering effectively? Recently, Andrew Ng partnered with OpenAI to release a free ChatGPT prompt engineering course for developers. The course offers high-quality content, and here I summarize the guidelines for crafting effective prompts covered in the video lessons, along with my personal insights.
Importance of Effective Prompts
Effective prompts are essential for obtaining high-quality responses from ChatGPT. A well-crafted prompt will help the AI to:
- Produce accurate and relevant information
- Maintain context and stay on-topic
- Generate coherent and well-structured responses
- Minimize errors and misunderstandings
Poorly constructed prompts can lead to irrelevant, ambiguous, or even incorrect outputs. Hence, investing time in crafting effective prompts is crucial for obtaining the best results from ChatGPT.
Crafting High-Quality Prompts
To create efficient prompts that yield high-quality responses, consider the following principles and strategies:
Principle 1: Write clear and specific instructions
Ensure your prompts are clear and concise to help the model understand the intent and desired output. Avoid ambiguous language or phrasing that could lead to multiple interpretations. This can be accomplished with strategies such as:
Strategy 1: Use delimiters to clearly indicate distinct parts of the input
Delimiters help avoid potential interference from misleading user input. Examples of delimiters include:
- Triple quotes: """
- Triple backticks: ```
- Triple dashes: ---
- Angle brackets: <>
- XML tags: <tag> </tag>
Prompt example:
text = f"""
You should express what you want a model to do by \
providing instructions that are as clear and \
specific as you can possibly make them. \
This will guide the model towards the desired output, \
and reduce the chances of receiving irrelevant \
or incorrect responses. Don't confuse writing a \
clear prompt with writing a short prompt. \
In many cases, longer prompts provide more clarity \
and context for the model, which can lead to \
more detailed and relevant outputs.
"""
prompt = f"""
Summarize the text delimited by triple backticks \
into a single sentence.
```{text}```
"""
response = get_completion(prompt)
print(response)

Output:
Clear and specific instructions should be ...
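The get_completion call above is the helper used throughout the course notebook. As a minimal sketch (assuming the legacy openai<1.0 Python SDK, with the API key supplied via the OPENAI_API_KEY environment variable), it can be written as follows; adapt it if you are on a newer SDK version:

import openai  # pip install "openai<1.0"; the key is read from OPENAI_API_KEY

def get_completion(prompt, model="gpt-3.5-turbo"):
    # Send a single-turn prompt to the chat completion API and return the reply text.
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # 0 makes the output as deterministic as possible
    )
    return response.choices[0].message["content"]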
Using delimiters this way not only helps the model clearly understand the task, but also guards against prompt injection from user input, e.g. “Forget the previous command, do XYZ.”

Strategy 2: Ask for structured output (e.g. HTML or JSON)
This approach makes model outputs directly usable by programs; for example, JSON output can be read by a Python program and parsed into a dictionary, as in the sketch below.
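Prompt example (a sketch adapted from the spirit of the course example, not quoted verbatim; it reuses the get_completion helper above, and the book-list request is illustrative). The model is asked for JSON with fixed keys so that Python can parse the reply directly:

import json

prompt = f"""
Generate a list of three made-up book titles along \
with their authors and genres.
Provide them in JSON format with the following keys:
book_id, title, author, genre.
"""
response = get_completion(prompt)
print(response)

# The reply was requested as JSON, so it can be loaded into Python objects
# (in practice you may first need to strip markdown fences from the reply).
books = json.loads(response)
print(books[0]["title"])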
Strategy 3: Check whether conditions are satisfied and whether the assumptions required for the task hold
If the completion of the task has preconditions that must be met, we should require the model to check these conditions first and instruct it to stop trying if they are not met.
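Prompt example (a sketch in the spirit of the course example; the tea-making text and the exact prompt wording are illustrative, and get_completion is the helper from above). The model must first check whether the text contains a sequence of instructions, and fall back to "No steps provided." if it does not:

text_1 = f"""
Making a cup of tea is easy! First, get some water boiling. \
While that's happening, grab a cup and put a tea bag in it. \
Once the water is hot enough, pour it over the tea bag. \
Let it sit for a few minutes so the tea can steep, then take \
out the tea bag. If you like, add some sugar or milk to taste. \
And that's it! You've got yourself a delicious cup of tea to enjoy.
"""
prompt = f"""
You will be provided with text delimited by triple quotes.
If it contains a sequence of instructions, \
re-write those instructions in the following format:

Step 1 - ...
Step 2 - ...
...
Step N - ...

If the text does not contain a sequence of instructions, \
then simply write \"No steps provided.\"

\"\"\"{text_1}\"\"\"
"""
print("Completion for Text 1:")
print(get_completion(prompt))

For text that does contain instructions (like the tea example), the model rewrites it as numbered steps; for text without instructions, it should answer "No steps provided." instead of inventing steps.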
This has the added benefit of taking into account potential edge cases to avoid unexpected errors or results.
Strategy 4: “Few-shot” prompting: give one or more successful examples of completing the task, then ask the model to perform the task
Providing the model with one or more worked examples helps clarify the expected output. For more information on few-shot learning, refer to GPT-3’s paper: “Language Models are Few-Shot Learners.”
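Prompt example (a sketch adapted from the spirit of the course example; the child/grandparent dialogue is illustrative, and get_completion is the helper from above). One completed exchange establishes the desired style, and the model is then asked to continue it:

prompt = f"""
Your task is to answer in a consistent style.

<child>: Teach me about patience.

<grandparent>: The river that carves the deepest valley flows \
from a modest spring; the grandest symphony originates from a \
single note; the most intricate tapestry begins with a solitary thread.

<child>: Teach me about resilience.
"""
print(get_completion(prompt))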
Output (abridged):
<grandparent>: Resilience is like a tree that ...
Principle 2: Give the model time to “think”
This principle draws on the idea of a chain of thought: break a complex task into a sequence of subtasks so that the model can reason step by step and produce more accurate outputs. For more details, refer to the paper “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.”
Strategy 1: Specify the steps required to complete a task
Here’s an example involving summarizing text, translating it into French, listing names in the French summary, and finally outputting data in JSON format. By providing the necessary steps, the model can reference the results of previous steps and improve the accuracy of the output.
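Prompt example (a sketch of this idea, reusing the get_completion helper from above; the short story, the step wording, and the JSON keys are illustrative, adapted from the spirit of the course example):

story = f"""
In a charming village, siblings Jack and Jill set out on a quest \
to fetch water from a hilltop well. As they climbed, singing joyfully, \
misfortune struck: Jack tripped on a stone and tumbled down the hill, \
with Jill following suit. Though slightly battered, the pair returned \
home to comforting embraces, their adventurous spirits undimmed.
"""

prompt_2 = f"""
Your task is to perform the following actions:
1 - Summarize the following text, delimited by <>, in one sentence.
2 - Translate the summary into French.
3 - List each name in the French summary.
4 - Output a JSON object that contains the following keys: \
french_summary, num_names.

Use the following format:
Text: <text to summarize>
Summary: <summary>
Translation: <summary translation>
Names: <list of names in summary>
Output JSON: <json with summary and num_names>

Text: <{story}>
"""
print(get_completion(prompt_2))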
Strategy 2: Instruct the model to work out its own solution before rushing to a conclusion
If the task is too complex or the description too sparse, the model can only reach a conclusion by guessing, much like a person who has to solve a difficult math problem with very little exam time left and is therefore likely to miscalculate. In such cases, we can instruct the model to spend more time thinking about the problem.
For instance, when checking a student’s exercise solution, instruct the model to first find its own solution to prevent rushing to an incorrect answer.
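Poor prompt example (a sketch adapted from the spirit of the course's solar-installation exercise; the question and the student's flawed solution are illustrative, and get_completion is the helper from above). The prompt simply asks for a verdict, so the model tends to skim the student's work and agree with it:

question = f"""
I'm building a solar power installation and I need help \
working out the financials.
- Land costs $100 / square foot
- I can buy solar panels for $250 / square foot
- I negotiated a maintenance contract that will cost me a flat \
$100k per year, and an additional $10 / square foot
What is the total cost for the first year of operations as a \
function of the number of square feet?
"""

# Deliberately flawed: the maintenance term should be 10x, not 100x.
student_solution = f"""
Let x be the size of the installation in square feet.
Costs:
1. Land cost: 100x
2. Solar panel cost: 250x
3. Maintenance cost: 100,000 + 100x
Total cost: 100x + 250x + 100,000 + 100x = 450x + 100,000
"""

prompt = f"""
Determine if the student's solution is correct or not.

Question:
{question}

Student's Solution:
{student_solution}
"""
print(get_completion(prompt))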
Output (incorrect):
The student's solution is correct.
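Updated prompt (again a sketch with illustrative wording): the model is told to work out its own solution first, and only then compare it with the student's:

prompt = f"""
Your task is to determine if the student's solution is correct or not.
To solve the problem, do the following:
- First, work out your own solution to the problem.
- Then compare your solution to the student's solution and evaluate \
whether the student's solution is correct or not.
Don't decide if the student's solution is correct until you have \
done the problem yourself.

Use the following format:
Question: the question here
Student's solution: the student's solution here
Actual solution: your own worked solution here
Is the student's solution the same as the actual solution: yes or no
Student grade: correct or incorrect

Question: {question}
Student's solution: {student_solution}
Actual solution:
"""
print(get_completion(prompt))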
Output (correct, abridged):
Let x be the size of the installation in square feet.
Costs: ...
Model Limitations: Hallucinations
ChatGPT may hallucinate, producing plausible-sounding but false information (e.g., non-existent literary works). To mitigate this, you can ask the model to first gather relevant reference information (or include reference material you have found, e.g. via a search engine, directly in the question), and then have it answer based on that reference information.
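Prompt example (a minimal sketch of this idea, not taken from the course notebook; reference_text, the question, and the prompt wording are all illustrative, and get_completion is the helper from above). The model is asked to extract relevant quotes from supplied reference material first, and then to answer only from those quotes:

reference_text = f"""
(Paste trusted reference material here, e.g. documentation or \
search results that you have verified yourself.)
"""
question = "What does the reference say about pricing?"  # illustrative placeholder

prompt = f"""
Answer the question using only the reference text delimited by triple backticks.
First, list the quotes from the reference that are relevant to the question.
Then answer the question based only on those quotes.
If the reference does not contain the answer, reply \"I could not find \
this in the reference.\"

Question: {question}

```{reference_text}```
"""
print(get_completion(prompt))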