In this lesson, we will explore the eight essential principles of creating effective prompts. These principles will help you guide AI models to produce accurate, relevant, and high-quality responses no matter what kind of use case you have. They are universal guidelines for any type of prompt, which you can apply practically any time you work with an LLM.
1. Give Direction
Definition
Giving direction involves clearly instructing the AI on what you want it to do. Precise instructions are always better than imprecise ones, which can lead the AI to misinterpret what you want.
Importance
- Clarity: Ensures that the AI understands the task, reducing ambiguity.
- Focus: Helps the AI concentrate on relevant information, improving the quality of the response.
Examples
- Direct Instruction: “Summarize the key points of the following article.”
- Guided Task: “Compare and contrast the benefits of solar and wind energy.”
Tips
- Be explicit in your instructions.
- Avoid vague language to minimize misinterpretation.
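The difference between a vague and a directed prompt can be sketched in code. This is a minimal illustration; the template wording and the `build_prompt` helper are just examples, not a fixed API.

```python
# A vague prompt leaves the AI guessing about scope, length, and focus.
vague_prompt = "Tell me about this article."

# A directed prompt states the task, the length, and the focus explicitly.
directed_template = (
    "Summarize the key points of the following article in 3-5 sentences, "
    "focusing on the author's main argument and supporting evidence.\n\n"
    "Article:\n{article_text}"
)

def build_prompt(article_text: str) -> str:
    """Fill the directed template with the article to summarize."""
    return directed_template.format(article_text=article_text)
```

The filled template then becomes the user message you send to the model.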
2. Balance Complexity
Definition
Balancing complexity means adjusting the difficulty and detail of the prompt to match the AI’s capabilities and the desired depth of the response.
Importance
- Appropriateness: Ensures that the prompt is neither too simple nor too complex for the AI.
- Manageability: Helps the AI generate responses that are thorough but not overwhelming.
Examples
- Simple Task: “List three popular tourist destinations in Paris.”
- Complex Task: “Discuss the economic impacts of tourism in Paris over the past decade.”
Tips
- Consider the AI’s limitations and strengths.
- Start with simpler prompts and gradually increase complexity as needed.
3. Specify Format
Definition
Specifying the format involves defining how you want the AI to structure its response.
Importance
- Consistency: Ensures that responses are in a useful and predictable format.
- Readability: Makes the output easier to understand and utilize.
Examples
- Bullet Points: “List the advantages of electric cars in bullet points.”
- Structured Response: “Write a five-paragraph essay on climate change.”
Tips
- Choose a format that best suits your needs (e.g., lists, paragraphs, tables).
- Clearly outline the desired structure in the prompt.
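One convenient pattern is to keep format instructions separate from the task and append them as needed. The function name and the exact instruction strings below are illustrative choices, not a standard.

```python
# Reusable output-format instructions that can be appended to any task.
FORMATS = {
    "bullets": "Answer as a bulleted list, one point per line.",
    "table": "Answer as a markdown table with clear column headers.",
    "essay": "Answer as a five-paragraph essay with an intro and conclusion.",
}

def with_format(task: str, fmt: str) -> str:
    """Append an explicit output-format instruction to a task prompt."""
    return f"{task}\n\n{FORMATS[fmt]}"

prompt = with_format("List the advantages of electric cars.", "bullets")
```

Keeping formats in one place makes it easy to reuse the same task with different output structures.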
4. Provide Examples
Definition
Providing examples means including sample responses or scenarios to illustrate what you expect from the AI.
Importance
- Guidance: Helps the AI understand the type of response you are looking for.
- Accuracy: Reduces the likelihood of irrelevant or incorrect answers.
Examples
- Example Scenario: “For instance, if asked about healthy breakfast options, you might list oatmeal, fruit smoothies, and yogurt.”
- Sample Response: “Example: ‘The capital of France is Paris.’”
Tips
- Include examples that are clear and relevant to the task.
- Use examples to clarify complex or nuanced prompts.
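Providing examples is often called few-shot prompting, and it is easy to automate: worked examples are placed before the real query so the model can imitate their pattern. The Q/A layout below is one common convention; adapt the labels to your task.

```python
def few_shot_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Build a few-shot prompt: instruction, worked examples, then the new query."""
    lines = [instruction, ""]
    for question, answer in examples:
        lines.append(f"Q: {question}")
        lines.append(f"A: {answer}")
        lines.append("")  # blank line between examples
    # The real query comes last, with "A:" left open for the model to complete.
    lines.append(f"Q: {query}")
    lines.append("A:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Answer each geography question in one short sentence.",
    [("What is the capital of France?", "The capital of France is Paris.")],
    "What is the capital of Japan?",
)
```

The model sees the example pair first, so its answer to the final question tends to match that style and length.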
5. Divide Work
Definition
Dividing work involves breaking down complex tasks into smaller, more manageable parts. The more you ask the LLM to do at once, the higher the chance that the quality of the result will suffer.
Importance
- Simplicity: Makes it easier for the AI to handle complex tasks step by step.
- Focus: Ensures each part of the task receives adequate attention.
Examples
- Step-by-Step Instructions: “First, summarize the introduction. Then, list the main points of each section. Finally, provide a conclusion.”
- Task Segmentation: “Analyze the financial performance for Q1, Q2, Q3, and Q4 separately.”
Tips
- Identify logical sections or stages of the task.
- Provide clear instructions for each part.
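The step-by-step instructions above can also be run as a chain of separate calls, where each result feeds into the next prompt. This is a sketch under assumptions: `llm` stands in for whatever client call you actually use, and the stub below only exists so the example runs without an API.

```python
from typing import Callable

def run_pipeline(document: str, llm: Callable[[str], str]) -> str:
    """Run a complex task as a chain of focused subtasks instead of one big prompt."""
    # Step 1: summarize the introduction on its own.
    summary = llm(f"Summarize the introduction of this document:\n{document}")
    # Step 2: extract the main points as a separate, focused call.
    points = llm(f"List the main points of each section of this document:\n{document}")
    # Step 3: write the conclusion from the two intermediate results.
    return llm(
        "Write a conclusion based on this summary and these main points.\n"
        f"Summary: {summary}\nMain points: {points}"
    )

# Stub LLM for demonstration only: it echoes the first line of each prompt.
# Replace it with a real model call in practice.
result = run_pipeline("...document text...", lambda prompt: prompt.splitlines()[0])
```

Each call gets one focused job, so no single prompt has to carry the whole task.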
6. Manage Context Length
Definition
Context length (or context window) is the amount of text an AI model can handle at once, similar to human short-term memory. It is measured in tokens (remember, 1 token ≈ 0.75 words on average), so a model with a context length of 100,000 tokens can process approximately 75,000 words. Context is what lets the model understand the text and generate coherent responses, but if you send more tokens than the context length allows, the model effectively "forgets" the text that no longer fits.
Importance
- Prevents Information Overload: Limiting the context helps prevent the AI from being overwhelmed by too much information, which can degrade the quality of its responses or cause the LLM to "forget" important details.
- Improves Coherence and Relevance: By focusing on a manageable window of text, the AI can maintain the relevance and coherence of its outputs, ensuring that responses are appropriately connected to the input.
- Aids in Focusing Responses: By constraining the amount of text, it prompts the AI to focus on the most relevant information, which can improve the accuracy and specificity of the answers.
Examples
- Script Writing: When an AI is used to write a script, keeping the prompt concise and focused on the immediate context (like the scene being written) ensures that the dialogue remains relevant and true to the characters’ development and the plot.
- Technical Support: In customer support scenarios, limiting the context to the customer’s current issue rather than the entire history of interactions can lead the AI to provide more direct and applicable solutions.
Tips
- More advanced prompts can use a lot of tokens, especially if they contain examples.
- Reduce redundancy and useless information in your prompts to conserve tokens, allowing more room for the AI to generate detailed and nuanced responses.
- Keep in mind that the effective context length, where the model recalls information reliably, is usually only around half of the advertised context length.
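Because 1 token ≈ 0.75 words on average, you can estimate whether a prompt fits the context window before sending it. This heuristic is only a rough estimate; for exact counts you would use the tokenizer for your specific model. The function names and the reply-size reserve below are illustrative.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the 1 token ~= 0.75 words heuristic."""
    words = len(text.split())
    return int(words / 0.75)

def fits_context(prompt: str, context_limit: int, reserve_for_reply: int = 1000) -> bool:
    """Check that the prompt leaves room in the context window for the model's reply."""
    return estimate_tokens(prompt) + reserve_for_reply <= context_limit
```

For example, a 100,000-token context window corresponds to roughly 75,000 words of combined prompt and response.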
7. Know What You Want
Definition
Many people start working with LLMs without actually knowing what they are trying to achieve, which naturally leads to poor results. In reality, whenever you interact with humans or AI using language, you can do three things: have a conversation, give a command, or ask a question. You can also mix these, for example by creating a detailed system prompt for a conversational chatbot that integrates precise command and question techniques.
Importance
- Clarity: The LLM can better fulfill its task when it is set up with a clear purpose.
- Optimal Results: Directly leads to higher quality and more satisfactory results.
- Iterability: Allows for easier testing and improving of your prompts.
Examples
- You want to have a brainstorming partner chatbot, so you set up its system prompt to be very conversational and critical.
- You want an LLM to reformat some text into a specific style, so you set up a detailed system prompt with commands and examples.
- You want a document question-and-answer chatbot, so you set up its system prompt in a way that it answers questions in a certain way using information from an uploaded document.
Tips
- Start by answering the question “What do I need this bot to do?”
- Use your answer to that question to formulate a detailed system prompt.
- Keep one “base prompt” that you consistently improve upon.
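The three example goals above translate directly into different system prompts. The wording below is a hedged sketch of what such "base prompts" might look like; in practice each would be longer and refined over many iterations.

```python
# Illustrative system prompts matched to three different goals.
system_prompts = {
    # Conversation: a critical brainstorming partner.
    "brainstorm": (
        "You are a critical brainstorming partner. Converse naturally, "
        "question assumptions, and push back on weak ideas."
    ),
    # Command: reformat text into a specific style.
    "reformat": (
        "You reformat text. Follow the style rules and examples below exactly, "
        "and return only the reformatted text, nothing else."
    ),
    # Question: answer only from an uploaded document.
    "doc_qa": (
        "Answer questions using only the uploaded document. If the answer is "
        "not in the document, say so instead of guessing."
    ),
}
```

Each prompt answers the question "What do I need this bot to do?" before any user message is written.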
8. Evaluate Quality
Definition
Evaluating quality means assessing the AI's responses to ensure they meet your standards and expectations. This may seem obvious, but it is important: many people tend to trust the LLM's output without checking its quality.
Importance
- Accuracy: Confirms that the information provided is correct and relevant.
- Improvement: Identifies areas where the prompt or AI response can be refined.
Examples
- Checklists: “Verify that the response includes at least three key points, accurate data, and a clear conclusion.”
- Criteria: “Evaluate the response based on clarity, relevance, and completeness.”
Tips
- Develop a set of criteria for assessing the quality of responses.
- Provide feedback to improve future responses.
- Remember that the AI can hallucinate so fact-check important information.
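A checklist like the one in the examples can be partially automated. The sketch below checks only mechanical criteria (required terms present, a minimum amount of detail); it is a starting point, not a full rubric, and it cannot catch hallucinations, so human fact-checking still matters.

```python
def evaluate_response(response: str, required_terms: list[str], min_sentences: int = 3) -> dict:
    """Score a response against a simple checklist of mechanical criteria."""
    # Which required terms are missing from the response?
    missing = [t for t in required_terms if t.lower() not in response.lower()]
    # Crude sentence count as a proxy for sufficient detail.
    normalized = response.replace("!", ".").replace("?", ".")
    sentences = [s for s in normalized.split(".") if s.strip()]
    return {
        "missing_terms": missing,
        "enough_detail": len(sentences) >= min_sentences,
        "passed": not missing and len(sentences) >= min_sentences,
    }
```

Running such checks on every response makes it easier to compare prompt variants and spot regressions as you iterate.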