🛠️ Generated Knowledge Prompting

What Is It?

Generated Knowledge Prompting is a technique in prompt engineering that leverages a Large Language Model’s (LLM) ability to generate and utilize relevant background information before providing an answer to a question. Instead of directly answering, the model first generates related knowledge or context, which helps it to reason through the question more effectively. This method enhances the accuracy and depth of responses, particularly for questions that require commonsense reasoning or specific factual knowledge.

By prompting the model to produce auxiliary information, we tap into its extensive training data, encouraging a more thorough exploration of the topic at hand. This approach aligns with how humans often think—by recalling relevant information before arriving at an answer.

How Does It Work?

The process of Generated Knowledge Prompting involves two main steps:

  1. Knowledge Generation: The model is first instructed to generate relevant facts, definitions, explanations, or context about the subject of the question. This step activates the model’s internal repository of knowledge related to the topic.
  2. Answer Generation: Using the generated knowledge as a foundation, the model then formulates an answer to the original question. The prior information guides the reasoning process, leading to a more accurate and comprehensive response.

By structuring prompts to include these steps, we encourage the model to reason more deeply, reducing the likelihood of superficial or incorrect answers.
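The two steps above can be sketched as a small pipeline. This is a minimal illustration, not a specific library's API: `generate` stands in for whatever function your LLM client exposes for sending a prompt and receiving a completion.

```python
def build_knowledge_prompt(question: str) -> str:
    # Step 1: ask the model for background facts, not an answer.
    return f"Input: {question}\nKnowledge:"

def build_answer_prompt(question: str, knowledge: str) -> str:
    # Step 2: feed the generated knowledge back in alongside the question.
    return f"Question: {question}\nKnowledge: {knowledge}\n\nExplain and Answer:"

def generated_knowledge_answer(question: str, generate) -> str:
    """Run both steps. `generate` is any callable that takes a prompt
    string and returns the model's completion (a placeholder here)."""
    knowledge = generate(build_knowledge_prompt(question))
    return generate(build_answer_prompt(question, knowledge))
```

In practice you would plug in your provider's completion call as `generate`; the key point is that the model's own first output is inserted into the second prompt before the final answer is requested.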

When Do You Use It?

Generated Knowledge Prompting is particularly useful in situations where:

  • Complex Reasoning is Required: Questions that involve multi-step reasoning or require understanding of nuanced concepts benefit from this approach.
  • Commonsense or World Knowledge is Needed: When answering questions that rely on general world knowledge or common sense.
  • Preventing Misconceptions: To avoid incorrect answers that stem from misunderstandings or oversights.
  • Educational and Explanatory Contexts: When detailed explanations enhance the value of the response, such as in tutoring or informative discussions.

This technique is valuable for improving the performance of LLMs on tasks where direct answers may be insufficient due to the complexity of the topic or potential ambiguities.

Practical Example

Let’s illustrate how Generated Knowledge Prompting works with an example.

In a User Prompt

Direct Question:

Question: Can penguins fly? Yes or No?

When asked directly, the model might incorrectly answer:

Answer: Yes.

This is incorrect because, while penguins are birds, they cannot fly.

Using Generated Knowledge Prompting:

Step 1: Prompt the Model to Generate Knowledge

Input: Can penguins fly?
Knowledge:

Model Generates Knowledge:

Penguins are a group of aquatic, flightless birds that live primarily in the Southern Hemisphere. While they have wings and are classified as birds, their wings have evolved into flippers adapted for swimming rather than flying.

Step 2: Use the Generated Knowledge to Answer the Question

Question: Can penguins fly? Yes or No?
Knowledge: Penguins are a group of aquatic, flightless birds that live primarily in the Southern Hemisphere. While they have wings and are classified as birds, their wings have evolved into flippers adapted for swimming rather than flying.

Explain and Answer:

Model’s Response:

No. Penguins cannot fly. Although they are birds with wings, their wings are flippers adapted for swimming underwater rather than flying in the air.

By first generating relevant knowledge about penguins, the model accurately explains and answers the question.

In a System Prompt

Alternatively, we can set up a system prompt that instructs the model to always generate knowledge before answering.

System Prompt:

You are an AI assistant that provides accurate answers by first generating relevant background knowledge before responding to questions, following this structure:

Knowledge:
{YOUR KNOWLEDGE HERE}

Answer:
{YOUR ANSWER HERE, AFTER CONSIDERING YOUR KNOWLEDGE}

User Prompt:

Is it safe to eat raw chicken? Yes or No?

Model’s Response:

Knowledge:
Raw chicken often contains harmful bacteria such as Salmonella and Campylobacter. Consuming raw or undercooked chicken can lead to foodborne illnesses, including food poisoning. Proper cooking kills these bacteria, making the chicken safe to eat.

Answer:
No. It is not safe to eat raw chicken because it may contain harmful bacteria that can cause foodborne illnesses. Chicken should be cooked thoroughly to eliminate any potential pathogens.

By embedding the knowledge-generation step in the system prompt, the model consistently provides well-reasoned and accurate answers.
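The system-prompt setup above maps naturally onto the chat-message format used by many LLM APIs. The sketch below assumes the common two-role message-list convention; the exact payload shape depends on your client library.

```python
def build_messages(system_prompt: str, user_question: str) -> list[dict]:
    # Chat-style payload: the system prompt enforces knowledge-first
    # answers for every question, so no per-question scaffolding is needed.
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

SYSTEM_PROMPT = (
    "You are an AI assistant that provides accurate answers by first "
    "generating relevant background knowledge before responding to "
    "questions, following this structure:\n\n"
    "Knowledge:\n{YOUR KNOWLEDGE HERE}\n\n"
    "Answer:\n{YOUR ANSWER HERE, AFTER CONSIDERING YOUR KNOWLEDGE}"
)

messages = build_messages(SYSTEM_PROMPT, "Is it safe to eat raw chicken? Yes or No?")
```

Because the instruction lives in the system prompt, every user question automatically triggers the knowledge-generation step.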


This example demonstrates how Generated Knowledge Prompting enhances the model’s ability to provide correct and detailed responses by explicitly requesting relevant information before answering. It helps the model process the question more thoroughly, much as a human might recall important facts before responding, thereby improving the overall quality of the interaction.