
Prompting: A Truly Practical Guide

  • Writer: MinglesAI
  • Dec 12, 2024
  • 11 min read

Did you know that a prompt is more than just text you type into a box? Each prompt is almost like a spell, where literally every word affects the outcome. This article is a practical guide to becoming a real wizard.



Introduction

So what's the "magic" behind prompting? It all comes down to steering the model to think the way we want it to. LLMs are black boxes, but we've cracked them open to see how far they can be controlled.

We've read and tested dozens of different prompting manuals, gathered all the best practices in one place, and are ready to share them with you.

Are Classic Prompting and Prompt Engineering the Same Thing?

Basic prompting is simply asking the model a question or giving it a simple instruction. Prompt engineering is a more complex process.

| Basic Prompting | Prompt Engineering |
| --- | --- |
| Single interactions with simple queries. | Multiple dialogues, complex instructions, and carefully structured inputs and outputs. |
| Can be vague or ambiguous. | Prompts are precise, leaving little room for misinterpretation by the model. |
| One-time iteration. | Repeated testing, analysis, and refinement of prompts over time. |

How Does a Prompt Engineer Work?

  1. Creates the Initial Prompt

First, let's define the end goal. Then, based on that, we formulate a draft version of the prompt.

  2. Tests and Identifies Problems

We check how well the prompt performs its task (a test-loop sketch follows this list). How to test:

  • Prepare test cases describing various scenarios and edge cases

  • Run the draft prompt on test cases

  • Analyze the results

(more about how to test prompts is explained here)

  3. Chooses the Appropriate Technique

We determine what changes we want to make to the prompt based on testing results. How to decide:

  • Identify why a particular problem occurs

  • Study prompt development methods that can solve the problem

  • Choose a method

  4. Implements Improvements

We improve the draft prompt. If you've found several problems and chosen a different method for each, change the prompt step by step; this way, you can attribute results to specific changes more clearly.

  5. Iterates and Refines

  • Test the improved prompt on the same test cases

  • Review how the results have changed, if at all

  • Find problems that haven't been solved, or new ones that have appeared

  • Repeat the cycle until you get the desired result
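
To make the testing step concrete, here's a minimal sketch of such a test loop in Python. `call_model` is a hypothetical stand-in for whichever LLM client you actually use, and the sentiment test cases are invented for illustration:

```python
# Hypothetical stand-in for a real LLM client call; replace the body
# with your actual API call.
def call_model(prompt: str) -> str:
    return "positive"  # dummy response so the sketch runs end to end

# Each test case pairs an input scenario with a check on the output;
# the last one is an edge case.
test_cases = [
    {"input": "I loved this movie!", "check": lambda out: "positive" in out.lower()},
    {"input": "Terrible. A waste of money.", "check": lambda out: "negative" in out.lower()},
    {"input": "It was fine, I guess.", "check": lambda out: "neutral" in out.lower()},
]

draft_prompt = (
    "Classify the sentiment of this review as positive, negative, or neutral.\n\n"
    "Review: {review}"
)

def run_tests(prompt_template: str) -> None:
    for case in test_cases:
        output = call_model(prompt_template.format(review=case["input"]))
        status = "PASS" if case["check"](output) else "FAIL"
        print(f"{status}: {case['input']!r} -> {output!r}")

run_tests(draft_prompt)
```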

Prompting Techniques

Like in any field, certain established methods have emerged in prompt writing. There are many techniques, but we've highlighted the most popular and useful ones.

Zero/Few-Shot

Zero-shot prompting is a technique of creating a prompt without using examples. In other words, we don't give the model any input/output examples to learn from.

When to use:

  • If the task is clear from the instruction

  • If the model can solve the task on its own (e.g., common NLP tasks - sentiment analysis, text classification, etc.)

  • If examples might constrain the model too much

  • If it's impossible to cover all possible cases with examples

Few-shot prompting is a technique of creating a prompt with examples. That is, we include several input/output examples in the prompt to show the model what we expect.

When to use:

  • If the task cannot be clearly described by instruction

  • If the desired task outcome differs from common standards (e.g., you want to label text sentiment not as positive/negative, but as neutral, enthusiastic, etc.)

  • If it's possible to cover all possible cases with examples OR examples can be adapted by the model for different cases
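
To make the contrast concrete, here's a small sketch of both styles as plain prompt strings; the review text and the custom labels are invented for illustration:

```python
# Zero-shot: the instruction alone, no worked examples.
zero_shot = (
    "Label the sentiment of this review as positive or negative.\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

# Few-shot: the same task, but with input/output examples that teach
# the model a custom label set (neutral, enthusiastic, ...).
few_shot = (
    "Label the sentiment of each review.\n\n"
    "Review: Absolutely loved it, would buy again.\n"
    "Sentiment: enthusiastic\n\n"
    "Review: It works. Nothing special.\n"
    "Sentiment: neutral\n\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)
```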

Role-based Prompting

Role-based prompting is a technique of creating a prompt that assigns the model a role or perspective.

This technique is useful when you need answers from a specific perspective. For example:

  1. You could simply ask the model to describe the dangers of global warming. Then we get the answer:

Global warming causes extreme weather conditions, rising sea levels, ecosystem destruction, and threats to human health and safety worldwide.

  2. You could ask the model to describe the dangers of global warming from an economist's perspective. Then we get the answer:

Global warming can lead to significant economic losses, including infrastructure damage, deterioration of agricultural production, and increased costs for natural disaster protection.

Advantages of the technique:

  • Makes the response more relevant to your context

  • Helps the model avoid errors and hallucinations if you define the role well

However, setting the right role isn't as simple as it seems. Very often, role assignment can either confuse the model or have no effect on the answer at all. It's important to test the prompt thoroughly with different roles and pick the one that works best.
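
In chat-style APIs, the role usually goes into the system message. A minimal sketch, assuming the common messages-list format (adapt it to your client):

```python
# The system message carries the role; the user message carries the task.
messages = [
    {"role": "system", "content": "You are an economist specializing in climate policy."},
    {"role": "user", "content": "Describe the dangers of global warming."},
]
```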

Chain-Of-Thought Prompting

Chain-Of-Thought prompting is a technique of creating a prompt that makes the model think step by step.

This technique brings the model's thinking closer to human reasoning: the model performs each step only after analyzing the previous one. The thing is, logical tasks that are obvious even to a child might not be obvious to the model. For example, let's give the model a task:

There were 5 apples on a tree, a bird took two, then one more ripened, and a worm bit two more. How many apples are on the tree?

Then the model's answer will be:

There are 2 apples left on the tree.

It's obvious to us that the answer is wrong, but not to the model. Now let's make the model think step by step by simply adding "Think step by step" to the prompt. Here's the answer the model gives this time:

Let's solve this step by step to count how many apples are left on the tree:

1. Initially, there are 5 apples on the tree.
2. A bird took 2 apples: (5 - 2 = 3) apples.
3. One more apple ripened: (3 + 1 = 4) apples.
4. A worm bit 2 apples, but didn't remove them, so the number of apples remains the same: 4 apples.

Therefore, there are 4 apples left on the tree.

Sometimes, to use this technique, it's really enough to just add one sentence to the prompt. However, in more complex tasks, you might need to detail the stages or explain the principle by which the model should act.
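
In its simplest form, the technique really is just one extra line appended to the task. A sketch, using the apple task from above:

```python
task = (
    "There were 5 apples on a tree, a bird took two, then one more "
    "ripened, and a worm bit two more. How many apples are on the tree?"
)

# The step-by-step cue is all it takes in the simple case.
cot_prompt = task + "\n\nThink step by step."
```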

Chain-Of-Thought is currently one of the most popular techniques, and several variations have grown out of it. We'll cover those too.

Chain-of-Verification Prompting

Chain-of-Verification prompting is a technique of creating a prompt that makes the model verify all previous steps before taking the next one.

This technique complements Chain-Of-Thought, making it more reliable: even when the model reasons step by step, it can still arrive at a wrong answer. For example, on the apple task, I once got this answer:

Let's solve this step by step to count how many apples are left on the tree:

1. Initially, there are 5 apples on the tree.
2. A bird took 2 apples: (5 - 2 = 3) apples.
3. One more apple ripened: (3 + 1 = 4) apples.
4. A worm bit 2 apples, but didn't remove them, so the number of apples changes: (4 - 2 = 2) apples.

Therefore, there are 2 apples left on the tree.

The model reasoned correctly until the last step, which went wrong and broke the final answer. Let's ask the model to verify each of its steps. We'll get this answer:

Let's solve this problem step by step and verify each step:

1. Initially, there are 5 apples on the tree.
2. A bird took 2 apples: (5 - 2 = 3) apples remain on the tree.
3. One more apple ripened: (3 + 1 = 4) apples on the tree.
4. A worm bit 2 apples:
   The worm didn't remove the apples, it only damaged them. The number of apples doesn't change, but the damaged apples are counted.
   So there are still 4 apples on the tree, just 2 of them are damaged.

Therefore, there are 4 apples left on the tree.

Obviously, the answer is correct now.
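
A sketch of how the verification instruction might be phrased for the same task; the exact wording is our own, not a canonical formula:

```python
task = (
    "There were 5 apples on a tree, a bird took two, then one more "
    "ripened, and a worm bit two more. How many apples are on the tree?"
)

# Ask for verification after every step and a final cross-check.
cove_prompt = (
    task
    + "\n\nSolve this step by step. After each step, verify it against "
    "the problem statement before continuing. Before giving the final "
    "answer, check it against every verified step."
)
```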

Chain-of-Note Prompting

Chain-of-Note prompting is a technique of creating a prompt that makes the model take so-called "notes" during the problem-solving process.

Why does the model need these notes? During long reasoning, the model can reach conclusions that contradict one another. Notes pin down key reasoning points so that the model can rely on them throughout the entire solution process.

For example, let's ask the model to solve a detective puzzle. We have 5 suspects and facts about each of them. The model must choose the culprit. Let's say one of the suspects is John. At step three, based on some facts, the model decides that John is innocent. However, at step seven, the model chooses John as guilty after receiving new facts and forgetting about its previous reasoning. If we ask the model to take notes, something like "John is innocent because..." will remain in the model's memory, and even with new facts, the model will stick to this conclusion.
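
A sketch of how such a note-taking instruction could be phrased; the wording and the `{puzzle}` placeholder are illustrative, not a canonical template:

```python
# Ask the model to record conclusions as explicit notes and to respect
# them for the rest of the solution.
note_prompt = (
    "Solve the detective puzzle below. As you reason, keep a numbered "
    "list of NOTES recording each conclusion you reach about a suspect "
    "(e.g. 'NOTE 3: John is innocent because ...'). Before drawing any "
    "new conclusion, re-read your notes and do not contradict them.\n\n"
    "{puzzle}"
)
```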

Chain-of-Knowledge Prompting

Chain-of-Knowledge prompting is a technique of creating a prompt that makes the model use existing knowledge to solve a problem.

The main difference from Chain-Of-Thought prompting is that the model doesn't make conclusions independently during reasoning. The model relies on known facts and builds a logical chain from them leading to a specific answer.

For example, suppose solving a problem requires the laws of physics. You obviously can't rely on the correctness of laws the model might try to derive on its own, but you absolutely can ask it to solve the problem based on specific laws you provide. The answer's accuracy will be higher than with plain Chain-Of-Thought because real facts won't get distorted along the way.
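
A sketch of a Chain-of-Knowledge prompt for a physics problem; the listed laws and the `{problem}` placeholder are illustrative:

```python
# Supply the facts to reason from, and forbid deriving new ones.
cok_prompt = (
    "Solve the problem below using ONLY the laws listed. Cite the law "
    "you apply at each step; do not derive or invent other laws.\n\n"
    "Laws:\n"
    "1. Newton's second law: F = m * a\n"
    "2. Weight near Earth's surface: W = m * g, where g = 9.8 m/s^2\n\n"
    "Problem: {problem}"
)
```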

Writing Prompts: Getting Practical

Now that we know the prompt cycle and popular techniques, it's time to move to the most practical part - actually writing prompts. Let's start with the stick - namely,

what NOT to do:

  1. Give Unclear Instructions

Don't use vague or overly broad wording; it leads to generic or irrelevant answers.

  2. Give Overly Detailed Instructions

Conversely, don't over-specify; excessively detailed instructions can constrain the model too much.

  3. Assume the Model Understands

When referring to a concept or term, don't assume the model is necessarily familiar with it. Provide more detailed context to reduce the risk of hallucinations.

  4. Speak Figuratively

The model tends to interpret instructions literally, so figurative language or metaphors can lead to unexpected results.

  5. Contradict Yourself

Always check that your instructions and examples can't be interpreted as contradictory.

Now the carrot - what TO do:

  1. Be Clear and Concise

The prompt should be comprehensive and understandable. Don't use jargon or technical terms that might confuse the model.

  2. Repeat Instructions at the End

Some models are sensitive to where instructions sit in the prompt; instructions at the end sometimes carry more weight than those at the beginning.

  3. Use Specific Examples

Specific examples can help the model better grasp the task context. For example, if you ask the model to write a story in Agatha Christie's style, provide a couple of text excerpts from this author as examples.

  4. Vary Formulations

Using different formulations can help the model better understand the task and produce more diverse and creative results. Try using different styles, tones, and formats to see how the model responds.

  5. Standardize Output

Always include a couple of sentences at the end of the prompt about what format the answer should be in. This will make the model's responses more predictable and easier to analyze (see the JSON sketch after this list).

  6. Use Clear Syntax

Use clear syntax in the prompt: punctuation, headings, and section markers. This will make the prompt clearer and easier to interpret.

  7. Break Down the Task

If your task is complex, break it down into several small steps. This way, the model will get intermediate results that are less likely to be incorrect.

  8. Provide Context

If your task requires specific knowledge, provide it to the model; otherwise, it's likely to make it up.

  9. Define Desired Answer Length

It's always better to specify at least an approximate desired answer length in the prompt. Otherwise, you might get a whole article from one request and a couple of sentences from another.

The model doesn't handle word counts in responses very well, but it follows requests for a specific number of sentences or paragraphs much more reliably.
Also, the model tends to give minimal answers, so if you ask for a minimum of 2 sentences, you'll likely get exactly 2.

  10. Avoid Ambiguity

Always explain terms or concepts that can be interpreted differently. If necessary, you can include definitions directly in the prompt.

  11. Use Positive and Negative Instructions

State both what you want and what you don't want, so that expectations are unambiguous. For example: "Don't make up any facts; use only established terms."

  12. Encourage Model Self-Analysis

Include instructions in the prompt that make the model evaluate its own answers. For example, instruct it to flag the parts of an answer it's uncertain about.

  13. Reveal the Task Gradually

To guide the model's thinking process, write instructions from general to specific or vice versa.

  14. Control Tone and Style

Always clearly indicate desired tone and style (e.g., formal, casual, technical) so the response meets your expectations. Without instructions, the model will adapt its communication style to the user, which isn't always appropriate.

  15. Set Clear Boundaries

Always write what the model shouldn't discuss or generate. For example, "Don't include political opinions" or "Avoid mentioning specific brands."

  16. Iterate and Experiment

After you've created a prompt, test it on the model and see how it works. If the results don't meet expectations, try refining the prompt by adding more details or changing tone and style.

  17. Use Feedback

Finally, if you have the opportunity, use user feedback and other sources to continuously improve your prompts.
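
As promised under "Standardize Output", here's a minimal sketch of the request-JSON-and-validate pattern; the schema is invented for illustration:

```python
import json

# Appended to the prompt: pins the response to a machine-readable shape.
format_suffix = (
    "\n\nRespond with JSON only, in exactly this shape:\n"
    '{"sentiment": "positive | negative | neutral", "confidence": 0.0}'
)

def parse_response(raw: str) -> dict:
    # json.loads fails loudly if the model drifted from the requested
    # format, which is exactly when you want to notice.
    data = json.loads(raw)
    assert data["sentiment"] in {"positive", "negative", "neutral"}
    return data
```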

Here's the prompt format suggested by Google's Gemini model developers:

  1. Role - define "who" the model should be

  2. Task - clearly formulate the task or question

  3. Context - provide necessary context

  4. Format - specify response format, tone, and length

  5. Examples - include both positive and negative examples if applicable

  6. Boundaries - set explicit boundaries for content
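
A small sketch of how these six parts might be assembled in code; every field value below is invented for illustration:

```python
# Assemble a prompt from the six parts of the format above.
def build_prompt(role, task, context, fmt, examples, boundaries):
    return (
        f"Role: {role}\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Format: {fmt}\n"
        f"Examples:\n{examples}\n"
        f"Boundaries: {boundaries}"
    )

prompt = build_prompt(
    role="You are a travel copywriter.",
    task="Write a short description of Lisbon for a city guide.",
    context="The guide targets budget travelers visiting in summer.",
    fmt="Two paragraphs, casual tone, under 120 words.",
    examples="Good: vivid, concrete details. Bad: generic superlatives.",
    boundaries="Do not mention specific hotel or restaurant brands.",
)
```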


Your Go-To Checklist for Writing Prompts


1. Is the prompt understandable?

- [ ] Can the prompt be understood at first reading without requiring additional explanations?
- [ ] Does the prompt avoid jargon or overly complex language?

2. Is the prompt specific?

- [ ] Does the prompt ask for exactly what you want, without being too vague or broad?
- [ ] Are clear instructions or examples provided to guide the model?

3. Is the context adequate?

- [ ] Is all necessary additional information included for the model to understand the task?
- [ ] Does the prompt include important details such as task purpose, required tone, or any specific constraints?

4. Is the instruction order logical?

- [ ] Is the most important information placed at the beginning or end of the prompt to emphasize it?
- [ ] Are related instructions logically grouped?

5. Is the prompt flexible?

- [ ] Does the prompt allow the model to seek creative solutions if a creative approach is required?
- [ ] Is the prompt open enough not to limit the model's ability to provide diverse answers?

6. Is the prompt free from bias?

- [ ] Have you avoided phrasings that might introduce bias into the model's responses?
- [ ] Is the prompt neutral, not pushing the model toward a particular viewpoint unless intentionally required?

7. Is the prompt effective?

- [ ] Does the prompt get to the point without unnecessary words and complications?
- [ ] Does it reduce the need for multiple iterations to get the right answer?

8. Is the prompt easily reusable?

- [ ] Is the prompt designed so it can be adapted for similar tasks without substantial changes?
- [ ] Have you tested the prompt in different contexts to ensure its reliability?

9. Have you tested the prompt?

- [ ] Have you run the prompt through the model and checked the result for accuracy, relevance, and clarity?
- [ ] Have you refined the prompt based on results to improve effectiveness?

10. Are the outputs consistent?

- [ ] Does the model consistently generate accurate and relevant responses when using this prompt?
- [ ] Are responses stable across different runs?

11. Does the prompt align with goals?

- [ ] Does the prompt align with the overall purpose or task it was designed for?
- [ ] Does the prompt directly contribute to achieving the desired outcome?

Use it!


Prompt engineering isn't just a science; it's a true art that requires practice and constant refinement. Armed with the knowledge from this guide and applying it regularly in practice, you'll be able to create more effective prompts and get exactly the results you need.

And if you want to study further on your own, plenty of prompting manuals and courses are available online.
