# Mastering Prompt Engineering
Hey there,
Dive into the fascinating realm of prompt engineering with me! In this note, we'll explore two cool techniques: "Structuring Prompts using CO-STAR" and "Sectioning Prompts Using Delimiters." Think of CO-STAR as your guide to organized, clear prompts, while delimiters are the secret sauce for breaking a prompt into well-defined parts. Let's uncover the magic behind these techniques and level up our prompting game.
Happy reading!
## 1. [🔵] Structuring Prompts using the CO-STAR framework
Effective prompt structuring is crucial for eliciting optimal responses from an LLM. The CO-STAR framework, developed by GovTech Singapore’s Data Science & AI team, is a handy template for structuring prompts: it covers all the key aspects that influence the effectiveness and relevance of an LLM’s response.
CO-STAR framework — Image by author
Here’s how it works:
### (C) Context: Provide background information on the task
This helps the LLM understand the specific scenario being discussed, ensuring its response is relevant.
### (O) Objective: Define the task you want the LLM to perform
Being clear about your objective helps the LLM to focus its response on meeting that specific goal.
### (S) Style: Specify the writing style you want the LLM to use
This could be the writing style of a particular famous person, or of an expert in a profession, such as a business analyst or a CEO. This guides the LLM to respond with the manner and choice of words aligned with your needs.
### (T) Tone: Set the attitude of the response
This ensures the LLM’s response resonates with the intended sentiment or emotional context. Examples include formal, humorous, and empathetic.
### (A) Audience: Identify who the response is intended for
Tailoring the LLM’s response to an audience, such as experts in a field, beginners, children, and so on, ensures that it is appropriate and understandable in your required context.
### (R) Response: Provide the response format
This ensures that the LLM outputs in the exact format you require for downstream tasks. Examples include a list, a JSON object, or a professional report. For applications that process LLM responses programmatically for downstream manipulation, a JSON output format is ideal.
### A practical application of CO-STAR
Here’s a real-world example of why the CO-STAR framework is useful. Let’s say you work as a social media manager, and you need help drafting a Facebook post to advertise your company’s new product.
A quick prompt without CO-STAR might look something like this:
```
Write a Facebook post to advertise my company’s new product. My company’s name is Alpha and the product is called Beta, a new ultra-fast hairdryer.
```
Here’s the output from GPT-4:
```
Introducing the Revolutionary Beta Hairdryer by Alpha!
- Say goodbye to long, tedious hair drying sessions! Alpha is proud to unveil the Beta, our latest innovation in haircare technology.
- Ultra-Fast Drying: The Beta is designed to cut your drying time in half, thanks to its advanced, powerful motor. Get ready faster and enjoy more time doing what you love!
Shop Now: [Link to product page]
```
While adequate, this output is painfully generic, lacking the specific nuances and targeted appeal necessary to resonate with your company’s audience.
Here’s an example of using the CO-STAR template instead, which reminds you to incorporate other aspects of your task into your prompt, specifically the STYLE, TONE, and AUDIENCE aspects that are missing from the previous quick prompt:
```
# CONTEXT #
I want to advertise my company’s new product. My company’s name is Alpha and the product is called Beta, which is a new ultra-fast hairdryer.
# OBJECTIVE #
Create a Facebook post for me, which aims to get people to click on the product link to purchase it.
# STYLE #
Follow the writing style of successful companies that advertise similar products, such as Dyson.
# TONE #
Persuasive
# AUDIENCE #
My company’s audience profile on Facebook is typically the older generation. Tailor your post to target what this audience typically looks out for in hair products.
# RESPONSE #
The Facebook post, kept concise yet impactful.
```
By applying CO-STAR, GPT-4’s response becomes significantly more targeted and effective:
```
Embrace Effortless Elegance with Alpha Beta — The Ultra-Fast Hairdryer for the Wise Generation.
Rediscover the joy of simple, effective hair care with Alpha Beta. Our latest innovation is more than just a hairdryer; it’s a promise of swift, gentle, and
reliable hair styling for those who appreciate the finer things in life.
- Easy and Intuitive Use: Say goodbye to complicated gadgets. Alpha Beta is
crafted for comfort and simplicity, perfect for those who value straightforward,
hassle-free technology.
- Time-Saving Technology: We understand your time is precious. That’s why Alpha Beta cuts down drying time significantly, giving you more moments to enjoy life’s pleasures.
Make every day a good hair day with Alpha Beta. Experience the blend of
sophistication and simplicity today.
[Your Product Link Here]
```
The CO-STAR framework guides you to provide all of the crucial pieces of information about your task to the LLM in a structured manner, ensuring a tailored and optimized response to exactly what you need.
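As a sketch of how this structure can be generated programmatically, here is a small Python helper (the function name and the placeholder section contents are my own) that assembles the six CO-STAR components into the `# SECTION #` format used above:

```python
def build_costar_prompt(context, objective, style, tone, audience, response):
    """Assemble a CO-STAR prompt using '# SECTION #' headers, as in the example above."""
    sections = [
        ("CONTEXT", context),
        ("OBJECTIVE", objective),
        ("STYLE", style),
        ("TONE", tone),
        ("AUDIENCE", audience),
        ("RESPONSE", response),
    ]
    # Each section becomes a '# NAME #' header followed by its body text.
    return "\n".join(f"# {name} #\n{body}" for name, body in sections)

prompt = build_costar_prompt(
    context="I want to advertise my company's new product, the Beta hairdryer.",
    objective="Create a Facebook post that gets people to click the product link.",
    style="Follow the writing style of companies like Dyson.",
    tone="Persuasive",
    audience="The older generation on Facebook.",
    response="The Facebook post, kept concise yet impactful.",
)
print(prompt)
```

Keeping the headers in a fixed order means every prompt you send shares the same predictable structure, which makes prompts easier to review and compare.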
## 2. [🔵] Sectioning Prompts Using Delimiters
Delimiters are special tokens that help the LLM distinguish which parts of your prompt should be treated as a single unit of meaning. This is important because your entire prompt arrives at the LLM as one long sequence of tokens. Delimiters give this sequence structure by fencing off specific parts of your prompt to be treated differently.
It is noteworthy that delimiters may not make a difference to the quality of an LLM’s response for straightforward tasks. However, the more complex the task, the more impact the usage of delimiters for sectioning has on the LLM’s response.
### Delimiters as Special Characters
A delimiter could be any sequence of special characters that usually wouldn’t appear together, for example:
```
###
===
>>>
```
The number and type of special characters chosen are inconsequential, as long as the sequence is unique enough for the LLM to recognize it as a content separator rather than normal punctuation.
Here’s an example of how you might use such delimiters in a prompt:
```
Classify the sentiment of each conversation in <<<CONVERSATIONS>>> as
‘Positive’ or ‘Negative’. Give the sentiment classifications without any other preamble text.
###
EXAMPLE CONVERSATIONS
[Agent]: Good morning, how can I assist you today?
[Customer]: This product is terrible, nothing like what was advertised!
[Customer]: I’m extremely disappointed and expect a full refund.
[Agent]: Good morning, how can I help you today?
[Customer]: Hi, I just wanted to say that I’m really impressed with your
product. It exceeded my expectations!
###
EXAMPLE OUTPUTS
Negative
Positive
###
<<<
[Agent]: Hello! Welcome to our support. How can I help you today?
[Customer]: Hi there! I just wanted to let you know I received my order, and
it’s fantastic!
[Agent]: That’s great to hear! We’re thrilled you’re happy with your purchase.
Is there anything else I can assist you with?
[Customer]: No, that’s it. Just wanted to give some positive feedback. Thanks
for your excellent service!
[Agent]: Hello, thank you for reaching out. How can I assist you today?
[Customer]: I’m very disappointed with my recent purchase. It’s not what I expected at all.
[Agent]: I’m sorry to hear that. Could you please provide more details so I can help?
[Customer]: The product is of poor quality and it arrived late. I’m really
unhappy with this experience.
>>>
```
Above, the examples are sectioned using the delimiter ###, with the section headings EXAMPLE CONVERSATIONS and EXAMPLE OUTPUTS in capital letters to differentiate them. The preamble states that the conversations to be classified are sectioned inside <<<CONVERSATIONS>>>. These conversations are then given to the LLM at the bottom of the prompt without any explanatory text, yet the LLM understands they are the conversations to classify because of the delimiters <<< and >>>.
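The same sectioning can be templated in code. The following Python sketch (the function name and the abbreviated example texts are my own) assembles the instruction, the few-shot examples, and the conversations to classify using ### separators and the <<< >>> fence:

```python
def build_delimited_prompt(instruction, example_conversations, example_outputs, conversations):
    """Build a few-shot classification prompt sectioned with ### delimiters,
    with the input conversations fenced between <<< and >>>."""
    return "\n".join([
        instruction,
        "###",
        "EXAMPLE CONVERSATIONS",
        example_conversations,
        "###",
        "EXAMPLE OUTPUTS",
        example_outputs,
        "###",
        "<<<",
        conversations,
        ">>>",
    ])

prompt = build_delimited_prompt(
    instruction=("Classify the sentiment of each conversation in <<<CONVERSATIONS>>> as "
                 "'Positive' or 'Negative'. Give the sentiment classifications without "
                 "any other preamble text."),
    example_conversations="[Customer]: This product is terrible!",
    example_outputs="Negative",
    conversations="[Customer]: I received my order, and it's fantastic!",
)
```

Because the delimiters are emitted by the template rather than typed by hand, every prompt is guaranteed to have matching fences around the input section.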
Here is the output from GPT-4, with the sentiment classifications given without any other preamble text, exactly as requested:
```
Positive
Negative
```
### Delimiters as XML Tags
Another approach is to use XML tags as delimiters. XML tags are enclosed in angle brackets and come in opening and closing pairs, for example <tag> and </tag>. This is effective because LLMs have been trained on large amounts of web content containing XML and HTML, and have learned to understand its formatting.
Here’s the same prompt above, but structured using XML tags as delimiters instead:
```
Classify the sentiment of the following conversations into one of two classes, using the examples given. Give the sentiment classifications without any other
preamble text.
<classes>
Positive
Negative
</classes>
<example-conversations>
[Agent]: Good morning, how can I assist you today?
[Customer]: This product is terrible, nothing like what was advertised!
[Customer]: I’m extremely disappointed and expect a full refund.
[Agent]: Good morning, how can I help you today?
[Customer]: Hi, I just wanted to say that I’m really impressed with your
product. It exceeded my expectations!
</example-conversations>
<example-classes>
Negative
Positive
</example-classes>
<conversations>
[Agent]: Hello! Welcome to our support. How can I help you today?
[Customer]: Hi there! I just wanted to let you know I received my order, and
it’s fantastic!
[Agent]: That’s great to hear! We’re thrilled you’re happy with your purchase.
Is there anything else I can assist you with?
[Customer]: No, that’s it. Just wanted to give some positive feedback. Thanks
for your excellent service!
[Agent]: Hello, thank you for reaching out. How can I assist you today?
[Customer]: I’m very disappointed with my recent purchase. It’s not what I
expected at all.
[Agent]: I’m sorry to hear that. Could you please provide more details so I
can help?
[Customer]: The product is of poor quality and it arrived late. I’m really
unhappy with this experience.
</conversations>
```
It is beneficial to use the same nouns for the XML tags as the words you used to describe the content in the instructions. The instructions we gave in the prompt above were:
```
Classify the sentiment of the following conversations into one of two classes, using the examples given. Give the sentiment classifications without any other
preamble text.
```
Here, we used the nouns conversations, classes, and examples. Accordingly, the XML tags we use as delimiters are <conversations>, <classes>, <example-conversations>, and <example-classes>. This ensures that the LLM understands how your instructions relate to the XML tags used as delimiters.
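A small helper can keep the tag names and the instruction's nouns in sync. This Python sketch (the helper name and abbreviated section contents are my own) wraps each section in a matching open/close tag pair:

```python
def xml_section(tag, body):
    """Wrap body in a matching pair of XML-style tags used as delimiters."""
    return f"<{tag}>\n{body}\n</{tag}>"

instruction = (
    "Classify the sentiment of the following conversations into one of two "
    "classes, using the examples given. Give the sentiment classifications "
    "without any other preamble text."
)
# The tag names reuse the nouns from the instruction: classes, examples, conversations.
prompt = "\n".join([
    instruction,
    xml_section("classes", "Positive\nNegative"),
    xml_section("example-conversations", "[Customer]: This product is terrible!"),
    xml_section("example-classes", "Negative"),
    xml_section("conversations", "[Customer]: My order arrived and it's fantastic!"),
])
```

Generating both the opening and closing tag from one argument guarantees the pairs always match, which hand-written prompts can easily get wrong.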
Again, the sectioning of your instructions in a clear and structured manner through the use of delimiters ensures that GPT-4 responds exactly how you want it to:
```
Positive
Negative
```
Reference:
[How I Won Singapore’s GPT-4 Prompt Engineering Competition](https://towardsdatascience.com/how-i-won-singapores-gpt-4-prompt-engineering-competition-34c195a93d41)