8 Prompt Engineering Examples for Common NLP Tasks

What Is Prompt Engineering?

Prompt engineering involves crafting specific instructions or queries for large language models (LLMs) to get desired outputs. It plays a crucial role in steering models like OpenAI's GPT-4o, Google Gemini, and Anthropic Claude, ensuring they deliver accurate, relevant, and contextually appropriate responses. Instead of merely providing a broad question or command, prompt engineering structures inputs in a way that optimizes the model's performance.

A well-structured prompt guides the AI effectively, reducing ambiguities and increasing productivity. For instance, specifying the tone, format, or constraints within the prompt can significantly impact the generated response. The approach is iterative, requiring continuous refinement based on output analysis to achieve optimal results.

Common Types of Prompt Engineering {#common-types-of-prompt-engineering}

Text Completion Prompts {#text-completion-prompts}

Text completion prompts are used to generate the continuation of a given text. For example, if you input the beginning of a sentence or paragraph, the model attempts to complete it in a coherent manner. This technique is useful in creative writing, where the AI can extend a storyline, create text based on an outline, or generate new ideas.

Text completion can also aid in productivity tools, such as email drafting or report writing. By understanding the context, the AI completes sentences or paragraphs, saving user time. Refinement is often required to ensure the output aligns with the intended message.

Instruction-Based Prompts {#instruction-based-prompts}

Instruction-based prompts guide the AI by providing specific actions or tasks it needs to perform. These prompts require clear, direct language to improve the likelihood of obtaining a precise and relevant response. This type is useful in procedural contexts, such as step-by-step guides, programming tasks, or command execution.

Effective instruction-based prompts can transform a general AI response into a detailed, actionable output. This requires structuring prompts to cover the necessary scope while being clear and unambiguous.

Multiple-Choice Prompts {#multiple-choice-prompts}

Multiple-choice prompts present the AI with predefined options from which to choose. This format is beneficial in educational or decision-making applications. By providing limited choices, the AI focuses on specific answers, which can improve accuracy and relevance compared to open-ended responses.

These prompts necessitate careful formulation to ensure that all choices are plausible and clearly distinguishable. This method also helps in assessing the model’s understanding and capability in selecting the best option based on the given context.

Contextual Prompts {#contextual-prompts}

Contextual prompts provide background information or context to guide the AI in generating a more informed and relevant response. These prompts help in areas where understanding prior events, user history, or situational aspects is crucial. For instance, in customer support, providing context about the issue can lead to more precise and helpful responses.

The strength of contextual prompts lies in their ability to provide the AI with a deeper understanding of the situation. They require thoughtfully crafted inputs that encapsulate the necessary information without overwhelming the model.

Bias Mitigation Prompts {#bias-mitigation-prompts}

Bias mitigation prompts address and reduce inherent biases in AI outputs. By carefully structuring these prompts, developers can guide models to provide fair and balanced responses. This type is crucial in sensitive areas like hiring, law enforcement, and news generation, where biased outputs can have significant repercussions.

These prompts often include explicit instructions to consider diverse perspectives or avoid certain stereotypical assumptions. Implementing bias mitigation is a continuous process requiring regular updates and refinements to prompts to keep up with evolving understandings of fairness and bias in AI systems.

Fine-Tuning Prompts {#fine-tuning-prompts}

Fine-tuning prompts involve iteratively adjusting inputs based on previous outputs to refine the model's performance. This enables continuous improvement by learning from trial and error. Fine-tuning prompts help in specialized tasks, like narrowing down responses in a diagnostic tool or improving conversational AI applications.

Interactive prompting also allows for user feedback, creating a loop where the AI can adapt and enhance its responses. The iterative nature of this method is valuable, leading to highly customized and accurate model behavior.
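The refinement loop described above can be sketched in a few lines of Python. Note that `call_llm` and `is_acceptable` are placeholders (assumptions, not any specific API): the first stands in for whatever model client you use, the second for your own check on the output.

```python
def refine_prompt(prompt, call_llm, is_acceptable, max_rounds=3):
    """Iteratively tighten a prompt until its output passes a check.

    call_llm: placeholder for your model client (e.g. an API call).
    is_acceptable: your own predicate on the model's output.
    """
    output = call_llm(prompt)
    for _ in range(max_rounds):
        if is_acceptable(output):
            break
        # Append a corrective instruction and try again.
        prompt += "\nBe more concise and answer in one sentence."
        output = call_llm(prompt)
    return prompt, output
```

In practice the corrective instruction would itself be chosen based on what was wrong with the output; a fixed string is used here only to keep the sketch short.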

Prompt Engineering Examples for Common NLP Tasks {#prompt-engineering-examples-for-common-nlp-tasks}

Below are simple examples showing how to create prompts that can help a large language model perform common NLP tasks more effectively.

1. Text Summarization

Text summarization is a common task where large language models (LLMs) distill lengthy texts into concise summaries. Prompt engineering plays a crucial role in guiding the model to produce accurate and relevant summaries. Let's walk through an example of summarizing information about antibiotics.

Prompt:

Explain antibiotics

Output:

Antibiotics are a type of medication used to treat bacterial infections. They work by either killing the bacteria or preventing them from reproducing, allowing the body's immune system to fight off the infection. Antibiotics are usually taken orally in the form of pills, capsules, or liquid solutions, or sometimes administered intravenously. They are not effective against viral infections, and using them inappropriately can lead to antibiotic resistance.

This response might be more detailed than required, so we can refine the prompt to request a one-sentence summary.

Refined Prompt:

Antibiotics are a type of medication used to treat bacterial infections. They work by either killing the bacteria or preventing them from reproducing, allowing the body's immune system to fight off the infection. Antibiotics are usually taken orally in the form of pills, capsules, or liquid solutions, or sometimes administered intravenously. They are not effective against viral infections, and using them inappropriately can lead to antibiotic resistance. Explain the above in one sentence.

Output:

Antibiotics are medications used to treat bacterial infections by either killing the bacteria or stopping them from reproducing, but they are not effective against viruses.

By crafting precise prompts, you can control the level of detail in the summaries generated by the model.
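The refinement step above is easy to template. The helper below (a minimal sketch; the function name and parameters are illustrative, not part of any API) builds a summarization prompt with an explicit length constraint, ready to pass to whatever LLM client you use:

```python
def summarization_prompt(text, sentences=1):
    """Build a summarization prompt with an explicit length constraint.

    Mirrors the refined prompt above: source text first, then the
    instruction stating how many sentences the summary may use.
    """
    unit = "sentence" if sentences == 1 else "sentences"
    return f"{text}\n\nExplain the above in {sentences} {unit}."
```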

2. Text Classification

Text classification involves categorizing text into predefined labels. With prompt engineering, you can specify the exact format and behavior of the output.

Initial Prompt:

Classify the text into neutral, negative, or positive.
Text: The movie was amazing!
Sentiment:

Output:

Positive

If a specific format is required, such as lowercase labels, the prompt can be adjusted by providing examples.

Refined Prompt:

Classify the text into neutral, negative, or positive.
Text: The weather is fine.
Sentiment: neutral
Text: I think the food was okay.
Sentiment: neutral
Text: The movie was amazing!
Sentiment:

Output:

positive

This example shows how providing examples in the prompt can help the model produce output that exactly matches the desired format.
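Few-shot prompts like this follow a fixed pattern, so they are worth generating programmatically. The sketch below (function name and parameters are illustrative assumptions) assembles the labeled examples followed by the text to classify; the labels in the examples set the output format the model should imitate:

```python
def classification_prompt(text, examples):
    """Build a few-shot sentiment-classification prompt.

    examples: list of (text, label) pairs; their labels establish the
    output format (e.g. lowercase) the model should imitate.
    """
    lines = ["Classify the text into neutral, negative, or positive."]
    for ex_text, ex_label in examples:
        lines.append(f"Text: {ex_text}\nSentiment: {ex_label}")
    lines.append(f"Text: {text}\nSentiment:")
    return "\n".join(lines)
```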

3. Information Extraction

Information extraction involves pulling specific data from text. This task benefits from precise prompts that clearly define what information is needed.

Prompt:

In recent years, companies have increasingly relied on machine learning techniques to improve their products. For example, Amazon uses machine learning algorithms for product recommendations, Google employs it for search optimization, and Netflix applies it to personalize content for viewers.

Mention the companies that use machine learning as stated in the paragraph above:

Output:

The companies that use machine learning as stated in the paragraph above are Amazon, Google, and Netflix.

This prompt clearly directs the model to extract a specific piece of information, which it successfully does.

4. Code Generation

Code generation is an application of LLMs where prompts can be designed to automate coding tasks.

Prompt:

/* Create a Python function that takes a list of numbers and returns the sum of the even numbers. */

Output:

def sum_of_even_numbers(numbers):
    return sum(num for num in numbers if num % 2 == 0)

This prompt generates a Python function that calculates the sum of even numbers in a list. The model uses the description to understand the task and produces appropriate Python code.

For more complex tasks, such as handling date operations, detailed information can be included in the prompt.

Prompt:

""" Write a Python function that takes a date string in the format 'YYYY-MM-DD' and returns the day of the week. """

Output:

from datetime import datetime

def get_day_of_week(date_str):
    date_obj = datetime.strptime(date_str, '%Y-%m-%d')
    return date_obj.strftime('%A')

This example demonstrates how structured prompts can guide the model to produce correct and useful code.
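Model-generated code is not guaranteed correct, so it is worth sanity-checking before use. The two functions above can be combined and exercised directly:

```python
from datetime import datetime

def sum_of_even_numbers(numbers):
    # Sum only the even values in the list.
    return sum(num for num in numbers if num % 2 == 0)

def get_day_of_week(date_str):
    # Parse 'YYYY-MM-DD' and return the weekday name.
    date_obj = datetime.strptime(date_str, '%Y-%m-%d')
    return date_obj.strftime('%A')

print(sum_of_even_numbers([1, 2, 3, 4, 5, 6]))  # → 12
print(get_day_of_week('2024-01-01'))            # → Monday
```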

5. Question Answering

In question answering, structuring prompts with specific instructions and context leads to more accurate answers.

Prompt:

Answer the question based on the context below. Keep the answer short and concise. Respond "Unsure about answer" if not sure about the answer.

Context: The Great Wall of China is an ancient series of walls and fortifications, totaling more than 13,000 miles in length, located in northern China. Construction of the wall began in the 7th century BC, with the most well-known sections built by the Ming Dynasty (1368-1644). The primary purpose of the wall was to protect Chinese states and empires against invasions by nomadic groups from the north.

Question: When did construction of the Great Wall of China begin?

Answer:

Output:

7th century BC.

This structured prompt helps the model generate a concise and accurate response.
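This instruction-context-question layout recurs in every grounded QA call, so it can be templated. The helper below is a minimal sketch (its name and parameters are illustrative assumptions); the explicit fallback phrase lets the model decline rather than guess:

```python
def qa_prompt(context, question):
    """Build a grounded question-answering prompt with an explicit
    fallback so the model can decline rather than guess."""
    return (
        "Answer the question based on the context below. "
        "Keep the answer short and concise. "
        'Respond "Unsure about answer" if not sure about the answer.\n\n'
        f"Context: {context}\n\n"
        f"Question: {question}\n\n"
        "Answer:"
    )
```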

6. Conversation

In conversational AI, prompts can be engineered to set the tone, style, and role of the AI, enabling customized interactions.

Prompt:

The following is a conversation with a friendly travel guide AI. The assistant's tone is casual and helpful.

Human: Hi, can you help me plan a vacation?
AI:

Output:

Paris is a fantastic choice! Make sure to visit iconic spots like the Eiffel Tower and the Louvre Museum. And don't miss out on a leisurely stroll along the Seine River. Also, trying some delicious French pastries is a must!

This prompt sets a casual and friendly tone, encouraging a helpful and engaging interaction.

Refined Prompt:

The following is a conversation with a luxury travel concierge AI. The assistant's tone is formal and sophisticated.

Human: Hi, can you help me plan a vacation?
AI:

Output:

Paris is an exquisite choice. I recommend experiencing the refined ambiance of the city by dining at Michelin-starred restaurants, enjoying a private tour of the Louvre, and staying at a five-star hotel with a view of the Eiffel Tower. A luxury Seine River cruise is also highly recommended for a serene and picturesque experience.

By adjusting the prompt, the AI shifts to a formal and sophisticated style suitable for a luxury travel context.

7. Sentiment Analysis

Sentiment analysis involves classifying the sentiment of a text. The regular prompt simply asks for the sentiment, while the refined prompt also requests an explanation for the classification.

Prompt:

Determine the sentiment of the following text: "I love the features, but the battery life is disappointing."
Sentiment:

Output:

Mixed

This result captures the presence of both positive and negative sentiments but lacks further detail.

Refined Prompt:

Determine the sentiment of the following text: "I love the features, but the battery life is disappointing." Provide a sentiment classification (positive, negative, or mixed) and a brief explanation of why you chose that classification.
Sentiment:
Explanation:

Output:

Sentiment: Mixed
Explanation: The text expresses a positive sentiment towards the product's features ("I love the features") but also a negative sentiment about the battery life ("the battery life is disappointing"), indicating a mixed sentiment.

The refined prompt yields a more detailed response, providing both a classification and an explanation that reflects the nuances in the text.
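A practical reason to request a fixed "Sentiment: ... Explanation: ..." layout is that downstream code can then split the response into fields. The parser below is a sketch that assumes the model followed that layout, and returns None when it did not:

```python
import re

def parse_sentiment(response):
    """Split a 'Sentiment: ... Explanation: ...' response into fields.

    Assumes the layout requested by the refined prompt; returns None
    if the model deviated from it.
    """
    match = re.search(r"Sentiment:\s*(\w+)\s*Explanation:\s*(.+)",
                      response, re.DOTALL)
    if not match:
        return None
    return {"sentiment": match.group(1).lower(),
            "explanation": match.group(2).strip()}
```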

8. Data Extraction

Data extraction involves pulling specific information from a text. The regular prompt simply asks for the email address, while the refined prompt also asks the model to check that the address is well formed and to correct it if it is not.

Prompt:

Extract the email address from the following text: "For support, email us at helpdesk@company.com."
Email Address:

Output:

helpdesk@company.com

The output correctly extracts the email address but does not include validation.

Refined Prompt:

Extract the email address from the following text: "For support, email us at helpdesk@companycom." Ensure the email address is in a valid format and try to correct it if not.
Email Address:

Output:

The provided text includes an incorrectly formatted email address. The correct format for the email address should include a dot before the domain suffix. Here is the corrected email address: helpdesk@company.com

The refined prompt ensures the email address is extracted and confirms that it is in a valid format. The prompt can be further refined to provide only the correct email address without added commentary.
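Prompt-level validation can also be backed up with a check in code before the extracted address is used. The pattern below is a deliberately simple sketch (requiring a local part, an `@`, a domain, and a dotted suffix), not a full RFC 5322 validator:

```python
import re

# Simple pattern: local part, '@', domain labels, dotted alphabetic suffix.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)*\.[A-Za-z]{2,}")

def extract_email(text):
    """Return the first well-formed email address in text, or None."""
    match = EMAIL_RE.search(text)
    return match.group(0) if match else None
```

On the refined example above, a check like this would catch `helpdesk@companycom` (no dot before the suffix) and signal that the model's correction step is needed.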

Build LLM Applications with Acorn

Visit https://gptscript.ai to download GPTScript and start building today. As we expand the capabilities of GPTScript, we are also expanding our list of tools. With these tools, you can create any application imaginable: check out tools.gptscript.ai to get started.