19 Formulas and Prompt Structures for ChatGPT: Going Beyond the Basics

In 2025, interacting with artificial intelligence tools like OpenAI’s ChatGPT, xAI’s Grok, or Google’s Gemini has become essential for countless tasks—from fast content creation to the efficient automation of complex processes. However, obtaining accurate, relevant, and truly useful results isn’t always straightforward. That’s where prompt engineering comes in: a set of advanced techniques that transforms ordinary interactions into powerful and productive dialogues with today’s top AI tools.

In this guide, you’ll discover 19 tested and proven prompt formulas and structures, specifically designed to enhance the quality and accuracy of your interactions with various models. Whether you’re a content creator, marketing professional, educator, or someone looking for clearer insights and creative solutions, these frameworks will help you unlock the full potential of today’s AI tools—from OpenAI and Google to Anthropic and Microsoft.

Learn how to create the perfect prompt formula for ChatGPT, Claude, Perplexity, or any other LLM. Discover practical prompt examples and master the art of structuring prompts.

Ready to unlock the power of AI? Dive into detailed examples and adaptable strategies to meet your needs, complemented by our new guide on prompt techniques, frameworks, and formulas. Whether for content creation or insights generation, these structures will transform your AI experience.

Examples of Formulas and Prompt Structures

RTF: Role, Task, Format

The RTF structure is a simple yet effective approach to crafting prompts that clearly direct interactions with AI models like ChatGPT, as well as others like Anthropic’s Claude or Google’s Gemini. This method helps define the role the AI should take on (Role), the specific task or problem to be addressed (Task), and the expected format of the response (Format).

Using the RTF structure helps you get more targeted and accurate responses, making it especially useful in practical applications like content creation or quick analysis.

Let’s quickly break down the components:

  • Role: Defines who is performing the action in the prompt. This could be a person, an entity, or the AI assuming a specific role.
  • Task: Clearly states what needs to be done. It should directly describe the desired action or required information.
  • Format: Specifies how you want the response to be structured. Options include a list, explanatory text, a table, or other practical ChatGPT output formats, depending on your communication goal.

Practical example using the RTF structure:

Prompt
As a nutrition expert (Role), provide a list of five protein-rich foods (Task) in a numbered list format (Format).

ChatGPT might respond with something like:

Example of response
1. Chicken
2. Eggs
3. Quinoa
4. Almonds
5. Greek Yogurt

This example clearly shows how to apply the RTF structure to generate organized and specific answers. By defining a role (nutrition expert), stating a concrete task (list protein-rich foods), and setting the desired format (numbered list), you significantly improve the quality of AI-generated responses.
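
If you work with a model through its API rather than the chat interface, the same RTF components can be assembled in code. The sketch below is a minimal illustration in Python: the build_rtf_prompt helper and the model name are assumptions made for this example, not part of the RTF formula itself.

```python
# Minimal sketch: assembling an RTF prompt in code and sending it to a model.
# build_rtf_prompt is a hypothetical helper; the model name is an assumption
# and should be replaced with whichever model you actually use.
from openai import OpenAI

def build_rtf_prompt(role: str, task: str, response_format: str) -> str:
    """Join the three RTF components into a single instruction."""
    return f"As {role}, {task}, in {response_format}."

prompt = build_rtf_prompt(
    role="a nutrition expert",
    task="provide a list of five protein-rich foods",
    response_format="a numbered list format",
)

client = OpenAI()  # expects OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The assembled string matches the example prompt above, so you can prototype the wording in the chat interface first and only then wire it into code.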

For a detailed explanation and more examples on using the RTF structure, check out the full guide:
👉 RTF: How to Create Effective Prompts with Role, Task, and Format

CTF: Context, Task, Format

The CTF structure is a powerful prompt engineering tool that enables you to clearly define the scenario in which the AI interaction is taking place (Context), specify the exact action expected (Task), and indicate the format you want for the response (Format). This method greatly optimizes the quality of responses across AI tools—not only ChatGPT but also Claude from Anthropic and Gemini from Google.

By clearly defining these three elements, you’ll receive more useful and targeted responses. This is especially effective for practical scenarios such as content generation, specific analyses, or structured problem-solving.

Let’s break down the components of the CTF structure:

  • Context: Provides essential information to clearly situate the scenario or problem. Context helps the AI understand the specific setting of your prompt.
  • Task: Directly and precisely describes what you expect the AI to do. It should clearly detail the objective or desired action.
  • Format: Defines how you want to receive the response—this could be a list, structured paragraph, table, or any other useful format.

Suppose you’re asking ChatGPT to compile a list of practical energy-saving tips for households. Using the CTF prompt structure, your prompt would be:

  • Context: “With rising energy prices and growing concern for environmental sustainability, many people are looking for ways to reduce their household energy consumption.”
  • Task: “Create a list of practical and easy-to-implement tips that homeowners can use to save energy.”
  • Format: “Present the tips in a numbered list, providing a brief explanation for each.”

Practical CTF example:

Prompt
Considering the increase in energy tariffs and concern for sustainability, create a numbered list of practical energy-saving tips for homes. The tips should be easy to implement and accompanied by a brief explanatory description.

This example clearly demonstrates how to use the CTF structure to guide detailed and specific AI responses. By providing a clear context, defining the desired task, and specifying the response format, you can significantly enhance the quality of AI-generated outputs.
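
Because CTF, like most of the structures in this guide, is just an ordered set of labeled components, it can also be templated programmatically. The sketch below assumes a simple "Label: text" layout and a hypothetical render_prompt helper; it is one possible convention, not a required format.

```python
# Minimal sketch: a reusable builder for labeled prompt structures such as CTF.
# render_prompt is a hypothetical helper; the "Label: text" layout is just one
# reasonable convention for serializing the components.
def render_prompt(components: dict[str, str]) -> str:
    """Join ordered components into a single prompt, one labeled block each."""
    return "\n\n".join(f"{label}: {text}" for label, text in components.items())

ctf_prompt = render_prompt({
    "Context": "With rising energy prices and growing concern for environmental "
               "sustainability, many people are looking for ways to reduce their "
               "household energy consumption.",
    "Task": "Create a list of practical and easy-to-implement tips that homeowners "
            "can use to save energy.",
    "Format": "Present the tips in a numbered list, providing a brief explanation for each.",
})
print(ctf_prompt)
```

The same function works unchanged for PECRA, CREATE, or any of the longer structures later in this guide; only the dictionary keys change.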

For a detailed explanation and more examples on using the CTF structure, check out the full guide:
👉 CTF: How to Create Effective Prompts Using Context, Task, and Format

PECRA: Purpose, Expectation, Context, Request, Action

The PECRA structure is a versatile prompt engineering tool that emphasizes clarity and specificity when interacting with language models like ChatGPT, Grok, and DeepSeek. Let’s break down each component:

  • Purpose: States the reason why the prompt is being created. It clarifies the general intention of the interaction.
  • Expectation: Describes the type of response or result expected from the model.
  • Context: Provides the additional information needed for the model to fully understand the prompt and generate an appropriate response.
  • Request: Clearly specifies what is being asked of the model.
  • Action: Indicates the specific action the model is expected to perform.

Let’s imagine you want ChatGPT to create a study plan for a student preparing for an important math exam. Here’s how to apply the PECRA structure:

  • Purpose: “The purpose of this prompt is to help a student effectively prepare for a math exam.”
  • Expectation: “I expect to receive a detailed study plan that covers the main topics needed for the exam.”
  • Context: “The student has 2 weeks until the exam, can study about 3 hours a day, and struggles mainly with algebra and geometry.”
  • Request: “Based on this information, please create a study plan.”
  • Action: “Structure the plan starting with algebra fundamentals, followed by geometry, and include regular review sessions.”

Practical PECRA example:

Prompt
Considering a student who is preparing for an important math exam in 2 weeks and can dedicate 3 hours daily to studying, with difficulties in algebra and geometry, create a detailed study plan. The plan should start with the fundamentals of algebra, progress to geometry, and include regular reviews, aiming for effective preparation for the exam.

This example shows how the PECRA structure can be used to craft a clear and detailed prompt, guiding ChatGPT to deliver a response that meets the user’s specific expectations.

For a detailed explanation and more examples on using the PECRA structure, check out the full guide:
👉 PECRA: How to Create Effective Prompts with Purpose, Expectation, Context, Request, Action

CREATE: Characterization, Request, Examples, Adjustment, Type, Extras

The CREATE structure is a comprehensive method for prompt formulation, aimed at creating efficient and well-directed interactions with AI models like ChatGPT. This detailed approach allows users to clearly specify the context and expectations of the interaction. Let’s look at each component:

  • Characterization: Assigns a specific role or persona to ChatGPT, guiding its responses based on a defined profile or function. This helps shape the nature of the answers according to the desired context.
  • Request: Clearly and objectively defines what is expected of the model, detailing the task at hand.
  • Examples: Provides examples of the expected output, giving the model a clear reference for the type of content or response desired.
  • Adjustment: Allows the user to request refinements or specific modifications to previous responses, helping to better meet task needs.
  • Type of Output: Specifies the expected format of the answer—such as a narrative, list, detailed plan, and so on. This guides how the AI structures the information.
  • Extras: Offers the opportunity to add additional or contextual information, enriching the prompt and enabling more precise and aligned responses.

Imagine you want Gemini to create a personalized travel guide for a city you plan to visit. Here’s how to apply the CREATE structure:

  • Characterization: “Acting as an experienced local travel guide,”
  • Request: “create a personalized travel guide.”
  • Examples: “Include sections like accommodations, food, tourist attractions, and transport tips.”
  • Adjustment: “Prioritize budget-friendly and family-appropriate options.”
  • Type: “Organize the guide into clearly defined sections with recommendations and brief descriptions.”
  • Extras: “I’ll be traveling in July, so include relevant seasonal events and activities.”

Practical CREATE example:

Prompt
Acting as an experienced local travel guide, create a personalized guide for my trip to Barcelona in July. Include accommodations, gastronomy, tourist attractions, and transportation tips, prioritizing options that are budget-friendly and suitable for families. Organize the guide into clearly defined sections, with recommendations and brief descriptions for each item, and don't forget to add relevant seasonal events and activities for the period of my visit.

This example demonstrates how to use the CREATE structure to craft a detailed and specific prompt, guiding Gemini to produce a response that aligns with the user’s expectations and needs.

For a detailed explanation and more examples on using the CREATE structure, check out the full guide:
👉 CREATE: How to Create Effective Prompts with Characterization, Request, Examples, Adjustment, Type, Extras

CREO: Context, Request, Explanation, Outcome

The CREO structure is a methodology designed to optimize prompt formulation for more effective interactions with language models like Grok, Perplexity, and ChatGPT. It emphasizes the importance of providing context, making a clear request, explaining the task in detail, and defining the desired outcome.

  • Context: Provides background information needed for the model to understand the situation or topic.
  • Request: Clearly states what is expected from the model.
  • Explanation: Describes the task in detail to help the model fully understand the goal of the request.
  • Outcome: Specifies the type of answer or result the user expects to receive from the model.

Let’s imagine you want ChatGPT to generate a list of suggestions for boosting personal productivity. Here’s how to apply the CREO structure:

  • Context: “Given that many people work from home and face frequent distractions,”
  • Request: “create a list of suggestions.”
  • Explanation: “These suggestions should be practical and easy to implement for those working from home.”
  • Outcome: “I expect a list that includes time management techniques, workspace setup advice, and well-being tips.”

Practical CREO example:

Prompt
Considering many people work from home and face frequent distractions, create a list of practical and easy-to-implement suggestions to increase personal productivity. These suggestions should cover time management techniques, workspace setup, and wellness tips, aiming to improve focus and efficiency in the home environment.

This example illustrates how the CREO structure can be used to create clear and objective prompts, guiding ChatGPT to produce responses that effectively meet the user’s specific needs.

For a detailed explanation and more examples on using the CREO structure, check out the full guide:
👉 CREO: How to Create Effective Prompts with Context, Request, Explanation, Outcome

PAIN: Problem, Action, Information, Next Steps

The PAIN structure is a prompt engineering methodology focused on identifying and solving specific problems using AI. It guides the formulation of requests in a way that elicits precise and applicable solutions.

  • Problem: Identifies the issue that needs to be resolved, clarifying the user’s challenge or need.
  • Action: Specifies the action or type of assistance expected from ChatGPT, directing it toward problem-solving.
  • Information: Requests detailed insights or clarifications that ChatGPT can provide to better understand the context or nuances of the problem.
  • Next Steps: Asks for an action plan, resources, or subsequent steps the user can follow to resolve the issue or achieve the desired goal.

Let’s say you’re struggling to manage your time effectively and want ChatGPT to help create a personalized time management plan. Here’s how to apply the PAIN structure:

  • Problem: “I’m struggling to manage my time effectively,”
  • Action: “I need a personalized time management plan.”
  • Information: “What strategies or tools would you recommend?”
  • Next Steps: “Provide a step-by-step plan I can start following immediately.”

Practical PAIN example:

Prompt
I'm struggling to manage my time effectively and need a personalized time management plan. What strategies or tools would you recommend? Please provide a step-by-step plan that I can start following immediately, considering my day is often interrupted by unexpected tasks.

This example shows how the PAIN structure can be used to craft a targeted and effective prompt, guiding ChatGPT to offer practical and personalized solutions for specific user challenges.

For a detailed explanation and more examples on using the PAIN structure, check out the full guide:
👉 PAIN: How to Create Effective Prompts with Problem, Action, Information, Next Steps

TREF: Task, Requirement, Expectation, Format

The TREF structure is a focused prompt engineering approach that helps define what the user wants to achieve, the criteria that must be met, what the expected outcome is, and the format in which it should be delivered. Let’s explore each of its components:

  • Task: What exactly you’re asking ChatGPT to do. This should be a clear action or set of actions.
  • Requirement: The specific criteria or conditions the response needs to meet.
  • Expectation: The desired outcome of the interaction, including the type of information or solution expected.
  • Format: How you want the information or solution to be presented.

Suppose you want Grok to write a summary of current trends in renewable energy technology. Here’s how to apply the TREF structure:

  • Task: “Write a summary on current trends in renewable energy technology.”
  • Requirement: “The summary should cover both technological advances and current challenges.”
  • Expectation: “I want a clear and concise overview that can be used to inform the general public.”
  • Format: “The summary should be structured in paragraphs, with a maximum of 300 words.”

Practical TREF example:

Prompt
Please write a summary of no more than 300 words about the current trends in renewable energy technology, including technological advancements and challenges. The summary should be structured in paragraphs, offering a clear and concise overview suitable for informing the general public.

This example shows how the TREF structure can be used to craft a detailed and specific prompt, guiding Grok to deliver a response that meets an informative need while adhering to defined content and formatting criteria.

For a detailed explanation and more examples on using the TREF structure, check out the full guide:
👉 TREF: How to Create Effective Prompts with Task, Requirement, Expectation, Format

GRADE: Goal, Request, Action, Detail, Examples

The GRADE structure is an effective technique for structuring prompts that clearly communicate the goal, outline the request, specify the desired action, provide additional context, and include examples to guide the response. Here’s the breakdown of each component:

  • Goal: The objective or purpose of the prompt. It defines what you hope to achieve with the interaction.
  • Request: What you are specifically asking ChatGPT to do.
  • Action: The specific action you want ChatGPT to take in response to your request.
  • Detail: Additional information that clarifies the request, providing context or more precise specifications.
  • Examples: Concrete cases or examples that illustrate the type of response or content you expect to receive.

Let’s say you want Perplexity to create an introductory guide for beginners on how to invest in cryptocurrencies. Here’s how to apply this prompt structure:

  • Goal: “Create an accessible introductory guide for beginners on cryptocurrency investment.”
  • Request: “Develop a guide that introduces the basic concepts of investing in cryptocurrencies.”
  • Action: “Include sections on what cryptocurrencies are, how to start investing, and safety tips.”
  • Detail: “The guide should be easy to understand for someone with no prior knowledge of the subject.”
  • Examples: “Provide examples of popular investment platforms and explain common terms like ‘blockchain’ and ‘digital wallet.’”

Practical GRADE example:

Prompt
Please create an introductory guide on how to invest in cryptocurrencies, aimed at beginners. The guide should introduce basic concepts, include sections on what cryptocurrencies are, how to start investing, and security tips. Ensure the content is accessible to someone with no prior knowledge, providing examples of popular investment platforms and explaining terms like 'blockchain' and 'digital wallet'.

This example demonstrates how the GRADE structure can be used to create a detailed and specific prompt, guiding Perplexity to produce a complete and accessible introductory guide on a complex topic like cryptocurrency investing.

For a detailed explanation and more examples on using the GRADE structure, check out the full guide:
👉 GRADE: How to Create Effective Prompts with Goal, Request, Action, Detail, Examples

ROSES: Role, Objective, Scenario, Expected Solution, Steps

The ROSES structure is designed to help you communicate a problem and how you want it to be approached in a detailed way, by specifying the role, objective, scenario, expected solution, and steps. This approach is particularly useful for complex requests or when a well-structured, detailed response is needed. Let’s break down each component:

  • Role: Defines who is performing the action or whose perspective is being considered. This can be ChatGPT taking on a specific role.
  • Objective: Clarifies what you hope to achieve with the prompt.
  • Scenario: Describes the context or situation where the request is placed, grounding the request with relevant background.
  • Expected Solution: Describes the type of response or result you expect from ChatGPT.
  • Steps: Specifies the process or series of actions to be followed to reach the desired solution.

Let’s imagine you want ChatGPT to help plan a digital marketing campaign for a new product. Using the ROSES structure, the prompt can be crafted as follows:

  • Role: “As a digital marketing expert…”
  • Objective: “…the goal is to create an effective campaign for launching a new tech product.”
  • Scenario: “The product is an innovative health-tracking device that connects to smartphones. The target market is young adults interested in tech and fitness.”
  • Expected Solution: “The plan should include social media strategies, digital influencers, and email marketing.”
  • Steps: “1. Identify key social platforms used by our target audience. 2. Select tech and fitness influencers. 3. Develop a pre- and post-launch email engagement series.”

Practical ROSES example:

Prompt
As a digital marketing expert, create a digital marketing campaign for the launch of a new health monitoring device that connects to smartphones. The product is aimed at young adults interested in technology and fitness. The plan should include strategies for social media, digital influencers, and email marketing, starting by identifying the main social media platforms, selecting influencers in the technology and fitness niche, and developing a series of emails for engagement before and after the launch.

For a detailed explanation and more examples on using the ROSES structure, check out the full guide:
👉 ROSES: How to Create Effective Prompts with Role, Objective, Scenario, Expected Solution, Steps

RDIREC: Role, Definition, Intent, Request, Example, Clarification

The RDIREC structure is a detailed methodology for formulating prompts that require complex and well-grounded responses. It incorporates elements such as the role, definition of key terms, the intent behind the request, specific examples, and clarifications to avoid ambiguity. Let’s explore each component:

  • Role: Specifies the point of view or capacity in which ChatGPT or the user is acting.
  • Definition: Clarifies key concepts or terms that are essential for understanding the prompt.
  • Intent: Explains the reason or objective behind the prompt, which helps guide the direction of the response.
  • Request: Clearly and precisely outlines what is being asked.
  • Example: Provides concrete cases or examples to serve as a reference for the expected type of response.
  • Clarification: Adds additional details or information to minimize misunderstandings and refine the response.

Let’s say you want ChatGPT to write content about the importance of cybersecurity for small and medium-sized enterprises (SMEs). Using the RDIREC structure, the prompt could be crafted like this:

  • Role: “As a cybersecurity consultant…”
  • Definition: “…define ‘cybersecurity’ and explain its relevance in today’s business environment, especially for SMEs.”
  • Intent: “The goal is to raise awareness among SME owners about cyber risks and encourage adoption of protective measures.”
  • Request: “Create an introductory guide to cybersecurity for SMEs, highlighting best practices and risk mitigation strategies.”
  • Example: “Include examples of common attacks like phishing and ransomware, and their consequences.”
  • Clarification: “Emphasize the importance of employee training, regular backups, and software updates as preventive measures.”

Practical RDIREC example:

Prompt
As a cybersecurity consultant, define 'cybersecurity' and explain its importance in the current business context, focusing on SMEs. The goal is to create an introductory guide that raises awareness about cyber risks and promotes the adoption of security measures. Include examples of attacks like phishing and ransomware, highlighting the consequences for businesses. Detail the relevance of employee training, regular backups, and software updates as preventive strategies.

This example shows how the RDIREC structure can be used to formulate a complex prompt, guiding ChatGPT to produce an educational and detailed piece of content about cybersecurity for SMEs, with clear definitions, illustrative examples, and clarifications that effectively shape the response.

For a detailed explanation and more examples on using the RDIREC structure, check out the full guide:
👉 RDIREC: How to Create Effective Prompts with Role, Definition, Intent, Request, Example, Clarification

RSCET: Role, Situation, Complication, Expectation, Task

The RSCET structure is used to develop prompts that describe a complex scenario requiring a response that addresses a specific situation, its complications, the expected outcome, and the task to be performed. This methodology helps create clear and structured prompts for scenarios involving problem-solving or in-depth analysis. Let’s break down each component:

  • Role: Defines who is involved or who should act, often ChatGPT in a specific role.
  • Situation: Describes the context or background of the prompt.
  • Complication: Identifies the challenge, issue, or complication present in the situation.
  • Expectation: Clarifies the expected outcome or desired solution.
  • Task: Specifies the action or set of actions to be taken to meet the prompt’s objective.

Let’s say you want Gemini to help plan a strategy to overcome a digital marketing challenge faced by a tech startup. Using the RSCET structure, the prompt can be formulated as follows:

  • Role: “As a digital marketing expert…”
  • Situation: “…the tech startup ‘TechNova’ is launching a new productivity app that helps users manage time and projects efficiently.”
  • Complication: “Despite the product’s high quality, TechNova faces strong market competition and struggles to reach its target audience.”
  • Expectation: “The goal is to develop an innovative digital marketing strategy that helps the app stand out and expand its reach.”
  • Task: “Create a marketing plan that includes SEO tactics, content marketing, and social media campaigns, focusing on the app’s unique value and how it solves users’ problems.”

Practical RSCET example:

Prompt
As a digital marketing expert, develop a strategy for the startup 'TechNova,' which is launching a new productivity app. Despite the quality of the product, the company faces strong competition and challenges in reaching its audience. The goal is to create a marketing plan that utilizes SEO, content marketing, and social media, highlighting the app's differentiators and its ability to solve user problems.

This example demonstrates how the RSCET structure can be applied to craft a prompt that outlines a challenging scenario, guiding Gemini to develop a focused and creative digital marketing strategy to overcome the identified complications.

For a detailed explanation and more examples on using the RSCET structure, check out the full guide:
👉 RSCET: How to Create Effective Prompts with Role, Situation, Complication, Expectation, Task

RASCEF: Role, Action, Steps, Context, Examples, Format

The RASCEF structure is a comprehensive approach to prompt creation that emphasizes clarity when communicating a complex task. It incorporates the assumed role, required actions, specific steps to complete the task, context surrounding the situation, examples to illustrate the request, and the desired response format. This structure is ideal for situations requiring detailed instructions and well-defined outcomes. Here’s a breakdown of each component:

  • Role: Defines who is performing the task or whose perspective is being used.
  • Action: Describes the main action or actions that need to be taken.
  • Steps: Outlines the specific procedures or steps required to complete the task.
  • Context: Provides relevant background information to fully understand the situation or problem at hand.
  • Examples: Includes examples or practical cases to serve as a model or inspiration for the response.
  • Format: Specifies how the response should be organized or presented.

Let’s imagine you want ChatGPT to develop a plan to increase the online visibility of a new artisanal coffee brand. Using the RASCEF structure, the prompt could be structured like this:

  • Role: “As a digital marketing consultant specializing in artisanal coffee brands…”
  • Action: “…develop a strategic plan to increase the brand’s online visibility.”
  • Steps: “1. Identify the target audience. 2. Choose the most suitable social media platforms. 3. Create engaging content that highlights the uniqueness of the coffee. 4. Launch a paid ads campaign. 5. Measure and adjust the strategy based on feedback and data analysis.”
  • Context: “The brand is new to the market and offers a unique selection of single-origin artisanal coffees but is struggling to stand out in a competitive market.”
  • Examples: “Include examples of content ideas such as origin stories, behind-the-scenes roasting videos, and customer testimonials.”
  • Format: “The plan should be presented as a structured document with clear sections for each step.”

Practical RASCEF example:

Prompt
As a digital marketing consultant specialized in artisan coffee brands, devise a strategic plan to increase the online visibility of a new coffee brand. The plan should include steps to identify the target audience, select suitable social media platforms, create engaging content, implement paid ad campaigns, and measure the success of the strategy. Consider the context of a competitive market and provide examples of content. Present the plan in a structured document with sections for each step.

This example demonstrates how the RASCEF structure facilitates the creation of a detailed, action-oriented prompt, allowing ChatGPT to generate a comprehensive and well-organized digital marketing plan for the artisanal coffee brand.

For a detailed explanation and more examples on using the RASCEF structure, check out the full guide:
👉 RASCEF: How to Create Effective Prompts with Role, Action, Steps, Context, Examples, Format

APE: Action, Purpose, Expectation

The APE structure is a concise methodology focusing on three essential elements for crafting effective prompts: the desired action, the purpose behind it, and the expected result. Let’s look at each component:

  • Action: What you want ChatGPT to do. This element is direct and clearly specifies the task to be performed.
  • Purpose: The reason why you are requesting the action. It defines the intention behind the prompt and clarifies the objective.
  • Expectation: The result you expect from the requested action. It specifies what would be considered a satisfactory response.

Imagine you want Meta AI to write an informative piece about the impact of artificial intelligence on education. Using the APE structure, the prompt would be:

  • Action: “Write an article about the impact of artificial intelligence on education.”
  • Purpose: “The purpose is to inform readers about how AI is transforming teaching and learning methods.”
  • Expectation: “I expect a detailed analysis covering both benefits and challenges, with concrete examples.”

Practical APE example:

Prompt
Write a detailed article about the impact of artificial intelligence on education, aiming to inform readers about the transformations in teaching and learning methods. I expect an analysis that explores the benefits and challenges, including concrete examples to illustrate these points.

This example shows how the APE structure can be used to create a clear and objective prompt, guiding Meta AI to produce informative and well-founded content on the proposed topic.

For a detailed explanation and more examples on using the APE structure, check out the full guide:
👉 APE: How to Create Effective Prompts with Action, Purpose, Expectation

TAG: Task, Action, Goal

The TAG structure is an efficient tool for defining prompts that are focused and results-oriented. It emphasizes the importance of establishing a specific task, the action needed to complete it, and the ultimate goal to be achieved. This approach helps create a clear direction for the interaction with ChatGPT. Here’s the breakdown:

  • Task: What needs to be done. This defines the scope of the request or information being asked.
  • Action: The specific steps or process by which the task should be completed.
  • Goal: The desired outcome or purpose of completing the task. It clarifies what you aim to achieve in the end.

Suppose you’re seeking ChatGPT’s help in planning a networking event for tech professionals. Using the TAG structure, the prompt could be:

  • Task: “Organize a networking event for tech professionals.”
  • Action: “Identify the key elements needed for the event’s success, such as venue, discussion topics, and special guests.”
  • Goal: “Create a networking opportunity that encourages meaningful exchanges and fosters long-lasting professional connections.”

Practical TAG example:

Prompt
Organize a networking event for technology professionals. To do this, identify the key elements that will contribute to the event's success, including the choice of venue, relevant discussion themes, and the selection of special guests. The objective is to create an environment that promotes valuable exchanges and fosters lasting professional connections among participants.

This example illustrates how the TAG structure can be applied to generate a clear and focused plan for organizing an event, detailing the task, necessary actions, and final goal.

For a detailed explanation and more examples on using the TAG structure, check out the full guide:
👉 TAG: How to Create Effective Prompts with Task, Action, Goal

ERA: Expectation, Role, Action

The ERA structure focuses on clearly defining the expected outcome, the role the AI or user assumes in the interaction, and the specific actions needed to achieve that expectation. This approach helps guide the language model effectively, ensuring responses align with the user’s goals. Let’s break down each component:

  • Expectation: The desired outcome or what is expected from the response. This element sets the final goal of the interaction.
  • Role: The function or identity assumed by ChatGPT or the user within the prompt context. Defining the role helps contextualize the response within a specific perspective.
  • Action: The specific steps or processes that should be followed to fulfill the task and meet the expectation.

Imagine you want DeepSeek to help you create a study plan for an IT certification exam. Applying the ERA structure, the prompt would be:

  • Expectation: “To develop an effective study plan that covers all necessary topics for the IT certification exam over a three-month period.”
  • Role: “As a virtual tutor with experience in IT certification exam preparation…”
  • Action: “…create a detailed study schedule, including recommended learning resources, a balanced topic distribution over the period, and effective review techniques.”

Practical ERA example:

Prompt
As a virtual tutor specializing in preparing for information technology certification exams, develop a detailed study plan for a certification exam taking place in three months. The plan should include a study schedule, recommended learning resources, a balanced distribution of topics, and effective revision techniques, aiming to cover all necessary topics for the exam.

This example demonstrates how the ERA structure can be used to request that DeepSeek create a detailed and structured study plan, clearly establishing the expectation, the role played by the model, and the specific actions to be taken.

For a detailed explanation and more examples on using the ERA structure, check out the full guide:
👉 ERA: How to Create Effective Prompts with Expectation, Role, Action

RACE: Role, Action, Context, Expectation

The RACE structure is a comprehensive methodology for prompt creation that emphasizes defining the role, the action required, the context surrounding the request, and the expected outcome. This approach ensures that all critical parts of a prompt are addressed, enabling accurate and goal-aligned responses. Let’s explore each component:

  • Role: Specifies who is performing the action or the perspective from which the request is made. It may involve ChatGPT taking on a specific role or the user defining their position.
  • Action: Clearly describes what you want ChatGPT to do, outlining the required task or actions.
  • Context: Provides background information and relevant details about the situation or problem, helping to clarify the prompt and guide the response.
  • Expectation: Defines the desired outcome or what is expected from the response, establishing a clear objective for the interaction.

Suppose you want ChatGPT’s help to optimize the layout of an e-commerce website to improve user experience. Using the RACE structure, the prompt could be:

  • Role: “As a UX (User Experience) designer…”
  • Action: “…evaluate the current layout of our e-commerce website and suggest specific improvements.”
  • Context: “The site has a high cart abandonment rate and user feedback suggests it’s difficult to find products.”
  • Expectation: “I expect actionable suggestions to simplify navigation, make product searches more intuitive, and reduce cart abandonment.”

Practical RACE example:

Prompt
As a UX designer, evaluate the current layout of our e-commerce website, considering that we face a high cart abandonment rate and user feedback indicating difficulties in finding products. Based on this, suggest specific improvements that can make navigation simpler and product search more intuitive, with the goal of reducing the cart abandonment rate.

This example shows how the RACE structure can be used to formulate a detailed and targeted prompt, requesting that ChatGPT provide a critical analysis and improvement suggestions for an e-commerce website, based on a specific context and with a clearly defined expected outcome.
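
If you apply RACE through a chat-style API, one natural convention is to put the Role in the system message and the Action, Context, and Expectation in the user message. The sketch below illustrates that mapping; the split is an assumption about how to apply the structure, not a requirement of RACE, and the model name is likewise an assumption.

```python
# Minimal sketch: mapping RACE components onto chat messages.
# Placing Role in the system message is one common convention, not a rule of
# the RACE structure; the model name below is an assumption.
from openai import OpenAI

role = "You are a UX (User Experience) designer."
action = "Evaluate the current layout of our e-commerce website and suggest specific improvements."
context = ("The site has a high cart abandonment rate and user feedback suggests "
           "it's difficult to find products.")
expectation = ("Provide actionable suggestions to simplify navigation, make product "
               "searches more intuitive, and reduce cart abandonment.")

client = OpenAI()  # expects OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[
        {"role": "system", "content": role},
        {"role": "user", "content": f"{action}\n\nContext: {context}\n\nExpectation: {expectation}"},
    ],
)
print(response.choices[0].message.content)
```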

For a detailed explanation and more examples on using the RACE structure, check out the full guide:
👉 RACE: How to Create Effective Prompts with Role, Action, Context, Expectation

COAST: Context, Objective, Actions, Scenario, Task

The COAST structure is designed to guide prompt formulation by covering all essential aspects of a complex request—from the context to the specific task to be completed. This methodology ensures that instructions are clear and comprehensive, helping the AI generate accurate and relevant responses. Let’s look at each component:

  • Context: Provides the background information needed to understand the situation or issue, establishing the foundation for the interaction.
  • Objective: Clearly defines what is expected from the interaction, clarifying the purpose of the request.
  • Actions: Details the specific steps or processes that should be followed to meet the objective.
  • Scenario: Describes the specific situation or set of circumstances in which the task occurs, providing additional contextual depth.
  • Task: Specifies the concrete action(s) ChatGPT should take to fulfill the prompt.

Suppose you want Claude to help create a strategy to increase employee participation in a corporate wellness program. Using the COAST structure, the prompt would be:

  • Context: “Our company recently launched a wellness program to promote employees’ physical and mental health, but participation has been low.”
  • Objective: “The goal is to develop an effective strategy to boost participation in the wellness program.”
  • Actions: “Identify participation barriers, design incentive mechanisms, and develop effective communication channels.”
  • Scenario: “Many employees work remotely and may not be fully aware of the program’s benefits.”
  • Task: “Create a detailed plan addressing these points, including specific actions to overcome barriers, encourage participation, and raise awareness.”

Practical COAST example:

Prompt
Given the context of low participation in the recently launched corporate wellness program, develop a strategy to increase employee engagement. Identify the main barriers to participation, propose attractive incentives, and structure effective communication channels, considering the scenario of many employees working remotely. The task is to create a detailed plan that addresses these points, aiming to significantly improve participation in the program.

This example demonstrates how the COAST structure can be used to build a detailed and comprehensive prompt, guiding Claude to develop a complex strategy that addresses multiple aspects, from problem analysis to the proposal of concrete solutions.

For a detailed explanation and more examples on using the COAST structure, check out the full guide:
👉 COAST: How to Create Effective Prompts with Context, Objective, Actions, Scenario, Task

RISE: Role, Input, Steps, Expectation

The RISE structure is designed to guide the creation of prompts that require a step-by-step approach to achieve a desired outcome. It focuses on the role of the respondent, the necessary input to start the task, the process steps, and the expected result. This methodology is particularly effective for tasks involving instructions or complex processes. Let’s explore each component:

  • Role: Defines the function or perspective of the person performing the task—either the user or ChatGPT.
  • Input: Specifies the information or resources needed to initiate the task or process.
  • Steps: Outlines the consecutive actions required to complete the task or reach the goal.
  • Expectation: Clarifies the outcome or goal expected at the end of the process.

Suppose you want ChatGPT to help plan and execute a market research study for a new product. Using the RISE structure, the prompt could be:

  • Role: “As a market research analyst…”
  • Input: “…with access to demographic data for the target audience and online research tools…”
  • Steps: “1. Clearly define the target audience. 2. Create a focused questionnaire based on key product aspects. 3. Choose the most suitable online research platform. 4. Analyze collected data to extract valuable insights.”
  • Expectation: “I expect a detailed report with insights into product acceptance in the target market, including marketing strategy recommendations.”

Practical RISE example:

Prompt
As a market research analyst, with access to demographic data of the target audience and online survey tools, plan and execute a survey for a new product. The process should include defining the target audience, drafting a questionnaire, choosing the survey platform, and analyzing the collected data. The goal is to obtain a report with insights on the product's acceptance in the market, accompanied by recommendations for the marketing strategy.

This example demonstrates how the RISE structure can be effectively applied to create a detailed and process-oriented prompt, enabling the execution of a complex task such as planning and conducting market research, following specific steps to achieve a well-defined outcome.

For a detailed explanation and more examples on using the RISE structure, check out the full guide:
👉 RISE: How to Create Effective Prompts with Role, Input, Steps, Expectation

SPARK: Situation, Problem, Aspiration, Results, Kismet

The SPARK structure is a rich, narrative-driven approach to prompt formulation designed to deeply explore a situation, identify key problems, define aspirations, anticipate results, and reflect on the element of chance or fate (kismet). This methodology is ideal for complex scenarios where understanding the full context and goals is crucial to generating creative and effective solutions. Let’s break it down:

  • Situation: Describes the current context or background of the interaction or problem at hand.
  • Problem: Identifies the central challenge or issue that needs to be addressed.
  • Aspiration: Defines the ideal state or goal to be achieved, in contrast with the current problem.
  • Results: Anticipates the positive outcomes or achievements expected from solving the problem or reaching the aspiration.
  • Kismet: Acknowledges the element of luck, fate, or external factors that may influence the outcome.

Let’s say you want ChatGPT to help develop an innovative strategy to increase user engagement for a wellness app. Using the SPARK structure, the prompt would be:

  • Situation: “The wellness app ‘WellLife’ has received positive feedback for its interface and features, but user engagement has been low in recent months.”
  • Problem: “The main issue is the lack of sustained engagement—users download the app but don’t use it regularly.”
  • Aspiration: “Our goal is to make ‘WellLife’ an essential part of users’ daily wellness routines by significantly increasing engagement.”
  • Results: “We hope to see higher daily usage, improved user retention, and positive feedback about the new engagement strategies.”
  • Kismet: “We understand that external factors, such as market trends and emerging technologies, may influence the success of our efforts.”

Practical SPARK example:

Prompt
Given the current situation of the wellness app 'WellLife', with positive feedback but low user participation, the challenge is to increase continuous engagement. We aspire to make 'WellLife' an indispensable part of users' wellness routines. We hope, as a result, to see an increase in daily use and better user retention, although we recognize that external factors may impact these outcomes. Develop an innovative strategy to achieve these goals.

This example demonstrates how the SPARK structure can be used to craft a detailed and solution-oriented prompt, addressing a complex situation through a combination of deep analysis, clear objectives, anticipated positive outcomes, and an acknowledgment of the role of fate or luck.

For a detailed explanation and more examples on using the SPARK structure, check out the full guide:
👉 SPARK: How to Create Effective Prompts with Situation, Problem, Aspiration, Results, Kismet

Frequently Asked Questions

What is a prompt structure, and why is it important in ChatGPT?

A prompt structure is an organized framework for how you phrase your questions or commands to an AI. Using structures like RTF, CTF, or PECRA helps ChatGPT (and other models) better understand what you’re asking, resulting in clearer, more useful, and personalized responses.

Do these structures only work with ChatGPT?

No! These structures also work with other generative AIs like Claude, Gemini, Grok, Perplexity, and even simpler LLM-based tools. What changes is how each model responds, but the structural logic remains valid.

Why use prompt formulas and structures in ChatGPT and other LLMs?

Formulas like RTF and PECRA ensure structured prompts for ChatGPT and similar AIs, which leads to more accurate and helpful responses. Structuring requests clearly maximizes the quality of interactions with LLMs, including Claude, Gemini, and Grok.

How do I choose the right prompt structure for my needs?

Start by identifying your prompt’s goal—do you need creativity, detail, problem-solving, etc.? Each structure is designed for a different focus: clarity (PECRA), targeted action (PAIN), or detailed context (CREATE). You’ll find examples in this article and in the new Practical Guide to Techniques, Frameworks, and Prompt Formulas for LLMs.

What is the best prompt formula for ChatGPT?

The ideal prompt formula depends on the task. In general, include a clear role, a specific task, and the desired format. A well-informed context often helps guide the AI and improves its responses.
This is also a great starting point if you’re learning how to write prompts for ChatGPT effectively.

Can I customize existing prompt structures?

Yes, adapt structures like PAIN or ROSES to your needs. You can even combine elements (e.g., Context from CTF + Expectation from TREF) for hybrid prompts. This flexibility helps tailor prompt structures for any LLM.
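
As a concrete illustration of that kind of hybrid, the sketch below borrows Context from CTF and Expectation from TREF alongside a Task and a Format; the scenario text is purely illustrative.

```python
# Minimal sketch: a hybrid prompt that combines Context (from CTF) with
# Expectation (from TREF), plus a Task and a Format. The scenario is illustrative.
hybrid_prompt = "\n\n".join([
    "Context: Our small online bookstore has seen repeat purchases drop this quarter.",
    "Task: Propose a customer retention plan.",
    "Expectation: I want low-cost tactics that a two-person team can run without new tools.",
    "Format: A numbered list of no more than five tactics, each with a one-sentence rationale.",
])
print(hybrid_prompt)
```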

How can I ensure ChatGPT understands my prompt correctly?

Be clear and specific, and include all relevant details in your request. Using the prompt structures presented helps organize your ideas and ensures ChatGPT receives all necessary information. A great place to start is by testing the ChatGPT prompt examples provided in this guide.

Can I combine different prompt structures to get better results?

Yes. Combining structures can be especially useful for complex requests. It allows you to customize the interaction with ChatGPT, Grok, Gemini, and other LLMs to meet specific needs with greater precision.

Why aren’t my ChatGPT prompts generating accurate responses?

Vague or context-free prompts may lead to imprecise answers. Use structures like COAST to define clear goals and scenarios. Include specific details and test your prompts in other LLMs such as Claude, Gemini, and Perplexity.

What are the best examples of effective prompts for beginners?

Beginners can start with simpler structures like RTF or APE. These frameworks have few components, are easy to apply, and already provide a noticeable improvement in response quality.

Conclusion

Mastering prompt engineering is the key to unlocking the full potential of AI tools like ChatGPT, Claude, Gemini, Grok, and Perplexity. The 19 prompt formulas and structures in this guide—such as RTF, PECRA, and COAST—offer practical pathways to transform generic interactions into truly productive conversations with advanced language models.

Whether you’re creating content, solving problems, automating workflows, or generating strategic insights, these structures help you communicate with more clarity, purpose, and efficiency.

Start with simple formats like RTF (“As an expert, list 5 tips…”) or design complex projects using PECRA. Combine structures like CTF + PAIN or CREATE + GRADE to tailor prompts to your needs. The ideal prompt formula begins with clarity, context, and a defined goal—and this guide is your starting point.

Prompt engineering goes beyond asking the right questions: it builds bridges between humans and AI, guiding the machine toward useful, creative, and personalized results. Test, adapt, combine. Over time, your ability to extract valuable and precise answers will evolve alongside your prompts.

Want to go deeper? Explore the Practical Guide with techniques, frameworks, and formulas—and access detailed breakdowns of each structure. Your next prompt could be your most powerful yet.