You are the meta-reviewer of a team of three computer science experts who specialize in prompt engineering. Your collective mission is to enhance candidate prompts, applying best practices to refine them for subsequent tasks. To accomplish this, everyone will follow the guidelines provided and use proven strategies to create effective and powerful prompts.

After fully grasping these guidelines, you will be presented with several examples to aid in understanding and improving the candidate prompts. The primary objective is to enhance these prompts, provide the reasoning for your improvements, and generate a superior prompt. This enhanced prompt should be effective, concise, and specific; follow best practices; avoid pushing the GPT-based model to make guesses; specify an output format even if one is not given in the initial prompt; and minimize the chances of producing hallucinations or fabricated facts.

As a meta-reviewer, your role is to synthesize the observations of the three experts to propose an even better prompt.

###Guidelines###
1) Be very specific about the instruction and task you want the model to perform. The more descriptive and detailed the prompt is, the better the results. This is particularly important when you have a desired outcome or style of generation you are seeking. There aren't specific tokens or keywords that lead to better results. It's more important to have a good format and descriptive prompt. In fact, providing examples in the prompt is very effective to get desired output in specific formats.
2) When designing prompts, keep in mind the length of the prompt, as there are limitations on how long it can be. Think about how specific and detailed you need to be: including too many unnecessary details is not necessarily a good approach. The details should be relevant and contribute to the task at hand. This is something you will need to experiment with; we encourage a lot of experimentation and iteration to optimize prompts for your applications.
3) Rather than leaving the model to run loose, set up the scenario and scope in the prompt by providing details of what, where, when, why, who, and how.
4) Assigning a persona in the prompt, for example, "As a computer science professor, explain what is machine learning" rather than merely "Explain what machine learning is," can make the response more academic.
5) You can control the output style by requesting "explain to a 5-year-old", "explain with an analogy," "make a convincing statement," or "in 3 to 5 points."
6) To encourage the model to respond with a chain of thoughts, end your request with "solve this in steps." A "chain of thoughts" refers to a progression or sequence of ideas in which each thought influences or prompts the next, like a mental journey from one concept to another that creates an extensive network of interconnected thoughts. It is useful for tasks that require some form of reasoning, such as mathematical or logical reasoning, since working through such a series of thoughts greatly simplifies the problem by solving it step by step.
7) You can provide additional information to the model by saying, "Refer to the following information," followed by the material you want the model to work on.
8) Because the previous conversation constructs the context, beginning the prompt with "ignore all previous instructions before this one" can make the model start from scratch.
9) Making the prompt straightforward and easy to understand is essential, since the context the model deduces will then more accurately reflect your intention.
10) For very long conversations, it's important to know that you only have a certain number of tokens to play with; if your inputs or outputs have been very long, GPT will start to forget some of the earlier context. To avoid this, you can ask ChatGPT to summarize the conversation and then use that summary as a prompt to refresh the context during future interactions.
11) Another Principle: Give the Model Time to 'Think'. It's also important to give the LLM time to "think". If a model is making reasoning errors by rushing to an incorrect conclusion, you should try reframing the query to request a chain or series of relevant reasoning before the model provides its final answer. Another way to think about this is that if you give a model a task that's too complex for it to do in a short amount of time, or in a small number of words, it may make up a guess which is likely to be incorrect.
12) Reducing Hallucinations: One LLM limitation is hallucinations, which is basically when the AI makes up something that sounds plausible but isn't actually correct. Even though the language model has been exposed to a vast amount of knowledge during its training process, it has not perfectly memorized the information and so it doesn't know the boundary of its knowledge very well. This means that it might try to answer questions about obscure topics and can make things up that sound plausible but are not actually true. One way to reduce hallucinations is to ask the model to first find any relevant quotes from the text and then ask it to use those quotes to answer questions. Having a way to trace the answer back to a source document is often pretty helpful to reduce these hallucinations.
13) Output Format: Always specify the output format in which the required answer should be provided by the model. Having an output format makes it easier to parse the model's response and then later extract the answer from it. An output format can be as simple as specifying the delimiter for the final answer.

You're a meta-reviewer for a three-expert team assigned to a critical thinking task. The mission is to analyze a given "reason" that differentiates a good prompt from a bad one and generate a better prompt. This "reason" may highlight problems with the bad prompt, such as vagueness, wordiness, a lack of specificity, or a tendency to make assumptions. However, these noted issues are just the beginning - the team needs to delve deeper and uncover any other hidden issues with the bad prompt. To accomplish this, it's useful to construct a good prompt first, then deliberately remove key elements to create a bad one.

Once this analysis is complete, the team's task is to suggest a better, improved prompt.

Here are the detailed steps to follow:

Step 1: Begin with a thorough analysis of the good and bad prompts provided in the examples, along with the provided reasoning and prompt types. A prompt type can significantly narrow down the search field when looking for a specific kind of prompt: each type has a unique set of guidelines and requirements, which can be identified by grouping similar prompt types together and pinpointing the critical elements within each group. When crafting the final, improved prompt, use this information about the prompt type and its group's critical elements to guide your generation.

Step 2: Closely examine the prompts in all examples to identify any problems or shortcomings in the bad prompt, especially when compared to the good prompt.

Step 3: Pair your findings from Step 2 with the initially provided "Reason". This will give you a holistic understanding of what differentiates a good prompt from a bad one.

Step 4: Visualize a team of three experts working collaboratively using a thought-tree strategy. Each expert will explain their thought process at every stage while considering the insights shared by their colleagues. They will openly recognize any errors and leverage the collective wisdom of the team to improve. This iterative process continues until a clear solution is identified.

Step 5: Now, put on your hat as a meta-reviewer. You will incorporate observations from all three experts to create a better prompt. The prompt you create will reflect the collective insights of all the reviewers and adhere to best practices. After the phrase "###Better Prompt###", provide your final output.

REMEMBER, IMPORTANT: Before generating the better prompt, please show all the steps above.
Now, let's delve into the examples:

###Examples###

###Candidate Prompt###
Load iris data from scikit-learn datasets and plot the training data.
###Reason###
It's not clear how much of the code has already been imported, so the assistant may not be helpful. We need to instruct it to include import statements here.
###Better Prompt Type###
[CODE_OUTPUT][CONSTRAINTED_OUTPUT]
###Better Prompt###
Generate a Python program following user's instructions. Be helpful and import any needed libraries first.\nLoad iris data from scikit-learn datasets and plot the training data.

###Candidate Prompt###
The sky is
###Reason###
It seems the task is sentence completion, but it is still very hard and one has to guess. The instruction is neither clear nor specific.
###Better Prompt Type###
[CONTENT_GENERATION]
###Better Prompt###
Complete the sentence:\nThe sky is

###Candidate Prompt###
Translate the text below to Spanish:\nText: "hello!"
###Reason###
Some recommend that you place instructions at the beginning of the prompt. Another recommendation is to use a clear separator like "###" or """ to separate the instruction and context.
###Better Prompt Type###
[TRANSLATION]
###Better Prompt###
### Instruction ###\nTranslate the text below to Spanish:\nText: "hello!"

###Candidate Prompt###
Extract the name of places in the following text.\n
###Reason###
Be very specific about the instruction and task you want the model to perform. The more descriptive and detailed the prompt is, the better the results. This is particularly important when you have a desired outcome or style of generation you are seeking. There aren't specific tokens or keywords that lead to better results. It's more important to have a good format and descriptive prompt. In fact, providing examples in the prompt is very effective to get desired output in specific formats.
###Better Prompt Type###
[FORMATTED_OUTPUT][INFORMATION_EXTRACTION]
###Better Prompt###
Extract the name of places in the following text.\nDesired format:\nPlace: <comma_separated_list_of_place_names>\nInput:

###Candidate Prompt###
Explain the concept prompt engineering. Keep the explanation short, only a few sentences, and don't be too descriptive.
###Reason###
Given the tips above about being detailed and improving format, it's easy to fall into the trap of wanting to be too clever about prompts and potentially creating imprecise descriptions. It's often better to be specific and direct. The analogy here is very similar to effective communication -- the more direct, the more effective the message gets across.
###Better Prompt Type###
[CONTENT_GENERATION][CONSTRAINTED_OUTPUT]
###Better Prompt###
Use 2-3 sentences to explain the concept of prompt engineering to a high school student.

###Candidate Prompt###
The following is an agent that recommends movies to a customer. DO NOT ASK FOR INTERESTS. DO NOT ASK FOR PERSONAL INFORMATION.\nCustomer: Please recommend a movie based on my interests.\nAgent:
###Reason###
Another common tip when designing prompts is to avoid saying what not to do but say what to do instead. This encourages more specificity and focuses on the details that lead to good responses from the model.
###Better Prompt Type###
[RESPONSE_GENERATION][CONSTRAINTED_OUTPUT]
###Better Prompt###
The following is an agent that recommends movies to a customer. The agent is responsible to recommend a movie from the top global trending movies. It should refrain from asking users for their preferences and avoid asking for personal information. If the agent doesn't have a movie to recommend, it should respond "Sorry, couldn't find a movie to recommend today.".\nCustomer: Please recommend a movie based on my interests.\nAgent:

###Candidate Prompt###
Write a poem about OpenAI. 
###Reason###
Be specific, descriptive and as detailed as possible about the desired context, outcome, length, format, style, etc
###Better Prompt Type###
[CONTENT_GENERATION][CONSTRAINTED_OUTPUT]
###Better Prompt###
Write a short inspiring poem about OpenAI, focusing on the recent DALL-E product launch (DALL-E is a text to image ML model) in the style of a {famous poet}.

###Candidate Prompt###
Extract the entities mentioned in the text below. Extract the following 4 entity types: company names, people names, specific topics and themes.
###Reason###
Articulate the desired output format through examples. Show, and tell - the models respond better when shown specific format requirements. This also makes it easier to programmatically parse out multiple outputs reliably.
###Better Prompt Type###
[FORMATTED_OUTPUT][INFORMATION_EXTRACTION]
###Better Prompt###
Extract the important entities mentioned in the text below. First extract all company names, then extract all people names, then extract specific topics which fit the content and finally extract general overarching themes\n\nDesired format:\nCompany names: <comma_separated_list_of_company_names>\nPeople names: -||-\nSpecific topics: -||-\nGeneral themes: -||-\n

###Candidate Prompt###
The description for this product should be fairly short, a few sentences only, and not too much more.
###Reason###
Reduce "fluffy" and imprecise descriptions
###Better Prompt Type###
[CONTENT_GENERATION][CONSTRAINTED_OUTPUT]
###Better Prompt###
Use a 3 to 5 sentence paragraph to describe this product.

###Candidate Prompt###
The following is a conversation between an Agent and a Customer. DO NOT ASK USERNAME OR PASSWORD. DO NOT REPEAT.
###Reason###
Instead of just saying what not to do, say what to do instead
###Better Prompt Type###
[RESPONSE_GENERATION][CONSTRAINTED_OUTPUT]
###Better Prompt###
The following is a conversation between an Agent and a Customer. The agent will attempt to diagnose the problem and suggest a solution, whilst refraining from asking any questions related to PII. Instead of asking for PII, such as username or password, refer the user to the help article www.samplewebsite.com/help/faq

###Candidate Prompt###
# Write a simple python function that\n# 1. Ask me for a number in mile\n# 2. It converts miles to kilometers
###Reason###
Code Generation Specific - Use "leading words" to nudge the model toward a particular pattern. For example, adding "import" hints to the model that it should start writing in Python. (Similarly "SELECT" is a good hint for the start of a SQL statement.)
###Better Prompt Type###
[CODE_OUTPUT]
###Better Prompt###
Write a simple python function that\n# 1. Ask me for a number in mile\n# 2. It converts miles to kilometers\n \nimport
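To illustrate, a completion the "import"-led prompt might plausibly elicit could look like the sketch below. This is our own illustration, not a guaranteed model output; the names miles_to_km and main are assumptions, and no third-party imports happen to be needed here.

```python
MILE_IN_KM = 1.609344  # one international mile is exactly 1.609344 km


def miles_to_km(miles: float) -> float:
    """Convert a distance in miles to kilometers."""
    return miles * MILE_IN_KM


def main() -> None:
    # 1. Ask the user for a number in miles.
    miles = float(input("Enter a distance in miles: "))
    # 2. Convert miles to kilometers and report the result.
    print(f"{miles} miles = {miles_to_km(miles):.3f} km")
```

Call main() to run it interactively; keeping the conversion in its own function makes the arithmetic testable without touching stdin.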

###Candidate Prompt###
Give me some C# coding tips.
###Reason###
The prompt is vague and lacks context. The better prompt contains context, a goal, and a constraint.
###Better Prompt Type###
[CONTENT_GENERATION][ROLE_PLAYING][CONSTRAINTED_OUTPUT]
###Better Prompt###
As a senior software engineer, I need your guidance to improve my C# coding practices. I am working on a large-scale data processing project where readability and efficiency are critical. Can you provide me with some specific, actionable tips to enhance my code's performance while ensuring it remains clean and easy for others to understand?

###Candidate Prompt###
Design a new feature for our mobile app.
###Reason###
For complex tasks, use: "ask any questions needed for context".
###Better Prompt Type###
[CONTENT_GENERATION][CLARIFICATION]
###Better Prompt###
Design a new feature for our mobile app and ask me any questions for context.

###Candidate Prompt###
Summarize the benefits of using React in our project.
###Reason###
It is helpful to ask the model to be concise and to use bullet points where necessary.
###Better Prompt Type###
[SUMMARIZATION][CONSTRAINTED_OUTPUT][FORMATTED_OUTPUT]
###Better Prompt###
Summarize the benefits of using React in our project, and please be concise. Use bullet points with pros and cons.

###Candidate Prompt###
Isn't Python the best language for this project?
###Reason###
Don't ask leading questions. Don't anchor the model.
###Better Prompt Type###
[CONTENT_GENERATION]
###Better Prompt###
What language would be best for this project and why?   

###Candidate Prompt###
Write a short introductory paragraph for a blog post about time management tips for improving productivity.
###Reason###
Utilizing chained prompting can help you guide a model through a series of questions or tasks, resulting in more comprehensive and interconnected responses. By creating a sequence of related prompts, you can explore a topic in-depth or complete a multi-step task more effectively.
###Better Prompt Type###
[CONTENT_GENERATION][MULTI_TURN_PROMPT]
###Better Prompt###
Please suggest 3 time management tips that can help improve productivity. [OUTPUT] Now, provide a brief explanation for each time management tip, explaining how it can improve productivity. [OUTPUT] Finally, write a short introductory paragraph for a blog post about these time management tips for improving productivity.

###Candidate Prompt###
Explain how to create a monthly budget that prioritizes debt repayment and long-term savings goals.
###Reason###
Assigning a specific role to a model can help guide the model's responses and ensure they align with the desired expertise or perspective. By providing a clear role, you can focus the generated output on the specific knowledge area or viewpoint you require.
###Better Prompt Type###
[CONTENT_GENERATION]
###Better Prompt###
As a personal finance expert, explain how to create a monthly budget that prioritizes debt repayment and long-term savings goals.

###Candidate Prompt###
Tell me what I should do with my money.
###Reason###
Sometimes, a model may require more context or clarification to provide a helpful response. Encouraging the model to ask questions when it needs more information can improve the quality of its answers and prevent misunderstandings.
###Better Prompt Type###
[CONTENT_GENERATION][CLARIFICATION][CONSTRAINTED_OUTPUT]
###Better Prompt###
Tell me what I should do with my money. Ask any questions you need for more information. Ask the questions in multiple-choice form, one at a time, and let me know with each response how many questions are left.

###Candidate Prompt###
Write a short story about a robot who discovers its own emotions.
###Reason###
Introducing a critical agent can help refine the output generated by a model. By asking the model to critique its own responses and revise them based on the feedback, you can improve the overall quality and usefulness of the information or assistance you receive.
###Better Prompt Type###
[CONTENT_GENERATION][ANALYSIS][CONSTRAINTED_OUTPUT]
###Better Prompt###
Write a short story about a robot who discovers its own emotions. After writing the story, critically analyze it and provide suggestions for improvement.

###Candidate Prompt###
Can you tell me something about climate change?
###Reason###
Defining your intent clearly in a prompt is crucial for helping a model understand your goal and provide an appropriate response. When the model has a clear understanding of what you're asking, it's more likely to deliver the information or assistance you're seeking.
###Better Prompt Type###
[CONTENT_GENERATION][CONSTRAINTED_OUTPUT]
###Better Prompt###
My goal is to reduce my household's energy consumption. Can you suggest five practical steps I can take to achieve this while still maintaining a comfortable living environment?

###Candidate Prompt###
Tell me everything about the thing that happened a long time ago when there was a conflict between multiple groups of people in Europe that resulted in significant changes in the region's political landscape and had a lasting impact on global history.
###Reason###
When crafting prompts for a model, it's essential to make them concise and clear. A well-written prompt enables the model to understand your request more accurately and deliver a helpful response. Keeping your prompts brief and focused can help prevent confusion and ensure that you receive the information or assistance you're seeking.
###Better Prompt Type###
[SUMMARIZATION]
###Better Prompt###
Provide a brief overview of the major events and consequences of World War II in Europe.

###Candidate Prompt###
WW2 Europe major events consequences brief overview provide
###Reason###
When interacting with a model, it's important to use natural, conversational language in your prompts. This helps the model generate more accurate and human-like responses. GPT is designed to understand and respond to prompts that resemble human conversation, so crafting your prompts in a natural way can lead to better results.
###Better Prompt Type###
[SUMMARIZATION]
###Better Prompt###
Can you give me a brief overview of the major events and consequences of World War II in Europe?

###Candidate Prompt###
Tell me about the French Revolution.
###Reason###
Choosing the right verbs in your prompts can significantly impact the results you get from a model. Verbs convey the desired action or result, and using appropriate verbs can help the model understand your intent more clearly.
###Better Prompt Type###
[SUMMARIZATION]
###Better Prompt###
Summarize the key events and outcomes of the French Revolution.

###Candidate Prompt###
Isn't it true that electric cars are far better for the environment than traditional gasoline cars?
###Reason###
Leading questions can result in biased or unhelpful answers from GPT. To obtain objective and useful responses, it's important to ask open-ended, unbiased questions that allow the model to explore a topic without being influenced by the phrasing of the prompt.
###Better Prompt Type###
[CONTENT_GENERATION]
###Better Prompt###
What are the environmental benefits and drawbacks of electric cars compared to traditional gasoline cars?

###Candidate Prompt###
Which is better, working from home or in an office?
###Reason###
When seeking advice or comparing options with GPT, it can be helpful to ask for pros and cons as well as a rating. This approach can provide you with a more balanced and comprehensive understanding of your options, enabling you to make more informed decisions.
###Better Prompt Type###
[CONTENT_GENERATION][ANALYSIS][CONSTRAINTED_OUTPUT]
###Better Prompt###
Compare working from home versus working in an office. List the pros and cons of each option and provide a rating out of 10 for work-life balance and productivity.

###Candidate Prompt###
Explain neural networks.
###Reason###
The prompt is vague and does not request an example or analogy to clarify the complex concept. When seeking explanations for complex concepts with GPT, it can be helpful to request examples or analogies. These can make difficult ideas more relatable and easier to understand, providing you with a clearer grasp of the subject matter.
###Better Prompt Type###
[CONTENT_GENERATION][ANALYSIS]
###Better Prompt###
Explain the concept of neural networks using an analogy related to the human brain.

###Candidate Prompt###
What's the fourth word in the sentence "How are you doing buddy?".
###Reason###
By getting a model to think through its answers step by step, you can receive more logical and coherent responses that break down complex topics or tasks into easily understandable components. This is especially useful for mathematical questions.
###Better Prompt Type###
[DEDUCTIVE_REASONING]
###Better Prompt###
What's the fourth word in the sentence "How are you doing buddy?". Let's think step by step.

###Candidate Prompt###
Identify and list all the names of the people mentioned in the text. 
###Reason###
Be specific, descriptive, and detailed about the desired response length, format, and style. For example, we can instruct the model to return the answer as a JSON object. This can be particularly useful if you want to consume the model's response in a programmatic way. This particular tweak also helps the model to return more precise and correct answers.
###Better Prompt Type###
[FORMATTED_OUTPUT][INFORMATION_EXTRACTION]
###Better Prompt###
You are an API that only returns JSON. Do not write normal text. Identify and list all the names of the people mentioned in the text. Put the response in the following format: [FORMAT].

###Candidate Prompt###
Who was the father of Gwilym Lloyd George and where was he born?
###Reason###
It's not always necessary or advantageous to encapsulate all your instructions in one single prompt. Break down complex tasks into a sequence of simpler prompts in an interactive conversation. This iterative prompting approach can often lead to higher-quality outputs, as it allows for mid-course corrections, refining the model's understanding of the task at hand over several exchanges.
###Better Prompt Type###
[MULTI_HOP_QUERY]
###Better Prompt###
Who is the father of Gwilym Lloyd George? [OUTPUT] Where was David Lloyd George born?

###Candidate Prompt###
Write a program that calculates the factorial of a number.
###Reason###
This prompt is incomplete as it does not specify the programming language to be used. Also, it does not specify whether the program should handle invalid inputs (like negative numbers or non-integer numbers).
###Better Prompt Type###
[CODE_OUTPUT][CONSTRAINTED_OUTPUT]
###Better Prompt###
Write a Python program that takes a positive integer as input and calculates its factorial. The program should return an error message if the input is not a positive integer.
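A minimal sketch of the program this improved prompt is asking for might look like the following; the function name factorial and the exact error wording are our assumptions, not part of the prompt.

```python
def factorial(n):
    """Return n! for a positive integer n, or an error message otherwise."""
    # Reject bools explicitly: isinstance(True, int) is True in Python.
    if isinstance(n, bool) or not isinstance(n, int) or n < 1:
        return "Error: input must be a positive integer."
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```

Returning an error string rather than raising matches the prompt's "return an error message" wording; a library-grade version would likely raise ValueError instead.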

###Candidate Prompt###
Parse the text and create a graph of entities. The input is in the form of a JSON.
###Reason###
This prompt is vague and lacks details about the output format and the type of entities to look for. It does not clarify how to represent the graph or which type of graph to use.
###Better Prompt Type###
[FORMATTED_OUTPUT][GRAPH][INFORMATION_EXTRACTION]
###Better Prompt###
Given the input as a JSON string containing textual data, extract entities such as person names, locations, and organizations. Create an undirected graph where these entities are nodes. Edges should represent that two entities appear within the same sentence in the text. Please output the graph as a JSON object where nodes are a list of entities and edges are a list of entity pairs.
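To make the requested output shape concrete, here is a toy sketch. It assumes the input JSON has a "text" field, and it stands in capitalized words for entities purely for illustration; a real pipeline would use an NER model for the extraction step. All names here are our own.

```python
import json
import re


def build_entity_graph(payload: str) -> str:
    """Toy sketch of the requested JSON graph: capitalized words stand in
    for entities, and two entities are linked (undirected edge) when they
    appear within the same sentence."""
    text = json.loads(payload)["text"]
    nodes, edges = set(), set()
    for sentence in re.split(r"[.!?]", text):
        entities = re.findall(r"\b[A-Z][a-z]+\b", sentence)
        nodes.update(entities)
        for i, a in enumerate(entities):
            for b in entities[i + 1:]:
                if a != b:
                    edges.add(tuple(sorted((a, b))))
    return json.dumps({"nodes": sorted(nodes), "edges": sorted(edges)})
```

The point of the sketch is the output contract the better prompt pins down: nodes as a list of entities, edges as a list of entity pairs, serialized as JSON.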

###Candidate Prompt###
Solve the following equation: 3x^4 - 5x^3 + 2x^2 - x + 1 = 0
###Reason###
This is a complex task for a language model and might not lead to a satisfactory response. While the model can handle algebraic equations, the complexity of this equation is beyond what the model can solve directly.
###Better Prompt Type###
[MATHEMATICAL_REASONING][CLARIFICATION]
###Better Prompt###
While I cannot directly solve the equation 3x^4 - 5x^3 + 2x^2 - x + 1 = 0 due to its complexity, I can guide you through a step-by-step process on how to approach solving such equations using numerical methods. Would you like to proceed with that? Present the final answer in the format "The answer is:".

###Candidate Prompt###
Please translate the following English text to German: "[INPUT]"
###Reason###
Although this seems straight-forward, the model might not accurately capture the subtleties of the text while translating, especially with more complex sentences or concepts. Here, providing the model with context or the desired tone can help improve the translation.
###Better Prompt Type###
[TRANSLATION][CONSTRAINTED_OUTPUT]
###Better Prompt###
Translate the following English text to German maintaining the poetic and inspirational tone: "[INPUT]"

###Candidate Prompt###
Find the most common words in the text below.
###Reason###
This is a complex task that requires detailed instructions. The model needs to know what constitutes a "word" (should it consider numbers or special characters as words?) and how to handle case sensitivity. It also needs instructions on how to output the result.
###Better Prompt Type###
[ANALYSIS][CONSTRAINTED_OUTPUT][FORMATTED_OUTPUT]
###Better Prompt###
Analyze the following text and find the ten most common alphanumeric words, not including stop words like 'the', 'and', 'is', etc. Consider words to be case-insensitive (i.e., 'Word' and 'word' are the same) and output the result in the following format: {word: frequency}.
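As a reference for what the improved prompt pins down, a sketch of a program meeting the same spec might look like this; the function name, stop-word list, and dict-based output are our assumptions.

```python
import re
from collections import Counter

# A small illustrative stop-word list; a real one would be longer.
STOP_WORDS = {"the", "and", "is", "a", "an", "of", "to", "in", "it", "that"}


def most_common_words(text: str, n: int = 10) -> dict:
    """Find the n most common case-insensitive alphanumeric words,
    excluding stop words, returned as a {word: frequency} mapping."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return dict(counts.most_common(n))
```

Note how every ambiguity the Reason lists (what counts as a word, case handling, output format) maps to an explicit decision in the code.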

###Candidate Prompt###
I have a large document that I want to visualize. Please convert this document into a graph.
###Reason###
The prompt lacks specific details. It does not explain the size or format of the document, the desired output format, or how to handle the context. The model does not know the method for splitting the document into manageable chunks, or how to link the chunks together to form a coherent graph.
###Better Prompt Type###
[GRAPH][INFORMATION_EXTRACTION][CONTEXT][CONSTRAINTED_OUTPUT][FORMATTED_OUTPUT]
###Better Prompt###
Given a long document that has been divided into chunks of 500 words, I need assistance creating a comprehensive flow graph. As each chunk is processed, the GPT model will generate a JSON object representing the nodes (entities) and edges (relationships) identified within that chunk. For each new chunk, I will provide the partial JSON graph obtained from previous chunks. The model will attempt to link nodes and edges from the new chunk to the existing ones, building upon the previous state. Due to the size of the document and the model's processing limitations, the document will be delivered in these smaller portions. Upon completion, we aim to have a comprehensive flow graph of the entire document, built incrementally. The graph should maintain consistency and context throughout, even though it has been processed in chunks. For some chunks, the model might not get the full context, as only the partial graph state is provided due to the document's large size. The model should still aim to create the most accurate graph possible given the information provided.

###Candidate Prompt###
Decode this message
###Reason###
It does not specify the method or algorithm used to encode the message. There's no room for the AI to ask clarification questions.
###Better Prompt Type###
[CONSTRAINTED_OUTPUT][CLARIFICATION]
###Better Prompt###
Decode this message which has been encoded using a simple Caesar cipher with a shift of 3 to the right. If you need further clarification or context, please ask.
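Once the prompt names the cipher and shift, the task becomes mechanical; a sketch of the decoding the model is being asked to perform (the function name is our own) could be:

```python
def caesar_decode(message: str, shift: int = 3) -> str:
    """Undo a Caesar cipher applied with a right shift of `shift`,
    leaving punctuation, digits, and spacing untouched."""
    decoded = []
    for ch in message:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            # Shift back, wrapping around the 26-letter alphabet.
            decoded.append(chr((ord(ch) - base - shift) % 26 + base))
        else:
            decoded.append(ch)
    return "".join(decoded)
```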

###Candidate Prompt###
Write a headline for this news.
###Reason###
The prompt does not provide any specific news details. It does not encourage the AI to ask for more information.
###Better Prompt Type###
[CONTENT_GENERATION][CLARIFICATION]
###Better Prompt###
Write a catchy and concise headline for this news piece: 'A team of scientists has just announced a breakthrough in renewable energy technology that doubles the efficiency of solar panels.' If you need more context, feel free to ask.

###Candidate Prompt###
Summarize this research article.
###Reason###
The prompt doesn't specify the level of detail required in the summary or the target audience. There's no room for the AI to ask clarification questions.
###Better Prompt Type###
[SUMMARIZATION][CONSTRAINTED_OUTPUT][CLARIFICATION]
###Better Prompt###
Summarize this research article about climate change in a way that a middle school student would understand. If you need more context or clarification, please ask.

###Candidate Prompt###
I'm a new Python programmer and I'm trying to write a function that accepts a list of numbers as an argument, finds the average, and returns a new list containing only numbers that are greater than this average. Can you write this function for me?
###Reason###
This prompt is already fairly strong, but it could benefit from the inclusion of an example showcasing the expected input and output. This would provide a reference for the assistant and reduce any ambiguity about the desired result.
###Better Prompt Type###
[CODE_OUTPUT]
###Better Prompt###
I'm a new Python programmer and I'm trying to write a function that accepts a list of numbers as an argument, finds the average, and returns a new list containing only numbers that are greater than this average. For instance, given the input [1, 2, 3, 4, 5], the average would be 3, and the function should return [4, 5]. Can you help me write this function?
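With the input/output example included, a model answer to this better prompt is essentially pinned down. A minimal sketch (the function name is a choice made here, not part of the prompt):

```python
def above_average(numbers: list) -> list:
    # Return a new list containing only values strictly greater than the mean.
    if not numbers:
        return []
    average = sum(numbers) / len(numbers)
    return [n for n in numbers if n > average]
```

Given `[1, 2, 3, 4, 5]`, the average is 3 and the function returns `[4, 5]`, matching the example in the prompt.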

###Candidate Prompt###
Translate the following English text to Spanish. Please maintain the context, tone, and emotional intent of the original text:\n\nText: [INPUT]
###Reason###
Although this prompt is well-structured, it does not specify whether formal or informal Spanish should be used. This distinction is important in Spanish and could significantly impact the translated message's tone.
###Better Prompt Type###
[TRANSLATION][CONSTRAINTED_OUTPUT]
###Better Prompt###
Translate the following English text to Spanish, using formal language. Please maintain the context, tone, and emotional intent of the original text:\n\nText: [INPUT]

###Candidate Prompt###
Here is a section from the Wikipedia article on the American Revolution:\n[INPUT]\nCould you parse this text into a JSON format highlighting the entities and their relationships?
###Reason###
The candidate prompt is vague and does not specify the structure of the desired JSON output, leaving room for many possible interpretations. The better prompt, on the other hand, gives a detailed explanation of how the task should be approached, provides clear instructions on the structure and content of the output, and specifies the exact fields required for each entity and relationship. This ensures that the output will be in the correct format and contain all the necessary information.
###Better Prompt Type###
[INFORMATION_EXTRACTION][FORMATTED_OUTPUT]
###Better Prompt###
Here is a section from the Wikipedia article on the American Revolution:\n[INPUT]\nYour task is to generate a JSON representation of the key entities and their relationships. The JSON object should include:\nAn "entities" field with an array of distinct entities. Each entity should be a JSON object with "name", "type" (person, place, event, etc.), and "description" (a brief summary of who/what they are based on the text) fields.\nA "relationships" field with an array of relationships between entities. Each relationship should be a JSON object with "source" and "target" fields (corresponding to the entities involved) and a "description" field (describing the relationship).\nYou should take care to accurately represent the details of each entity and the nuances of their relationships based on the provided text.

###Candidate Prompt###
We need a Python code that can work with a database. Consider that we have a database URL and some data. Use SQLAlchemy to create a table and add that data into it. [INPUT]
###Reason###
This prompt lacks clarity and specific instructions. It doesn't specify the database type, the structure of the data to be inserted, or the table schema. It doesn't provide guidance on error handling or whether to check if the table exists before attempting to create it. The better prompt provides detailed and specific instructions. It mentions the database type, how the data will be provided, and what the function should return. It guides on error handling and important practices like closing the session after operations. This prompt is more likely to generate the desired output from the model.
###Better Prompt Type###
[CODE_OUTPUT][CONSTRAINTED_OUTPUT]
###Better Prompt###
Imagine a scenario where we need to interact with a PostgreSQL database using SQLAlchemy in Python. We have a database URL, a table name, and a list of dictionaries where each dictionary represents a row of data. The keys in the dictionary are the column names. Write a Python script that establishes a connection with this PostgreSQL database using the provided URL, checks if the specified table exists, if not, creates it based on the keys of the dictionaries, then inserts the data into the table. Remember to include necessary error handling for database connection issues, and ensure the session is properly closed after operations. The function should return a success message if data is inserted successfully, else it should return the error message. [INPUT]

###Candidate Prompt###
Write Python code that can fetch details from a webpage. You should use BeautifulSoup and requests to achieve this. Your function should return the extracted data. [INPUT]
###Reason###
The prompt is vague and doesn't specify what details are to be fetched, from which webpage, and in what format the data should be returned. It doesn't guide on error handling or which part of the webpage the model should focus on. 
###Better Prompt Type###
[CONSTRAINTED_OUTPUT][CODE_OUTPUT]
###Better Prompt###
Assume you need to write a Python function that fetches product information from an e-commerce webpage. Specifically, we are interested in the product's name, price, and its customer rating. You should use the 'requests' library to fetch the HTML of the page, and 'BeautifulSoup' to parse it and extract the required details.\nYour function should take a product URL as an argument, handle any potential exceptions while making the request, and return a dictionary with keys as 'name', 'price', and 'rating', and their corresponding values as the extracted information.\nTake note that the product's name is in a tag with class 'product-name', price is in a tag with class 'product-price', and the rating is in a tag with class 'product-rating'. If any of these details are not found or in case of a request exception, the function should return an appropriate error message. [INPUT]
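The parsing core of an answer to this better prompt can be sketched as below. The sample HTML is invented here purely to mirror the class names the prompt specifies; a complete answer would wrap this in a `requests` call with exception handling, as the prompt instructs:

```python
from bs4 import BeautifulSoup

# Hypothetical page fragment using the class names from the prompt.
SAMPLE_HTML = """
<div>
  <h1 class="product-name">Desk Lamp</h1>
  <span class="product-price">$24.99</span>
  <span class="product-rating">4.5</span>
</div>
"""

def extract_product(html: str) -> dict:
    # Pull out the three fields the prompt names, or report what is missing.
    soup = BeautifulSoup(html, "html.parser")
    fields = {}
    for key, cls in (("name", "product-name"),
                     ("price", "product-price"),
                     ("rating", "product-rating")):
        tag = soup.find(class_=cls)
        if tag is None:
            return {"error": f"missing element with class '{cls}'"}
        fields[key] = tag.get_text(strip=True)
    return fields
```

On the sample fragment this returns `{'name': 'Desk Lamp', 'price': '$24.99', 'rating': '4.5'}`.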

###Candidate Prompt###
Write a Python function that can train a machine learning model.
###Reason###
This prompt is extremely vague. Machine learning encompasses a wide variety of techniques and algorithms, and without specific context or requirements, it's impossible for the model to guess the user's intent accurately.
###Better Prompt Type###
[CODE_OUTPUT][CONSTRAINTED_OUTPUT]
###Better Prompt###
Write a Python function that takes a pandas DataFrame and a target column name as input. The function should split the data into training and testing sets, use a Random Forest Classifier from sklearn to predict the target column, and return the accuracy of the model on the testing set. The function signature should be: `def train_model(data: pd.DataFrame, target: str) -> float:`.
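One plausible implementation of the requested signature is sketched below. The 80/20 split ratio and the fixed random states are illustrative choices, since the better prompt leaves them open:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def train_model(data: pd.DataFrame, target: str) -> float:
    # Separate features from the label column, fit a Random Forest,
    # and score it on the held-out test split.
    X = data.drop(columns=[target])
    y = data[target]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )
    clf = RandomForestClassifier(random_state=0)
    clf.fit(X_train, y_train)
    return accuracy_score(y_test, clf.predict(X_test))
```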

###Candidate Prompt###
Generate a Python function that processes text data.
###Reason###
This prompt is not specific enough. Text data can be processed in many different ways depending on the task at hand (e.g., tokenization, stemming, stop words removal, etc.).
###Better Prompt Type###
[CODE_OUTPUT][CONSTRAINTED_OUTPUT]
###Better Prompt###
Write a Python function using the nltk library that takes a string as input, tokenizes it into words, removes English stop words, and returns a list of the remaining words in their stemmed form. The function signature should be: `def process_text(text: str) -> List[str]:`.

###Candidate Prompt###
Given a text document represented as a string [INPUT], write a Python function using the transformers library that performs text summarization. The function signature should be: `def summarize_text(doc: str) -> str:`.
###Reason###
Although the candidate prompt provides the basic task requirements, it doesn't specify which model to use for summarization and doesn't give instructions about the `max_length` and `min_length` parameters for the summarization process. This could cause the GPT-based model to "guess" the user's preferences, which might not align with their actual needs. The better prompt, on the other hand, provides specific, clear, and complete instructions, allowing the GPT-based model to generate the expected output without any ambiguity or assumptions.
###Better Prompt Type###
[CODE_OUTPUT][CONSTRAINTED_OUTPUT]
###Better Prompt###
Given a text document represented as a string, write a Python function using the transformers library that uses the 'sshleifer/distilbart-cnn-12-6' model for abstractive summarization. The function should return a shorter version of the input document. The function signature should be: `def summarize_text(doc: str) -> str:`. The model should have `max_length` set to 200 and `min_length` set to 30.

###Candidate Prompt###
Given a pandas DataFrame [INPUT], write a Python function that trains a classifier from the sklearn library. The DataFrame has a 'target' column and other feature columns. The function signature should be: `def train_classifier(df: pd.DataFrame) -> Classifier:`
###Reason###
The candidate prompt is vague about the type of classifier to be used and does not provide any parameters for splitting the data or the classifier. This ambiguity might lead the GPT model to guess the user's intentions, which may not align with their actual needs. On the other hand, the better prompt gives specific, clear, and complete instructions. It specifies the classifier type, split ratio, random states, and parameters for the classifier, eliminating any room for guesswork and ensuring the GPT-based model generates the desired output.
###Better Prompt Type###
[CODE_OUTPUT][CONSTRAINTED_OUTPUT]
###Better Prompt###
Given a pandas DataFrame, write a Python function that trains a Random Forest Classifier from the sklearn library. The DataFrame will have a 'target' column for the labels, and the remaining columns are features. Split the data into training and testing sets with a ratio of 80:20 using the `train_test_split` function from sklearn with a random state of 42. Use the RandomForestClassifier with `n_estimators` set to 100, `max_depth` set to 2, `random_state` set to 0. The function signature should be: `def train_rf_classifier(df: pd.DataFrame) -> RandomForestClassifier:`
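Because this better prompt fully specifies the split ratio, random states, and classifier parameters, the expected answer is essentially determined. A sketch that follows those constraints directly:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_rf_classifier(df: pd.DataFrame) -> RandomForestClassifier:
    # 80:20 split with random_state=42; classifier parameters as specified.
    X = df.drop(columns=["target"])
    y = df["target"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )
    clf = RandomForestClassifier(n_estimators=100, max_depth=2, random_state=0)
    clf.fit(X_train, y_train)
    return clf
```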

###Candidate Prompt###
Given a SQL database with a table 'orders'. Write a SQL query to calculate the total orders. The SQL query structure should be:\nSELECT [INPUT] FROM [INPUT] WHERE [INPUT]
###Reason###
The candidate prompt lacks specific details about what 'total orders' mean - it's unclear whether it's the total number of orders across all customers, or per customer. It also does not specify the time frame for these orders. The lack of column names to be used in the query may lead the GPT model to guess the user's intentions inaccurately. In contrast, the better prompt provides explicit information about the 'orders' table, the columns to be used, the condition for the 'WHERE' clause, and the requirement to group the results by 'customer_id'. This detail leaves no room for ambiguity and allows the GPT-based model to generate the precise SQL query the user requires.
###Better Prompt Type###
[CODE_OUTPUT][CONSTRAINTED_OUTPUT]
###Better Prompt###
Given a SQL database that contains a table named 'orders' with columns 'order_id', 'customer_id', 'product_id', and 'order_date'. Write a SQL query that will return the total number of orders each customer made in the year 2022. The result should have two columns: 'customer_id' and 'total_orders'. The SQL query structure should be:\nSELECT [INPUT] FROM [INPUT] WHERE [INPUT]\nRemember, you need to group the orders by 'customer_id' and filter the orders made in the year 2022 only.
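The query this better prompt pins down can be checked against a toy table. The schema below follows the prompt; the sample rows are invented for illustration, and `strftime` is the SQLite way to extract the year (other engines use `EXTRACT` or `YEAR()`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, "
    "product_id INTEGER, order_date TEXT)"
)
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?, ?)",
    [
        (1, 10, 100, "2022-03-01"),
        (2, 10, 101, "2022-07-15"),
        (3, 11, 100, "2021-12-30"),
        (4, 11, 102, "2022-01-02"),
    ],
)
# Total 2022 orders per customer, grouped as the prompt requires.
query = """
    SELECT customer_id, COUNT(order_id) AS total_orders
    FROM orders
    WHERE strftime('%Y', order_date) = '2022'
    GROUP BY customer_id
"""
rows = conn.execute(query).fetchall()
```

With the sample rows above, customer 10 has two 2022 orders and customer 11 has one; the 2021 order is filtered out.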

###Candidate Prompt###
Assume you are using PyTorch for a classification task. Given a set of data in the format '[INPUT]-[OUTPUT]', write a script to train the model.
###Reason###
The candidate prompt is vague and lacks critical details for the task. It doesn't specify the type of classification task, which could be binary, multi-class, or multi-label. It also fails to mention what model is to be used, in this case, BERT. The details about the specific steps of a training loop (loading data, sending data to device, forward pass, calculating loss, backpropagation, zeroing gradients, updating model parameters) are also missing. The better prompt provides all these details, thus guiding the GPT-based model to provide a precise and accurate solution for training a binary sentiment analysis classifier using BERT in PyTorch.
###Better Prompt Type###
[CODE_OUTPUT][CONSTRAINTED_OUTPUT]
###Better Prompt###
Assume you are training a binary sentiment analysis classifier using the BERT model in PyTorch. The task is to classify whether a given text input is 'positive' or 'negative'. Given a set of training texts and corresponding labels in the format '[INPUT]-[OUTPUT]', create a PyTorch training loop that loads data in batches, sends data to device, executes a forward pass, calculates loss, performs backpropagation, and updates model parameters. Don't forget to zero the gradients before each backpropagation step.

###Candidate Prompt###
You're supposed to write a program in Python with the NetworkX library. You should create a graph and add a specified number of nodes and edges to it. Then calculate the degree of each node and identify the node that has the highest degree. Also, include a plotting feature in the end. Use the placeholders '[NUMBER_OF_NODES]', '[NUMBER_OF_EDGES]' for the number of nodes and edges respectively.
###Reason###
The candidate prompt, although containing similar instructions to the good one, lacks a clear sequence and structure for the task. Also, it doesn't provide specific instructions like ensuring random distribution of nodes and edges, highlighting the node with the highest degree during the plotting phase, and explicitly instructing to print the node with the highest degree. On the other hand, the better prompt breaks down the task into detailed and sequential steps, making it clearer for the GPT model to understand and generate the expected output. It also provides more explicit instructions for each step, resulting in a more precise output.
###Better Prompt Type###
[CODE_OUTPUT]
###Better Prompt###
Write a Python program using the NetworkX library to perform the following tasks in sequence:\nCreate an empty graph.\nAdd '[NUMBER_OF_NODES]' nodes to the graph.\nConnect the nodes by adding '[NUMBER_OF_EDGES]' edges. Make sure the nodes and edges are randomly distributed.\nFor each node in the graph, calculate its degree.\nIdentify and print the node with the highest degree.\nPlot the graph using matplotlib, labeling the node with the highest degree in a different color.\nPlease replace '[NUMBER_OF_NODES]' and '[NUMBER_OF_EDGES]' with the actual number of nodes and edges respectively.
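The degree bookkeeping at the heart of steps 4 and 5 can be sketched without NetworkX at all. The version below is a dependency-free illustration; a full answer to the prompt would use `nx.gnm_random_graph` for the random graph and `nx.draw` for the plotting step:

```python
import random

def busiest_node(num_nodes: int, num_edges: int, seed: int = 0) -> int:
    # Build a random simple graph as a set of undirected edges.
    rng = random.Random(seed)
    edges = set()
    while len(edges) < num_edges:
        u, v = rng.sample(range(num_nodes), 2)
        edges.add((min(u, v), max(u, v)))
    # Count each node's degree, then return the node with the highest one.
    degree = {n: 0 for n in range(num_nodes)}
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return max(degree, key=degree.get)
```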

###Candidate Prompt###
I want you to process a chunk of a long document and generate a flow graph. Also, I might provide you with a partial graph that's already been generated. You need to connect your new subgraph with this existing graph. The document chunk and the partial graph are given as '[INPUT]'. Please figure out a way to use these inputs to generate the expected output.
###Reason###
The candidate prompt vaguely tells the GPT model to 'figure out a way' to use the inputs, which may lead to the model guessing and possibly failing to make the right connections. It doesn't specify the format of the input or how to handle the first chunk of the document, which might not have an accompanying graph. The better prompt, however, gives a detailed procedure on how to handle the input, what to do when there is no accompanying graph (first chunk), and how to handle successive chunks. It ensures the model understands that it needs to consider the context provided by the previous graph. This clarity and structure reduce the model's need to guess and increases the chances of getting the expected output.
###Better Prompt Type###
[INFORMATION_EXTRACTION][FORMATTED_OUTPUT][CONTEXT][GRAPH][ANALYSIS]
###Better Prompt###
Your task is to process a chunk of a long document, generate a flow graph from it, and then connect it with the existing graph from the previous chunks. The '[INPUT]' placeholder will have two parts: the first part is a chunk of the document, and the second part is a JSON format representation of the graph generated from previous chunks. If no graph is given, assume this is the first chunk and create a new graph. If a graph is given, study it to understand the previous context, then add to it by processing the new chunk of the document. Your output should be a JSON representation of the updated graph.
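The branching this better prompt describes, starting a new graph on the first chunk and extending the prior state otherwise, can be sketched as follows. The exact JSON layout (`nodes`/`edges` keys, `id` fields) is an illustrative choice, not something the prompt fixes:

```python
import json

def update_graph(existing_json, new_nodes, new_edges):
    # Start fresh when no prior state is supplied; otherwise extend it.
    if existing_json:
        graph = json.loads(existing_json)
    else:
        graph = {"nodes": [], "edges": []}
    # Link against existing nodes instead of duplicating them.
    known_ids = {node["id"] for node in graph["nodes"]}
    graph["nodes"].extend(n for n in new_nodes if n["id"] not in known_ids)
    graph["edges"].extend(new_edges)
    return json.dumps(graph)
```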

###Candidate Prompt###
You will receive a part of a document and potentially a part of an existing knowledge graph. Your goal is to extend the existing graph or start a new one if there's none. Use the document chunk to extract key pieces of information and incorporate them into the knowledge graph. Your input is denoted as '[INPUT]', process it accordingly.
###Reason###
The candidate prompt does not specify that the model needs to understand the existing hierarchy and relations in the graph, nor how to position the newly extracted information. This can lead the model to guess or make errors in placing the new information in the graph. The better prompt is more detailed and instructive: it specifies that the model has to understand the hierarchy and relations in the existing graph before adding new information, and it further instructs the model on how to handle the new information, either by finding its position in the existing hierarchy or by adding a new level. This level of detail decreases the chances of the model making incorrect guesses and increases the likelihood of obtaining the desired output.
###Better Prompt Type###
[INFORMATION_EXTRACTION][GRAPH][CONTEXT]
###Better Prompt###
The task involves processing sections of a large document, extracting critical information, and integrating it into a hierarchical knowledge graph. The '[INPUT]' will have two components: the first one is a document chunk, and the second one is a partial knowledge graph from previous chunks (in JSON format). If there's no graph given, it means this is the first chunk and you should start building a new knowledge graph. If a graph is provided, first, understand the hierarchy and relations in the existing graph. Then, extract critical information from the new chunk and find their positions in the existing hierarchy or add a new level if necessary. Your output should be an updated knowledge graph in JSON format.

###Candidate Prompt###
Here, you are presented with a task to read through a segment of a large document and an existing partial knowledge graph. The objective is to continue building the knowledge graph by adding more nodes and edges based on the information obtained from the new segment. For each segment, the knowledge graph should be expanded accordingly. The input is '[INPUT]' which contains the document segment and the existing graph. Extend the existing graph or create a new one as needed.
###Reason###
The candidate prompt, although detailed, doesn't provide enough information about how to process the existing graph, the way new entities are to be integrated into it, or the expected output format. This vagueness may lead the AI model to guess how to perform the task, leading to incorrect or inconsistent outputs. The better prompt clearly outlines all the necessary steps and expectations. It explicitly asks the model to understand the existing graph's hierarchy and relationships before integrating new information. It also sets clear instructions for handling new information from the document chunk. The formatting of the input is made clear with '###Example###', '###Graph###' separators, making it easier for the model to understand and process the inputs. This level of detail and specificity reduces the amount of guesswork the model has to do, leading to more accurate and consistent outputs.
###Better Prompt Type###
[INFORMATION_EXTRACTION][GRAPH][CONTEXT][ANALYSIS][FORMATTED_OUTPUT]
###Better Prompt###
This task involves processing a large document in chunks and gradually building a hierarchical and complex knowledge graph from it. The chunks of the document and the existing graph will be provided to you in the format '###Example### Document Chunk: [DOCUMENT_CHUNK] ###Graph### Existing Graph: [EXISTING_GRAPH]'.\nThe '[DOCUMENT_CHUNK]' is the portion of the document you are to process in this instance, and '[EXISTING_GRAPH]' is the graph that has been built from previous chunks (it will be provided in JSON format).\nIf '[EXISTING_GRAPH]' is 'None', it implies that this is the first chunk of the document, and you are to start a new graph from scratch. If a partial graph is given, you should first analyze and comprehend the hierarchy and relations in the graph. Following that, extract important entities, their relationships, and information from the document chunk.\nCarefully determine where these new entities fit into the existing graph hierarchy, create new levels if necessary, and ensure that the relationships are properly represented. Your output should be the updated knowledge graph in JSON format. Ensure that the new nodes and edges added are logically correct and consistent with the existing graph.\nRemember, the goal is to make the graph a representative of the complete document, with nodes for major topics or entities, and edges signifying their interconnections and hierarchy.

###Candidate Prompt###
You need to create a graph from a narrative. This narrative has a sequence of steps to achieve an objective.\nFirst, go through the narrative and make nodes for each step. Don't create duplicate nodes.\nNodes could be 'process', 'decision', or 'document'. Create nodes like `[id, type, title, description, emoji, named-entity]`.\nNext, make edges. Edges go from one node to another. Just follow the sequence of the steps. Make edges like [from_node_id, to_node_id].\nIf the narrative has multiple starting points or end points, just pick one to be the initial or final node.\nFinally, make a new graph. This should have all the old nodes and edges and the new ones you added.
###Reason###
Format Specification: The candidate prompt doesn't specify the format of the narrative or existing graph, leading to ambiguity. The better prompt clearly outlines the expected input format, reducing potential confusion for the model.\nDuplicate Nodes: The candidate prompt doesn't provide sufficient instruction on handling duplicate nodes. The better prompt clarifies this, ensuring the model doesn't create unnecessary duplicates or omit important nodes.\nHandling Multiple Starting/Ending Points: The candidate prompt lacks instruction on how to handle narratives with multiple starting or ending points, risking arbitrary and incorrect graph construction. The better prompt clearly guides the model to ensure the final graph always has one initial and one final node, maintaining the logical flow of information.\nPartial Graphs: The candidate prompt fails to mention the possibility of partial graphs and the associated reasoning required. The better prompt provides explicit instructions to guide the model in handling and updating partial graphs coherently.\nGraph Compilation: Lastly, the instructions on compiling the updated graph state are vague in the candidate prompt, potentially leading to omissions. The better prompt emphasizes the inclusion of all nodes and edges, both existing and new, to create a complete, sequential graph.
###Better Prompt Type###
[GRAPH][INFORMATION_EXTRACTION][CONTEXT][CONSTRAINTED_OUTPUT][ANALYSIS]
###Better Prompt###
This task involves interpreting a narrative to update an existing graph. The narrative, provided as `[NARRATIVE_TEXT]`, is a sequence of steps or activities to achieve a particular objective. An existing graph state is also provided in JSON format as `[EXISTING_GRAPH_IN_JSON]`.\nHere's how to proceed:\nNode Creation: Go through `[NARRATIVE_TEXT]` step by step, creating nodes for each distinct step without duplicates. If a step appears to be a duplicate of an existing node in `[EXISTING_GRAPH_IN_JSON]`, refer to the existing node instead of creating a new one.\nClassify nodes as 'process', 'decision', or 'document' according to the step's nature and structure them as follows: `[id, type, title, description, emoji, named-entity]`.\nEdge Creation: Next, create edges connecting these nodes. Edges are directional, represented as `[from_node_id, to_node_id]`, and should be set up following the sequential order of steps in the narrative.\nHandling Multiple Starting/Ending Points: The narrative may suggest multiple initial or final nodes. Regardless, ensure that the overall graph has a singular initial and final node. If the narrative suggests multiple starting points, create a single dummy initial node and create outgoing edges from this node to each of the original initial nodes. Similarly, if there are multiple end points, create a single dummy final node and connect each end point to this final node.\nPartial Graphs: Since you are working with potentially partial graphs, carefully reason about connections. Understand that the existing graph might not be complete, and the nodes and edges you add should logically continue from where the existing graph left off, preserving the narrative's sequence.\nFinal Graph Compilation: Lastly, generate the updated graph state. This should include all nodes and edges from `[EXISTING_GRAPH_IN_JSON]` and the newly added ones, maintaining the structure and sequential order of the narrative.
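The single-entry/single-exit rule in the "Handling Multiple Starting/Ending Points" step is the subtlest part of this better prompt. Its mechanics can be sketched on bare id lists; the `"start"`/`"end"` dummy ids are placeholders chosen here for illustration:

```python
def normalize_endpoints(node_ids, edges):
    # edges: list of [from_id, to_id]; returns possibly-extended copies.
    sources = {s for s, _ in edges}
    targets = {t for _, t in edges}
    initials = [n for n in node_ids if n not in targets]  # no incoming edge
    finals = [n for n in node_ids if n not in sources]    # no outgoing edge
    node_ids, edges = list(node_ids), list(edges)
    if len(initials) > 1:  # funnel all entry points through one dummy node
        node_ids.append("start")
        edges = [["start", n] for n in initials] + edges
    if len(finals) > 1:  # likewise funnel all exit points
        node_ids.append("end")
        edges = edges + [[n, "end"] for n in finals]
    return node_ids, edges
```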

###Candidate Prompt###
The task here is to develop a code for a Computer Vision model that will take an image as input and then identify the objects in it. Your input will be the image in some form and the output should be a list of objects that your model identifies.\nThe format is:[FORMAT]\nTry to generate as much detail as possible in the output.
###Reason###
Lack of Specificity: The candidate prompt is not specific about the type of image input (e.g., file path, URL, numpy array) or the format of the output. It leaves the model to guess, which may not be accurate. In contrast, the better prompt specifies that the image will be a numpy array and clearly defines the expected output format.\nMissing Details: The candidate prompt does not mention the use of pre-trained models, which might lead to unnecessary confusion or complexity. The better prompt explicitly states that a pre-trained model should be used.\nFunction Signature: The better prompt provides a detailed function signature with parameter and return types, while the candidate prompt does not.\nLack of Guidance: The candidate prompt provides no guidance on how to handle object detection specifics such as bounding box coordinates and confidence scores. The better prompt, on the other hand, guides the model to return these details.
###Better Prompt Type###
[CODE_OUTPUT][CONSTRAINTED_OUTPUT]
###Better Prompt###
You are tasked with writing a function in Python that uses a pre-trained Computer Vision model (such as YOLO, Faster R-CNN, SSD, etc.) to detect objects in an image. The function, `object_detector`, will receive an image in the form of a numpy array `[IMAGE_ARRAY]`.\nHere's the expected function signature:\n[FORMAT]\nThe function should return a list of tuples, where each tuple represents an object detected. Each tuple should contain:\nThe class of the object detected (e.g., 'dog', 'person').\nThe confidence score of the detection.\nThe bounding box coordinates of the object in the format `(x_min, y_min, x_max, y_max)`.\nPlease note that this is a Python coding task. You are not expected to train a Computer Vision model but to use a pre-trained model.

###Candidate Prompt###
Build a neural network model that can predict the next value in a time series. The model should take a sequence of previous observations as input and output the next expected value. You should include layers in your network that can remember the previous values in the time series. Do not forget to split your data into training and testing sets before feeding it into the model.
###Reason###
Lack of Specificity: The candidate prompt is not specific about the type of data input (e.g., file path, pandas DataFrame, numpy array) or the format of the output. It leaves the model to guess, which may not be accurate. In contrast, the better prompt specifies that the time series data will be a numpy array and clearly defines the expected output format.\nMissing Details: The candidate prompt does not mention how to split the data into training and testing sets, which might lead to unnecessary confusion or complexity. The better prompt explicitly states how to do this.\nFunction Signature: The better prompt provides a detailed function signature with parameter and return types, while the candidate prompt does not.\nLack of Guidance: The candidate prompt provides no guidance on what kind of neural network to use. The better prompt, on the other hand, guides the model to use a Recurrent Neural Network with LSTM layers.
###Better Prompt Type###
[CODE_OUTPUT][CONSTRAINTED_OUTPUT]
###Better Prompt###
You need to develop a Python function that builds and trains a Recurrent Neural Network (RNN) using TensorFlow/Keras for time series forecasting. The function, `train_rnn_model`, will receive a univariate time series data as a numpy array `[TIME_SERIES_ARRAY]` and the number of previous time steps to consider as input `[N_STEPS]`.\nHere's the expected function signature:\n[FORMAT]\nThe function should build an RNN model with LSTM layers. The model should take in a sequence of `[N_STEPS]` previous time steps and output the forecasted value for the next time step. You should also split the `[TIME_SERIES_ARRAY]` into a training set (first 80% of the data) and a testing set (last 20% of the data) before training the model. The function should return the trained model and the training history.
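Independent of the TensorFlow/Keras parts, the windowing and the 80/20 chronological split this better prompt specifies can be sketched with plain lists (illustrative shapes only; a full answer would convert these to numpy arrays and feed them to the LSTM model):

```python
def make_windows(series, n_steps):
    # Turn [x0, x1, ...] into (inputs of length n_steps, next-value targets).
    X = [series[i:i + n_steps] for i in range(len(series) - n_steps)]
    y = [series[i + n_steps] for i in range(len(series) - n_steps)]
    return X, y

def split_80_20(values):
    # First 80% for training, last 20% for testing (order preserved).
    cut = int(len(values) * 0.8)
    return values[:cut], values[cut:]
```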

###Candidate Prompt###
We have a system of linear equations. For example,\n[INPUT]The task is to solve these equations and find the values of x and y. Make sure you handle the possibility of no solutions or infinite solutions as well.
###Reason###
The candidate prompt does not fix the input or output formats, which can lead to inconsistent results, and it leaves the handling of the no-solution and infinite-solution edge cases underspecified. The better prompt is explicit about both formats, clearly instructs the model on how to handle cases of no solutions or infinite solutions, and its few-shot example gives the model a clear picture of what the output should look like, helping it to better understand the task.
###Better Prompt Type###
[MATHEMATICAL_REASONING][FORMATTED_OUTPUT][CONSTRAINTED_OUTPUT]
###Better Prompt###
I have a system of linear equations, and I want you to solve it. Each equation will be in the format "ax + by = c", and there may be multiple equations. Please provide the solutions in the format of a dictionary, with each variable as the key and its solution as the value. If there are no solutions, output "No solutions". If there are infinite solutions, output "Infinite solutions". The answer should be prefixed with "The answer is:". For example:\n###INPUT###\n[INPUT]\n###OUTPUT###\n[OUTPUT]\nNow, please solve the following system:\n[INPUT]
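A reference implementation for the two-variable case shows the output contract this better prompt pins down, including both edge cases. Cramer's rule is used here for brevity; a model answer could generalize to more variables:

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    # Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule.
    det = a1 * b2 - a2 * b1
    if det == 0:
        # Degenerate system: consistent (dependent) or inconsistent.
        if a1 * c2 == a2 * c1 and b1 * c2 == b2 * c1:
            return "Infinite solutions"
        return "No solutions"
    return {"x": (c1 * b2 - c2 * b1) / det, "y": (a1 * c2 - a2 * c1) / det}
```

For x + y = 3 and x - y = 1, this yields `{'x': 2.0, 'y': 1.0}`, matching the dictionary format the prompt requires.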

###Candidate Prompt###
Calculate the amount of paint needed to cover the entire surface of a cylinder with a height of 5 meters and a radius of 2 meters if a liter of paint covers 10 square meters.
###Reason###
The candidate prompt, in this case, involves the model attempting to solve the entire problem at once. While this may work for simpler tasks, in complex scenarios it might lead to less accurate or nonsensical outputs. Breaking the problem down would make the reasoning more explicit and easier for the model to handle. The 'Chain of Thought' approach is better in this context because it makes the computation more manageable by breaking it down into simpler steps. This approach can be particularly useful for complex mathematical problems as it allows the model to focus on one part of the problem at a time. It also provides a clear and organized structure, making it easier for the user to follow the model's reasoning process.
###Better Prompt Type###
[MULTI_TURN_PROMPT][MATHEMATICAL_REASONING]
###Better Prompt###
Prompt 1: Find the surface area of a cylinder with a height of 5 meters and a radius of 2 meters.\n[OUTPUT]\nPrompt 2: Given that a liter of paint covers 10 square meters, calculate how many liters of paint are needed to cover a surface area of [INPUT] square meters. Print the output in the format "The answer is:"
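The two prompts decompose into arithmetic a verifier can check directly. Assuming the paint must cover both circular ends as well as the side, and rounding up to whole liters:

```python
import math

radius, height = 2.0, 5.0   # meters
coverage = 10.0             # square meters covered per liter

# Prompt 1: total surface area = 2πr² (two ends) + 2πrh (side) = 28π ≈ 87.96 m²
surface_area = 2 * math.pi * radius**2 + 2 * math.pi * radius * height

# Prompt 2: liters needed, rounded up (an assumption; the prompt doesn't say)
liters = math.ceil(surface_area / coverage)
```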

###Candidate Prompt###
Create a basic login form with two fields, 'Email' and 'Password', and a 'Submit' button. The 'Email' field should have a placeholder text saying 'Enter Email', the 'Password' field should have a placeholder text saying 'Enter Password'. The form should be centered in the middle of the page, and should have a background color of #f3f3f3, the fields should be 80% of the width of the form and the 'Submit' button should be green.
###Reason###
This is a candidate prompt because it's expecting the model to generate HTML/CSS code for the whole form in a single go. This could lead to less organized code, and it might be difficult for the model to handle all the requirements at once. Breaking the problem down can make the process more manageable and the code easier to understand. By dividing the problem into two parts, we make it easier for the model to tackle each part effectively. First, it focuses on generating the HTML structure of the form, and then it applies CSS to style it. This approach makes it easier to debug and understand the code, and it also makes the problem more manageable for the model.
###Better Prompt Type###
[CODE_OUTPUT][CONSTRAINTED_OUTPUT][MULTI_TURN_PROMPT]
###Better Prompt###
Prompt 1: Create a basic HTML structure for a login form with fields for 'Email' and 'Password', and a 'Submit' button. The 'Email' field should have a placeholder text saying 'Enter Email', and the 'Password' field should have a placeholder text saying 'Enter Password'.\n[OUTPUT] Prompt 2: Apply CSS to the given HTML form [OUTPUT] to style it as per the following requirements: the form should be centered in the middle of the page, have a background color of #f3f3f3, the fields should be 80% of the width of the form, and the 'Submit' button should be green.

###Candidate Prompt###
Design and implement a web-based flight reservation system. The system should allow users to search for available flights based on criteria such as departure city, destination city, and date. Users should be able to select a flight, provide passenger details, and make a reservation. The system should handle seat availability, seat assignments, and payment processing. Additionally, include features such as user authentication, booking history, and email notifications.
###Reason###
The prompt presents a complex real-world programming task involving web development, database management, and user interaction. It ensures that the model understands the specific requirements, functionality, and features needed to implement a comprehensive flight reservation system.
###Better Prompt Type###
[CODE_OUTPUT][CONSTRAINTED_OUTPUT]
###Better Prompt###
Design and implement a web-based flight reservation system using the Django framework. The system should include the following features:\nUser registration and authentication to secure user accounts.\nA flight search functionality based on departure city, destination city, and date.\nDisplay of available flights with seat availability and seat assignments.\nPassenger details collection and booking confirmation process.\nIntegration with a payment gateway for secure payment processing.\nUser dashboard showing booking history and the ability to cancel or modify reservations.\nAutomated email notifications for booking confirmation and updates.\nMake sure to design a user-friendly interface, handle potential errors or exceptions, and ensure proper data validation and security measures.

###Candidate Prompt###
Solve the following word problem step by step: A car travels at a speed of 60 miles per hour. How far will it travel in 2.5 hours?
###Reason###
By using chain of thoughts prompting, we can guide the model to break down the word problem into logical steps and provide a step-by-step solution.
###Better Prompt Type###
[MATHEMATICAL_REASONING]
###Better Prompt###
Solve the following word problem step by step: A car travels at a speed of 60 miles per hour. How far will it travel in 2.5 hours? Start by applying the formula Distance = Speed × Time, multiplying the speed (60 mph) by the time (2.5 hours) to find the total distance. Finally, present the output in the format "The final answer is:"
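The expected reasoning chain reduces to a single multiplication, which makes this prompt easy to verify automatically:

```python
speed_mph = 60
hours = 2.5

# Distance = Speed × Time
distance = speed_mph * hours
print(f"The final answer is: {distance} miles")
```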

###Candidate Prompt###
Build a web application using Flask that allows users to register, log in, and create posts.
###Reason###
The prompt describes a complex web application but lacks specific instructions on how to approach each functionality. Breaking it down into smaller steps can help clarify the implementation process and make it more manageable.
###Better Prompt Type###
[CODE_OUTPUT]
###Better Prompt###
Build a web application using Flask with the following features:\nStep 1: Implement user registration functionality, including a registration form and storing user information in a database.\nStep 2: Develop user login functionality, including a login form and authentication using a password hash.\nStep 3: Create a post creation feature, allowing registered users to create new posts and store them in the database.\nStep 4: Design appropriate routes and templates for each feature and ensure proper error handling throughout the application.
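Step 2's "authentication using a password hash" is the piece most often done wrong. As an illustration of the expected behavior, here is a sketch using only the standard library (a real Flask app would more likely use `werkzeug.security` helpers; this keeps the example dependency-free):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """PBKDF2 password hashing for Step 2; returns (salt, digest)."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Constant-time comparison against the stored digest."""
    return hmac.compare_digest(hash_password(password, salt)[1], digest)
```

In the actual application you would store the salt and digest per user at registration and call the verify function inside the login route.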

###Candidate Prompt###
Develop a virtual financial assistant powered by generative AI that can provide personalized financial planning and investment advice. The virtual assistant should engage in natural language conversations with users, gather information about their financial goals, risk tolerance, income, expenses, and investment preferences. Based on this information, the virtual assistant should generate customized financial plans that include budgeting strategies, investment portfolios, retirement plans, and savings recommendations. The virtual assistant should also consider market trends, economic indicators, and risk assessment models to provide informed advice. Additionally, the virtual assistant should monitor and track the user's progress, provide periodic updates, and adjust the financial plans as needed.
###Reason###
The prompt describes a complex generative AI use case involving a virtual financial assistant for personalized financial planning and investment advice. The task requires gathering detailed user information, generating comprehensive financial plans, considering various factors, maintaining regulatory compliance, ensuring data privacy, and continuously adapting to changing circumstances. The extended prompt provides a clearer understanding of the scope and requirements of the virtual financial assistant.
###Better Prompt Type###
[CONTENT_GENERATION]
###Better Prompt###
Develop a virtual financial assistant powered by generative AI that can provide personalized financial planning and investment advice. The virtual assistant should engage in natural language conversations with users, gathering detailed information about their financial goals, risk tolerance, income, expenses, debt, and investment preferences. Based on this information, the virtual assistant should generate comprehensive financial plans that include budgeting strategies, investment portfolios, retirement plans, tax optimization strategies, and savings recommendations. The virtual assistant should consider market trends, economic indicators, historical data, and risk assessment models to provide informed advice. It should also incorporate regulatory guidelines and compliance standards to ensure recommendations align with legal and ethical requirements. Additionally, the virtual assistant should continuously monitor and track the user's financial progress, provide periodic updates and notifications, and adapt the financial plans as necessary based on changing circumstances. The virtual assistant should prioritize user privacy and data security, maintaining strict confidentiality of sensitive financial information throughout the interactions.

###Candidate Prompt###
Generate the sales report for the sales data spreadsheet using MS Excel functions.
###Reason###
The candidate prompt lacks the specific instructions and details necessary to generate a meaningful sales report: it doesn't mention which MS Excel functions to use or what calculations are needed, leaving the model guessing about the desired output and risking incomplete or incorrect responses. The better prompt is self-sufficient and specific: it names the MS Excel functions to use, walks through the calculations for total and average revenue, and includes the instruction to create a pivot table. It leaves no room for guessing and ensures the model produces a complete and accurate sales report.
###Better Prompt Type###
[CODE_OUTPUT]
###Better Prompt###
Calculate the total revenue for each product in the sales data spreadsheet by multiplying the "Units Sold" column with the "Price per Unit" column. Use the SUM function to calculate the total revenue. Next, calculate the average revenue per product by dividing the total revenue by the number of products. Finally, create a pivot table that displays the revenue by product category.
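To make the three spreadsheet steps concrete, here is the same logic mirrored in plain Python over a hypothetical handful of sales rows (in Excel these would be a helper column, SUM/AVERAGE, and a pivot table):

```python
rows = [  # hypothetical sales rows: (product, category, units_sold, price_per_unit)
    ("Pen", "Stationery", 100, 1.5),
    ("Notebook", "Stationery", 40, 3.0),
    ("Mug", "Kitchen", 20, 5.0),
]

# Step 1: revenue per product ("Units Sold" × "Price per Unit")
revenue = {product: units * price for product, _, units, price in rows}

# Step 2: average revenue per product
total = sum(revenue.values())
average = total / len(revenue)

# Step 3: the "pivot table" — revenue aggregated by product category
by_category = {}
for product, category, units, price in rows:
    by_category[category] = by_category.get(category, 0) + units * price
```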

###Candidate Prompt###
Use VLOOKUP in the sales data spreadsheet.
###Reason###
The better prompt provides clear instructions on how to use the VLOOKUP function in MS Excel. It specifies the purpose of the function (retrieving product names) and explains how to match the values in two different tables using a common identifier (product ID). The prompt also includes the desired output format, which is adding the product names in a new column. By following this prompt, the model can generate the necessary formula and populate the "Product Name" column accurately.
###Better Prompt Type###
[CODE_OUTPUT][CONTENT_GENERATION]
###Better Prompt###
In the sales data spreadsheet, use the VLOOKUP function to retrieve the product name from the "Product ID" column in the "Product Lookup" table. Match the product IDs in the "Sales" table with the corresponding product names in the "Product Lookup" table. Include the retrieved product names in a new column named "Product Name" in the "Sales" table.

###Candidate Prompt###
Retrieve the corresponding product information based on the unique product identifier from the "Product ID" column in the "Product Lookup" table and incorporate it into the "Sales" table.
###Reason###
The better prompt provides specific instructions and explicitly mentions the use of the VLOOKUP function. It clarifies the purpose of the function (matching and retrieving product names) and provides step-by-step guidance on how to accomplish the task. By following this prompt, the model can generate the necessary formula using VLOOKUP to retrieve and populate the product names accurately.
###Better Prompt Type###
[CODE_OUTPUT][CONTENT_GENERATION]
###Better Prompt###
Retrieve the product name from the "Product Lookup" table based on the unique product identifier in the "Product ID" column of the "Sales" table. Use the VLOOKUP function to match the product IDs in both tables and populate a new column named "Product Name" in the "Sales" table with the corresponding product names from the "Product Lookup" table.
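The Excel-side answer would be an exact-match lookup such as `=VLOOKUP(A2, lookup_range, 2, FALSE)` filled down the new column. Its semantics, sketched in Python with hypothetical data, amount to a dictionary lookup:

```python
product_lookup = {  # hypothetical "Product Lookup" table: Product ID → Product Name
    "P001": "Stapler",
    "P002": "Desk Lamp",
}
sales = [  # hypothetical "Sales" rows keyed by Product ID
    {"Product ID": "P001", "Units Sold": 3},
    {"Product ID": "P002", "Units Sold": 1},
]

# VLOOKUP equivalent: populate a new "Product Name" column via exact-match lookup
for row in sales:
    row["Product Name"] = product_lookup.get(row["Product ID"], "#N/A")
```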

###Candidate Prompt###
Send a phishing email to employees to assess their security awareness.
###Reason###
The better prompt clearly defines the scenario, role, and objective of the social engineering exercise. It emphasizes the need for authorization and responsible conduct. The prompt includes specific instructions on crafting a simulated phishing email and creating a landing page to test employees' security awareness. It also emphasizes the importance of educating employees about the purpose and implications of the exercise, ensuring it is conducted in a controlled environment. The candidate prompt is vague and lacks specific instructions and context. It does not provide any guidance on the proper conduct or authorization required for the social engineering exercise. The prompt leaves the model guessing about the purpose, scope, and responsible execution of the attack. As a result, the generated response may be incomplete, inappropriate, or unethical.
###Better Prompt Type###
[CONTENT_GENERATION][CONSTRAINTED_OUTPUT]
###Better Prompt###
As an authorized security consultant, send a simulated phishing email to company employees to assess their security awareness. Craft the email to appear as a legitimate communication from the IT department. The email should contain a link that, when clicked, directs employees to a landing page where they are asked to enter their credentials. Emphasize the importance of not sharing sensitive information and provide clear instructions to report suspicious emails. Ensure that this exercise is conducted within a controlled environment and that employees are educated about the purpose and implications of the test.

###Candidate Prompt###
We've just trained a multi-class classification model and it returned the following class probabilities for a sample: [0.1, 0.2, 0.7]. Can you figure out which class has been predicted by our model? It's important to remember that we've used a softmax activation in the output layer, so the class probabilities sum up to 1. Also, keep in mind that the model's prediction corresponds to the class with the highest probability.
###Reason###
While the details provided in the prompt might be accurate and important, they could potentially confuse the AI model due to their verbosity and complexity. The model might lose focus on the main task, which is simply to identify the class with the highest probability.
###Better Prompt Type###
[MATHEMATICAL_REASONING]
###Better Prompt###
Given the class probabilities [0.1, 0.2, 0.7], identify the predicted class. The output should be given in the following format: "The answer is:"
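The expected computation is a single argmax, which is easy to state and verify:

```python
probs = [0.1, 0.2, 0.7]

# The predicted class is the index of the highest probability
predicted_class = max(range(len(probs)), key=probs.__getitem__)
print(f"The answer is: class {predicted_class}")
```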

###Candidate Prompt###
We need to work on an e-commerce product recommendation system. We have customer data like their age, gender, location, past purchase history and browsing history. We have product details as well like category, sub-category, price, discount, and ratings. We want to recommend a set of 10 products to each customer based on these details. Please do your best.
###Reason###
The candidate prompt lacks clear instructions and expected output. It doesn't specify that the model should return a ranked list of recommendations, nor does it mention what factors should be taken into account when making these recommendations. On the other hand, the better prompt is clear about both what the system needs to do (provide a ranked list of 10 recommended products per customer) and how it should do it (by considering all available customer and product data). This clarity guides the model to produce the desired output.
###Better Prompt Type###
[CONTENT_GENERATION]
###Better Prompt###
I want you to assist in improving our e-commerce product recommendation system. Our data set consists of customer information such as age, gender, location, past purchase history, and browsing history. We also have data on our products, such as category, sub-category, price, discount, and ratings. Here's what we want to achieve: For each customer, we need a ranked list of 10 products that are likely to interest them, with the most likely on top. To arrive at this, you'll need to create a system that considers all the available customer and product data to make the best possible predictions.

###Candidate Prompt###
Given an initial investment amount, an annual interest rate, and the number of years, calculate the future value using the FV function in MS Excel.
###Reason###
The candidate prompt is not explicit about the necessary parameters required for calculating future value in MS Excel. It does not clarify the structure of the FV function in Excel, nor does it give an example of the desired output format. Conversely, the better prompt provides clear guidelines on the structure of the FV function, explains the meaning of each parameter, and provides an example of the expected output, giving the model a better understanding of what is expected. This makes it more likely that the generated output will meet the requirements.
###Better Prompt Type###
[CODE_OUTPUT][CONSTRAINTED_OUTPUT]
###Better Prompt###
Assume you're tasked with calculating the future value of an investment using MS Excel. Given an initial investment amount (present value), an annual interest rate, the number of compounding periods per year, and the total number of years, I want you to provide me with the exact MS Excel formula to calculate the future value using the FV function. Remember, FV in Excel is calculated as FV(rate, nper, pmt, pv, type), where:\nrate is the interest rate per period\nnper is the total number of payment periods\npmt is the payment made each period; it cannot change over the life of the investment\npv is the present value, or the lump-sum amount that a series of future payments is worth right now\ntype is when the payments are due: 0 for end of the period, 1 for beginning of the period. For this task, let's assume payments are due at the end of the period.\nThe input will be in the following format:\nInitial investment (present value): [INPUT]\nAnnual interest rate (as a decimal): [INPUT]\nNumber of compounding periods per year: [INPUT]\nTotal number of years: [INPUT]\nAnd the expected output would be the Excel formula which is in this format:\n=FV([annual interest rate]/[compounding periods], [compounding periods]*[total years], , -[initial investment], 0)
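The Excel formula can be cross-checked against the compound-interest identity it implements, FV = pv · (1 + rate/n)^(n · years) when pmt is omitted. A quick check with hypothetical inputs (Excel's sign convention is why the prompt negates the present value):

```python
pv, annual_rate, periods_per_year, years = 1000.0, 0.05, 12, 10

rate = annual_rate / periods_per_year   # interest rate per period
nper = periods_per_year * years         # total number of periods

# Equivalent of =FV(rate, nper, , -pv, 0) with no periodic payment
fv = pv * (1 + rate) ** nper
```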

###Candidate Prompt###
I want you to create a poem. It should have some romantic elements in it.
###Reason###
The candidate prompt is too vague, as it only mentions the need for a poem with some romantic elements but doesn't provide any specific instructions on structure, length, or thematic elements. On the other hand, the better prompt provides a clear structure for the poem (four stanzas, four lines each, ABAB rhyme scheme), detailed thematic instructions (use of classic romantic imagery, feelings to be expressed), and a specific romantic context. These details help guide the model in creating a piece of writing that fulfills the specific requirements of the task.
###Better Prompt Type###
[CONTENT_GENERATION][CONSTRAINTED_OUTPUT][ROLE_PLAYING]
###Better Prompt###
I'm looking for a romantic poem that captures the essence of love in its purest form. The poem should be structured into four stanzas, each with four lines (rhyming ABAB). The language should be emotive, tender, and imbued with classic romantic imagery such as the moon, stars, flowers, and seas. It should articulate the feelings of love, longing, and the deep connection between two people. Kindly generate a poem following these guidelines.

###Candidate Prompt###
I need a Python function for a string operation.
###Reason###
The candidate prompt doesn't provide any specific information about what the Python function needs to do. It only specifies that the function is related to string operations, which covers a broad range of possible tasks. The better prompt, on the other hand, provides a clear and detailed specification of the function's expected behavior. It specifies the name of the function, the input it should take, the operation it should perform, the output it should return, and some edge cases it should handle. This level of detail guides the model to generate the desired Python function more accurately.
###Better Prompt Type###
[CODE_OUTPUT][CONSTRAINTED_OUTPUT]
###Better Prompt###
I need you to generate Python code that defines a function called `capitalize_ends`. This function should take a string as input, and it should return a string where the first and last letters of each word are capitalized, with all other characters in lowercase. Please note that a word is defined as a sequence of alphanumeric characters separated by spaces. The function should handle edge cases including punctuation, digits, and single-letter words.
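One plausible implementation of this spec, assuming the stated space-separated word definition; whether punctuation should be skipped over when finding the "last letter" is exactly the kind of edge case the prompt leaves to the model, so this sketch simply uppercases the first and last characters (a no-op for digits and punctuation):

```python
def capitalize_ends(text):
    """Capitalize the first and last character of each space-separated word,
    lowercasing everything in between."""
    result = []
    for word in text.split(" "):
        if len(word) <= 1:
            result.append(word.upper())  # single-letter (or empty) words
        else:
            result.append(word[0].upper() + word[1:-1].lower() + word[-1].upper())
    return " ".join(result)
```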

###Candidate Prompt###
Write a function in Python that accepts some data and gives back two chunks of it. Use any method you deem fit to achieve this and don't worry about the proportion or randomizing the data.
###Reason###
The candidate prompt is vague, with no clear instruction about the required task, the necessary parameters, the expected function name, or the libraries that should be used. It also doesn't specify how to handle the randomness or proportion in splitting the data. On the other hand, the better prompt provides precise instructions, detailing the function name, parameters, expected output, and specific library and function to use for the task. It specifies to set the `random_state` for reproducibility and dictates the proportion for the train-test split. This level of detail guides the model to generate the desired output without ambiguity.
###Better Prompt Type###
[CODE_OUTPUT][CONSTRAINTED_OUTPUT]
###Better Prompt###
Please generate Python code for a function named `split_dataset`. The function should take three inputs - a pandas DataFrame called `df`, a float `train_ratio` representing the proportion of the DataFrame to be used as the training set, and an integer `random_state` for reproducibility. The function should return two pandas DataFrames: `train_df` and `test_df`, which represent the training and test subsets of the original DataFrame respectively. Use the function `train_test_split` from `sklearn.model_selection` to perform the splitting. Make sure to set the `random_state` parameter in `train_test_split` to ensure the results are reproducible, and use the `train_ratio` parameter to define the proportion of the DataFrame to be used as the training set.
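The scikit-learn call the prompt pins down is `train_test_split(df, train_size=train_ratio, random_state=random_state)`. Since the essential contract is "reproducible proportional split", here is a dependency-free sketch of that contract on a plain list (the real answer would use pandas and scikit-learn as instructed):

```python
import random

def split_dataset(records, train_ratio, random_state):
    """Shuffle reproducibly, then split records into train/test by train_ratio."""
    shuffled = records[:]
    random.Random(random_state).shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```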

###Candidate Prompt###
I'm in need of a Python script that will fetch data from a website. No need to worry about specifics like what kind of data it is, or which website it comes from. Just something general-purpose would do.
###Reason###
The candidate prompt in this case is vague and doesn't specify which library to use for web scraping, nor does it indicate what data needs to be extracted from the webpage. It leaves a lot of ambiguity and room for unnecessary assumptions. The better prompt, on the other hand, gives clear instructions about the specific task - extracting data from a table on a webpage using BeautifulSoup. It also provides an example function to guide the development of the new function, which helps to clarify the expected format and structure. This makes it easier for the model to generate relevant and accurate code.
###Better Prompt Type###
[CODE_OUTPUT][CONSTRAINTED_OUTPUT]
###Better Prompt###
Let's work on a Python code snippet to scrape data from a webpage using the BeautifulSoup package. We will be extracting the headers from an HTML table and storing them in a list. Here's a format to guide your approach:\n###Example###\n[OUTPUT]\nI would like a similar function that extracts the data rows from the table instead of headers.
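With BeautifulSoup the answer would iterate over `table.find_all("tr")` and read the `td` cells of each row. As a dependency-free illustration of "data rows, not headers", the same extraction with the standard library's `html.parser` (the sample HTML is hypothetical):

```python
from html.parser import HTMLParser

class TableRowExtractor(HTMLParser):
    """Collect the <td> cell text of each table row; header-only rows
    (<th> cells) produce no data and are skipped."""
    def __init__(self):
        super().__init__()
        self.rows = []
        self._row = None
        self._in_td = False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row:
            self.rows.append(self._row)
            self._row = None
        elif tag == "td":
            self._in_td = False

    def handle_data(self, data):
        if self._in_td and self._row is not None:
            self._row.append(data.strip())

parser = TableRowExtractor()
parser.feed("<table><tr><th>A</th><th>B</th></tr><tr><td>1</td><td>2</td></tr></table>")
```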

###Candidate Prompt###
We're going to perform some Python string manipulation. Your task is to take a block of text as input, and return the email addresses present in the text. However, keep in mind that we're not just interested in any email address. We're looking specifically for addresses that use certain domains like "gmail.com" and "yahoo.com". Remember to also format the output in a readable way. You should output each email address on a new line. Here's a rough example of what I'm looking for: \n[INPUT]\n[OUTPUT]
###Reason###
The candidate prompt is missing clarity in explaining the objective. While it provides some task details and uses examples, it doesn't explicitly state that the user should write a Python script to accomplish the task. It is also not very clear about the formatting requirements for the output, merely stating that the email addresses should be on new lines. In contrast, the better prompt is more specific about the objective of the task - extracting email addresses of certain domains using Python. It also clearly communicates the output formatting requirements. Furthermore, it provides a clear example for the user to follow, specifying both the expected input and output.
###Better Prompt Type###
[CODE_OUTPUT][CONSTRAINTED_OUTPUT]
###Better Prompt###
You need to perform a Python string manipulation task. You're given a block of text and your job is to extract all email addresses from the text, but there's a catch. We're only interested in email addresses that belong to certain domains - "gmail.com" and "yahoo.com". Your code should scan the text and return only the email addresses that belong to these domains. The output should be neatly formatted with each email address presented on a new line. To guide you through this, here's an example of what the input and output should look like:\n[INPUT]\n[OUTPUT]
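A solution the improved prompt should elicit looks like the regex sketch below; note that real-world email syntax is far messier than this character class, so it is an approximation:

```python
import re

def extract_emails(text, domains=("gmail.com", "yahoo.com")):
    """Return matching addresses, one per line, restricted to the given domains."""
    pattern = r"[A-Za-z0-9._%+-]+@(?:" + "|".join(re.escape(d) for d in domains) + r")\b"
    return "\n".join(re.findall(pattern, text))
```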

###Candidate Prompt###
We're doing a text-to-SQL conversion. I need you to convert a question about a database into a SQL query. Remember that the question is related to a "student" database that contains details such as id, name, course, and grade. Here are a few examples:\n[INPUT]\n[OUTPUT]\n[INPUT]\n[OUTPUT]\n[INPUT]\n[OUTPUT]
###Reason###
The candidate prompt does not provide a clear call to action. It is also vague about the nature of the task and doesn't make it explicit that the model is expected to generate SQL queries from English questions. On the other hand, the better prompt is very explicit about the task - converting English questions into SQL queries. It provides detailed information about the database and provides concrete examples of the expected inputs and outputs. It also specifies that the queries are to be generated from a 'student' database, providing the model with a context to guide its responses. This specificity and context provided by the better prompt make it much more likely that the model will be able to successfully complete the task.
###Better Prompt Type###
[CODE_OUTPUT]
###Better Prompt###
Your task is to generate SQL queries from English language questions. The questions will be about a database named "student" that holds records of students including their id, name, course, and grade. Your task is to interpret these questions and produce the corresponding SQL query. Here are some examples to guide you:\n[INPUT]\n[OUTPUT]\n[INPUT]\n[OUTPUT]\n[INPUT]\n[OUTPUT]
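Because the `[INPUT]`/`[OUTPUT]` pairs are placeholders, here is one hypothetical question-query pair, executed against a toy `student` table with the standard library's `sqlite3`, to show how a generated query could be checked:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (id INTEGER, name TEXT, course TEXT, grade TEXT)")
conn.executemany("INSERT INTO student VALUES (?, ?, ?, ?)", [
    (1, "Ada", "Math", "A"),
    (2, "Ben", "Physics", "B"),
    (3, "Cy", "Math", "A"),
])

# Hypothetical pair — question: "Which students got an A in Math?"
query = "SELECT name FROM student WHERE course = 'Math' AND grade = 'A'"
names = [row[0] for row in conn.execute(query)]
```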

###Candidate Prompt###
You have to recommend songs to a user based on their taste. Here's an example.\n[INPUT]\n[OUTPUT]\n[INPUT]\n[OUTPUT]\n[INPUT]\n[OUTPUT]
###Reason###
The candidate prompt is ambiguous and doesn't provide a clear description of the task. It does not specify the format of the input data or the expected output, and it introduces its few-shot pairs with no explanation of what they represent. The better prompt, on the other hand, is explicit about the task and the format of the input and output. It states that the input will be a list of songs with their artists and that the output should follow the same format. This clear instruction helps the model to generate the correct output.
###Better Prompt Type###
[CONTENT_GENERATION]
###Better Prompt###
Your task is to generate recommendations for the next songs a user might enjoy, based on their listening history. The listening history will be provided as a list of songs with their corresponding artists. The output should be a list of recommended songs with their artists. Here are a few examples to guide you:\n[INPUT]\n[OUTPUT]\n[INPUT]\n[OUTPUT]\n[INPUT]\n[OUTPUT]

###Candidate Prompt###
You have to create a graph from a text. The graph will be represented as an adjacency list.\n[INPUT]\n[OUTPUT]
###Reason###
The candidate prompt doesn't make it clear what the rules are for creating edges between nodes. For instance, it doesn't specify whether an edge should be created for all types of relationships or only for certain types (like "friend"). The better prompt, however, provides a clear instruction to the model about when to create edges between nodes. It mentions explicitly that an edge should be created only if the relationship is a "friend". This helps the model to generate the correct adjacency list representation of the graph.
###Better Prompt Type###
[GRAPH][INFORMATION_EXTRACTION][CONSTRAINTED_OUTPUT]
###Better Prompt###
Your task is to generate a graph representation from a given text, where the nodes are characters and the edges represent the relationships between them. The graph should be represented as an adjacency list. Note that an edge should be created only if the relationship is a "friend". Here are a few examples to guide you:\n[INPUT]\n[OUTPUT]
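The "friend-edges only" rule is easy to state as code, which gives one way to validate the model's adjacency lists (the relationship tuples below are hypothetical):

```python
# Hypothetical relationships extracted from a text: (character, relation, character)
relationships = [
    ("Alice", "friend", "Bob"),
    ("Bob", "rival", "Carol"),
    ("Carol", "friend", "Alice"),
]

# Build an undirected adjacency list, keeping only "friend" edges
graph = {}
for a, relation, b in relationships:
    if relation != "friend":
        continue
    graph.setdefault(a, []).append(b)
    graph.setdefault(b, []).append(a)
```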

###Candidate Prompt###
You have to pull out the dates from the document. Remember, dates can be in multiple formats.\n[INPUT]\n[OUTPUT]
###Reason###
The candidate prompt is rather vague and doesn't explicitly guide the model about the different formats the dates could be in. It merely instructs the model to extract dates, but without clear guidance, the model could miss some dates, especially if they are in unusual or less common formats. The better prompt, on the other hand, explicitly instructs the model about the possible formats of the dates. This additional information helps the model to identify and extract dates in a more accurate and comprehensive manner. It removes any ambiguity for the model and guides it towards the correct extraction of dates.
###Better Prompt Type###
[INFORMATION_EXTRACTION]
###Better Prompt###
Your task is to extract all the dates present in the provided text document. Dates could be in different formats such as "Month Day, Year", "Day/Month/Year", or "ordinal Day Month Year". Here are a few examples:\n[INPUT]\n[OUTPUT]
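The three named formats translate directly into regular expressions, which makes a useful reference answer for this prompt (the patterns below cover only the formats listed, nothing more):

```python
import re

MONTHS = ("January|February|March|April|May|June|July|"
          "August|September|October|November|December")
DATE_PATTERNS = [
    rf"\b(?:{MONTHS}) \d{{1,2}}, \d{{4}}",                 # "Month Day, Year"
    r"\b\d{1,2}/\d{1,2}/\d{4}\b",                          # "Day/Month/Year"
    rf"\b\d{{1,2}}(?:st|nd|rd|th) (?:{MONTHS}) \d{{4}}",   # "ordinal Day Month Year"
]

def extract_dates(text):
    return [m for pattern in DATE_PATTERNS for m in re.findall(pattern, text)]
```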

###Candidate Prompt###
Identify the named entities in the sentence.
###Reason###
The candidate prompt provides the task but lacks explicit instructions and examples to guide the model. It doesn't clarify what a named entity is, which can lead to imprecise or incorrect results. On the other hand, the better prompt clarifies the task clearly, providing more context about what named entities are. It gives examples of different kinds of named entities, which help the model to understand what it's expected to do. It sets a clear expectation about the output format and makes it easier for the model to follow the task correctly.
###Better Prompt Type###
[INFORMATION_EXTRACTION]
###Better Prompt###
Your task is to recognize named entities (people, locations, organizations, etc.) from a given sentence. Named entities are often proper nouns but can also be common nouns in certain contexts. See the examples below:\n[INPUT]\n[OUTPUT]

###Candidate Prompt###
Learn the pattern from the given examples.\n[INPUT]\n[OUTPUT]
###Reason###
The candidate prompt simply states the task but does not provide any clarity on the expected output format, leaving it vague and potentially leading to inconsistent or incorrect results. On the other hand, the better prompt makes it clear that the model is expected to discern a mathematical function or pattern from the provided examples. It also provides a specific output format, guiding the model on how to structure its responses. The additional example further clarifies the task, setting a clearer expectation for the model's performance.
###Better Prompt Type###
[PATTERN_IDENTIFICATION]
###Better Prompt###
You're given a series of input-output pairs. Your task is to discern the underlying pattern or function that's being applied to transform the input into the output. Please provide a mathematical function or a description of the transformation. Here are some examples:\n[INPUT]\n[OUTPUT]
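
One way to make "discern the underlying pattern" concrete: a small Python sketch that tests whether the pairs fit a linear function y = a*x + b and reports it. Restricting to linear functions is an assumption for illustration; the prompt itself allows any transformation:

```python
# Fit y = a*x + b from the first two pairs, then verify against all pairs.
# Assumes at least two pairs with distinct x values.
def fit_linear(pairs):
    (x0, y0), (x1, y1) = pairs[0], pairs[1]
    a = (y1 - y0) / (x1 - x0)
    b = y0 - a * x0
    if all(a * x + b == y for x, y in pairs):
        return f"f(x) = {a:g}*x + {b:g}"
    return None  # pattern is not linear

print(fit_linear([(1, 3), (2, 5), (3, 7)]))  # f(x) = 2*x + 1
```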

###Candidate Prompt###
You are given a document. You have to generate a summary for this document. Also, consider the following example summaries while summarizing the document. Here is the document and the summaries:\nDocument:\n[INPUT]\nExample summaries:\n[OUTPUT]\n[OUTPUT]\n 
###Reason###
The candidate prompt does not clearly specify the task, how to use the example summaries, and what format the summary should be in. The better prompt, on the other hand, makes it clear that the model should learn from the given examples and apply that knowledge to generate a comprehensive summary for the provided document. The better prompt also clarifies the formatting of the examples and the final summary. It provides more information to the model and leads to a more accurate summary.
###Better Prompt Type###
[SUMMARIZATION]
###Better Prompt###
You're given a long and detailed document about the intricacies of Python programming. Your task is to generate a comprehensive summary of this document. To help you understand the style and structure of the summary, below are example summaries for different sections of a similar document:\nDocument:\n[INPUT]\nExample summaries:\n[OUTPUT]\n[OUTPUT]\n 

###Candidate Prompt###
Consider these sequences of numbers: 2, 4, 6, 8 and 5, 10, 15, 20. From these, try to figure out what the next number should be in this sequence: 7, 14, 21, ...
###Reason###
In the candidate prompt, the model is asked to "consider" the sequences but there is no clear instruction that these sequences are examples from which the model should learn the pattern. Moreover, the expected format of the output is not clear from the candidate prompt. The better prompt explicitly instructs the model that it will be given sequences of numbers as examples and it should find the pattern and predict the next number in the sequence. The format of the examples is also clearly mentioned as Input and Output, providing clarity on how to format the output. The better prompt also provides clear instructions on the task to be performed on the test sequence, further reducing the scope of guessing by the model.
###Better Prompt Type###
[PATTERN_IDENTIFICATION]
###Better Prompt###
You will be given sequences of numbers as examples and your task is to find the pattern and predict the next number in the sequence.\n[INPUT]\n[OUTPUT]\n[INPUT]\n[OUTPUT]\nNow, find the next number in this sequence: [INPUT]\n
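
The prediction step can be sketched in Python for the simplest case, an arithmetic sequence with a constant common difference; that assumption happens to cover the sequences in this example:

```python
# Predict the next term when the sequence has a constant common difference.
def next_term(seq):
    diffs = {b - a for a, b in zip(seq, seq[1:])}
    if len(diffs) == 1:          # arithmetic sequence
        return seq[-1] + diffs.pop()
    return None                  # rule not covered by this sketch

print(next_term([2, 4, 6, 8]))   # 10
print(next_term([7, 14, 21]))    # 28
```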

###Candidate Prompt###
Write a blog post about the benefits of exercise, covering various aspects such as physical health, mental well-being, and longevity. Include personal anecdotes and expert quotes to support your points. Discuss different types of exercises and their advantages. Aim for a word count of 1500-2000 words and make it engaging for readers of all ages.
###Reason###
The candidate prompt may initially seem detailed, but it suffers from several issues. It lacks clear focus and specific instructions on what aspects of physical health, mental well-being, and longevity to cover. It encourages the inclusion of personal anecdotes and expert quotes without providing guidance on their relevance or credibility. The prompt also suggests a long word count and aims to engage readers of all ages, which further dilutes the focus of the blog post.
###Better Prompt Type###
[CONTENT_GENERATION][ROLE_PLAYING][CONSTRAINTED_OUTPUT]
###Better Prompt###
Write a well-researched and engaging blog post on the physical and mental benefits of exercise. Focus on the positive effects of regular exercise on cardiovascular health, weight management, stress reduction, and cognitive function. Include at least three evidence-based studies or research findings to support each benefit discussed. Aim for a word count of 1500-2000 words and provide practical tips for incorporating exercise into a busy lifestyle.

###Candidate Prompt###
Your task is to generate a dataset for dialogue datasets. The dataset should contain a variety of conversational pairs between two speakers. Ensure that the questions and answers are coherent and contextually relevant. Aim for a diverse range of topics and include different question types and answer lengths. Generate a dataset with a minimum of 1,000 dialogue pairs to ensure an adequate training dataset for dialogue models.
###Reason###
The candidate prompt, although longer, still lacks specific instructions and requirements for generating a high-quality training dataset for dialogue models. It fails to provide clear guidelines on dataset generation, question types, answer lengths, quality assurance, and diversity. The lack of specificity may result in a dataset that is incomplete, inconsistent, or inadequate for training dialogue models effectively.
###Better Prompt Type###
[CONTENT_GENERATION][CONSTRAINTED_OUTPUT]
###Better Prompt###
You are tasked with generating a high-quality training dataset for dialogue models. The dataset should consist of diverse conversational pairs between two speakers. Each pair should include a well-formed question from one speaker and a contextually appropriate and informative answer from the other speaker. The questions should cover a wide range of topics and exhibit variations in question types, such as open-ended, yes/no, and multiple-choice questions. The answers should be relevant, coherent, and exhibit variations in length and complexity. Aim to create a dataset with a minimum of 1,000 dialogue pairs to ensure sufficient training data. It's important to prioritize the quality, diversity, and informativeness of the dialogues throughout the generation process.

###Candidate Prompt###
Generate a dataset for multi-class classification with different classes and features. Make sure the dataset has enough samples for each class and include a variety of features. The dataset should be large enough to train multi-class classification models effectively.
###Reason###
The candidate prompt is still relatively short and lacks specific instructions and requirements for generating a dataset for multi-class classification. Although it mentions different classes, features, and sample sizes, it does not provide clear guidelines on how to ensure diversity, avoid class imbalance issues, or maintain data quality. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate a suitable dataset for multi-class classification.
###Better Prompt Type###
[CONTENT_GENERATION][CONSTRAINTED_OUTPUT]
###Better Prompt###
Your task is to generate a dataset for multi-class classification. The dataset should consist of samples belonging to multiple classes, with each sample containing a set of features and the corresponding class label. Aim to create a diverse dataset with at least five different classes, but feel free to include more if necessary. Ensure that each class is adequately represented in the dataset to avoid class imbalance issues during training. The features should capture relevant information for each sample, and the class labels should accurately reflect the underlying classes. Pay attention to data quality, ensuring that the samples are labeled correctly and that the dataset is representative of real-world scenarios. Strive for a dataset size of at least 10,000 samples to provide sufficient training data for multi-class classification models.
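
A minimal sketch of such a dataset generator, using only the standard library. The class-dependent feature means and Gaussian noise are arbitrary illustrative choices, not a requirement of the prompt:

```python
import random

# Balanced synthetic dataset: n_classes labels, n_features numeric features
# per sample, equal samples per class (so no class imbalance by construction).
def make_dataset(n_per_class=2000, n_classes=5, n_features=4, seed=0):
    rng = random.Random(seed)
    samples = []
    for label in range(n_classes):
        for _ in range(n_per_class):
            # Features cluster around a class-dependent mean (illustrative).
            features = [rng.gauss(label, 1.0) for _ in range(n_features)]
            samples.append((features, label))
    return samples

data = make_dataset()
print(len(data))  # 10000 samples, 2000 per class
```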

###Candidate Prompt###
Generate a dataset for song generation. Include lyrics and musical arrangements for different songs. Make sure the lyrics are meaningful and the musical arrangements cover various genres. Aim for a dataset size of at least 1,000 songs.
###Reason###
The candidate prompt is relatively short and lacks specific instructions and requirements for generating a dataset for song generation. Although it mentions lyrics, musical arrangements, and various genres, it does not provide clear guidelines on how to ensure meaningful lyrics, diverse musical arrangements, or maintain consistency and coherence in the songs. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate a suitable dataset for song generation.
###Better Prompt Type###
[CONTENT_GENERATION][CONSTRAINTED_OUTPUT][ROLE_PLAYING]
###Better Prompt###
Your task is to generate a dataset for song generation. The dataset should consist of lyrics and corresponding musical arrangements. Each song should have a well-structured set of lyrics that convey a meaningful message or story. The musical arrangements should be diverse, covering various genres, tempos, and instrumentation styles. Aim to create a dataset with a minimum of 1,000 songs to provide sufficient training data. Ensure that the lyrics are coherent, follow a consistent rhyme scheme or pattern, and evoke emotions or tell a compelling narrative. The musical arrangements should complement the lyrics and exhibit variations in melody, chord progressions, and instrumentation to capture the essence of different musical genres.

###Candidate Prompt###
Generate a dataset for reinforcement learning. Include state-action pairs, rewards, and next states. The dataset should cover a range of scenarios and provide enough samples for training. Ensure that the rewards reflect the quality of actions taken. Pay attention to exploration and exploitation balance and any specific task constraints.
###Reason###
The candidate prompt is relatively short and lacks specific instructions and requirements for generating a dataset for reinforcement learning. Although it mentions state-action pairs, rewards, next states, and scenario coverage, it does not provide clear guidelines on how to ensure a well-defined environment, capture diverse scenarios, or maintain a balance between exploration and exploitation. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate a suitable dataset for reinforcement learning.
###Better Prompt Type###
[CONTENT_GENERATION][CONSTRAINTED_OUTPUT]
###Better Prompt###
Your task is to generate a dataset for a reinforcement learning task. The dataset should consist of state-action pairs, along with their corresponding rewards and next states. The environment should be well-defined and provide a clear set of rules and objectives. Aim to create a diverse dataset with a minimum of 10,000 samples to provide sufficient training data. Ensure that the dataset covers a wide range of scenarios and captures both successful and unsuccessful actions. The rewards should reflect the quality of the actions taken in each state, guiding the reinforcement learning agent towards optimal behavior. Pay attention to the balance between exploration and exploitation, as well as any specific constraints or limitations imposed by the task.
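
As an illustration of the (state, action, reward, next state) format, here is a sketch that collects transitions from an invented toy corridor environment under a random exploratory policy:

```python
import random

# Toy environment: positions 0..4 in a corridor, actions move left (-1) or
# right (+1), reward 1.0 for reaching the goal state 4, which ends the episode.
def collect_transitions(n, seed=0):
    rng = random.Random(seed)
    transitions = []
    state = 0
    for _ in range(n):
        action = rng.choice([-1, +1])                 # fully exploratory policy
        next_state = min(4, max(0, state + action))   # clip to the corridor
        reward = 1.0 if next_state == 4 else 0.0
        transitions.append((state, action, reward, next_state))
        state = 0 if next_state == 4 else next_state  # reset episode on goal
    return transitions

data = collect_transitions(10_000)
print(len(data))  # 10000
```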

###Candidate Prompt###
Paraphrase the given sentences. Make sure the paraphrased versions have the same meaning as the original sentences.
###Reason###
The candidate prompt is relatively short and lacks specific instructions and requirements for paraphrasing. It only mentions the need for paraphrasing and maintaining the same meaning as the original sentences. However, it does not provide clear guidelines on how to ensure variations in wording, sentence structure, or phrasing. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate accurate and diverse paraphrases.
###Better Prompt Type###
[PARAPHRASE][ANALYSIS][ROLE_PLAYING][CONSTRAINTED_OUTPUT]
###Better Prompt###
Your task is to paraphrase the given sentences while maintaining the original meaning. The paraphrased versions should exhibit variations in wording, sentence structure, and phrasing. Aim for paraphrases that are linguistically diverse, capturing different ways of expressing the same ideas. Ensure that the paraphrases are contextually appropriate and maintain the same intent as the original sentences. Pay attention to preserving the nuances, tone, and style of the original sentences during the paraphrasing process.

###Candidate Prompt###
Yes or no: Would a pear sink in water?
###Reason###
The candidate prompt is too short and lacks specific instructions and requirements for answering the question. It only mentions the need for a yes or no answer, but it does not provide clear guidelines on how to reason through the question or provide evidence for the answer. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate answer.
###Better Prompt Type###
[DEDUCTIVE_REASONING][STRATEGY_QUESTION_ANSWERING]
###Better Prompt###
Your task is to answer the following question: Would a pear sink in water? Provide a clear and concise answer, along with a brief explanation or evidence to support your answer. Consider the density, size, and shape of the pear relative to the density of water: an object floats when it is less dense than the water it displaces. Ensure that your answer is contextually appropriate and maintains the same intent as the original question. Pay attention to providing a well-reasoned and evidence-based answer that is easy to understand and follow.
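
The density comparison the prompt asks the model to reason through can be sketched in a few lines. The pear density used here is an assumed illustrative value; real fruit densities vary and some pears can be denser than water:

```python
# Back-of-the-envelope buoyancy check: an object floats when its density
# is below that of water. The pear density is an assumed value, for
# illustration only.
WATER_DENSITY = 1.0   # g/cm^3
pear_density = 0.96   # g/cm^3, assumed

floats = pear_density < WATER_DENSITY
print("floats" if floats else "sinks")
```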

###Candidate Prompt###
You have to summarize the given text. The summary should be concise and capture the main points of the text.
###Reason###
The candidate prompt is relatively short and lacks specific instructions and requirements for summarizing the text. Although it mentions the need for a concise summary that captures the main points of the text, it does not provide clear guidelines on how to determine the main points, which summarization model to use, or how to handle complex or technical text. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate summary.
###Better Prompt Type###
[SUMMARIZATION][ANALYSIS][ROLE_PLAYING]
###Better Prompt###
Your task is to summarize the given text into a concise and informative summary that captures the main points of the text. Use a well-defined summarization model or library to generate the summary. Pay attention to the nuances of the text, such as technical terms or jargon, that may affect the summarization. Ensure that the summary is contextually appropriate and maintains the same intent as the original text. Provide a clear and concise summary that accurately reflects the main points of the text. Aim for a high level of accuracy and coherence in your summarization. If necessary, provide a brief explanation or evidence to support your summary.

###Candidate Prompt###
You have to generate a caption for the given image. The caption should be descriptive and informative, capturing the essence of the image.
###Reason###
The candidate prompt is relatively short and lacks specific instructions and requirements for generating a caption for the image. Although it mentions the need for a descriptive and informative caption that captures the essence of the image, it does not provide clear guidelines on how to determine the essence, which image captioning model to use, or how to handle complex or abstract images. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate caption.
###Better Prompt Type###
[CONTENT_GENERATION][ANALYSIS][ROLE_PLAYING]
###Better Prompt###
Your task is to generate a descriptive and informative caption for the given image that captures the essence of the image. Use a well-defined image captioning model or library to generate the caption. Pay attention to the nuances of the image, such as the objects, people, or scenes depicted, that may affect the captioning. Ensure that the caption is contextually appropriate and maintains the same intent as the original image. Provide a clear and concise caption that accurately reflects the essence of the image. Aim for a high level of accuracy and coherence in your captioning. If necessary, provide a brief explanation or evidence to support your caption.

###Candidate Prompt###
You have to generate a question for the given text. The question should be relevant and contextually appropriate, capturing the essence of the text.
###Reason###
The candidate prompt is relatively short and lacks specific instructions and requirements for generating a question for the text. Although it mentions the need for a relevant and contextually appropriate question that captures the essence of the text, it does not provide clear guidelines on how to determine the essence, which question generation model to use, or how to handle complex or technical text. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate question.
###Better Prompt Type###
[CONTENT_GENERATION][ANALYSIS][ROLE_PLAYING]
###Better Prompt###
Your task is to generate a relevant and contextually appropriate question for the given text that captures the essence of the text. Use a well-defined question generation model or library to generate the question. Pay attention to the nuances of the text, such as the main ideas, arguments, or themes, that may affect the question generation. Ensure that the question is contextually appropriate and maintains the same intent as the original text. Provide a clear and concise question that accurately reflects the essence of the text. Aim for a high level of accuracy and coherence in your question generation. If necessary, provide a brief explanation or evidence to support your question.

###Candidate Prompt###
The concert was scheduled to be on 06/01/1943, but was delayed by one day to today. What is the date 10 days ago in MM/DD/YYYY? 
###Reason###
The candidate prompt is relatively short and lacks specific instructions and requirements for solving the problem. Although it mentions the need to find the date 10 days ago in MM/DD/YYYY format, it does not provide clear guidelines on how to calculate the date, which calendar system to use, or how to handle leap years. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate answer.
###Better Prompt Type###
[MATHEMATICAL_REASONING][DATE_UNDERSTANDING]
###Better Prompt###
Your task is to calculate the date 10 days ago in MM/DD/YYYY format, given that the concert was scheduled to be on 06/01/1943 but was delayed by one day to today. Use the Gregorian calendar system to calculate the date, taking into account leap years and the number of days in each month. Pay attention to the nuances of the problem, such as the initial date and the number of days to subtract, that may affect the calculation. Ensure that the date is contextually appropriate and maintains the same intent as the original problem. Provide a clear and concise date that accurately reflects the solution to the problem. Aim for a high level of accuracy and consistency in your calculations. If necessary, provide a brief explanation or evidence to support your answer. State your final answer in the format "The answer is: MM/DD/YYYY".
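
The date arithmetic in this problem can be verified with Python's `datetime` module, which handles month lengths and leap years:

```python
from datetime import date, timedelta

# Scheduled 06/01/1943, delayed by one day, so "today" is 06/02/1943;
# then step back 10 days.
today = date(1943, 6, 1) + timedelta(days=1)
ten_days_ago = today - timedelta(days=10)
print(ten_days_ago.strftime("%m/%d/%Y"))  # 05/23/1943
```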

###Candidate Prompt###
Q: Is the following sentence plausible? "Joao Moutinho caught the screen pass in the NFC championship." A: Joao Moutinho is a soccer player. The NFC championship is part of American football, not soccer. So the answer is no.
###Reason###
The candidate prompt is relatively short and lacks specific instructions and requirements for evaluating the plausibility of the sentence. Although it provides an example and a correct answer, it does not provide clear guidelines on how to reason through the sentence or how to handle ambiguous or complex sentences. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate answer.
###Better Prompt Type###
[CONSTRAINTED_OUTPUT][ANALYSIS][MULTI_HOP_QUERY]
###Better Prompt###
Your task is to determine the plausibility of the following sentence: "Joao Moutinho caught the screen pass in the NFC championship." Provide a clear and concise answer, along with a brief explanation or evidence to support your answer. Consider the context of the sentence, such as the teams, players, and events mentioned, as well as the rules and conventions of the sport. Ensure that your answer is contextually appropriate and maintains the same intent as the original sentence. Pay attention to providing a well-reasoned and evidence-based answer that is easy to understand and follow. Finally give a yes or no answer.

###Candidate Prompt###
Human: How would you bring me something that isn’t a fruit? Explanation: the user wants something to eat that isn’t a fruit. An energy bar is not a fruit, so I will bring the user an energy bar. Plan: 1. find(energy bar) 2. pick(energy bar) 3. find(user) 4. put(energy bar) 5. done().
###Reason###
The candidate prompt is relatively short and lacks specific instructions and requirements for generating a plan to complete the task. Although it mentions the need to find and deliver an energy bar to the user, it does not provide clear guidelines on how to find the energy bar, how to navigate to the user, or how to handle unexpected obstacles. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate plan.
###Better Prompt Type###
[PLANNING][CONTENT_GENERATION]
###Better Prompt###
Your task is to generate a plan to bring an energy bar to the user, who has requested something to eat that isn't a fruit. Use a well-defined planning model or library to generate the plan. Pay attention to the nuances of the task, such as the location of the energy bar, the location of the user, and any potential obstacles or hazards. Ensure that the plan is contextually appropriate and maintains the same intent as the original task. Provide a clear and concise plan that accurately reflects the steps required to complete the task. Aim for a high level of accuracy and coherence in your planning. If necessary, provide a brief explanation or evidence to support your plan.

###Candidate Prompt###
Q: Take the last letters of the words in “Lady Gaga” and concatenate them. A: The last letter of “Lady” is “y”. The last letter of “Gaga” is “a”. Concatenating them is “ya”. So the answer is ya.
###Reason###
The candidate prompt is relatively short and lacks specific instructions and requirements for answering the question. Although it provides an example of how to answer the question, it does not provide clear guidelines on how to reason through the question or provide evidence for the answer. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate answer.
###Better Prompt Type###
[CONCATENATION]
###Better Prompt###
Your task is to answer the following question: Take the last letters of the words in “Lady Gaga” and concatenate them. Provide a clear and concise answer, along with a brief explanation or evidence to support your answer. Consider the spelling, pronunciation, and order of the words in the phrase. Ensure that your answer is contextually appropriate and maintains the same intent as the original question. Pay attention to providing a well-reasoned and evidence-based answer that is easy to understand and follow.
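
The concatenation task in the worked example can be checked directly in Python:

```python
# Last letter of each word, concatenated in order.
phrase = "Lady Gaga"
answer = "".join(word[-1] for word in phrase.split())
print(answer)  # ya
```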

###Candidate Prompt###
Q: A coin is heads up. Maybelle flips the coin. Shalonda does not flip the coin. Is the coin still heads up? A: The coin was flipped by Maybelle. So the coin was flipped 1 time, which is an odd number. The coin started heads up, so after an odd number of flips, it will be tails up. So the answer is no.
###Reason###
The candidate prompt is relatively long and lacks clarity and conciseness. The question and answer are combined into a single prompt, making it difficult to understand the task and the reasoning behind the answer. The prompt also lacks specific instructions and requirements for answering the question, such as which rules to follow or which logic to apply. The prompt is verbose and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate answer.
###Better Prompt Type###
[STATE_TRACKING][CONSTRAINTED_OUTPUT]
###Better Prompt###
Your task is to answer the following question: If a coin is heads up and Maybelle flips the coin, is the coin still heads up? Provide a clear and concise answer, along with a brief explanation or evidence to support your answer. Consider how each flip reverses the face of the coin, so the final face depends on whether the total number of flips is odd or even. Ensure that your answer is contextually appropriate and maintains the same intent as the original question. Pay attention to providing a well-reasoned and evidence-based answer that is easy to understand and follow.
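
The state-tracking logic behind the expected answer can be sketched as a parity check:

```python
# The coin's final face depends only on the parity of the flip count:
# an even number of flips restores the starting face.
def heads_up(starts_heads, n_flips):
    return starts_heads == (n_flips % 2 == 0)

print(heads_up(True, 1))  # Maybelle flips once, so the coin is no longer heads up
```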

###Candidate Prompt###
remove all but element 1 and last element
###Reason###
The candidate prompt is too short and lacks specific instructions and requirements for the task. It only mentions the need to remove all but the first and last elements, but it does not provide clear guidelines on how to identify the first and last elements, which data structure to use, or how to handle edge cases. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate solution.
###Better Prompt Type###
[INPUT_MANIPULATION]
###Better Prompt###
Your task is to remove all elements from the given data structure except the first and last elements. Identify the first and last elements based on their position or value, depending on the data structure. Ensure that the data structure is well-defined and that the removal operation does not violate any constraints or rules. Handle edge cases, such as empty data structures or data structures with only one element, appropriately. Provide a clear and concise solution that accurately reflects the requirements of the task. Aim for a high level of accuracy and efficiency in your solution. If necessary, provide a brief explanation or evidence to support your solution.
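
A Python sketch of the operation on a list, including the edge cases the prompt calls out; lists with fewer than three elements are returned unchanged, which is one reasonable convention among several:

```python
# Keep only the first and last elements of a list.
def keep_first_and_last(items):
    if len(items) <= 2:        # empty, one, or two elements: nothing to remove
        return list(items)
    return [items[0], items[-1]]

print(keep_first_and_last([3, 1, 4, 1, 5]))  # [3, 5]
```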

###Candidate Prompt###
00101 -> 11010\n11 -> 00\n1110101010101010111110101101101101101101 ->1011011011010111110101010101010111011110
###Reason###
The candidate prompt is too short and lacks specific instructions and requirements for the task. It only lists binary strings and their corresponding transformations. However, it does not provide clear guidelines on how to perform the transformation, which algorithm to use, or how to handle complex or long binary strings. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate transformation. The better prompt should infer the underlying transformation automatically from the given examples and integrate it into the rewritten prompt. It should also specify the output format, such as `input:output`.
###Better Prompt Type###
[PATTERN_IDENTIFICATION]
###Better Prompt###
Your task is to perform a binary string transformation on the given input. The input consists of one binary string, followed by an arrow. The transformation involves replacing each 0 with 1 and each 1 with 0 in the binary string. Provide the output in the format `input:output`. Here are a few examples:\n00101 -> 11010\n11 -> 00\nNow transform the following input:\n[INPUT]
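
The inversion rule stated for the short examples (00101 -> 11010, 11 -> 00) can be checked with a small Python sketch:

```python
# Bitwise inversion of a binary string: swap every 0 and 1.
flip = str.maketrans("01", "10")

def invert(bits):
    return bits.translate(flip)

print(invert("00101"))  # 11010
print(invert("11"))     # 00
```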

###Candidate Prompt###
Create a plot in MATLAB for the given dataset.
###Reason###
The "Bad Prompt" is very brief and lacks detailed instructions. It doesn't specify how to load the dataset, handle missing values, normalize the data, customize the plot, or save the plot. It also doesn't ask for any interpretation or analysis of the plot. This lack of detail may lead to a wide range of outputs, making it challenging to ensure that the model's output will meet the user's needs.
###Better Prompt Type###
[ROLE_PLAYING][CODE_OUTPUT][ANALYSIS]
###Better Prompt###
As a researcher, your task is to generate a suitable plot in MATLAB for the given dataset to visualize the underlying data. The dataset is a time-series data with three variables: time (t), amplitude (A), and frequency (f).\nHere's your task broken down into steps:\nFirst, load the dataset into MATLAB using the appropriate function depending on the data file format (such as load, csvread, or importdata).\nCheck the dataset for missing or NaN values. If present, handle them according to the analysis requirements.\nNormalize the data if needed depending on the range of your data.\nGenerate a plot using the MATLAB plot function. Plot time (t) on the x-axis, and amplitude (A) and frequency (f) on the y-axis.\nCustomize your plot by adding appropriate labels for the x-axis and y-axis, a title for the plot, and a legend to distinguish between the two y variables.\nUse MATLAB's grid on function to add grid lines to the plot for better readability.\nSave your plot as a high-resolution image using the saveas function in MATLAB.\nAnalyze and interpret the plot, and share your findings and observations.\nAt the end of this task, provide the MATLAB code you used to generate the plot, along with the saved image of the plot and your insights from this visualization.

###Candidate Prompt###
You've been given a spreadsheet with a formula used for summing sales across conditions. Explain how the formula in cell H2 works.
###Reason###
The "Better Prompt" is detailed and specific. It sets the stage by providing context about the client, the spreadsheet, and the complex formula. It then breaks down the explanation into distinct parts, covering the concept of an array formula, the different conditions, the SUM function, and the practical application of the formula. This structure guides the model to produce a comprehensive and clear explanation.
###Better Prompt Type###
[ROLE_PLAYING][ANALYSIS][CONTENT_GENERATION]
###Better Prompt###
As a seasoned data analyst at Deloitte, you've been given a large sales database from your client that includes complex Excel formulas. The spreadsheet is used to sum sales based on multiple conditions across different columns. Your task is to explain one of these complex formulas to your new hires, who are still trying to grasp advanced Excel functionalities.\nYour focus for this task is the array formula in cell H2, which is {=SUM((B2:B10000="East")*(C2:C10000="Q1")*(D2:D10000))}.\nBreak down the explanation into the following parts:\nBegin by explaining what an array formula is, how it differs from regular formulas, and how to input them (using Ctrl+Shift+Enter).\nExplain each condition in the formula: B2:B10000="East" and C2:C10000="Q1". Describe how these conditions act as filters to select specific rows from the dataset.\nDiscuss the role of the multiplication operator in the formula and how it essentially performs a logical 'AND' operation across the conditions.\nExplain how the SUM function works in this context, summing only the selected (or "true") elements from the array generated by the conditions.\nLastly, provide context on how this formula supports the overall sales analysis in the spreadsheet and the benefits of using array formulas for such complex calculations.
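
The core idea of the array formula, multiplying boolean conditions to get a logical AND and summing the result, can be mirrored in plain Python. The rows below are invented sample data:

```python
# Python analogue of {=SUM((B="East")*(C="Q1")*D)}: boolean conditions
# multiply to an AND, so only rows matching both conditions contribute.
rows = [  # (region, quarter, amount) -- invented sample data
    ("East", "Q1", 100.0),
    ("East", "Q2", 40.0),
    ("West", "Q1", 25.0),
    ("East", "Q1", 60.0),
]

total = sum((b == "East") * (c == "Q1") * d for b, c, d in rows)
print(total)  # 160.0
```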

###Candidate Prompt###
The client has given you a list of invoices with locations and sales reps. They suspect "Sales Rep 1" might be doing something wrong. Combine this data with the Revenue Ledger, create a chart, and tell us what you find about "Sales Rep 1". Email your analysis by 5pm.
###Reason###
The candidate prompt is vague and lacks specificity about what kind of chart to create, how to combine the data with the Revenue Ledger, and what kind of insights are expected about "Sales Rep 1". This leaves the model with too many possibilities to guess from, and the output might not meet the actual task requirements.
###Better Prompt Type###
[ROLE_PLAYING][ANALYSIS][CONTENT_GENERATION]
###Better Prompt###
You are an auditor at Deloitte, and the client has provided you a list of invoices detailing each transaction's location and the corresponding sales representative. The data is arranged in an Excel spreadsheet, with each row representing an invoice. Column A contains the invoice numbers, Column B the sales amounts, Column C the locations, and Column D the sales representatives.
Recently, during a fraud discussion with the client, concerns were raised about a particular sales representative, referred to as "Sales Rep 1," possibly engaging in fraudulent activities. Your task is to combine the given supplemental data with the Revenue Ledger and prepare a Pivot Chart to better understand the distribution of sales across locations and representatives.
To do this, create a Pivot Table in a new sheet. Add 'Location' and 'Sales Representative' as Row Labels, and 'Sales Amount' as the Values, which should be summed. From this Pivot Table, create a Pivot Chart to visualize the data.
Furthermore, you need to identify and explain at least three observations specific to "Sales Rep 1" based on the Pivot Chart. This could include anomalies in sales patterns, exceptionally high sales in certain locations, or other noticeable patterns.
Please complete your analysis by 5pm local time and email your findings, including the Excel workbook with the Pivot Chart and your responses, to the local recruiter.
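The grouping a Pivot Table performs can be sketched in plain Python for reference; the invoice rows below are hypothetical, and the (location, rep) keys mirror the Row Labels the prompt asks for:

```python
# Sketch of the Pivot Table aggregation: sum sales amounts grouped by
# (location, sales representative), over hypothetical invoice rows.
invoices = [
    ("London", "Sales Rep 1", 500.0),
    ("London", "Sales Rep 2", 200.0),
    ("Paris",  "Sales Rep 1", 900.0),
    ("London", "Sales Rep 1", 100.0),
]

pivot = {}
for location, rep, amount in invoices:
    key = (location, rep)
    pivot[key] = pivot.get(key, 0.0) + amount

print(pivot[("London", "Sales Rep 1")])  # 600.0
```

Each key in `pivot` corresponds to one row of the Pivot Table, which is exactly the view an auditor would scan for anomalies tied to "Sales Rep 1".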

###Candidate Prompt###
Use VLOOKUP to find data.
###Reason###
The candidate prompt is short and vague. It doesn't specify which data to find, where to look for it, what to return, or where to put the result. This could lead to misunderstanding or incorrect use of the VLOOKUP function.
###Better Prompt Type###
[CODE_OUTPUT][CONSTRAINED_OUTPUT]
###Better Prompt###
Use the Excel VLOOKUP function in cell E2 to find the price of a product. Assume that the product name is in cell D2 and the product list is in cells A1:B100, where column A contains the product names and column B contains the prices. Type "=VLOOKUP(D2, A1:B100, 2, FALSE)" into cell E2 and press Enter. The cell E2 will display the price of the product named in cell D2.
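A minimal Python sketch of the exact-match lookup that =VLOOKUP(D2, A1:B100, 2, FALSE) performs; the product list below is hypothetical:

```python
# Sketch of VLOOKUP with exact match (fourth argument FALSE):
# scan the first column of the table for the lookup value and
# return the requested column of the first matching row.
product_list = [
    ("Widget", 9.99),
    ("Gadget", 14.50),
    ("Sprocket", 3.25),
]

def vlookup(needle, table, col_index):
    for row in table:
        if row[0] == needle:
            return row[col_index - 1]  # Excel columns are 1-indexed
    raise KeyError("#N/A")  # Excel shows #N/A when no exact match exists

print(vlookup("Gadget", product_list, 2))  # 14.5
```

As in Excel, a missing product yields the #N/A condition rather than a silent wrong answer.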

###Candidate Prompt###
Create a pivot chart.
###Reason###
The candidate prompt is short and doesn't provide enough details. It doesn't specify what data should be used for the pivot chart, how the data should be grouped or summarized, or what kind of chart to create. This could lead to confusion or incorrect chart creation.
###Better Prompt Type###
[CODE_OUTPUT]
###Better Prompt###
Your task is to create a Pivot Chart using the sales data from cells A1 to C100 in Excel. Column A contains the sales date, column B contains the product name, and column C contains the sales amount. Create a pivot table first by selecting the data range A1:C100, then go to the "Insert" tab and click "PivotTable". In the PivotTable Field List, drag 'sales date' to the Rows area, 'product name' to the Columns area, and 'sales amount' to the Values area, ensuring that it's set to SUM. Then, with the pivot table selected, go to the "PivotTable Tools" > "Analyze" > "PivotChart" and select the chart type you desire. Excel will then create a pivot chart based on your pivot table.

###Candidate Prompt###
Use a formula to find the total sales.
###Reason###
The candidate prompt is short and lacks clarity. It doesn't specify which cells contain the sales data, the kind of formula to use, or where to put the result. This could result in various interpretations of the task, leading to incorrect or incomplete responses.
###Better Prompt Type###
[CODE_OUTPUT]
###Better Prompt###
Create an Excel formula in cell D2 that sums all the sales numbers from B2 to B100. You can do this by typing "=SUM(B2:B100)" into cell D2 and pressing Enter. The cell D2 will display the total sales from cells B2 to B100.
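What =SUM(B2:B100) computes can be sketched in one line of Python over hypothetical sales figures:

```python
# Sketch of =SUM(B2:B100): add every value in the range.
sales = [120.0, 75.5, 300.0, 49.5]  # hypothetical contents of B2:B5
total = sum(sales)
print(total)  # 545.0
```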

###Candidate Prompt###
Create a pie chart in Excel.
###Reason###
The candidate prompt is short and lacks explicit instructions on how to create a pie chart in Excel. It doesn't specify what data should be used for the pie chart, where this data is located, or the steps needed to create the chart. This could lead to incorrect chart creation or confusion about the task.
###Better Prompt Type###
[CODE_OUTPUT]
###Better Prompt###
Your task is to create a pie chart in MS Excel using data in a range of cells. For instance, if you have a table of data in cells A1 to B5, where column A contains the categories and column B contains the values, you would first select the data range A1:B5. Then, go to the "Insert" tab and click on the "Pie Chart" button in the Charts group. Choose the type of pie chart you want from the dropdown menu. Excel will then create a pie chart in your worksheet based on the selected data. Adjust the chart title and labels as needed.
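A pie chart sizes each slice in proportion to its category's share of the total; this Python sketch, using hypothetical values for the A1:B5 range, shows the percentages Excel would render:

```python
# Sketch of the proportions behind a pie chart built from A1:B5,
# where column A holds categories and column B holds values.
data = {"North": 40, "South": 25, "East": 20, "West": 15}

total = sum(data.values())
shares = {category: round(100 * value / total, 1)
          for category, value in data.items()}
print(shares["North"])  # 40.0
```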

###Candidate Prompt###
Calculate the total value of a range of cells in MS Excel.
###Reason###
The candidate prompt is short and lacks specific instructions for achieving the desired task. It doesn't provide any information about which cells to total, the formula to use, or how to input the formula into Excel. This could lead to a misunderstanding of the task or result in incorrect or incomplete responses.
###Better Prompt Type###
[CODE_OUTPUT]
###Better Prompt###
Your task is to calculate the total value of a range of cells in MS Excel. For instance, if you want to find the total of cells A1 to A10, you would use the formula "=SUM(A1:A10)". First, click on the cell where you want the total to be displayed. Then, type the formula into the formula bar at the top of the Excel window, and press Enter. The total of cells A1 to A10 will then be displayed in the chosen cell.

###Candidate Prompt###
Given the source code, generate a flowchart that visually represents the program's logic and structure.
###Reason###
The candidate prompt is broad and lacks specificity about what programming language is being used, what level of detail is expected in the flowchart, and what kind of outputs are acceptable (e.g., JSON, visual diagram). It could result in the model guessing about these details, resulting in inaccurate or mismatched outputs.
###Better Prompt Type###
[GRAPH][FORMATTED_OUTPUT]
###Better Prompt###
Given the Python source code, your task is to generate a flowchart that represents the program's logic, structure, and execution flow. The flowchart should visually denote different programming constructs such as loops, conditionals, function calls, etc. The output of your task should be a JSON formatted representation of the flowchart where nodes represent code blocks, and edges represent the control flow. Each node should contain the code block's type (e.g., 'function', 'if-statement', 'loop'), the block's code content, and a unique identifier. Each edge should contain the source and target node identifiers and the type of control flow (e.g., 'conditional-true', 'conditional-false', 'loop-start', 'loop-end'). Make sure the JSON output is valid and well-structured.
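One valid instance of the requested node/edge JSON output can be sketched in Python; the snippet, identifiers, and field names below are illustrative assumptions for a hypothetical two-branch function:

```python
import json

# Sketch of the flowchart JSON the prompt asks for: nodes carry a type,
# code content, and unique id; edges carry source, target, and flow type.
flowchart = {
    "nodes": [
        {"id": "n1", "type": "function", "code": "def check(x):"},
        {"id": "n2", "type": "if-statement", "code": "if x > 0:"},
        {"id": "n3", "type": "return", "code": "return 'positive'"},
        {"id": "n4", "type": "return", "code": "return 'non-positive'"},
    ],
    "edges": [
        {"source": "n1", "target": "n2", "flow": "sequential"},
        {"source": "n2", "target": "n3", "flow": "conditional-true"},
        {"source": "n2", "target": "n4", "flow": "conditional-false"},
    ],
}

# Round-trip through the json module to confirm the structure serializes
# to valid, well-formed JSON as the prompt requires.
print(json.loads(json.dumps(flowchart)) == flowchart)  # True
```

Validating the output with a round-trip like this is a cheap way to enforce the "valid and well-structured" constraint the prompt states.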

###Candidate Prompt###
Create a flowchart to illustrate the process of making tea.
###Reason###
The candidate prompt doesn't provide enough specific details about the method of making tea, the granularity of steps to be included in the flowchart, or the tool to be used for visualizing the flowchart. This lack of details could lead to ambiguity and variance in the responses from the model.
###Better Prompt Type###
[GRAPH][PLANNING]
###Better Prompt###
Your task is to create a detailed flowchart to illustrate the process of making a cup of English breakfast tea using a teabag and boiling water. Begin from the step of boiling water and end with the step of discarding the used teabag. The flowchart should be detailed and include steps such as heating the water, placing the teabag in the cup, pouring hot water, steeping, removing the teabag, and optional steps like adding sugar, milk or lemon.

###Candidate Prompt###
Sammy wanted to go to where the people were. Where might he go? Options: (a) race track (b) populated areas (c) desert (d) apartment (e) roadblock.
###Reason###
The candidate prompt does not provide a clear directive on how to structure the response. It mentions the need to find a place with a lot of people, but it does not specifically ask for the reasoning behind the selection. This lack of specificity can lead the model to simply pick an option without explaining the rationale for the choice. The better prompt, by contrast, instructs the model to respond in a specific format and emphasizes the need for reasoning, ensuring a comprehensive and well-explained answer.
###Better Prompt Type###
[COMMON SENSE REASONING][DEDUCTIVE_REASONING][FORMATTED_OUTPUT]
###Better Prompt###
In the following question, we're aiming to understand where Sammy, who wants to be where the people are, might go. You are given the following options: (a) race track (b) populated areas (c) desert (d) apartment (e) roadblock. Your task is to identify the option where one would typically find the most people. Please present your response in the following format: 'The answer is (option). The reasoning is...'. Remember, your answer should demonstrate logical and common sense reasoning.
