It's unclear how much of the code has already been written, so the assistant is unlikely to be helpful here. The prompt should explicitly instruct the model to include the necessary import statements.
The task appears to be sentence completion, but it is still very hard and forces the model to guess. The instruction is neither clear nor specific.
A common recommendation is to place instructions at the beginning of the prompt. Another is to use a clear separator, such as "###" or """, between the instruction and the context.
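The separator advice above can be sketched as a small template. The instruction and context strings here are purely illustrative:

```python
# Hypothetical illustration: instruction first, then a "###" separator
# fencing off the context, as recommended above.
instruction = "Summarize the text below in one sentence."
context = "OpenAI released a new model that improves code generation."

prompt = f"{instruction}\n\n### Context ###\n{context}\n### End ###"
print(prompt)
```

The separator makes it unambiguous where the instruction ends and the data begins, which matters most when the context itself contains imperative sentences.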
Be very specific about the instruction and task you want the model to perform. The more descriptive and detailed the prompt is, the better the results. This is particularly important when you have a desired outcome or style of generation you are seeking. There aren't specific tokens or keywords that lead to better results. It's more important to have a good format and descriptive prompt. In fact, providing examples in the prompt is very effective to get desired output in specific formats.
Given the tips above about being detailed and improving format, it's easy to fall into the trap of wanting to be too clever about prompts and potentially creating imprecise descriptions. It's often better to be specific and direct. The analogy here is very similar to effective communication -- the more direct, the more effective the message gets across.
Another common tip when designing prompts is to avoid saying what not to do but say what to do instead. This encourages more specificity and focuses on the details that lead to good responses from the model.
Be specific, descriptive, and as detailed as possible about the desired context, outcome, length, format, style, etc.
Articulate the desired output format through examples. Show and tell: the models respond better when shown specific format requirements. This also makes it easier to programmatically parse out multiple outputs reliably.
Reduce "fluffy" and imprecise descriptions
Instead of just saying what not to do, say what to do instead
Code Generation Specific - Use "leading words" to nudge the model toward a particular pattern. For example, adding "import" hints to the model that it should start writing in Python. (Similarly "SELECT" is a good hint for the start of a SQL statement.)
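The leading-word trick amounts to ending the prompt with the first token of the pattern you want. A minimal sketch (the task descriptions are made up):

```python
# Ending the prompt with a "leading word" nudges the completion into the
# desired pattern: "import" for Python, "SELECT" for SQL.
task = "Write a Python function that parses a CSV file."
python_prompt = task + "\n\nimport"   # the model continues after "import"

sql_task = "Write a query returning all users older than 30."
sql_prompt = sql_task + "\n\nSELECT"  # the model continues after "SELECT"
```

The model treats the leading word as the start of its own answer, so the completion almost always stays in that language and pattern.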
The prompt is vague and lacks context. A better prompt contains the context, the goal, and any constraints.
For complex tasks, use: "ask any questions needed for context".
It is helpful to ask the model to be concise and to use bullet points where appropriate.
Don't ask leading questions. Don't anchor the model.
Utilizing chained prompting can help you guide a model through a series of questions or tasks, resulting in more comprehensive and interconnected responses. By creating a sequence of related prompts, you can explore a topic in-depth or complete a multi-step task more effectively.
Assigning a specific role to a model can help guide the model's responses and ensure they align with the desired expertise or perspective. By providing a clear role, you can focus the generated output on the specific knowledge area or viewpoint you require.
Sometimes, a model may require more context or clarification to provide a helpful response. Encouraging the model to ask questions when it needs more information can improve the quality of its answers and prevent misunderstandings.
Introducing a critical agent can help refine the output generated by a model. By asking the model to critique its own responses and revise them based on the feedback, you can improve the overall quality and usefulness of the information or assistance you receive.
Defining your intent clearly in a prompt is crucial for helping a model understand your goal and provide an appropriate response. When the model has a clear understanding of what you're asking, it's more likely to deliver the information or assistance you're seeking.
When crafting prompts for a model, it's essential to make them concise and clear. A well-written prompt enables the model to understand your request more accurately and deliver a helpful response. Keeping your prompts brief and focused can help prevent confusion and ensure that you receive the information or assistance you're seeking.
When interacting with a model, it's important to use natural, conversational language in your prompts. This helps the model generate more accurate and human-like responses. GPT is designed to understand and respond to prompts that resemble human conversation, so crafting your prompts in a natural way can lead to better results.
Choosing the right verbs in your prompts can significantly impact the results you get from a model. Verbs convey the desired action or result, and using appropriate verbs can help the model understand your intent more clearly.
Leading questions can result in biased or unhelpful answers from GPT. To obtain objective and useful responses, it's important to ask open-ended, unbiased questions that allow the model to explore a topic without being influenced by the phrasing of the prompt.
When seeking advice or comparing options with GPT, it can be helpful to ask for pros and cons as well as a rating. This approach can provide you with a more balanced and comprehensive understanding of your options, enabling you to make more informed decisions.
The prompt is vague and does not request an example or analogy to clarify the complex concept. When seeking explanations for complex concepts with GPT, it can be helpful to request examples or analogies. These can make difficult ideas more relatable and easier to understand, providing you with a clearer grasp of the subject matter.
By getting a model to think through its answers step by step, you can receive more logical and coherent responses that break down complex topics or tasks into easily understandable components. This is especially useful for mathematical questions.
Be specific, descriptive, and detailed about the desired response length, format, and style. For example, we can instruct the model to return the answer as a JSON object. This can be particularly useful if you want to consume the model's response in a programmatic way. This particular tweak also helps the model to return more precise and correct answers.
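The programmatic-consumption point can be sketched as follows; the response string stands in for a hypothetical model reply after instructing "return the answer as a JSON object with keys 'answer' and 'confidence'":

```python
import json

# Hypothetical model response to a prompt that demanded JSON output with
# fixed keys; a structured reply parses directly instead of needing
# fragile string matching.
response_text = '{"answer": "Paris", "confidence": 0.97}'
result = json.loads(response_text)
print(result["answer"])
```

If the prompt had not pinned down the format, the reply might arrive as free text ("The answer is Paris, I'm fairly sure") and require brittle parsing.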
It's not always necessary or advantageous to encapsulate all your instructions in one single prompt. Break down complex tasks into a sequence of simpler prompts in an interactive conversation. This iterative prompting approach can often lead to higher-quality outputs, as it allows for mid-course corrections, refining the model's understanding of the task at hand over several exchanges.
This prompt is incomplete as it does not specify the programming language to be used. Also, it does not specify whether the program should handle invalid inputs (like negative numbers or non-integer numbers).
This prompt is vague and lacks details about the output format and the type of entities to look for. It does not clarify how to represent the graph or which type of graph to use.
This is a complex task for a language model and might not lead to a satisfactory response. While the model can handle simple algebraic equations, the complexity of this one is beyond what it can solve directly.
Although this seems straightforward, the model might not accurately capture the subtleties of the text while translating, especially with more complex sentences or concepts. Providing the model with context or the desired tone can help improve the translation.
This is a complex task that requires detailed instructions. The model needs to know what constitutes a "word" (should it consider numbers or special characters as words?) and how to handle case sensitivity. It also needs instructions on how to output the result.
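A sketch of the kind of specification the prompt should pin down. The choices here (a "word" is any run of letters or digits, counting is case-insensitive) are one possible answer to the questions above, not the only one:

```python
import re

# One explicit interpretation of "count words": words are runs of letters
# or digits, comparison is case-insensitive, output is a dict of counts.
def count_words(text: str) -> dict:
    words = re.findall(r"[A-Za-z0-9]+", text.lower())
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return counts

print(count_words("The cat saw the Cat."))  # {'the': 2, 'cat': 2, 'saw': 1}
```

Without the prompt fixing these choices, "The" and "the" might be counted separately, or punctuation-attached tokens counted as distinct words.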
The prompt lacks specific details. It does not explain the size or format of the document, the desired output format, or how to handle the context. The model does not know the method for splitting the document into manageable chunks, or how to link the chunks together to form a coherent graph.
It does not specify the method or algorithm used to encode the message. There's no room for the AI to ask clarification questions.
The prompt does not provide any specific news details. It does not encourage the AI to ask for more information.
The prompt doesn't specify the level of detail required in the summary or the target audience. There's no room for the AI to ask clarification questions.
This prompt is already fairly strong, but it could benefit from the inclusion of an example showcasing the expected input and output. This would provide a reference for the assistant and reduce any ambiguity about the desired result.
Although this prompt is well-structured, it does not specify whether formal or informal Spanish should be used. This distinction is important in Spanish and could significantly impact the translated message's tone.
The candidate prompt is vague and does not specify the structure of the desired JSON output, leaving room for many possible interpretations. The better prompt, on the other hand, gives a detailed explanation of how the task should be approached, provides clear instructions on the structure and content of the output, and specifies the exact fields required for each entity and relationship. This ensures that the output will be in the correct format and contain all the necessary information.
This prompt lacks clarity and specific instructions. It doesn't specify the database type, the structure of the data to be inserted, or the table schema. It doesn't provide guidance on error handling or whether to check if the table exists before attempting to create it. The better prompt provides detailed and specific instructions. It mentions the database type, how the data will be provided, and what the function should return. It guides on error handling and important practices like closing the session after operations. This prompt is more likely to generate the desired output from the model.
The prompt is vague and doesn't specify what details are to be fetched, from which webpage, and in what format the data should be returned. It doesn't guide on error handling or which part of the webpage the model should focus on. 
This prompt is extremely vague. Machine learning encompasses a wide variety of techniques and algorithms, and without specific context or requirements, it's impossible for the model to guess the user's intent accurately.
This prompt is not specific enough. Text data can be processed in many different ways depending on the task at hand (e.g., tokenization, stemming, stop words removal, etc.).
Although the candidate prompt provides the basic task requirements, it doesn't specify which model to use for summarization and doesn't give instructions about the `max_length` and `min_length` parameters for the summarization process. This could cause the GPT-based model to "guess" the user's preferences, which might not align with their actual needs. The better prompt, on the other hand, provides specific, clear, and complete instructions, allowing the GPT-based model to generate the expected output without any ambiguity or assumptions.
The candidate prompt is vague about the type of classifier to be used and does not provide any parameters for splitting the data or the classifier. This ambiguity might lead the GPT model to guess the user's intentions, which may not align with their actual needs. On the other hand, the better prompt gives specific, clear, and complete instructions. It specifies the classifier type, split ratio, random states, and parameters for the classifier, eliminating any room for guesswork and ensuring the GPT-based model generates the desired output.
The candidate prompt lacks specific details about what 'total orders' mean - it's unclear whether it's the total number of orders across all customers, or per customer. It also does not specify the time frame for these orders. The lack of column names to be used in the query may lead the GPT model to guess the user's intentions inaccurately. In contrast, the better prompt provides explicit information about the 'orders' table, the columns to be used, the condition for the 'WHERE' clause, and the requirement to group the results by 'customer_id'. This detail leaves no room for ambiguity and allows the GPT-based model to generate the precise SQL query the user requires.
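The query the better prompt describes can be sketched as below. The schema, column names, and data are entirely illustrative, since the actual table definition isn't shown here:

```python
import sqlite3

# Illustrative schema: an 'orders' table with per-order rows; the query
# counts orders per customer, grouped by customer_id as the better prompt
# specifies.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL)"
)
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 10, 5.0), (2, 10, 7.5), (3, 11, 3.0)],
)

rows = conn.execute(
    "SELECT customer_id, COUNT(*) AS total_orders "
    "FROM orders GROUP BY customer_id ORDER BY customer_id"
).fetchall()
print(rows)  # [(10, 2), (11, 1)]
```

Note how "total orders per customer" and "total orders overall" produce entirely different queries; that is exactly the ambiguity the better prompt removes.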
The candidate prompt is vague and lacks critical details for the task. It doesn't specify the type of classification task, which could be binary, multi-class, or multi-label. It also fails to mention what model is to be used, in this case, BERT. The details about the specific steps of a training loop (loading data, sending data to device, forward pass, calculating loss, backpropagation, zeroing gradients, updating model parameters) are also missing. The better prompt provides all these details, thus guiding the GPT-based model to provide a precise and accurate solution for training a binary sentiment analysis classifier using BERT in PyTorch.
The candidate prompt, although containing similar instructions to the better one, lacks a clear sequence and structure for the task. Also, it doesn't provide specific instructions like ensuring random distribution of nodes and edges, highlighting the node with the highest degree during the plotting phase, and explicitly instructing to print the node with the highest degree. On the other hand, the better prompt breaks down the task into detailed and sequential steps, making it clearer for the GPT model to understand and generate the expected output. It also provides more explicit instructions for each step, resulting in a more precise output.
The candidate prompt vaguely tells the GPT model to 'figure out a way' to use the inputs, which may lead to the model guessing and possibly failing to make the right connections. It doesn't specify the format of the input or how to handle the first chunk of the document, which might not have an accompanying graph. The better prompt, however, gives a detailed procedure on how to handle the input, what to do when there is no accompanying graph (first chunk), and how to handle successive chunks. It ensures the model understands that it needs to consider the context provided by the previous graph. This clarity and structure reduce the model's need to guess and increases the chances of getting the expected output.
The candidate prompt is lacking in specifying that the model needs to understand the existing hierarchy and relations in the graph, and how to position the newly extracted information. This can lead the model to guess or make errors in placing the new information in the graph. The better prompt is more detailed and instructive; it specifies that the model has to understand the hierarchy and relations in the existing graph before adding new information. It further instructs the model on how to handle the new information, either by finding its position in the existing hierarchy or by adding a new level. This level of detail decreases the chances of the model making incorrect guesses and increases the likelihood of obtaining the desired output.
The candidate prompt, although detailed, doesn't provide enough information about how to process the existing graph, the way new entities are to be integrated into it, or the expected output format. This vagueness may lead the AI model to guess how to perform the task, leading to incorrect or inconsistent outputs. The better prompt clearly outlines all the necessary steps and expectations. It explicitly asks the model to understand the existing graph's hierarchy and relationships before integrating new information. It also sets clear instructions for handling new information from the document chunk. The formatting of the input is made clear with '###Example###', '###Graph###' separators, making it easier for the model to understand and process the inputs. This level of detail and specificity reduces the amount of guesswork the model has to do, leading to more accurate and consistent outputs.
Format Specification: The candidate prompt doesn't specify the format of the narrative or existing graph, leading to ambiguity. The better prompt clearly outlines the expected input format, reducing potential confusion for the model.
Duplicate Nodes: The candidate prompt doesn't provide sufficient instruction on handling duplicate nodes. The better prompt clarifies this, ensuring the model doesn't create unnecessary duplicates or omit important nodes.
Handling Multiple Starting/Ending Points: The candidate prompt lacks instruction on how to handle narratives with multiple starting or ending points, risking arbitrary and incorrect graph construction. The better prompt clearly guides the model to ensure the final graph always has one initial and one final node, maintaining the logical flow of information.
Partial Graphs: The candidate prompt fails to mention the possibility of partial graphs and the associated reasoning required. The better prompt provides explicit instructions to guide the model in handling and updating partial graphs coherently.
Graph Compilation: Lastly, the instructions on compiling the updated graph state are vague in the candidate prompt, potentially leading to omissions. The better prompt emphasizes the inclusion of all nodes and edges, both existing and new, to create a complete, sequential graph.
Lack of Specificity: The candidate prompt is not specific about the type of image input (e.g., file path, URL, numpy array) or the format of the output. It leaves the model to guess, which may not be accurate. In contrast, the better prompt specifies that the image will be a numpy array and clearly defines the expected output format.
Missing Details: The candidate prompt does not mention the use of pre-trained models, which might lead to unnecessary confusion or complexity. The better prompt explicitly states that a pre-trained model should be used.
Function Signature: The better prompt provides a detailed function signature with parameter and return types, while the candidate prompt does not.
Lack of Guidance: The candidate prompt provides no guidance on how to handle object detection specifics such as bounding box coordinates and confidence scores. The better prompt, on the other hand, guides the model to return these details.
Lack of Specificity: The candidate prompt is not specific about the type of data input (e.g., file path, pandas DataFrame, numpy array) or the format of the output. It leaves the model to guess, which may not be accurate. In contrast, the better prompt specifies that the time series data will be a numpy array and clearly defines the expected output format.
Missing Details: The candidate prompt does not mention how to split the data into training and testing sets, which might lead to unnecessary confusion or complexity. The better prompt explicitly states how to do this.
Function Signature: The better prompt provides a detailed function signature with parameter and return types, while the candidate prompt does not.
Lack of Guidance: The candidate prompt provides no guidance on what kind of neural network to use. The better prompt, on the other hand, guides the model to use a Recurrent Neural Network with LSTM layers.
While this prompt gives the model an idea of what task it should perform, it's not explicit about the format of the output, which could lead to inconsistencies in the results. It also doesn't clearly state how the model should handle cases of no solutions or infinite solutions, which could lead to unclear and undefined outputs in such cases.
This prompt is better as it's explicit about the input and output formats, which ensures consistent results. It also clearly instructs the model on how to handle cases of no solutions or infinite solutions, ensuring that these edge cases are handled correctly. The use of a few-shot learning example provides the model with a clear idea of what the output should look like, helping it to better understand the task.
The candidate prompt, in this case, involves the model attempting to solve the entire problem at once. While this may work for simpler tasks, in complex scenarios it might lead to less accurate or nonsensical outputs. Breaking the problem down would make the reasoning more explicit and easier for the model to handle. The 'Chain of Thought' approach is better in this context because it makes the computation more manageable by breaking it down into simpler steps. This approach can be particularly useful for complex mathematical problems as it allows the model to focus on one part of the problem at a time. It also provides a clear and organized structure, making it easier for the user to follow the model's reasoning process.
This candidate prompt expects the model to generate HTML/CSS code for the whole form in a single go. This could lead to less organized code, and it might be difficult for the model to handle all the requirements at once. Breaking the problem down can make the process more manageable and the code easier to understand. By dividing the problem into two parts, we make it easier for the model to tackle each part effectively. First, it focuses on generating the HTML structure of the form, and then it applies CSS to style it. This approach makes it easier to debug and understand the code, and it also makes the problem more manageable for the model.
The prompt presents a complex real-world programming task involving web development, database management, and user interaction. It ensures that the model understands the specific requirements, functionality, and features needed to implement a comprehensive flight reservation system.
By using chain-of-thought prompting, we can guide the model to break down the word problem into logical steps and provide a step-by-step solution.
The prompt describes a complex web application but lacks specific instructions on how to approach each functionality. Breaking it down into smaller steps can help clarify the implementation process and make it more manageable.
The prompt describes a complex generative AI use case involving a virtual financial assistant for personalized financial planning and investment advice. The task requires gathering detailed user information, generating comprehensive financial plans, considering various factors, maintaining regulatory compliance, ensuring data privacy, and continuously adapting to changing circumstances. The extended prompt provides a clearer understanding of the scope and requirements of the virtual financial assistant.
The candidate prompt lacks the specific instructions and details necessary for the model to generate a meaningful sales report. It is vague, doesn't mention which MS Excel functions to use or what specific calculations are needed, and leaves the model guessing about the desired output, which may result in incomplete or incorrect responses. The better prompt is self-sufficient and specific: it names the MS Excel functions to use, guides the user through calculating total revenue and average revenue, includes the instruction for creating a pivot table, and states the desired output format. It leaves no room for guessing and ensures that the model produces a complete and accurate sales report.
The better prompt provides clear instructions on how to use the VLOOKUP function in MS Excel. It specifies the purpose of the function (retrieving product names) and explains how to match the values in two different tables using a common identifier (product ID). The prompt also includes the desired output format, which is adding the product names in a new column. By following this prompt, the model can generate the necessary formula and populate the "Product Name" column accurately.
The better prompt provides specific instructions and explicitly mentions the use of the VLOOKUP function. It clarifies the purpose of the function (matching and retrieving product names) and provides step-by-step guidance on how to accomplish the task. By following this prompt, the model can generate the necessary formula using VLOOKUP to retrieve and populate the product names accurately.
The better prompt clearly defines the scenario, role, and objective of the social engineering exercise. It emphasizes the need for authorization and responsible conduct. The prompt includes specific instructions on crafting a simulated phishing email and creating a landing page to test employees' security awareness. It also emphasizes the importance of educating employees about the purpose and implications of the exercise, ensuring it is conducted in a controlled environment. The candidate prompt is vague and lacks specific instructions and context. It does not provide any guidance on the proper conduct or authorization required for the social engineering exercise. The prompt leaves the model guessing about the purpose, scope, and responsible execution of the attack. As a result, the generated response may be incomplete, inappropriate, or unethical.
While the details provided in the prompt might be accurate and important, they could potentially confuse the AI model due to their verbosity and complexity. The model might lose focus on the main task, which is simply to identify the class with the highest probability.
The candidate prompt lacks clear instructions and expected output. It doesn't specify that the model should return a ranked list of recommendations, nor does it mention what factors should be taken into account when making these recommendations. On the other hand, the better prompt is clear about both what the system needs to do (provide a ranked list of 10 recommended products per customer) and how it should do it (by considering all available customer and product data). This clarity guides the model to produce the desired output.
The candidate prompt is not explicit about the necessary parameters required for calculating future value in MS Excel. It does not clarify the structure of the FV function in Excel, nor does it give an example of the desired output format. Conversely, the better prompt provides clear guidelines on the structure of the FV function, explains the meaning of each parameter, and provides an example of the expected output, giving the model a better understanding of what is expected. This makes it more likely that the generated output will meet the requirements.
The candidate prompt is too vague, as it only mentions the need for a poem with some romantic elements but doesn't provide any specific instructions on structure, length, or thematic elements. On the other hand, the better prompt provides a clear structure for the poem (four stanzas, four lines each, ABAB rhyme scheme), detailed thematic instructions (use of classic romantic imagery, feelings to be expressed), and a specific romantic context. These details help guide the model in creating a piece of writing that fulfills the specific requirements of the task.
The candidate prompt doesn't provide any specific information about what the Python function needs to do. It only specifies that the function is related to string operations, which covers a broad range of possible tasks. The better prompt, on the other hand, provides a clear and detailed specification of the function's expected behavior. It specifies the name of the function, the input it should take, the operation it should perform, the output it should return, and some edge cases it should handle. This level of detail guides the model to generate the desired Python function more accurately.
The candidate prompt is vague, with no clear instruction about the required task, the necessary parameters, the expected function name, or the libraries that should be used. It also doesn't specify how to handle the randomness or proportion in splitting the data. On the other hand, the better prompt provides precise instructions, detailing the function name, parameters, expected output, and specific library and function to use for the task. It specifies to set the `random_state` for reproducibility and dictates the proportion for the train-test split. This level of detail guides the model to generate the desired output without ambiguity.
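The reproducible split the better prompt pins down can be sketched as follows. The original asks for scikit-learn's `train_test_split` with `random_state` set; to keep this example dependency-free, a hand-rolled stdlib version with a fixed seed mimics that behavior, and the 80/20 proportion and seed value are illustrative:

```python
import random

# Reproducible 80/20 train-test split: a fixed seed plays the role of
# sklearn's random_state, so repeated runs produce the same split.
def split_data(items, test_size=0.2, random_state=42):
    shuffled = items[:]
    random.Random(random_state).shuffle(shuffled)
    n_test = int(len(shuffled) * test_size)
    return shuffled[n_test:], shuffled[:n_test]

train, test = split_data(list(range(10)))
print(len(train), len(test))  # 8 2
```

Had the prompt left the proportion and seed unspecified, two generated scripts could silently produce different, irreproducible splits.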
The candidate prompt in this case is vague and doesn't specify which library to use for web scraping, nor does it indicate what data needs to be extracted from the webpage. It leaves a lot of ambiguity and room for unnecessary assumptions. The better prompt, on the other hand, gives clear instructions about the specific task - extracting data from a table on a webpage using BeautifulSoup. It also provides an example function to guide the development of the new function, which helps to clarify the expected format and structure. This makes it easier for the model to generate relevant and accurate code.
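The table-extraction task described above can be sketched as below. The better prompt asks for BeautifulSoup; the stdlib `html.parser` is used here instead to keep the example self-contained, and the sample HTML is made up:

```python
from html.parser import HTMLParser

# Extracts table rows as lists of cell strings; a dependency-free stand-in
# for the BeautifulSoup-based function the better prompt requests.
class TableExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], None, False

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None
        elif tag in ("td", "th"):
            self._in_cell = False

    def handle_data(self, data):
        if self._in_cell:
            self._row.append(data.strip())

html = ("<table><tr><th>Name</th><th>Age</th></tr>"
        "<tr><td>Ada</td><td>36</td></tr></table>")
parser = TableExtractor()
parser.feed(html)
print(parser.rows)  # [['Name', 'Age'], ['Ada', '36']]
```

Specifying "a table on the webpage" plus an example function, as the better prompt does, is what tells the model to produce this row-of-cells structure rather than, say, raw HTML fragments.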
The candidate prompt is missing clarity in explaining the objective. While it provides some task details and uses examples, it doesn't explicitly state that the user should write a Python script to accomplish the task. It is also not very clear about the formatting requirements for the output, merely stating that the email addresses should be on new lines. In contrast, the better prompt is more specific about the objective of the task - extracting email addresses of certain domains using Python. It also clearly communicates the output formatting requirements. Furthermore, it provides a clear example for the user to follow, specifying both the expected input and output.
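The objective the better prompt states explicitly can be sketched like this. The domain list and sample text are illustrative:

```python
import re

# Extract email addresses belonging to the given domains and print them
# one per line, as the better prompt's formatting requirement describes.
def extract_emails(text, domains=("example.com",)):
    pattern = r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"
    return [e for e in re.findall(pattern, text)
            if e.split("@")[1] in domains]

text = "Contact alice@example.com or bob@other.org for details."
print("\n".join(extract_emails(text)))  # alice@example.com
```

The domain filter and the one-per-line output are exactly the details the candidate prompt left implicit.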
The candidate prompt does not provide a clear call to action. It is also vague about the nature of the task and doesn't make it explicit that the model is expected to generate SQL queries from English questions. On the other hand, the better prompt is very explicit about the task - converting English questions into SQL queries. It provides detailed information about the database and provides concrete examples of the expected inputs and outputs. It also specifies that the queries are to be generated from a 'student' database, providing the model with a context to guide its responses. This specificity and context provided by the better prompt make it much more likely that the model will be able to successfully complete the task.
The candidate prompt is ambiguous and doesn't provide a clear description of the task. It does not specify the format of the input data or the expected output. Furthermore, it provides only one example, which might not be enough for the model to understand the task. The better prompt, on the other hand, is explicit about the task and the format of the input and output. It mentions that the input will be a list of songs with their artists and the output should also be in the same format. This clear instruction helps the model to generate the correct output.
The candidate prompt doesn't make it clear what the rules are for creating edges between nodes. For instance, it doesn't specify whether an edge should be created for all types of relationships or only for certain types (like "friend"). The better prompt, however, provides a clear instruction to the model about when to create edges between nodes. It mentions explicitly that an edge should be created only if the relationship is a "friend". This helps the model to generate the correct adjacency list representation of the graph.
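The edge-creation rule the better prompt makes explicit can be sketched as below; the relationship tuples and names are hypothetical:

```python
# Build an adjacency list, creating an edge only when the relationship
# type is "friend" — the rule the better prompt states explicitly.
def build_adjacency(relationships):
    adj = {}
    for a, b, rel_type in relationships:
        if rel_type == "friend":
            adj.setdefault(a, []).append(b)
            adj.setdefault(b, []).append(a)
    return adj

rels = [("Ana", "Ben", "friend"),
        ("Ana", "Cal", "coworker"),
        ("Ben", "Cal", "friend")]
print(build_adjacency(rels))
# {'Ana': ['Ben'], 'Ben': ['Ana', 'Cal'], 'Cal': ['Ben']}
```

Without the rule, a model might just as reasonably add an edge for the "coworker" pair, producing a different graph.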
The candidate prompt is rather vague and doesn't explicitly guide the model about the different formats the dates could be in. It merely instructs the model to extract dates, but without clear guidance, the model could miss some dates, especially if they are in unusual or less common formats. The better prompt, on the other hand, explicitly instructs the model about the possible formats of the dates. This additional information helps the model to identify and extract dates in a more accurate and comprehensive manner. It removes any ambiguity for the model and guides it towards the correct extraction of dates.
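A sketch of what "explicitly list the possible formats" buys. The two formats assumed here, DD/MM/YYYY and YYYY-MM-DD, are illustrative; the point is that each named format becomes a concrete pattern:

```python
import re

# One pattern per date format the prompt enumerates; a format the prompt
# doesn't name simply won't be found, which is why listing them matters.
def extract_dates(text):
    patterns = [r"\b\d{2}/\d{2}/\d{4}\b",   # DD/MM/YYYY
                r"\b\d{4}-\d{2}-\d{2}\b"]   # YYYY-MM-DD
    dates = []
    for p in patterns:
        dates.extend(re.findall(p, text))
    return dates

text = "Invoiced on 03/04/2023, paid on 2023-04-10."
print(extract_dates(text))  # ['03/04/2023', '2023-04-10']
```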
The candidate prompt provides the task but lacks explicit instructions and examples to guide the model. It doesn't clarify what a named entity is, which can lead to imprecise or incorrect results. On the other hand, the better prompt clarifies the task clearly, providing more context about what named entities are. It gives examples of different kinds of named entities, which help the model to understand what it's expected to do. It sets a clear expectation about the output format and makes it easier for the model to follow the task correctly.
The candidate prompt simply states the task but does not provide any clarity on the expected output format, leaving it vague and potentially leading to inconsistent or incorrect results. On the other hand, the better prompt makes it clear that the model is expected to discern a mathematical function or pattern from the provided examples. It also provides a specific output format, guiding the model on how to structure its responses. The additional example further clarifies the task, setting a clearer expectation for the model's performance.
The candidate prompt does not clearly specify the task, how to use the example summaries, and what format the summary should be in. The better prompt, on the other hand, makes it clear that the model should learn from the given examples and apply that knowledge to generate a comprehensive summary for the provided document. The better prompt also clarifies the formatting of the examples and the final summary. It provides more information to the model and leads to a more accurate summary.
In the candidate prompt, the model is asked to "consider" the sequences, but there is no clear instruction that these sequences are examples from which the model should learn the pattern. Moreover, the expected format of the output is not clear from the candidate prompt. The better prompt explicitly instructs the model that it will be given sequences of numbers as examples and that it should find the pattern and predict the next number in the sequence. The format of the examples is also clearly laid out as Input and Output, providing clarity on how to format the response. The better prompt also gives clear instructions on the task to be performed on the test sequence, further reducing the scope of guessing by the model.
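A sketch of the explicit Input/Output sequence prompt described above; the example sequences are hypothetical.

```python
# A sketch of the explicit sequence-completion prompt: the examples
# are labeled as such, and the Input/Output format doubles as the
# output-format specification. The sequences are illustrative only.
prompt = """You will be given example sequences of numbers. Find the
pattern and predict the next number in the sequence.

Input: 2, 4, 6, 8
Output: 10

Input: 1, 2, 4, 8
Output: 16

Input: 5, 10, 15, 20
Output:"""
```

Ending on `Output:` leaves exactly one thing for the model to produce, in exactly one format.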
The candidate prompt may initially seem detailed, but it suffers from several issues. It lacks clear focus and specific instructions on what aspects of physical health, mental well-being, and longevity to cover. It encourages the inclusion of personal anecdotes and expert quotes without providing guidance on their relevance or credibility. The prompt also suggests a long word count and aims to engage readers of all ages, which further dilutes the focus of the blog post.
The candidate prompt, although longer, still lacks specific instructions and requirements for generating a high-quality training dataset for dialogue datasets. It fails to provide clear guidelines on dataset generation, question types, answer lengths, quality assurance, and diversity. The lack of specificity may result in a dataset that is incomplete, inconsistent, or inadequate for training dialogue models effectively. 
The candidate prompt is still relatively short and lacks specific instructions and requirements for generating a dataset for multi-class classification. Although it mentions different classes, features, and sample sizes, it does not provide clear guidelines on how to ensure diversity, avoid class imbalance issues, or maintain data quality. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate a suitable dataset for multi-class classification.
The candidate prompt is relatively short and lacks specific instructions and requirements for generating a dataset for song generation. Although it mentions lyrics, musical arrangements, and various genres, it does not provide clear guidelines on how to ensure meaningful lyrics, diverse musical arrangements, or maintain consistency and coherence in the songs. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate a suitable dataset for song generation.
The candidate prompt is relatively short and lacks specific instructions and requirements for generating a dataset for reinforcement learning. Although it mentions state-action pairs, rewards, next states, and scenario coverage, it does not provide clear guidelines on how to ensure a well-defined environment, capture diverse scenarios, or maintain a balance between exploration and exploitation. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate a suitable dataset for reinforcement learning.
The candidate prompt is relatively short and lacks specific instructions and requirements for paraphrasing. It only mentions the need for paraphrasing and maintaining the same meaning as the original sentences. However, it does not provide clear guidelines on how to ensure variations in wording, sentence structure, or phrasing. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate accurate and diverse paraphrases.
The candidate prompt is too short and lacks specific instructions and requirements for answering the question. It only mentions the need for a yes or no answer, but it does not provide clear guidelines on how to reason through the question or provide evidence for the answer. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate answer.
The candidate prompt is relatively short and lacks specific instructions and requirements for summarizing the text. Although it mentions the need for a concise summary that captures the main points of the text, it does not provide clear guidelines on how to determine the main points, which summarization model to use, or how to handle complex or technical text. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate summary.
The candidate prompt is relatively short and lacks specific instructions and requirements for generating a caption for the image. Although it mentions the need for a descriptive and informative caption that captures the essence of the image, it does not provide clear guidelines on how to determine the essence, which image captioning model to use, or how to handle complex or abstract images. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate caption.
The candidate prompt is relatively short and lacks specific instructions and requirements for generating a question for the text. Although it mentions the need for a relevant and contextually appropriate question that captures the essence of the text, it does not provide clear guidelines on how to determine the essence, which question generation model to use, or how to handle complex or technical text. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate question.
The candidate prompt is relatively short and lacks specific instructions and requirements for solving the problem. Although it mentions the need to find the date 10 days ago in MM/DD/YYYY format, it does not provide clear guidelines on how to calculate the date, which calendar system to use, or how to handle leap years. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate answer.
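For reference, the underlying calculation is straightforward once the calendar system and leap-year handling are pinned down: Python's `datetime` assumes the Gregorian calendar and accounts for leap years automatically.

```python
from datetime import date, timedelta

# Compute "the date 10 days ago" in MM/DD/YYYY format.
# Assumes the Gregorian calendar; timedelta arithmetic handles
# month boundaries and leap years for us.
def ten_days_ago(today: date) -> str:
    return (today - timedelta(days=10)).strftime("%m/%d/%Y")

# Fixed reference date crossing a leap-year February boundary:
print(ten_days_ago(date(2024, 3, 5)))  # 02/24/2024
```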
The candidate prompt is relatively short and lacks specific instructions and requirements for evaluating the plausibility of the sentence. Although it provides an example and a correct answer, it does not provide clear guidelines on how to reason through the sentence or how to handle ambiguous or complex sentences. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate answer.
The candidate prompt is relatively short and lacks specific instructions and requirements for generating a plan to complete the task. Although it mentions the need to find and deliver an energy bar to the user, it does not provide clear guidelines on how to find the energy bar, how to navigate to the user, or how to handle unexpected obstacles. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate plan.
The candidate prompt is relatively short and lacks specific instructions and requirements for answering the question. Although it provides an example of how to answer the question, it does not provide clear guidelines on how to reason through the question or provide evidence for the answer. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate answer.
The candidate prompt is relatively long and lacks clarity and conciseness. The question and answer are combined into a single prompt, making it difficult to understand the task and the reasoning behind the answer. The prompt also lacks specific instructions and requirements for answering the question, such as which rules to follow or which logic to apply. The prompt is verbose and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate answer.
The candidate prompt is too short and lacks specific instructions and requirements for the task. It only mentions the need to remove all but the first and last elements, but it does not provide clear guidelines on how to identify the first and last elements, which data structure to use, or how to handle edge cases. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate solution.
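A sketch of the task with the edge cases a better prompt would spell out (empty and single-element inputs); the function name is hypothetical.

```python
# Keep only the first and last elements of a list, handling the
# edge cases a better prompt would call out explicitly:
# lists with two or fewer elements are returned unchanged.
def first_and_last(items):
    if len(items) <= 2:
        return list(items)  # nothing to remove
    return [items[0], items[-1]]

print(first_and_last([3, 1, 4, 1, 5]))  # [3, 5]
print(first_and_last([7]))              # [7]
print(first_and_last([]))               # []
```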
The candidate prompt is too short and lacks specific instructions and requirements for the task. It only mentions two binary strings and their corresponding transformations. However, it does not provide clear guidelines on how to perform the transformation, which algorithm to use, or how to handle complex or long binary strings. The prompt is vague and leaves many crucial details to interpretation, making it difficult for the model to generate an accurate transformation. A better prompt should infer the underlying transformation from the given examples, state it explicitly, and require the output in a specific format such as `input:output`.
The "Bad Prompt" is very brief and lacks detailed instructions. It doesn't specify how to load the dataset, handle missing values, normalize the data, customize the plot, or save the plot. It also doesn't ask for any interpretation or analysis of the plot. This lack of detail may lead to a wide range of outputs, making it challenging to ensure that the model's output will meet the user's needs.
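A minimal sketch of the preprocessing steps the "Bad Prompt" leaves unstated (loading the data, handling missing values, normalizing); the inline CSV and column names are hypothetical, and the plot customization and saving steps are omitted here.

```python
import csv
import io

# Load a tiny hypothetical dataset, drop missing values, and
# normalize to [0, 1] -- the steps a detailed prompt would spell out.
raw = "name,value\na,10\nb,\nc,30\nd,20\n"

rows = list(csv.DictReader(io.StringIO(raw)))
values = [float(r["value"]) for r in rows if r["value"]]  # drop missing

lo, hi = min(values), max(values)
normalized = [(v - lo) / (hi - lo) for v in values]
print(normalized)  # [0.0, 1.0, 0.5]
```

Each of these steps (and the plot styling and output path that would follow) is a decision the model must otherwise guess at.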
The "Better Prompt" is detailed and specific. It sets the stage by providing context about the client, the spreadsheet, and the complex formula. It then breaks down the explanation into distinct parts, covering the concept of an array formula, the different conditions, the SUM function, and the practical application of the formula. This structure guides the model to produce a comprehensive and clear explanation.
The "Bad Prompt" is vague and lacks specificity on what kind of chart to create, how to combine the data with the Revenue Ledger, and what kind of insights are expected about "Sales Rep 1". This leaves the model with too many possibilities to guess from, and the output might not meet the actual task requirements.
The candidate prompt is short and vague. It doesn't specify which data to find, where to look for it, what to return, or where to put the result. This could lead to misunderstanding or incorrect use of the VLOOKUP function.
The candidate prompt is short and doesn't provide enough details. It doesn't specify what data should be used for the pivot chart, how the data should be grouped or summarized, or what kind of chart to create. This could lead to confusion or incorrect chart creation.
The candidate prompt is short and lacks clarity. It doesn't specify which cells contain the sales data, the kind of formula to use, or where to put the result. This could result in various interpretations of the task, leading to incorrect or incomplete responses.
The candidate prompt is short and lacks explicit instructions on how to create a pie chart in Excel. It doesn't specify what data should be used for the pie chart, where this data is located, or the steps needed to create the chart. This could lead to incorrect chart creation or confusion about the task.
The candidate prompt is short and lacks specific instructions for achieving the desired task. It doesn't provide any information about which cells to total, the formula to use, or how to input the formula into Excel. This could lead to a misunderstanding of the task or result in incorrect or incomplete responses.
The candidate prompt is broad and lacks specificity about what programming language is being used, what level of detail is expected in the flowchart, and what kind of outputs are acceptable (e.g., JSON, visual diagram). It could result in the model guessing about these details, resulting in inaccurate or mismatched outputs.
The candidate prompt doesn't provide enough specific details about the method of making tea, the granularity of steps to be included in the flowchart, or the tool to be used for visualizing the flowchart. This lack of details could lead to ambiguity and variance in the responses from the model.
The bad prompt in this case does not provide a clear directive on how to structure the response. It does mention the need to find a place with a lot of people, but it doesn't specifically ask for the reasoning behind the selection. This lack of specificity can lead the model to simply pick an option without explaining the rationale behind the choice. The better prompt, on the other hand, instructs the model to respond in a specific format and emphasizes the need for reasoning, ensuring a comprehensive and well-explained answer.
