The Prompt Engineering Playbook for Programmers
《程序员提示工程实战指南》
Turn AI coding assistants into more reliable development partners
让 AI 编程助手成为更可靠的开发伙伴
Developers are increasingly relying on AI coding assistants to accelerate our daily workflows. These tools can autocomplete functions, suggest bug fixes, and even generate entire modules or MVPs. Yet, as many of us have learned, the quality of the AI’s output depends largely on the quality of the prompt you provide. In other words, prompt engineering has become an essential skill. A poorly phrased request can yield irrelevant or generic answers, while a well-crafted prompt can produce thoughtful, accurate, and even creative code solutions. This write-up takes a practical look at how to systematically craft effective prompts for common development tasks.
开发者们日益依赖 AI 编程助手来加速日常工作流程。这些工具能自动补全函数、建议错误修复,甚至生成完整模块或最小可行产品。然而正如我们许多人已经认识到的,AI 输出的质量很大程度上取决于你提供的提示质量。换言之,提示工程已成为一项必备技能。一个表述不当的请求可能产生无关或泛泛的答案,而精心设计的提示则能生成深思熟虑、准确甚至富有创意的代码解决方案。本文将从实践角度探讨如何系统地为常见开发任务构建有效提示。
AI pair programmers are powerful but not magical – they have no prior knowledge of your specific project or intent beyond what you tell them or include as context. The more information you provide, the better the output. We’ll distill key prompt patterns, repeatable frameworks, and memorable examples that have resonated with developers. You’ll see side-by-side comparisons of good vs. bad prompts with actual AI responses, along with commentary to understand why one succeeds where the other falters. Here’s a cheat sheet to get started:
AI 结对编程工具虽然强大但并非魔法——它们对你具体项目或意图的认知完全取决于你提供的信息或上下文。你提供的信息越丰富,输出结果就越好。我们将提炼关键提示模式、可复用的框架结构以及开发者们验证过的典型案例。你将看到优质与劣质提示的实际 AI 响应对比,并附有解析说明为何前者成功而后者失败。以下是入门速查指南:
Foundations of effective code prompting
高效代码提示的基础
Prompting an AI coding tool is somewhat like communicating with a very literal, sometimes knowledgeable collaborator. To get useful results, you need to set the stage clearly and guide the AI on what you want and how you want it.
向 AI 编程工具发出提示,有点像与一个非常字面化、有时知识渊博的协作者交流。要获得有用的结果,你需要清晰地设定场景,并引导 AI 了解你想要什么以及如何实现。
Below are foundational principles that underpin all examples in this playbook:
以下是贯穿本手册所有示例的基本原则:
Provide rich context. Always assume the AI knows nothing about your project beyond what you provide. Include relevant details such as the programming language, framework, and libraries, as well as the specific function or snippet in question. If there’s an error, provide the exact error message and describe what the code is supposed to do. Specificity and context make the difference between vague suggestions and precise, actionable solutions . In practice, this means your prompt might include a brief setup like: “I have a Node.js function using Express and Mongoose that should fetch a user by ID, but it throws a TypeError. Here’s the code and error…”. The more setup you give, the less the AI has to guess.
提供丰富上下文。始终假设 AI 对你的项目一无所知,除非你主动提供信息。包括相关细节,如编程语言、框架和库,以及具体的函数或代码片段。如果存在错误,提供确切的错误信息并描述代码预期功能。具体性和上下文决定了输出是模糊建议还是精确可行的解决方案。实践中,你的提示可能包含类似这样的简要设置:"我有一个使用 Express 和 Mongoose 的 Node.js 函数,本应通过 ID 获取用户,但抛出 TypeError。以下是代码和错误..."。你提供的背景信息越多,AI 需要猜测的内容就越少。
Be specific about your goal or question. Vague queries lead to vague answers. Instead of asking something like “Why isn’t my code working?”, pinpoint what insight you need. For example: “This JavaScript function is returning undefined instead of the expected result. Given the code below, can you help identify why and how to fix it?” is far more likely to yield a helpful answer. One prompt formula for debugging is: “It’s expected to do [expected behavior] but instead it’s doing [current behavior] when given [example input]. Where is the bug?”. Similarly, if you want an optimization, ask for a specific kind of optimization (e.g. “How can I improve the runtime performance of this sorting function for 10k items?”). Specificity guides the AI’s focus.
明确你的目标或问题。模糊的提问只会得到模糊的回答。与其问“为什么我的代码不工作?”,不如明确指出你需要什么具体信息。例如:“这个 JavaScript 函数返回的是 undefined 而不是预期结果。根据下面的代码,能否帮我找出原因并提供修复方案?”这样的提问更有可能获得有用的答案。调试时可采用这个提问模板:“预期行为是[预期行为],但输入[示例输入]时实际行为是[当前行为]。问题出在哪里?”。同样,若需要优化,应明确优化类型(如“如何改进这个排序函数处理 1 万条数据时的运行时性能?”)。具体性能够引导 AI 的关注方向。
Break down complex tasks. When implementing a new feature or tackling a multi-step problem, don’t feed the entire problem in one gigantic prompt. It’s often more effective to split the work into smaller chunks and iterate. For instance, “First, generate a React component skeleton for a product list page. Next, we’ll add state management. Then, we’ll integrate the API call.” Each prompt builds on the previous. It’s often not advised to ask for a whole large feature in one go; instead, start with a high-level goal and then iteratively ask for each piece. This approach not only keeps the AI’s responses focused and manageable, but also mirrors how a human would incrementally build a solution.
分解复杂任务。在实现新功能或处理多步骤问题时,不要一次性输入整个问题作为庞大提示。更有效的方法是将工作拆分为小块并迭代完成。例如:"首先生成一个产品列表页的 React 组件骨架,接着添加状态管理,然后集成 API 调用。"每个提示都建立在前一个的基础上。通常不建议一次性要求实现整个大型功能,而应从高层次目标开始,然后逐步请求每个组成部分。这种方法不仅能保持 AI 响应的专注性和可管理性,还模拟了人类逐步构建解决方案的方式。
Include examples of inputs/outputs or expected behavior. If you can illustrate what you want with an example, do it. For example, “Given the array [3,1,4], this function should return [1,3,4].” Providing a concrete example in the prompt helps the AI understand your intent and reduces ambiguity. It’s akin to giving a junior developer a quick test case – it clarifies the requirements. In prompt engineering terms, this is sometimes called “few-shot prompting,” where you show the AI a pattern to follow. Even one example of correct behavior can guide the model’s response significantly.
包含输入/输出示例或预期行为说明。如能通过示例说明需求,请直接展示。例如:"给定数组[3,1,4],该函数应返回[1,3,4]"。在提示词中提供具体示例有助于 AI 理解意图并减少歧义,这就像给初级开发者一个快速测试用例——能明确需求。在提示工程术语中,这种方法有时被称为"少样本提示",即向 AI 展示需要遵循的模式。即使仅提供一个正确行为示例,也能显著引导模型的响应方向。
Leverage roles or personas. A powerful technique popularized in many viral prompt examples is to ask the AI to “act as” a certain persona or role. This can influence the style and depth of the answer. For instance, “Act as a senior React developer and review my code for potential bugs” or “You are a JavaScript performance expert. Optimize the following function.” By setting a role, you prime the assistant to adopt the relevant tone – whether it’s being a strict code reviewer, a helpful teacher for a junior dev, or a security analyst looking for vulnerabilities. Community-shared prompts have shown success with this method, such as “Act as a JavaScript error handler and debug this function for me. The data isn’t rendering properly from the API call.” In our own usage, we must still provide the code and problem details, but the role-play prompt can yield more structured and expert-level guidance.
善用角色或人物设定。许多热门提示案例中流行的一种强大技巧,就是要求 AI"扮演"某个特定角色或身份。这能显著影响回答的风格与深度。例如:"以资深 React 开发者的身份审查我的代码,找出潜在缺陷"或"你是一名 JavaScript 性能专家,请优化以下函数"。通过设定角色,你能引导助手采用相应的语气——无论是作为严格的代码审查员、辅导初级开发者的耐心导师,还是寻找漏洞的安全分析师。社区分享的提示已证明这种方法的有效性,比如"扮演 JavaScript 错误处理器,帮我调试这个函数。API 调用返回的数据无法正确渲染"。实际使用时,我们仍需提供具体代码和问题细节,但角色扮演式提示往往能获得更具结构性且专业水准的指导。
Iterate and refine the conversation. Prompt engineering is an interactive process, not a one-shot deal. Developers often need to review the AI’s first answer and then ask follow-up questions or make corrections. If the solution isn’t quite right, you might say, “That solution uses recursion, but I’d prefer an iterative approach – can you try again without recursion?” Or, “Great, now can you improve the variable names and add comments?” The AI remembers the context in a chat session, so you can progressively steer it to the desired outcome. The key is to view the AI as a partner you can coach – progress over perfection on the first try .
不断迭代优化对话过程。提示工程是一个交互式流程,而非一蹴而就的工作。开发者通常需要先审视 AI 的首次回答,继而提出跟进问题或进行修正。若解决方案不够理想,可以说:"这个方案使用了递归,但我更倾向迭代实现——能否改用非递归方式重写?"或者:"很好,现在能否优化变量命名并添加注释?"AI 会记住聊天会话的上下文,因此您可以逐步引导其达成预期目标。关键在于将 AI 视为可调教的合作伙伴——追求渐进完善,而非首次尝试就要求完美。
Maintain code clarity and consistency. This last principle is a bit indirect but very important for tools that work on your code context. Write clean, well-structured code and comments, even before the AI comes into play. Meaningful function and variable names, consistent formatting, and docstrings not only make your code easier to understand for humans, but also give the AI stronger clues about what you’re doing. If you show a consistent pattern or style, the AI will continue it. Treat these tools as extremely attentive junior developers – they take every cue from your code and comments.
保持代码清晰与一致性。这一最后原则虽有些间接,但对处理代码上下文的工具至关重要。在 AI 介入前,就应编写整洁、结构良好的代码和注释。有意义的功能和变量命名、一致的格式以及文档字符串不仅便于人类理解,也为 AI 提供了更明确的线索。如果你展示出一致的模式或风格,AI 会延续这种模式。将这些工具视为极其专注的初级开发人员——它们会从你的代码和注释中捕捉每一个线索。
With these foundational principles in mind, let’s dive into specific scenarios. We’ll start with debugging, perhaps the most immediate use-case: you have code that’s misbehaving, and you want the AI to help figure out why.
基于这些基本原则,让我们深入具体场景。首先从调试开始,这或许是最直接的应用场景:当代码出现异常行为时,你希望 AI 协助找出原因。
Prompt patterns for debugging code
调试代码的提示模式
Debugging is a natural fit for an AI assistant. It’s like having a rubber-duck that not only listens, but actually talks back with suggestions. However, success largely depends on how you present the problem to the AI. Here’s how to systematically prompt for help in finding and fixing bugs:
调试是 AI 助手的天然用武之地。就像拥有一个不仅能倾听、还能给出建议的橡皮鸭。然而,成功很大程度上取决于你如何向 AI 呈现问题。以下是系统性地寻求帮助以发现和修复错误的提示方法:
1. Clearly describe the problem and symptoms. Begin your prompt by describing what is going wrong and what the code is supposed to do. Always include the exact error message or incorrect behavior. For example, instead of just saying “My code doesn’t work,” you might prompt: “I have a function in JavaScript that should calculate the sum of an array of numbers, but it’s returning NaN (Not a Number) instead of the actual sum. Here is the code: [include code]. It should output a number (the sum) for an array of numbers like [1,2,3], but I’m getting NaN. What could be the cause of this bug?” This prompt specifies the language, the intended behavior, the observed wrong output, and provides the code context – all crucial information. Providing a structured context (code + error + expected outcome + what you’ve tried) gives the AI a solid starting point . By contrast, a generic question like “Why isn’t my function working?” yields meager results – the model can only offer the most general guesses without context.
1. 清晰描述问题及症状。在提问时,首先说明出现的错误现象及代码预期功能。务必包含确切的错误信息或异常行为。例如,不要简单说"我的代码不工作",而应该这样提问:"我的 JavaScript 函数本应计算数字数组的总和,但返回的是 NaN(非数字)而非实际求和结果。代码如下:[附代码]。对于[1,2,3]这样的数字数组,本应输出数字(求和结果),但我得到的是 NaN。这个错误可能是什么原因导致的?" 该提问明确了编程语言、预期行为、实际错误输出,并提供了代码上下文——这些都是关键信息。提供结构化上下文(代码+错误+预期结果+已尝试方案)能为 AI 奠定良好的分析基础。相比之下,像"为什么我的函数不工作?"这样泛泛的问题收效甚微——缺乏上下文时模型只能给出最笼统的猜测。
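To make this concrete, here is a hypothetical sketch of the kind of buggy sum function that prompt describes (not taken from any real codebase; an uninitialized accumulator is one common way a numeric sum ends up as NaN):
// Hypothetical buggy function matching the prompt above
function sumArray(numbers) {
  let total; // bug: total starts as undefined, and undefined + 1 is NaN
  for (let i = 0; i < numbers.length; i++) {
    total += numbers[i];
  }
  return total;
}
console.log(sumArray([1, 2, 3])); // NaN instead of 6
Pairing a snippet like this with the detailed prompt gives the assistant everything it needs to connect the NaN symptom to the uninitialized accumulator.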
2. Use a step-by-step or line-by-line approach for tricky bugs. For more complex logic bugs (where no obvious error message is thrown but the output is wrong), you can prompt the AI to walk through the code’s execution. For instance: “Walk through this function line by line and track the value of total at each step. It’s not accumulating correctly – where does the logic go wrong?” This is an example of a rubber duck debugging prompt – you’re essentially asking the AI to simulate the debugging process a human might do with prints or a debugger. Such prompts often reveal subtle issues like variables not resetting or incorrect conditional logic, because the AI will spell out the state at each step. If you suspect a certain part of the code, you can zoom in: “Explain what the filter call is doing here, and if it might be excluding more items than it should.” Engaging the AI in an explanatory role can surface the bug in the process of explanation.
2. 对于疑难 bug 采用分步或逐行排查法。面对更复杂的逻辑错误(未抛出明显错误但输出异常时),可要求 AI 逐步跟踪代码执行过程。例如:"请逐行分析此函数并记录 total 变量在每一步的值。它的累加逻辑有问题——问题出在哪里?" 这属于橡皮鸭调试法的提示方式——本质上是让 AI 模拟人类通过打印语句或调试器进行的排错流程。此类提示常能暴露诸如变量未重置或条件逻辑错误等微妙问题,因为 AI 会逐步阐明每个状态。若怀疑特定代码段,可聚焦询问:"请解释此处 filter 调用的作用,是否存在过滤条件过严的情况?" 让 AI 扮演解释者角色往往能在阐述过程中暴露问题所在。
3. Provide minimal reproducible examples when possible. Sometimes your actual codebase is large, but the bug can be demonstrated in a small snippet. If you can extract or simplify the code that still reproduces the issue, do so and feed that to the AI. This not only makes it easier for the AI to focus, but also forces you to clarify the problem (often a useful exercise in itself). For example, if you’re getting a TypeError in a deeply nested function call, try to reproduce it with a few lines that you can share. Aim to isolate the bug with the minimum code, make an assumption about what’s wrong, test it, and iterate . You can involve the AI in this by saying: “Here’s a pared-down example that still triggers the error [include snippet]. Why does this error occur?” By simplifying, you remove noise and help the AI pinpoint the issue. (This technique mirrors the advice of many senior engineers: if you can’t immediately find a bug, simplify the problem space. The AI can assist in that analysis if you present a smaller case to it.)
3. 尽可能提供最小可复现示例。有时你的实际代码库很大,但错误可能只需一小段代码就能复现。如果能提取或简化出仍能重现问题的代码片段,就将其提供给 AI。这不仅能让 AI 更专注,还能迫使你厘清问题(这本身通常就是有益的练习)。例如,如果在深层嵌套函数调用中出现 TypeError,尝试用几行可分享的代码复现它。力求用最简代码隔离错误,对问题原因做出假设并测试验证,如此迭代。你可以这样让 AI 参与分析:"这是一个精简后仍能触发错误的示例[附代码片段]。为什么会出现这个错误?"通过简化,你消除了干扰信息,帮助 AI 准确定位问题。(这种技巧与许多资深工程师的建议不谋而合:如果无法立即找到错误,就缩小问题范围。当你向 AI 呈现更小的案例时,它能协助完成这种分析。)
4. Ask focused questions and follow-ups. After providing context, it’s often effective to directly ask what you need, for example: “What might be causing this issue, and how can I fix it?” . This invites the AI to both diagnose and propose a solution. If the AI’s first answer is unclear or partially helpful, don’t hesitate to ask a follow-up. You could say, “That explanation makes sense. Can you show me how to fix the code? Please provide the corrected code.” In a chat setting, the AI has the conversation history, so it can directly output the modified code. If you’re using an inline tool like Copilot in VS Code or Cursor without a chat, you might instead write a comment above the code like // BUG: returns NaN, fix this function and see how it autocompletes – but in general, the interactive chat yields more thorough explanations. Another follow-up pattern: if the AI gives a fix but you don’t understand why, ask “Can you explain why that change solves the problem?” This way you learn for next time, and you double-check that the AI’s reasoning is sound.
4. 提出具体问题并持续追问。在提供背景信息后,直接询问需求往往更高效,例如:"这个问题可能是什么原因导致的?该如何修复?"——这样能引导 AI 同时进行问题诊断和解决方案建议。若 AI 的首轮回复不够清晰或仅部分有用,不妨继续追问:"这个解释很合理,能否展示具体修复代码?请提供修正后的代码。"在聊天场景中,AI 能基于对话历史直接输出修改后的代码。若使用 VS Code 中的 Copilot 或 Cursor 等行内工具(非聊天模式),则可在代码上方添加注释如// BUG: 返回 NaN,修复此函数,观察其自动补全效果——但通常交互式聊天能获得更详尽的解释。另一种追问模式:当 AI 给出修复方案但你未理解原理时,可询问"能否解释这个修改为何能解决问题?"如此既能积累经验,又可验证 AI 的推理是否合理。
Now, let’s illustrate these debugging prompt principles with a concrete example, showing a poor prompt vs. improved prompt and the difference in AI responses:
现在,让我们通过一个具体示例来演示这些调试提示原则,展示一个低效提示与优化提示的对比,以及 AI 响应的差异:
Debugging example: poor vs. improved prompt
调试示例:低效提示与改进提示对比
Imagine we have a simple Node.js function meant to convert a list of user objects to a lookup map by user ID. However, it’s throwing an error. Here’s the buggy code:
假设我们有一个简单的 Node.js 函数,用于将用户对象列表按用户 ID 转换为查找映射表,但它出现了错误。以下是存在问题的代码:
// Buggy function: converts array of users to a map by ID
function mapUsersById(users) {
const userMap = {};
for (let i = 0; i <= users.length; i++) {
// Note: using <= will go out of bounds on the last iteration
const user = users[i];
userMap[user.id] = user;
}
return userMap;
}
// Example usage:
const result = mapUsersById([{ id: 1, name: "Alice" }]);
This code is supposed to iterate through the users array and build an object where keys are user IDs. However, there is a bug: the loop uses i <= users.length instead of i < users.length. This will cause users[i] to be undefined on the last iteration, likely leading to a runtime error (e.g., attempting to access property id of undefined). Let’s see how an AI assistant might respond to two different prompts about this issue.
这段代码本应遍历用户数组并构建一个以用户 ID 为键的对象。但存在一个错误:循环条件使用了 i <= users.length 而非 i < users.length。这将导致最后一次循环时 users[i]为 undefined,很可能会引发运行时错误(例如尝试访问 undefined 的 id 属性)。我们来看看 AI 助手对这个问题两种不同提示的响应差异。
❌ Poor Prompt: “Why isn’t my mapUsersById function working?”
❌ 低效提示:"为什么我的 mapUsersById 函数不工作?"
AI’s response to the poor prompt (simulated):
AI 对劣质提示的回应(模拟):
It’s hard to say without more information. Are you sure the function is being called with a proper array of users? Perhaps the issue is that users is empty or not an array, causing the loop not to execute properly. It could also be that there’s an error when setting the property on userMap. Make sure that each user has an id property. If users is undefined or null, that would also cause it to not work.
很难说,没有更多信息。你确定函数是用正确的用户数组调用的吗?可能是用户数组为空或不是数组,导致循环无法正常执行。也可能是设置 userMap 属性时出错。确保每个用户都有 id 属性。如果 users 是 undefined 或 null,也会导致它不工作。
Analysis: This answer is quite vague. The prompt gave no details except the function name, so the AI is guessing at generic causes (“maybe the array is empty or not an array”, “make sure each user has an id”). It did not identify the actual bug. This is the kind of response you get when the question is underspecified – the model can only regurgitate common debugging tips. As a developer, this isn’t very helpful because it doesn’t address the real mistake. The lack of context and specificity in the prompt yields a generic answer (essentially the AI equivalent of “have you tried turning it off and on again?”). This aligns with the community advice that vague prompts lead to irrelevant or generic answers .
分析:这个回答相当模糊。提示中除了函数名外没有提供任何细节,所以 AI 只能猜测一些常见原因(“可能数组为空或不是数组”,“确保每个用户都有 id”)。它没有识别出实际的错误。这就是问题描述不明确时得到的回应——模型只能复述常见的调试建议。作为开发者,这帮助不大,因为它没有解决真正的错误。提示中缺乏上下文和具体性导致了泛泛而谈的回答(本质上相当于 AI 版的“你试过关机重启吗?”)。这与社区建议一致:模糊的提示会导致无关或笼统的答案。
✅ Improved Prompt: “I have a JavaScript function mapUsersById that should convert an array of user objects into a map (object) keyed by user ID. However, it throws an error when I run it. For example, when I pass [ {id: 1, name: "Alice"} ], I get TypeError: Cannot read property 'id' of undefined. Here is the function code:
✅ 改进后的提示:“我有一个 JavaScript 函数 mapUsersById,它应该将用户对象数组转换为按用户 ID 作为键的映射(对象)。但运行时却报错。例如,当我传入[ {id: 1, name: "Alice"} ]时,出现 TypeError: Cannot read property 'id' of undefined 错误。以下是函数代码:
function mapUsersById(users) {
const userMap = {};
for (let i = 0; i <= users.length; i++) {
const user = users[i];
userMap[user.id] = user;
}
return userMap;
}
It should return { "1": {id: 1, name: "Alice"} }. What is the bug and how can I fix it?”
它应该返回{ "1": {id: 1, name: "Alice"} }。请问这个 bug 是什么?该如何修复?”
AI’s response to the improved prompt (simulated):
AI 对改进后提示的模拟回复:
*The error Cannot read property 'id' of undefined suggests that at some point, user is undefined, and then you try to access user.id. Looking at your loop, the issue is with the loop bounds. You are using i <= users.length. If users.length is 1 (for one user in the array), the loop runs with i = 0 and i = 1. When i is 1, users[1] is undefined (since the only index in the array is 0). Thus user becomes undefined on that iteration, and user.id throws a TypeError. The fix is to change the loop condition to i < users.length so that i goes from 0 to users.length-1. For example:
*错误信息 Cannot read property 'id' of undefined 表明在某个时刻 user 是 undefined,而你尝试访问 user.id。查看循环部分,问题出在循环边界条件上。你使用了 i <= users.length。如果 users.length 为 1(数组中只有一个用户),循环会以 i = 0 和 i = 1 运行。当 i 为 1 时,users[1]是 undefined(因为数组中唯一的索引是 0)。因此在该次迭代中 user 变为 undefined,访问 user.id 就会抛出 TypeError。修复方法是将循环条件改为 i < users.length,这样 i 的取值范围就是从 0 到 users.length-1。例如:
for (let i = 0; i < users.length; i++) {
const user = users[i];
userMap[user.id] = user;
}
This way, i will be 0 for the first (and only) element, and then the loop stops. With that change, your function will no longer hit an undefined user and should work as expected, returning {"1": {id: 1, name: "Alice"}} for the given input.*
这样,第一个(也是唯一一个)元素的索引将为 0,然后循环终止。经过此修改后,您的函数将不再遇到未定义的用户,并会按预期工作——对于给定的输入返回{"1": {id: 1, name: "Alice"}}。
Analysis: This response directly pinpoints the bug (using <= in the loop) and explains why it causes user to be undefined. It then provides the correct code solution (i < users.length) and confirms the expected outcome. The improved prompt succeeded because it followed our principles: it specified the language (JavaScript), described the function’s purpose, included the exact error message and a sample input, and even provided the code snippet in question. Notice how the AI used the error message as a clue to focus on the loop bounds – a targeted prompt enabled the AI to engage in true problem-solving, effectively simulating how a human debugger would think: “where could undefined come from? likely from the loop indexing”. This is a concrete demonstration of the benefit of detailed prompts.
分析:该回答直接指出了错误原因(循环中使用<=)并解释了为何会导致 user 未定义。随后提供了正确的代码解决方案(i < users.length)并确认了预期结果。改进后的提示之所以成功,是因为遵循了我们的原则:明确了编程语言(JavaScript)、描述了函数用途、包含了确切的错误信息和示例输入,甚至提供了相关代码片段。注意 AI 如何利用错误信息作为线索聚焦于循环边界——精准的提示使 AI 能够进行真正的故障排查,有效模拟人类调试者的思维过程:"undefined 可能来自哪里?很可能来自循环索引"。这生动展示了详细提示带来的优势。
Additional Debugging Tactics: Beyond identifying obvious bugs, you can use prompt engineering for deeper debugging assistance:
额外调试策略:除了识别明显错误外,你还可以运用提示工程获得更深层次的调试辅助:
Ask for potential causes. If you’re truly stumped, you can broaden the question slightly: “What are some possible reasons for a TypeError: cannot read property 'foo' of undefined in this code?” along with the code. The model might list a few scenarios (e.g. the object wasn’t initialized, a race condition, wrong variable scoping, etc.). This can give you angles to investigate that you hadn’t considered. It’s like brainstorming with a colleague.
询问潜在原因。若完全陷入困境,可稍微扩展问题范围:"这段代码出现'TypeError: cannot read property 'foo' of undefined'可能有哪些原因?"并附上代码。模型可能会列举若干场景(如对象未初始化、竞态条件、变量作用域错误等)。这能提供你未曾考虑过的排查方向,就像与同事进行头脑风暴。
“Ask the Rubber Duck” – i.e., explain your code to the AI. This may sound counterintuitive (why explain to the assistant?), but the act of writing an explanation can clarify your own understanding, and you can then have the AI verify or critique it. For example: “I will explain what this function is doing: [your explanation]. Given that, is my reasoning correct and does it reveal where the bug is?” The AI might catch a flaw in your explanation that points to the actual bug. This technique leverages the AI as an active rubber duck that not only listens but responds.
"向橡皮鸭提问"——即向 AI 解释你的代码。这看似违反直觉(为何要向助手解释?),但撰写解释的过程能厘清自身理解,随后可让 AI 验证或指正。例如:"我将解释这个函数的功能:[你的解释]。基于此,我的推理是否正确?是否揭示了错误所在?"AI 可能会发现你解释中的漏洞,从而指向实际错误。这项技术将 AI 转化为能主动回应的橡皮鸭,而不仅是被动倾听者。Have the AI create test cases. You can ask: “Can you provide a couple of test cases (inputs) that might break this function?” The assistant might come up with edge cases you didn’t think of (empty array, extremely large numbers, null values, etc.). This is useful both for debugging and for generating tests for future robustness.
让 AI 生成测试用例。你可以询问:"能否提供几个可能破坏这个函数的测试用例(输入)?" 助手可能会提出你没想到的边界情况(空数组、极大数值、空值等)。这对调试和生成未来健壮性测试都很有帮助。
Role-play a code reviewer. As an alternative to a direct “debug this” prompt, you can say: “Act as a code reviewer. Here’s a snippet that isn’t working as expected. Review it and point out any mistakes or bad practices that could be causing issues: [code]”. This sets the AI into a critical mode. Many developers find that phrasing the request as a code review yields a very thorough analysis, because the model will comment on each part of the code (and often, in doing so, it spots the bug). In fact, one prompt engineering tip is to explicitly request the AI to behave like a meticulous reviewer. This can surface not only the bug at hand but also other issues (e.g. potential null checks missing) which might be useful.
扮演代码审查员角色。不同于直接使用"调试这个"的提示,你可以说:"扮演代码审查员角色。这里有一段未按预期运行的代码片段,请审查并指出可能导致问题的错误或不良实践:[代码]"。这会让 AI 进入批判模式。许多开发者发现,以代码审查的形式提出请求会得到非常详尽的分析,因为模型会逐段评论代码(通常在这个过程中就能发现错误)。事实上,一个提示工程技巧就是明确要求 AI 表现得像个一丝不苟的审查员。这样不仅能发现当前错误,还可能暴露其他问题(比如缺少空值检查),这些都可能很有用。
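As an illustration of the test-case tactic, here is the kind of edge-case checklist an assistant might turn into runnable checks for the mapUsersById example from earlier – a minimal sketch using console.assert, assuming the fixed i < users.length version of the function is in scope:
// Hypothetical edge-case tests an assistant might propose for mapUsersById
console.assert(
  Object.keys(mapUsersById([])).length === 0,
  'an empty array should produce an empty map'
);
const twoUsers = mapUsersById([{ id: 1, name: "Alice" }, { id: 2, name: "Bob" }]);
console.assert(twoUsers[1].name === "Alice" && twoUsers[2].name === "Bob", 'users are keyed by id');
// Duplicate IDs: the later entry silently overwrites the earlier one – worth confirming deliberately
const dupes = mapUsersById([{ id: 1, name: "Alice" }, { id: 1, name: "Eve" }]);
console.assert(dupes[1].name === "Eve", 'the last user with a given id wins');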
In summary, when debugging with an AI assistant, detail and direction are your friends. Provide the scenario, the symptoms, and then ask pointed questions. The difference between a flailing “it doesn’t work, help!” prompt and a surgical debugging prompt is night and day, as we saw above. Next, we’ll move on to another major use case: refactoring and improving existing code.
总之,在使用 AI 助手进行调试时,细节和方向是关键。提供场景、症状,然后提出针对性的问题。正如前文所示,一个慌乱无措的"它不工作了,帮帮我!"提示与一个精准的调试提示之间的差异可谓天壤之别。接下来,我们将转向另一个主要应用场景:重构和改进现有代码。
Prompt patterns for refactoring and optimization
重构与优化的提示模式
Refactoring code – making it cleaner, faster, or more idiomatic without changing its functionality – is an area where AI assistants can shine. They’ve been trained on vast amounts of code, which includes many examples of well-structured, optimized solutions. However, to tap into that knowledge effectively, your prompt must clarify what “better” means for your situation. Here’s how to prompt for refactoring tasks:
代码重构——在不改变功能的前提下使其更简洁、更快速或更符合语言习惯——正是 AI 助手大显身手的领域。它们经过海量代码训练,其中包含大量结构良好、优化解决方案的示例。但要有效利用这些知识,你的提示必须明确说明在当前情境下"更好"的具体含义。以下是针对重构任务的提示技巧:
1. State your refactoring goals explicitly. “Refactor this code” on its own is too open-ended. Do you want to improve readability? Reduce complexity? Optimize performance? Use a different paradigm or library? The AI needs a target. A good prompt frames the task, for example: “Refactor the following function to improve its readability and maintainability (reduce repetition, use clearer variable names).” Or “Optimize this algorithm for speed – it’s too slow on large inputs.” By stating specific goals, you help the model decide which transformations to apply . For instance, telling it you care about performance might lead it to use a more efficient sorting algorithm or caching, whereas focusing on readability might lead it to break a function into smaller ones or add comments. If you have multiple goals, list them out. A prompt template from the Strapi guide suggests even enumerating issues: “Issues I’d like to address: 1) [performance issue], 2) [code duplication], 3) [outdated API usage].” . This way, the AI knows exactly what to fix. Remember, it will not inherently know what you consider a problem in the code – you must tell it.
1. 明确说明重构目标。仅说"重构这段代码"过于开放。你是想提高可读性?降低复杂度?优化性能?采用不同范式或库?AI 需要一个明确目标。好的提示应框定任务范围,例如:"重构以下函数以提高可读性和可维护性(减少重复代码,使用更清晰的变量名)"或"优化此算法速度——处理大型输入时太慢"。通过指定具体目标,你能帮助模型决定采用何种转换方式。例如,强调性能需求可能促使 AI 采用更高效的排序算法或缓存机制,而侧重可读性则可能让 AI 将函数拆分为更小单元或添加注释。若有多重目标,请逐一列出。Strapi 指南中的提示模板甚至建议枚举具体问题:"需要解决的问题:1)[性能问题],2)[代码重复],3)[过时的 API 调用]"。这样 AI 就能精准定位修改方向。请记住,AI 不会自动识别你认为的代码问题——你必须明确告知。
2. Provide the necessary code context. When refactoring, you’ll typically include the code snippet that needs improvement in the prompt. It’s important to include the full function or section that you want to be refactored, and sometimes a bit of surrounding context if relevant (like the function’s usage or related code, which could affect how you refactor). Also mention the language and framework, because “idiomatic” code varies between, say, idiomatic Node.js vs. idiomatic Deno, or React class components vs. functional components. For example: “I have a React component written as a class. Please refactor it to a functional component using Hooks.” The AI will then apply the typical steps (using useState, useEffect, etc.). If you just said “refactor this React component” without clarifying the style, the AI might not know you specifically wanted Hooks.
2. 提供必要的代码上下文。进行重构时,通常需要在提示中包含待改进的代码片段。重要的是包含您想要重构的完整函数或代码段,有时还需附带相关上下文(例如函数的使用方式或关联代码,这些可能影响重构方式)。同时注明编程语言和框架,因为"地道"的代码风格会因环境而异,比如 Node.js 与 Deno 的地道写法不同,React 类组件与函数式组件也有差异。例如:"我有一个用类编写的 React 组件,请使用 Hooks 将其重构为函数式组件。"AI 就会按照典型步骤(使用 useState、useEffect 等)进行操作。如果仅说"重构这个 React 组件"而不说明风格要求,AI 可能无法明确您需要改用 Hooks。
Include version or environment details if relevant. For instance, “This is a Node.js v14 codebase” or “We’re using ES6 modules”. This can influence whether the AI uses certain syntax (like import/export vs. require), which is part of a correct refactoring. If you want to ensure it doesn’t introduce something incompatible, mention your constraints.
如涉及相关版本或环境信息请一并提供。例如“这是一个 Node.js v14 代码库”或“我们使用 ES6 模块”。这些信息会影响 AI 是否采用特定语法(如 import/export 与 require 的差异),这也是正确重构的一部分。若需确保不引入不兼容内容,请注明您的约束条件。
3. Encourage explanations along with the code. A great way to learn from an AI-led refactor (and to verify its correctness) is to ask for an explanation of the changes. For example: “Please suggest a refactored version of the code, and explain the improvements you made.” This was even built into the prompt template we referenced: “…suggest refactored code with explanations for your changes.” . When the AI provides an explanation, you can assess if it understood the code and met your objectives. The explanation might say: “I combined two similar loops into one to reduce duplication, and I used a dictionary for faster lookups,” etc. If something sounds off in the explanation, that’s a red flag to examine the code carefully. In short, use the AI’s ability to explain as a safeguard – it’s like having the AI perform a code review on its own refactor.
3. 鼓励附带代码解释。从 AI 主导的重构中学习(并验证其正确性)的最佳方式,就是要求对修改内容进行说明。例如:"请提供重构后的代码版本,并解释你所做的改进。"这一点甚至内置于我们参考的提示模板中:"……建议附带修改说明的重构代码"。当 AI 给出解释时,你可以评估它是否理解代码并达成你的目标。解释可能会说:"我将两个相似循环合并为一个以减少重复,并使用字典实现更快查找"等。如果解释中有任何可疑之处,那就是需要仔细检查代码的危险信号。简而言之,利用 AI 的解释能力作为安全防护——这就像让 AI 对自己的重构进行代码审查。
4. Use role-play to set a high standard. As mentioned earlier, asking the AI to act as a code reviewer or senior engineer can be very effective. For refactoring, you might say: “Act as a seasoned TypeScript expert and refactor this code to align with best practices and modern standards.” This often yields not just superficial changes, but more insightful improvements because the AI tries to live up to the “expert” persona. A popular example from a prompt guide is having the AI role-play a mentor: “Act like an experienced Python developer mentoring a junior. Provide explanations and write docstrings. Rewrite the code to optimize it.” . The result in that case was that the AI used a more efficient data structure (set to remove duplicates) and provided a one-line solution for a function that originally used a loop . The role-play helped it not only refactor but also explain why the new approach is better (in that case, using a set is a well-known optimization for uniqueness).
4. 通过角色扮演设定高标准。如前所述,让 AI 扮演代码审查员或高级工程师角色往往效果显著。例如重构代码时可以说:"作为资深 TypeScript 专家,请按照最佳实践和现代标准重构这段代码。"这种方式不仅能产生表面修改,还能带来更具洞察力的优化,因为 AI 会努力契合"专家"人设。某提示指南中的经典案例是让 AI 扮演导师角色:"假设你是指导初级开发者的资深 Python 程序员,请提供解释并编写文档字符串,同时优化重写这段代码。"最终 AI 不仅采用了更高效的数据结构(用集合去重),还将原本使用循环的函数改写成了单行解决方案。这种角色扮演不仅促成了代码重构,还让 AI 解释了新方法的优势(在此案例中,使用集合是实现去重的经典优化方案)。
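The JavaScript analogue of that anecdote would look something like the following – a hedged sketch, not the guide's original Python example: both versions return the same result, but the Set-based one-liner expresses the intent directly and avoids the repeated includes scans inside the loop.
// Before: loop-based de-duplication (quadratic because of the includes scan)
function uniqueValues(values) {
  const result = [];
  for (const value of values) {
    if (!result.includes(value)) {
      result.push(value);
    }
  }
  return result;
}
// After: the kind of one-liner an "expert persona" prompt tends to produce
const uniqueValuesFast = (values) => [...new Set(values)];
console.log(uniqueValuesFast([3, 1, 3, 2, 1])); // [3, 1, 2]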
Now, let’s walk through an example of refactoring to see how a prompt can influence the outcome. We will use a scenario in JavaScript (Node.js) where we have some less-than-ideal code and we want it improved.
现在,让我们通过一个重构示例来看看提示如何影响结果。我们将使用一个 JavaScript(Node.js)场景,其中有一段不太理想的代码,我们想要改进它。
Refactoring example: poor vs. improved prompt
重构示例:差提示 vs 优提示
Suppose we have a function that makes two database calls and does some processing. It works, but it’s not pretty – there’s duplicated code and it’s hard to read. We’d like to refactor it for clarity and efficiency. Here’s the original code:
假设我们有一个函数需要执行两次数据库调用并进行一些处理。虽然它能工作,但代码并不优雅——存在重复代码且难以阅读。我们希望重构它以提升清晰度和效率。原始代码如下:
// Original function: Fetches two lists and processes them (needs refactoring)
async function getCombinedData(apiClient) {
// Fetch list of users
const usersResponse = await apiClient.fetch('/users');
if (!usersResponse.ok) {
throw new Error('Failed to fetch users');
}
const users = await usersResponse.json();
// Fetch list of orders
const ordersResponse = await apiClient.fetch('/orders');
if (!ordersResponse.ok) {
throw new Error('Failed to fetch orders');
}
const orders = await ordersResponse.json();
// Combine data (match user to orders)
const result = [];
for (let user of users) {
const userOrders = orders.filter(o => o.userId === user.id);
result.push({ user, orders: userOrders });
}
return result;
}
Issues in this code: We see duplicated fetch logic for users and orders (could be refactored into a helper or loop), lack of error detail (just a generic error message), and it could perhaps run the fetches in parallel to improve speed. Let’s see two prompting approaches:
这段代码存在的问题包括:用户和订单数据获取逻辑重复(可重构为辅助函数或循环)、错误信息过于笼统(仅显示通用错误提示),此外可以考虑并行执行数据获取以提高速度。下面我们来看两种提示方案:
❌ Prompt 1 (Underspecified): “Refactor the above getCombinedData function.”
❌ 提示 1(未充分说明):"重构上面的 getCombinedData 函数。"
AI’s response to Prompt 1:
AI 对提示 1 的回应:
*Sure. Here’s a refactored version of getCombinedData:
*好的。以下是重构后的 getCombinedData 版本:
async function getCombinedData(apiClient) {
// Fetch users and orders in parallel
const [usersResponse, ordersResponse] = await Promise.all([
apiClient.fetch('/users'),
apiClient.fetch('/orders')
]);
if (!usersResponse.ok || !ordersResponse.ok) {
throw new Error('Failed to fetch data');
}
const [users, orders] = await Promise.all([
usersResponse.json(),
ordersResponse.json()
]);
// Map orders by userId for efficient lookup
const ordersByUser = {};
for (const order of orders) {
if (!ordersByUser[order.userId]) {
ordersByUser[order.userId] = [];
}
ordersByUser[order.userId].push(order);
}
// Combine data
return users.map(user => ({
user,
orders: ordersByUser[user.id] || []
}));
}
I fetched users and orders in parallel to improve performance. I also consolidated error handling and used a map (ordersByUser) to group orders by user, which makes the combination step cleaner and potentially faster.*
我并行获取用户和订单数据以提高性能。同时整合了错误处理逻辑,并使用映射表(ordersByUser)按用户分组订单,这使得数据合并步骤更清晰且可能更高效。*
Analysis: This is actually a pretty good refactor, hitting several improvements: parallel fetching, grouping orders by user for efficiency, cleaner combination using .map and defaulting to an empty array. The AI also explained what it did. So why do we label this prompt as underspecified or “poor”? In this case, the AI happened to make good choices, but it assumed a lot because the prompt was open-ended. It assumed that performance should be improved (parallel requests, which is a smart change but what if parallel wasn’t safe here?), and it changed error handling to a generic message “Failed to fetch data,” which in combining the errors lost the specificity of which call failed.
分析:这次重构实际上相当不错,实现了多项改进:并行获取数据、按用户分组订单以提高效率、使用.map 实现更简洁的组合操作并默认返回空数组。AI 还解释了其修改内容。那么为什么我们仍将这个提示标记为定义不完整或"欠佳"呢?在这种情况下,AI 碰巧做出了正确的选择,但由于提示过于开放,它做了大量假设。它假设应该提升性能(采用并行请求,这虽然是明智的改动,但如果并行操作在此场景不安全呢?),还将错误处理改为通用提示"获取数据失败",这种合并错误的方式丢失了具体是哪个调用失败的信息。
The risk with a generic “Refactor this” prompt is that the AI might make changes that alter behavior in unintended ways. Here, combining error handling might be acceptable or not, depending on requirements. The assistant also made a structural change (using a dictionary to group orders) – that’s fine and probably for the better, but if we had specific constraints (like wanting to stream results or keep memory usage low), we didn’t inform it. So while the result looks good, it was somewhat lucky that the AI’s own interpretation of “refactor” aligned with what we wanted. This illustrates that an underspecified prompt can sometimes get good results on simple cases, but for complex code, it might go off in an undesired direction or miss certain improvements you care about.
使用通用的“重构这段代码”提示存在风险,AI 可能会做出改变行为逻辑的意外修改。在这个例子中,合并错误处理可能符合需求也可能不符合。助手还进行了结构性变更(使用字典来分组订单)——这没问题甚至可能是改进,但如果我们有特定约束条件(比如希望流式输出结果或保持低内存占用),我们并没有告知它。因此虽然结果看起来不错,但 AI 对"重构"的理解恰好符合我们的期望实属幸运。这说明在简单场景下,不够具体的提示有时也能获得好结果,但对于复杂代码,它可能会偏离预期方向或遗漏你关心的某些优化点。
✅ Prompt 2 (Goal-Oriented): “Refactor the above getCombinedData function to eliminate duplicate code and improve performance. Specifically: (1) Avoid repeating the fetch logic for users and orders – maybe use a helper or fetch them together. (2) Fetch both lists in parallel if possible. (3) Keep the error handling for each fetch (we want to know which call failed). (4) Improve the combination of data, possibly by using a more efficient structure for lookup instead of a nested loop. Provide the refactored code with comments explaining the changes.”
✅ 提示 2(目标导向型):“重构上述 getCombinedData 函数以消除重复代码并提升性能。具体要求:(1) 避免重复获取用户和订单数据的逻辑——可考虑使用辅助函数或同时获取;(2) 尽可能并行获取两个列表;(3) 保留每个数据获取的错误处理(需明确知道哪个调用失败);(4) 改进数据合并方式,建议使用更高效的查找结构替代嵌套循环。请提供重构后的代码并附注释说明修改点。”
AI’s response to Prompt 2:
AI 对提示 2 的回应:
*Sure! Here’s an improved version of getCombinedData with the requested refactoring:
*好的!以下是经过重构改进的 getCombinedData 版本:
async function getCombinedData(apiClient) {
// 1. Fetch users and orders in parallel to improve performance
const [usersResponse, ordersResponse] = await Promise.all([
apiClient.fetch('/users'),
apiClient.fetch('/orders')
]);
// 2. Handle errors separately to know which call failed
if (!usersResponse.ok) {
throw new Error('Failed to fetch users');
}
if (!ordersResponse.ok) {
throw new Error('Failed to fetch orders');
}
// Parse JSON responses in parallel
const [users, orders] = await Promise.all([
usersResponse.json(),
ordersResponse.json()
]);
// 3. Use a lookup map for orders by userId for efficient matching
const ordersByUser = orders.reduce((map, order) => {
if (!map[order.userId]) {
map[order.userId] = [];
}
map[order.userId].push(order);
return map;
}, {});
// 4. Combine data using the lookup map
return users.map(user => ({
user,
orders: ordersByUser[user.id] || [] // empty array if no orders for user
}));
}
Explanation of changes: 变更说明
Parallel fetching: I used Promise.all to fetch users and orders at the same time (point 1).
并行获取:我使用 Promise.all 同时获取用户和订单数据(要点 1)。
Preserved individual error handling: I kept separate checks for usersResponse.ok and ordersResponse.ok so that any error clearly indicates which fetch failed (point 2).
保留独立错误处理:我分别为 usersResponse.ok 和 ordersResponse.ok 设置了单独检查,以便任何错误都能明确指示是哪次获取失败(要点 2)。
Parallel JSON parsing: Also used Promise.all for response.json() calls, which is a minor improvement.
并行 JSON 解析:同样使用了 Promise.all 来处理 response.json()调用,这是一个小的优化。
Efficient combination: Instead of a nested loop, I created an ordersByUser map (point 3). This groups orders by userId in a single pass. Then I map over users to build the result, pulling each user’s orders from the map in O(1) time (point 4). This is more efficient and concise.*
高效组合方案:我没有使用嵌套循环,而是创建了一个 ordersByUser 映射(要点 3)。这样只需单次遍历就能按 userId 对订单进行分组。随后通过映射用户列表构建结果,以 O(1)时间复杂度从映射中提取每个用户的订单(要点 4)。这种实现既高效又简洁。
Analysis: This response directly addressed all the specified goals. The code is refactored to be cleaner and faster, and it maintained separate error messages as requested. The AI’s explanation confirms each point we listed, which shows it carefully followed the prompt instructions. This is a great outcome because we, as the prompter, defined what “refactor” meant in this context. By doing so, we guided the AI to produce a solution that matches our needs with minimal back-and-forth. If the AI had overlooked one of the points (say it still merged the error handling), we could easily prompt again: “Looks good, but please ensure the error messages remain distinct for users vs orders.” – however, in this case it wasn’t needed because our prompt was thorough.
分析:该响应完全符合所有指定目标。代码经过重构后更清晰高效,并按要求保留了独立的错误信息。AI 的解释逐一验证了我们列出的要点,表明其严格遵循了提示指令。这是理想的结果,因为我们作为提示方明确定义了本次语境中"重构"的含义。通过这种方式,我们以最少的往返沟通就引导 AI 产出了符合需求的解决方案。如果 AI 遗漏了某个要点(比如仍合并了错误处理),我们可以轻松追加提示:"方案不错,但请确保用户和订单的错误信息保持独立"——不过本例中无需如此,因为我们的初始提示足够全面。
This example demonstrates a key lesson: when you know what you want improved, spell it out. AI is good at following instructions, but it won’t read your mind. A broad “make this better” might work for simple things, but for non-trivial code, you’ll get the best results by enumerating what “better” means to you. This aligns with community insights that clear, structured prompts yield significantly improved results .
这个示例揭示了一个关键要点:当你知道需要改进什么时,务必明确说明。AI 擅长遵循指令,但无法读懂你的心思。笼统的"改进这段代码"可能适用于简单场景,但对于复杂代码,只有明确列出"更好"的具体标准才能获得最佳效果。这与社区共识高度一致——清晰、结构化的提示词能显著提升输出质量。
Additional Refactoring Tips:
额外重构技巧:
Refactor in steps: If the code is very large or you have a long list of changes, you can tackle them one at a time. For example, first ask the AI to “refactor for readability” (focus on renaming, splitting functions), then later “optimize the algorithm in this function.” This prevents overwhelming the model with too many instructions at once and lets you verify each change stepwise.
分步重构:如果代码量很大或需要修改的内容很多,可以逐个解决。例如,先让 AI"重构以提高可读性"(重点重命名、拆分函数),之后再要求"优化这个函数的算法"。这样能避免一次性给模型过多指令造成负担,并让你逐步验证每个改动。
Ask for alternative approaches: Maybe the AI’s first refactor works but you’re curious about a different angle. You can ask, “Can you refactor it in another way, perhaps using functional programming style (e.g. array methods instead of loops)?” or “How about using recursion here instead of iterative approach, just to compare?” This way, you can evaluate different solutions. It’s like brainstorming multiple refactoring options with a colleague.
寻求替代方案:也许 AI 的首次重构可行,但你想尝试不同角度。可以询问:"能否用另一种方式重构?比如采用函数式编程风格(用数组方法替代循环)?"或"这里用递归代替迭代方案如何?方便对比。"这样就能评估不同解决方案,就像和同事头脑风暴多种重构选项。
Combine refactoring with explanation to learn patterns: We touched on this, but it’s worth emphasizing – use the AI as a learning tool. If it refactors code in a clever way, study the output and explanation. You might discover a new API or technique (like using reduce to build a map) that you hadn’t used before. This is one reason to ask for explanations: it turns an answer into a mini-tutorial, reinforcing your understanding of best practices.
将重构与解释相结合以学习模式:我们之前提到过这一点,但值得再次强调——将 AI 作为学习工具。如果它以巧妙的方式重构代码,请仔细研究输出结果和解释。你可能会发现之前未曾使用过的新 API 或技术(例如使用 reduce 构建映射)。这就是要求解释的原因之一:它将答案转化为微型教程,强化你对最佳实践的理解。
Validation and testing: After any AI-generated refactor, always run your tests or try the code with sample inputs. AI might inadvertently introduce subtle bugs, especially if the prompt didn’t specify an important constraint. For example, in our refactor, if the original code intentionally separated fetch errors for logging but we didn’t mention logging, the combined error might be less useful. It’s our job to catch that in review. The AI can help by writing tests too – you could ask “Generate a few unit tests for the refactored function” to ensure it behaves the same as before on expected inputs (a sketch of such a test follows this list).
验证与测试:完成 AI 生成的代码重构后,务必运行测试或用示例输入验证代码。AI 可能会无意引入细微错误,特别是当提示未明确重要约束条件时。例如在我们的重构案例中,若原始代码特意分离了用于日志记录的获取错误,而我们未提及日志功能,合并后的错误信息可能降低实用性。这需要我们在代码审查时发现。AI 同样能协助编写测试——你可以要求"为重构后的函数生成几个单元测试",以确保其在预期输入下保持原有行为。
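For example, a prompt like “Generate a few unit tests for the refactored getCombinedData function” might yield something along these lines – a minimal sketch with a hand-rolled stub apiClient and Node's built-in assert, assuming getCombinedData is in scope; a real project would more likely use Jest or Vitest:
const assert = require('assert');
// Hypothetical stub that mimics the fetch-like apiClient used by getCombinedData
function makeStubClient(users, orders) {
  return {
    fetch: async (path) => ({
      ok: true,
      json: async () => (path === '/users' ? users : orders),
    }),
  };
}
(async () => {
  const client = makeStubClient(
    [{ id: 1, name: 'Alice' }, { id: 2, name: 'Bob' }],
    [{ id: 10, userId: 1 }]
  );
  const result = await getCombinedData(client);
  assert.strictEqual(result.length, 2);            // one entry per user
  assert.strictEqual(result[0].orders.length, 1);  // Alice has one order
  assert.deepStrictEqual(result[1].orders, []);    // Bob has none
  console.log('getCombinedData behaves the same after the refactor');
})();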
At this point, we’ve covered debugging and refactoring – improving existing code. The next logical step is to use AI assistance for implementing new features or generating new code. We’ll explore how to prompt for that scenario effectively.
至此,我们已经探讨了调试和重构——改进现有代码。接下来的合理步骤是利用 AI 辅助实现新功能或生成新代码。我们将深入研究如何为此场景设计有效的提示。
Modern debugging scenarios
现代调试场景
React Hook dependency issues
React Hook 依赖问题
❌ Poor Prompt: "My useEffect isn't working right"
❌ 错误提示:"我的 useEffect 运行不正常"
✅ Enhanced Prompt: ✅ 优化后的提示:
I have a React component that fetches user data, but it's causing infinite re-renders. Here's my code:
const UserProfile = ({ userId }) => {
const [user, setUser] = useState(null);
const [loading, setLoading] = useState(true);
useEffect(() => {
fetchUser(userId).then(setUser).finally(() => setLoading(false));
}, [userId, setUser, setLoading]); // Problem is here
return loading ? <div>Loading...</div> : <div>{user?.name}</div>;
};
Expected behavior: Should fetch user data once when userId changes
Actual behavior: Component re-renders infinitely
Error in console: "Warning: Maximum update depth exceeded"
预期行为:当 userId 变化时应仅获取一次用户数据
实际行为:组件无限重新渲染
控制台报错:"警告:超过最大更新深度"
What's causing this infinite loop and how do I fix the dependency array?
是什么导致了这种无限循环?如何修复依赖数组?
Why this works: Provides exact code, error message, expected vs actual behavior, and focuses on a specific React pattern that's commonly misunderstood.
为什么有效:提供了确切的代码、错误信息、预期与实际行为的对比,并聚焦于一个常被误解的特定 React 模式。
Modern state architecture
现代状态架构
❌ Poor Prompt: "Build the state management for my Next.js ecommerce app”
❌ 错误提示示例:"为我的 Next.js 电商应用构建状态管理"
✅ Enhanced Prompt: ✅ 优化后的提示:
I'm building a Next.js 14 e-commerce app and need to design the state management architecture. Here are my requirements:
我正在构建一个 Next.js 14 电商应用,需要设计状态管理架构。我的需求如下:
Components: 组件:
Product listing page (needs: products[], filters, pagination)
产品列表页(需包含:产品数组[]、筛选器、分页)
Shopping cart (needs: cart items, totals, shipping info)
购物车(需包含:购物车商品、总计金额、配送信息)
User auth (needs: user profile, auth status, preferences)
用户认证(需包含:用户资料、认证状态、偏好设置)
Real-time notifications (needs: toast messages, error states)
实时通知(需包含:Toast 消息提示、错误状态)
Technical constraints: 技术限制:
Next.js 14 with App Router and Server Components
Next.js 14(使用 App Router 和 Server Components)
TypeScript strict mode
TypeScript 严格模式
Server-side data fetching for SEO
服务端数据抓取以优化 SEO
Client-side interactivity for cart/user actions
客户端交互实现购物车/用户操作
State should persist across navigation
导航切换时应保持状态持久化
Should I use: 我应该选用:
Zustand stores for each domain (cart, auth, notifications)
每个领域(购物车、认证、通知)使用 Zustand 存储
React Query/TanStack Query for server state + Zustand for client state
服务器状态使用 React Query/TanStack Query + 客户端状态使用 Zustand
A single Zustand store with slices
单一 Zustand 存储库(带分片)
Please provide a recommended architecture with code examples showing how to structure stores and integrate with Next.js App Router patterns.
请提供一个推荐的架构方案,并附上代码示例,展示如何构建存储层以及与 Next.js 应用路由模式的集成。
Why this works: Real-world scenario with specific tech stack, clear requirements, and asks for architectural guidance with implementation details.
为什么有效:真实场景结合特定技术栈,需求明确,且要求提供包含实现细节的架构指导。
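For a sense of what an answer to the second option might sketch out (server state in TanStack Query, client state in a small Zustand store), something like the following is plausible – treat the store shape and hook names as illustrative assumptions, not the only valid architecture:
import { create } from 'zustand';
import { useQuery } from '@tanstack/react-query';
// Client-only cart state lives in a small Zustand store (illustrative shape)
export const useCartStore = create((set) => ({
  items: [],
  addItem: (item) => set((state) => ({ items: [...state.items, item] })),
  clear: () => set({ items: [] }),
}));
// Server state (the product catalog) is cached by TanStack Query in a client component
export function useProductsQuery() {
  return useQuery({
    queryKey: ['products'],
    queryFn: () => fetch('/api/products').then((res) => res.json()),
  });
}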
Prompt patterns for implementing new features
实现新功能的提示模式
One of the most exciting uses of AI code assistants is to help you write new code from scratch or integrate a new feature into an existing codebase. This could range from generating a boilerplate for a React component to writing a new API endpoint in an Express app. The challenge here is often that these tasks are open-ended – there are many ways to implement a feature. Prompt engineering for code generation is about guiding the AI to produce code that fits your needs and style. Here are strategies to do that:
AI 代码助手最令人兴奋的用途之一就是帮助您从零开始编写代码或将新功能集成到现有代码库中。这可能包括生成 React 组件的样板代码,或在 Express 应用中编写新的 API 端点。这里的挑战通常在于这些任务是开放式的——实现一个功能有多种方式。针对代码生成的提示工程,关键在于引导 AI 生成符合您需求和风格的代码。以下是实现这一目标的策略:
1. Start with high-level instructions, then drill down. Begin by outlining what you want to build in plain language, possibly breaking it into smaller tasks (similar to our advice on breaking down complex tasks earlier). For example, say you want to add a search bar feature to an existing web app. You might first prompt: “Outline a plan to add a search feature that filters a list of products by name in my React app. The products are fetched from an API.”
1. 从高层次指令开始,逐步细化。首先用通俗语言描述你想构建的内容,可以将其拆分为更小的任务(类似于我们之前关于分解复杂任务的建议)。例如,假设你想为现有网页应用添加搜索栏功能。可以先这样提示:"概述一个为我的 React 应用添加搜索功能的方案,该功能需要能按名称筛选产品列表。产品数据通过 API 获取。"
The AI might give you a step-by-step plan: “1. Add an input field for the search query. 2. Add state to hold the query. 3. Filter the products list based on the query. 4. Ensure it’s case-insensitive, etc.” Once you have this plan (which you can refine with the AI’s help), you can tackle each bullet with focused prompts.
AI 可能会给你一个分步计划:"1. 添加搜索查询的输入框 2. 添加状态来保存查询 3. 根据查询筛选产品列表 4. 确保不区分大小写等"。当你获得这个计划后(可以借助 AI 进一步优化),就能针对每个要点进行专注的提示词设计。
For instance: “Okay, implement step 1: create a SearchBar component with an input that updates a searchQuery state.” After that, “Implement step 3: given the searchQuery and an array of products, filter the products (case-insensitive match on name).” By dividing the feature, you ensure each prompt is specific and the responses are manageable. This also mirrors iterative development – you can test each piece as it’s built.
例如:"好的,执行第一步:创建一个带有输入框的 SearchBar 组件,该输入框会更新 searchQuery 状态。"接着,"执行第三步:根据 searchQuery 和产品数组,筛选出产品名称不区分大小写的匹配项。"通过拆分功能,你可以确保每个提示都具体明确,且响应易于管理。这也反映了迭代式开发——你可以在每个部分构建完成后立即测试。
2. Provide relevant context or reference code. If you’re adding a feature to an existing project, it helps tremendously to show the AI how similar things are done in that project. For example, if you already have a component that is similar to what you want, you can say: “Here is an existing UserList component (code…). Now create a ProductList component that is similar but includes a search bar.”
2. 提供相关上下文或参考代码。如果你正在为现有项目添加功能,向 AI 展示该项目中类似功能的实现方式会大有帮助。例如,如果已有与目标组件相似的现有组件,你可以说:"这是一个现有的 UserList 组件(代码...)。现在请创建一个类似的 ProductList 组件,但要包含搜索栏。"
The AI will see the patterns (maybe you use certain libraries or style conventions) and apply them. Having relevant files open or referencing them in your prompt provides context that leads to more project-specific and consistent code suggestions . Another trick: if your project uses a particular coding style or architecture (say Redux for state or a certain CSS framework), mention that. “We use Redux for state management – integrate the search state into Redux store.”
AI 会识别这些模式(可能是你使用的某些库或代码风格约定)并加以应用。保持相关文件处于打开状态或在提示中引用它们,能够提供上下文,从而获得更符合项目需求且一致的代码建议。另一个技巧:如果你的项目采用特定编码风格或架构(比如用 Redux 管理状态或某个 CSS 框架),记得明确说明。"我们使用 Redux 进行状态管理——请将搜索状态集成到 Redux store 中。"
A well-trained model will then generate code consistent with Redux patterns, etc. Essentially, you are teaching the AI about your project’s environment so it can tailor the output. Some assistants can even use your entire repository as context to draw from; if using those, ensure you point it to similar modules or documentation in your repo.
训练有素的模型随后会生成符合 Redux 等模式的代码。本质上,您是在向 AI 传授项目环境知识,使其能定制输出内容。某些智能助手甚至能以整个代码库作为上下文参考;若使用这类工具,请确保为其指向代码库中相似的模块或文档。
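To picture what “code consistent with Redux patterns” might look like for that search-state request, here is a hedged sketch using Redux Toolkit; the slice name and action are illustrative assumptions:
import { createSlice } from '@reduxjs/toolkit';
// Hypothetical slice the assistant might add for the search feature
const searchSlice = createSlice({
  name: 'search',
  initialState: { query: '' },
  reducers: {
    setQuery(state, action) {
      state.query = action.payload; // Immer makes this "mutation" safe inside createSlice
    },
  },
});
export const { setQuery } = searchSlice.actions;
export default searchSlice.reducer;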
If starting something new but you have a preferred approach, you can also mention that: “I’d like to implement this using functional programming style (no external state, using array methods).” Or, “Ensure to follow the MVC pattern and put logic in the controller, not the view.” These are the kind of details a senior engineer might remind a junior about, and here you are the senior telling the AI.
如果你要开始新项目但有偏好的实现方式,也可以直接说明:"我希望采用函数式编程风格(无外部状态,使用数组方法)来实现"。或者说:"请确保遵循 MVC 模式,将逻辑放在控制器而非视图中"。这些正是高级工程师会提醒初级开发者的细节,而现在你就是在这样指导 AI。
3. Use comments and TODOs as inline prompts. When working directly in an IDE with Copilot, one effective workflow is writing a comment that describes the next chunk of code you need, then letting the AI autocomplete it. For example, in a Node.js backend, you might write: // TODO: Validate the request payload (ensure name and email are provided) and then start the next line. Copilot often picks up on the intent and generates a block of code performing that validation. This works because your comment is effectively a natural language prompt. However, be prepared to edit the generated code if the AI misinterprets – as always, verify its correctness.
3. 将注释和 TODO 事项作为内联提示使用。在 IDE 中直接使用 Copilot 工作时,一种高效的工作流是先编写描述下一段所需代码的注释,然后让 AI 自动补全。例如,在 Node.js 后端中,你可能会写:// TODO: 验证请求负载(确保提供姓名和邮箱),然后另起一行。Copilot 通常能理解意图并生成执行该验证的代码块。这种方法有效是因为你的注释本质上就是自然语言提示。不过,如果 AI 理解有误,请准备好编辑生成的代码——始终要验证其正确性。
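As a concrete picture of that workflow, the completion generated from such a TODO comment often looks roughly like this – a hypothetical sketch of an Express middleware; the actual suggestion will vary:
// TODO: Validate the request payload (ensure name and email are provided)
function validateUserPayload(req, res, next) {
  const { name, email } = req.body || {};
  if (!name || !email) {
    // Reject early with a 400 so the route handler only sees valid payloads
    return res.status(400).json({ error: 'name and email are required' });
  }
  next();
}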
4. Provide examples of expected input/output or usage. Similar to what we discussed before, if you’re asking the AI to implement a new function, include a quick example of how it will be used or a simple test case. For instance: “Implement a function formatPrice(amount) in JavaScript that takes a number (like 2.5) and returns a string formatted in USD (like $2.50). For example, formatPrice(2.5) should return '$2.50'.”
4. 提供预期输入/输出或使用示例。正如我们之前讨论的,如果你要求 AI 实现一个新函数,请包含一个快速使用示例或简单测试用例。例如:"用 JavaScript 实现一个 formatPrice(amount)函数,接收数字参数(如 2.5)并返回格式化为美元的字符串(如$2.50)。示例:formatPrice(2.5)应返回'$2.50'。"
By giving that example, you constrain the AI to produce a function consistent with it. Without the example, the AI might assume some other formatting or currency. The difference could be subtle but important. Another example in a web context: “Implement an Express middleware that logs requests. For instance, a GET request to /users should log ‘GET /users’ to the console.” This makes it clear what the output should look like. Including expected behavior in the prompt acts as a test the AI will try to satisfy.
通过提供这个示例,你限制了 AI 生成与之相符的函数。若没有示例,AI 可能会采用其他格式或货币单位,这种差异虽细微却至关重要。再举一个 Web 开发场景的例子:"实现一个 Express 中间件来记录请求。例如,对/users 的 GET 请求应在控制台输出'GET /users'。"这明确了输出应有的形式。在提示中包含预期行为相当于为 AI 设置了一个需要满足的测试标准。
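Minimal sketches of the two functions described above might look like this (assuming USD-only formatting and a standard Express middleware signature):
// formatPrice(2.5) -> "$2.50"
function formatPrice(amount) {
  return `$${amount.toFixed(2)}`;
}
// Express middleware: a GET request to /users logs "GET /users"
function requestLogger(req, res, next) {
  console.log(`${req.method} ${req.path}`);
  next();
}
console.log(formatPrice(2.5)); // "$2.50"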
5. When the result isn’t what you want, rewrite the prompt with more detail or constraints. It’s common that the first attempt at generating a new feature doesn’t nail it. Maybe the code runs but is not idiomatic, or it missed a requirement. Instead of getting frustrated, treat the AI like a junior dev who gave a first draft – now you need to give feedback. For example, “The solution works but I’d prefer if you used the built-in array filter method instead of a for loop.” Or, “Can you refactor the generated component to use React Hooks for state instead of a class component? Our codebase is all functional components.” You can also add new constraints: “Also, ensure the function runs in O(n) time or better, because n could be large.” This iterative prompting is powerful. A real-world scenario: one developer asked an LLM to generate code to draw an ice cream cone using a JS canvas library, but it kept giving irrelevant output until they refined the prompt with more specifics and context . The lesson is, don’t give up after one try. Figure out what was lacking or misunderstood in the prompt and clarify it. This is the essence of prompt engineering – each tweak can guide the model closer to what you envision.
5. 当结果不符合预期时,用更多细节或约束条件重写提示词。首次尝试生成新功能时效果不理想是常态——可能代码能运行但不够地道,或是遗漏了某些需求。与其沮丧,不如把 AI 当作提交了初稿的初级开发者,现在你需要给出反馈。例如:"这个方案可行,但我希望改用内置的数组 filter 方法而非 for 循环";或"能否将生成的组件重构为使用 React Hooks 管理状态?我们的代码库全部采用函数式组件"。你还可以追加新约束:"另外请确保函数时间复杂度为 O(n)或更优,因为 n 可能很大"。这种迭代式提示非常强大。真实案例:某开发者要求 LLM 用 JS 画布库生成冰淇淋甜筒绘制代码,在反复输出无关内容后,最终通过补充具体细节和上下文获得理想结果。关键经验是:不要一次失败就放弃,应分析提示中缺失或误解的部分并进行澄清。 这正是提示工程的核心所在——每一次微调都能让模型更接近你的设想。
Let’s illustrate feature implementation prompting with an example scenario:
让我们通过一个示例场景来说明功能实现的提示方式:
Feature implementation example: building a React component with AI assistance
功能实现示例:借助 AI 辅助构建 React 组件
Suppose we have a React application and we want to add a new component: a product list with a search filter and a sort dropdown. This is a fairly typical feature in a web app. We’ll focus on prompting for the search functionality within this component for brevity.
假设我们有一个 React 应用,需要添加一个新组件:包含搜索筛选器和排序下拉菜单的产品列表。这是 Web 应用中相当常见的功能。为简洁起见,我们将重点讨论该组件中搜索功能的提示设计。
Initial Prompt (Feature request): “Create a React functional component called ProductList that displays a list of products and includes a text input to filter the products by name. The component should: fetch an array of products from /api/products (assume it returns JSON array of {id, name, ...}), store them in state, and allow the user to type in a search box to filter the displayed products by name (case-insensitive match). Display the filtered list in a simple <ul> with product names. Include basic error handling (e.g. show an error message if the API call fails) and a loading state while fetching.”
初始提示(功能需求):“创建一个名为 ProductList 的 React 函数式组件,用于显示产品列表并包含一个文本输入框用于按名称筛选产品。该组件应:从/api/products 获取产品数组(假设返回 JSON 格式的{id, name, ...}数组),将其存储在状态中,并允许用户在搜索框中输入内容以按名称筛选显示的产品(不区分大小写匹配)。在简单的<ul>中显示筛选后的产品名称列表。包含基本错误处理(例如 API 调用失败时显示错误信息)以及数据加载时的加载状态。”
That’s a long prompt but it’s specific. Let’s assume we give this to an LLM in a chat.
这个提示虽然较长但很具体。假设我们在聊天中把这个需求提供给 LLM。
AI’s response (summary of what it might produce): The AI would likely output a ProductList component something like:
AI 的响应(可能生成的摘要):AI 可能会输出类似以下的产品列表组件:
import React, { useState, useEffect } from 'react';

function ProductList() {
  const [products, setProducts] = useState([]);
  const [search, setSearch] = useState('');
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState(null);

  useEffect(() => {
    async function fetchProducts() {
      try {
        const res = await fetch('/api/products');
        if (!res.ok) throw new Error('Network response was not ok');
        const data = await res.json();
        setProducts(data);
      } catch (err) {
        console.error('Failed to fetch products:', err);
        setError('Failed to load products.');
      } finally {
        setLoading(false);
      }
    }
    fetchProducts();
  }, []);

  const handleSearchChange = (e) => {
    setSearch(e.target.value);
  };

  // Filter products by search query (case-insensitive)
  const filteredProducts = products.filter(product =>
    product.name.toLowerCase().includes(search.toLowerCase())
  );

  if (loading) return <div>Loading products...</div>;
  if (error) return <div>{error}</div>;

  return (
    <div className="product-list">
      <input
        type="text"
        placeholder="Search products"
        value={search}
        onChange={handleSearchChange}
      />
      <ul>
        {filteredProducts.map(prod => (
          <li key={prod.id}>{prod.name}</li>
        ))}
      </ul>
    </div>
  );
}

export default ProductList;
(Plus possibly some comments in the code explaining each part, if we asked for them.)
(可能还包括代码中各部分的注释说明,如果我们要求添加的话。)
Now, let’s analyze this result and consider prompt refinements:
现在,让我们分析这个结果并考虑如何优化提示:
Suppose the above code came back, but our project uses a context or a global store for data fetching instead of fetching inside the component. This is something we didn’t specify. The AI made a choice to fetch in useEffect. If we prefer using a custom hook or context, we should have hinted at that. So we could refine our prompt: “Actually, in our app we use a custom hook useProducts() that already handles fetching. Please refactor the component to use useProducts hook instead of directly calling fetch.” The AI would then adjust, perhaps assuming useProducts returns { products, loading, error } and simplify the component accordingly.
假设上述代码返回了,但我们的项目使用上下文或全局存储进行数据获取,而不是在组件内部直接获取。这一点我们之前没有明确说明。AI 选择了在 useEffect 中直接获取数据。如果我们更倾向于使用自定义钩子或上下文,就应该在提示中给出暗示。因此我们可以优化提示为:"实际上,在我们的应用中,我们使用了一个已经处理数据获取的自定义钩子 useProducts()。请重构该组件,改用 useProducts 钩子而不是直接调用 fetch。"这样 AI 就会相应调整,可能会假设 useProducts 返回{ products, loading, error }并相应地简化组件。

Another refinement: maybe we realize we also want a sort dropdown (which we didn’t mention initially). We can now extend the conversation: “Great, now add a dropdown to sort the products by name (A-Z or Z-A). The dropdown should let the user choose ascending or descending, and the list should sort accordingly in addition to the filtering.” Because the AI has the context of the existing code, it can insert a sort state and adjust the rendering. We provided a clear new requirement, and it will attempt to fulfill it, likely by adding something like:
另一个改进点:我们可能意识到还需要一个排序下拉菜单(最初并未提及)。现在可以扩展对话:“很好,现在添加一个下拉菜单,用于按名称(A-Z 或 Z-A)对产品进行排序。下拉菜单应允许用户选择升序或降序,列表除了过滤外还应相应排序。”由于 AI 已掌握现有代码的上下文,它可以插入排序状态并调整渲染逻辑。我们提供了一个明确的新需求,AI 将尝试实现它,很可能会添加类似这样的代码:
const [sortOrder, setSortOrder] = useState('asc');
// ... a select input for sortOrder ...
// and sort the filteredProducts before rendering:
const sortedProducts = [...filteredProducts].sort((a, b) => {
  if (sortOrder === 'asc') return a.name.localeCompare(b.name);
  else return b.name.localeCompare(a.name);
});
(plus the dropdown UI). (加上下拉菜单 UI)。
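Putting the two refinements together – the useProducts hook and the sort dropdown – the component after a couple of iterations might look roughly like the sketch below. This is only an approximation of what the AI could produce: it assumes the hypothetical useProducts() hook returns { products, loading, error }, and its import path is made up.

import React, { useState } from 'react';
import { useProducts } from './hooks/useProducts'; // hypothetical custom hook and path

function ProductList() {
  // Data fetching is delegated to the custom hook (assumed shape: { products, loading, error })
  const { products, loading, error } = useProducts();
  const [search, setSearch] = useState('');
  const [sortOrder, setSortOrder] = useState('asc');

  if (loading) return <div>Loading products...</div>;
  if (error) return <div>{error}</div>;

  // Filter by name (case-insensitive), then sort by the selected order.
  // filter() returns a new array, so sort() does not mutate the hook's data.
  const visibleProducts = products
    .filter(product => product.name.toLowerCase().includes(search.toLowerCase()))
    .sort((a, b) =>
      sortOrder === 'asc'
        ? a.name.localeCompare(b.name)
        : b.name.localeCompare(a.name)
    );

  return (
    <div className="product-list">
      <input
        type="text"
        placeholder="Search products"
        value={search}
        onChange={(e) => setSearch(e.target.value)}
      />
      <select value={sortOrder} onChange={(e) => setSortOrder(e.target.value)}>
        <option value="asc">Name A-Z</option>
        <option value="desc">Name Z-A</option>
      </select>
      <ul>
        {visibleProducts.map(prod => (
          <li key={prod.id}>{prod.name}</li>
        ))}
      </ul>
    </div>
  );
}

export default ProductList;

Whether the assistant structures it exactly this way will vary, but each piece maps directly onto one of the prompts we issued.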
By iterating like this, feature by feature, we simulate a development cycle with the AI. This is far more effective than trying to prompt for the entire, complex component with all features in one go initially. It reduces mistakes and allows mid-course corrections as requirements become clearer.
通过这样逐个功能地迭代,我们模拟了与 AI 的开发周期。这种方法远比一开始就试图一次性提示生成包含所有功能的复杂组件要有效得多。它能减少错误,并在需求逐渐明确时允许中途修正。

If the AI makes a subtle mistake (say it forgot to make the search filter case-insensitive), we just point that out: “Make the search case-insensitive.” It will adjust the filter to use lowercase comparison (which in our pseudo-output it already did, but if not it would fix it).
如果 AI 犯了一个细微的错误(比如忘记让搜索过滤器不区分大小写),我们只需指出:“让搜索不区分大小写。”它就会调整过滤器使用小写比较(在我们的伪代码输出中已经这样做了,但如果没有,它会修正)。
This example shows that implementing features with AI is all about incremental development and prompt refinement. A Twitter thread might exclaim how someone built a small app by continually prompting an LLM for each part – that’s essentially the approach: build, review, refine, extend. Each prompt is like a commit in your development process.
这个例子展示了用 AI 实现功能的核心在于增量开发和提示优化。一条推特线程可能会惊叹某人如何通过持续向 LLM 提问每个组件来构建小型应用——这本质上就是该方法:构建、评审、改进、扩展。每个提示都像是开发流程中的一次代码提交。
Additional tips for feature implementation:
功能实现的额外技巧:
Let the AI scaffold, then you fill in specifics: Sometimes it’s useful to have the AI generate a rough structure, then you tweak it. For example, “Generate the skeleton of a Node.js Express route for user registration with validation and error handling.” It might produce a generic route with placeholders. You can then fill in the actual validation rules or database calls which are specific to your app. The AI saves you from writing boilerplate, and you handle the custom logic if it’s sensitive.
让 AI 搭建框架,再由你填充细节:有时让 AI 先生成一个粗略结构,再由你进行调整会很有帮助。例如:"生成一个包含验证和错误处理的 Node.js Express 用户注册路由骨架"。它可能会生成一个带有占位符的通用路由,然后你就可以填入实际应用中特定的验证规则或数据库调用。AI 帮你省去了编写模板代码的时间,而你可以处理那些敏感的定制逻辑。

Ask for edge case handling: When generating a feature, you might prompt the AI to think of edge cases: “What edge cases should we consider for this feature (and can you handle them in the code)?” For instance, in the search example, an edge case might be “what if the products haven’t loaded yet when the user types?” (though our code handles that via loading state) or “what if two products have the same name” (not a big issue but maybe mention it). The AI could mention things like empty result handling, very large lists (maybe needing debounce for search input), etc. This is a way to leverage the AI’s training on common pitfalls.
要求处理边界情况:在生成功能时,可以提示 AI 考虑边界情况:"这个功能需要考虑哪些边界情况(你能在代码中处理它们吗)?"例如在搜索功能中,边界情况可能是"如果用户输入时产品数据尚未加载怎么办?"(虽然我们的代码通过加载状态处理了这种情况),或者"如果两个产品同名怎么办"(虽不是大问题但可以提一下)。AI 可能会提到空结果处理、超长列表(可能需要对搜索输入进行防抖处理)等情况。这是利用 AI 对常见陷阱的训练经验的好方法。

Documentation-driven development: A nifty approach some have taken is writing a docstring or usage example first and having the AI implement the function to match. For example:
文档驱动开发:一种巧妙的做法是先编写文档字符串或使用示例,然后让 AI 根据描述实现相应功能。例如:
/**
* Returns the nth Fibonacci number.
* @param {number} n - The position in Fibonacci sequence (0-indexed).
* @returns {number} The nth Fibonacci number.
*
* Example: fibonacci(5) -> 5 (sequence: 0,1,1,2,3,5,…)
*/
function fibonacci(n) {
// ... implementation
}
If you write the above comment and function signature, an LLM might fill in the implementation correctly because the comment describes exactly what to do and even gives an example. This technique ensures you clarify the feature in words first (which is a good practice generally), and then the AI uses that as the spec to write the code.
如果你编写上述注释和函数签名,LLM 可能会正确填充实现代码,因为注释精确描述了需要完成的任务并提供了示例。这种技术确保你先用文字明确功能需求(这本身就是良好的编程实践),然后 AI 会将其作为规范来编写代码。
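For what it’s worth, a plausible completion for the fibonacci spec above is the simple iterative version below. The docstring acts as the contract; this body is just one reasonable implementation the model might choose (it could equally return a recursive or memoized variant):

function fibonacci(n) {
  // Walk the sequence up from fibonacci(0) = 0 and fibonacci(1) = 1
  let prev = 0;
  let curr = 1;
  for (let i = 0; i < n; i++) {
    [prev, curr] = [curr, prev + curr];
  }
  return prev; // fibonacci(5) -> 5, matching the example in the docstring
}

Because the comment already pins down the indexing (0-based) and gives a worked example, it’s easy to sanity-check whatever the model produces against it.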
Having covered prompting strategies for debugging, refactoring, and new code generation, let’s turn our attention to some common pitfalls and anti-patterns in prompt engineering for coding. Understanding these will help you avoid wasting time on unproductive interactions and quickly adjust when the AI isn’t giving you what you need.
在探讨了调试、重构和新代码生成的提示策略后,现在让我们关注编程提示工程中一些常见的陷阱和反模式。理解这些将帮助您避免在无效交互上浪费时间,并在 AI 未能提供所需内容时快速调整。
Common prompt anti-patterns and how to avoid them
常见提示反模式及规避方法
Not all prompts are created equal. By now, we’ve seen numerous examples of effective prompts, but it’s equally instructive to recognize anti-patterns – common mistakes that lead to poor AI responses.
并非所有提示都生而平等。至此我们已经见识了许多有效提示的示例,但识别反模式——那些导致 AI 响应不佳的常见错误——同样具有指导意义。
Here are some frequent prompt failures and how to fix them:
以下是常见的提示词错误及修正方法:
Anti-Pattern: The Vague Prompt. This is the classic “It doesn’t work, please fix it” or “Write something that does X” without enough detail. We saw an example of this when the question “Why isn’t my function working?” got a useless answer. Vague prompts force the AI to guess the context and often result in generic advice or irrelevant code. The fix is straightforward: add context and specifics. If you find yourself asking a question and the answer feels like a Magic 8-ball response (“Have you tried checking X?”), stop and reframe your query with more details (error messages, code excerpt, expected vs actual outcome, etc.). A good practice is to read your prompt and ask, “Could this question apply to dozens of different scenarios?” If yes, it’s too vague. Make it so specific that it could only apply to your scenario.
反模式:模糊提示。这是典型的“它不工作了,请修复”或“写一个能实现 X 功能的东西”这类缺乏细节的提问。我们曾看到“为什么我的函数不工作?”这样的问题得到了毫无用处的回答。模糊提示迫使 AI 猜测上下文,通常导致给出通用建议或不相关的代码。解决方法很简单:添加上下文和具体细节。如果你发现自己提出的问题得到的回答像是魔法八球式的回应(“你试过检查 X 吗?”),请停下来用更多细节(错误信息、代码片段、预期与实际结果等)重新组织问题。一个好的做法是阅读你的提示并自问:“这个问题能套用到几十种不同场景吗?”如果是,那就太模糊了。要让提示具体到只能适用于你的特定场景。

Anti-Pattern: The Overloaded Prompt. This is the opposite issue: asking the AI to do too many things at once. For instance, “Generate a complete Node.js app with authentication, a front-end in React, and deployment scripts.” Or even on a smaller scale, “Fix these 5 bugs and also add these 3 features in one go.” The AI might attempt it, but you’ll likely get a jumbled or incomplete result, or it might ignore some parts of the request. Even if it addresses everything, the response will be long and harder to verify. The remedy is to split the tasks. Prioritize: do one thing at a time, as we emphasized earlier. This makes it easier to catch mistakes and ensures the model stays focused. If you catch yourself writing a paragraph with multiple “and” in the instructions, consider breaking it into separate prompts or sequential steps.
反模式:过度负载的提示。这是相反的问题:要求 AI 一次性完成过多任务。例如,"生成一个完整的 Node.js 应用,包含身份验证、React 前端和部署脚本。"或者更小规模的请求:"一次性修复这 5 个 bug 并添加这 3 个功能。"AI 可能会尝试执行,但结果往往杂乱不完整,或忽略部分请求。即使它处理了所有要求,响应也会冗长且难以验证。解决方法是拆分任务:优先处理,一次只做一件事,正如我们之前强调的。这样更容易发现错误并确保模型保持专注。如果你发现指令中包含多个"并且"的长段落,考虑将其拆分为独立提示或分步执行。

Anti-Pattern: Missing the Question. Sometimes users will present a lot of information but never clearly ask a question or specify what they need. For example, dumping a large code snippet and just saying “Here’s my code.” This can confuse the AI – it doesn’t know what you want. Always include a clear ask, such as “Identify any bugs in the above code”, “Explain what this code does”, or “Complete the TODOs in the code”. A prompt should have a purpose. If you just provide text without a question or instruction, the AI might make incorrect assumptions (like summarizing the code instead of fixing it, etc.). Make sure the AI knows why you showed it some code. Even a simple addition like, “What’s wrong with this code?” or “Please continue implementing this function.” gives it direction.
反模式:缺少明确问题。有时用户会提供大量信息,却始终没有明确提出疑问或说明需求。例如直接贴出一大段代码并只说"这是我的代码"。这会让人工智能感到困惑——它不知道你想要什么。务必包含清晰的请求,比如"找出上述代码中的错误"、"解释这段代码的功能"或"完成代码中的 TODO 项"。每个提示都应该有明确目的。如果只提供文本而不附带问题或指令,AI 可能会做出错误假设(比如总结代码而非修复错误等)。确保 AI 知道你展示代码的意图。即使是简单的补充说明,如"这段代码有什么问题?"或"请继续实现这个函数",也能为 AI 指明方向。

Anti-Pattern: Vague Success Criteria. This is a subtle one – sometimes you might ask for an optimization or improvement, but you don’t define what success looks like. For example, “Make this function faster.” Faster by what metric? If the AI doesn’t know your performance constraints, it might micro-optimize something that doesn’t matter or use an approach that’s theoretically faster but practically negligible. Or “make this code cleaner” – “cleaner” is subjective. We dealt with this by explicitly stating goals like “reduce duplication” or “improve variable names” etc. The fix: quantify or qualify the improvement. E.g., “optimize this function to run in linear time (current version is quadratic)” or “refactor this to remove global variables and use a class instead.” Basically, be explicit about what problem you’re solving with the refactor or feature. If you leave it too open, the AI might solve a different problem than the one you care about.
反模式:模糊的成功标准。这是一个微妙的问题——有时你可能会要求优化或改进,却没有明确定义成功的标准。例如,“让这个函数运行更快。”以什么指标衡量更快?如果 AI 不了解你的性能约束,它可能会对无关紧要的部分进行微观优化,或采用理论上更快但实际效果微乎其微的方法。又如“让这段代码更整洁”——“整洁”是主观的。我们通过明确目标来解决这个问题,比如“减少重复”或“改进变量命名”等。解决方法:量化或限定改进标准。例如,“优化此函数使其以线性时间运行(当前版本为二次复杂度)”或“重构此代码以消除全局变量,改用类实现”。核心原则是明确说明你希望通过重构或新功能解决什么问题。如果标准过于开放,AI 可能会解决一个你并不关心的问题。

Anti-Pattern: Ignoring AI’s Clarification or Output. Sometimes the AI might respond with a clarifying question or an assumption. For instance: “Are you using React class components or functional components?” or “I assume the input is a string – please confirm.” If you ignore these and just reiterate your request, you’re missing an opportunity to improve the prompt. The AI is signaling that it needs more info. Always answer its questions or refine your prompt to include those details. Additionally, if the AI’s output is clearly off (like it misunderstood the question), don’t just retry the same prompt verbatim. Take a moment to adjust your wording. Maybe your prompt had an ambiguous phrase or omitted something essential. Treat it like a conversation – if a human misunderstood, you’d explain differently; do the same for the AI.
反模式:忽视 AI 的澄清或输出。有时 AI 可能会以澄清问题或假设的方式回应,例如:"您使用的是 React 类组件还是函数组件?"或"我假设输入是字符串——请确认。"如果忽略这些提示而只是重复请求,就会错失优化提示词的机会。AI 正在表明它需要更多信息。务必回答其问题或完善提示词以包含这些细节。此外,如果 AI 的输出明显有误(比如误解了问题),不要原封不动地重试相同提示。花点时间调整措辞,可能是提示词中存在歧义短语或遗漏了关键内容。将其视为对话——如果人类产生误解,你会换种方式解释;对 AI 也应采取相同策略。

Anti-Pattern: Varying Style or Inconsistency. If you keep changing how you ask or mixing different formats in one go, the model can get confused. For example, switching between first-person and third-person in instructions, or mixing pseudocode with actual code in a confusing way. Try to maintain a consistent style within a single prompt. If you provide examples, ensure they are clearly delineated (use Markdown triple backticks for code, quotes for input/output examples, etc.). Consistency helps the model parse your intent correctly. Also, if you have a preferred style (say, ES6 vs ES5 syntax), consistently mention it, otherwise the model might suggest one way in one prompt and another way later.
反模式:风格多变或不一致。如果频繁改变提问方式或在同一指令中混用不同格式,模型可能会感到困惑。例如在指令中切换第一人称和第三人称,或以混乱的方式混合伪代码与实际代码。请尽量在单个提示中保持风格一致。若提供示例,需确保明确区分(代码使用 Markdown 三重反引号,输入/输出示例使用引号等)。一致性有助于模型正确解析你的意图。此外,若有偏好风格(如 ES6 与 ES5 语法),请始终明确说明,否则模型可能在这次提示中建议一种方式,下次又建议另一种。

Anti-Pattern: Vague references like “above code”. When using chat, if you say “the above function” or “the previous output”, be sure the reference is clear. If the conversation is long and you say “refactor the above code”, the AI might lose track or pick the wrong code snippet to refactor. It’s safer to either quote the code again or specifically name the function you want refactored. Models have a limited attention window, and although many LLMs can refer to prior parts of the conversation, giving it explicit context again can help avoid confusion. This is especially true if some time (or several messages) passed since the code was shown.
反模式:模糊引用如“上述代码”。使用聊天功能时,若提及“上面的函数”或“之前的输出”,务必确保引用明确。在较长对话中说“重构上述代码”时,AI 可能丢失上下文或选错待重构的代码片段。更稳妥的做法是重新引用代码或明确指定函数名。模型存在注意力窗口限制,尽管许多 LLMs 能回溯对话历史,但重新提供显式上下文有助于避免混淆——特别是当代码展示后已间隔较长时间(或多条消息)时更需注意。
Finally, here’s a tactical approach to rewriting prompts when things go wrong:
最后,当遇到问题时,这里有一个重写提示词的有效策略:
Identify what was missing or incorrect in the AI’s response. Did it solve a different problem? Did it produce an error or a solution that doesn’t fit? For example, maybe you asked for a solution in TypeScript but it gave plain JavaScript. Or it wrote a recursive solution when you explicitly wanted iterative. Pinpoint the discrepancy.
识别 AI 响应中缺失或错误的部分。它是否解决了不同的问题?是否产生了错误或不匹配的解决方案?例如,你可能要求用 TypeScript 编写解决方案,但它给出了普通 JavaScript 代码;或者你明确要求迭代方案时它却提供了递归实现。请精准指出这些差异。

Add or emphasize that requirement in a new prompt. You might say, “The solution should be in TypeScript, not JavaScript. Please include type annotations.” Or, “I mentioned I wanted an iterative solution – please avoid recursion and use a loop instead.” Sometimes it helps to literally use phrases like “Note:” or “Important:” in your prompt to highlight key constraints (the model doesn’t have emotions, but it does weigh certain phrasing as indicating importance). For instance: “Important: Do not use any external libraries for this.” or “Note: The code must run in the browser, so no Node-specific APIs.”
在新的提示中补充或强调该要求。你可以这样说:"解决方案应使用 TypeScript 而非 JavaScript,请包含类型注解。" 或者:"我提到需要迭代解法——请避免递归,改用循环实现。"有时直接在提示中使用"注意:"或"重要:"等字眼能有效突出关键约束(模型虽无情感,但会识别特定措辞的重要性)。例如:"重要:不得使用任何外部库。" 或 "注意:代码需在浏览器中运行,因此不能使用 Node 专用 API。"

Break down the request further if needed. If the AI repeatedly fails on a complex request, try asking for a smaller piece first. Or ask a question that might enlighten the situation: “Do you understand what I mean by X?” The model might then paraphrase what it thinks you mean, and you can correct it if it’s wrong. This is meta-prompting – discussing the prompt itself – and can sometimes resolve misunderstandings.
必要时可将请求进一步拆分。若 AI 在复杂请求上反复失败,可先尝试要求完成其中一小部分。或者提出一个可能启发思路的问题:"你理解我所说的 X 是什么意思吗?"模型可能会复述它对你的意思的理解,这时若存在偏差便可进行纠正。这就是元提示(meta-prompting)——对提示本身进行讨论——有时能有效消除理解偏差。

Consider starting fresh if the thread is stuck. Sometimes after multiple tries, the conversation may reach a confused state. It can help to start a new session (or clear the chat history for a moment) and prompt from scratch with a more refined ask that you’ve formulated based on previous failures. The model doesn’t mind repetition, and a fresh context can eliminate any accumulated confusion from prior messages.
若线程陷入僵局,不妨考虑重新开始。经过多次尝试后,对话可能会进入混乱状态。此时开启新会话(或暂时清空聊天记录)并根据先前失败经验优化提问方式重新发起对话往往更有效。模型不介意重复操作,全新的对话上下文能消除之前消息积累的混乱状态。
By being aware of these anti-patterns and their solutions, you’ll become much faster at adjusting your prompts on the fly. Prompt engineering for developers is very much an iterative, feedback-driven process (as any programming task is!). The good news is, you now have a lot of patterns and examples in your toolkit to draw from.
通过认识这些反模式及其解决方案,你将能更快速地即时调整提示。对于开发者而言,提示工程本质上是一个迭代的、由反馈驱动的过程(正如所有编程任务一样!)。好消息是,你现在已经掌握了大量可随时调用的模式和示例。
Conclusion 结语
Prompt engineering is a bit of an art and a bit of a science – and as we’ve seen, it’s quickly becoming a must-have skill for developers working with AI code assistants. By crafting clear, context-rich prompts, you essentially teach the AI what you need, just as you would onboard a human team member or explain a problem to a peer. Throughout this article, we explored how to systematically approach prompts for debugging, refactoring, and feature implementation:
提示工程既是艺术也是科学——正如我们所看到的,它正迅速成为开发者使用 AI 编程助手时的必备技能。通过构建清晰、富含上下文的提示,你实质上是在教导 AI 理解你的需求,就像培训新团队成员或向同事解释问题一样。在本文中,我们系统性地探讨了如何构建用于调试、重构和功能实现的提示:
We learned to feed the AI the same information you’d give a colleague when asking for help: what the code is supposed to do, how it’s misbehaving, relevant code snippets, and so on – thereby getting much more targeted help.
我们学会了向 AI 提供与向同事求助时相同的信息:代码预期实现的功能、出现的异常行为、相关代码片段等——从而获得更具针对性的帮助。

We saw the power of iterating with the AI, whether it’s stepping through a function’s logic line by line, or refining a solution through multiple prompts (like turning a recursive solution into an iterative one, then improving variable names). Patience and iteration turn the AI into a true pair programmer rather than a one-shot code generator.
我们见证了与 AI 迭代的强大之处,无论是逐行梳理函数逻辑,还是通过多次提示优化解决方案(比如将递归方案改为迭代实现,再改进变量命名)。耐心和迭代使 AI 成为真正的结对编程伙伴,而非一次性代码生成器。

We utilized role-playing and personas to up-level the responses – treating the AI as a code reviewer, a mentor, or an expert in a certain stack. This often produces more rigorous and explanation-rich outputs, which not only solve the problem but educate us in the process.
我们通过角色扮演和设定身份来提升回答质量——将 AI 视为代码审查员、导师或特定技术栈专家。这种方法通常能产生更严谨且解释详尽的输出,不仅解决问题,还在过程中教育我们。

For refactoring and optimization, we emphasized defining what “good” looks like (be it faster, cleaner, more idiomatic, etc.), and the AI showed that it can apply known best practices when guided (like parallelizing calls, removing duplication, handling errors properly). It’s like having access to the collective wisdom of countless code reviewers – but you have to ask the right questions to tap into it.
在重构与优化方面,我们着重强调明确"优秀"的标准(无论是更快速、更简洁、更符合语言惯例等)。实践表明,当给予适当引导时,AI 能够应用已知的最佳实践(如并行化调用、消除重复代码、正确处理错误)。这就像获得了无数代码审查者的集体智慧——但前提是你要提出正确的问题才能解锁这些知识。

We also demonstrated building new features step by step with AI assistance, showing that even complex tasks can be decomposed and tackled one prompt at a time. The AI can scaffold boilerplate, suggest implementations, and even highlight edge cases if prompted – acting as a knowledgeable co-developer who’s always available.
我们还逐步演示了如何借助 AI 构建新功能,证明即便是复杂任务也能被拆解并通过逐个提示攻克。AI 能搭建基础框架、建议实现方案,甚至在提示下指出边界情况——就像一位随时待命、知识渊博的协同开发者。

Along the way, we identified pitfalls to avoid: keeping prompts neither too vague nor too overloaded, always specifying our intent and constraints, and being ready to adjust when the AI’s output isn’t on target. We cited concrete examples of bad prompts and saw how minor changes (like including an error message or expected output) can dramatically improve the outcome.
在此过程中,我们总结出需要避免的误区:提示词既不能过于模糊也不应过度冗杂,始终明确表达意图和约束条件,当 AI 输出偏离目标时要及时调整。我们列举了不良提示词的具体案例,并展示了细微改动(比如包含错误信息或预期输出)如何显著提升生成效果。
As you incorporate these techniques into your workflow, you’ll likely find that working with AI becomes more intuitive. You’ll develop a feel for what phrasing gets the best results and how to guide the model when it goes off course. Remember that the AI is a product of its training data – it has seen many examples of code and problem-solving, but it’s you who provides direction on which of those examples are relevant now. In essence, you set the context, and the AI follows through.
随着你将上述技巧融入工作流程,会逐渐发现与 AI 协作变得更加得心应手。你将培养出对最佳表达方式的敏锐直觉,并掌握在模型偏离轨道时的引导技巧。请记住:AI 是其训练数据的产物——它见识过大量代码范例和解题方案,但由你来决定当前哪些范例具有相关性。本质上,你设定上下文语境,AI 负责执行落地。
It’s also worth noting that prompt engineering is an evolving practice. The community of developers is constantly discovering new tricks – a clever one-liner prompt or a structured template can suddenly go viral on social media because it unlocks a capability people didn’t realize was there. Stay tuned to those discussions (on Hacker News, Twitter, etc.) because they can inspire your own techniques. But also, don’t be afraid to experiment yourself. Treat the AI as a flexible tool – if you have an idea (“what if I ask it to draw an ASCII diagram of my architecture?”), just try it. You might be surprised at the results, and if it fails, no harm done – you’ve learned something about the model’s limits or needs.
值得注意的是,提示工程是一门不断发展的实践。开发者社区持续发掘新技巧——一个巧妙的单行提示或结构化模板可能突然在社交媒体爆红,只因它解锁了人们未曾意识到的功能。请持续关注这些讨论(如 Hacker News、Twitter 等平台),它们能激发你的创作灵感。但同样重要的是,不要畏惧亲自实验。将 AI 视为灵活工具——当你产生想法时(比如"如果让它用 ASCII 字符绘制我的架构图会怎样?"),尽管尝试。结果可能会让你惊喜,即便失败也无妨——你已借此了解了模型的局限或需求。
In summary, prompt engineering empowers developers to get more out of AI assistants. It’s the difference between a frustrating experience (“this tool is useless, it gave me nonsense”) and a productive one (“this feels like pair programming with an expert who writes boilerplate for me”). By applying the playbook of strategies we’ve covered – from providing exhaustive context to nudging the AI’s style and thinking – you can turn these code-focused AI tools into true extensions of your development workflow. The end result is not only that you code faster, but often you pick up new insights and patterns along the way (as the AI explains things or suggests alternatives), leveling up your own skillset.
总之,提示工程让开发者能更高效地运用 AI 助手。这决定了两种截然不同的体验:令人沮丧的("这工具太没用了,净给我胡言乱语")与高效协作的("感觉就像在和专家结对编程,还能帮我写样板代码")。通过运用我们介绍的策略手册——从提供详尽上下文到引导 AI 的风格与思考方式——你可以将这些代码导向的 AI 工具真正转化为开发流程的延伸。最终效果不仅是编码速度提升,更能在 AI 解释原理或提出替代方案时获得新见解与模式,从而提升自身技能水平。
As a final takeaway, remember that prompting is an iterative dialogue. Approach it with the same clarity, patience, and thoroughness you’d use when communicating with another engineer. Do that, and you’ll find that AI assistants can significantly amplify your abilities – helping you debug quicker, refactor smarter, and implement features with greater ease.
最后要记住,提示工程是一个迭代对话的过程。要用与工程师沟通时同样的清晰度、耐心和细致态度来对待它。做到这一点,你会发现 AI 助手能极大增强你的能力——帮助你更快调试、更智能重构、更轻松地实现功能。
Happy prompting, and happy coding!
祝您提示愉快,编码快乐!
Further reading: 延伸阅读:
How to write better prompts for GitHub Copilot. GitHub Blog
如何为 GitHub Copilot 编写更优质的提示。GitHub 官方博客
ChatGPT Prompt Engineering for Developers: 13 Best Examples
面向开发者的 ChatGPT 提示工程:13 个最佳实践范例
Prompt Engineering for Lazy Programmers: Getting Exactly the Code You Want
给懒人程序员的提示工程:精准获取你想要的代码
Best practices for prompting GitHub Copilot in VS Code
在 VS Code 中使用 GitHub Copilot 的最佳提示实践
ChatGPT: A new-age Debugger, 10 Prompts
ChatGPT:新时代调试器,10 个实用提示
ChatGPT Prompts for Code Review and Debugging
用于代码审查和调试的 ChatGPT 提示集