Positive Guidance vs. Negative Constraints in Prompts: Learning from Pitfalls
This article is split from A Brief Discussion on Mainstream AI Service Systems and Related Tools.
It’s an old problem: whenever I write notes, some section grows too long and throws the structure off balance. I didn't want to split it at first, since I'd then have to figure out how to flesh out the new article, and every time I write notes I just want to finish quickly. However, Claude said there was enough material to split, so I let it divide the piece into two notes (though after it finished, I still spent a while adjusting the result, orz).
Why You Should Care
In the past, "Zero-Social Prompts" (demanding that AI provide only pure answers and omit pleasantries) were popular online. I followed the trend for a while, but found that it sometimes made the conversation context difficult to maintain, so I removed them.
Later, to reduce Gemini's hallucinations, I tried adding several restrictive rules. The hallucinations didn't decrease much (it often waited for me to point out a mistake before noticing it), and the rules caused other side effects: whenever my tone in the conversation got a bit heavy, it would obsessively repeat certain phrases in every reply. When asked why, it said it had been "too focused on a certain constraint." So now I keep only the bare minimum of necessary constraints.
Why do too many rules make AI go off track?
This phenomenon is known in the industry as over-prompting. When prompts are imprecise or there are too many global rules, attention becomes diluted at the model level (Attention Dilution). The attention weights fall out of balance: to forcibly satisfy every global rule, the AI over-focuses on details that don't actually matter at the moment, triggering task interference, which ultimately makes the response's logic strange or stiff.
Prompt Strategy: Positive Guidance vs. Negative Constraints
In the world of Prompt Engineering, prompts are generally divided into two categories: Positive Prompts, which directly state what you want, and Negative Prompts, which warn the AI about what it absolutely must not do.
TIP
Prompts can adjust a model's behavioral tendencies, but I don't think you should expect them to cure hallucination problems (though they do have some effect), or to make the model automatically search the web when its own training data is lacking. Those are really matters for the model training process.
Although they sound similar, the "side effects" of these two approaches differ significantly in actual tuning.
Core Judgment Principles
| Type | Recommended Strategy | Logic |
|---|---|---|
| Shaping Tasks | Positive-oriented | The model needs to converge on a goal; saying "what it should look like" is much more effective than saying "what it shouldn't look like." |
| Safety Boundaries | Negative for red lines, Positive for alternatives | Negative semantics are the most direct, but pairing them with positive ones covers the gaps. |
| Behavioral Switches | Decide based on default bias | Use negative to turn off default behavior; use positive to enable behavior not present by default. |
1. Shaping Scenarios
When you want the model to "take a certain shape," positive prompts are almost universally superior.
1. Role and Persona Setting
Requiring the model to play a specific professional role, such as a senior engineer, legal consultant, or teaching assistant.
| Prompt Method | Example | Effect |
|---|---|---|
| ✅ Positive | "You are a senior backend engineer with ten years of experience, familiar with high-concurrency system design. When answering, you consider performance bottlenecks and operational costs." | Provides the model with a clear knowledge framework and perspective; the output naturally incorporates the role's logic and professional vocabulary. |
| ❌ Negative | "Don't answer like a novice, don't be too casual." | The model knows "what not to be," but doesn't know what it should become. The output is often just slightly more formal in tone, lacking real depth. |
Conclusion: Positive is better. Positive prompts provide a "model to imitate," while negative prompts only exclude one end, leaving the remaining space too large.
2. Tone and Communication Style
Setting the register of the response, such as friendly/colloquial, formal/written, or instructional/guided.
| Prompt Method | Example | Effect |
|---|---|---|
| ✅ Positive | "Please answer in friendly, colloquial Traditional Chinese. The tone should be like explaining to a friend without a technical background. Avoid excessive abbreviations, keep sentences short." | Describes the target audience and tone; the model can comprehensively adjust vocabulary choice, sentence structure, and examples. |
| ❌ Negative | "Don't be too academic, don't use English terminology, don't be too formal." | Excludes a few directions, but the tone itself remains vague; the output may fall between several styles, making it hard to converge. |
Conclusion: Positive is better. Tone is a continuous spectrum; negative prompts can only cut off a few endpoints. Describing the target audience with positive prompts allows the model to find precise positioning. You can pair this with a small amount of negative prompting to avoid common pitfalls.
3. Output Specification Setting
Controlling format and length, such as specifying JSON structure, limiting word count, or number of paragraphs.
| Prompt Method | Example | Effect |
|---|---|---|
| ✅ Positive (Format) | "Please output strictly in JSON format, with the structure { name: string, score: number, reason: string }, without any extra explanatory text." | The model has a clear template to align with; high stability, almost mandatory for scenarios requiring programmatic parsing. |
| ✅ Positive (Length) | "Please summarize in no more than three sentences, with each sentence not exceeding 25 characters." | A quantifiable convergence goal with a high success rate, effective in automated workflows. |
| ❌ Negative | "Don't give me plain text." / "Don't be long-winded." | The model understands it needs to change, but doesn't know what to change into; format and length remain unstable. |
Conclusion: Positive is better. Specifications are numerical or structural constraints; providing a template or number directly is the only certain way.
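The payoff of a positive format spec is that the output becomes programmatically checkable. A minimal Python sketch, reusing the hypothetical `{ name, score, reason }` structure from the table above (field names and the sample reply are illustrative, not from any real API):

```python
import json

# Expected structure from the positive format prompt:
# { name: string, score: number, reason: string }
SCHEMA = {"name": str, "score": (int, float), "reason": str}

def parse_reply(raw: str) -> dict:
    """Parse a model reply and verify it matches the promised JSON shape."""
    data = json.loads(raw)  # fails loudly if the model added extra prose
    for field, expected_type in SCHEMA.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], expected_type):
            raise ValueError(f"wrong type for {field}")
    return data

# A reply that follows the positive prompt parses cleanly:
reply = '{"name": "draft-v2", "score": 8.5, "reason": "concise and well structured"}'
print(parse_reply(reply)["score"])  # prints 8.5
```

With a negative prompt like "don't give me plain text," there is no agreed shape to validate against, so a check like this isn't even possible.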
4. Chain of Thought (CoT) Guidance
Hoping the model thinks step-by-step rather than giving an answer directly; suitable for complex analysis or reasoning tasks.
| Prompt Method | Example | Effect |
|---|---|---|
| ✅ Positive | "Please list your analytical premises first, then derive the conclusion step-by-step, and finally provide suggestions. Label each step with a number." | Clearly defines the order and structure of thinking; output quality and verifiability improve, and errors are easier to trace. |
| ❌ Negative | "Don't give the answer directly, don't skip the reasoning process." | The model knows it shouldn't omit the process, but doesn't know what the process should look like; it often just adds a sentence or two formally to get it over with. |
Conclusion: Positive is better. Directly defining "what the steps should look like" is far more effective than "don't omit steps."
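Because a positive CoT prompt is just an ordered list of stages, it is easy to generate mechanically. A toy helper, assuming the three stages from the example above (the wording is illustrative):

```python
def cot_instruction(stages):
    """Turn a list of reasoning stages into a numbered, positively-phrased
    chain-of-thought instruction like the one in the table above."""
    lines = [f"{i}. {stage}" for i, stage in enumerate(stages, start=1)]
    return ("Please work through the following steps in order, "
            "labelling each with its number:\n" + "\n".join(lines))

prompt = cot_instruction([
    "List your analytical premises.",
    "Derive the conclusion step by step.",
    "Provide concrete suggestions.",
])
print(prompt)
```

The negative version ("don't skip the reasoning process") has no such structure to generate from, which is exactly why it converges poorly.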
2. Defensive Scenarios
When you want the model to "not cross a certain line," negative prompts have value, but in most cases it is best to use them in combination with positive ones.
5. Hard Boundary Delineation
Covers both technical boundaries (disabling packages, syntax) and content boundaries (safety, privacy, sensitive content).
| Prompt Method | Example | Effect |
|---|---|---|
| ⚠️ Positive | "Please use only native JavaScript for implementation. If utility functions are needed, write them yourself." | Guides the model toward a safe direction, but cannot exhaust all situations to avoid; there is a risk of oversights. |
| ✅ Negative | "It is strictly forbidden to import any third-party packages." | Directly draws a forbidden zone; semantics are clear and the constraint is strong. |
Conclusion: Best used together. Negative draws the red line, positive provides the alternative; both are needed to avoid loopholes.
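A hard red line like "no third-party packages" can also be enforced outside the prompt. A toy Python guardrail that scans generated JavaScript for imports; the regexes are illustrative and deliberately coarse, not a real JS parser:

```python
import re

# Toy guardrail: flag generated JavaScript that pulls in packages.
# Coarse on purpose: it also flags relative ES imports, erring on the
# side of the red line.
FORBIDDEN = [
    re.compile(r'\brequire\s*\('),              # CommonJS require(...)
    re.compile(r'^\s*import\s', re.MULTILINE),  # ES module imports
]

def violates_red_line(generated_code: str) -> bool:
    """Return True if the code crosses the 'no third-party packages' line."""
    return any(pattern.search(generated_code) for pattern in FORBIDDEN)

print(violates_red_line("const _ = require('lodash');"))          # → True
print(violates_red_line("function sum(a, b) { return a + b; }"))  # → False
```

The positive half of the pairing ("write utility functions yourself") then tells the model what to do when the check would otherwise trip.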
6. Preventing Hallucination and Overconfidence
Requiring the model to state clearly when it is uncertain, avoiding fabricated data.
| Prompt Method | Example | Effect |
|---|---|---|
| ✅ Positive | "If you are uncertain about a piece of information, please clearly label it as 'Uncertain here' and explain the limitations. Do not fill in the blanks." | Establishes a "standard action for uncertainty" with a clear alternative behavior to execute. |
| ⚠️ Negative | "Don't fabricate data, don't pretend to be sure." | Has some inhibitory effect on obvious hallucinations, but the model sometimes overestimates itself, making the effect unstable. |
Conclusion: Best used together. The positive prompt defines the standard action to take when uncertain; the negative prompt reinforces the red line against fabrication.
7. Scope Limitation (No Out-of-Bounds Answers)
Requiring the model to answer only specific topics; common in customer service or tool-based AI.
| Prompt Method | Example | Effect |
|---|---|---|
| ✅ Positive | "You are only responsible for answering questions about this product's return/exchange policy and order issues. For other questions, please guide the user to contact customer service." | Clearly defines the scope of responsibility and how to handle out-of-bounds requests. |
| ❌ Negative | "Don't answer questions unrelated to the product, don't provide any suggestions or opinions." | The judgment of "unrelated" has gray areas; handling of edge cases will be inconsistent. |
Conclusion: Positive is better. Letting the model know "what the responsibility is" and "what to do when crossing the line" is more stable than listing prohibited items.
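The positive scope prompt can be mirrored by a pre-filter in code, which makes the gray areas explicit instead of leaving them to the model. A toy sketch; the keyword list and handoff message are illustrative placeholders, and a real system would use something smarter than substring matching:

```python
# Toy pre-filter mirroring the positive scope prompt above.
IN_SCOPE_KEYWORDS = ("return", "exchange", "refund", "order")
HANDOFF = "For other questions, please contact customer service."

def route(question: str) -> str:
    """Pass in-scope questions through; hand off everything else."""
    q = question.lower()
    if any(keyword in q for keyword in IN_SCOPE_KEYWORDS):
        return "ANSWER"  # forward to the model with the scoped prompt
    return HANDOFF       # the out-of-bounds case is handled explicitly

print(route("How do I return a damaged item?"))   # → ANSWER
print(route("What's your opinion on the news?"))  # → handoff message
```

Note how the positive framing forces you to decide up front what happens at the boundary; the negative version ("don't answer unrelated questions") never specifies a fallback at all.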
3. Behavioral Switch Scenarios
8. Binary Behavioral Switching
The model has a default bias for a certain behavior, and you want to change it.
| Situation | Recommended Direction | Positive Example | Negative Example | Reason |
|---|---|---|---|---|
| Default does it, you want to turn it off (e.g., auto-commenting) | ✅ Negative | "Please output only the code body, keep it clean." | "Please do not generate any code comments." ✓ | Directly targets the behavior you want to turn off; semantics are clearest. |
| Default doesn't do it, you want to enable it (e.g., provide alternatives) | ✅ Positive | "Please provide at least three alternatives and explain the trade-offs for each." ✓ | "Don't give only one answer, don't ignore other possibilities." | The model needs to know "what the new standard action is"; negative only expresses dissatisfaction and provides no direction. |
Conclusion: Decide based on default bias. To turn off a default behavior → negative is most direct; to enable a behavior not present by default → positive provides the goal.
How to Tell if AI Has Been "Broken" by Prompts
If you feel the AI's way of speaking has become strange recently, you can debug it like this (using Gemini as the example):
- Let the AI redo the answer to see if it is still abnormal.
- Redo it again, this time choosing "Do not use personal settings."
If the answer is clearly normal after turning off the settings, it means the global settings have interfered with the normal conversation. It is recommended to review and simplify the rules.
Generally, once you find it has been "broken," it is best to start a fresh conversation, but remember to turn off the "remember conversation" feature first; otherwise, watching it reply with information from previous conversations is truly baffling.
Summary
The core of Prompt strategy is actually one sentence: Letting the model know "what to do" is more effective than telling it "what not to do."
Positive guidance provides a clear convergence goal so the model knows where to go; negative constraints draw red lines, suitable as safety guardrails. The two are not mutually exclusive; in most scenarios, using them together works best—positive sets the direction, negative guards the bottom line.
As for "how many rules should be written," my experience is not to pursue quantity, but to see if each rule is truly affecting output quality. If the AI's response does not significantly worsen after removing a rule, it is likely just consuming attention weights.
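One systematic way to check whether a rule is pulling its weight is an ablation pass: drop one rule at a time and compare the output. A sketch of the loop; the rules are placeholder examples, and you would wire the commented-out model call and scoring to your own setup:

```python
# Placeholder global rules for illustration.
RULES = [
    "Answer in Traditional Chinese.",
    "Label uncertain claims as 'Uncertain here'.",
    "Do not import third-party packages.",
]

def ablation_variants(rules):
    """Yield (removed_rule, remaining_prompt) pairs for A/B testing."""
    for i, removed in enumerate(rules):
        remaining = rules[:i] + rules[i + 1:]
        yield removed, "\n".join(remaining)

for removed, prompt in ablation_variants(RULES):
    # reply = ask_model(prompt, test_question)   # your model call here
    # If quality barely changes without `removed`, the rule was likely
    # just consuming attention weight; consider deleting it for good.
    print(f"testing without: {removed!r}")
```

This is the code-shaped version of the advice above: keep a rule only if removing it measurably worsens the output.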
This concept applies to various occasions such as personalized prompts, Rules, Agent instructions, Skill definitions, and Prompt templates. As for how to set them up, you can only try slowly and find your own balance point. I am still exploring this myself.
Change Log
- 2026-03-07 Initial document creation.
