Prompt Engineering Anti-Patterns

Prompt engineering has exploded in popularity: “few-shot,” “chain-of-thought,” and “role-playing” templates are everywhere.

Some genuinely help. More often, though, I see technique-stacking make outputs worse. This article covers the anti-patterns I think hurt more than they help.

Anti-Pattern 1: Role-Play That Says Nothing

The most common template:

You are a senior backend engineer with 10 years of experience...

This is noise.

The training data already contains plenty of “senior engineer” code. Telling the model “you’re an engineer” just makes it output generic engineer-flavored text that has nothing to do with your specific context.

More effective:

You are a Go backend engineer maintaining a high-concurrency API service.
Tech stack: Gin + GORM + Redis.
Constraints: Go 1.21 compatible, no any types allowed...

A specific tech stack, constraints, and context are worth far more than “10 years of experience.”

Anti-Pattern 2: Few-Shot Abuse

Few-shot prompting gives the model examples to learn the output format from. But many people misuse it as example hoarding.

The problem: the more examples you add, the more the model overfits to their superficial patterns instead of actually understanding the task.

The most egregious prompt I’ve seen packed in 20 Q&A pairs as few-shot examples. The model started mimicking the conversation’s tone, phrasing, even emoji, and completely missed the task.

When few-shot helps:

  • Output format must strictly follow a schema (e.g., JSON)
  • Task boundaries are vague and examples are needed to define “what counts as correct”

When few-shot is burden:

  • The task definition is already clear
  • The model needs to reason, not imitate
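
When few-shot is used for the right reason, two examples are usually enough to pin the schema down. A minimal sketch of that discipline (the classification task, field names, and examples are all hypothetical):

```python
# Sketch: few-shot used only to pin down the output format, not to
# hoard examples. Task, examples, and JSON fields are hypothetical.

FEW_SHOT = [
    ("Order #123 shipped yesterday.", '{"intent": "shipping", "order_id": "123"}'),
    ("I want my money back.",         '{"intent": "refund", "order_id": null}'),
]

def build_prompt(user_message: str) -> str:
    """Two examples fix the JSON schema; piling on more just invites
    the model to imitate surface style instead of doing the task."""
    lines = ["Classify the message and answer with JSON only.", ""]
    for msg, answer in FEW_SHOT:
        lines.append(f"Message: {msg}")
        lines.append(f"JSON: {answer}")
        lines.append("")
    lines.append(f"Message: {user_message}")
    lines.append("JSON:")
    return "\n".join(lines)
```

The point of the sketch is the small, fixed example list: each example exists to demonstrate the schema, and nothing else.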

Anti-Pattern 3: Chain-of-Thought Misuse

CoT makes models “think step by step.” But it solves one specific problem: long reasoning chains where steps get skipped. Not every task needs it.

Example:

2 + 2 = ?

No CoT needed; the model answers 4 directly. Forcing it to “think” just wastes tokens.

CoT actually helps:

  • Math derivation
  • Logical reasoning
  • Multi-condition decisions

CoT unnecessary for:

  • Factual queries
  • Format conversion
  • Simple classification
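
The two lists above amount to a gate: add the CoT instruction only for reasoning-heavy categories. A minimal sketch, with illustrative category names:

```python
# Sketch: gate the chain-of-thought instruction on task category.
# Category names mirror the lists above and are illustrative.

NEEDS_COT = {"math", "logic", "multi_condition"}

def wrap_prompt(task: str, category: str) -> str:
    if category in NEEDS_COT:
        return task + "\n\nThink step by step, then state the final answer."
    return task  # direct answer; forcing "think" just wastes tokens
```

Usage: `wrap_prompt("2 + 2 = ?", "fact_lookup")` returns the task unchanged, while a `"math"` task gets the step-by-step suffix.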

Anti-Pattern 4: Length Constraints “No Less Than XXX Words”

“Please write an article of no less than 800 words.”

This constraint boxes the model in instead of improving quality.

The model hits the word count by padding with filler: “in conclusion” and “from the above analysis we can see” start appearing.

Quality isn’t piled up out of word count. If your content needs padding to “feel complete,” the problem is structure, not length.

Anti-Pattern 5: Making Model “Thinking Process” Transparent

Some prompts require the model to “show the thinking process first, then the conclusion.”

Reasonable in debugging scenarios, but usually just overhead in daily use.

The “thinking process” the model outputs essentially simulates a thinking human; it is not the actual reasoning. These “thoughts” are often filler that wastes tokens.

More efficient usage:

  • Simple tasks: give conclusion directly
  • Complex tasks: have it output a plan first, you approve, then execute
  • Debugging: have it explain assumptions and derivation
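
The plan-first workflow for complex tasks can be sketched as two separate calls with a human gate in between. `ask_model` here is a hypothetical stand-in for whatever LLM client you actually use:

```python
# Sketch of "plan first, approve, then execute".
# `ask_model` is a hypothetical stand-in for your real LLM client call;
# `approve` is the human (or policy) gate between the two stages.

def plan_then_execute(task: str, ask_model, approve):
    plan = ask_model(
        f"Outline a short numbered plan for: {task}\nDo not execute yet."
    )
    if not approve(plan):  # reject the plan -> never spend tokens executing
        return None
    return ask_model(
        f"Task: {task}\nApproved plan:\n{plan}\n"
        "Execute the plan. Output only the result."
    )
```

Splitting plan and execution into two calls is the design choice that matters: the cheap planning step gets reviewed before the expensive execution step runs at all.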

Anti-Pattern 6: Opening with “You are an AI”

You are an AI language model...

This only serves to remind the model “you’re just a program, don’t take it seriously.” Zero engineering value.

If your prompt starts with this, it usually means you haven’t clearly defined your task yet.

What Effective Prompts Share

Looking at genuinely effective prompts, they share a few traits:

  1. Clear task boundaries — not “write me code,” but “write a Gin endpoint /user/:id with JWT auth”
  2. Specific constraints — not “code must be robust,” but “errors return specific HTTP status codes and JSON format”
  3. Explicit acceptance criteria — not “write better,” but “pass test case A/B/C”
  4. Raw materials, not secondhand summaries — paste docs/code snippets directly instead of “based on what you know about XXX”
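
The four traits read as a checklist, and a prompt can be assembled from them mechanically. A minimal sketch (every concrete value below is a hypothetical placeholder):

```python
# Sketch: assemble a prompt from the four ingredients above.
# All concrete values passed in are hypothetical placeholders.

def build_task_prompt(task: str, constraints: list[str],
                      acceptance: list[str], materials: str) -> str:
    parts = [
        f"Task: {task}",
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
        "Acceptance criteria:\n" + "\n".join(f"- {a}" for a in acceptance),
        "Raw materials:\n" + materials,  # real docs/code, not summaries
    ]
    return "\n\n".join(parts)

prompt = build_task_prompt(
    task="Write a Gin endpoint GET /user/:id with JWT auth",
    constraints=["Go 1.21 compatible",
                 "errors return JSON with specific HTTP status codes"],
    acceptance=["passes test cases A/B/C"],
    materials="<paste the relevant handler and middleware code here>",
)
```

If any of the four arguments is hard to fill in, that is usually the signal that the task itself is not yet defined clearly enough to prompt for.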

Closing

Prompt engineering isn’t magic—models have ceilings. Good prompts don’t break the ceiling; they let the model perform at its ceiling consistently.

Clear task definition beats fancy techniques. Vague task + technique-stacking = garbage.