Just like software engineering has design patterns (Singleton, Factory, Observer), prompt engineering has developed its own set of proven structures. These patterns help produce reliable outputs, reduce hallucinations, and make your prompts maintainable.
1. The Persona Pattern
The most fundamental pattern. You assign the model a role to narrow its effective search space.
Act as a Senior React Developer.
Review the following code for performance bottlenecks and accessibility issues.
Why it works
It sets the latent space context. The model shifts its probability distribution towards “expert developer” tokens rather than “general knowledge” tokens.
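In application code, the persona usually becomes a prefix on every request. A minimal sketch, assuming a hypothetical `with_persona` helper (not part of any library):

```python
def with_persona(persona: str, task: str) -> str:
    """Prefix a task with a role assignment to steer the model's context.

    Hypothetical helper: in a real app this string would become the
    system or user message sent to your LLM provider.
    """
    return f"Act as a {persona}.\n{task}"


prompt = with_persona(
    "Senior React Developer",
    "Review the following code for performance bottlenecks and accessibility issues.",
)
```

Keeping the persona in one helper means you can swap roles per feature without rewriting every prompt.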
2. The Chain-of-Thought (CoT) Pattern
Encouraging the model to “show its work” before giving a final answer.
Determine if the customer support ticket is urgent.
First, analyze the sentiment of the message.
Second, check for keywords related to account access or billing.
Finally, output a JSON object with { "urgent": boolean, "reason": string }.
Benefits
- Debuggable reasoning
- Higher accuracy on logic puzzles
- Structured output
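Because the model reasons in prose before emitting the JSON, your code has to separate the two. A minimal sketch, assuming the final JSON object appears at the end of the reply (both helpers below are hypothetical, not from any library):

```python
import json


def build_cot_prompt(ticket: str) -> str:
    """Assemble the stepwise prompt from the pattern above (hypothetical helper)."""
    return (
        "Determine if the customer support ticket is urgent.\n"
        "First, analyze the sentiment of the message.\n"
        "Second, check for keywords related to account access or billing.\n"
        'Finally, output a JSON object with { "urgent": boolean, "reason": string }.\n\n'
        f"Ticket: {ticket}"
    )


def parse_verdict(reply: str) -> dict:
    """Extract the trailing JSON object, discarding the reasoning prose before it."""
    start = reply.rindex("{")  # assumes the last "{" opens the final JSON object
    return json.loads(reply[start:])


# Simulated model reply: reasoning first, structured answer last.
reply = 'The sentiment is negative and mentions login.\n{"urgent": true, "reason": "account access"}'
verdict = parse_verdict(reply)
```

The reasoning text stays available for debugging even though only the JSON drives your application logic.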
3. The Few-Shot Pattern
Providing examples is often more powerful than providing instructions.
Classify the following emails:
Input: "I can't log in."
Output: Technical Issue
Input: "Where is my refund?"
Output: Billing Inquiry
Input: "Do you support dark mode?"
Output: Feature Request
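Few-shot prompts are easy to generate programmatically, which keeps your examples in data rather than hard-coded strings. A minimal sketch with a hypothetical `few_shot_prompt` helper:

```python
def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Format labelled examples ahead of the new input (hypothetical helper)."""
    lines = ["Classify the following emails:", ""]
    for text, label in examples:
        lines.append(f'Input: "{text}"')
        lines.append(f"Output: {label}")
    # End with an unanswered Input/Output pair for the model to complete.
    lines.append(f'Input: "{query}"')
    lines.append("Output:")
    return "\n".join(lines)


examples = [
    ("I can't log in.", "Technical Issue"),
    ("Where is my refund?", "Billing Inquiry"),
    ("Do you support dark mode?", "Feature Request"),
]
prompt = few_shot_prompt(examples, "The app crashes on startup.")
```

Storing examples as data also lets you A/B test which exemplars produce the best classifications.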
4. The Template Pattern
Separating data from instructions. This is essential for building apps.
Verify the following user data against our policy.
Policy:
{{policy_text}}
User Data:
{{user_json}}
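In production you would typically use a templating engine, but a minimal stand-in for the `{{placeholder}}` style above can be sketched in a few lines (the `render` helper is hypothetical):

```python
def render(template: str, **values: str) -> str:
    """Fill {{key}} slots with values; a minimal stand-in for a real templating engine."""
    for key, value in values.items():
        template = template.replace("{{" + key + "}}", value)
    return template


PROMPT = """Verify the following user data against our policy.
Policy:
{{policy_text}}
User Data:
{{user_json}}"""

prompt = render(
    PROMPT,
    policy_text="Shipping addresses must not be PO boxes.",
    user_json='{"address": "PO Box 42"}',
)
```

Separating the template from the data also makes it easier to sanitize user input before it reaches the prompt.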
5. The Refusal Breaker (Ethical)
Sometimes models are too cautious. You can guide them to be helpful within safety bounds.
Note: this isn’t about jailbreaking. It’s about reframing a benign task, for example as a theoretical or creative exercise, so that over-eager filters don’t block a legitimate request.
Conclusion
Start thinking in patterns. Your prompts will become modular, reusable assets rather than magical incantations.