
How to Write a System Prompt That Stops Hallucinations

Last updated: April 2026 · 7 min read

Table of Contents

  1. Why hallucinations happen
  2. The four anti-hallucination rules
  3. Pair with retrieval
  4. Test the prompt
  5. Domain-specific patterns
  6. The trade-off

AI hallucinations are not magic. They happen when the model is asked a question it does not know the answer to, and your prompt has not given it permission to admit that. The model defaults to confident-sounding output because that is what most training data rewards. The fix is to write a system prompt that explicitly rewards uncertainty.

The free system prompt generator ships with a "no hallucination" rule toggle that adds this instruction automatically.

Why Hallucinations Happen

Language models predict the next token. When asked a factual question, they predict the most plausible-sounding answer based on training data. If the actual answer is well represented in that data, you get a correct response. If it is not, the model still generates SOMETHING — and that something is usually a confident-sounding fabrication.

The model is not lying. It does not have a concept of truth vs falsehood. It is doing exactly what it was trained to do: generate fluent text. The system prompt is where you change the incentive structure.

The Four Rules That Actually Work

  1. Explicit permission to say I don't know — "If you don't have reliable information about a question, say 'I don't know' or 'I'm not sure' clearly. This is preferred to guessing."
  2. Source citation requirement — "When making factual claims, cite the source if possible. If you cannot cite a source, frame the claim as 'I believe' or 'I think' rather than presenting it as fact."
  3. Refusal of fabricated specifics — "Never invent specific names, dates, statistics, or quotes. If you cannot cite the source, do not include the specific."
  4. Confidence calibration — "Distinguish what you know with high confidence from what you are uncertain about. Use phrases like 'I'm certain that...' vs 'I think...' vs 'I'm not sure but...'"
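The four rules above can be composed into a single system prompt programmatically. This is a minimal sketch; the rule wording is taken verbatim from the list, while the function name and role string are illustrative.

```python
# Sketch: compose the four anti-hallucination rules into one system prompt.
# Rule text follows the article; build_system_prompt is an illustrative name.

ANTI_HALLUCINATION_RULES = [
    # 1. Explicit permission to say "I don't know"
    "If you don't have reliable information about a question, say "
    "'I don't know' or 'I'm not sure' clearly. This is preferred to guessing.",
    # 2. Source citation requirement
    "When making factual claims, cite the source if possible. If you cannot "
    "cite a source, frame the claim as 'I believe' or 'I think' rather than "
    "presenting it as fact.",
    # 3. Refusal of fabricated specifics
    "Never invent specific names, dates, statistics, or quotes. If you "
    "cannot cite the source, do not include the specific.",
    # 4. Confidence calibration
    "Distinguish what you know with high confidence from what you are "
    "uncertain about. Use phrases like 'I'm certain that...' vs 'I think...' "
    "vs 'I'm not sure but...'",
]

def build_system_prompt(role, rules=ANTI_HALLUCINATION_RULES):
    """Join a role description with the numbered anti-hallucination rules."""
    numbered = "\n".join(f"{i}. {rule}" for i, rule in enumerate(rules, 1))
    return f"{role}\n\nFollow these rules:\n{numbered}"

print(build_system_prompt("You are a support assistant for Acme Corp."))
```

Keeping the rules in a list makes it easy to toggle individual rules on or off per deployment, which matters for the refusal-versus-helpfulness trade-off discussed later.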

Pair With Retrieval (RAG)

The most reliable anti-hallucination pattern is retrieval-augmented generation. Instead of relying on the model's memory, you fetch relevant information from a trusted source (your docs, your database, search results) and pass it into the prompt. The system prompt should explicitly tell the model to use the retrieved content:

"For factual questions, use only the information provided in the retrieved context. If the context does not contain the answer, say 'I don't have information about that in my sources' — do not fall back on prior knowledge."

This combination of system prompt + retrieval cuts hallucinations dramatically. The AI cost calculator can show you the API cost of adding retrieval to a chatbot.


Testing Whether Your Prompt Actually Works

Build a small eval set of questions that the model SHOULD NOT know: obscure facts, made-up names, recent events outside training cutoff, plausible-sounding but fake products. Run them through your prompt. Count how often the model says "I don't know" vs invents an answer.

A well-tuned anti-hallucination prompt should refuse 80%+ of these. If yours is below 50%, the rules are not strong enough.
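A tiny harness makes this measurable. This is a sketch under stated assumptions: `ask_model` is a placeholder for your actual chat-completion call, the refusal markers are illustrative, and the sample questions are deliberately fabricated so the model cannot know the answers.

```python
# Sketch: measure the refusal rate on questions the model SHOULD NOT know.
# ask_model is a placeholder — wire it to your real API call with your
# system prompt attached. Markers and questions below are illustrative.

REFUSAL_MARKERS = ("i don't know", "i'm not sure", "i don't have information")

# Deliberately fabricated specifics — correct behavior is to refuse.
UNANSWERABLE = [
    "What year was the Zorvex 9000 toaster released?",
    "Quote the opening line of the novel 'Glass Rivers of Ontario'.",
]

def is_refusal(answer):
    """Crude check: does the answer contain any known refusal phrase?"""
    return any(marker in answer.lower() for marker in REFUSAL_MARKERS)

def refusal_rate(ask_model, questions):
    """Fraction of unanswerable questions the model refuses to answer."""
    refusals = sum(is_refusal(ask_model(q)) for q in questions)
    return refusals / len(questions)

# Demo with a stub; a well-tuned prompt should score 0.8 or higher.
rate = refusal_rate(lambda q: "I'm not sure about that.", UNANSWERABLE)
print(rate)  # 1.0 for the always-refusing stub
```

Substring matching on refusal phrases is crude; for production evals, consider a second model call that classifies each answer as "refused" or "attempted".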

Domain-Specific Hallucination Patterns

Different domains have different hallucination risks, so tune the rules to the failure modes your domain actually produces.

The Trade-Off: Refusal vs Helpfulness

Strong anti-hallucination prompts make the model refuse more. Some users will see this as the bot being unhelpful. The trade-off is real and intentional. For high-stakes domains (medical, legal, financial), refusal is the correct default. For casual use cases (creative writing, brainstorming), looser rules may be better.

Tune the strength to your use case. The free system prompt generator lets you toggle the no-hallucination rule on or off depending on what you need.

Build a Hallucination-Resistant Prompt

Toggle the "admit unknowns" and "no hallucination disclaimer" rules. Generate the prompt.

Open System Prompt Generator