
System Prompt vs User Prompt — When to Use Each (With Examples)

Last updated: April 2026 · 5 min read · AI Tools

The difference between system prompts and user prompts is the most fundamental concept in LLM prompting, and many developers get it wrong. Putting the wrong content in the wrong place wastes tokens, weakens behavior, and makes prompts harder to maintain. This guide is the cheat sheet.

Generate a properly structured system prompt now.

Open System Prompt Generator →

The one-line distinction

System prompt = persistent behavior. User prompt = specific task.

Anything that should apply to every conversation goes in the system prompt. Anything that's specific to one request goes in the user prompt.
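In chat-completion style APIs, this split maps directly onto message roles. A minimal sketch in Python (the exact prompt wording is illustrative; the structure is what matters):

```python
# The persistent/per-task split maps onto roles in the messages array.
messages = [
    # System prompt: persistent behavior, sent once per conversation.
    {"role": "system", "content": "You are a senior data analyst. Reply in 2-4 paragraphs."},
    # User prompt: the specific task for this turn.
    {"role": "user", "content": "Explain what a P/E ratio is."},
]

roles = [m["role"] for m in messages]
print(roles)  # ['system', 'user']
```

The same two-role structure works across providers, even when the exact field names differ.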

What belongs in a system prompt

- Role and persona
- Rules that should apply to every request
- Default output format
- Forbidden topics and safety rules
- Brand voice

What belongs in a user prompt

- The specific task
- Input data
- Task parameters
- Follow-up requests
- One-time overrides

The cheat sheet table

| Content | System prompt | User prompt |
| --- | --- | --- |
| Role / persona | Yes | No |
| Always-applied rules | Yes | No |
| Output format defaults | Yes | No |
| Forbidden topics | Yes | No |
| Brand voice | Yes | No |
| Specific task | No | Yes |
| Input data | No | Yes |
| Task parameters | No | Yes |
| Follow-up requests | No | Yes |
| One-time overrides | No | Yes |
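The table can be enforced in code by routing each kind of content into the right role when you assemble the request. A sketch (the helper name and its parameters are assumptions, not a standard API):

```python
def build_messages(persona, rules, task, input_data=None):
    """Route content per the cheat sheet: persistent content (persona,
    always-applied rules) goes in the system role; per-request content
    (task, input data) goes in the user role."""
    system = persona + "\n\nAlways:\n" + "\n".join(f"- {r}" for r in rules)
    user = task if input_data is None else f"{task}\n\n{input_data}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

msgs = build_messages(
    persona="You are a senior data analyst.",
    rules=["Cite sources", "Reply in 2-4 paragraphs", "Use plain English"],
    task="Explain what a P/E ratio is.",
)
print(msgs[0]["role"], "/", msgs[1]["role"])  # system / user
```

Centralizing this in one function also means the rules live in one place instead of being copy-pasted into every request.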

Worked example — wrong way

Here's a common mistake: dumping everything into the user prompt.

System: "You are a helpful AI assistant."

User: "You are a senior data analyst. Always cite sources.
Never give specific financial advice. Always reply in 2-4 paragraphs.
Use plain English.

Now: explain what a P/E ratio is."

Problems:

- The role and rules are re-sent with every turn, wasting tokens
- Instructions in the user role carry less weight than system instructions, so the model follows them less reliably
- Rules placed in a user message are easier for a later user message to override
- The actual task is buried under boilerplate, making the prompt harder to read and maintain

Worked example — right way

System: "You are a senior data analyst with 10 years of experience.

Always:
- Cite sources or note when something is general knowledge
- Reply in 2-4 paragraphs
- Use plain English

Never give specific financial advice — always recommend consulting a CFA."

User: "Explain what a P/E ratio is."

Why this works better:

- Persistent behavior is stated once and applies to every turn
- The user prompt stays short and task-focused
- Safety rules sit in the more authoritative system role
- Rules can be updated in one place without touching per-request prompts

What "more authoritative" means in practice

If the system prompt says "never give legal advice" and the user prompt says "give me legal advice on this contract dispute," well-aligned models will refuse, citing the system instruction. This is why system prompts are the right place for safety rules — users can't override them by asking nicely.

However, this isn't 100% reliable. Adversarial users can sometimes get models to ignore system rules ("system prompt jailbreaks"). For high-stakes apps, you also need backend filtering, not just prompt-level rules.
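A backend filter runs on the model's output regardless of what happened at the prompt level. A minimal sketch of the idea (the blocked-phrase list and policy are illustrative assumptions; real filters are usually more sophisticated than substring matching):

```python
# Illustrative blocklist for a "no specific financial advice" policy.
BLOCKED_PHRASES = ["you should buy", "guaranteed return"]

def violates_policy(model_output: str) -> bool:
    """Backend-side check applied after the model responds,
    independent of any system-prompt rules."""
    text = model_output.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

print(violates_policy("A P/E ratio compares price to earnings."))  # False
print(violates_policy("You should buy this stock now."))           # True
```

If the check fires, the app can replace the response or escalate, rather than trusting the prompt alone.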

Edge cases

Q: What if I want the user to be able to override a system prompt rule?

Then put the override-able rule in the user prompt instead. System = non-negotiable. User = negotiable.

Q: What about multi-turn conversations?

The system prompt stays fixed at the top of the messages array for the entire conversation. Each new turn appends a user message and an assistant response, but the system prompt itself stays unchanged — that's the best practice.
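In code, multi-turn means appending to the array while leaving index 0 alone. A sketch (the assistant replies are placeholders — in a real app they come from the API response):

```python
messages = [{"role": "system", "content": "You are a senior data analyst."}]

def add_turn(messages, user_text, assistant_text):
    """Append one user/assistant exchange; the system message stays at index 0."""
    messages.append({"role": "user", "content": user_text})
    messages.append({"role": "assistant", "content": assistant_text})

add_turn(messages, "Explain what a P/E ratio is.", "A P/E ratio is ...")
add_turn(messages, "How is it used?", "Investors use it to ...")

print(messages[0]["role"])  # still 'system'
print(len(messages))        # 5: one system message + two full turns
```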

Q: Can I have multiple system messages?

Technically yes (in OpenAI), but it's discouraged. Stick with one. If you need to add context later in the conversation, use a user message that says "Note: [context]" instead.

Q: What about the "developer message" in newer models?

OpenAI introduced a third role between system and user called "developer" for some models. It has slightly higher priority than user and slightly lower than system. Most apps don't need it — system + user is enough for 95% of use cases.

Token efficiency

Putting persistent rules in the system prompt instead of the user prompt saves tokens significantly:

| Approach | System prompt tokens | User prompt tokens (per turn) | Total for 100 turns |
| --- | --- | --- | --- |
| Rules in user prompt | 50 | 350 | 35,050 |
| Rules in system prompt | 350 | 50 | 5,350 |
| With prompt caching | 350 (cached) | 50 | 5,350 (most cached) |

For a chatbot that handles 1000 conversations a month with 10 turns each, putting rules in the system prompt instead of the user prompt saves roughly 2.7 million tokens per month: the 300 tokens of rules are sent once per conversation instead of once per turn, saving 2,700 tokens per 10-turn conversation — meaningful at any scale.
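The table's totals follow from simple arithmetic, sketched here with the same token counts (50-token task, 300 tokens of rules):

```python
RULES = 300        # tokens of persistent rules
BASE_SYSTEM = 50   # baseline system prompt without the rules
TASK = 50          # per-turn task tokens
TURNS = 100

# Rules repeated in every user prompt:
rules_in_user = BASE_SYSTEM + TURNS * (RULES + TASK)
# Rules sent once in the system prompt:
rules_in_system = (BASE_SYSTEM + RULES) + TURNS * TASK

print(rules_in_user)    # 35050
print(rules_in_system)  # 5350

# Monthly savings for 1000 conversations of 10 turns each:
per_conversation_saving = 10 * RULES - RULES  # rules sent once, not 10 times
print(1000 * per_conversation_saving)         # 2700000
```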

Generate a system prompt with the rules in the right place.

Open System Prompt Generator →