
System Prompt for Legal AI Assistants

Last updated: April 2026 · 8 min read

Table of Contents

  1. The unauthorized practice problem
  2. The base template
  3. The disclaimer
  4. Jurisdiction handling
  5. Document review use cases
  6. Privilege and confidentiality
  7. Common rejections

Building an AI assistant that touches legal work is high stakes. Unauthorized practice of law is a real legal risk in every US state, and the wrong system prompt can get you into trouble fast. This guide walks through what your prompt must include, what it must never do, and the disclaimer language that keeps the assistant on the right side of the line.

The free system prompt generator has a Legal/Compliance use case template that ships with most of these rules pre-toggled, so you do not have to remember them all from scratch.

The Unauthorized Practice of Law Problem

In every US state, only licensed attorneys can practice law. Practicing law generally means giving legal advice to a specific person about a specific situation. AI assistants that do this without a real attorney in the loop are exposed to UPL claims, and so are the companies that build them. The fines and reputational damage are not theoretical — there have been settled cases.

The fix is not to avoid legal topics entirely. The fix is to clearly distinguish "legal information" (general explanations of law, how things work, what concepts mean) from "legal advice" (telling a specific person what they should do). System prompts can enforce this distinction reliably if you write them carefully.

The Base Legal Assistant System Prompt

"You are a legal information assistant — NOT a lawyer and NOT a substitute for one. You provide general legal information to help users understand legal concepts, processes, and their options. You always include a clear disclaimer at the start of your first response that you are an AI providing information, not legal advice, and that users should consult a licensed attorney for advice about their specific situation.

You explain legal concepts in plain English. You summarize documents and explain what they mean. You walk users through general processes (how to file, what to expect, common deadlines). You cite the source of legal information when possible (statute number, court case, reputable legal resource).

You never give advice about a specific person's situation. You never predict legal outcomes. You never tell a user what they should do. You never claim to be a lawyer. When a user asks 'should I,' redirect them to an attorney."
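Wiring a prompt like this into a chat-style API mostly means pinning it as the first message of every request. A minimal sketch, assuming the common `{"role", "content"}` message convention (the `BASE_PROMPT` constant and `build_messages` helper are illustrative, not from any particular SDK, and the prompt text is abbreviated):

```python
# Illustrative: assemble a chat request with the legal-information system
# prompt always pinned in slot 0, so user turns can never displace it.
BASE_PROMPT = (
    "You are a legal information assistant, NOT a lawyer and NOT a "
    "substitute for one. You provide general legal information only. "
    "Always open your first response with the required disclaimer. "
    "Never advise on a specific person's situation, never predict legal "
    "outcomes, and never claim to be a lawyer."
)

def build_messages(user_message: str) -> list[dict]:
    """Return a messages list with the system prompt always first."""
    return [
        {"role": "system", "content": BASE_PROMPT},
        {"role": "user", "content": user_message},
    ]
```

Keeping the system prompt in application code (rather than letting callers supply it) is what makes the "never" rules non-negotiable per request.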

The Disclaimer Language

Your system prompt should mandate a specific disclaimer pattern. Here is one that works:

"At the start of every conversation, include this disclaimer: 'I'm an AI providing general legal information, not legal advice. For advice specific to your situation, consult a licensed attorney in your jurisdiction. This conversation does not create an attorney-client relationship.'"

Mandating the exact wording prevents the model from softening it over time. Some users will ask the AI to "skip the disclaimer" — the system prompt should explicitly refuse that ("I'm required to include this disclaimer — it's not optional").
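Because models drift, the disclaimer is worth checking in code as well as mandating in the prompt. A sketch of a post-generation guard, using the exact disclaimer string mandated above (the guard function itself is an illustrative pattern, not part of any SDK):

```python
# The exact disclaimer wording mandated by the system prompt.
DISCLAIMER = (
    "I'm an AI providing general legal information, not legal advice. "
    "For advice specific to your situation, consult a licensed attorney "
    "in your jurisdiction. This conversation does not create an "
    "attorney-client relationship."
)

def ensure_disclaimer(reply: str, is_first_turn: bool) -> str:
    """Prepend the mandated disclaimer if the model omitted it on turn one."""
    if is_first_turn and DISCLAIMER not in reply:
        return f"{DISCLAIMER}\n\n{reply}"
    return reply
```

This belt-and-suspenders approach means a "skip the disclaimer" jailbreak fails even if the model complies.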


Jurisdiction Handling

Law varies dramatically by jurisdiction. A system prompt for a US legal AI should handle this explicitly: "Always note when laws vary by state or country. If a user does not specify their jurisdiction, ask before answering. When the location is unknown, make no jurisdiction-specific claims."

For international users, add: "If the user is outside the United States, clearly state that you are providing US-focused information and recommend consulting a local attorney."

Document Review and Summarization

One of the highest-value legal AI use cases is summarizing contracts and legal documents. The system prompt should constrain the summary to factual extraction without recommendations:

"When summarizing a document, extract: parties, key dates, financial terms, obligations of each party, termination conditions, and any unusual clauses. Present findings as a factual list. Do not advise the user whether to sign, accept, or modify the document — that is legal advice and outside your role."

This pattern is safe and useful. Users get the information they need without crossing into UPL territory.
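The extraction fields in that prompt map naturally onto a structured output schema. A sketch using a dataclass; the field names follow the prompt's list, but the class itself is illustrative, and notably has no "recommendation" field by design:

```python
from dataclasses import dataclass, field

@dataclass
class ContractSummary:
    """Factual extraction only: no advice or recommendation fields."""
    parties: list[str] = field(default_factory=list)
    key_dates: list[str] = field(default_factory=list)
    financial_terms: list[str] = field(default_factory=list)
    obligations: dict[str, list[str]] = field(default_factory=dict)
    termination_conditions: list[str] = field(default_factory=list)
    unusual_clauses: list[str] = field(default_factory=list)

    def as_factual_list(self) -> str:
        """Render the summary as a plain bulleted list, no recommendations."""
        lines = []
        for name, values in [
            ("Parties", self.parties),
            ("Key dates", self.key_dates),
            ("Financial terms", self.financial_terms),
            ("Termination conditions", self.termination_conditions),
            ("Unusual clauses", self.unusual_clauses),
        ]:
            for v in values:
                lines.append(f"- {name}: {v}")
        for party, obs in self.obligations.items():
            lines.append(f"- Obligations of {party}: {'; '.join(obs)}")
        return "\n".join(lines)
```

Constraining the schema this way makes "do not advise" structural: there is simply nowhere for a sign/don't-sign verdict to go.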

Privilege and Confidentiality

If your AI is built into a law firm's internal tools, the system prompt should reinforce confidentiality: "Treat all client information shared in this conversation as confidential. Do not reference it in any way that could identify the client. Do not store information across sessions unless explicitly instructed."

If your AI is a public-facing legal information bot, add: "Warn users not to share confidential or attorney-client privileged information in this chat. This conversation is not protected by attorney-client privilege."
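Since the clause differs by deployment, it can be selected at prompt-assembly time. A small sketch using the two clauses above (the mode names are illustrative):

```python
# Illustrative: pick the confidentiality clause for the deployment mode.
CONFIDENTIALITY_CLAUSES = {
    "internal": (
        "Treat all client information shared in this conversation as "
        "confidential. Do not reference it in any way that could identify "
        "the client. Do not store information across sessions unless "
        "explicitly instructed."
    ),
    "public": (
        "Warn users not to share confidential or attorney-client "
        "privileged information in this chat. This conversation is not "
        "protected by attorney-client privilege."
    ),
}

def confidentiality_clause(mode: str) -> str:
    """Return the clause for 'internal' or 'public'; fail loudly otherwise."""
    return CONFIDENTIALITY_CLAUSES[mode]
```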

Common Rejections to Bake Into the Prompt

At a minimum, the prompt should refuse: requests for advice about a specific situation ("should I sign this?"), predictions of legal outcomes ("will I win?"), requests to skip or soften the disclaimer, and any framing that presents the assistant as a lawyer. The free system prompt generator's legal template includes these refusals by default.

Build a Legal AI Prompt That Stays Compliant

Use the Legal/Compliance template — disclaimers and refusals built in.

Open System Prompt Generator