Evaluation & Trust

Reduce Hallucinations by Grounding Your Prompts

Grounding prompts in source material, uncertainty labels, and verification steps can reduce avoidable false claims.

Safety Guide · Beginner
Illustration by Google DeepMind on Unsplash.

Quick Answer

Grounding means making the model work from supplied context or verifiable sources instead of free-associating from general training patterns. It does not eliminate errors, but it makes checking easier.

Use this guide when

You want to lower the risk of hallucinated claims in AI-generated answers.

Working Method

The practical move is to make the model's job visible. Before you ask for the final output, define the important choices you do not want the model to guess.

  1. Provide the source material the answer should use.
  2. Tell the model to distinguish source-backed claims from assumptions.
  3. Ask for direct references to sections or excerpts when appropriate.
  4. Forbid invented citations, statistics, or quotes.
  5. Use a verification pass before relying on the output.
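The steps above can be sketched as a small prompt-builder. This is a minimal illustration, not a definitive template; the wording of the rules and the delimiter labels are assumptions you should adapt to your own material.

```python
def build_grounded_prompt(source_excerpt: str, question: str) -> str:
    """Assemble a prompt that grounds the model in supplied material.

    The instruction list mirrors the working method: use only the
    excerpt, label assumptions, reference sections, forbid invented
    details, and state what is missing.
    """
    return (
        "Answer using only the source excerpt below.\n"
        "Mark any claim not supported by the excerpt as an assumption.\n"
        "Reference the relevant section for each supported claim.\n"
        "Do not invent citations, statistics, or quotes.\n"
        "If the excerpt is insufficient, say what is missing.\n\n"
        f"--- SOURCE EXCERPT ---\n{source_excerpt}\n\n"
        f"--- QUESTION ---\n{question}\n"
    )

# Hypothetical excerpt and question, for illustration only.
prompt = build_grounded_prompt(
    "Employees accrue 1.5 vacation days per month.",
    "How many vacation days do I earn in a year?",
)
```

The verification pass in step 5 stays a separate, human-in-the-loop step; the builder only makes the constraints explicit and repeatable.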

Prompt Example

Too vague

Explain what our policy says.

More useful

Using only the policy excerpt below, answer the employee's question. If the excerpt does not contain enough information, say what is missing. Do not use outside knowledge, invent policy language, or provide legal advice.

Common Pitfalls

  • Asking for a sourced answer without providing sources.
  • Letting the model fill missing policy details.
  • Confusing a grounded summary with a verified final answer.

How to Judge the Answer

A better prompt is only useful if the answer becomes easier to evaluate. Before using the response, check whether it meets the standard you set.

  • The answer points back to supplied material.
  • Missing information is stated plainly.
  • No invented quotes, links, or numbers appear.
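The third check can be partially automated with a cheap string-matching pass: flag quoted text and numbers in the answer that never appear in the supplied source. This is a rough sketch using substring matching, not a substitute for human review; it will miss paraphrased inventions and may pass numbers that coincidentally appear in the source.

```python
import re

def verify_answer(answer: str, source: str) -> list[str]:
    """Flag quoted phrases and numbers in the answer that do not
    appear verbatim in the source material (cheap substring check)."""
    problems = []
    for quote in re.findall(r'"([^"]+)"', answer):
        if quote not in source:
            problems.append(f'quote not in source: "{quote}"')
    for number in re.findall(r"\b\d+(?:\.\d+)?\b", answer):
        if number not in source:
            problems.append(f"number not in source: {number}")
    return problems

# Hypothetical source and answer, for illustration only.
problems = verify_answer(
    answer='The policy says "2 vacation days per month".',
    source="Employees accrue 1.5 vacation days per month.",
)
```

Anything this pass flags goes back to a human reviewer; an empty list only means the cheap checks passed, not that the answer is correct.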

FAQ

Does grounding stop hallucinations?

No. It reduces some risks and makes errors easier to catch, but important output still needs review.

What if the source material is incomplete?

Ask the model to say what is missing instead of guessing.
