Question Frameworks

Use the Context Ladder to Add the Right Amount of Detail

The context ladder helps you decide what background to include in a prompt, from the bare task to the full operating environment.

Framework · Intermediate
Workshop wall covered with structured sticky notes.
Photo by Hugo Rocha on Unsplash.

Quick Answer

The context ladder is a sequence: task, audience, source material, constraints, examples, edge cases, and evaluation criteria. Climb only as high as the task needs.

Use this guide when

You want a systematic way to add context without overloading the prompt.

Working Method

The practical move is to make the model's job visible. Before you ask for the final output, define the important choices you do not want the model to guess.

  1. Start at the task level: what should the model do?
  2. Add audience and purpose if the answer will be used by someone else.
  3. Add source material when the response must be grounded in specific text or data.
  4. Add constraints when feasibility or risk matters.
  5. Add edge cases and evaluation criteria when the prompt will be reused.
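The steps above can be sketched as a small prompt builder. This is a minimal illustration, not a standard API: the rung names, the `build_prompt` helper, and the label format are all assumptions made for the example. The point it shows is structural: rungs are assembled in ladder order, and empty rungs are skipped, so the prompt only climbs as high as the task needs.

```python
# Illustrative sketch of a context-ladder prompt builder.
# The rung names and the "Label: value" format are assumptions for this example.

LADDER = [
    "task", "audience", "source_material",
    "constraints", "examples", "edge_cases", "evaluation_criteria",
]

def build_prompt(**rungs):
    """Assemble a prompt from ladder rungs, in ladder order, skipping empty ones."""
    parts = []
    for rung in LADDER:
        value = rungs.get(rung)
        if value:
            label = rung.replace("_", " ").title()
            parts.append(f"{label}: {value}")
    return "\n".join(parts)

# A low rung on the ladder: task plus audience plus one constraint.
prompt = build_prompt(
    task="Summarize these workshop notes.",
    audience="A product lead deciding next month's roadmap.",
    constraints="Separate evidence from interpretation.",
)
print(prompt)
```

Because unused rungs simply disappear, the same builder serves both a bare one-line task and a fully specified reusable prompt.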

Prompt Example

Too vague

Summarize our notes.

More useful

Summarize these workshop notes for a product lead deciding next month's roadmap. Focus on repeated customer pain points, separate evidence from interpretation, and flag edge cases that only one person mentioned. Use a table followed by a short recommendation.

Common Pitfalls

  • Jumping to examples before the task is defined.
  • Adding every background fact at the same priority.
  • Forgetting to include source material when the answer depends on it.

How to Judge the Answer

A better prompt is only useful if the answer becomes easier to evaluate. Before using the response, check whether it meets the standard you set.

  • The prompt includes enough context for the current risk level.
  • The answer does not invent missing background.
  • The context can be trimmed without changing the result.
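The last check, that context can be trimmed without changing the result, can be run as a small ablation: drop one labeled context block at a time and compare the model's answers across variants. The sketch below only generates the trimmed prompt variants; running them against a model and comparing outputs is left to you, and the block labels are illustrative.

```python
# Illustrative trim test: produce one prompt variant per removed context block.
# Blocks whose removal does not change the model's answer are candidates for trimming.

def trim_variants(blocks):
    """Yield (removed_label, prompt) pairs, each with one context block dropped."""
    for label in blocks:
        kept = [text for l, text in blocks.items() if l != label]
        yield label, "\n\n".join(kept)

blocks = {
    "task": "Summarize the notes.",
    "audience": "For a product lead.",
    "constraints": "Use a table.",
}
for removed, prompt in trim_variants(blocks):
    print(f"without {removed!r}:")
    print(prompt)
    print("---")
```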

FAQ

What if I have too much source material?

Label it clearly and ask the model to extract only what supports the task. For important work, verify the extraction.
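One way to label source material clearly, sketched below, is to wrap it in explicit delimiters so the model can tell the instruction from the data it should extract from. The `wrap_source` helper and the `<source>` tag name are assumptions chosen for this example; any unambiguous markers serve the same purpose.

```python
# Illustrative helper: separate the instruction from labeled source material.
# The delimiter names are arbitrary; what matters is that they are unambiguous.

def wrap_source(task, source):
    """Prefix the task, then fence the source text between explicit markers."""
    return (
        f"{task}\n\n"
        "Use only the material between the markers below.\n"
        "<source>\n"
        f"{source}\n"
        "</source>"
    )

print(wrap_source(
    "List the repeated customer pain points.",
    "Note 1: checkout is slow. Note 2: checkout is slow on mobile.",
))
```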

Do all prompts need evaluation criteria?

No. Use them when you need repeatability, comparison, or a decision-ready answer.

Sources

Selected references that informed this guide: