Question Frameworks

The Five-Part AI Question Framework

A reusable framework for turning any AI question into a clear brief with goal, context, constraints, output, and review criteria.

Image: a chalkboard flowchart used to make a decision. Photo by Madeline Liu on Unsplash.

Quick Answer

The five-part framework is goal, context, constraints, output, and review criteria. Together, these parts make the question clearer and make the answer easier to evaluate.

Use this guide when

The reader wants a general-purpose structure for stronger prompts.

Working Method

The practical move is to make the model's job visible. Before you ask for the final output, define the important choices you do not want the model to guess.

  1. Goal: state what the answer should help you accomplish.
  2. Context: add the facts, audience, source material, or background the model needs.
  3. Constraints: define limits such as length, tone, tools, budget, risk, or excluded content.
  4. Output: request a shape that supports your next step.
  5. Review criteria: tell the model how the answer will be judged.
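The five parts above can be sketched as a small, reusable template. This is an illustrative sketch, not a standard: the field names, labels, and the `build_brief` helper are all hypothetical choices made for this example.

```python
# A minimal sketch of the five-part brief as a reusable template.
# Field names and output format are illustrative, not a standard.

FIVE_PARTS = ("goal", "context", "constraints", "output", "review_criteria")

def build_brief(**parts: str) -> str:
    """Assemble a prompt brief, failing loudly if any part is missing."""
    missing = [p for p in FIVE_PARTS if p not in parts]
    if missing:
        raise ValueError(f"brief is missing: {', '.join(missing)}")
    # Emit the parts in a fixed order so every brief reads the same way.
    return "\n".join(
        f"{p.replace('_', ' ').title()}: {parts[p]}" for p in FIVE_PARTS
    )

brief = build_brief(
    goal="improve new-user activation for a project management app",
    context="users invite a team but do not create the first project",
    constraints="no engineering changes this month",
    output="ranked list of five onboarding experiments",
    review_criteria="low effort, measurable impact, clarity for non-technical users",
)
print(brief)
```

The point of the loud failure is the same as the framework itself: a missing part should stop you before the model answers, not after.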

Practical Application

Use the five-part framework as a working pattern, not a one-time trick. The practical value comes from applying it before the model answers, while you can still shape the task, the context, and the review standard.

For framework-based prompting, the aim is to make the shape of the question reusable. A good framework should help you brief the model, compare answers, and repeat the same kind of task later without rebuilding the prompt from scratch. In this guide, the core moves are stating the goal the answer should serve, adding the context the model needs (facts, audience, source material, background), and defining constraints such as length, tone, tools, budget, risk, or excluded content. Those details keep the prompt close to the real work instead of asking the model to guess what a useful answer should look like.

This matters most when the output will be reused, shared, or used to make a decision. A prompt that works once can still fail later if the audience changes, the source material changes, or the expected format is unclear. Treat the first useful answer as a draft of your process, then refine the prompt until another person could repeat it and understand why it works.

Example Workflow

A useful three-pass workflow is to draft the brief, ask the model what is still ambiguous, and then request the final answer only after the missing context is filled in. This keeps the conversation from racing toward a polished but under-specified result.

  1. Write the first version of the request in plain language, even if it feels rough.
  2. Add the missing context from this guide: goal, audience, constraints, examples, sources, or review criteria.
  3. Ask for an output that is easy to inspect, then revise the prompt based on what the answer missed.
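The three-pass workflow can be sketched as a short loop. Everything here is a sketch under stated assumptions: `ask_model` is a hypothetical stand-in for whatever chat API or interface you actually use, and in practice a person reviews the gaps before pass three rather than appending them verbatim.

```python
# Sketch of the three-pass workflow: draft, surface gaps, then answer.
# `ask_model` is a hypothetical placeholder, not a real API.

def ask_model(prompt: str) -> str:
    """Placeholder for a real model call."""
    return "..."

def three_pass(draft_brief: str) -> str:
    # Pass 1: share the rough brief and ask only for gaps, not an answer.
    gaps = ask_model(
        f"{draft_brief}\n\nBefore answering, list what is still ambiguous "
        "or missing from this brief. Do not answer the task yet."
    )
    # Pass 2: fill in the gaps. Here they are appended for brevity;
    # in real use you would rewrite the brief yourself.
    revised = f"{draft_brief}\n\nClarifications:\n{gaps}"
    # Pass 3: request the final answer against the completed brief.
    return ask_model(f"{revised}\n\nNow answer the task.")
```

The structure matters more than the code: the model is asked to expose missing context before it is allowed to produce a polished answer.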

For question frameworks, that last step is where much of the learning happens. If the model gives a useful but incomplete answer, do not throw away the whole conversation. Ask a focused follow-up that names the gap, such as a missing assumption, unsupported claim, weak example, or format problem.

Deeper Review

For question frameworks, the warning sign is a response that sounds organized but does not reflect the real decision, audience, or constraint. If the answer is tidy but unhelpful, check whether the prompt named the purpose clearly enough and whether the review criteria were visible. Common failure patterns for this topic include skipping review criteria and then accepting a nice-sounding answer, treating constraints as optional preferences, and putting the desired output before the model understands the goal. These are not just writing problems; they are signals that the model may be optimizing for fluency instead of usefulness.

Before you rely on the answer, compare it with the actual situation you are working in. Check whether the response respects the constraints you gave, whether it says what it is assuming, and whether the final format would help you act. If the answer affects money, health, legal obligations, safety, hiring, privacy, or public claims, treat the output as a starting point for verification rather than a final decision.

Prompt Example

Too vague

Give me ideas for improving onboarding.

More useful

Goal: improve new-user activation for a project management app. Context: users often invite a team but do not create the first project. Constraints: assume no engineering changes this month. Output: ranked list of five onboarding experiments. Review criteria: prioritize low effort, measurable impact, and clarity for non-technical users.

Specific Scenario

Use the five-part framework when a question is broad enough to drift. A people-ops lead asking for onboarding ideas might get generic buddy systems and checklist advice. The framework turns that loose request into a short brief the model can actually answer against.

Goal: reduce confusion in the first 10 days for remote hires. Context: 35-person software company, no formal HR team, managers own onboarding. Constraints: no new paid tools, keep manager effort under two hours per hire. Output: 30-day onboarding plan with owners. Review criteria: low setup effort, clear first-week wins, and risks to watch.

The framework is useful because it makes tradeoffs visible. If the AI recommends a complex HR platform or a full training department, the answer has missed the constraints even if it sounds polished.

Mini Checklist

  • The goal says what outcome should improve.
  • The context explains the real operating environment.
  • The constraints rule out unrealistic recommendations.
  • The output format tells the model what shape to return.
  • The review criteria define what a good answer must satisfy.

Common Pitfalls

  • Skipping review criteria and then accepting a nice-sounding answer.
  • Treating constraints as optional preferences.
  • Putting the desired output before the model understands the goal.

How to Judge the Answer

A better prompt is only useful if the answer becomes easier to evaluate. Before using the response, check whether it meets the standard you set.

  • The answer can be judged against the stated criteria.
  • Recommendations reflect real constraints.
  • Another person could reuse the prompt with a different context.

FAQ

Is the framework only for work tasks?

No. It works for learning, personal planning, writing, research, and creative projects too.

What is the most important part?

The goal usually matters most. Without it, the context and constraints have nothing to prioritize against.

Sources

Selected references that informed this guide: