Quick Answer
A rubric turns a vague sense of quality into explicit, checkable criteria. Instead of asking whether an AI answer feels good, you score or review it against the specific qualities that matter for the task.
Use this guide when
You want a repeatable method for judging AI output consistently, whether you are reviewing a single draft or comparing several.
Working Method
The practical move is to make the standard visible. Before you ask for the final output, define the criteria you will judge it against, so neither you nor the model has to guess what good looks like; a short sketch after the list shows one way to write them down.
- Choose criteria that match the task: accuracy, relevance, completeness, clarity, tone, feasibility, and risk.
- Define what good and poor performance look like for each criterion.
- Ask the model to self-check, but do not rely only on self-checking.
- When comparing multiple drafts or models, apply the same rubric to each.
- Record recurring failures so the prompt or workflow can be improved.
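To make the list concrete, a rubric can be written down as a small data structure before any judging starts. The sketch below is a minimal illustration assuming a three-level scale; the criterion names, descriptors, and example evidence are assumptions for this example, not a fixed or recommended set.

```python
from dataclasses import dataclass

# A minimal sketch of a rubric written down as data, assuming a simple
# three-level scale. Criterion names and descriptors are illustrative only.

LEVELS = ("pass", "revise", "fail")

@dataclass
class Criterion:
    name: str
    good: str  # what strong performance looks like
    poor: str  # what weak performance looks like

@dataclass
class Judgement:
    criterion: str
    level: str     # one of LEVELS
    evidence: str  # quote or pointer into the draft being judged

RUBRIC = [
    Criterion("factual support", "claims cite a source or supplied data", "claims are unsupported or invented"),
    Criterion("relevance", "directly answers the question asked", "drifts into adjacent topics"),
    Criterion("clarity", "readable by a non-technical audience", "jargon-heavy or ambiguous"),
]

def record(criterion: str, level: str, evidence: str) -> Judgement:
    """Record one judgement, insisting on evidence so scores stay grounded."""
    if level not in LEVELS:
        raise ValueError(f"level must be one of {LEVELS}")
    if not evidence.strip():
        raise ValueError("every score needs cited evidence")
    return Judgement(criterion, level, evidence)

# Judging one draft; the same calls can be repeated for other drafts or models.
notes = [
    record("factual support", "revise", "the 40% figure in paragraph two cites no source"),
    record("clarity", "pass", "plain-language summary in the opening sentence"),
]
```

Writing the rubric down this way also gives you a place to record recurring failures over time, which is what makes the last step in the list possible.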
Prompt Example
Too vague
Which answer is better?
More useful
Evaluate these two draft answers using a rubric with five criteria: factual support, relevance to the question, clarity for a non-technical reader, actionability, and risk of overclaiming. Give brief evidence for each score and recommend what to revise.
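One way to keep the criteria identical across every comparison is to generate the prompt from a fixed list, roughly as sketched below. The function name and criteria wording are assumptions for illustration; substitute whatever criteria your task actually needs.

```python
# A rough sketch of turning the prompt above into a reusable template so every
# comparison uses identical criteria wording. Names here are illustrative.

CRITERIA = [
    "factual support",
    "relevance to the question",
    "clarity for a non-technical reader",
    "actionability",
    "risk of overclaiming",
]

def comparison_prompt(draft_a: str, draft_b: str, criteria=CRITERIA) -> str:
    """Build the evaluation prompt; send it to whichever model you are using."""
    criterion_lines = "\n".join(f"- {c}" for c in criteria)
    return (
        "Evaluate these two draft answers using a rubric with these criteria:\n"
        f"{criterion_lines}\n"
        "Give brief evidence for each score and recommend what to revise.\n\n"
        f"Draft A:\n{draft_a}\n\nDraft B:\n{draft_b}"
    )

print(comparison_prompt("First draft text...", "Second draft text..."))
```

Because the criteria live in one list, changing them deliberately is easy, and changing them accidentally mid-comparison is harder.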
Common Pitfalls
- Using vague criteria such as "good" or "professional."
- Letting the model grade itself without evidence.
- Changing criteria mid-comparison.
How to Judge the Answer
A better prompt is only useful if the answer becomes easier to evaluate. Before you use the response, check whether it meets the standard you set; the sketch after this checklist shows one way to make that check explicit.
- Criteria are visible before judging.
- Scores or notes cite specific evidence.
- The rubric leads to concrete revisions.
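Those three checks can be made into an explicit gate, so a review that fails any of them is sent back before the answer is used. The Review structure and its field names below are assumptions made for this sketch, not part of any standard.

```python
from dataclasses import dataclass, field

# A small sketch of the checklist above as an explicit gate. The Review
# structure and its field names are assumptions for illustration.

@dataclass
class Review:
    criteria: list[str]       # criteria stated before judging
    evidence: dict[str, str]  # criterion -> quoted evidence from the draft
    revisions: list[str] = field(default_factory=list)  # concrete changes requested

def review_is_usable(review: Review) -> bool:
    """True only if the review meets the standard set above."""
    criteria_visible = bool(review.criteria)
    evidence_cited = all(review.evidence.get(c, "").strip() for c in review.criteria)
    leads_to_revisions = bool(review.revisions)
    return criteria_visible and evidence_cited and leads_to_revisions

# A review with criteria and evidence but no requested revisions fails the gate.
print(review_is_usable(Review(["clarity"], {"clarity": "opening sentence is plain"})))  # False
```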
FAQ
Do I need numeric scores?
Not always. A simple pass / revise / fail rubric can be enough for everyday work.
Can the AI create the rubric?
Yes, but you should review whether the criteria match the actual risk and purpose.
Sources
Selected references that informed this guide:
- Overview of prompting strategies (Google Cloud)
- AI Risk Management Framework (NIST)