Situational Judgement Tests (SJTs) have become a go-to tool for recruiters and CEOs who want more than just a polished resume—they want to know how a person actually thinks in real-life work scenarios.
But after the test is taken, how do you score it?
What’s considered a “good” answer? And how do you compare candidates fairly?
This guide will walk you through exactly how situational judgement tests are scored, the scoring models used, examples of scoring systems, and how to make confident hiring decisions based on the results.
Want the full picture on how SJTs work first? Start here:
👉 Situational Judgement: Complete Guide for CEOs & Recruiters
What Makes Scoring Situational Judgement Tests Different?
Unlike technical or cognitive tests, SJTs don’t have just one “correct” answer.
They measure soft skills like empathy, communication, and decision-making—so scoring has to reflect that complexity.
The goal is to evaluate how closely a candidate’s judgment aligns with your organization’s values and expected behaviors.
In simple terms: You’re not just scoring what they chose—you’re scoring why and how well they responded.
Scoring Models Used in SJTs
There are two main scoring methods recruiters typically use:
1. Expert Benchmarking (Most Common)
Each response option is pre-scored based on how experts or top-performing employees would respond.
| Response Option | Score Assigned |
| --- | --- |
| Ideal/Most Effective | 4 points |
| Acceptable | 3 points |
| Less Effective | 2 points |
| Ineffective/Inappropriate | 1 point |
You compare the candidate’s response to the expert key and assign the appropriate score.
This method is used for most multiple-choice SJTs and is highly scalable.
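If you administer SJTs through a spreadsheet or a small script rather than a testing platform, expert benchmarking is easy to automate. Here's a minimal sketch; the option labels and point values are illustrative placeholders that mirror the table above, not a prescribed key:

```python
# Minimal sketch of expert-benchmark scoring (illustrative labels and values).
# Each response option is pre-scored by experts or top performers;
# the candidate's choice is simply looked up against that key.

EXPERT_KEY = {
    "Option A": 4,  # Ideal / most effective
    "Option B": 3,  # Acceptable
    "Option C": 2,  # Less effective
    "Option D": 1,  # Ineffective / inappropriate
}

def score_response(candidate_choice: str) -> int:
    """Return the expert-assigned points for the option the candidate chose."""
    return EXPERT_KEY[candidate_choice]

# Example: a candidate who picks the ideal option earns 4 points.
print(score_response("Option A"))  # 4
```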
2. Ranked or Comparative Scoring (Best-to-Worst Method)
In these questions, candidates are asked to rank responses from most to least effective.
Their ranking is then compared to the expert key, and points are awarded based on closeness of match.
| Agreement with Expert Ranking | Points Awarded |
| --- | --- |
| Exact match | Full points |
| Slight mismatch (1 off) | Partial points |
| Major mismatch | Low/zero points |
This approach gives insight into how well a candidate can prioritize decisions, not just choose a decent one.
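"Closeness of match" can be calculated in several ways. One simple, common approach is to award points per option based on how far the candidate placed it from its expert-assigned position. The sketch below assumes four options and illustrative point values (full credit for an exact match, partial credit for one position off, nothing otherwise):

```python
# Minimal sketch of ranked (best-to-worst) scoring.
# The options, expert ranking, and point values are illustrative.

EXPERT_RANKING = ["A", "B", "C", "D"]  # most effective to least effective

def score_ranking(candidate_ranking: list[str]) -> int:
    """Award points per option based on distance from the expert's position."""
    total = 0
    for option in EXPERT_RANKING:
        distance = abs(EXPERT_RANKING.index(option) - candidate_ranking.index(option))
        if distance == 0:
            total += 2  # exact match: full points
        elif distance == 1:
            total += 1  # one position off: partial points
        # two or more positions off: zero points
    return total

print(score_ranking(["A", "B", "C", "D"]))  # 8 (perfect match)
print(score_ranking(["B", "A", "C", "D"]))  # 6 (two options one position off)
```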
Example: Scoring a Situational Judgement Question
Scenario:
A colleague consistently turns in work late, affecting the team’s deadlines. What should you do?
| Option | Expert Score | Candidate's Choice |
| --- | --- | --- |
| Speak privately and ask if they need support | 4 (Ideal) | ✅ Selected |
| Inform your manager immediately | 2 | |
| Publicly call them out in a meeting | 1 | |
| Ignore the issue and hope it improves | 1 | |
✅ The candidate selected the highest-scoring response.
Result: 4 out of 4 points.
Weighted Scoring Based on Competency
Some SJTs are built around multiple soft skills—like leadership, teamwork, or ethics. In those cases, you can apply weighted scoring to emphasize skills that matter most for the role.
| Skill/Competency | Weight (%) |
| --- | --- |
| Decision-Making | 30% |
| Communication | 25% |
| Conflict Resolution | 25% |
| Integrity | 20% |
You then multiply each competency's score by its weight and sum the weighted scores to get the final result.
This works well when you’re building custom SJTs or assessing for leadership roles.
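To make the arithmetic concrete, here's a minimal sketch of weighted scoring using the example weights from the table above; the per-competency scores (on a 0–100 scale) are illustrative sample numbers:

```python
# Minimal sketch of competency-weighted scoring.
# Weights mirror the example table above; the sample scores are illustrative.

WEIGHTS = {
    "decision_making": 0.30,
    "communication": 0.25,
    "conflict_resolution": 0.25,
    "integrity": 0.20,
}

def weighted_final_score(competency_scores: dict[str, float]) -> float:
    """Combine per-competency scores (0-100) into one weighted final score."""
    return sum(WEIGHTS[name] * score for name, score in competency_scores.items())

# Example: strong decision-making, weaker conflict resolution.
print(weighted_final_score({
    "decision_making": 90,
    "communication": 80,
    "conflict_resolution": 60,
    "integrity": 85,
}))  # 79.0
```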
Score Ranges: What’s a “Good” Score?
Here’s a general interpretation scale:
| Final Score % | Rating | Interpretation |
| --- | --- | --- |
| 85–100% | Excellent | Strong judgment and alignment with expectations |
| 70–84% | Good | Solid decision-making, room for growth |
| 50–69% | Average | Acceptable, may need guidance in real scenarios |
| Below 50% | Below Expectations | Judgment may not align with role requirements |
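If you want to map raw points onto these bands automatically, a small helper like the one below does the job; the cut-offs mirror the table above, and the example numbers are illustrative:

```python
# Minimal sketch: convert raw points to a percentage and map it to the
# interpretation bands above (illustrative example values).

def interpret_score(points_earned: float, points_possible: float) -> tuple[float, str]:
    percent = 100 * points_earned / points_possible
    if percent >= 85:
        rating = "Excellent"
    elif percent >= 70:
        rating = "Good"
    elif percent >= 50:
        rating = "Average"
    else:
        rating = "Below Expectations"
    return percent, rating

print(interpret_score(34, 40))  # (85.0, 'Excellent')
```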
Get more scoring insights here:
👉 What Is a Good Score on the Situational Judgement Test
Pro Tips for Fair & Effective SJT Scoring
| Tip | Why It Matters |
| --- | --- |
| Use expert input during setup | Ensures scoring reflects real-world expectations |
| Keep scenarios job-relevant | Makes scores more predictive of on-the-job behavior |
| Randomize answer orders | Reduces pattern recognition and "gaming" the test |
| Test the test (pilot run) | Helps you fine-tune scoring before going live |
| Combine SJT with interviews | Use SJT scores to guide deeper behavioral questions |
Want to create or improve your own questions? Start here:
👉 350 Situational Judgement Test Sample Questions (with Answers)
How Long Should the Test Be?
A well-scored test also depends on appropriate length.
| Test Duration | Ideal Use Case |
| --- | --- |
| 10–15 minutes | Entry-level screening or volume hiring |
| 20–30 minutes | Mid-level assessments |
| 30–45+ minutes | Leadership, ethics, and advanced roles |
More on timing here:
👉 How Long Is the Situational Judgement Test
Scoring the Casper Situational Judgement Test
The Casper SJT (used in healthcare and education hiring) doesn’t use multiple choice—it uses typed, open-ended responses. These are scored by trained raters based on structured rubrics.
If you’re considering long-form SJTs, Casper-style tests may be the model to explore.
👉 What Is the Casper Situational Judgement Test
Bonus: Benchmark Your Scores Like the UK’s DWP
The UK’s Department for Work and Pensions (DWP) developed a scoring model based on civil service competencies.
You can apply their method by:
- Creating consistent rating rubrics
- Using trained scorers
- Linking SJT results to core job frameworks
Explore that model here:
👉 UK’s DWP Situational Judgement Test
Final Thoughts
Scoring situational judgement tests isn’t just about points—it’s about predicting how people will behave when it counts.
When done right, SJTs give you data-backed insight into soft skills that are often the hardest to measure, yet the most critical to success.
To recap:
- Use expert benchmarking or ranking methods
- Apply weighted scores based on role needs
- Interpret results using standardized ranges
- Combine SJTs with interviews for a complete hiring picture
Explore more tools to enhance your hiring:
- What Do Situational Judgement Tests Measure
- Situational Judgement: Complete Guide for CEOs & Recruiters
Because when it comes to hiring great people—it’s not just what they know, but how they think. ✅