Methodology
Traditional pre-post surveys have a built-in flaw: participants don't rate themselves against the same standard before and after a program. Retrospective surveys fix this. Here's how, and why it matters for your evaluation data.
The standard approach to measuring program impact looks reasonable: survey participants before the program, survey them again after, compare the scores. But there's a problem researchers have documented since 1979.
Before a leadership program, a participant might rate themselves 4 out of 5 on "I give effective feedback." They believe they're pretty good at it. After the program, they now understand what effective feedback actually looks like. Using this new, more informed standard, they rate themselves 3 out of 5.
The data shows a decrease. The program looks like it made things worse. But the participant genuinely improved. What changed was their frame of reference, not their ability.
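The arithmetic of this effect is easy to sketch. The following toy model uses invented numbers (not drawn from any study) to show how a genuine skill gain can still produce a negative pre-post change once the participant's internal standard shifts:

```python
# Toy model of response-shift bias; all numbers are invented for
# illustration, not drawn from any study.
def self_rating(skill, standard, scale_max=5):
    """Rate yourself out of 5 by judging your skill against your own standard."""
    return round(min(skill / standard, 1.0) * scale_max, 1)

# Before the program: modest skill, but a naive idea of what "good" means.
pre_score = self_rating(skill=4.0, standard=5.0)    # 4.0 out of 5

# After: skill genuinely improved, but the standard rose even more.
post_score = self_rating(skill=5.5, standard=8.0)   # 3.4 out of 5

naive_change = round(post_score - pre_score, 1)     # -0.6: negative, despite real gains

# Retrospective design: both ratings use the *same* (post-program) standard.
then_score = self_rating(skill=4.0, standard=8.0)   # 2.5
retro_shift = round(post_score - then_score, 1)     # +0.9: positive, reflecting real change
```

The naive before-after comparison mixes two different standards and comes out negative; holding the standard fixed, as the retrospective design does, recovers the true direction of change.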
Identified by Howard et al. in 1979, response-shift bias occurs when a learning experience changes how participants interpret the rating scale itself. They recalibrate what "good" means, which contaminates the before-after comparison.
The result: traditional pre-post designs systematically underestimate program impact. Sometimes they show no effect, or even a negative effect, for programs that genuinely worked.
A retrospective survey (also called "post-then-pre" or "then-post") collects both ratings in a single sitting, after the program. For each question, participants rate where they are now and where they were before.
Because both ratings happen at the same time, participants use the same frame of reference for both. Their understanding of what "effective feedback" means is consistent across both scores. The shift you see in the data reflects actual change, not a recalibration of the scale.
Traditional pre-post: two data collection points, attrition between rounds, an inconsistent frame of reference.
Retrospective: one data collection point, no attrition, a consistent frame of reference.
In ImpactCheck, retrospective surveys present each scale question with two columns. Participants rate where they were before the program and where they are now, side by side.
Take an item like "I am confident applying what I learned to my day-to-day work". A +1.6 average shift between the "before" and "now" ratings tells a client something satisfaction scores never can: participants believe their confidence measurably increased as a result of the program.
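A shift like that is simply the difference between the average "now" and average "before" ratings. A minimal sketch with invented numbers (five hypothetical participants, chosen so the shift comes out to +1.6):

```python
# Hypothetical retrospective ratings on a 1-5 scale for the confidence
# item; invented to illustrate the calculation, not real program data.
before = [2, 3, 3, 2, 3]   # "where I was before the program"
now = [4, 4, 5, 4, 4]      # "where I am now"

mean_before = sum(before) / len(before)   # 2.6
mean_now = sum(now) / len(now)            # 4.2
shift = round(mean_now - mean_before, 1)  # +1.6
```

Because both columns come from the same sitting, this difference can be read as perceived change rather than as an artifact of two different rating standards.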
Leadership development programs are especially vulnerable to response-shift bias. The whole point of a program is to change how people think about leadership. If it works, participants' standards change. Traditional pre-post surveys penalise programs for doing their job well.
Rohs (1999) found that traditional pre-post designs underestimated leadership program impact by 7-12% compared to retrospective measures. That's the difference between a program that looks mediocre and one that shows clear results.
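To make that underestimate concrete, a back-of-the-envelope calculation (the +1.6 starting point is invented for illustration):

```python
# If the true average shift were +1.6 points, a traditional pre-post
# design underestimating impact by 7-12% (per Rohs, 1999) would report:
true_shift = 1.6
low = round(true_shift * (1 - 0.12), 2)   # with a 12% underestimate
high = round(true_shift * (1 - 0.07), 2)  # with a 7% underestimate
# The reported shift lands around low..high instead of 1.6.
```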
The retrospective approach maps to Kirkpatrick's evaluation model at Level 2 (learning) and Level 3 (behaviour). It captures whether participants gained new skills and whether they're applying them, using a consistent frame of reference that traditional designs can't provide.
Retrospective mode isn't always the right choice. It depends on what you're measuring.
Use retrospective mode when you want to know: did something change? This maps to Kirkpatrick Level 2 (learning) and Level 3 (behaviour).
Use a standard feedback survey when you want to know: how was the experience? This maps to Kirkpatrick Level 1 (reaction).
ImpactCheck supports both. Retrospective mode is a toggle you turn on per survey. Leave it off and you get a straightforward post-program or post-coaching feedback survey.
No method is perfect. The retrospective approach has known limitations: it relies on participants' memory of their earlier state, and collecting both ratings in one sitting can invite impression management, where participants rate their "before" low to demonstrate improvement. Being upfront about them makes your evaluation more credible.
These limitations are acknowledged in the research literature (Hill & Betz, 2005). The consensus is that retrospective surveys produce more accurate self-report data than traditional pre-post designs, while being simpler to administer. For most program evaluation contexts, the trade-off is worth it.
ImpactCheck has retrospective mode built in. Toggle it on, customise the "before" and "after" labels, and your survey collects both ratings automatically.
Create your free account. Free to use, no credit card required.