NZATD 2025: What to look for when testing your learning activities

In my previous posts (on NN/G and SAM2) based on my NZATD Tahu Ignite workshop, I looked at what UX/UI wisdom has to offer learning designers when it comes to testing e-learning during the design process, and at what the SAM2 process says about which types of testing suit each stage of that process. This is my third post derived from the workshop.

The third part of the workshop looked at Will Thalheimer’s LTEM model, which tells us what to look for when we are evaluating our work, both before and after rollout.

Will Thalheimer, PhD, is a learning expert and author of Performance-Focused Learner Surveys and LTEM (the Learning-Transfer Evaluation Model). He promotes evaluation methods that go beyond smile sheets, focusing on whether learning actually changes on-the-job behaviour.

In Performance-Focused Learner Surveys, Thalheimer focuses on evaluation; as part of this, he discusses how to design surveys that truly assess learning impact.

Thalheimer provides tested question formats for gauging learners’ confidence, understanding, and readiness to apply new skills, along with detailed guidance on gathering actionable feedback. He advises keeping questions specific to observable behaviours, avoiding vague satisfaction queries, and prompting learners to share examples of how they will apply their learning. Thalheimer also stresses analysing responses for patterns, triangulating survey data with other measures, and feeding insights back into design.

Thalheimer’s white paper on LTEM is available if you share your email at: https://www.worklearning.com/ltem/

What does Thalheimer recommend?

  • Analyse early pilot results to make design changes before scale-up.
  • Link pilot testing metrics directly to intended performance outcomes.
  • Use multiple data sources (observation, assessments, surveys, performance data).
  • Keep measures valid (test what you intend) and reliable (consistent results).

What do you need to succeed?

  • A clear definition of performance objectives before the pilot starts.
  • Tools to capture behaviour change and workplace application.
  • Willingness to act on data, even if that means major redesign.

Applying piloting surveys in learning design

‘Just because learners COMPLETE A LEARNING EVENT doesn’t mean they learned… course completion is an inadequate way of evaluating learning.’

Before piloting, establish what the learning intervention is intended to achieve, and then evaluate the pilot on that basis, rather than on learner engagement, perceptions, or even knowledge. How does the intervention affect learners’ decision-making (scenarios) and task performance (observations in the workplace), and what effects does that have on work results?

So what do I think of all this?

Thalheimer’s model has a huge amount packed into it. I’ll be reading up on this for quite some time, I suspect. Even at a high level, though, it has a lot to offer.

I’m really enthused by the way he separates task performance, decision-making, and knowledge. That split has some obvious implications for assessment design that I like.

I do think his level 1 might need to be split: I see a huge difference between someone failing to attend or complete for accessibility reasons and someone failing for motivational reasons. It’s possible he’s already taking this into account, though.

The other potential gap I’m seeing is to do with ways of identifying the barriers and success factors for transfer of learning into the workplace. Again, he may have these baked in somewhere I have yet to see.

Lots and lots to dig into.