NZATD 2025: How to test e-learning before launch

The subheading for this post, if it had one, would be ‘Why Jakob Nielsen should be standard reading for learning designers’.
I first discovered Nielsen in around 2005, when his blog was really ugly, but the research he was sharing about how people use websites was really beautiful.
I signed up for his email newsletter, Alertbox, because it told me vital things to be aware of as a learning designer. Things like: never rely on learners to scroll down for more information, because learners (in Nielsen’s terms, users) don’t always recognise what a scroll bar is. Then, five years later, it told me that scrolling was fine, because most users now recognised scroll bars and used them happily. I learnt about the F-shaped reading pattern, and other ways to structure information on screens.
One thing I loved was the degree to which Nielsen’s site practised what it preached: it was really easy to use, because he followed all his own advice.
Fast-forward 20 years, and I spent about a quarter of my workshop at this year’s Tahu Ignite conference for learning designers sharing why NN/g (Nielsen Norman Group) is important to keep an eye on, and which of their posts I find most useful.
NN/g’s fabulous website has videos and articles sharing good practice advice that is vital for anyone building websites. (Make no mistake, an e-learning module is a website.)
Jakob Nielsen is the person who stated Jakob’s law: ‘Users spend most of their time on other sites. This means that users prefer your site to work the same way as all the other sites they already know.’
What does NN/g say ‘good’ looks like?
NN/g have a wonderful website, but a lot of it these days is targeted specifically at UX designers, so we have to dig a little to find the parts most relevant to learning design.
NN/g recommend two practices that I find most useful for testing e-learning: heuristic evaluation by another designer, and usability studies with test users.
- The description of how to run a heuristic evaluation gives us a lot to consider when planning a good learning-design review (Moran and Gordon, 2023).
- The page on test user numbers gives us a lot to take into account when planning user testing for learning design (Nielsen, 2012).
What does this mean for learning designers?
The key takeaway for me is that we need both types of evaluation during our design and development process if we want to deliver resources that actually teach the learners what we are trying to teach.
Learning design reviews made systematic
A learning design review can be based on heuristics like these:
- E-learning should include at least one formative assessment activity for each of the learning objectives, with meaningful feedback.
- Summative assessments should allow the learner to demonstrate that they have achieved each of the learning objectives.
- There should be closed captions for all videos, alt text for all meaningful images, and adequate colour contrast and font size for accessibility.
Obviously, these aren’t all the rules we would include in a full learning-design review, but having an agreed set of heuristics is a big step towards reliably producing good-quality modules.
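Some heuristics can even be checked automatically. As a minimal sketch, here is how the colour-contrast rule could be tested in Python. The luminance and ratio formulas, and the 4.5:1 threshold for normal-sized text, come from WCAG 2.x; the colour values are just for illustration:

```python
# Check the colour-contrast heuristic using the WCAG 2.x formulas.
# The 4.5:1 threshold is WCAG AA for normal-sized text.

def relative_luminance(hex_colour: str) -> float:
    """Relative luminance of an sRGB colour like '#336699' (WCAG 2.x)."""
    def channel(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_colour.lstrip('#')[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """Contrast ratio between two colours, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        [relative_luminance(fg), relative_luminance(bg)], reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Illustrative example: dark grey text on a white background.
ratio = contrast_ratio('#333333', '#ffffff')
print(f'{ratio:.2f}:1 -> {"pass" if ratio >= 4.5 else "fail"} (WCAG AA)')
```

Of course, only the mechanical rules lend themselves to this kind of check; a reviewer still has to judge the learning-design heuristics, like the quality of formative feedback.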
User testing
When it comes to user testing, we can take confidence from the idea that, if you are working with well-known tools, you can get good results with just two to five test learners.
They need to be typical members of your audience, and they need to be allowed to work through the module rather than asked to comment on it. Ideally, watch them as they work through, so you can see where they get stuck, and ask them to think aloud about what they are doing at that point.
It is nothing short of amazing how effectively this test identifies the issues that learners may run into as they work through the module.
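If you want the numbers behind the ‘two to five’ figure, Nielsen’s problem-discovery model estimates the share of usability problems found by n test users as 1 − (1 − L)^n, with L ≈ 0.31 in his data (Nielsen, 2012). A quick sketch in Python; treat the 0.31 as a rule of thumb from Nielsen’s research, not a guarantee for any particular module:

```python
# Nielsen's problem-discovery model: the proportion of usability problems
# found by n test users is 1 - (1 - L)^n, where L is the probability that
# a single user uncovers a given problem (~0.31 in Nielsen's data).
L = 0.31

for n in range(1, 6):
    found = 1 - (1 - L) ** n
    print(f"{n} test learner(s): ~{found:.0%} of problems found")
```

Five test learners already find roughly 85% of the problems, which is why the return on each additional tester drops off so quickly.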
For a half-hour module, it takes a grand total of about three hours, including an hour to find and organise the sessions and a learning designer sitting in on each of the two half-hour sessions.
It’s also great for confirming how long the module actually takes a real learner to work through. Timing estimates are useful, but they have nothing on actual learners doing the module.
I find that user testing like this is also great if I’ve done something a bit adventurous or unusual. It’s surprising how common it is for me to try something cool, and then find at the main review round that the project owners are worried the learners won’t ‘get’ it, or that it’s too hard. I find that learners generally seem to like things to be a bit adventurous, as long as they get the right supports to succeed.
User testing helps me identify whether my cool idea is actually cool or not, without guessing or relying on a reviewer’s (under)estimate of learner capability.