NZATD 2025: When to test e-learning

In my last post based on my NZATD Tahu Ignite workshop, I looked at what UX/UI wisdom has to offer learning designers when it comes to testing e-learning during the design process.
That’s cool, but it only goes so far. One of the main reasons people don’t do testing, or don’t do enough of it, is that they think it can only happen once the work is complete. Or they look at it and decide it’s prohibitively expensive to add extra steps to their design and development process. So they fall back into ADDIE (Analysis, Design, Development, Implementation, and Evaluation), with evaluation sitting firmly at the end.
And then, because it’s at the end, and all the development is completed, there is no budget or time left in which to test whether it actually works. And if you do find that the learning activity doesn’t work at that stage, what do you do? Are you really planning to tell the project sponsor you’ve used up all the resources they approved, and you haven’t met their objectives?
Testing goes quietly out the window at that stage, along with every other form of evaluation.
It’s a disappointing story, but disturbingly common. And it’s a waste.
It also doesn’t have to be this way.
How do you fit testing into the design process?
In my workshop, I shared a brief overview of SAM and its sibling SAM2: the brainchildren of Dr Michael Allen, of Allen Interactions (Allen and Sites, 2012). They are far from the only iterative design processes for learning interventions, but they are well documented and flexible.
I don’t follow the whole SAM2 process. I don’t have an activity called a Savvy Start (although my projects do have quite a lot of similarities at this stage). I don’t call my drafts Alpha or Beta, or the final product Gold.
But I do use a system that has a lot of elements from SAM2.
SAM and SAM2 are both designed around the idea of an iterative cycle of planning, acting, and checking.
- In SAM, these are called Design, Develop, and Evaluate.
- SAM2 gets a bit more complex, giving the steps different names depending on where you are in the project. In the Design phase, you Design, Prototype, and Evaluate. In the Development phase, you Develop, Implement, and Evaluate. There is also a whole mini-SAM process included in the initial planning phase.
For both SAM and SAM2, the fundamental idea is the same, and it is that each significant step you take should include an evaluation step.
I’m going to talk about SAM2 from here on, because if you’re doing a simple job with SAM, it’s pretty similar to an ADDIE process that cycles back to the start anyway. Work out what you need, make it, try it out, and evaluate. Then do it again, until you get what you are after.
SAM2 is where things get meatier, but also more manageable than just repeating the ADDIE cycle.
What does SAM2 say ‘good’ looks like?
The key feature of SAM2 from my perspective is the idea of gradually increasing completion. Your deliverable (a module, a workshop, or whatever other learning resources you are developing) is thought of as having six possible stages.
- It could be like a sketch: the Savvy Start prototype.
- It could be like a concept drawing: the media prototype.
- It could be like a 3D rendering of the house: the design proof.
- It could be like a house that has been built, but not yet had its plumbing or flooring installed: the Alpha draft.
- It could have all the building and plumbing work done, but be unfurnished: the Beta draft.
- It could be ready to move in: Gold.
Each of these stages can be reviewed or tested. And each may need more than one round of checks to get right.
You can learn about SAM2 at Allen Interactions’ website.
The problem
SAM2 is a system that seems designed for developing new online learning interactions. It has a whole heap of complexity that makes sense if you are trying to make something really new.
If what you’re doing is developing a face-to-face workshop (online or in person) or a Rise or Chameleon e-learning module, then it’s probably massive overkill and needs to be scaled down. Building a design proof (a fully functional prototype with the final look and feel) for a Rise module is just building the Rise module.
For most modern e-learning that is developed in responsive tools, the activities are mostly already developed and the look and feel has just a few options. There simply isn’t the need for such a big design phase.
It’s still well worth considering at the start of the project, though. How much design are you planning to do? I’ve seen projects where teams who are used to Rise have run into real difficulties working in Storyline, not because they lacked the technical skills, but because they tried to skimp on the design phase. Developing e-learning in tools that aren’t specifically for that purpose gets even more complex.
And fixing design issues during what was meant to be the Beta review is far more stressful than I like.
What does this mean for learning designers?
The preparation and design stages are the time for internal reviews and concept presentations
Making changes at this stage is much cheaper than later.
My ‘prototypes’ for e-learning tend to be a text description of the learning experience, together with a diagram showing where the learning objectives are covered and what types of activity I’ll be using.
This costs very little to change, because it’s all in Word, as a SmartArt diagram and some paragraphs. This stage is where it makes sense to try out ideas.
Testing for something like this is a bit tricky, though, because it takes some experience to visualise what it will look like built. I have been known to knock together outlines of modules in Rise for this, with a couple of sample activities.
I find internal reviews are vital at this stage, and if I can swing it, I also talk with members of the target audience about how they might feel about the planned activities.
Sending it out for review doesn’t tend to be very useful; a prototype like this really works best if you share it in a meeting and talk through the concept that has been sketched out.
The Alpha phase is the time to get the content right
I don’t try to test with users at Alpha (which we call Draft 1). There usually isn’t a budget for multiple rounds of testing on my projects, so at this stage I focus on working with subject matter experts to get my content right.
Beta is for testing
Once the content is all signed off, it’s a great time both to test and to take the work back to the key stakeholders who need to approve it.
I like to run my user testing at the same time as the stakeholder review. Not just to cut down the number of iterations, but for another reason.
Stakeholders often challenge the design when they see it built. And these challenges are often requests to ‘dumb down’ the activities. I can think of three projects off the top of my head where senior stakeholders who liked a plan at the start saw a built module and suddenly worried that the formative activities were too hard. ‘What if they get this wrong?’ they ask me. ‘Won’t the learners be confused here?’
Don’t get me wrong: I have absolutely been known to misjudge an activity occasionally. That’s part of the point of testing: to find the places where we aren’t getting things right. I’m not just trying to protect my precious, precious plans.
But for some reason, the Beta review seems to make the people signing off on my work really nervous, to the point of juvenilising the learners. In every case I can think of where this has come up, testing showed that typical learners actually liked the activities in question and did not find them confusing.
So by running the user testing alongside the senior stakeholder review, we can get the information we need to ensure the learning activities we are developing are challenging as well as supportive. Sharing the test results can reassure people who are worried that the learners won’t understand something.
It is also still early enough in the process that if the testing brings up a problem, there is budget left to correct it. Sometimes an activity is confusing, or some extra information is needed. Sometimes, better instructions are needed on how to use an online interaction.
The right testing and reviews at the right times
SAM is a very simple model, and SAM2 is more complex than the projects I work on require. But I find them both really worth looking at, because SAM2 provides a comprehensive list of the types of output I might be making, and what kinds of review and testing might be appropriate for each.
I definitely recommend reading up on them, and then looking at your design and development process to see whether you’re getting the most out of each iteration you work on.