What makes a great learning game?

Survival of the Best Fit by Gabor Csapo, Jihyun Kim, Miha Klasinc, and Alia ElKattan (2019) is a game that teaches about algorithmic bias. It is a fabulous way to understand how algorithms can reinforce and accentuate structural inequalities. I was struck by the ease with which very poor datasets can enter AI training - it sounded familiar, but was much easier to see when it was framed in the game's workplace-hiring scenario.

I highly recommend playing it.

The game is also a really beautiful example of scenario-based gamified learning. 

By asking you to make decisions, the game lets you form your own hypotheses about what is happening. Then it uses meaningful feedback to draw out the key messages it's trying to teach. As a player, you automatically compare these messages with your hypotheses, and adjust those hypotheses based on the feedback. This cycle of forming hypotheses and testing them helps learners to engage deeply, so they are doing more than clicking for information. 

Conventional e-learning often uses a 'presentation and test' format, providing information and then letting learners check they've understood it. Or even a 'presentation' format, where the learner just clicks and clicks, seeing or hearing content, but never verifying whether they've understood it. Putting the 'test' first, by having the learner make decisions about a scenario, means the learner comes out having formed their own interpretation, rather than having become temporarily able to parrot back someone else's words. The points they've studied become part of their long-term understanding of the world. It's more fun, more effective, and more powerful.

So why don't we do this every time? 

Creating rich scenarios, with multiple feedback options for different decisions, takes real work. The team who made this will have had to think hard about what they were trying to achieve and how they could get there. I note that the design team seem to have made this tool voluntarily, as an effort to improve the world. (I also assume the finished game we see is not their first iteration. It's fun to play, so I suspect it's been well user-tested.)

As well as a project team who aren't afraid of hard work, it takes project owners who see the value in providing learning activities rather than presentations.

As learning designers, it's our job to help make this happen. 

About a year ago, I was in a design meeting for an e-learning module, and it started (as they so often do) with a senior manager saying he didn't have time to attend the whole meeting, but that all he wanted was for the learners we were targeting to have read the new policy. Not understood it. Not known how it would affect their work. Just read it. 

He wanted us to make an e-learning module with a copy of each page of the policy, and reporting that would show who had viewed each page. I felt a bit daunted, but focused on asking him why he wanted this - what he was hoping it would achieve.

It took about 15 minutes to bring him around to telling me a bit about what he actually wanted to achieve on the job. A quarter of an hour after that, he cancelled his competing meeting so he could explain fully what he wanted the learners to do. It was pretty complex, because it wasn't about reading the policy at all. 

He wanted his staff to react in accordance with a whole series of different legal requirements, in a wide range of situations. The policy was meant to provide them with guidance on what that might mean for them, but it was still pretty generic, and he needed each team to discuss the implications for their work with their managers, making sure everyone was really clear on it. There were some great guiding principles in the policy, but no examples. Because each team would encounter very different situations in which the principles would apply, he wasn't sure how any single training intervention could help. 

Once we knew what we were trying to achieve, it got a lot easier. We decided that scenario-based activities were the way to go. We did make a short e-learning module, as a quick introduction to why the policy was needed. It started with a scenario, and we planned it to have a lot of 'question and feedback' rather than 'presentation and question' interactions, to get people forming hypotheses for themselves. Then we interviewed people from the teams, finding examples that matched the roles of 10 different groups of learners - around fifty scenarios in total. We planned a workshop framed around the examples, and we trained team leaders in how to run the workshop, using the relevant scenarios for their teams, to make sure everyone was having the discussions that they really needed - about what the new policy actually meant for them. 

It was a long way from a compliance activity with no guarantee that anyone had even read the pages they'd clicked to view!

It still wasn't a patch on Survival of the Best Fit in terms of interactivity. One thing this game relies on is the simple nature of its scenario. Because the player starts from one point, all the complexity can stay within the outcomes from that point. Often, when people talk about using scenarios, they are aiming for richness in the starting point - like the 50 scenarios in my workshop. This is great, but it has natural limits in how rich the interactions around them can be. It's generally just not feasible to start from multiple points and also provide really deep interactions around them, in a workplace learning module. 

It can be hard to trust that a very simple scenario can become rich, but Survival of the Best Fit shows how this works. It's a branching scenario, with what look like multiple loops. It feels realistic and prompts the learner to form complex hypotheses, because it lets them make decisions, prompts them to reflect, and gives them feedback on what they have chosen.
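
To make that concrete, here is a minimal sketch of how a branching scenario with feedback loops can be represented - entirely my own invention for illustration, not the game's actual implementation, and all the node names and wording are made up. The point is the shape: one simple starting point, choices that each carry their own feedback, and a loop that lets the player revisit a decision with a revised hypothesis.

```python
# A minimal sketch of a branching scenario with feedback loops - my own
# invention for illustration, not the game's actual implementation. Each node
# poses a decision, every choice carries feedback, and some choices loop back
# so the player can revisit a decision with a revised hypothesis.
scenario = {
    "review_cvs": {
        "prompt": "Which candidates do you shortlist?",
        "choices": {
            "by_education": {
                "feedback": "Hiring speeds up - but notice who keeps being filtered out.",
                "next": "automate",
            },
            "by_wider_criteria": {
                "feedback": "Slower, and your shortlist looks quite different.",
                "next": "review_cvs",  # loop: take the same decision again with new information
            },
        },
    },
    "automate": {
        "prompt": "Whose past decisions do you train the algorithm on?",
        "choices": {
            "your_own": {
                "feedback": "The model now repeats your earlier shortcuts at scale.",
                "next": None,  # end of this branch
            },
        },
    },
}


def play(scenario, start="review_cvs", decide=input):
    """Walk the scenario: show a prompt, take a decision, give feedback."""
    node_key = start
    while node_key is not None:
        node = scenario[node_key]
        print(node["prompt"], "/".join(node["choices"]))
        choice = node["choices"].get(decide("> ").strip())
        if choice is None:
            continue  # unrecognised answer: ask again
        print(choice["feedback"])  # the learning sits in the feedback, not in the prompt
        node_key = choice["next"]
```

Notice that all the richness lives in the feedback attached to each choice, not in the prompts - which is exactly where the game puts its teaching.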

As an example, let's look at my first play: I was blindsided (embarrassingly - I thought I knew better than this!) by the racism inherent in seeing 'education' as a key predictor of success. I'd formed a hypothesis that education would be a good predictor of success, but then started to question it when the results I was getting were not what I'd expected. When I got the feedback that showed me why, I felt as though someone was shining a torch on my initial hypothesis and it was full of holes. Right now, I am pretty sure I won't forget to consider this again! More deeply, I can see ways in which people who are selecting datasets won't be aware of how their own biases will propagate through algorithms into decisions they wouldn't support if they were aware of them. 

And all this from such a simple start - look at four variables and choose the best sets. 
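
For anyone who wants to see that propagation mechanism in code, here is a toy sketch - the numbers, feature names and 'school' labels are invented by me, nothing is taken from the game - in which a model trained on biased past hiring decisions reproduces the bias even for candidates of identical skill.

```python
# A toy sketch of bias propagating from past decisions into a model - all
# numbers and feature names are invented for illustration, not from the game.
# Historical hiring favours one school ("blue") even when skill is equal;
# a model trained on those decisions happily reproduces the preference.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

skill = rng.normal(size=n)            # genuinely useful signal
school = rng.integers(0, 2, size=n)   # 1 = "blue", 0 = "orange"

# Past hiring decisions: skill matters, but "blue" candidates get a bonus.
hired = (skill + 1.5 * school + rng.normal(scale=0.5, size=n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, school]), hired)

# Two candidates with identical skill, different schools:
same_skill = np.array([[0.5, 1], [0.5, 0]])
print(model.predict_proba(same_skill)[:, 1])  # the "blue" candidate scores noticeably higher
```

Nothing in the training step flags the school bonus as a problem - the model simply learns whatever pattern the historical decisions contain, which is the insight the game's feedback surfaces.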

This placing of the learning into feedback, rather than presentation, is what makes Survival of the Best Fit a really effective learning tool. 

Twenty years ago, people in workplace learning design were talking about moving away from 'chalk and talk' and 'text and next'. Designing learning activities to support reflection and encourage learners to form and test their own hypotheses is still surprisingly rare. 

However, it's where we need to be, if we are going to be anything beyond second-rate copywriters.

Originally published on my LinkedIn page on 18 June 2020.