Chapter 8: The Toxic Power of Data and Experiments

What boundaries or regulation should education technology advocates have around data collection and analysis?

What ethics or policies should guide experimentation in online learning environments?

How do you view the “risk-reward” calculus of weighing the potential gains from education technology research against the risks of surveillance and a loss of autonomy?

Hi all,

So, I made it into the book :-). That story about Pearson on pages 220-223? That was me and my team. The first time I heard of Justin was when he tweeted something in support of A/B testing while the “controversy” was going around Twitter. I appreciated that, as well as the many expressions of support I got from others (including Neil Heffernan, who was a guest on the virtual book club).

Interestingly, the way this came to general attention was that we decided to share our findings with the education research world instead of keeping them to ourselves, so we presented at the American Educational Research Association conference. A reporter from EdWeek picked up on it and ran with a headline along the lines of “Pearson conducts social-psychological experiment on students.” Then a blogger from the Washington Post picked it up, and we were off. I spent days doing interviews with other media outlets.

I had never before talked to Investor Relations, the team at Pearson that manages all communications with investors, but my phone rang that week as our stock price fell. It turned out that the overall market was down, so they decided the drop wasn’t attributable to me, but they needed a response in case investors asked. One of the investment rating services reviewed it and concluded, “Pearson tries to improve learning. Washington Post objects.” I exhaled. All this because we wanted to see if adding some growth mindset messaging improved performance.

As you can imagine, this has made me somewhat gun-shy about A/B testing but also really interested in the public’s view of it. I was fascinated by this article in the Proceedings of the National Academy of Sciences studying the public’s view of A/B testing. Across 16 studies spanning 9 domains, people were more likely to favor implementing an unproven treatment for everyone than running an A/B experiment to find out whether it works. I don’t think it’s just concerns about data and privacy; I think there is also some concern about equal treatment or fairness at play here. I would love to hear some discussion on Monday about how to reconcile the belief many of us hold in this type of experiment as a way to learn about learning and improve our products with the general public’s seeming distaste for it.

When I saw the topic, I also thought of the kinds of information generated through quantitative and qualitative data, and of our tendency to want to measure and numerically represent everything. For example, I recently had to work on developing curiosity in the classroom, and some of the top papers are about showing kids words like curious, wonder, etc. It’s called priming. That sounds fundamentally dehumanising to me. But qualitative methods aren’t very scalable, so are we stuck with quantitative, summary data?