The Biased Movie and Book Club, Episode 2 - Weapons of Math Destruction
- Luisa Herrmann
- Jun 1

I read Cathy O’Neil’s Weapons of Math Destruction a few years ago, and I still come back to it when I want examples of how models can be used to codify biases under the guise of equality. It was one of the first books I read on my journey toward understanding how data, analytics, and models are used in the “real world,” and it was a huge wake-up call. It’s one of the clearest, most impactful books I’ve read about the risks of unchecked algorithmic decision-making. And now, as AI systems are built and deployed at unprecedented scale, its message feels more urgent than ever.
O’Neil lays out, in sharp, accessible language, how algorithms that are opaque, unregulated, and scaled too quickly can reinforce existing inequalities and even create new ones. These systems are built in theoretical spaces by very smart people who are not necessarily exposed to the humans affected by their designs, and who can’t foresee every consequence. Whether it’s hiring algorithms that penalize candidates based on gendered language in their resumes, lending models that assume lower credit equals lower worth, or predictive policing tools that entrench historical discrimination, the outcomes are clear: in an attempt to improve efficiency and decision-making, these systems codify harm to historically marginalized and disenfranchised groups.
At the core of the book is the belief that just because something is based on mathematics or statistics doesn’t mean it’s neutral. In fact, the opposite is often true. How we collect data, define success, and encode assumptions into models are all choices, which makes modeling inherently human and, therefore, inherently biased. And when these systems scale across millions of lives, the consequences are real, and they can be devastating. Worse, by the time those final outcomes arrive, they are often so far removed from the original decision and implementation that little feedback ever flows back to correct them.
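To make “defining success is a choice” concrete, here’s a minimal toy sketch of my own (not an example from the book): two scoring functions over the same hypothetical candidates, identical in their math, differing only in which outcome the modeler decided to call “success.”

```python
# A toy sketch (my own example, not O'Neil's): the "success" definition a
# modeler picks is a human choice, and different choices rank the same
# people differently.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    performance: float   # past manager rating, 0-1
    tenure_years: float  # years at previous job

candidates = [
    Candidate("A", performance=0.9, tenure_years=2.0),  # e.g., a caregiving gap
    Candidate("B", performance=0.6, tenure_years=6.0),
]

# Choice 1: define "success" as long tenure. Anyone with a career gap
# scores low, regardless of ability.
def score_by_tenure(c: Candidate) -> float:
    return min(c.tenure_years / 10, 1.0)

# Choice 2: define "success" as rated performance.
def score_by_performance(c: Candidate) -> float:
    return c.performance

for c in candidates:
    print(f"{c.name}: tenure-based={score_by_tenure(c):.2f}, "
          f"performance-based={score_by_performance(c):.2f}")
# A: tenure-based=0.20, performance-based=0.90
# B: tenure-based=0.60, performance-based=0.60
```

Neither function is mathematically “wrong”; the divergence in who gets hired comes entirely from the human choice of label.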
For those of us building data-driven products or working in AI/ML (or using math/statistics/data science), this book is essential reading. It’s a great reminder that numbers can obscure the real people affected by them, and that the goal should never be optimization without context. If we aren’t actively asking who a model serves, who it excludes, and who it harms, we’re not doing our jobs responsibly.
Transparency and accountability must be core product values in AI. We need to stop treating algorithms as correct just because they use math and are trained on "data", and start building products that can (and must) be interrogated, stress-tested, and held to account.
Have you read it? What stood out to you?