Introduction and Chapters 1–2 Summary

Last Reviewed on April 2, 2020, by eNotes Editorial. Word Count: 1263



Author Cathy O’Neil begins by writing about how she has loved mathematics ever since she was a child:

Math provided a neat refuge from the messiness of the real world.

She majored in math, obtained her PhD, and became a professor. Later, however, she left academia to work for a hedge fund. There, the 2008 financial crash showed her that mathematics “was not only deeply entangled in the world’s problems but also fueling many of them,” such as the housing crisis, unemployment, and financial collapse. However, financial institutions did not learn from the crash, and mathematics became more influential than ever with the rise of the Big Data economy.

O’Neil saw the convenience of Big Data, which could sort thousands of loan applications or other numerically based documents in seconds, but she was also aware of its problems, including a tendency to discriminate against those who were already poor. She refers to these Big Data models as WMDs: Weapons of Math Destruction. As an example, she cites the Washington, DC, school district, which used a program to identify and fire underperforming teachers. This assessment method was seriously flawed, however, as it relied on a data sample of only 25–30 students per teacher and had no feedback mechanism. A total of 206 teachers were fired, but the district will never know whether the decision was correct. The teachers are now viewed as failures purely because the system identified them as such, an example of what O’Neil calls “a WMD feedback loop.”
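The statistical weakness here can be illustrated with a small simulation (an illustrative sketch, not the district's actual scoring model): when the same average teacher is scored on the measured gains of a class of only 28 students, pure noise produces widely scattered scores, so a fixed firing cutoff will mislabel many identical teachers.

```python
import random
import statistics

random.seed(0)

def value_added_score(n_students=28, true_effect=0.0, noise_sd=1.0):
    """Simulate one 'value-added' score: the mean measured gain of a
    small class, where each student's gain mixes the teacher's true
    effect with everything else (home life, test noise, luck)."""
    gains = [random.gauss(true_effect, noise_sd) for _ in range(n_students)]
    return statistics.mean(gains)

# Score the *same* average teacher 1,000 times with fresh classes of 28.
scores = [value_added_score() for _ in range(1000)]
spread = statistics.stdev(scores)
print(f"spread of scores for identical teachers (stdev): {spread:.2f}")
```

With 28 students the spread of scores is roughly 1/√28 of the per-student noise, large enough that a teacher's ranking can swing dramatically from one year to the next without any change in teaching quality.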

In the upper echelons of society, people tend to be personally evaluated. White-shoe law firms and exclusive preparatory schools conduct face-to-face interviews. The poor, however, are processed en masse by WMDs. The score that results “can turn someone’s life upside down,” even though it is only based on a probability, not on a certainty. However, people generally cannot fight back—and when they do, O’Neil notes,

The evidence must be ironclad. The human victims of WMDs . . . are held to a far higher standard of evidence than the algorithms themselves.

Having seen the danger posed by WMDs, O’Neil left the hedge fund in 2011 and became a data scientist. Interviews with the Occupy Wall Street protesters made her realize that, although she agreed with them, they did not understand how finance worked. She decided to help by bringing the knowledge and information she had gained at the hedge fund to the cause of financial reform. She wanted to do this because she believes that the people who run WMDs, and even other data scientists, often fail to consider the people “on the receiving end of the transaction,” who are regarded as “collateral damage” (if anyone thinks about them at all).

The book, O’Neil writes, explores examples of people being harmed by WMDs “at critical life moments: going to college, borrowing money, getting sentenced to prison, or finding and holding a job.” This is the dark side of Big Data.

Chapter 1: Bomb Parts: What Is a Model?

O’Neil begins by observing how the managers of baseball teams organize their defenses based on analyses of opposing players’ hitting patterns. This is only one of the ways in which teams use statistical data to maximize the probability that they will win. The use of statistical models in baseball is fair, because everyone has access to the statistics and the data is highly relevant to the outcomes.

While the second point, relevance, may sound obvious, the designers of WMDs, by contrast, “routinely lack data for the behaviors they’re most interested in.” This leads them to use proxies that do not necessarily correlate with the behavior being predicted and are sometimes discriminatory or even illegal, such as judging a subject’s ability to repay a loan from the language patterns they use. The baseball models also draw on a constant stream of new data, whereas WMDs frequently remain static.

When we create a model, we choose the most important information to include. We generally design it for one specific function, meaning that it will have huge blind spots outside that area. Although mathematical models appear to be impartial, these blind spots—the matters that designers exclude as irrelevant or unimportant—reflect their opinions. O’Neil uses racism as an instance of a poorly designed predictive model, one that is “built from faulty, incomplete, or generalized data” and based on haphazard information and confirmation bias. WMDs, she says, often operate in much the same way. She gives examples of the way in which data-based risk models, designed to eliminate racial bias in sentencing, have in fact merely camouflaged that bias through the selection of data and the way in which it is interpreted.

O’Neil concludes the chapter by posing three questions that reveal whether a statistical model qualifies as a WMD, which is distinguished by its toxic effects. First, is the model opaque, or even invisible, to those whose data is being analyzed? Second, is it unfair? Third, does the model have the capacity to grow exponentially? These three factors, termed Opacity, Scale (or scalability), and Damage (to the lives of those whose data is used), define a WMD.
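O'Neil's three-question test can be written down as a simple checklist. The sketch below uses hypothetical names (the `ModelAudit` class is an invention for illustration, not something from the book):

```python
from dataclasses import dataclass

@dataclass
class ModelAudit:
    """Record the three WMD criteria for a scoring model."""
    opaque: bool     # Opacity: invisible or unexplained to those scored?
    damaging: bool   # Damage: unfair, with the power to wreck lives?
    scalable: bool   # Scale: able to grow to score huge populations?

    def is_wmd(self) -> bool:
        # All three criteria together mark a model as a WMD.
        return self.opaque and self.damaging and self.scalable

# The teacher-scoring model from the introduction ticks every box;
# a baseball defensive model is transparent and low-stakes.
teacher_scoring = ModelAudit(opaque=True, damaging=True, scalable=True)
baseball_model = ModelAudit(opaque=False, damaging=False, scalable=True)
print(teacher_scoring.is_wmd(), baseball_model.is_wmd())  # True False
```

The point of requiring all three flags is that a transparent model can be contested, and a small or harmless one does limited damage; it is the combination that makes a model toxic at scale.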

Chapter 2: Shell Shocked: My Journey of Disillusionment

O’Neil observes that tiny anomalies in financial markets can be worth millions of dollars to those who discover them. She enjoyed this aspect of her work at the hedge fund, D. E. Shaw, and at first regarded it as morally neutral. She did not think about the fact that the huge sums of money with which the hedge fund was gambling represented the mortgages and pension funds of real people. This was brought home to her by the financial collapse of 2008. The managers of D. E. Shaw initially thought that the collapse would not affect the hedge fund and that they might even make money out of it.

However, even the managers eventually became worried by the element of novelty in the 2008 crisis—in particular by the unreliability of mortgage-backed securities, which had always been stable in the past. Since mathematical models predict the future based on historical data, a genuinely new situation inevitably presents a problem. The subprime mortgages which fueled the crisis were not themselves WMDs, since they were financial instruments, not models. However, the models which classified the mortgages were WMDs, and the mathematicians who used them were dealing with highly unreliable, often fraudulent, data. The scale of the market was vast, with $3 trillion of subprime mortgages and a surrounding market twenty times as large by 2007. By the second half of 2008, these numbers had turned into human suffering as people lost their jobs and homes.

In 2009, it became clear to O’Neil that financial institutions had not learned the lessons of the crash and were continuing much as before. This is why she left D. E. Shaw and joined a firm of risk analysts. She found that, unlike hedge funds, the big banks showed little interest in analyzing the risk in their portfolios, since the culture of Wall Street depends on underestimating risk.

O’Neil soon moved to another company to work as a data scientist. She saw many parallels between the financial sector and Big Data, including the type of ambitious, well-educated, money-motivated people who worked in both. She also felt that the two sectors were related in “the separation between technical models and real people.” This led her to wonder what the Big Data analogue to the financial crisis would be, and she quit her job to investigate what she saw as the misuse of mathematics in this field.


Chapters 3–5 Summary