Last Reviewed on April 2, 2020, by eNotes Editorial. Word Count: 1301
Chapter 9: No Safe Zone: Getting Insurance
In 1896, Frederick Hoffman, a statistician who worked for the Prudential Insurance Company, produced a report arguing that the lives of black Americans were so precarious as to be uninsurable. This analysis was a WMD, containing statistical errors and confusing correlation with causation. The insurance industry, however, accepted Hoffman’s idea that certain groups of people posed an unacceptable level of risk, and insurers made the same assessment of certain neighborhoods.
A subtler form of these ideas is encoded in even the most recent WMDs. Insurance has become ever more personalized since the origins of the industry in the seventeenth century. More data than ever is now available for insurance companies to assess the particular risks each individual poses. However, what insurers actually do is use data “to divide us into smaller tribes.” The opacity of the process prevents us from knowing we are being placed into groups, let alone what those groups are. Some of the criteria are grossly unfair. O’Neil mentions that in Florida, Consumer Reports found that drivers with poor credit scores and clean driving records paid, on average, $1,552 more for car insurance than drivers with excellent credit scores and a drunk driving conviction, even though such a conviction is perhaps the most directly relevant risk criterion imaginable.
The models place such a high value on credit scores partly because they can be processed quickly and easily. However, the insurance companies are also reluctant to change their practices because they make huge profits from overcharging competent drivers while incurring little risk. The opacity of the criteria prevents customers from shopping around for better rates. At the same time that data is withheld from customers, insurers are able to obtain increasing amounts of data about customers. This will eventually allow them to charge prohibitively high rates or refuse coverage to those who pose the highest risk, negating the original purpose of insurance, which was “to help society balance its risk.”
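The Consumer Reports finding becomes less mysterious once one sees how a pricing formula can weight its inputs. The following toy model is purely illustrative (the surcharge figures and function names are invented, not drawn from any actual insurer): when the credit-score penalty is set far larger than the drunk-driving penalty, a safe driver with poor credit inevitably pays more than a convicted drunk driver with excellent credit.

```python
# Hypothetical toy premium model -- all dollar figures are made up
# to illustrate the weighting pattern, not any real insurer's formula.

BASE_PREMIUM = 1_000  # assumed base annual premium, in dollars

# Assumed surcharges: the model weights credit far above driving record.
CREDIT_SURCHARGE = {"excellent": 0, "poor": 1_800}
DUI_SURCHARGE = {False: 0, True: 250}

def annual_premium(credit: str, has_dui: bool) -> int:
    """Premium = base + credit-score surcharge + drunk-driving surcharge."""
    return BASE_PREMIUM + CREDIT_SURCHARGE[credit] + DUI_SURCHARGE[has_dui]

# A clean driver with poor credit vs. a convicted drunk driver with
# excellent credit: the safe driver comes out paying far more.
clean_poor_credit = annual_premium("poor", has_dui=False)     # 2800
dui_good_credit = annual_premium("excellent", has_dui=True)   # 1250

print(clean_poor_credit - dui_good_credit)  # prints 1550
```

The opacity O’Neil describes means a real customer never sees these weights, so the pattern can only be detected from the outside, as Consumer Reports did, by comparing quoted premiums across profiles.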
Machines that analyze millions of data points will often find correlations which humans would never consider. This could be something as absurd as “people who spend more than 50 percent of their days on streets starting with the letter J.” It can then take a lot of work (which is not always done) to discover why the machine in question has sorted people into particular categories. Huge amounts of data are fed into black boxes, and as time goes on, we will have less and less idea of the reasoning behind the conclusions that emerge.
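The “streets starting with the letter J” point is a version of the multiple-comparisons problem: test enough arbitrary features against an outcome and some will look predictive by pure chance. A minimal sketch (with invented feature counts and an outcome that is literally a coin flip, so no feature can truly matter):

```python
import random

# Sketch of spurious correlation: the outcome is random, yet scanning
# many arbitrary features still turns up an apparent "risk gap."
random.seed(42)

N_PEOPLE = 200
N_FEATURES = 500  # e.g. "street name starts with J", "owns a blue car", ...

# Outcome is pure coin flips -- by construction, nothing predicts it.
outcome = [random.random() < 0.5 for _ in range(N_PEOPLE)]

best_gap = 0.0
for _ in range(N_FEATURES):
    # Each feature randomly splits people into two groups.
    feature = [random.random() < 0.5 for _ in range(N_PEOPLE)]
    in_group = [o for o, has in zip(outcome, feature) if has]
    out_group = [o for o, has in zip(outcome, feature) if not has]
    if in_group and out_group:
        gap = abs(sum(in_group) / len(in_group)
                  - sum(out_group) / len(out_group))
        best_gap = max(best_gap, gap)

# Some feature will show a sizable gap purely by chance.
print(f"largest spurious 'risk gap' found: {best_gap:.1%}")
```

With 500 candidate features and only 200 people, the largest chance gap is typically well over ten percentage points. This is why O’Neil stresses that it takes real work, which is not always done, to check whether a machine-found category reflects anything causal.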
Health insurance is particularly vulnerable to the faulty analysis of vast amounts of data. The scores that health insurers use are often based on discredited science, such as the Body Mass Index (BMI), which discriminates against women and those with muscular physiques. Employers are already using health data to assess potential employees and current workers. O’Neil is concerned that “if companies cooked up their own health and productivity models, this could grow into a full-fledged WMD.”
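The BMI itself is just a simple ratio, weight in kilograms divided by height in meters squared, which is exactly why it fails in the way O’Neil describes: it cannot distinguish muscle from fat. A minimal sketch (the athlete’s measurements are a hypothetical example, not from the book):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight in kilograms over height in meters, squared."""
    return weight_kg / (height_m ** 2)

# A hypothetical muscular athlete: 1.80 m tall, 95 kg of mostly lean mass.
# Standard BMI categories label 25-29.9 "overweight" and 30+ "obese".
athlete_bmi = bmi(95, 1.80)

print(f"athlete BMI: {athlete_bmi:.1f}")  # ~29.3 -- classed "overweight"
```

A score built on such a crude proxy inherits its blind spots, which is the danger when employers fold health metrics like this into productivity models.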
Chapter 10: The Targeted Citizen: Civic Life
When someone posts something on Facebook, they do not know which of their friends will see it. Facebook’s algorithms decide this for users. About two-thirds of adults in America have a Facebook profile, and half of them rely on Facebook “to deliver at least some of their news.” Facebook, therefore, has a huge amount of power to control the news seen by a large segment of the population. Other companies—such as Google, Apple, Microsoft, and Amazon—also control vast quantities of information and are correspondingly powerful. Facebook’s campaign to increase voter turnout in 2010 persuaded an estimated 340,000 more people to vote. In 2012, Facebook altered its news feed algorithm for approximately two million people who appeared to be politically engaged, in order to give them “a higher proportion of hard news.” Surveys indicated that voter participation in this group increased slightly, from 64% to 67%.
Facebook has also conducted experiments to affect the way its users feel by altering the proportion of positive and negative posts in the news feeds of different groups. All of this activity is opaque to users. Unlike newspapers, which clearly curate the news, Facebook appears at first to be a neutral platform—and the company actively cultivates this image. Google is in a similar position. It is generally trusted and regarded as neutral, but “search results, if Google so chose, could have a dramatic effect on what people learn and how they vote.” Google and Facebook are not yet political WMDs, because there is no evidence that they are being used in a partisan manner. “Still,” O’Neil notes, “the potential for abuse is vast.”
Consumer marketing has now merged with politics, allowing politicians to target specific groups with the messages they know those groups want to hear. Politicians also emulate credit card companies in their use of Big Data, creating huge databases of voters and placing them into groups based on demographics and values. These tools allow candidates “to sell multiple versions of themselves—and it’s anyone’s guess which version will show up for work after inauguration.” Political campaigns employ data analysts who use patterns identified by retailers to develop voter profiles. Similar methods are employed by lobbyists and special interest groups.
At its most sinister, the strategy of targeting carefully identified groups of voters is used to spread lies among those predisposed to believe them. This, O’Neil asserts, is why in 2015, 20% of Republicans believed that President Obama was born outside the United States, while 43% thought he was a Muslim. These highly targeted political messages are making it more and more difficult to know what others are hearing and reading, increasing the polarization of society. As with most WMDs, however, targeted messaging could theoretically be used for constructive purposes if the objective were changed, supplying voters with information that is particularly relevant or helpful to them.
WMDs reverse the national motto of the United States: e pluribus unum, “out of many, one.” They divide Americans into many groups and inflict great harm on the poorest of those groups, while hiding that harm from everyone else. O’Neil contrasts the idea of dismantling WMDs with the cause of gay rights. The gay rights movement benefitted from market forces, as companies could not afford to ignore the talents of gay and lesbian workers. Social justice was a by-product of the profit motive. There is no such obvious payoff in getting rid of WMDs, as they are often profitable and their victims are powerless. However, some WMDs target the middle classes and the rich, and they will do so increasingly as their users look for new opportunities.
O’Neil suggests that data scientists should make a pledge similar to the medical profession’s Hippocratic oath. Such an oath would contain provisions not to be overly reliant on mathematical models, not to sacrifice reality for efficiency or elegance, and not to pretend that models are more accurate than they really are. Because only the most scrupulous individuals will take such an oath voluntarily, laws also need to change, and “we must re-evaluate our metric of success” by looking at the human impact of mathematical models. O’Neil suggests specific updates for the Americans with Disabilities Act (ADA) and the Health Insurance Portability and Accountability Act (HIPAA). Models that have an impact on people’s lives should also be transparent and easily available to those affected.
If we treat mathematical models as a neutral force, ignoring the prejudices and self-interest coded into them, we will continue to create WMDs and exacerbate the problems caused by existing models. We should police WMDs and create mathematical models with the specific intent of making them fair and accountable, using them to combat racism, sexism, and poverty. As O’Neil concludes,
Math deserves much better than WMDs, and democracy does too.