Chapters 6–8 Summary
Chapter 6: Ineligible to Serve: Getting a Job
The chapter begins with the story of a student whose application for a minimum-wage job at a supermarket was rejected on the basis of a personality test. He later found that all the other companies to which he applied also turned him down based on the same test. O’Neil uses this example to introduce the question of “how automatic systems judge us when we seek jobs and what criteria they evaluate.”
In Griggs v. Duke Power Co. (1971), the Supreme Court ruled that hiring tests must be relevant to the job, effectively barring broad intelligence testing, so many companies have taken to using personality tests instead. Since simple personality tests can be easy to “game,” as the answers the employer wants are obvious, more complex tests have been developed, often featuring questions in which prospective employees are asked to admit to one of two faults. These tests are opaque and have no feedback mechanism, since no one tracks what happens to the candidates who are rejected. These qualities make personality tests for hiring WMDs. They are even more unfair than the old-fashioned hiring practices based on personal contacts and first impressions: those, at least, varied from company to company, whereas the prejudices embedded in tests are scaled across entire industries.
Human resources departments are increasingly reliant on automatic systems to screen applications. Applicants who know the particular terms for which these systems search, therefore, have a built-in advantage. The systems are often designed to replicate procedures that human beings have previously followed, meaning that the computer learns “from the humans how to discriminate.” O’Neil cites the case of St. George’s Hospital Medical School in London. The people who screened applications tended to discard those with poor grammar or spelling, many of which were from overseas. The computer therefore learned to assign lower scores to applicants from Africa and Asia.
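To see how a screening system can absorb its trainers' biases, consider a minimal sketch in Python. This is not St. George's actual program, and all of the data is invented: the model simply memorizes how often past human screeners advanced each kind of application, so a trait that correlates with origin, such as flagged spelling slips, ends up penalizing overseas applicants directly.

```python
# A minimal sketch (not St. George's actual system): a screening model
# "trained" to reproduce historical human decisions. All data is invented.
from collections import defaultdict

# Invented historical decisions: 1 = advanced to interview, 0 = discarded.
# Screeners tended to discard applications with spelling slips, which
# correlated with overseas origin rather than with ability.
history = [
    {"overseas": 0, "spelling_slips": 0, "decision": 1},
    {"overseas": 0, "spelling_slips": 0, "decision": 1},
    {"overseas": 0, "spelling_slips": 1, "decision": 0},
    {"overseas": 1, "spelling_slips": 1, "decision": 0},
    {"overseas": 1, "spelling_slips": 0, "decision": 0},
    {"overseas": 1, "spelling_slips": 1, "decision": 0},
]

def feature_rates(records, feature):
    """Past acceptance rate for each value of one feature."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[feature]] += 1
        accepted[r[feature]] += r["decision"]
    return {v: accepted[v] / totals[v] for v in totals}

# "Training" here means memorizing how often humans accepted each group.
model = {f: feature_rates(history, f) for f in ("overseas", "spelling_slips")}

def score(applicant):
    # Average the historical acceptance rates for the applicant's traits.
    return sum(model[f][applicant[f]] for f in model) / len(model)

# Two equally qualified applicants; only their origin differs.
print(score({"overseas": 0, "spelling_slips": 0}))  # ~0.67, high score
print(score({"overseas": 1, "spelling_slips": 0}))  # ~0.33, penalized for origin
```

Nothing in the code names nationality as a criterion; the discrimination arrives entirely through the historical labels the model was asked to imitate.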
The complexity of recruiting and retaining good employees with any degree of consistency has so far proved too much for both old-fashioned and mathematical models. O’Neil compares companies’ imprecise use of data to the pseudoscience of phrenology, which claimed to make authoritative pronouncements about personality based on bumps and indentations in the subject’s skull. The Big Data models that claim to assess personality are often, like phrenology, “little more than a bundle of untested assumptions.”
Chapter 7: Sweating Bullets: On the Job
The data economy helps companies to tailor employees’ schedules to better serve the needs of the business. However, the needs of employees are generally disregarded—including such basic requirements as childcare, transportation, and sleep—as schedules change at the last minute to eliminate inefficiency and save the business as much money as possible. The technology used in scheduling is rooted in a branch of applied mathematics known as “operations research,” or OR. This was used during World War II to track the “exchange ratio” of “Allied resources spent versus enemy resources destroyed.” OR was developed and refined after the war by the Pentagon, and then by large companies. Technology which was first used to manage the supply of parts for car assembly lines now manages the supply of low-paid workers in the service sector.
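A small, hypothetical Python sketch can make the logic of such schedulers concrete. The forecast numbers and worker pool below are invented; the point is that the objective contains only the business's costs, so hardships like back-to-back closing and opening shifts fall out of the optimization unchecked.

```python
# A minimal sketch of demand-driven scheduling in the spirit of OR:
# staff each hour to a (hypothetical) sales forecast and nothing more.
# Worker needs (advance notice, rest, childcare) never enter the model.

forecast = {  # invented foot-traffic forecast: time block -> staff needed
    "Mon 06-12": 2, "Mon 12-18": 5, "Mon 18-24": 4,
    "Tue 06-12": 3, "Tue 12-18": 6, "Tue 18-24": 2,
}
workers = ["Ana", "Ben", "Cal", "Dee", "Eli", "Fay"]

def schedule(forecast, workers):
    """Greedy cost-minimizer: assign exactly as many workers as the
    forecast demands, cycling through the pool. The objective is zero
    idle labor; fairness and predictability are not in the objective."""
    rota, i = {}, 0
    for block, need in forecast.items():
        crew = []
        for _ in range(need):
            crew.append(workers[i % len(workers)])
            i += 1
        rota[block] = crew
    return rota

for block, crew in schedule(forecast, workers).items():
    print(block, crew)
# Ben closes Monday night and opens Tuesday morning: a "clopening" that
# this objective never penalizes. Because the forecast changes daily,
# so do the shifts, with no guarantee of advance notice.
```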
These scheduling programs, O’Neil argues, are some of the worst WMDs. They are entirely opaque to workers, who often don’t know their schedules in advance, and they create a vicious feedback loop, since the uncertainty and stress prevent employees from finding better jobs or improving their prospects through education. The children of workers are also severely affected by the lack of routine and stability. As is always the case with WMDs, the model is optimized for efficiency, not fairness.
Although in the past these models were used only to manage service and industrial workers, some companies are now trying to use Big Data to assess the performance of white-collar workers, using such metrics as the number of ideas an employee generates. This is very difficult, not least because it is hard to say precisely what constitutes an idea, but O’Neil warns that professionals will not necessarily be able to avoid assessment by WMDs in the future, especially as companies such as Google, Facebook, Amazon, and IBM join the search for such mathematical models.
The final part of the chapter uses the methods by which school boards evaluate teachers to highlight the ways in which serious errors can enter attempts to measure employee performance. One teacher in New York was given a score of 6 out of 100 one year and 96 the following year, by a method so opaque that he was left to guess at the reason for the massive discrepancy. Such wildly fluctuating results were common: one in four teachers registered a swing of more than 40 points in successive years. These results suggest that it was not the teachers’ performance that varied but the scores generated by a faulty WMD.
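The statistical point, that scores built on small samples swing wildly even when nothing real changes, can be illustrated with a short simulation. This is invented arithmetic, not the district's actual value-added formula: every simulated teacher has identical skill, yet ranking them on the noisy test gains of roughly twenty-five students produces year-to-year swings comparable to those O'Neil describes.

```python
# A hedged simulation (invented numbers, not the real value-added model):
# give every teacher identical true skill, score them on the noisy test
# gains of ~25 students, and rank the scores 0-100. If the scores were
# measuring the teachers, year-to-year swings should be small.
import random, statistics

random.seed(1)
TEACHERS, STUDENTS = 1000, 25

def percentile_scores():
    """One 'year': each teacher's score is the mean test-score gain of a
    small class, then ranked against peers as a 0-100 percentile."""
    means = [statistics.mean(random.gauss(0, 1) for _ in range(STUDENTS))
             for _ in range(TEACHERS)]
    order = sorted(range(TEACHERS), key=lambda t: means[t])
    ranks = [0] * TEACHERS
    for pct, t in enumerate(order):
        ranks[t] = 100 * pct // (TEACHERS - 1)
    return ranks

year1, year2 = percentile_scores(), percentile_scores()
big_swings = sum(abs(a - b) >= 40 for a, b in zip(year1, year2))
print(f"{big_swings / TEACHERS:.0%} of identical teachers swung 40+ points")
# With pure noise, roughly a third of the scores swing 40 or more points,
# in the neighborhood of the one-in-four figure reported for real teachers.
```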
Chapter 8: Collateral Damage: Landing Credit
Local bankers used to make decisions about lending based on various factors, including their prejudices about the potential borrower, who was usually known to them. The FICO model, developed by Bill Fair and Earl Isaac in the 1950s, evaluates the risk that an individual will default on a loan, and this system improved matters by removing irrelevant factors and individual prejudice from the decision. FICO is not a WMD, since it is transparent and improves the fairness of loan decisions.
Many of the models that have proliferated since FICO, however, are pseudoscientific: “arbitrary, unaccountable, unregulated, and often unfair—in short, they’re WMDs.” Known as “e-scores,” these models are used extensively in marketing decisions, since companies are legally barred from using actual credit scores for such purposes. Like the local banker of decades past, e-scores rely on proxies in place of the relevant data. But where the banker might have been directly prejudiced, the e-score considers the area in which a person lives, which generally yields the same result.
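A toy example makes the proxy problem visible. The score, the ZIP codes, and the default rates below are all hypothetical; what matters is that the formula docks points for the behavior of a person's neighbors rather than the person, so two applicants with identical finances receive different treatment.

```python
# A minimal sketch of the proxy problem (all numbers invented): an
# "e-score" with no payment history falls back on where a person lives.
neighborhood_default_rate = {  # hypothetical past defaults by ZIP code
    "10710": 0.04,   # affluent area
    "10705": 0.19,   # poorer area, historically redlined
}

def e_score(income, zip_code):
    """Toy marketing score: starts from income, then docks points for the
    neighborhood's past defaults, a trait of the person's neighbors
    rather than the person."""
    base = min(income / 1000, 100)            # cap income's contribution
    penalty = 400 * neighborhood_default_rate[zip_code]
    return round(base - penalty, 1)

# Identical finances; only the address differs.
print(e_score(55_000, "10710"))   # 39.0
print(e_score(55_000, "10705"))   # -21.0, routed to worse offers
```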
Even genuine credit scores are sometimes misused as proxies. Employers increasingly take them to indicate general reliability, believing that people who pay their bills on time will be better employees. Since these scores are private, the companies have to ask for permission to see them, but this is little more than a formality, since prospective employees who refuse will not be considered. This practice creates a dangerous feedback loop, as those who have struggled to pay bills in the past are unable to find employment and can become trapped in poverty.
Mistakes on credit records and other automated databases, such as no-fly lists, can be hard to correct and can have serious consequences for the individuals affected. In 2013, the Federal Trade Commission reported that 5% of consumers’ credit reports contained an error “serious enough to result in higher borrowing costs.” Credit reports, however, are regulated and transparent. The unregulated use of data is even more prone to expensive errors, and individuals often struggle to detect and fix these mistakes, since they are not the customers but the product. WMDs inevitably make errors that only human intervention can rectify.
Credit card companies and lenders have enormously increased their use of data, but this has often merely led them to consider more and more proxies. Amex noticed that people who shopped at certain stores were more likely to default on loans and penalized those cardholders accordingly, abandoning the practice only after negative press coverage. Another company, ZestFinance, proclaims on its website that “all data is credit data.” Compared with the encoded prejudices of WMDs evaluating every aspect of an applicant’s life, the old-fashioned local banker doesn’t seem so bad.