Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence
Page 5 The concept of intelligence has done inordinate harm over centuries and has been used to justify relations of domination from slavery to eugenics.
Page 7 This belief that the mind is like a computer, and vice versa, has "infected decades of thinking in the computer and cognitive sciences," creating a kind of original sin for the field.
Page 8 In contrast, in this book I argue that AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications. AI systems are ultimately designed to serve existing dominant interests. In this sense, artificial intelligence is a registry of power. AI systems both reflect and produce social relations and understandings of the world.
Page 9 "Machine learning" is more commonly used in the technical literature. Yet the nomenclature of AI is often embraced during funding application season, For my purposes, I use AI to talk about the massive industrial formation that includes politics, labor, culture, and capital. what is being optimized, and for whom, and who gets to decide. Then we can trace the implications of those choices.
Page 11 This colonizing impulse centralizes power in the AI field: it determines how the world is measured and defined while simultaneously denying that this is an inherently political activity.
Page 13 the politics of technology.
Page 15 Mining is where we see the extractive politics of AI at their most literal. Building models for natural language processing and computer vision is enormously energy hungry, and the competition to produce faster and more efficient models has driven computationally greedy methods that expand AI's carbon footprint.
Page 16 Systems are increasing surveillance and control for their bosses. When these collections of data are no longer seen as people's personal material but merely as infrastructure, the specific meaning or context of an image or a video is assumed to be irrelevant.
Page 17 By looking at how classifications are made, we see how technical schemas enforce hierarchies and magnify inequity. A case in point is affect recognition: the idea that facial expressions hold the key to revealing a person's inner emotional state. There is considerable scientific controversy around emotion detection, which is at best incomplete and at worst misleading. Despite the unstable premise, these tools are being rapidly implemented into hiring, education, and policing systems. The deep interconnections between the tech sector and the military are now being reined in to fit a strong nationalist agenda.
Page 18 The concluding chapter assesses how artificial intelligence functions as a structure of power that combines infrastructure, capital, and labor. AI systems are built with the logics of capital, policing, and militarization— and this combination further widens the existing asymmetries of power. Artificial intelligence, then, is an idea, an infrastructure, an industry, a form of exercising power, and a way of seeing; it's also a manifestation of highly organized capital backed by vast systems of extraction and logistics, with supply chains that wrap around the entire planet.
Page 20 This book argues that addressing the foundational problems of AI and planetary computation requires connecting issues of power and justice: from epistemology to labor rights, resource extraction to data protections, racial inequity to climate change.

ONE. Earth
Page 26 The history of mining, like the devastation it leaves in its wake, is commonly overlooked in the strategic amnesia that accompanies stories of technological progress.
Page 28 The greatest benefits of extraction have been captured by the few. The effects of large-scale computation can be found in the atmosphere, the oceans, the earth's crust, the deep time of the planet, and the brutal impacts on disadvantaged populations around the world.
Page 29 Tesla could more accurately be described as a battery business than a car company. 14 The imminent shortage of such critical minerals as nickel, copper, and lithium poses a risk for the company, making the lithium lake at Silver Peak highly desirable.
Page 30 The term "artificial intelligence" may invoke ideas of algorithms, data, and cloud architectures, but none of that can function without the minerals and resources that build computing's core components. Rechargeable lithium-ion batteries are essential for mobile devices and laptops, in-home digital assistants, and data center backup power.
Page 31 The cloud is the backbone of the artificial intelligence industry, and it's made of rocks and lithium brine and crude oil. From the perspective of deep time, we are extracting Earth's geological history to serve a split second of contemporary technological time,
Page 32 The Bay Area is a central node in the mythos of AI, but we'll need to traverse far beyond the United States to see the many-layered legacies of human and environmental damage that have powered the tech industry.
Page 33 There are seventeen rare earth elements. But extracting these minerals from the ground often comes with local and geopolitical violence. Mining is and always has been a brutal undertaking.
Page 34 Mining profits have financed military operations in the decades-long Congo-area conflict, fueling the deaths of thousands and the displacement of millions.
Page 35 While mining to finance war is one of the most extreme cases of harmful extraction, most minerals are not sourced from direct war zones. This doesn't mean, however, that they are free from human suffering and environmental destruction.
Page 38 It is a common practice of life to focus on the world immediately before us, the one we see and smell and touch every day. It grounds us where we are, with our communities and our known corners and concerns. But to see the full supply chains of AI requires looking for patterns in a global sweep, a sensitivity to the ways in which the histories and specific harms are different from place to place and yet are deeply interconnected by the multiple forces of extraction.
Page 41 Algorithmic computing, computational statistics, and artificial intelligence were developed in the twentieth century to address social and environmental challenges but would later be used to intensify industrial extraction and exploitation and further deplete environmental resources. Advanced computation is rarely considered in terms of carbon footprints, fossil fuels, and pollution; metaphors like "the cloud" imply something floating and delicate within a natural, green industry. As Tung-Hui Hu writes in A Prehistory of the Cloud, "The cloud is a resource-intensive, extractive technology that converts water and electricity into computational power, leaving a sizable amount of environmental damage that it then displaces from sight." 52
Page 42 running only a single NLP model produced more than 660,000 pounds of carbon dioxide emissions, the equivalent of five gas-powered cars over their total lifetime (including their manufacturing) or 125 round-trip flights from New York to Beijing. 56
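A back-of-envelope check of those equivalences, dividing the headline figure by the stated comparisons (a minimal sketch; the implied per-car and per-flight numbers below are derived from the quoted sentence itself, not independently sourced):

```python
# Back-of-envelope check of the stated equivalences. The per-car and
# per-flight figures are implied by the quoted comparison, not sourced
# independently; treat them as illustrative.
total_lbs_co2 = 660_000                  # emissions attributed to one large NLP model

per_car_lifetime = total_lbs_co2 / 5     # implied lifetime of one gas-powered car
per_round_trip = total_lbs_co2 / 125     # implied NY-Beijing round-trip flight

print(f"implied per-car lifetime: {per_car_lifetime:,.0f} lbs CO2")  # 132,000
print(f"implied per round trip:   {per_round_trip:,.0f} lbs CO2")    # 5,280
```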
Page 43 Data centers are among the world's largest consumers of electricity.
Page 45 Just as the dirty work of the mining sector was far removed from the companies and city dwellers who profited most, so the majority of data centers are far removed from major population hubs, whether in the desert or in semi-industrial exurbs.
Page 48 We have seen how AI is much more than databases and algorithms, machine learning models and linear algebra. It is metamorphic: relying on manufacturing, transportation, and physical work; data centers and the undersea cables that trace lines between the continents; personal devices and their raw components; transmission signals passing through the air; datasets produced by scraping the internet; and continual computational cycles. These all come at a cost.

TWO. Labor
Page 54 Robotics has become a key part of Amazon's logistical armory, and while the machinery seems well tended, the corresponding human bodies seem like an afterthought. Humans are the necessary connective tissue to get ordered items into containers and trucks and delivered to consumers. But they aren't the most valuable or trusted component of Amazon's machine.
Page 56 Many large corporations are heavily investing in automated systems in the attempt to extract ever-larger volumes of labor from fewer workers. Logics of efficiency, surveillance, and automation are all converging in the current turn to computational approaches to managing labor. Rather than debating whether humans will be replaced by robots, in this chapter I focus on how the experience of work is shifting in relation to increased surveillance, algorithmic assessment, and the modulation of time.
Page 56 This chapter also considers how humans are increasingly treated like robots and what this means for the role of labor.
Page 57 Large-scale computation is deeply rooted in and running on the exploitation of human bodies.
Page 58 The common refrain for the expansion of AI systems and process automation is that we are living in a time of beneficial human-AI collaboration. In practice it is a one-sided engagement, where workers are expected to re-skill, keep up, and unquestioningly accept each new technical development.
Page 60 During the eighteenth and nineteenth centuries, the propaganda about hard work came in the forms of pamphlets and essays on the importance of discipline and sermons on the virtues of early rising and working diligently for as long as possible. The use of time came to be seen in both moral and economic terms: understood as a currency, time could be well spent or squandered away.
Page 63 Exploitative forms of work exist at all stages of the AI pipeline, from the mining sector, where resources are extracted and transported to create the core infrastructure of AI systems, to the software side, where distributed workforces are paid pennies per microtask.
Page 64 The technical AI research community relies on cheap, crowd-sourced labor for many tasks that can't be done by machines. Between 2008 and 2016, the term "crowdsourcing" went from appearing in fewer than a thousand scientific articles to more than twenty thousand--which makes sense, given that Mechanical Turk launched in 2005. But during the same time frame, there was far too little debate about what ethical questions might be posed by relying on a workforce that is commonly paid far below the minimum wage. 21
Page 65 Sometimes workers are directly asked to pretend to be an AI system.
Page 66 The writer Astra Taylor has described the kind of overselling of high-tech systems that aren't actually automated as "fauxtomation." 26 Automated systems appear to do work previously performed by humans, but in fact the system merely coordinates human work in the background. The true labor costs of AI are being consistently downplayed and glossed over, but the forces driving this performance run deeper than mere marketing trickery. It is part of a tradition of exploitation and deskilling.
Page 67 Fauxtomation does not directly replace human labor; rather, it relocates and disperses it in space and time. In so doing it increases the disconnection between labor and value and thereby performs an ideological function. Some 250 years after Wolfgang von Kempelen's chess-playing automaton concealed a human operator inside its cabinet, the hoax lives on. Amazon chose to name its micropayment-based crowdsourcing platform "Amazon Mechanical Turk," despite the association with racism and trickery.
Page 68 On Amazon's platform, real workers remain out of sight in service of an illusion that AI systems are autonomous and magically intelligent. Now Mechanical Turk connects businesses with an unseen and anonymous mass of workers who bid against one another for the opportunity to work on a series of microtasks. In a paradox that many of us have experienced, in order to prove we are human when reading a website, we are required to convince Google's reCAPTCHA of our humanity. So we dutifully select multiple boxes containing street numbers, or cars, or houses. We are training Google's image recognition algorithms for free.
Page 69 Again, the myth of AI as affordable and efficient depends on layers of exploitation, including the extraction of mass unpaid labor to fine-tune the AI systems of the richest companies on earth. Contemporary forms of artificial intelligence are neither artificial nor intelligent.
Page 71 As Astra Taylor argues, "The kind of efficiency to which techno-evangelists aspire emphasizes standardization, simplification, and speed, not diversity, complexity, and interdependence." 38
Page 75 A 2014 class action lawsuit against McDonald's restaurants in California noted that franchisees are led by software that gives algorithmic predictions regarding employee-to-sales ratios and instructs managers to reduce staff quickly when demand drops. 47 Employees reported being told to delay clocking in to their shifts and instead to hang around nearby, ready to return to work if the restaurant started getting busy again. Because employees are paid only for time clocked in, the suit alleged that this amounted to significant wage theft on the part of the company and its franchisees. 48
Page 76 There was an almost total removal of all conceptual work from the execution of tasks. Workers clock in to their shifts by swiping access badges or by presenting their fingerprints to readers attached to electronic time clocks. They work in front of timing devices that indicate the minutes or seconds left to perform the current task before a manager is notified. They sit at workstations fitted with sensors that continuously report on their body temperature, their physical distance from colleagues, the amount of time they spend browsing websites instead of performing assigned tasks, and so on.
Page 77 Surveillance apparatuses are justified as producing inputs for algorithmic scheduling systems that further modulate work time, as gleaning behavioral signals that may correlate with high or low performance, or simply as generating data that can be sold to brokers as a form of insight. Young, mostly male engineers, often unencumbered by time-consuming familial or community responsibilities, are building the tools that will police very different workplaces, quantifying the productivity and desirability of employees. The workaholism and round-the-clock hours often glorified by tech start-ups become an implicit benchmark against which other workers are measured, producing a vision of a standard worker that is masculinized, narrow, and reliant on the unpaid or underpaid care work of others.
Page 81 Although there will always be ways to resist the imposed temporality of work, algorithmic and video monitoring make this much harder, as the relation between work and time is observed at ever closer range.
Page 82 Defining time is an established strategy for centralizing power.
Page 85 AI and algorithmic monitoring are simply the latest technologies in the long historical development of factories, timepieces, and surveillance architectures.
Page 88 All kinds of workers are subject to the extractive technical infrastructures that seek to control and analyze time to its finest grain--many of whom have no identification with the technology sector or tech work at all.

THREE. Data
Page 93 I've looked at hundreds of datasets over years of research into how AI systems are built, but the NIST mug shot databases are particularly disturbing because they represent the model of what was to come. It's not just the overwhelming pathos of the images themselves. Nor is it solely the invasion of privacy they represent, since suspects and prisoners have no right to refuse being photographed. It's that the NIST databases foreshadow the emergence of a logic that has now thoroughly pervaded the tech sector: the unswerving belief that everything is data and is there for the taking. It doesn't matter where a photograph was taken or whether it reflects a moment of vulnerability or pain or if it represents a form of shaming the subject. It has become so normalized across the industry to take and use whatever is available that few stop to question the underlying politics. I argue this represents a shift from image to infrastructure, where the meaning or care that might be given to the image of an individual person, or the context behind a scene, is presumed to be erased at the moment it becomes part of an aggregate mass that will drive a broader system. It is all treated as data to be run through functions, material to be ingested to improve technical performance. This is a core premise in the ideology of data extraction.
Page 94 A computer vision system can detect a face or a building but not why a person was inside a police station or any of the social and historical context surrounding that moment. The mug shot collections are used like any other practical resource of free, well-lit images of faces, a benchmark to make tools like facial recognition function.
Page 95 The AI industry has fostered a kind of ruthless pragmatism, with minimal context, caution, or consent-driven data practices while promoting the idea that the mass harvesting of data is necessary and justified for creating systems of profitable computational "intelligence." This has resulted in a profound metamorphosis, where all forms of image, text, sound, and video are just raw data for AI systems and the ends are thought to justify the means. But we should ask: Who has benefited most from this transformation, and why have these dominant narratives of data persisted?
Page 96 It's useful to consider why machine learning systems currently demand massive amounts of data. One example of the problem in action is computer vision, the subfield of artificial intelligence concerned with teaching machines to detect and interpret images.
Page 96 These vast collections are called training datasets, and they constitute what AI developers often refer to as "ground truth." 13 The more examples of correctly labeled data there are, the better the algorithm will be at producing accurate predictions.
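A minimal sketch of this training pattern, using scikit-learn on invented toy data (all features, labels, and numbers here are illustrative, not drawn from any dataset the book discusses):

```python
# Minimal sketch of supervised learning on labeled "ground truth."
# Toy data; illustrative only.
from sklearn.linear_model import LogisticRegression

# Each row is a feature vector; each label is the "correct" answer
# the model is trained to reproduce.
X_train = [[5.1, 3.5], [4.9, 3.0], [6.2, 2.9], [6.7, 3.1]]
y_train = ["cat", "cat", "dog", "dog"]

model = LogisticRegression().fit(X_train, y_train)

# Predictions on new observations inherit whatever worldview the labels
# encode: more correctly labeled examples yield better predictions,
# but only relative to that labeling.
print(model.predict([[5.0, 3.4]]))
```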
Page 97 Training data also defines more than just the features of machine learning algorithms. It is used to assess how they perform over time. Like prized thoroughbreds, machine learning algorithms are constantly raced against one another in competitions all over the world to see which ones perform the best with a given dataset.
Page 98 Once training sets have been established as useful benchmarks, they are commonly adapted, built upon, and expanded. Training data, then, is the foundation on which contemporary machine learning systems are built. 16 These datasets shape the epistemic boundaries governing how AI operates and, in that sense, create the limits of how AI can "see" the world.
Page 99 In the 1970s, artificial intelligence researchers were mainly exploring what's called an expert systems approach: rules-based programming that aims to reduce the field of possible actions by articulating forms of logical reasoning. But it quickly became evident that this approach was fragile and impractical in real-world settings, where a rule set was rarely able to handle uncertainty and complexity. 19 By the mid-1980s, research labs were turning toward probabilistic or brute force approaches. In short, they were using lots of computing cycles to calculate as many options as possible to find the optimal result.
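To make the contrast concrete, here is a deliberately simple rules-based sketch in the expert-systems style (the rules are hypothetical, written only to illustrate the brittleness the passage describes):

```python
# A toy "expert system": hand-written rules that reduce the field of
# possible actions. Hypothetical rules; purely illustrative.
def diagnose(symptoms: set) -> str:
    if {"fever", "cough"} <= symptoms:
        return "flu"
    if "rash" in symptoms:
        return "allergy"
    return "unknown"  # anything unanticipated falls through the rules

print(diagnose({"fever", "cough"}))  # -> flu
print(diagnose({"fatigue"}))         # -> unknown: real-world uncertainty
                                     #    breaks the rule set
```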
Page 100 They started using statistical methods that focused more on how often words appeared in relation to one another, rather than trying to teach computers a rules-based approach using grammatical principles or linguistic features. the reduction from context to data, from meaning to statistical pattern recognition.
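A sketch of that statistical turn: counting word co-occurrences in a toy corpus instead of encoding grammatical rules (the corpus is invented; real systems use vastly larger text collections):

```python
# Counting how often words appear next to one another -- the reduction
# from meaning to statistical pattern. Toy corpus; illustrative only.
from collections import Counter
from itertools import pairwise  # Python 3.10+

corpus = "the cat sat on the mat the cat slept on the mat".split()
bigram_counts = Counter(pairwise(corpus))

# The model "knows" that "the" is often followed by "cat" or "mat,"
# but not why, and not what a cat is.
for (w1, w2), n in bigram_counts.most_common(3):
    print(w1, w2, n)
```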
Page 103 Text archives were seen as neutral collections of language, as though there was a general equivalence between the words in a technical manual and how people write to colleagues via email. Like image collections, text corpora are built on the assumption that all training data is interchangeable. But language isn't an inert substance that works the same way regardless of where it is found. The origins of the underlying data in a system can be incredibly significant, and yet there are still, thirty years later, no standardized practices to note where all this data came from or how it was acquired--let alone what biases or classificatory politics these datasets contain that will influence all the systems that come to rely on them. 31
Page 106 The internet, in so many ways, changed everything; it came to be seen in the AI research field as something akin to a natural resource, there for the taking. As more people began to upload their images to websites, to photo-sharing services, and ultimately to social media platforms, the pillaging began in earnest. The tech industry titans were now in a powerful position: they had a pipeline of endlessly refreshing images and text, and the more people shared their content, the more the tech industry's power grew.
Page 108 ImageNet would become, for a time, the world's largest academic user of Amazon's Mechanical Turk, deploying an army of piecemeal workers to sort an average of fifty images a minute into thousands of categories. 40
Page 109 The approach of mass data extraction without consent and labeling by underpaid crowdworkers would become standard practice, and hundreds of new training datasets would follow ImageNet's lead.
Page 110 Over and over, data extracted without permission or consent would be uploaded for machine learning researchers, who would then use it as an infrastructure for automated imaging systems. Even when datasets are scrubbed of personal information and released with great caution, people have been reidentified or highly sensitive details about them have been revealed.
Page 111 For instance, the same New York City taxi dataset was used to suggest which taxi drivers were devout Muslims by observing when they stopped at prayer times. 50
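A sketch of the kind of inference at work here: joining "anonymized" trip logs against outside context, in this case fixed prayer windows (all records, times, and identifiers below are invented; this is not the actual analysis):

```python
# Sensitive inference from "neutral" data: flag drivers whose breaks
# repeatedly align with prayer windows. Entirely invented example.
from collections import Counter
from datetime import time

PRAYER_WINDOWS = [(time(12, 0), time(12, 30)), (time(15, 30), time(16, 0))]

trip_gaps = [
    {"driver": "medallion_7F3A", "gap_start": time(12, 5)},
    {"driver": "medallion_7F3A", "gap_start": time(15, 40)},
    {"driver": "medallion_2B19", "gap_start": time(9, 15)},
]

def in_prayer_window(t: time) -> bool:
    return any(lo <= t <= hi for lo, hi in PRAYER_WINDOWS)

# Repeated alignment becomes a "signal" about a driver's religion --
# a category never present in the dataset itself.
flags = Counter(g["driver"] for g in trip_gaps if in_prayer_window(g["gap_start"]))
print(flags)  # Counter({'medallion_7F3A': 2})
```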
Page 112 Contemporary organizations are both culturally impelled by the data imperative and powerfully equipped with new tools to enact it. 53 Behind the questionable belief that "more is better" is the idea that individuals can be completely knowable, once enough disparate pieces of data are collected. 54
Page 113 Terms like "data mining" and phrases like "data is the new oil" were part of a rhetorical move that shifted the notion of data away from something personal, intimate, or subject to individual ownership and control toward something more inert and nonhuman.
Page 113 Ultimately, "data" has become a bloodless word; it disguises both its material origins and its ends. And if data is seen as abstract and immaterial, then it more easily falls outside of traditional understandings and responsibilities of care, consent, or risk. Hence the recurring metaphors of data as a "natural resource," simply there for the taking.
Page 114 High achievers in the mainstream economy tend to do well in a data-scoring economy, too, while those who are poorest become targets of the most harmful forms of data surveillance and extraction. Data now operates as a form of capital. There has been a shift away from ideas like "human subjects"--a concept that emerged from the ethics debates of the twentieth century--to the creation of "data subjects": agglomerations of data points without subjectivity or context or clearly defined rights.
Page 115 Once AI moved out of the laboratory contexts of the 1980s and 1990s and into real-world situations--such as attempting to predict which criminals will reoffend or who should receive welfare benefits--the potential harms expanded. Further, those harms affect entire communities as well as individuals. But there is still a strong presumption that publicly available datasets pose minimal risks and therefore should be exempt from ethics review. 64
Page 116 The risk profile of AI is rapidly changing as its tools become more invasive and as researchers are increasingly able to access data without interacting with their subjects. For example, a group of machine learning researchers published a paper in which they claimed to have developed an "automatic system for classifying crimes." 65 In particular, their focus was on whether a violent crime was gang-related, which they claimed their neural network could predict with only four pieces of information: the weapon, the number of suspects, the neighborhood, and the location. They did this using a crime dataset from the Los Angeles Police Department, which included thousands of crimes that had been labeled by police as gang-related. Gang data is notoriously skewed and riddled with errors, yet researchers use this database and others like it as a definitive source for training predictive AI systems.
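A sketch of a four-feature classifier of the kind the paper describes (the architecture, encodings, and data below are assumptions for illustration; the actual model and the LAPD dataset are not reproduced here):

```python
# Four inputs -- weapon, number of suspects, neighborhood, location --
# each crudely encoded as a number; output: a "gang-related" label.
# All values invented; illustrative only.
from sklearn.neural_network import MLPClassifier

X = [[1, 2, 14, 7], [3, 1, 5, 2], [1, 4, 14, 9], [2, 1, 5, 3]]
y = [1, 0, 1, 0]  # 1 = labeled "gang-related" by police, not ground truth

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
print(clf.predict([[1, 3, 14, 8]]))

# Whatever skew exists in the police labels is exactly what the
# network learns to reproduce at scale.
```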
Page 117 This separation of ethical questions away from the technical reflects a wider problem in the field, where the responsibility for harm is either not recognized or seen as beyond the scope of the research.
Page 118 Technical approaches can move rapidly from conference papers to being deployed in production systems, where harmful assumptions can become ingrained and hard to reverse.
Page 119 There are gigantic datasets full of people's selfies, tattoos, parents walking with their children, hand gestures, people driving their cars, people committing crimes on CCTV, and hundreds of everyday human actions like sitting down, waving, raising a glass, or crying. Every form of biodata--including forensic, biometric, sociometric, and psychometric--is being captured and logged into databases for AI systems to find patterns and make assessments.
Page 120 The collection of people's data to build AI systems raises clear privacy concerns. The practices of data extraction and training dataset construction are premised on a commercialized capture of what was previously part of the commons. This particular form of erosion is a privatization by stealth, an extraction of knowledge value from public goods. A dataset may still be publicly available, but the metavalue of the data--the model created by it--is privately held.
Page 122 The way data is understood, captured, classified, and named is fundamentally an act of world-making and containment. It has enormous ramifications for the way artificial intelligence works in the world and which communities are most affected.

FOUR. Classification
Page 127 Classification is a core practice in artificial intelligence, and a deeply political one. How does classification function in machine learning? What is at stake when we classify? In what ways do classifications interact with the classified? And what unspoken social and political theories underlie and are supported by these classifications of the world?
Page 128 classifications can disappear, as Bowker and Star observe, "into infrastructure, into habit, into the taken for granted."
Page 129 One of the more vivid examples of bias in action comes from an insider account at Amazon. In 2014, the company decided to experiment with automating the process of recommending and hiring workers.
Page 129 "They literally wanted it to be an engine where I'm going to give you 100 resumes, it will spit out the top five, and we'll hire those." 21 Quickly, the system began to assign less importance to commonly used engineering terms, like programming languages, because everyone listed them in their job histories. Instead, the models began valuing more subtle cues that recurred on successful applications. A strong preference emerged for particular verbs. The examples the engineers mentioned were "executed" and "captured." 22
Page 130 Inadvertently, Amazon had created a diagnostic tool. The vast majority of engineers hired by Amazon over ten years had been men, so the models they created, which were trained on the successful résumés of men, had learned to recommend men for future hiring. The employment practices of the past and present were shaping the hiring tools for the future.
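A sketch of the mechanism: train a text model on historical hiring outcomes and inspect which tokens it learns to reward (toy résumés and labels, invented for illustration; this is not Amazon's system):

```python
# How a resume model learns gendered proxies from historical hires.
# Toy data; invented tokens; illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "executed data migration captured requirements",    # hired
    "executed rollout captured key metrics",            # hired
    "led womens chess club coordinated outreach",       # rejected
    "organized womens coding group supported members",  # rejected
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Tokens like "womens" acquire negative weight purely because of who
# was hired in the past; verbs like "executed" acquire positive weight.
for token, w in sorted(zip(vec.get_feature_names_out(), model.coef_[0]),
                       key=lambda t: t[1]):
    print(f"{token:12s} {w:+.3f}")
```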
Page 130 The AI industry has traditionally understood the problem of bias as though it is a bug to be fixed rather than a feature of classification itself.
Page 132 Designers get to decide what the variables are and how people are allocated to categories. Again, the practice of classification is centralizing power: the power to decide which differences make a difference.
Page 133 Skin color detection is done because it can be, not because it says anything about race or produces a deeper cultural understanding.
Page 133 Technical claims about accuracy and performance are commonly shot through with political choices about categories and norms but are rarely acknowledged as such. 33 These approaches are grounded in an ideological premise of biology as destiny, where our faces become our fate.
Page 134 By the 1900s, "bias" had developed a more technical meaning in statistics, where it refers to systematic differences between a sample and a population, when the sample is not truly reflective of the whole. Machine learning systems are designed to be able to generalize from a large training set of examples and to correctly classify new observations not included in the training datasets. 35 That is, machine learning systems can perform a type of induction, learning from specific examples (such as past résumés of job applicants) in order to decide which data points to look for in new examples. In such cases, the term "bias" refers to a type of error that can occur during this predictive process of generalization--namely, a systematic or consistently reproduced classification error that the system exhibits when presented with new examples. This type of bias is often contrasted with another type of generalization error, variance, which refers to an algorithm's sensitivity to differences in training data.
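In symbols, this is the textbook bias-variance decomposition of expected prediction error under squared-error loss (a standard identity, stated here for reference; f is the true function, f-hat the learned model, and sigma-squared the irreducible noise):

```latex
% Expected squared error at a point x decomposes into three terms:
\mathbb{E}\!\left[(y - \hat{f}(x))^{2}\right]
  = \underbrace{\left(\mathbb{E}[\hat{f}(x)] - f(x)\right)^{2}}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\!\left[\left(\hat{f}(x) - \mathbb{E}[\hat{f}(x)]\right)^{2}\right]}_{\text{variance}}
  + \underbrace{\sigma^{2}}_{\text{noise}}
```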
Page 135 Amos Tversky and Daniel Kahneman studied "cognitive biases," or the ways in which human judgments deviate systematically from probabilistic expectations. Technical designs can certainly be improved to better account for how their systems produce skews and discriminatory results. But the harder questions of why AI systems perpetuate forms of inequity are commonly skipped over in the rush to arrive at narrow technical solutions of statistical bias, as though that were a sufficient remedy for deeper structural problems. There has been a general failure to address the ways in which the instruments of knowledge in AI reflect and serve the incentives of a wider extractive economy. Every dataset used to train machine learning systems, whether in the context of supervised or unsupervised machine learning, whether seen to be technically biased or not, contains a worldview. To create a training set is to take an almost infinitely complex and varied world and fix it into taxonomies composed of discrete classifications of individual data points, a process that requires inherently political, cultural, and social choices. By paying attention to these classifications, we can glimpse the various forms of power that are built into the architectures of AI world-building.
Page 139 Bowker and Star also underscore that once classifications of people are constructed, they can stabilize a contested political category in ways that are difficult to see. 50 They become taken for granted unless they are actively resisted.
Page 139 To borrow an idea from linguist George Lakoff, the concept of an "apple" is a more nouny noun than the concept of "light," which in turn is more nouny than a concept such as "health." 51 Nouns occupy various places on an axis from the concrete to the abstract, from the descriptive to the judgmental.
Page 142 In fact, there are no neutral categories in ImageNet, because the selection of images always interacts with the meaning of words. The politics are baked into the classificatory logic, even when the words aren't offensive. ImageNet is a lesson, in this sense, of what happens when people are categorized like objects.
Page 143 Perhaps it is no surprise that when we investigate the bedrock layer of these labeled images, we find that they are beset with stereotypes, errors, and absurdities. The focus on making training sets "fairer" by deleting offensive terms fails to contend with the power dynamics of classification and precludes a more thorough assessment of the underlying logics.
Page 144 By focusing on classification in AI, we can trace the ways that gender, race, and sexuality are falsely assumed to be natural, fixed, and detectable biological categories.
Page 146 the history of disability itself is a "story of the ways in which various systems of classification (i.e., medical, scientific, legal) interface with social institutions and their articulations of power and knowledge." 67
Page 147 Classifications are technologies that produce and limit ways of knowing, and they are built into the logics of AI. The problem for computer science is that justice in AI systems will never be something that can be coded or computed. It requires a shift to assessing systems beyond optimization metrics and statistical parity and an understanding of where the frameworks of mathematics and engineering are causing the problems. This also means understanding how AI systems interact with data, workers, the environment, and the individuals whose lives will be affected by its use and deciding where AI should not be used.
Page 148 Nonconsensual classifications present serious risks, as do normative assumptions about identity, yet these practices have become standard. That must change.
Page 150 Classificatory schemas enact and support the structures of power that formed them, and these do not shift without considerable effort. But the truly massive engines of classification are the ones being operated at a global scale by private technology companies, including Facebook, Google, TikTok, and Baidu. These companies operate with little oversight into how they categorize and target users, and they fail to offer meaningful avenues for public contestation.

FIVE. Affect
Page 151 Like many Western researchers before him, Ekman had come to Papua New Guinea to extract data from the indigenous community. His premise was that all humans exhibit a small number of universal emotions or affects that are natural, innate, cross-cultural, and the same all over the world. This is the story of how affect recognition came to be part of artificial intelligence and the problems this presents.
Page 152 Today affect recognition tools can be found in national security systems and at airports, in education and hiring start-ups, from systems that purport to detect psychiatric illness to policing programs that claim to predict violence. Why did the idea that there is a small set of universal emotions, readily interpreted from the face, become so accepted in the AI field, despite considerable evidence to the contrary?
Page 153 His work connects U.S. intelligence funding of the human sciences during the Cold War, through foundational work in the field of computer vision, to the post-9/11 security programs employed to identify terrorists, right up to the current fashion for AI-based emotion recognition. One of the many things made possible by this profusion of images is the attempt to extract the so-called hidden truth of interior emotional states using machine learning. These systems may not be doing what they purport to do, but they can nonetheless be powerful agents in influencing behavior and training people to perform in recognizable ways.
Page 154 A start-up in London called Human uses emotion recognition to analyze video interviews of job candidates.
Page 155 Emotion recognition systems grew from the interstices between AI technologies, military priorities, and the behavioral sciences--psychology in particular. They share a similar set of blueprints and founding assumptions: that there is a small number of distinct and universal emotional categories, that we involuntarily reveal these emotions on our faces, and that they can be detected by machines.
Page 155 These articles of faith are so accepted in some fields that it can seem strange even to notice them, let alone question them. They are so ingrained that they have come to constitute "the common view."
Page 156 One aspect in particular played an outsized role: the idea that if affect was an innate set of evolutionary responses, they would be universal and so recognizable across cultures.
Page 162 They presumed a link between body and soul that justified reading a person's interior character based on their exterior appearance.
Page 165 In later years Ekman would also insist that anyone could learn to recognize microexpressions, with no special training or slow-motion capture, in about an hour. 59 But if these expressions are too quick for humans to recognize, how are they to be understood? 60
Page 167 Ekman's FACS system provided two things essential for later machine learning applications: a stable, discrete, finite set of labels that humans can use to categorize photographs of faces and a system for producing measurements. It promised to remove the difficult work of representing interior lives away from the purview of artists and novelists and bring it under the umbrella of a rational, knowable, and measurable rubric suitable to laboratories, corporations, and governments.
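A sketch of the two ingredients that made FACS machine-friendly: discrete measurable units and a finite label set. The action-unit names below follow FACS conventions; the emotion mapping is a simplified illustration, not Ekman's full system:

```python
# FACS-style pipeline in miniature: measurements (action units) feed a
# finite lookup of emotion labels. Simplified illustration only.
ACTION_UNITS = {
    1: "inner brow raiser",
    4: "brow lowerer",
    6: "cheek raiser",
    12: "lip corner puller",
    15: "lip corner depressor",
}

EMOTION_RULES = {
    frozenset({6, 12}): "happiness",   # cheek raiser + lip corner puller
    frozenset({1, 4, 15}): "sadness",
}

def classify(observed_aus: frozenset) -> str:
    for combo, emotion in EMOTION_RULES.items():
        if combo <= observed_aus:
            return emotion
    return "unclassified"

# Interior life reduced to a measurable, lookup-table rubric.
print(classify(frozenset({6, 12})))  # -> happiness
```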
Page 170 Ekman's work became a profound and wide-ranging influence on everything from lie detection software to computer vision.
Page 172 Other problems became clear as Ekman's ideas were implemented in technical systems. As we've seen, many datasets underlying the field are based on actors simulating emotional states, performing for the camera. That means that AI systems are trained to recognize faked expressions of feeling.
Page 174 This is not an engineering problem that could be solved with a better algorithm. By analyzing the history of these ideas, we can begin to see how military research funding, policing priorities, and profit motives have shaped the field.
Page 175 Once the theory emerged that it is possible to assess internal states by measuring facial movements and the technology was developed to measure them, people willingly adopted the underlying premise. The theory fit what the tools could do.

SIX. State
Page 182 The intelligence community contributed to the development of many of the techniques we now refer to as artificial intelligence.
Page 184 As the historian of science Paul Edwards describes in The Closed World, military research agencies actively shaped the emerging field that would come to be known as AI from its earliest days. The military priorities of command and control, automation, and surveillance profoundly shaped what AI was to become. The tools and approaches that came out of DARPA funding have marked the field, including computer vision, automatic translation, and autonomous vehicles.
Page 185 Technologies once only available to intelligence agencies--tools that were extralegal by design--have filtered down to the state's municipal arms: government and law enforcement agencies. Less attention is given to the growing commercial surveillance sector.
Page 186 Algorithmic governance is both part of and exceeds traditional state governance. But the rhetoric around artificial intelligence is much starker: we are repeatedly told that we are in an AI war. The dominant objects of concern are the superpower efforts of the United States and China, with regular reminders that China has stated its commitment to be the global leader in AI.
Page 196 As law professor Andrew Ferguson explains, "We are moving to a state where prosecutors and police are going to say 'the algorithm told me to do it, so I did, I had no idea what I was doing.' And this will be happening at a widespread level with very little oversight." 56
Page 197 Police are turning into intelligence agents.
Page 198 The intelligence models that began in national government agencies have now become part of the policing of local neighborhoods.
Page 201 Vigilant has since expanded its "crime-solving" toolkit beyond license plate readers to include tools that claim to recognize faces. In doing so, Vigilant seeks to render human faces the equivalent of license plates and then feed them back into the policing ecology. 66 Like a network of private detectives, Vigilant creates a God's-eye view of America's interlaced roads and highways, along with everyone who travels along them, while remaining beyond any meaningful form of regulation or accountability. 67
Page 201 For Amazon, each new Ring device sold helps build yet more large-scale training datasets inside and outside the home, with classificatory logics of normal and anomalous behavior aligned with the battlefield logics of allies and enemies.
Page 203 But in 2014, the legal organization Reprieve published a report showing that drone strikes attempting to kill 41 individuals resulted in the deaths of an estimated 1,147 people.
Page 204 Once a pattern is found in the data and it reaches a certain threshold, the suspicion becomes enough to take action even in the absence of definitive proof. This mode of adjudication by pattern recognition is found in many domains--most often taking the form of a score.
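A minimal sketch of adjudication-by-score as the passage describes it: a pattern score crosses a threshold and action follows, with no further proof required (the threshold and score are invented):

```python
# Score-threshold adjudication in miniature. All values invented.
THRESHOLD = 0.8

def adjudicate(pattern_score: float) -> str:
    # The score stands in for proof: crossing the line *is* the decision.
    return "flag for action" if pattern_score >= THRESHOLD else "no action"

print(adjudicate(0.83))  # flagged on pattern correlation alone
```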
Page 205 New technical systems of state control use the bodies of refugees as test cases.
Page 205 These military and policing logics are now suffused with a form of financialization: socially constructed models of creditworthiness have entered into many AI systems, influencing everything from the ability to get a loan to permission to cross borders.
Page 208 Taken together, the AI and algorithmic systems used by the state, from the military to the municipal level, reveal a covert philosophy of en masse infrastructural command and control via a combination of extractive data techniques, targeting logics, and surveillance. These goals have been central to the intelligence agencies for decades, but now they have spread to many other state functions, from local law enforcement to allocating benefits.
Page 211 Artificial intelligence is not an objective, universal, or neutral computational technique that makes determinations without human direction.
Page 211 AI systems are expressions of power that emerge from wider economic and political forces, created to increase profits and centralize control for those who wield them. But this is not how the story of artificial intelligence is typically told.
Page 213 Narratives of magic and mystification recur throughout AI's history, drawing bright circles around spectacular displays of speed, efficiency, and computational reasoning. 5 It's no coincidence that one of the iconic examples of contemporary AI is a game. This epistemological flattening of complexity into clean signal for the purposes of prediction is now a central logic of machine learning. The historian of technology Alex Campolo and I call this enchanted determinism: AI systems are seen as enchanted, beyond the known world, yet deterministic in that they discover patterns that can be applied with predictive certainty to everyday life.
Page 214 That deep learning approaches are often uninterpretable, even to the engineers who created them, gives these systems an aura of being too complex to regulate and too powerful to refuse. We are told to focus on the innovative nature of the method rather than on what is primary: the purpose of the thing itself.
Page 215 These programs produce surprising moves uncommon in human games for a straightforward reason: they can play and analyze far more games at a far greater speed than any human can. This is not magic; it is statistical analysis at scale.
Page 215 Over and over, we see the ideology of Cartesian dualism in AI: the fantasy that AI systems are disembodied brains that absorb and produce knowledge independently from their creators, infrastructures, and the world at large. These illusions distract from the far more relevant questions: Whom do these systems serve? What are the political economies of their construction? And what are the wider planetary consequences?
Page 216 The artificial intelligence industry's expansion has been publicly subsidized: from defense funding and federal research agencies to public utilities and tax breaks to the data and unpaid labor taken from all who use search engines or post images online. AI began as a major public project of the twentieth century and was relentlessly privatized to produce enormous financial gains for the tiny minority at the top of the extraction pyramid.
Page 218 This book proposes that the real stakes of AI are the global interconnected systems of extraction and power, not the technocratic imaginaries of artificiality, abstraction, and automation. AI is born from salt lakes in Bolivia and mines in Congo, constructed from crowdworker-labeled datasets that seek to classify human actions, emotions, and identities. It is used to navigate drones over Yemen, direct immigration police in the United States, and modulate credit scores of human value and risk across the world. A wide-angle, multiscalar perspective on AI is needed to contend with these overlapping regimes. The opacity of the larger supply chain for computation in general, and AI in particular, is part of a long-established business model of extracting value from the commons and avoiding restitution for the lasting damage.
Page 219 Thousands of people are needed to support the illusion of automation: tagging, correcting, evaluating, and editing AI systems to make them appear seamless. The uses of workplace AI further skew power imbalances by placing more control in employers' hands. Apps are used to track workers, nudge them to work longer hours, and rank them in real time. Amazon provides a canonical example.
Page 221 What epistemological violence is necessary to make the world readable to a machine learning system? AI seeks to systematize the unsystematizable, formalize the social, and convert an infinitely complex and changing universe into a Linnaean order of machine-readable tables.
Page 221 Many of AI's achievements have depended on boiling things down to a terse set of formalisms based on proxies: identifying and naming some features while ignoring or obscuring countless others.
Page 222 The rhetoric about the AI war between the United States and China drives the interests of the largest tech companies to operate with greater government support and few restrictions.
Page 223 The result is a profound and rapid expansion of surveillance and a blurring between private contractors, law enforcement, and the tech sector, fueled by kickbacks and secret deals. Could there not be an AI for the people that is reoriented toward justice and equality rather than industrial extraction and discrimination? This may seem appealing, but as we have seen throughout this book, the infrastructures and forms of power that enable and are enabled by AI skew strongly toward the centralization of control. As Audre Lorde reminds us, the master's tools will never dismantle the master's house.
Page 224 The voices of the people most harmed by AI systems are largely missing from the processes that produce them. Ethics is necessary but not sufficient to address the fundamental concerns raised in this book; the deeper issue is power. AI is invariably designed to amplify and reproduce the forms of power it has been deployed to optimize. Instead of glorifying company founders, venture capitalists, and technical visionaries, we should begin with the lived experiences of those who are disempowered, discriminated against, and harmed by AI systems.
Page 225 The social contract, to the extent that there ever was one, has brought a climate crisis, soaring wealth inequality, racial discrimination, and widespread surveillance and labor exploitation. But the idea that these transformations occurred in ignorance of their possible results is part of the problem.
Page 226 We see glimpses of this refusal when populations choose to dismantle predictive policing, ban facial recognition, or protest algorithmic grading.