ENGL 8122  ※  User-Experience Research & Writing



Nexus: A Brief History of Information Networks from the Stone Age to AI, by Yuval Noah Harari

  • Page xiii The tendency to create powerful things with unintended consequences started not with the invention of the steam engine or AI but with the invention of religion. Power always stems from cooperation between large numbers of humans.
  • Page xiv The main argument of this book is that humankind gains enormous power by building large networks of cooperation, but the way these networks are built predisposes us to use that power unwisely. Our problem, then, is a network problem. Information is the glue that holds networks together. But for tens of thousands of years, Sapiens built and maintained large networks by inventing and spreading fictions, fantasies, and mass delusions: about gods, about enchanted broomsticks, about AI, and about a great many other things. While each individual human is typically interested in knowing the truth about themselves and the world, large networks bind members and create order by relying on fictions and fantasies.
  • Page xv in sufficient quantities information leads to truth, and truth in turn leads to both power and wisdom. Ignorance, in contrast, seems to lead nowhere.
  • Page xvi Of course, the naive view acknowledges that many things can go wrong on the path from information to truth. However, the naive view assumes that the antidote to most problems we encounter in gathering and processing information is gathering and processing even more information. Wisdom is commonly understood to mean "making right decisions," but what "right" means depends on value judgments that differ among diverse people, cultures, and ideologies.
  • Page xx the naive view of information sees only part of the picture,
  • Page xxi AI could destroy our civilization. Such a scenario is unlikely, and it merely distracts people from the real dangers. Rather, experts warn about two other scenarios. First, the power of AI could supercharge existing human conflicts,
  • Page xxii Second, the Silicon Curtain might come to divide not one group of humans from another but rather all humans from our new AI overlords: a web of unfathomable algorithms that manage our lives. AI is the first technology in history that can make decisions and create new ideas by itself. Knives and bombs do not themselves decide whom to kill. AI isn't a tool; it's an agent.
  • Page xxiii Can we trust computer algorithms to make wise decisions and create a better world? In 2016, I published Homo Deus, which argued that the real hero of history has always been information, rather than Homo sapiens, and that scientists increasingly understand not just history but also biology, politics, and economics in terms of information flows. The book warned that while we hope better information technology will give us health, happiness, and power, it may actually take power away from us and destroy both our physical and our mental health.
  • Page xxiv populism views information as a weapon.[20]
  • Page xxiv In its more extreme versions, populism posits that there is no objective truth at all and that everyone has "their own truth," which they wield to vanquish rivals. Whenever and wherever populism succeeds in disseminating the view of information as a weapon, language itself is undermined.
  • Page xxv Karl Marx, who argued in the mid-nineteenth century that power is the only reality, that information is a weapon, and that elites who claim to be serving truth and justice are in fact pursuing narrow class privileges.
  • Page xxvi Just as Marxists claimed that the media serves as a mouthpiece for the capitalist class, and that scientific institutions like universities spread disinformation in order to perpetuate capitalist control, populists accuse these same institutions of working to advance the interests of the "corrupt elites" at the expense of "the people."
  • Page xxvii One of the recurrent paradoxes of populism is that it starts by warning us that all human elites are driven by a dangerous hunger for power, but often ends by entrusting all power to a single ambitious human.
  • Page xxviii populists are eroding trust in large-scale institutions and international cooperation just when humanity confronts the existential challenges of ecological collapse, global war, and out-of-control technology. If we wish to avoid relinquishing power to a charismatic leader or an inscrutable AI, we must first gain a better understanding of what information is, how it helps to build human networks, and how it relates to truth and power. The book explores key dilemmas that people in all eras faced when trying to construct information networks, and it examines how different answers to these dilemmas shaped contrasting human societies.
  • Page xxix What we usually think of as ideological and political conflicts often turn out to be clashes between opposing types of information networks.
  • Page xxix Two pillars of large-scale human information networks: mythology and bureaucracy. Institutions and societies are often defined by the balance they manage to find between the conflicting needs of their mythmakers and their bureaucrats. There is another contrast as well: between distributed and centralized information networks.
  • Page xxx The rise of AI is arguably the biggest information revolution in history. History isn't the study of the past; it is the study of change.
  • Page xxxi Silicon chips can create spies that never sleep, financiers that never forget, and despots that never die. How will this change society, economics, and politics?
  • Page 3 In everyday usage, "information" is associated with human-made symbols like spoken or written words.
  • Page 7 The naive view argues that information is an attempt to represent reality, and when this attempt succeeds, we call it truth: truth is an accurate representation of reality. Yet most information in human society, and indeed in other biological and physical systems, does not represent anything. Throughout this book, "truth" is understood as something that accurately represents certain aspects of reality. Underlying the notion of truth is the premise that there exists one universal reality. While different people, nations, or cultures may have competing beliefs and feelings, they cannot possess contradictory truths, because they all share a universal reality. Anyone who rejects universalism rejects truth.
  • Page 8 Another problem with any attempt to represent reality is that reality contains many viewpoints. Reality includes an objective level with objective facts that don't depend on people's beliefs;
  • Page 9 Reality also includes a subjective level with subjective facts like the beliefs and feelings of various people, but in this case, too, facts can be separated from errors.
  • Page 10 The point is that even the most truthful accounts of reality can never represent it in full. There are always some aspects of reality that are neglected or distorted in every representation. Truth, then, isn't a one-to-one representation of reality. Rather, truth is something that brings our attention to certain aspects of reality while inevitably ignoring other aspects. No account of reality is 100 percent accurate, but some accounts are nevertheless more truthful than others. The naive view sees information as an attempt to represent reality. It is aware that some information doesn't represent reality well, but it dismisses this as unfortunate cases of "misinformation" or "disinformation." The naive view further believes that the solution to the problems caused by misinformation and disinformation is more information.
  • Page 12 errors, lies, fantasies, and fictions are information, too.
  • Page 12 What information does is to create new realities by tying together disparate things. Its defining feature is connection rather than representation. Information doesn't necessarily inform us about things. Rather, it puts things in formation.
  • Page 14 Information is something that creates new realities by connecting different points into a network. This still includes the view of information as representation.
  • Page 15 Viewing information as a social nexus helps us understand many aspects of human history that confound the naive view of information as representation.
  • Page 16 To conclude, information sometimes represents reality, and sometimes doesn't. But it always connects. This is its fundamental characteristic. "How well does it connect people? What new network does it create?"
  • Page 17 When we look at the history of information from the Stone Age to the Silicon Age, we therefore see a constant rise in connectivity, without a concomitant rise in truthfulness or wisdom. Contrary to what the naive view believes, Homo sapiens didn't conquer the world because we are talented at turning information into an accurate map of reality. Rather, the secret of our success is that we are talented at using information to connect lots of individuals. We'll discuss how, over tens of thousands of years, humans invented various information technologies that greatly improved connectivity and cooperation without necessarily resulting in a more truthful representation of the world.
  • Page 19 In order to cooperate, Sapiens no longer had to know each other personally; they just had to know the same story. A story can thereby serve as a central connector, with an unlimited number of outlets into which an unlimited number of people can plug.
  • Page 20 The social media accounts are usually run by a team of experts, and every image and word is professionally crafted and curated to manufacture what is nowadays called a brand.[5] A "brand" is a specific type of story.
  • Page 22 It should be stressed that the creation of the Jesus story was not a deliberate lie; it was the result of emotional projections and wishful thinking. By gaining all those believers, the story of Jesus managed to have a much bigger impact on history than the person of Jesus.
  • Page 23 the whole purpose of the Passover meal is to create and reenact artificial memories.
  • Page 24 The Jewish Passover story builds a large network by taking existing biological kin bonds and stretching them. It creates an imagined family of millions.
  • Page 27 Of all genres of stories, those that create intersubjective realities have been the most crucial for the development of large-scale human networks.
  • Page 30 In fact, all relations between large-scale human groups are shaped by stories, because the identities of these groups are themselves defined by stories. Contrary to Marxist thinking, large-scale identities and interests in history are always intersubjective; they are never objective.
  • Page 31 History is often shaped not by deterministic power relations, but rather by tragic mistakes that result from believing in mesmerizing but harmful stories. The naive view of information says that information leads to truth, and knowing the truth helps people to gain both power and wisdom. This sounds reassuring.
  • Page 32 history, power stems only partially from knowing the truth. It also stems from the ability to maintain social order among a large number of people. If you build a bomb and ignore the facts of physics, the bomb will not explode. But if you build an ideology and ignore the facts, the ideology may still prove explosive.
  • Page 33 What the people at the top know, which nuclear physicists don't always realize, is that telling the truth about the universe is hardly the most efficient way to produce order among large numbers of humans. When it comes to uniting people, fiction enjoys two inherent advantages over the truth. First, fiction can be made as simple as we like, whereas the truth tends to be complicated, because the reality it is supposed to represent is complicated. Second, the truth is often painful and disturbing, and if we try to make it more comforting and flattering, it will no longer be the truth. In contrast, fiction is highly malleable.
  • Page 34 The choice isn't simply between telling the truth and lying. There is a third option. Telling a fictional story is lying only when you pretend that the story is a true representation of reality.
  • Page 35 the U.S. Constitution was fundamentally different from stories that denied their fictive nature and claimed divine origin, such as the Ten Commandments.
  • Page 37 to survive and flourish, every human information network needs to do two things simultaneously: discover truth and create order. Having a lot of information doesn't in and of itself guarantee either truth or order. It is a difficult process to use information to discover the truth and simultaneously use it to maintain order. What makes things worse is that these two processes are often contradictory, because it is frequently easier to maintain order through fictions.
  • Page 38 What happens when the same bit of information reveals an important fact about the world, and also undermines the noble lie that holds society together? In such cases society may seek to preserve order by placing limits on the search for truth. While over the generations human networks have grown increasingly powerful, they have not necessarily grown increasingly wise. If a network privileges order over truth, it can become very powerful but use that power unwisely.
  • Page 43 The big problem with lists, and the crucial difference between lists and stories, is that lists tend to be far more boring than stories, which means that while we easily remember stories, we find it difficult to remember lists.
  • Page 44 Kendall Haven writes in his 2007 book, Story Proof: The Science Behind the Startling Power of Story,
  • Page 45 Unlike national poems and myths, which can be stored in our brains, complex national taxation and administration systems have required a unique nonorganic information technology in order to function. This technology is the written document.
  • Page 46 Like stories and like all other information technologies in history, written documents didn't necessarily represent reality accurately. But whether true or false, written documents created new realities. Documents changed the method used for creating intersubjective realities. Humans couldn't forge an intersubjective reality that their brains couldn't remember. This limit could be transcended, however, by writing documents.
  • Page 47 In a literate state, to own a field increasingly came to mean that some clay tablet, bamboo strip, piece of paper, or silicon chip records that you own that field.
  • Page 48 As people produced more and more documents, finding them turned out to be far from easy. Written documents were much better than human brains in recording certain types of information. But they created a new and very thorny problem: retrieval.
  • Page 49 Another common rule is that apples grow on apple trees, whereas figs grow on fig trees. So if you are looking for an apple, you first need to locate an apple tree, and then look up. It is very different with archives. Since documents aren't organisms, they don't obey any biological laws, and evolution didn't organize them for us. Bureaucracy is the way people in large organizations solved the retrieval problem and thereby created bigger and more powerful information networks.
  • Page 50 But like mythology, bureaucracy too tends to sacrifice truth for order.
  • Page 50 Many of the problems of our twenty-first-century information networks, like biased algorithms that mislabel people or rigid protocols that ignore human needs and feelings, are not new problems of the computer age. They are quintessential bureaucratic problems that existed long before anyone even dreamed of computers. Bureaucracy literally means "rule by writing desk." Bureaucracy seeks to solve the retrieval problem by dividing the world into drawers and knowing which document goes into which drawer: divide the world into containers, and keep the containers separate so the documents don't get mixed up. In doing so, bureaucracy is often busy imposing a new and artificial order on the world.
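  The drawer logic Harari describes can be sketched, purely as an illustration, as a lookup index: filing each document under exactly one category makes retrieval fast, but anything that straddles categories must be forced into a single drawer. All document titles and categories below are invented for the example:

  ```python
  # Illustrative sketch of bureaucratic "drawers" as an index.
  # Every document is assigned to exactly one container.
  documents = [
      ("deed: field by the river", "property"),
      ("census of the northern village", "population"),
      ("grain tax for year 3", "taxation"),
      ("dispute over a taxed field", "taxation"),  # arguably also a property matter
  ]

  drawers: dict[str, list[str]] = {}
  for title, category in documents:
      drawers.setdefault(category, []).append(title)

  # Retrieval is fast if you know the right drawer...
  print(drawers["taxation"])   # both taxation documents
  # ...but a search of the "property" drawer misses the field dispute,
  # because the filing scheme allowed it only one home.
  print(drawers["property"])   # only the deed
  ```

  The artificial order is visible in the last line: the dispute document is real, but the rigid one-drawer-per-document rule hides it from anyone searching by property.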
  • Page 51 The urge to divide reality into rigid drawers also leads bureaucrats to pursue narrow goals irrespective of the wider impact of their actions.
  • Page 54 intersubjective conventions are themselves part of reality.
  • Page 54 In defense of bureaucracy it should be noted that while it sometimes sacrifices truth and distorts our understanding of the world, it often does so for the sake of order, without which it would be hard to maintain any large-scale human network.
  • Page 56 Mythology and bureaucracy are the twin pillars of every large-scale society. Yet while mythology tends to inspire fascination, bureaucracy tends to inspire suspicion. All bureaucracies, good or bad, share one key characteristic: it is hard for humans to understand them.
  • Page 57 In tribal societies that lack written documents and bureaucracies, the human network is composed of only human-to-human and human-to-story chains. Authority belongs to the people who control the junctions that link the various chains. These junctions are the tribe's foundational myths. Charismatic leaders, orators, and mythmakers know how to use these stories in order to shape identities, build alliances, and sway emotions.
  • Page 58 As documents became a crucial nexus linking many social chains, considerable power came to be invested in these documents, and experts in the arcane logic of documents emerged as new authority figures. In bureaucratic systems, power often comes from understanding how to manipulate obscure budgetary loopholes and from knowing your way around the labyrinths of offices, committees, and subcommittees. For better or worse, literate bureaucracies tended to strengthen the central authority at the expense of ordinary citizens.
  • Page 59 Biological dramas include sibling rivalry, the romantic triangle ("boy meets girl," "boy fights boy over girl"), and the tension between purity and impurity.
  • Page 61 The list of biological dramas that press our emotional buttons includes several additional classics, such as "Who will be alpha?," "Us versus them," and "Good versus evil."
  • Page 62 Storytellers like Franz Kafka, who focused on the often surreal ways that bureaucracy shapes human lives, pioneered new nonbiological plotlines. In Kafka's The Trial, the bank clerk K. is arrested by unidentified officials of an unfathomable agency for an unnamed crime. Whereas stories about heroes who confront monsters-from the Ramayana to Spider-Man-repackage the biological dramas of confronting predators and romantic rivals, the unique horror of Kafkaesque stories comes from the unfathomability of the threat.
  • Page 63 The difficulty of depicting and understanding bureaucratic realities has had unfortunate results. On the one hand, it leaves people feeling helpless in the face of harmful powers they do not understand, like the hero of The Trial. On the other hand, it also leaves people with the impression that bureaucracy is a malign conspiracy, even in cases when it is in fact a benign force providing us with health care, security, and justice.
  • Page 66 government stepped in to offer a solution to the imaginary problem invented by its own propaganda.
  • Page 68 All powerful information networks can do both good and ill, depending on how they are designed and used. Merely increasing the quantity of information in a network doesn't guarantee its benevolence, or make it any easier to find the right balance between truth and order. That is a key historical lesson for the designers and users of the new information networks of the twenty-first century. AI is taking up the role of both bureaucrats and mythmakers: AI systems know how to find and process data better than flesh-and-blood bureaucrats, and AI is also acquiring the ability to compose stories better than most humans. We have now seen that information networks don't maximize truth, but rather seek to find a balance between truth and order. Bureaucracy and mythology are both essential for maintaining order, and both are happy to sacrifice truth for the sake of order. The way human information networks have dealt with the problem of errors is the question the following chapters explore.
  • Page 69 Holy books like the Bible and the Quran are an information technology that is meant to both include all the vital information society needs and be free from all possibility of error.
  • Page 71 In our personal lives, religion can fulfill many different functions, like providing solace or explaining the mysteries of life. But historically, the most important function of religion has been to provide superhuman legitimacy for the social order. At the heart of every religion lies the fantasy of connecting to a superhuman and infallible intelligence.
  • Page 72 Religion wanted to take fallible humans out of the loop and give people access to infallible superhuman laws, but religion repeatedly boiled down to trusting this or that human.
  • Page 73 Holy books like the Bible and the Quran are a technology to bypass human fallibility, and religions of the book-like Judaism, Christianity, and Islam-have been built around that technological artifact.
  • Page 75 The Bible as a single holy book didn't exist in biblical times. King David and the prophet Isaiah never saw a copy of the Bible.
  • Page 79 A second and much bigger problem concerned interpretation. Even when people agree on the sanctity of a book and on its exact wording, they can still interpret the same words in different ways.
  • Page 80 More problems resulted from the fact that even if the technology of the book succeeded in limiting changes to the holy words, the world beyond the book continued to spin, and it was unclear how to relate old rules to new situations. As Jews increasingly argued over the interpretation of the Bible, rabbis gained more power and prestige. Writing down the word of Jehovah was supposed to limit the authority of the old priestly institution, but it gave rise to the authority of a new rabbinical institution.
  • Page 81 The dream of bypassing fallible human institutions through the technology of the holy book never materialized.
  • Page 86 It is crucial to note that the people who created the New Testament weren't the authors of the twenty-seven texts it contains; they were the curators.
  • Page 88 Just as most Jews forgot that rabbis curated the Old Testament, so most Christians forgot that church councils curated the New Testament, and came to view it simply as the infallible word of God.
  • Page 89 As time passed, problems of interpretation increasingly tilted the balance of power between the holy book and the church in favor of the institution.
  • Page 90 the church couldn't prevent the occasional freethinker from formulating heretical ideas. But because it controlled key nodes in the medieval information network-such as copying workshops, archives, and libraries-it could prevent such a heretic from making and distributing a hundred copies of her book.
  • Page 91 If infallible texts merely lead to the rise of fallible and oppressive churches, how then to deal with the problem of human error?
  • Page 91 The naive view expects that if all restrictions on the free flow of information are removed, error will inevitably be exposed and displaced by truth.
  • Page 92 In the history of information networks, the print revolution of early modern Europe is usually hailed as a moment of triumph, breaking the stranglehold that the Catholic Church had maintained over the European information network. But print wasn't the root cause of the scientific revolution. In fact, print allowed the rapid spread not only of scientific facts but also of religious fantasies, fake news, and conspiracy theories.
  • Page 96 While it would be an exaggeration to argue that the invention of print caused the European witch-hunt craze, the printing press played a pivotal role in the rapid dissemination of the belief in a global satanic conspiracy.
  • Page 98 Nobody in early modern Europe had sex with Satan or was capable of flying on broomsticks and creating hailstorms. But witches became an intersubjective reality. Like money, witches were made real by exchanging information about witches.
  • Page 99 The new intersubjective reality was so convincing that even some people accused of witchcraft came to believe that they were indeed part of a worldwide satanic conspiracy.
  • Page 101 "there were neither witches nor bewitched until they were talked and written about."
  • Page 102 The history of print and witch-hunting indicates that an unregulated information market doesn't necessarily lead people to identify and correct their errors, because it may well prioritize outrage over truth. The curation institutions that played a central role in the scientific revolution connected scholars and researchers both in and out of universities, forging an information network that spanned the whole of Europe and eventually the world. For the scientific revolution to gather pace, scientists had to trust information published by colleagues in distant lands.
  • Page 103 In other words, the scientific revolution was launched by the discovery of ignorance.[
  • Page 104 The trademark of science is not merely skepticism but self-skepticism, and at the heart of every scientific institution we find a strong self-correcting mechanism. As an information technology, the self-correcting mechanism is the polar opposite of the holy book.
  • Page 105 Self-correcting mechanisms are ubiquitous in nature. Institutions, too, die without self-correcting mechanisms. These mechanisms start with the realization that humans are fallible and corruptible. But instead of despairing of humans and looking for a way to bypass them, the institution actively seeks its own errors and corrects them. All institutions that manage to endure beyond a handful of years possess such mechanisms, but institutions differ greatly in the strength and visibility of their self-correcting mechanisms.
  • Page 110 Scientific institutions maintain that even if most scientists in a particular period believe something to be true, it may yet turn out to be inaccurate or incomplete.
  • Page 110 Crucially, scientific institutions are willing to admit their institutional responsibility for major mistakes and crimes.
  • Page 115 An institution can call itself by whatever name it wants, but if it lacks a strong self-correcting mechanism, it is not a scientific institution.
  • Page 116 order by itself isn't necessarily good.
  • Page 116 Scientific institutions have been able to afford their strong self-correcting mechanisms because they leave the difficult job of preserving the social order to other institutions.
  • Page 118 Democracy and dictatorship can be understood as contrasting types of information networks. Dictatorial information networks are highly centralized: the center enjoys unlimited authority.
  • Page 119 The second characteristic of dictatorial networks is that they assume the center is infallible. To summarize, a dictatorship is a centralized information network, lacking strong self-correcting mechanisms. A democracy, in contrast, is a distributed information network, possessing strong self-correcting mechanisms.
  • Page 121 while a dictatorship is about one central information hub dictating everything, a democracy is an ongoing conversation between diverse information nodes.
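  The hub-versus-conversation contrast can be made concrete with a toy model (not from the book; the node names are invented): in a pure hub network, silencing the center silences everything, while a distributed network keeps talking without it.

  ```python
  # Toy model: centralized vs distributed information networks as edge sets.
  nodes = ["center", "a", "b", "c", "d"]

  # Dictatorial topology: every channel runs through the central hub.
  star = {("center", n) for n in nodes if n != "center"}

  # Democratic topology: the same hub links, plus direct conversations
  # among the diverse nodes themselves.
  mesh = star | {("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")}

  def channels_without(hub, edges):
      """Count conversation channels that survive removing the hub."""
      return sum(1 for u, v in edges if hub not in (u, v))

  print(channels_without("center", star))  # 0: silence the hub, silence the network
  print(channels_without("center", mesh))  # 4: the conversation continues
  ```

  The point of the sketch is the failure mode, not the numbers: a network whose every link passes through one node has no way to keep deliberating, or self-correcting, when that node errs.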
  • Page 122 democracy is not a system in which a majority of any size can decide to exterminate unpopular minorities; it is a system in which there are clear limits on the power of the center.
  • Page 123 The most common method strongmen use to undermine democracy is to attack its self-correcting mechanisms one by one, often beginning with the courts and the media. The typical strongman either deprives courts of their powers or packs them with his loyalists and seeks to close all independent media outlets while building his own omnipresent propaganda machine.[5] The strongmen don't usually take the final step of abolishing the elections outright. Instead, they keep them as a ritual that serves to provide legitimacy and maintain a democratic facade, as happens, for example, in Putin's Russia.
  • Page 125 At least from the viewpoint of information flows, what defines a system as "democratic" is only that its center doesn't have unlimited authority and that the system possesses robust mechanisms to correct the center's mistakes.
  • Page 126 Democratic networks assume that everyone is fallible, and that includes even the winners of elections and the majority of voters.
  • Page 126 Elections establish what the majority of people desire, rather than what the truth is. And people often desire the truth to be other than what it is. Democratic networks therefore maintain some self-correcting mechanisms to protect the truth even from the will of the majority.
  • Page 127 The majority should at least acknowledge its own fallibility and protect the freedom of minorities to hold and publicize unpopular views, which might turn out to be correct. The one option that should not be on offer in elections is hiding or distorting the truth.
  • Page 128 Naturally, academic institutions, the media, and the judiciary may themselves be compromised by corruption, bias, or error. But subordinating them to a governmental Ministry of Truth is likely to make things worse. Allowing the government to supervise the search for truth is like appointing the fox to guard the chicken coop. Academic institutions, the media, and the judiciary have their own internal self-correcting mechanisms for fighting corruption, correcting bias, and exposing error, and the existence of several independent institutions that seek the truth in different ways allows these institutions to check and correct one another: if, for example, powerful corporations manage to break down the peer-review mechanism, other independent institutions can expose the failure. None of these mechanisms is completely fail-safe, but no human institution is. Government certainly isn't.
  • Page 129 If all this sounds complicated, it is because democracy should be complicated. Simplicity is a characteristic of dictatorial information networks in which the center dictates everything and everybody silently obeys. The term "populism" derives from the Latin populus, which means "the people." In democracies, "the people" is considered the sole legitimate source of political authority. Only representatives of the people should have the authority to declare wars, pass laws, and raise taxes. Populists cherish this basic democratic principle, but somehow conclude from it that a single party or a single leader should monopolize all power.
  • Page 130 Even if they win just a small share of votes, populists may still believe they alone represent the people. Populists can believe that the enemies of the people have deceived the people into voting against its true will, which the populists alone represent. In this view, "the people" is not a collection of flesh-and-blood individuals with various interests and opinions, but rather a unified mystical body that possesses a single will: "the will of the people."
  • Page 131 The Nazi case is of course extreme, and it is grossly unfair to accuse all populists of being crypto-Nazis with genocidal inclinations. What turns someone into a populist is claiming that they alone represent the people and that anyone who disagrees with them-whether state bureaucrats, minority groups, or even the majority of voters-either suffers from false consciousness or isn't really part of the people. This is why populism poses a deadly threat to democracy.
  • Page 132 Having claimed that they alone represent the people, populists argue that the people is not just the sole legitimate source of political authority but the sole legitimate source of all authority; any institution that derives its authority from something other than the will of the people is, on this view, antidemocratic. Populists consequently seek to monopolize not just political authority but all types of authority and to take control of institutions such as media outlets, courts, and universities. By taking the democratic principle of "people's power" to its extreme, populists turn totalitarian. This leads populists to be skeptical of the pursuit of truth, and to argue, as we saw in the prologue, that "power is the only reality." The result is a dark and cynical view of the world as a jungle and of human beings as creatures obsessed with power alone.
  • Page 133 In the populist view, biologists, climatologists, epidemiologists, economists, historians, and mathematicians are just another interest group feathering its own nest-at the expense of the people.
  • Page 133 Populism offers strongmen an ideological basis for making themselves dictators while pretending to be democrats.
  • Page 134 Once people think that power is the only reality, they lose trust in all these institutions, democracy collapses, and the strongmen can seize total power. Of course, populism could lead to anarchy rather than totalitarianism, if it undermines trust in the strongmen themselves. When trust in bureaucratic institutions like election boards, courts, and newspapers is particularly low, an enhanced reliance on mythology is the only way to preserve order. Strongmen who claim to represent the people may well rise to power through democratic means, and often rule behind a democratic facade. Rigged elections in which they win overwhelming majorities serve as proof of the mystical bond between the leader and the people.
  • Page 135 If one person dictates all the decisions, and even their closest advisers are terrified to voice a dissenting view, no conversation is taking place. Such a network is situated at the extreme dictatorial end of the spectrum. The focus on conversations rather than elections raises a host of interesting questions.
  • Page 136 Scathing public attacks on the government are a daily occurrence. But where is the room where the crucial conversations happen, and who sits there? Based on the above definition of democracy, we can now turn to the historical record and examine how changes in information technology and information flows have shaped the history of democracy.
  • Page 138 In the millennia following the agricultural revolution, and especially after writing helped create large bureaucratic polities, it became easier to centralize the flow of information and harder to maintain the democratic conversation. As the size of polities continued to increase, and city-states were superseded by larger kingdoms and empires, even Athenian-style partial democracy disappeared. All the famous examples of ancient democracies are city-states such as Athens and Rome. In contrast, we don't know of any large-scale kingdom or empire that operated along democratic lines.
  • Page 140 By the third century CE, not only the Roman Empire but all other major human societies on earth were centralized information networks lacking strong self-correcting mechanisms. Thousands of small-scale societies continued to function democratically in the third century CE and beyond, but it seemed that distributed democratic networks were simply incompatible with large-scale societies.
  • Page 141 How do we know whether democracies fail because they are undermined by strongmen or because of much deeper structural and technological reasons? The key misconception here is equating democracy with elections. Tens of millions of Roman citizens could theoretically vote for this or that imperial candidate. But the real question is whether tens of millions of Romans could have held an ongoing empire-wide political conversation. In the present-day United States the democratic conversation is endangered by people's inability to listen to and respect their political rivals, yet this can presumably still be fixed.
  • Page 142 To hold a conversation, it is not enough to have the freedom to talk and the ability to listen. There are also two technical preconditions. First, people need to be within hearing range of one another, or connected with the help of some kind of information technology that can swiftly convey what people say over long distances. Second, people need at least a rudimentary understanding of what they are talking about. The only way to have a large-scale political conversation among diverse groups of people is if people can gain some understanding of issues that they have never experienced firsthand. In a large polity, it is a crucial role of the education system and the media to inform people about things they have never faced themselves. If there is no education system or media platform to perform this role, no meaningful large-scale conversations can take place.
  • Page 144 The lack of a meaningful public conversation was not the fault of Augustus, Nero, Caracalla, or any of the other emperors. They didn't sabotage Roman democracy. Given the size of the empire and the available information technology, democracy was simply unworkable. It should be stressed that in many large-scale autocracies local affairs were often managed democratically.
  • Page 145 Even in empires whose rulers never had any democratic pretensions, democracy could still flourish in local settings. In tsarist villages and Roman cities a form of democracy was possible because a meaningful public conversation was possible.
  • Page 146 Mass media are information technologies that can quickly connect millions of people even when they are separated by vast distances.
  • Page 148 The newspaper is a periodic pamphlet, and it was different from earlier one-off pamphlets because it had a much stronger self-correcting mechanism. Unlike one-off publications, a weekly or daily newspaper has a chance to correct its mistakes and an incentive to do so in order to win the public's trust. Newspapers that succeeded in gaining widespread trust became the architects and mouthpieces of public opinion. They created a far more informed and engaged public, which changed the nature of politics, first in the Netherlands and later around the world.
  • Page 151 You may wonder whether we are talking about democracies at all. At a time when the United States had more slaves than voters (more than 1.5 million Americans were enslaved in the early 1820s),[50] was the United States really a democracy? As noted earlier, democracy and autocracy aren't absolutes; they are part of a continuum, and voting is not the only thing that counts. What distinguishes the democratic end of the continuum is stronger self-correcting mechanisms.
  • Page 152 It was these self-correcting mechanisms that gradually enabled the United States to expand the franchise, abolish slavery, and turn itself into a more inclusive democracy.
  • Page 153 Mass media allowed people to follow the public conversation simply by pressing a button while sitting in their homes. Large-scale democracy had now become feasible. Millions of people separated by thousands of kilometers could conduct informed and meaningful public debates about the rapidly evolving issues of the day. Mass media made large-scale democracy possible, rather than inevitable.
  • Page 154 In an autocratic network, there are no legal limits on the will of the ruler, but there are nevertheless a lot of technical limits. In a totalitarian network, many of these technical limits are absent.[58]
  • Page 155 Totalitarianism is the attempt to control what every person throughout the country is doing and saying every moment of the day, and potentially even what every person is thinking and feeling.
  • Page 155 Emperors, caliphs, shahs, and kings found it a huge challenge to keep their subordinates in check. Rulers consequently focused their attention on controlling the military and the taxation system.
  • Page 160 Full-blown totalitarianism might have been dreamed about by the likes of the Qin, but its implementation had to wait for the development of modern technology. Just as modern technology enabled large-scale democracy, it also made large-scale totalitarianism possible.
  • Page 162 While in most polities throughout history the army had wielded enormous political power, in twentieth-century totalitarian regimes the regular army ceded much of its clout to the secret police-the information army. What made the secret police powerful was its command of information.
  • Page 164 Totalitarian regimes are based on controlling the flow of information and are suspicious of any independent channels of information. A key tenet of totalitarian regimes is that wherever people meet and exchange information, the regime should be there too, to keep an eye on them.
  • Page 168 Totalitarian regimes went so far as to create an entire nonexistent category of enemies.
  • Page 176 We see then that the new information technology of the late modern era gave rise to both large-scale democracy and large-scale totalitarianism. The crucial difference lies in how the two systems used that technology. Democracy allows many independent nodes to process information and make decisions by themselves, and information circulates freely. In contrast, totalitarianism wants all information to pass through the central hub and doesn't want any independent institutions making decisions on their own. The biggest advantage of the centralized totalitarian network is that it is extremely orderly, which means it can make decisions quickly and enforce them ruthlessly.
  • Page 177 In a totalitarian network, if the official channels are blocked, the information cannot find an alternative means of transmission. One reason official channels fail is that fearful subordinates hide bad news from their superiors. Another common reason why official channels fail to pass on information is the desire to preserve order.
  • Page 178 "Americans grow up with the idea that questions lead to answers," he said. "But Soviet citizens grew up with the idea that questions lead to trouble." in a distributed democratic network, when official lines of communication are blocked, information flows through alternative channels.
  • Page 179 Totalitarian and authoritarian self-correcting mechanisms tend to be very weak. Nobody can challenge the leader, and on his own initiative the leader-being a human being-may well refuse to admit any mistakes.
  • Page 184 The relentless barrage of fake news and conspiracy theories helped to keep hundreds of millions of people in line.
  • Page 185 Once we learn to see democracy and totalitarianism as different types of information networks, we can understand why they flourish in certain eras and are absent in others.
  • Page 186 Technology only creates new opportunities; it is up to us to decide which ones to pursue. Totalitarian regimes choose to use modern information technology to centralize the flow of information and to stifle truth in order to maintain order. Democratic regimes choose to use modern information technology to distribute the flow of information between more institutions and individuals and encourage the free pursuit of truth. They consequently have to struggle with the danger of fracturing.
  • Page 187 The pressure to live up to the democratic ideals and to include more people and groups in the public conversation seemed to undermine the social order and to make democracy unworkable.
  • Page 188 Western democracies not only surged ahead technologically and economically but also succeeded in holding the social order together despite-or perhaps because of-widening the circle of participants in the political conversation.
  • Page 189 At the beginning of the twenty-first century, it accordingly seemed that the future belonged to distributed information networks and to democracy. This turned out to be wrong. Democracies in the 2020s face the task, once again, of integrating a flood of new voices into the public conversation without destroying the social order. As humankind enters the second quarter of the twenty-first century, a central question is how well democracies and totalitarian regimes will handle both the threats and the opportunities resulting from the current information revolution. As in previous eras, information networks will struggle to find the right balance between truth and order. Some will opt to prioritize truth and maintain strong self-correcting mechanisms. Others will make the opposite choice.
  • Page 190 Hitherto, every information network in history relied on human mythmakers and human bureaucrats to function.
  • Page 193 the current information revolution,
  • Page 193 is the computer. Everything else-from the internet to AI-is a by-product.
  • Page 194 For the moment it is enough to say that in essence a computer is a machine that can potentially do two remarkable things: it can make decisions by itself, and it can create new ideas by itself. The rise of intelligent machines that can make decisions and create new ideas means that for the first time in history power is shifting away from humans and toward something else.
  • Page 195 A paradigmatic case of the novel power of computers is the role that social media algorithms have played in spreading hatred and undermining social cohesion in numerous countries.
  • Page 197 The crucial thing to grasp is that social media algorithms are fundamentally different from printing presses and radio sets. Facebook's algorithms were making active and fateful decisions by themselves.
  • Page 198 The algorithms could have chosen to recommend sermons on compassion or cooking classes, but they decided to spread hate-filled conspiracy theories.
  • Page 199 But why did the algorithms decide to promote outrage rather than compassion? As user engagement increased, Facebook collected more data, sold more advertisements, and captured a larger share of the information market. The company's human managers provided its algorithms with a single overriding goal: increase user engagement. And outrage generated engagement.
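The dynamic described on page 199 can be sketched in a few lines. This is a deliberately simplified illustration (the post titles and engagement scores are invented, not real Facebook data): an algorithm told only to "increase user engagement" ranks whatever maximizes that single metric, with no notion of social cost.

```python
# Hypothetical posts with predicted engagement scores (invented numbers).
posts = [
    {"title": "Cooking class announcement", "predicted_engagement": 0.04},
    {"title": "Sermon on compassion", "predicted_engagement": 0.06},
    {"title": "Hate-filled conspiracy theory", "predicted_engagement": 0.31},
]

def rank_feed(posts):
    # The single overriding goal: maximize engagement. Nothing else counts.
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

feed = rank_feed(posts)
print(feed[0]["title"])  # the most outrage-provoking item tops the feed
```

The point of the sketch is that no line of code says "spread hate"; the outcome follows mechanically from the goal the human managers chose.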
  • Page 200 AI algorithms can learn by themselves things that no human engineer programmed, and they can decide things that no human executive foresaw.
  • Page 201 Intelligence and consciousness are very different. Intelligence is the ability to attain goals, such as maximizing user engagement on a social media platform. Consciousness is the ability to experience subjective feelings like pain, pleasure, love, and hate. Bacteria and plants apparently lack any consciousness, yet they too display intelligence.
  • Page 202 Of course, as computers become more intelligent, they might eventually develop consciousness and have some kind of subjective experiences. Then again, they might become far more intelligent than us, but never develop any kind of feelings.
  • Page 205 Prior to the rise of computers, humans were indispensable links in every chain of information networks like churches and states. In contrast, computer-to-computer chains can now function without humans in the loop. Another way to understand the difference between computers and all previous technologies is that computers are fully fledged members of the information network, whereas clay tablets, printing presses, and radio sets are merely connections between members.
  • Page 207 If power depends on how many members cooperate with you, how well you understand law and finance, and how capable you are of inventing new laws and new kinds of financial devices, then computers are poised to amass far more power than humans.
  • Page 208 By gaining such command of language, computers are seizing the master key unlocking the doors of all our institutions, from banks to temples. We use language to create not just legal codes and financial devices but also art, science, nations, and religions. What would it mean for humans to live in a world where catchy melodies, scientific theories, technical tools, political manifestos, and even religious myths are shaped by a nonhuman alien intelligence that knows how to exploit with superhuman efficiency the weaknesses, biases, and addictions of the human mind?
  • Page 209 Equally alarmingly, we might increasingly find ourselves conducting lengthy online discussions with entities that we think are humans but are actually computers. This could make democracy untenable. Democracy is a conversation, and conversations rely on language. By hacking language, computers could make it extremely difficult for large numbers of humans to conduct a meaningful public conversation.
  • Page 210 When we engage in a political debate with a computer impersonating a human, we lose twice. First, it is pointless for us to waste time in trying to change the opinions of a propaganda bot, which is just not open to persuasion. Second, the more we talk with the computer, the more we disclose about ourselves, thereby making it easier for the bot to hone its arguments and sway our views. By conversing and interacting with us, computers could form intimate relationships with people and then use the power of intimacy to influence us. In the 2010s social media was a battleground for controlling human attention. In the 2020s the battle is likely to shift from attention to intimacy.
  • Page 211 What we are talking about is potentially the end of human history. Not the end of history, but the end of its human-dominated part.
  • Page 212 What will happen to the course of history when computers play a larger and larger role in culture and begin producing stories, laws, and religions? Within a few years AI could eat the whole of human culture-everything we have created over thousands of years-digest it, and begin to gush out a flood of new cultural artifacts. At first, computers will probably imitate human cultural prototypes, writing humanlike texts and composing humanlike music. This doesn't mean computers lack creativity; after all, human artists do the same. Computers too can make cultural innovations, and these innovations will in turn influence the next generation of computers, which will increasingly deviate from the original human models.
  • Page 213 But in order to manipulate humans, there is no need to physically hook brains to computers. For thousands of years prophets, poets, and politicians have used language to manipulate and reshape society. Now computers are learning how to do it.
  • Page 214 In theory, the text you've just read might have been generated by the alien intelligence of some computer. As computers amass power, it is likely that a completely new information network will emerge. Of course, not everything will be new.
  • Page 215 computer-to-computer chains are emerging in which computers interact with one another on their own.
  • Page 216 In computer evolution, the distance from amoeba to T. rex could be covered in a decade.
  • Page 219 we humans are still in control. Tech giants like Facebook, Amazon, Baidu, and Alibaba aren't just the obedient servants of customer whims and government regulations. They increasingly shape these whims and regulations.
  • Page 221 Local newspapers, TV stations, and movie theaters lose customers and ad revenue to the tech giants.
  • Page 222 In tax literature, "nexus" means an entity's connection to a given jurisdiction. In the words of the economist Marko Köthenbürger, "The definition of nexus based on a physical presence should be adjusted to include the notion of a digital presence in a country."
  • Page 223 Money will soon become outdated as many transactions no longer involve money. When value changes hands as information rather than as dollars, taxing only money distorts the economic and political picture.
  • Page 224 Taxation is just one among many problems created by the computer revolution. The computer network is disrupting almost all power structures. Democracies fear the rise of new digital dictatorships. Dictatorships fear the emergence of agents they don't know how to control. Everyone should be concerned about the elimination of privacy and the spread of data colonialism.
  • Page 224 Technology is moving much faster than policy.
  • Page 225 The people who lead the information revolution know far more about the underlying technology than the people who are supposed to regulate it.
  • Page 226 How would it feel to be constantly monitored, guided, inspired, or sanctioned by billions of nonhuman entities? The most important thing to remember is that technology, in itself, is seldom deterministic. Yes, since human societies are information networks, inventing new information technologies is bound to change society. humans still have a lot of control over the pace, shape, and direction of this revolution-
  • Page 228 Engineers working for authoritarian governments and ruthless corporations could develop new tools to empower the central authority, by monitoring citizens and customers twenty-four hours a day. Hackers working for democracies may develop new tools to strengthen society's self-correcting mechanisms, by exposing government corruption and corporate malpractices. Both technologies could be developed. The knife doesn't force our hand. Though radio sets in East Germany could technically receive a wide range of transmissions, the East German government did its best to jam Western broadcasts and punished people who secretly tuned in to them.[55] The technology was the same, but politics made very different uses of it.
  • Page 229 To understand the new computer politics, we need a deeper understanding of what's new about computers. In this chapter we noted that unlike printing presses and other previous tools, computers can make decisions by themselves and can create ideas by themselves. That, however, is just the tip of the iceberg. What's really new about computers is the way they make decisions and create ideas.
  • Page 230 When centralized bureaucratic networks appeared and developed, one of the bureaucrats' most important roles was to monitor entire populations.
  • Page 230 Of course, surveillance has also been essential for providing beneficial services.
  • Page 231 In order to get to know us, both benign and oppressive bureaucracies have needed to do two things: gather a lot of data about us, and analyze that data to identify patterns. However, in all times and places surveillance has been incomplete. In democracies like the modern United States, legal limits have been placed on surveillance to protect privacy and individual rights. In totalitarian regimes like the ancient Qin Empire and the modern U.S.S.R., surveillance faced no such legal barriers but came up against technical boundaries.
  • Page 234 By 2024, we are getting close to the point when a ubiquitous computer network can follow the population of entire countries twenty-four hours a day.
  • Page 235 Just as the computer network doesn't need millions of human agents to follow us, it also doesn't need millions of human analysts to make sense of our data. In 2024 language algorithms like ChatGPT and Meta's Llama can process millions of words per minute and "read" 2.6 billion words in a couple of hours. The ability of such algorithms to process images, audio recordings, and video footage is equally superhuman.
  • Page 237 Of course, pattern recognition also has enormous positive potential. To grasp both its promise and its dangers, we must first appreciate the fundamental difference between the new digital bureaucrats and their flesh-and-blood predecessors. As fish live in water, humans now live in a digital bureaucracy, constantly inhaling and exhaling data. Each action we take leaves a trace of data, which is gathered and analyzed to identify patterns.
  • Page 239 In theory, the dictators of the future could get their computer network to go much deeper than just watching our eyes. If the network wants to know our political views, personality traits, and sexual orientation, it could monitor processes inside our hearts and brains. The necessary biometric technology is already being developed by some governments and companies, though nobody yet has the biological knowledge necessary to deduce things like precise political opinions from under-the-skin data like brain activity.
  • Page 240 If biometric sensors register what happens to the heart rate and brain activity of millions of people as they watch a particular news item on their smartphones, that can teach the computer network far more than just our general political affiliation. The network could learn precisely what makes each human angry, fearful, or joyful.
  • Page 241 In a world where humans monitored humans, privacy was the default. But in a world where computers monitor humans, it may become possible for the first time in history to completely annihilate privacy. The post-privacy era is taking hold in authoritarian countries ranging from Belarus to Zimbabwe,[23] as well as in democratic metropolises like London and New York. (Even without anyone installing cameras inside people's homes, algorithms regularly watch us even in our living rooms, bedrooms, and bathrooms via our own computers and smartphones.)
  • Page 243 Facial recognition algorithms and AI-searchable databases are now routinely used by police forces all over the world.
  • Page 250 Peer-to-peer surveillance systems typically operate by aggregating many points to determine an overall score.
  • Page 251 For scoring those things that money can't buy, there was an alternative nonmonetary system, which has been given different names: honor, status, reputation. What social credit systems seek is a standardized valuation of the reputation market. Social credit is a new points system that ascribes precise values even to smiles and family visits.
  • Page 252 Some people might see social credit systems as a way to reward prosocial behavior, punish egotistical acts, and create kinder and more harmonious societies. The Chinese government, for example, explains that its social credit systems could help fight corruption, scams, tax evasion, false advertising, and counterfeiting, and thereby establish more trust between individuals, between consumers and corporations, and between citizens and government institutions.[50] Others may find systems that allocate precise values to every social action demeaning and inhuman. Even worse, a comprehensive social credit system will annihilate privacy and effectively turn life into a never-ending job interview.
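The "aggregating many points to determine an overall score" mechanism described on pages 250-252 can be made concrete with a toy sketch. Every behavior name and point value below is invented for illustration; no real social credit system is being reproduced here.

```python
# Hypothetical point values: even smiles and family visits get precise scores.
POINT_VALUES = {
    "family_visit": +5,
    "smile_at_neighbor": +1,
    "charitable_donation": +10,
    "jaywalking": -2,
    "late_debt_payment": -15,
}

def credit_score(actions, base=1000):
    # Every recorded action nudges the score; life becomes a running tally.
    return base + sum(POINT_VALUES.get(action, 0) for action in actions)

observed = ["family_visit", "smile_at_neighbor", "jaywalking", "family_visit"]
print(credit_score(observed))  # 1000 + 5 + 1 - 2 + 5 = 1009
```

The sketch also makes the book's objection visible: the hard question is not the arithmetic but who decides what goes into `POINT_VALUES` and why.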
  • Page 254 A network of computers can always be on. Computers are consequently pushing humans toward a new kind of existence in which we are always connected and always monitored.
  • Page 258 the computer networks of the twenty-first century, which might create new types of humans and new dystopias.
  • Page 258 radicalizing people.
  • Page 261 We have reached a turning point in history in which major historical processes are partly caused by the decisions of nonhuman intelligence.
  • Page 261 Computer errors become potentially catastrophic only when computers become historical agents.
  • Page 264 To tilt the balance in favor of truth, networks must develop and maintain strong self-correcting mechanisms that reward truth telling. These self-correcting mechanisms are costly, but if you want to get the truth, you must invest in them.
  • Page 265 Instead of investing in self-correcting mechanisms that would reward truth telling, the social media giants actually developed unprecedented error-enhancing mechanisms that rewarded lies and fictions.
  • Page 266 I don't want to imply that the spread of fake news and conspiracy theories is the main problem with all past, present, and future computer networks.
  • Page 266 We also shouldn't discount the huge social benefits that YouTube, Facebook, and other social media platforms have brought.
  • Page 267 When computers are given a specific goal, such as to increase YouTube traffic to one billion hours a day, they use all their power and ingenuity to achieve this goal.
  • Page 272 the more powerful the computer, the more careful we need to be about defining its goal in a way that precisely aligns with our ultimate goals.
  • Page 274 As we give algorithms greater and greater power over health care, education, law enforcement, and numerous other fields, the alignment problem will loom ever larger.
  • Page 274 In theory, when humans create a computer network, they must define for it an ultimate goal, which the computers are never allowed to change or ignore.
  • Page 274 Then, even if computers become so powerful that we lose control over them, we can rest assured that their immense power will benefit rather than harm us. Unless, of course, it turned out that we defined a harmful or vague goal.
  • Page 276 The key concept is alignment. A tactical maneuver is rational if, and only if, it is aligned with some higher strategic goal, which should in turn be aligned with an even higher political goal.
  • Page 277 Tech executives and engineers who rush to develop AI are making a huge mistake if they think there is a rational way to tell AI what its ultimate goal should be. They should learn from the bitter experiences of generations of philosophers who tried to define ultimate goals and failed. For millennia, philosophers have been looking for a definition of an ultimate goal that will not depend on an alignment to some higher goal. They have repeatedly been drawn to two potential solutions, known in philosophical jargon as deontology and utilitarianism. Deontologists (from the Greek word deon, meaning "duty") believe that there are some universal moral duties, or moral rules, that apply to everyone. These rules do not rely on alignment to a higher goal, but rather on their intrinsic goodness. If such rules indeed exist, and if we can find a way to program them into computers, then we can make sure the computer network will be a force for good.
  • Page 280 Supposedly universal rules often end up the captives of local myths. This problem with deontology is especially critical if we try to dictate universal deontologist rules not to humans but to computers. Computers aren't even organic. So if they follow a rule of "Do unto others what you would have them do unto you," why should they be concerned about killing organisms like humans?
  • Page 281 The English philosopher Jeremy Bentham-another contemporary of Napoleon, Clausewitz, and Kant-said that the only rational ultimate goal is to minimize suffering in the world and maximize happiness. If our main fear about computer networks is that their misaligned goals might inflict terrible suffering on humans and perhaps on other sentient beings, then the utilitarian solution seems both obvious and attractive. Unfortunately, as with the deontologist solution, what sounds simple in the theoretical realm of philosophy becomes fiendishly complex in the practical land of history. We don't know how many "suffering points" or "happiness points" to assign to particular events, so in complex historical situations it is extremely difficult to calculate whether a given action increases or decreases the overall amount of suffering in the world.
  • Page 284 while utilitarianism promises a rational-and even mathematical-way to align every action with "the ultimate good," in practice it may well produce just another mythology.
  • Page 284 How then did bureaucratic systems throughout history set their ultimate goals? They relied on mythology to do it for them.
  • Page 285 The alignment problem turns out to be, at heart, a problem of mythology. one of the most important things to realize about computers is that when a lot of computers communicate with one another, they can create inter-computer realities, analogous to the intersubjective realities produced by networks of humans. These inter-computer realities may eventually become as powerful-and as dangerous-as human-made intersubjective myths.
  • Page 286 Just as intersubjective realities like money and gods can influence the physical reality outside people's minds, so inter-computer realities can influence reality outside the computers. The Google algorithm determines the website's Google rank by assigning points to various parameters, such as how many people visit the website and how many other websites link to it. The rank itself is an inter-computer reality, existing in a network connecting billions of computers-the internet.
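The Google rank described on page 286 can be illustrated with a simplified PageRank-style calculation. This is not Google's actual formula (which weighs many more parameters); it is a minimal sketch, with an invented three-site web, of how a rank can be an inter-computer reality: the numbers exist only inside the network of machines, yet they shape which sites humans see.

```python
# Hypothetical web: which sites link to which other sites.
links = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com"],
}

def pagerank(links, damping=0.85, iterations=50):
    # Each site's rank is derived from the ranks of the sites linking to it,
    # iterated until the scores settle into a stable inter-computer reality.
    sites = list(links)
    rank = {site: 1.0 / len(sites) for site in sites}
    for _ in range(iterations):
        new = {site: (1 - damping) / len(sites) for site in sites}
        for site, outgoing in links.items():
            share = rank[site] / len(outgoing)
            for target in outgoing:
                new[target] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
# c.com receives links from both a.com and b.com, so it ends up ranked highest.
print(max(ranks, key=ranks.get))
```

No single human assigns these scores; they emerge from the interaction of the pages and the algorithm, which is what makes the rank "inter-computer" rather than intersubjective.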
  • Page 288 Increasingly, however, understanding American politics will necessitate understanding inter-computer realities ranging from AI-generated cults and currencies to AI-run political parties and even fully incorporated AIs. The U.S. legal system already recognizes corporations as legal persons that possess rights such as freedom of speech.
  • Page 288 In Citizens United v. Federal Election Commission (2010) the U.S. Supreme Court decided that this even protected the right of corporations to make political donations. What would stop AIs from being incorporated and recognized as legal persons with freedom of speech, then lobbying and making political donations to protect and expand AI rights?
  • Page 289 The problem we face is not how to deprive computers of all creative agency, but rather how to steer their creativity in the right direction.
  • Page 291 As computers replace humans in more and more bureaucracies, from tax collection and health care to security and justice, they too may create a mythology and impose it on us with unprecedented efficiency. In a world ruled by paper documents, bureaucrats had difficulty policing racial borderlines or tracking everyone's exact ancestry. People could get false documents. Digital surveillance faces no such limits: social credit systems, for example, could create a new underclass of "low-credit people." Such a system may claim to merely "discover" the truth through an empirical and mathematical process of aggregating points to form an overall score. But how exactly would it define pro-social and antisocial behaviors?
  • Page 294 The fundamental principle of machine learning is that algorithms can teach themselves new things by interacting with the world, just as humans do, thereby producing a fully fledged artificial intelligence. AI is not a dumb automaton that repeats the same movements again and again irrespective of the results. Rather, it is equipped with strong self-correcting mechanisms, which allow it to learn from its own mistakes.
  • Page 296 if real-life companies already suffer from some ingrained bias, the baby algorithm is likely to learn this bias, and even amplify it.
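How a "baby algorithm" inherits bias from its training data can be shown with a minimal sketch. The dataset below is invented: past hiring decisions in which group "A" was favored. The learner does nothing sinister, it simply estimates hire rates per group, and so reproduces whatever imbalance the data contains.

```python
# A minimal sketch of learned bias: an algorithm trained on biased past
# decisions encodes that bias as a "discovered" pattern.
from collections import defaultdict

# Hypothetical historical data: (group, was_hired)
past_decisions = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def learn_hire_rates(data):
    """Estimate the hire rate for each group from past decisions."""
    hired = defaultdict(int)
    total = defaultdict(int)
    for group, was_hired in data:
        total[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / total[g] for g in total}

# The model "discovers" that group A candidates are better hires -
# it has merely encoded the bias of the humans who made the decisions.
rates = learn_hire_rates(past_decisions)
print(rates)  # {'A': 0.75, 'B': 0.25}
```

If these learned rates are then used to screen new candidates, the bias is not only preserved but amplified: group B candidates are filtered out before they can ever generate counter-evidence.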
  • Page 297 But getting rid of algorithmic bias might be as difficult as ridding ourselves of our human biases.
  • Page 298 A social media algorithm thinks it has discovered that humans like outrage, when in fact it is the algorithm itself that conditioned humans to produce and consume more outrage. We saw in chapter 4 that already thousands of years ago humans dreamed about finding an infallible information technology to shield us from human corruption and error. Holy books were an audacious attempt to craft such a technology, but they backfired. Since the book couldn't interpret itself, a human institution had to be built to interpret the sacred words and adapt them to changing circumstances.
  • Page 299 But in contrast to the holy book, computers can adapt themselves to changing circumstances and also interpret their decisions and ideas for us. Some humans may consequently conclude that the quest for an infallible technology has finally succeeded and that we should treat computers as a holy book that can talk to us and interpret itself, without any need of an intervening human institution.
  • Page 299 algorithms are independent agents, and they are already taking power away from humans.
  • Page 300 One potential guardrail is to train computers to be aware of their own fallibility. As Socrates taught, being able to say "I don't know" is an essential step on the path to wisdom. And this is true of computer wisdom no less than of human wisdom. Baby algorithms should learn to doubt themselves and keep humans in the loop.
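This guardrail has a simple concrete form: an algorithm that abstains instead of forcing a decision, handing borderline cases to a human reviewer. The function and thresholds below are illustrative stand-ins for a real model's confidence output.

```python
# A sketch of an algorithm that can say "I don't know": decisions are
# issued only at high confidence; everything else is deferred to a human.

def decide(confidence, threshold=0.8):
    """Return a decision only when confidence is high enough;
    otherwise defer to a human reviewer."""
    if confidence >= threshold:
        return "approve"
    if confidence <= 1 - threshold:
        return "reject"
    return "refer to human"   # the algorithmic "I don't know"

assert decide(0.95) == "approve"
assert decide(0.10) == "reject"
assert decide(0.55) == "refer to human"
```

The design choice is deliberate inefficiency: the system trades some automation for the ability to keep humans in the loop exactly where its own judgment is weakest.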
  • Page 301 To conclude, the new computer network will not necessarily be either bad or good. All we know for sure is that it will be alien and it will be fallible. We therefore need to build institutions that will be able to check not just familiar human weaknesses like greed and hatred but also radically alien errors.
  • Page 309 By the end of the twentieth century, it had become clear that imperialism, totalitarianism, and militarism were not the ideal way to build industrial societies. Despite all its flaws, liberal democracy offered a better way. The great advantage of liberal democracy is that it possesses strong self-correcting mechanisms, which limit the excesses of fanaticism and preserve the ability to recognize our errors and try different courses of action. Given our inability to predict how the new computer network will develop, our best chance to avoid catastrophe in the present century is to maintain democratic self-correcting mechanisms that can identify and correct mistakes as we go along.
  • Page 310 Democracies can choose to use the new powers of surveillance in a limited way, in order to provide citizens with better health care and security without destroying their privacy and autonomy.
  • Page 311 The first principle is benevolence. When a computer network collects information on me, that information should be used to help me rather than manipulate me. Having access to our personal life comes with a fiduciary duty to act in our best interests.
  • Page 312 If the tech giants cannot square their fiduciary duty with their current business model, legislators could require them to switch to a more traditional business model of getting users to pay for services in money rather than in information. The second principle that would protect democracy against the rise of totalitarian surveillance regimes is decentralization. For the survival of democracy, some inefficiency is a feature, not a bug. To protect the privacy and liberty of individuals, it's best if neither the police nor the boss knows everything about us.
  • Page 313 A third democratic principle is mutuality. If democracies increase surveillance of individuals, they must simultaneously increase surveillance of governments and corporations too. What's bad is if all the information flows one way: from the bottom up.
  • Page 314 A fourth democratic principle is that surveillance systems must always leave room for both change and rest. New surveillance technology, especially when coupled with a social credit system, might force people either to conform to a novel caste system or to constantly change their actions, thoughts, and personality in accordance with the latest instructions from above.
  • Page 315 So an alternative health-care system may instruct its algorithm not to predict my illnesses, but rather to help me avoid them. But before we rush to embrace the dynamic algorithm, we should note that it too has a downside. Human life is a balancing act between endeavoring to improve ourselves and accepting who we are. If the goals of the dynamic algorithm are dictated by an ambitious government or by ruthless corporations, the algorithm is likely to morph into a tyrant, relentlessly demanding that I exercise more, eat less, change my hobbies, and alter numerous other habits, or else it would report me to my employer or downgrade my social credit score.
  • Page 316 Surveillance is not the only danger that new information technologies pose to democracy. A second threat is that automation will destabilize the job market and the resulting strain may undermine democracy.
  • Page 317 Unfortunately, nobody is certain what skills we should teach children in school and students in university, because we cannot predict which jobs and tasks will disappear and which ones will emerge. Intellectuals tend to appreciate intellectual skills more than motor and social skills. But actually, it is easier to automate chess playing than, say, dish washing.
  • Page 318 Another common but mistaken assumption is that creativity is unique to humans so it would be difficult to automate any job that requires creativity. A third mistaken assumption is that computers couldn't replace humans in jobs requiring emotional intelligence, from therapists to teachers. AI doesn't have any emotions of its own, but it can nevertheless learn to recognize these patterns in humans.
  • Page 320 In sports, for example, we know that robots can move much faster than humans, but we aren't interested in watching robots compete in the Olympics.[15] The same is true for human chess masters.
  • Page 321 Yet even professions that are the preserve of conscious entities-like priests-might eventually be taken over by computers, because, as noted in chapter 6, computers could one day gain the ability to feel pain and love. Even if they can't, humans may nevertheless come to treat them as if they can.
  • Page 322 Chatbots and other AIs may not have any feelings of their own, but they are now being trained to generate feelings in humans and form intimate relationships with us.
  • Page 324 numerous conservative parties have been hijacked by unconservative leaders such as Donald Trump and have been transformed into radical revolutionary parties.
  • Page 324 the Trumpian program talks more of destroying existing institutions and revolutionizing society. Nobody knows for sure why all this is happening. One hypothesis is that the accelerating pace of technological change with its attendant economic, social, and cultural transformations might have made the moderate conservative program seem unrealistic. If conserving existing traditions and institutions is hopeless, and some kind of revolution looks inevitable, then the only means to thwart a left-wing revolution is by striking first and instigating a right-wing revolution. This was the political logic in the 1920s and 1930s, when conservative forces backed radical fascist revolutions in Italy, Germany, Spain, and elsewhere as a way-so they thought-to preempt a Soviet-style left-wing revolution.
  • Page 325 When both conservatives and progressives resist the temptation of radical revolution, and stay loyal to democratic traditions and institutions, democracies prove themselves to be highly agile.
  • Page 326 The most important human skill for surviving the twenty-first century is likely to be flexibility, and democracies are more flexible than totalitarian regimes.
  • Page 330 By the early 2020s citizens in numerous countries routinely get prison sentences based in part on risk assessments made by algorithms that neither the judges nor the defendants comprehend.[31] And prison sentences are just the tip of the iceberg.
  • Page 331 Computers are making more and more decisions about us, both mundane and life-changing. In addition to prison sentences, algorithms increasingly have a hand in deciding whether to offer us a place at college, give us a job, provide us with welfare benefits, or grant us a loan. They similarly help determine what kind of medical treatment we receive, what insurance premiums we pay, what news we hear, and who would ask us on a date.
  • Page 333 The rise of unfathomable alien intelligence undermines democracy. If more and more decisions about people's lives are made in a black box, so voters cannot understand and challenge them, democracy ceases to function. In particular, what happens when crucial decisions not just about individual lives but even about collective matters like the Federal Reserve's interest rate are made by unfathomable algorithms? Human voters may keep choosing a human president, but wouldn't this be just an empty ceremony?
  • Page 334 The increasing unfathomability of our information network is one of the reasons for the recent wave of populist parties and charismatic leaders. When people feel overwhelmed by immense amounts of information they cannot digest, they become easy prey for conspiracy theories, and they turn for salvation to something they do understand-a human.
  • Page 336 How can a human mind analyze and evaluate a decision made on the basis of so many data points?
  • Page 337 There is, however, a silver lining to this cloud of numbers. While individual laypersons may be unable to vet complex algorithms, a team of experts getting help from their own AI sidekicks can potentially assess the fairness of algorithmic decisions even more reliably than anyone can assess the fairness of human decisions.
  • Page 338 To vet algorithms, regulatory institutions will need not only to analyze them but also to translate their discoveries into stories that humans can understand. Because computers will increasingly replace human bureaucrats and human mythmakers, this will again change the deep structure of power.
  • Page 340 The new computer network poses one final threat to democracies. Instead of digital totalitarianism, it could foster digital anarchy. To function, a democracy needs to meet two conditions: it needs to enable a free public conversation on key issues, and it needs to maintain a minimum of social order and institutional trust. Now, with the rise of the new computer network, might large-scale democracy again become impossible? One difficulty is that the computer network makes it easier to join the debate. In the past, organizations like newspapers, radio stations, and established political parties acted as gatekeepers, deciding who was heard in the public sphere. Social media undermined the power of these gatekeepers, leading to a more open but also more anarchical public conversation.
  • Page 342 So, what happens to democratic debates when millions-and eventually billions-of highly intelligent bots are not only composing extremely compelling political manifestos and creating deepfake images and videos but also able to win our trust and friendship?
  • Page 343 In the face of the threat algorithms pose to the democratic conversation, democracies are not helpless. They can and should take measures to regulate AI and prevent it from polluting our infosphere with fake people spewing fake news.
  • Page 344 Digital agents are welcome to join many conversations, provided they don't pretend to be humans. Another important measure democracies can adopt is to ban unsupervised algorithms from curating key public debates.
  • Page 345 For most of history large-scale democracy was impossible because information technology wasn't sophisticated enough to hold a large-scale political conversation.
  • Page 346 We cannot foretell how things will play out. It is clear, however, that the information network of many democracies is breaking down.
  • Page 348 However, as of 2024, more than half of "us" already live under authoritarian or totalitarian regimes,[2] many of which were established long before the rise of the computer network. Modern information technology enabled the rise of both large-scale democracy and large-scale totalitarianism, but totalitarianism suffered from a severe disadvantage.
  • Page 349 Technologies like the telegraph, the telephone, the typewriter, and the radio facilitated the centralization of information, but they couldn't process the information and make decisions by themselves. This remained something that only humans could do. The rise of machine-learning algorithms, however, may be exactly what the Stalins of the world have been waiting for. Even in democratic countries, a few corporations like Google, Facebook, and Amazon have become monopolies in their domains, partly because AI tips the balance in favor of the giants.
  • Page 352 Russia's human engineers can do their best to create AIs that are totally aligned with the regime, but given the ability of AI to learn and change by itself, how can the human engineers ensure that the AI never deviates into illicit territory?
  • Page 354 In the long term, totalitarian regimes are likely to face an even bigger danger: instead of criticizing them, an algorithm might gain control of them.
  • Page 358 Whereas democracies assume that everyone is fallible, in totalitarian regimes the fundamental assumption is that the ruling party or the supreme leader is always right.
  • Page 361 Computers are not yet powerful enough to completely escape our control or destroy human civilization by themselves. As long as humanity stands united, we can build institutions that will control AI and will identify and correct algorithmic errors. Unfortunately, humanity has never been united. We have always been plagued by bad actors, as well as by disagreements between good actors. The rise of AI, then, poses an existential danger to humankind not because of the malevolence of computers but because of our own shortcomings.
  • Page 362 As we have seen in previous chapters, human civilization is threatened not only by physical and biological weapons of mass destruction like atom bombs and viruses. Human civilization could also be destroyed by weapons of social mass destruction, like stories that undermine our social bonds.
  • Page 369 On September 1, 2017, President Putin of Russia declared, "Artificial intelligence is the future, not only for Russia, but for all humankind…. Whoever becomes the leader in this sphere will become the ruler of the world."
  • Page 370 But what began as a commercial competition between corporations was turning into a match between governments, or perhaps more accurately, into a race between competing teams, each made of one government and several corporations. The prize for the winner? World domination.
  • Page 375 It is becoming difficult to access information across the Silicon Curtain, say between China and the United States, or between Russia and the EU. Moreover, the two sides are increasingly run on different digital networks, using different computer codes. Each sphere obeys different regulations and serves different purposes. In the United States, the government plays a more limited role. Private enterprises lead the development and deployment of AI, and the ultimate goal of many new AI systems is to enrich the tech giants rather than to strengthen the American state or the current administration.
  • Page 381 An increasingly important question is, Can people adopt any virtual identity they like, or should their identity be constrained by their biological body?
  • Page 382 it is probable that within a few decades the computer network will cultivate new human and nonhuman identities that make little sense to us.
  • Page 385 Global cooperation and patriotism are not mutually exclusive.
  • Page 386 global cooperation means two far more modest things: first, a commitment to some global rules.
  • Page 386 The second principle of globalism is that sometimes-not always, but sometimes-it is necessary to prioritize the long-term interests of all humans over the short-term interests of a few.
  • Page 387 Forging and keeping international agreements on AI will require major changes in the way the international system functions.

Epilogue
  • Page 398 One lesson is that the invention of new information technology is always a catalyst for major historical changes, because the most important role of information is to weave new networks rather than represent preexisting realities.
  • Page 399 The invention of AI is potentially more momentous than the invention of the telegraph, the printing press, or even writing, because AI is the first technology that is capable of making decisions and generating ideas by itself.
  • Page 401 Let's return now to the question I posed at the beginning of this book: If we are so wise, why are we so self-destructive?
  • Page 402 This book has argued that the fault isn't with our nature but with our information networks. Due to the privileging of order over truth, human information networks have often produced a lot of power but little wisdom.
  • Page 402 Accordingly, as a network becomes more powerful, its self-correcting mechanisms become more vital.
  • Page 403 Unfortunately, despite the importance of self-correcting mechanisms for the long-term welfare of humanity, politicians might be tempted to weaken them.