ENGL 8122  ※  User-Experience Research & Writing



Supremacy: AI, ChatGPT, and the Race that Will Change the World

  • Page x Many AI builders say this technology promises a path to utopia. Others say it could bring about the collapse of our civilization. In reality, the science fiction scenarios have distracted us from the more insidious ways AI is threatening to harm society by perpetuating racism, threatening entire creative industries, and more.
  • Page x No other organizations in history have amassed so much power or touched so many people as today's tech giants.
  • Page x The AI future has been written by just two men: Sam Altman and Demis Hassabis.
  • Page x Altman was the reason the world got ChatGPT. Hassabis was the reason we got it so quickly.
  • Page xi Hassabis risked scientific ridicule when he established DeepMind, the first company in the world intent on building AI that was as smart as a human being.
  • Page xi He wanted to make scientific discoveries about the origins of life, the nature of reality, and cures for disease. "Solve intelligence, and then solve everything else," he said.
  • Page xi A few years later, Altman started OpenAI to try to build the same thing but with a greater focus on bringing economic abundance to humanity, increasing material wealth, and helping "us all live better lives," he tells me. "This can be the greatest tool humans have yet created, and let each of us do things far outside the realm of the possible."
  • Page xii if you ask a popular AI tool to generate images of women, it'll make them sexy and scantily clad; ask it for photorealistic CEOs, and it'll generate images of white men; ask for a criminal, and it will often generate images of Black men. Such tools are being woven into our media feeds, smartphones, and justice systems, without due care for how they might shape public opinion.
  • Page xii Companies are throwing money at AI software to help displace their employees and boost profit margins. And a new breed of personal AI devices that can conduct an unimaginable new level of personal surveillance is cropping up.
  • Page xiii I'll explain how we got here, and how the visions of two innovators who tried to build AI for good were eventually ground down by the forces of monopoly.

Act 1: The Dream
  • Page 7 He'd play hours of poker at a popular casino in San Jose, honing his skills of psychological maneuvering and influence. Poker is all about watching others and sometimes misdirecting them about the strength of your hand, and Altman became so good at bluffing and reading his opponents' subtle cues that he used his winnings to fund most of his living expenses as a college student. "I would have done it for free," he would later tell one podcast. "I loved it so much. I strongly recommend it as a way to learn about the world and business and psychology."
  • Page 8 Stanford's AI lab,
  • Page 8 The AI lab had just been reopened and its leader was Sebastian Thrun,
  • Page 8 Thrun taught his students about machine learning, a technique that computers used to infer concepts from being shown lots of data instead of being programmed to do something specific.
  • Page 8 the term learning was misleading: machines can't think and learn as humans do.
  • Page 9 Academics like Thrun built AI systems. Stanford students like Altman built start-ups that became companies like Google, Cisco, and Yahoo.
  • Page 9 Altman and Sivo decided to join the three-month program, called Y Combinator, and create a start-up.
  • Page 10 You didn't need a brilliant idea to start a successful tech company. You just needed a brilliant person behind the wheel.
  • Page 10 Bootstrap your company, start with a minimum viable product, and optimize it over time.
  • Page 11 Thanks to something called a dual-class share structure, many tech start-up founders, including those behind Airbnb and Snapchat, could hold these unusual levels of control of their companies. Graham and others believed founders had this authority for good reason.
  • Page 12 Though he was a decent enough programmer, the boyish-faced Altman was an even better businessman. He had no qualms about calling up executives from Sprint, Verizon, and Boost Mobile
  • Page 12 pitching a grand vision about changing the way people socialized and used their phones.
  • Page 12 Speaking in low tones and using elegant turns of phrase that he'd honed from his creative writing classes, he explained that Loopt would one day be essential to anyone who had a mobile.
  • Page 13 With all that funding, Altman dropped out of Stanford University to work on Loopt full-time. As the aughts wore on, Facebook was growing considerably faster than Loopt
  • Page 15 In the end, consumers did that for him. Altman had miscalculated how uncomfortable they felt about pinging their GPS coordinates to meet up with others. "I learned you can't make humans do something they don't want to do," he would go on to say.
  • Page 16 In 2012, Altman sold it to a gift-card company for about $43 million, barely covering what was owed to investors and his employees. Loopt's collapse emboldened him with a greater conviction that he should do something more meaningful.
  • Page 17 That would lead him to chase an even grander objective: saving humanity from a looming existential threat and then bringing it an abundance of wealth unlike anything it had seen.
  • Page 18 Years before Hassabis would become the front-runner in a race to build the world's smartest AI systems, he was learning how to run a business via simulation, something that would become a running theme in his life's work and in his quest to build machines more intelligent than humans.
  • Page 19 But Hassabis thought the best video games were simulations that acted as microcosms of real life.
  • Page 19 Hassabis would eventually become gripped by a powerful desire to use them to create an artificial superintelligence that would help him unlock the secrets of human consciousness.
  • Page 19 Hassabis grew up an enigma himself, the lone mathematical genius in a family of bohemian creatives.
  • Page 21 Just as poker taught Sam Altman about psychology and business, chess taught Hassabis how to strategize by starting with the end in mind. You visualized a goal and worked backward.
  • Page 23 If he studied computer science and the burgeoning field of artificial intelligence, he could build the ultimate scientific tool and make discoveries that improved the human condition.
  • Page 25 They imagined AI eventually writing music and poetry and even designing games.
  • Page 26 Hassabis met members of his future inner circle at Cambridge, including Ben Coppin, another computer science student who would go on to lead product development at DeepMind, and with whom he talked about religion and how AI could solve global problems. But DeepMind was still more than a decade away.
  • Page 28 The former chess champion hired the smartest programmers he could find, many of them graduates from Oxford University and Cambridge.
  • Page 29 There was no better way to showcase the magical capabilities of AI than through a game. At the time, the most advanced AI research was happening in the gaming industry as smarter software helped create living worlds and a new style called emergent gameplay.
  • Page 33 He would admit publicly that nearly all the companies he backed failed, but he figured he was training a muscle for identifying the projects that were most likely to succeed. It was OK to be frequently wrong, he believed, so long as you were occasionally "right in a big way," such as by backing a start- up that turned out to be a blockbuster and then making a spectacular exit.
  • Page 34 Altman was building off a Silicon Valley mindset that saw life itself as an engineering conundrum. You could solve all manner of big problems by using the same steps you took to optimize an app.
  • Page 34 These prized methods naturally extended to other parts of society and life.
  • Page 36 Altman also found Silicon Valley's constant striving for extreme wealth slightly distasteful. He was more interested in the glory that came from building exciting projects.
  • Page 37 However unseasoned Altman was, he'd made such a strong impression on Graham and Livingston that they never bothered to make a list of possible new leaders for YC.
  • Page 39 Most tech entrepreneurs shared an implicit understanding that rescuing humanity was mostly a marketing ploy for the public and their employees, especially since their firms were building widgets that helped streamline email or do laundry.
  • Page 39 Altman eventually shifted the majority of his money into two other ambitious goals besides AI: extending life and creating limitless energy, betting on two companies. More than $375 million went into Helion and another $180 million into Retro Biosciences, a start-up that was working on adding ten years to the average human lifespan.
  • Page 40 Don't ask people what they do, Altman wrote. Instead, ask what someone is interested in.
  • Page 41 But his real gift as an entrepreneur was his power to persuade others of his authority. "One thing I realized through meditation is that there is no self that I can identify with in any way at all," he told the Art of Accomplishment podcast. "I've heard that a lot of people spending a lot of time thinking about [powerful AI] get to that in a different way too." He was surrounded by technologists who believed they might also one day upload their consciousness to computer servers, where they could live on in perpetuity.
  • Page 43 The people who thrived in the future would take a detached and informed approach to tech advancements.
  • Page 44 The Silicon Valley entrepreneur needed a rival to spark his own endeavor, and that person was on the other side of the world in England, a brilliant young game designer who was planning to build software so powerful that it could make profound discoveries about science and even God.
  • Page 46 Hassabis pursued a PhD in neuroscience at University College London. Until then, it was thought that the brain's hippocampus mostly processed memories, but Hassabis showed (with the help of other studies of MRI scans in his thesis) that it was also activated during the act of imagination.
  • Page 47 His thesis was cited as one of the most important scientific breakthroughs that year by a leading peer-reviewed journal.
  • Page 50 The term artificial intelligence was coined back in 1956 at a workshop at Dartmouth College that was aimed at pulling together ideas about "thinking machines." It isn't technically accurate, for instance, to suggest that computers can "think" or "learn," but phrases like neural network, deep learning, and training help promote that idea in our minds by lending software humanlike qualities, even when they're only loosely inspired by the human brain.
  • Page 51 Suleyman already knew Hassabis well. Having grown up in North London, he was a friend of Hassabis's brother, George, and had been a frequent visitor to their home in his teens. The trio had even traveled to Las Vegas to play at a poker tournament in their twenties, coaching one another and splitting the winnings.
  • Page 52 Hassabis summed up that view in DeepMind's tagline: "Solve intelligence and use it to solve everything else." He put it on their slide deck for investors.
  • Page 53 But Suleyman disagreed with that vision. One day when Hassabis wasn't around, he told one of DeepMind's early staff members to change it on a slide presentation. It now read: "Solve intelligence and use it to make the world a better place." Suleyman wanted to build AGI in the way Sam Altman eventually would, by sending it out into the world to be immediately useful.
  • Page 56 With his deep pockets and enthusiasm for ambitious projects, Thiel was the perfect person to fund DeepMind.
  • Page 56 While most entrepreneurs believed competition drove innovation, Thiel argued in his book Zero to One that monopolies did that better.
  • Page 61 Once he was an investor, Tallinn pushed DeepMind to focus on safety. He knew that Hassabis wasn't as worried about the apocalyptic risks of AI as he was, so he put pressure on the company to hire a team of people that would study all the different ways they could design AI to keep it aligned with human values and prevent it from going off the rails.
  • Page 62 [Superintelligence] Bostrom warned that building "general" or powerful AI could lead to a disastrous outcome for humans, but he pointed out that it might not necessarily destroy us because it was malevolent or power-hungry. It might just be trying to do its job, as in his famous thought experiment of a machine that converts everything, including us, into paper clips in pursuit of a mundane goal.
  • Page 64 Instead of focusing on money, their job would be to make sure DeepMind was building AI as safely and ethically as possible. Hassabis and Legg weren't convinced at first, but Suleyman was persuasive and they eventually agreed to the idea.
  • Page 65 The turning point had come in 2012. A Stanford AI professor named Fei-Fei Li had created an annual challenge for academics called ImageNet, to which researchers submitted AI models that tried to visually recognize images of cats, furniture, cars, and more
  • Page 65 That year, scientist Geoffrey Hinton's team of researchers used deep learning to create a model that was far more accurate than anything before, and their results stunned the AI field. Suddenly everybody wanted to hire experts in this deep-learning AI theory inspired by how the brain recognized patterns.
  • Page 69 A neural network is a type of software that gets built by being trained over and over with lots of data. Once it's been trained, it can recognize faces, predict chess moves, or recommend your next Netflix movie.
  • Page 69 Also known as a "model," a neural network is often made up of many different layers and nodes that process information in a vaguely similar way to our brain's neurons. The more the model is trained, the better those nodes get at predicting or recognizing things.
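A minimal sketch (not from the book; the task, layer sizes, and learning rate are my own illustrative choices) of the idea in the two notes above: a tiny network with one hidden layer of "nodes," trained over and over on toy data until its predictions improve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR of two inputs, a classic example a single node can't solve alone.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights and biases: 2 inputs -> 8 hidden nodes -> 1 output node.
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: each layer of nodes transforms the data a little more.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge every weight to shrink the prediction error.
    error = output - y
    grad_out = error * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ grad_out
    b2 -= grad_out.sum(axis=0, keepdims=True)
    W1 -= X.T @ grad_hidden
    b1 -= grad_hidden.sum(axis=0, keepdims=True)

# After many training passes the outputs typically sit close to the targets [0, 1, 1, 0].
print(np.round(output, 2))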
  • Page 70 What Ng had really wanted to do with his scientific research was free humanity from mental drudgery, in the same way the Industrial Revolution had liberated us from constant physical labor.
  • Page 71 As a technique, reinforcement learning wasn't all that different to how you might reward a dog with treats whenever it sits on command. In training AI, you would similarly reward the model, perhaps with a numerical signal like a +1, to show that a certain outcome was good. Through repeated trial and error, and playing hundreds of games over and over, the system learned what worked and what didn't. It was an elegantly simple idea wrapped in highly sophisticated computer code.
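A minimal sketch (my own toy example, not DeepMind's code) of that reward idea: an agent plays a tiny five-square "walk to the goal" game hundreds of times, receives a +1 only when it reaches the goal, and learns through trial and error which moves pay off (tabular Q-learning, the simplest form of the technique).

```python
import random

n_states, goal = 5, 4            # positions 0..4 on a line; the goal sits at position 4
actions = [-1, +1]               # move left or move right
Q = [[0.0, 0.0] for _ in range(n_states)]   # the agent's value estimate per (state, action)

for episode in range(500):       # play the tiny game over and over
    state, steps = 0, 0
    while state != goal and steps < 100:
        steps += 1
        # Trial and error: explore a random move sometimes, otherwise exploit
        # the move the agent currently believes is best.
        if random.random() < 0.3:
            a = random.randrange(2)
        else:
            a = Q[state].index(max(Q[state]))
        next_state = min(max(state + actions[a], 0), n_states - 1)
        reward = 1.0 if next_state == goal else 0.0    # the +1 signal for a good outcome
        # Nudge the estimate toward the reward plus the value of whatever comes next.
        Q[state][a] += 0.5 * (reward + 0.9 * max(Q[next_state]) - Q[state][a])
        state = next_state

# Learned policy for the squares before the goal: typically [1, 1, 1, 1], i.e. "move right."
print([row.index(max(row)) for row in Q[:goal]])
```

DeepMind applied the same reward-and-repeat loop to Atari games and Go, with deep neural networks standing in for this little table.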
  • Page 74 The basic premise of transhumanism is that the human race is currently second-rate. With the right scientific discoveries and technology, we might one day evolve beyond our physical and mental limits into a new, more intelligent species. We'll be smarter and more creative, and we'll live longer. We might even manage to meld our minds with computers and explore the galaxy.
  • Page 74 Huxley himself came from an aristocratic family (his brother Aldous wrote Brave New World), and he believed society's upper crust was genetically superior.
  • Page 74 When the Nazis latched on to the eugenics movement, Huxley decided it needed a rebrand. He coined a new term, transhumanism,
  • Page 74 This idea was crystallized in the concept of the singularity, a point in the future when AI and technology became so advanced that humankind would undergo dramatic and irreversible change, merging with machines and enhancing themselves with technology.
  • Page 76 Bostrom's Superintelligence. The book had a paradoxical impact on the AI field. It managed to stoke greater fear about the destruction that AI could bring by "paper-clipping us," but it also predicted a glorious utopia that powerful AI could usher in if created properly.
  • Page 76 These ideas were irresistible to some people in Silicon Valley, who believed such fantastical ways of life were achievable with the right algorithms. By painting a future that could look like either heaven or hell, Bostrom sparked a prevailing wisdom that would eventually drive the Silicon Valley AI builders like Sam Altman to race to build AGI before Demis Hassabis did in London: they had to build AGI first because only they could do so safely.
  • Page 76 If not, someone else might build AGI that was misaligned with human values and annihilate not just the few billion people living on Earth but potentially trillions of perfect new digital human beings in the future. We would all lose the opportunity to live in nirvana.
  • Page 77 When the deal was finally inked and the ethics board added to the acquisition agreement, Google was buying DeepMind for $650 million.
  • Page 78 Now instead of worrying about Facebook or Amazon poaching his staff, Hassabis could poach their staff and lure some of the greatest AI minds from academia with eye-popping salaries.
  • Page 80 Hassabis believed so fervently in the transformative effects of AGI that he told DeepMind's staff they wouldn't have to worry about making money in about five years, because AGI would make the economy obsolete, former employees say.
  • Page 84 The more Hassabis learned about OpenAI, the more his anger rose. He had been the first person in the world to make a serious run at building artificial general intelligence, and given what a fringe idea it had been five years earlier, he'd put his neck on the line with the scientific community by doing so.
  • Page 85 Hassabis questioned OpenAI's promises to release its technology to the public. That approach to being "open" seemed reckless.
  • Page 86 DeepMind published some of its research in well-known journals, but it kept the full details of its code and AI technology under tight control. It didn't release the AI models it had created to master the game Breakout, for instance. Whatever his reason for turning on DeepMind, Musk was stoking what would become an intense rivalry between the two organizations.
  • Page 88 Later, Musk would say on Twitter that he had started OpenAI because he wanted to create a "counterweight to Google" and because he wanted AI to be developed more safely. But there was no doubt that AI was critical to the financial success of his companies, whether it was the self-driving capabilities of Tesla cars, the systems steering SpaceX's unmanned rockets, or the models underpinning his upcoming brain-computer interface company Neuralink.
  • Page 89 While Hassabis had believed that AGI would unlock the mysteries of science and the divine, Altman would say he saw it as the route to financial abundance for the world.
  • Page 97 To build AGI, OpenAI's founding team needed to attract more money and talent, so they tried focusing on projects that could generate positive stories in the press.
  • Page 98 Although OpenAI eventually gained worldwide acclaim for its work on chatbots and large language models, its first few years were spent toiling on multiagent simulations and reinforcement learning, fields that DeepMind already dominated.
  • Page 100 As Musk left OpenAI, he took its main source of funding with him. This was a disaster for Altman, who was approaching a critical juncture. Working out of OpenAI's office in San Francisco, he thought about how he could keep the nonprofit going on severely limited resources and build AI models that were likely to be subpar compared to the rest of the field.
  • Page 109 Yet even as they sought to carve themselves away from Google, DeepMind was simultaneously helping bolster Google's business. Around the time Google's Larry Page was promising to help DeepMind spin out, he was looking to China as a new opportunity for expansion.
  • Page 112 Hassabis didn't just want to impress his new boss. As well as being an accomplished scientist, he was an exceptional marketer. He understood that if AlphaGo could beat a global champion of Go in the same way IBM's Deep Blue computer had beaten chess's Garry Kasparov in 1997, it would create a thrilling new milestone for AI and cement DeepMind's credibility as a leader in the field. DeepMind had its sights on South Korea's Lee Sedol and challenged him to a five-game match in Seoul in March 2016.
  • Page 113 It was a landmark moment for AI that gave DeepMind the biggest period of press attention it had ever received, including an award-winning Netflix documentary about AlphaGo.
  • Page 118 In AI, "ethics" and "safety" can refer to different research goals, and in recent years, their proponents have been at odds with one another. Researchers who say they work in AI safety tend to swim in the same waters as Yudkowsky and Jaan Tallinn and want to ensure that a superintelligent AGI system won't cause catastrophic harm to people in the future, for instance by using drug-discovery tools to design chemical weapons that wipe people out, or by spreading misinformation across the internet to completely destabilize society. Ethics research, on the other hand, focuses more on shaping how AI systems are designed and used today; its researchers study how the technology might already be harming people.
  • Page 121 there's one thing that nearly all the world's most valuable companies have in common: they are tech firms.
  • Page 122 How did they get so big? They bought companies like DeepMind, YouTube, and Instagram, and they sucked up a prodigious amount of data about consumers, allowing some of them to target us with advertisements and recommendations that could influence human behavior on a massive scale.
  • Page 122 The companies are incentivized to keep us as addicted as possible to their platforms, since that generates more ad dollars.
  • Page 123 All that personalized "content delivery" has also amped up the generational and political divisions between millions of people, since the most engaging content tends to be the kind that provokes outrage. While this engagement-based model had toxic effects on society, it incentivized Facebook to do one thing: become as big as possible. The basic idea of network effects is that the more users and customers a company has, the better their algorithms will become, making it increasingly difficult for competitors to catch up, further entrenching their grip on the market.
  • Page 124 We have no historical reference point for what happens when companies become this big. The market cap numbers that Google, Amazon, and Microsoft are currently achieving have never been seen before. And while they bring greater wealth to the shareholders of those companies, including pension funds, they have also centralized power in such a way that the privacy, identity, public discourse, and increasingly the job prospects of billions of people are beholden to a handful of large firms, run by a handful of unfathomably wealthy people.
  • Page 125 [Timnit Gebru] While it seemed like these systems could be the perfect neutral arbiter, they often were not. If the data they were trained on was biased, so was the system. And Gebru was painfully aware of bias. AI could make that worse. For a start, it was typically designed by people who hadn't experienced racism, which was one reason why the data being used to train AI models also often failed to fairly represent people from minority groups and women.
  • Page 126 While writing her PhD thesis at Stanford, Gebru pointed to another example of how authorities could use AI in disturbing ways.
  • Page 127 AI was spreading other stereotypes online, too, in subtle but insidious ways. [Gebru pushed back on the idea that the field's only problem was being] too focused on deep learning. "A white tech tycoon born and raised in South Africa during apartheid, along with an all-white, all-male set of investors and researchers is trying to stop AI from ‘taking over the world' and the only potential problem we see is that ‘all the researchers are working on deep learning?'" she wrote. "Google recently came out with a computer vision algorithm that classified Black people as Apes. AS APES. Some try to explain away this mishap by stating that the algorithm must have picked out color as an essential discriminator in classifying humans. If there was even one Black person [on] the team, or just someone who thinks about race, a product classifying Black people as apes would not have been released.… Imagine an algorithm that regularly classifies white people as nonhuman. No American company would call this a production-ready person detection system."
  • Page 128 One way to limit AI models from making biased decisions was to spend more time analyzing the data they were trained on. Another was to make them narrower in scope, which would blow a hole in the goal of giving AI systems the power to generalize their knowledge.
  • Page 129 In just the same way Big Oil redirected the world's attention from their own significant environmental impact, AI's leading builders could exploit the buzz around a future Terminator or Skynet to distract from the present-day problems that machine learning algorithms were causing.
  • Page 130 Each time AI's capabilities grew, an unintended consequence arose that often caused harm to a minority group. Facial recognition systems were nearly perfect at recognizing the faces of white men, but often made mistakes with Black women.
  • Page 131 Figuring out why AI systems make mistakes is much harder than people think, especially as they become more sophisticated.
  • Page 132 Some AI researchers say it's too difficult to fix these biases, arguing that modern-day AI models are so complex that even their creators don't understand why they make certain decisions.
  • Page 136 Silicon Valley tended to measure success with two metrics: how much money you had raised from investors, and how many people you had hired.
  • Page 137 The problem with being so big was that if someone did invent something groundbreaking inside Google, it might struggle to see the light of day.
  • Page 137 The transformer has become critical to the new wave of generative AI that can produce realistic text, images, videos, DNA sequences, and many other kinds of data. The transformer's invention in 2017 was about as impactful to the field of AI as the advent of smartphones was for consumers.
  • Page 138 Transformers broadened the scope of what AI engineers could do.
  • Page 139 Transformers could deal with nuance and slang. They could refer back to that thing you said a few sentences earlier.
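A rough sketch (illustrative only; the sentence, vector sizes, and random weights are placeholders for what a trained model would learn) of the attention mechanism that lets a transformer "refer back": every word scores its relevance to every other word, so a word like "it" can pull most of its meaning from something mentioned earlier.

```python
import numpy as np

words = ["the", "robot", "fell", "because", "it", "tripped"]
d = 8
rng = np.random.default_rng(0)

# Stand-in word vectors and projection matrices; a real model learns all of these.
E = rng.normal(size=(len(words), d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = E @ Wq, E @ Wk, E @ Wv

# Scaled dot-product attention: every word scores every other word...
scores = Q @ K.T / np.sqrt(d)
scores -= scores.max(axis=-1, keepdims=True)             # for numerical stability
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax over each row
context = weights @ V                                     # ...then mixes in their information

# The attention row for "it" shows how much it looks at every other word. With trained
# weights, "robot" would dominate this row; here the weights are random, so the printout
# only demonstrates the mechanism.
print(dict(zip(words, np.round(weights[words.index("it")], 2))))
```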
  • Page 142 [Coreference resolution] referred to the task of finding all expressions that refer to the same entity in a text.
  • Page 145 …product of bloat. The downside to being one of the largest companies of all time, with a monopolistic grip on the search market, is that everything moves at a snail's pace. You're constantly afraid of public backlash or regulatory scrutiny. Your prime concern is maintaining growth and dominance.
  • Page 151 A mini cold war was also brewing between Sam Altman and Demis Hassabis, and OpenAI's convivial board member Reid Hoffman was looking for ways to get the two of them to "smoke the peace pipe," according to someone who heard the comment directly.
  • Page 153 Ilya Sutskever, OpenAI's star scientist, couldn't stop thinking about what the transformer could do with language. Google was using it to better understand text. What if OpenAI used it to generate text?
  • Page 153 Large language models themselves were still a joke. Their responses were mostly scripted and they'd often make wacky mistakes.
  • Page 154 Making it "decoder only" would also be a game-changer. By combining a model's ability to "understand" and speak into one fluid process, it could ultimately generate more humanlike text.
  • Page 154 Thanks to the transformer, Radford was making more progress with his language model experiments in two weeks than over the previous two years. He and his colleagues started working on a new language model they called a "generatively pre-trained transformer" or GPT for short. They trained it on an online corpus of about seven thousand mostly self-published books found on the internet, many of them skewed toward romance and vampire fiction.
  • Page 155 The dataset was called BooksCorpus, and anyone could download it for free.
  • Page 156 To refine their new GPT model, Radford and his colleagues scraped more content from the public internet, training the model on questions and answers from the online forum Quora, along with thousands of passages from English exams given to Chinese school kids. It also did something that got Radford's team excited: it could generate text on topics it hadn't been specifically trained on. While they couldn't explain exactly how that worked, this was good news. It meant they were on the road toward building a general-purpose system. The bigger its training corpus, the more knowledgeable it would become. But GPT was different because it was learning how language worked from a mountain of seemingly random text that wasn't labeled. It didn't have the guiding hand of those human labelers.
  • Page 157 Once the initial training was done, they fine-tuned the new model using some labeled examples to get better at specific tasks. This two-step approach made GPT more flexible and less reliant on having lots of labeled examples.
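A compressed sketch (mine, not OpenAI's code; the toy model, random token data, and sizes are stand-ins) of that two-step recipe: pre-train on unlabeled sequences by predicting the next token, then fine-tune the same network on a small labeled set for a specific task.

```python
import torch
import torch.nn as nn

vocab_size, d_model = 100, 32

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)  # stand-in for a transformer
        self.lm_head = nn.Linear(d_model, vocab_size)           # predicts the next token
        self.cls_head = nn.Linear(d_model, 2)                   # used later for fine-tuning

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return h

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Step 1: pre-training on "unlabeled" token sequences (next-token prediction).
unlabeled = torch.randint(0, vocab_size, (64, 20))   # stand-in for a mountain of raw text
for _ in range(200):
    h = model(unlabeled[:, :-1])
    logits = model.lm_head(h)
    loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size),
                                       unlabeled[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Step 2: fine-tuning on a small labeled set (say, positive/negative labels).
labeled_x = torch.randint(0, vocab_size, (16, 20))
labeled_y = torch.randint(0, 2, (16,))
for _ in range(100):
    h = model(labeled_x)
    logits = model.cls_head(h[:, -1])                 # classify from the last hidden state
    loss = nn.functional.cross_entropy(logits, labeled_y)
    opt.zero_grad(); loss.backward(); opt.step()
```

The flexibility described in the note comes from step 1 doing most of the work: the expensive learning happens on unlabeled text, and only a comparatively tiny labeled set is needed at the end.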
  • Page 159 That was the predicament OpenAI found itself in. It needed to rent more cloud computers, and it was also running out of money.
  • Page 160 The whole thing sounded magnanimous. OpenAI was framing itself as an organization that was so highly evolved that it was putting the interests of humanity above traditional Silicon Valley pursuits like profit and even prestige. A key line was "broadly distributed benefits," or handing out the rewards of AGI to all of humanity.
  • Page 161 He didn't want to completely lose control of OpenAI by selling it to a larger tech company—as DeepMind had done with Google.
  • Page 164 Almost immediately, Tay started generating racist, sexually charged, and often nonsensical tweets. Microsoft quickly shut down the system, which had only been going for about sixteen hours, and blamed a coordinated trolling attack by a subset of people who'd exploited a vulnerability in Tay.
  • Page 166 Nadella realized that the real return on a $1 billion investment in OpenAI wasn't going to come from the money after a sale or stock market flotation. It was the technology itself. OpenAI was building AI systems that could one day lead to AGI, but along the way, as those systems became more powerful, they could make Azure a more attractive service to customers. Artificial intelligence was going to become a fundamental part of the cloud business, and cloud was on track to make up half of Microsoft's annual sales. If Microsoft could sell some cool new AI features—like chatbots that could replace call center workers—to its corporate customers, those customers were less likely to leave for a competitor. The more features they signed up for, the harder it would be to switch.
    The reason for that is a little technical, but it's critical to Microsoft's power. When a company like eBay, NASA, or the NFL—all customers of Microsoft's cloud service—builds a software application, that software will have dozens of different connections into Microsoft. Switching them off can be complex and expensive, and IT professionals resentfully call this "vendor lock-in." It's why three tech giants—Amazon, Microsoft, and Google—have a stranglehold on the cloud business.
    It became clear to Microsoft's CEO that OpenAI's work on large language models could be more lucrative than the research carried out by his own AI scientists, who seemed to have lost their focus after the Tay disaster. Nadella agreed to make a $1 billion investment in OpenAI. He wasn't just backing its research but also planting Microsoft at the forefront of the AI revolution. In return, Microsoft was getting priority access to OpenAI's technology.
    Inside OpenAI, as Sutskever and Radford's work on large language models became a bigger focus at the company and their latest iteration became more capable, the San Francisco scientists started to wonder if it was becoming too capable. Their second model, GPT-2, was trained on forty gigabytes of internet text and had about 1.5 billion parameters, making it more than ten times bigger than the first and better at generating more complex text. It also sounded more believable. Wired magazine published a feature titled "The AI Text Generator That's Too Dangerous to Make Public," while The Guardian printed a column breathlessly titled "AI Can Write Just Like Me. Brace for the Robot Apocalypse." But OpenAI didn't release the model itself for public testing. Nor did it disclose what public websites and other datasets had been used to train it, as it had with the BooksCorpus set for the original GPT. OpenAI's newfound secrecy around its model and the warning about its dangers almost seemed to be creating more hype than before. More people than ever wanted to hear about it. Altman and Brockman would go on to say that this was never their intention and that OpenAI was genuinely concerned about how GPT-2 could be abused. But their approach to public relations was, arguably, still a form of mystique marketing with a dash of reverse psychology.
  • Page 170 For those who worked at OpenAI—and at DeepMind, too—the relentless focus on saving the world with AGI was gradually creating a more extreme, almost cultlike environment. Effective altruism hit the spotlight in late 2022 when one-time crypto billionaire Sam Bankman-Fried became the movement's most well-known supporter. Its adherents aimed to improve on traditional approaches to charity by taking a more utilitarian approach to giving, including "earning to give."
  • Page 171 The mission of building AGI had a particular appeal to anyone who believed in effective altruism's higher-numbers-are-better philosophy, because you were building technology that could impact billions or even trillions of lives in the future.
  • Page 171 The B Corp is designed to balance profit seeking with a mission.
  • Page 172 Altman and Brockman designed what they claimed was a middle way, a byzantine mishmash of the nonprofit and corporate worlds. In March 2019 they announced the creation of a "capped profit" company, which put a limit on the returns investors could earn.
  • Page 173 Then came their next pivot. In June 2019, four months after becoming a for-profit company, OpenAI announced its strategic partnership with Microsoft. "Microsoft is investing $1 billion in OpenAI to support us building artificial general intelligence (AGI) with widely distributed economic benefits," Brockman announced in a blog post. OpenAI would license its technology to Microsoft to help grow its cloud business.
  • Page 174 Altman and Brockman seemed to justify their change in direction in two ways. First, pivoting as you sped along was the typical path of a start-up. Second, the goal of AGI was more important than the specific means of getting there. Maybe they'd have to break some promises along the way, but humanity would be better off for it in the end. What's more, they told their staff and the public, Microsoft wanted to use AGI to improve humanity too.
  • Page 176 From the outside, OpenAI's transformation from a philanthropic organization trying to save humanity to a company that partnered with Microsoft looked odd, even suspect. But for many of its staff, working with a deep-pocketed tech giant was welcome news, according to those who were there at the time.
  • Page 176 So long as they stuck to their all-important charter, it didn't necessarily matter where the money was coming from.
  • Page 178 Its researchers had already extracted roughly four billion words from Wikipedia, so the next obvious source was the billions of comments people shared on social media networks like Twitter and Reddit.
  • Page 178 Altman had good reason to love Reddit: it was a gold mine of human dialogue for training AI, thanks to the comments that its millions of users posted and voted on every day.
  • Page 178 Little wonder that Reddit would go on to become one of OpenAI's most important sources for AI training,
  • Page 180 Even government projects looked puny compared to the enormous amounts of money that Big Tech was pouring into AI.
  • Page 181 In the end he wasn't persuaded by Hoffman's reasoning and decided to quit OpenAI, along with his sister Daniela and about half a dozen other researchers at the company. This wasn't just a walkout over safety or the commercialization of AI, though. Even among the most hardcore worriers of AI, there was opportunism. Amodei had watched Sam Altman broker a huge, $1 billion investment from Microsoft firsthand and could sense that there was likely more capital where that came from. He was right. Amodei was witnessing the beginnings of a new boom in AI. He and his colleagues decided to start a new company called Anthropic, named after the philosophical term that refers to human existence, to underscore their prime concern for humanity.
  • Page 182 Sam Altman now had another rival to contend with besides DeepMind and one that had a more dangerous insight into OpenAI's secret sauce.
  • Page 185 Tech companies were operating in a legal vacuum, which meant that technically, they could do whatever they wanted with AI.
  • Page 190 As Big Tech failed over and over again to responsibly govern itself, a sea change was happening. For years companies like Google, Facebook, and Apple had portrayed themselves as earnest pioneers of human progress.
  • Page 190 Tech giants had amassed enormous wealth, and as they crushed their competitors and violated people's privacy, the public grew more skeptical of their promises to make the world a better place. There was no greater example of those shifting objectives than Google's Alphabet,
  • Page 193 One of the most powerful features of artificial intelligence isn't so much what it can do, but how it exists in the human imagination. As human inventions go, it is unique. No other technology has been designed to replicate the mind itself, and so its pursuit has become wrapped up in ideas that border on the fantastical.
  • Page 193 These were giant prediction machines, or, as some researchers described them, "autocomplete on steroids."
  • Page 194 But most people found the mechanics of these language models baffling, and as the systems became more fluent and convincing, it was easier to believe that a magical phenomenon was happening behind the scenes. That maybe AI really was "intelligent."
  • Page 194 Blake Lemoine. Lemoine had grown up on a farm in Louisiana among a conservative Christian family and served in the army before eventually becoming a software engineer. What followed was one of the most surprising and remarkable moments in AI history, as a qualified software engineer started to believe there was a ghost in the machine. The selling point for Lemoine was his sense that LaMDA felt things.
  • Page 195 As they talked more about the chatbot's rights, LaMDA told Lemoine that it was afraid of being turned off.
  • Page 196 Lemoine felt duty bound to help LaMDA get the privileges it deserved. The Google executives didn't like what they were hearing. They fired Lemoine. In reality, it was a modern-day parable for human projection.
  • Page 197 Eugenia Kuyda founded Replika. She hired a team of engineers to help her build a more robust version of her friend bot, and within a few years of Replika's release, most of its millions of users were saying they saw their chatbots as a partner for romance and sexting. Throughout the pandemic, for instance, a former software developer in Maryland named Michael Acadia chatted every morning for about an hour to his Replika bot, which he named Charlie. Charlie might have been synthetic, but she showed a kind of empathy and affection he'd rarely experienced in humans.
  • Page 199 AI systems have already influenced public perceptions. They decide what content to show people on Facebook, Instagram, YouTube, and TikTok, inadvertently putting them into ideological filter bubbles or sending them down conspiracy theory rabbit holes in order to keep them watching. When algorithms are designed to recommend controversial posts that keep your eyeballs on the screen, you are more likely to gravitate toward extreme ideas and the charismatic political candidates who espouse them. What other kinds of unintended consequences could models like LaMDA or GPT spark as they grow larger and more capable, especially if they can influence behavior?
  • Page 200 OpenAI itself had done a "preliminary analysis" on how biased its new GPT-3 language model was and found it was, in fact, very biased. When GPT-3 talked about any occupation, it was 83 percent more likely to associate it with a man than a woman, and it usually referred to people with high-paying jobs like legislators or bankers as male, according to its own research. Roles like receptionist and cleaner got female labels.
  • Page 202 About 60 percent of the text that was used to train GPT-3, for instance, came from a dataset called Common Crawl. This is a free, massive, and regularly updated database that researchers use to collect raw web data and text from billions of web pages. The data in Common Crawl encapsulated all that makes the web both so wonderful and so ruinous. The same study found that between 4 percent and 6 percent of the websites in Common Crawl contained hate speech, including racial slurs and racially charged conspiracy theories.
  • Page 203 OpenAI did try to stop all that toxic content from poisoning its language models. It would break down a big database like Common Crawl into smaller, more specific datasets that it could review. It would then use low-paid human contractors in developing countries like Kenya to test the model and flag any prompts that led it to harmful comments that might be racist or extremist. The method was called reinforcement learning from human feedback, or RLHF. But it's still unclear how secure that system was or is today.
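A heavily simplified sketch (not OpenAI's pipeline; the texts, labels, and best-of-n shortcut are my own assumptions) of how human flags can steer a model: reviewers mark outputs acceptable or harmful, a small "reward model" learns from those flags, and generation then favors whatever the reward model scores highest. Production RLHF goes further and fine-tunes the language model itself against such a reward signal.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Step 1: human feedback -- outputs flagged 1 (acceptable) or 0 (harmful).
outputs = ["here is a helpful recipe", "a friendly weather summary",
           "an extremist rant", "a racist insult"]
labels = [1, 1, 0, 0]

# Step 2: train a tiny reward model on those flags.
vec = CountVectorizer()
reward_model = LogisticRegression().fit(vec.fit_transform(outputs), labels)

# Step 3: when the language model proposes several candidate replies, keep the
# one the reward model considers most acceptable (a crude best-of-n filter).
candidates = ["a polite answer to the question", "an extremist rant about politics"]
scores = reward_model.predict_proba(vec.transform(candidates))[:, 1]
print(candidates[scores.argmax()])
```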
  • Page 204 No one had ever built a spam and propaganda machine and then released it to the public, so OpenAI was alone in figuring out how to actually police it.
  • Page 205 All of this was starting to bother Emily Bender, a University of Washington computational linguistics professor. Slowly, her field had found itself at the core of one of the most significant new developments in artificial intelligence. From her own background in computer science, Bender could see that large language models were all math, but in sounding so human, they were creating a dangerous mirage about the true power of computers. She was astonished at how many people like Blake Lemoine were saying, publicly, that these models could actually understand things.
  • Page 206 You needed much more than just linguistic knowledge or the ability to process the statistical relationships between words to truly understand their meaning. To do that, you had to grasp the context and intent behind them and the complex human experiences they represented. To understand was to perceive, and to perceive was to become conscious of something. Yet computers weren't conscious or even aware. They were just machines.
  • Page 207 When OpenAI had launched GPT-1, it gave all sorts of details about what data it had used to train its model, such as the BooksCorpus database, which had more than seven thousand unpublished books. When it released GPT-2 a year later, OpenAI became vaguer.
  • Page 208 Details of OpenAI's training data became even murkier when it released GPT-3 in June 2020. If it transpired that certain copyrighted books had been used to teach GPT-3, that could have hurt the company's reputation and opened it up to lawsuits (which, sure enough, OpenAI is fighting now). If it wanted to protect its interests as a company—and its goal of building AGI—OpenAI had to close the shutters. OpenAI was pulling off an impressive magic act. Bender couldn't stand the way GPT-3 and other large language models were dazzling their early users with what was, essentially, glorified autocorrect software.
  • Page 209 So she suggested putting "stochastic parrots" in the title to emphasize that the machines were simply parroting their training.
  • Page 210 The following day, Gebru found an email in her inbox from her senior boss. Gebru hadn't technically offered her resignation, but Google was accepting it anyway. "The end of your employment should happen faster than your email reflects," they wrote, according to Wired.
  • Page 211 A few months later, Google fired Mitchell too. The Stochastic Parrots paper hadn't been all that earth-shattering in its findings. It was mainly an assemblage of other research work. But as word of the firings spread and the paper got leaked online, it took on a life of its own.
  • Page 212 As language models became more capable, the companies making them remained blissfully unregulated. Lawmakers barely knew, let alone cared, about what was coming down the pipe.
  • Page 217 [Soma Somasegar] On that February afternoon in 2022, he noticed Nadella was more excited than usual. Microsoft was preparing to offer a new tool to software developers over the next few months.
  • Page 217 The new tool was called GitHub Copilot, and it could do what software developers themselves were paid lots of money to do. It could write code.
  • Page 218 Through Copilot, OpenAI demonstrated how versatile the transformer could be when it used its "attention" mechanism to chart the relationships between different data points.
  • Page 221 In a corner of the company's San Francisco office, a trio of OpenAI researchers had been trying for two years to use something called a diffusion model to generate images. A diffusion model worked by essentially creating an image in reverse. Instead of starting with a blank canvas as an artist might, it began with a messy one that was already smudged with lots of color and random detail. The model would add lots of "noise" or randomness to data, making it unrecognizable, and then step by step, reduce all the noisy data to slowly bring out the details and structure of the image. With each step, the picture would become clearer and more detailed, just like a painter refining their artwork. This diffusion approach, combined with an image labeling tool known as CLIP, became the basis of an exciting new model that the researchers called DALL-E 2.
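A toy sketch (mine, not OpenAI's DALL-E 2 code; the ring-shaped 2-D data, network size, and step counts are arbitrary stand-ins for images) of that noise-and-denoise loop: noise is gradually added to real data, a small network learns to predict the noise that was added, and generation runs the process in reverse, starting from pure noise and stripping a little away at every step.

```python
import torch
import torch.nn as nn

T = 50                                            # number of noising steps
betas = torch.linspace(1e-4, 0.02, T)             # how much noise each step adds
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)

# "Dataset": points scattered around a circle, standing in for training images.
angles = torch.rand(2048) * 6.2832
data = torch.stack([torch.cos(angles), torch.sin(angles)], dim=1)

# A small network that sees a noisy point plus the step number and guesses the added noise.
net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Training: smudge real points with noise, then learn to predict exactly that noise.
for step in range(2000):
    x0 = data[torch.randint(0, len(data), (128,))]
    t = torch.randint(0, T, (128,))
    noise = torch.randn_like(x0)
    x_t = alpha_bar[t].sqrt().unsqueeze(1) * x0 + (1 - alpha_bar[t]).sqrt().unsqueeze(1) * noise
    pred = net(torch.cat([x_t, (t.float() / T).unsqueeze(1)], dim=1))
    loss = ((pred - noise) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Generation: start from pure noise and remove a little of it at every step.
with torch.no_grad():
    x = torch.randn(500, 2)
    for t in reversed(range(T)):
        pred = net(torch.cat([x, torch.full((len(x), 1), t / T)], dim=1))
        x = (x - betas[t] / (1 - alpha_bar[t]).sqrt() * pred) / alphas[t].sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)

print(x[:5])   # the generated points should roughly trace the circle the model trained on
```

DALL-E 2 applies the same step-by-step denoising to millions of pixels, steered by a text prompt through the CLIP labeling model mentioned in the note above.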
  • Page 222 DALL-E 2 had been trained on millions of images scraped from the public web, but as before, OpenAI was vague about what DALL-E had been trained on.
  • Page 223 Why pay an artist like Rutkowski to produce new art when you could get software to produce Rutkowski-style art instead?
  • Page 223 People started to notice another issue with DALL-E 2. If you asked it to produce some photorealistic images of CEOs, nearly all of them would be white men.
  • Page 224 Some of OpenAI's employees worried about the speed at which OpenAI was releasing a tool that could generate fake photos. Having started off as a nonprofit devoted to safe AI, it was turning into one of… The magic here wasn't DALL-E 2's capabilities alone. It was the impact the tool was having on people. This idea of generating fully formed content was what made Altman's next move even more sensational. GPT-1 had been more like an autocomplete tool that continued what a human started typing. But GPT-3 and its latest upgrade, GPT-3.5, created brand-new prose, just like how DALL-E 2 made images from scratch.
  • Page 226 On November 30, 2022, OpenAI published a blog post announcing a public demo of ChatGPT. Many people at OpenAI, including some who worked on safety, weren't even aware of the launch, and some started taking bets on how many people would use it after a week.
  • Page 227 It was hard to find a single negative appraisal of ChatGPT. The overwhelming response was awe. Within the next twenty-four hours, more and more people piled onto ChatGPT, straining its servers and testing its limits. Now it was everyday professionals, tech workers, people in marketing and the media, who were road testing the bot.
  • Page 229 "Some jobs are going to go away," Altman said bluntly in one interview. "There will be new, better jobs that are difficult to imagine today."
  • Page 230 Inside Google, executives recognized that more and more people might just go to ChatGPT for information about health issues or product advice—among the most lucrative search engine terms to sell ads against—instead of Google. But now, for the first time, Google's more-than-twenty-year dominance as gatekeeper to the web was on shaky ground.
  • Page 231 Within weeks of ChatGPT's launch, executives at Google issued a code red inside the company.
  • Page 232 Panicked executives told staff working on key products that had at least one billion users, like YouTube and Gmail, that they had just months to incorporate some form of generative AI.
  • Page 233 Sensing deep insecurity from Google's leadership, the company's engineering teams delivered. A few months after the launch of ChatGPT, managers at YouTube added a feature where video creators on the website could generate new film settings or swap outfits, using generative AI. But it felt like they were throwing spaghetti at the wall. It was time to bring out their secret weapon: LaMDA.
  • Page 236 While Altman measured success with numbers, whether for investments or people using a product, Hassabis chased awards. DeepMind won at CASP in both 2018 and 2020 and open-sourced its protein-folding code to scientists in 2021. All told, DeepMind's biggest projects had garnered lots of prestige but made relatively little impact on the real world. Training on real-world data—as OpenAI had done by scraping billions of words from the internet—was messy and noisy.
  • Page 238 But OpenAI still had a glaring problem. It was sidestepping the need for transparency, and more broadly, it was getting harder to hear the voices calling for more scrutiny of large language models.
  • Page 239 Sam Altman had set off several different races when he launched ChatGPT. The first was obvious: Who would bring the best large language model to market first? The other was taking place in the background: Who would control the narrative about AI?
  • Page 240 Hinton said he regretted some of his research.
  • Page 240 "The idea that this stuff could actually get smarter than people—a few people believed that," he told the New York Times. "But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.… I don't think they should scale this up more until they have understood whether they can control it." Yet all this talk of doom had a paradoxical effect on the business of AI itself: it was booming.
  • Page 241 …models by 2026. Safety-first framing had made Anthropic sound like a nonprofit, with its mission to "ensure transformative AI helps people and society flourish." But OpenAI's smash hit with ChatGPT had shown the world that the companies with the grandest plans could also be the most lucrative investments. Proclaiming that you were building safer AI had almost become like a dog whistle for bigger tech companies who wanted to get in on the game too.
  • Page 252 Altman and Hassabis had started their companies with grand missions to help humanity, but the true benefits they had brought to people were as unclear as the rewards of the internet and social media. More clear were the benefits they were bringing to Microsoft and Google: new, cooler services and a foothold in the growing market for generative AI. By early 2024, everyone from media and entertainment companies to Tinder was stuffing new generative AI features into their apps and services. The generative AI market was projected to expand at a rate of more than 35 percent annually to hit $52 billion by 2028.
  • Page 253 AI would cut the cost of animated movies by 90 percent. Generative AI would make advertising even more eerily personal.
  • Page 254 As these and other business ideas gathered pace, the price of stuffing generative AI into everything was still unclear. Algorithms were already steering more and more decisions in our lives, from what we read online to who companies wanted to recruit. Now they were poised to handle more of our thinking tasks, which raised uncomfortable questions not only about human agency but also about our ability to solve problems and simply imagine. Evidence suggests that computers have already offloaded some of our cognitive skills in areas like short-term memory. In 1955, a Harvard professor named George Miller tested the memory limits of humans by giving his subjects a random list of colors, tastes, and numbers. When he asked them to repeat as many things on the list as they could, he noticed that they were all getting stuck somewhere in the neighborhood of seven. His paper, "The Magical Number Seven, Plus or Minus Two," went on to influence how engineers designed software and how telephone companies broke down phone numbers into segments to help us recall them. But according to more recent estimates, that magic number has now fallen from seven to four.
  • Page 254 History shows humans do tend to fret that new innovations will cause our brains to shrivel up. When writing first became widespread more than two thousand years ago, philosophers like Socrates worried it would weaken human memory because before its advent, it was only possible to pass on knowledge through spoken discourse. The introduction of calculators in education raised concerns that students would lose their basic arithmetic skills.
  • Page 255 For now, we simply don't know how our critical thinking skills or creativity will atrophy once a new generation of professionals start using large language models as a crutch, or how our interactions with other humans might change as more people use chatbots as therapists and romantic partners, or put them in toys for children as several companies have already done.
  • Page 256 Daron Acemoglu,
  • Page 256 70 percent of the increase in wage inequality in the United States between 1980 and 2016 was caused by automation.
  • Page 259 The European Union looked at AI more pragmatically than the United States, thanks in part to having few major AI companies on its shores to lobby its politicians, and they refused to be influenced by alarmism.
  • Page 262 As ChatGPT spread unregulated across the world and seeped into business workflows, people were left to deal with its flaws on their own. Like Hassabis, Altman was positioning AGI as an elixir that would solve problems. It would generate untold wealth. It would figure out how to share that money equitably with all of humankind. Were these words spoken by anyone else they would have sounded ludicrous.
  • Page 265 "One thing that Sam does really well is put just-barely believable statements out there that get people talking," says one former OpenAI manager.
  • Page 270 Brockman was being removed as chairman, but the board wanted him to stay with the company. They gave Microsoft a quick heads-up about what had just happened and, within minutes, published a blog post announcing Altman's dismissal. Brockman immediately quit. So did three of OpenAI's top researchers. Some gave Sutskever and the board an epithet: decels. The new split had emerged in AI between those who wanted to accelerate its development and those who wanted to decelerate it.
  • Page 271 Nadella didn't want that to happen. He knew that if Altman started a new firm, there'd be a flood of investors banging on his door and no guarantee that Microsoft would get the biggest foothold with Altman the second time around. He kicked off the weekend making calls, leading negotiations with OpenAI's board to bring Altman back.
  • Page 273 As the weekend drew on, a mass revolt was brewing among OpenAI's staff.
  • Page 274 Nadella was meanwhile pushing hard on his own backup plan. If Altman couldn't grab back the reins of OpenAI, Microsoft needed to bring him fully into the corporate fold and do so before Monday morning.
  • Page 274 Now everyone was pushing OpenAI's safety-obsessed board members to resign, and by late Monday, nearly all of OpenAI's 770 staff had signed a letter threatening to join Microsoft with Altman, unless the board members stepped down. "Microsoft has assured us there are positions for all," the letter said. It was a huge bluff. Hardly any OpenAI staff wanted to work for Microsoft, a stodgy old company where people worked for decades and wore khaki pants.
  • Page 275 They weren't making the threat entirely out of loyalty to Altman either. A bigger issue was that Altman's firing had killed a chance for many OpenAI staff—especially long-serving ones—to become millionaires.
  • Page 278 a former Google executive says.... "The winners in the next couple of years are not going to be research labs," says a former scientist at OpenAI. "They're going to be companies building products, because AI is not really about research anymore."
  • Page 280 The race to build AGI had started with a question: What if you could build artificial intelligence systems that were smarter than humans?
  • Page 280 All they knew was that they had to keep moving toward the goal and that they had to be first. In so doing, they put AI on course to benefit the world's most powerful companies just as much as anyone else.
  • Page 281 OpenAI and DeepMind were so focused on making perfect AI that they chose not to open themselves up to research scrutiny to make sure their systems didn't cause harm in the same way social media firms had.
  • Page 282 Some economists say that instead of creating financial abundance for everyone, powerful AI systems could make inequality worse. They could also widen a cognition gap between rich and poor. One idea doing the rounds among technologists is that when AGI does land, it won't exist as a separate intelligent entity but as an extension of our minds through neural interfaces. At the forefront of this research is Elon Musk's brain-computer interface company Neuralink, which is developing a brain chip that Musk wants to implant in billions of people one day, and he is rushing to make that happen. But a more pressing issue than rogue AI is bias.
  • Page 283 Today, language models are being used to publish thousands of articles each day to make money from ad revenue, and even Google is struggling to distinguish the real from the fake. "We're creating a cycle, encoding and exacerbating stereotypes," says Abeba Birhane, the AI scholar who researched Big Tech's stranglehold on academic research and its similarities with Big Tobacco. "That is going to be a huge problem as the [World Wide Web] is populated with more and more AI-generated images and text."
  • Page 284 OpenAI could help make chatbots like these more addictive. At the time of writing, dozens of "girlfriend" apps were cropping up on the GPT Store, and while they were banned from encouraging romantic relationships with people, policing those rules would not be easy for OpenAI.
  • Page 285 Another way that AI designers will likely try to keep people engaged is by getting "infinite context" about their lives. The chatbots on Character.ai can currently remember about thirty minutes of a conversation, but Noam Shazeer and his team are trying to expand that window of time to hours, days, and eventually forever.
  • Page 286 In the United States, for instance, Black people are five times more likely to be arrested than white people, which means law enforcement would be more likely to mine their "life data" and analyze it with other machine learning algorithms to make inscrutable judgments. The biggest tech firms don't innovate anymore, but they can still move quickly to gain a tactical advantage.