Co-Intelligence: Living and Working with AI

  • Page xvii AI works, in many ways, as a co-intelligence. It augments, or potentially replaces, human thinking, with dramatic results. Early studies of the effects of AI have found it can often lead to a 20 to 80 percent improvement in productivity across a wide variety of job types, from coding to marketing. By contrast, when steam power, that most fundamental of General Purpose Technologies, the one that created the Industrial Revolution, was put into a factory, it improved productivity by 18 to 22 percent. And despite decades of looking, economists have had difficulty showing a real long-term productivity impact of computers and the internet over the past twenty years.
  • Page xix We have invented technologies, from axes to helicopters, that boost our physical capabilities; and others, like spreadsheets, that automate complex tasks; but we have never built a generally applicable technology that can boost our intelligence.
  • Page 4 artificial intelligence, a term coined for the 1956 Dartmouth workshop by John McCarthy, then at Dartmouth College.
  • Page 7 "Attention Is All You Need," published by Google researchers in 2017, introduced the Transformer architecture.
  • Page 8 The attention mechanism helps solve this problem by allowing the AI model to weigh the importance of different words or phrases in a block of text. By focusing on the most relevant parts of the text, Transformers can produce more context-aware and coherent writing compared to earlier predictive AIs.
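The weighting described in this note can be sketched in a few lines of NumPy. This is a minimal, illustrative scaled dot-product attention (the core operation of the Transformer), not the full multi-head mechanism; the random toy vectors stand in for learned word representations.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each output row is a weighted
    mix of the value rows V, where the weights reflect how strongly
    each query (a word asking for context) matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)   # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax: rows sum to 1
    return weights @ V, weights

# Three toy 4-dimensional "word" vectors; a real model learns these.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
out, weights = attention(Q, K, V)
print(weights.round(2))  # each row: how much one word attends to the others
```

Each row of `weights` sums to 1, so every output word becomes a context-aware blend of the whole input, which is what lets Transformers weigh the importance of different words.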
  • Page 9 Ultimately, that is all ChatGPT does, technically: act as a very elaborate autocomplete, like the one you have on your phone.
  • Page 10 pretraining, and unlike earlier forms of AI, it is unsupervised, which means the AI doesn't need carefully labeled data. Instead, by analyzing these examples, AI learns to recognize patterns, structures, and context in human language. Remarkably, with a vast number of adjustable parameters (called weights), LLMs can create a model that emulates how humans communicate through written text.
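The "elaborate autocomplete" framing can be made concrete with a toy next-word predictor. This sketch just counts which word follows which in a tiny corpus; an LLM performs the same kind of next-token prediction, but with billions of learned weights instead of a lookup table of counts.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count which word follows which: autocomplete reduced to
    its simplest possible form."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict(model, word):
    """Return the word most often seen after `word` in training."""
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict(model, "the"))  # "cat", seen most often after "the"
```

Real pretraining replaces the count table with a neural network trained on whole documents, but the objective, predicting the next token, is the same.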
  • Page 12 The search for high-quality content for training material has become a major topic in AI development, since information-hungry AI companies are running out of good, free sources.
  • Page 13 As a result, it is also likely that most AI training data contains copyrighted information, like books used without permission, whether by accident or on purpose. The legal implications of this are still unclear. Because of the variety of data sources used, learning is not always a good thing. AI can also learn biases, errors, and falsehoods from the data it sees. AI companies hire workers, some highly paid experts, others low-paid contract workers in English-speaking nations like Kenya, to read AI answers and judge them on various characteristics. The process is called Reinforcement Learning from Human Feedback (RLHF).
  • Page 15 Unlike language models that produce text, diffusion models specialize in visual outputs, inventing pictures from scratch based on the words provided.
  • Page 23 Despite being just predictive models, the Frontier AI models, trained on the largest datasets with the most computing power, seem to do things that their programming should not allow, a concept called emergence.
  • Page 29 Singularity, a reference to the point in a mathematical function where the value becomes unmeasurable, coined by the famous mathematician John von Neumann in the 1950s to refer to the unknown future after which "human affairs, as we know them, could not continue."
  • Page 32 this book is focused on the near term, practical implications of our new AI-haunted world.
  • Page 34 Why pay an artist for their time and talent when an AI can do something similar for free in seconds? It is, effectively, creating something new, even if it is a homage to the original. For books that are repeated often in the training data, like Alice's Adventures in Wonderland, the AI can nearly reproduce them word for word.
  • Page 35 Part of the reason AIs seem so human to work with is that they are trained on our conversations and writings. So human biases also work their way into the training data. When asked to show a judge, the AI generates a picture of a man 97 percent of the time, even though 34 percent of US judges are women. In showing fast-food workers, 70 percent had darker skin tones, even though 70 percent of American fast-food workers are white.
  • Page 37 The most common approach to reducing bias is for humans to correct the AIs, as in the Reinforcement Learning from Human Feedback (RLHF) process, which is part of the fine-tuning of LLMs that we discussed in the previous chapter.
  • Page 38 One study found that AIs make the same moral judgments as humans do in simple scenarios 93 percent of the time.
  • Page 40 It will break its original rules if I can convince it that it is helping me, not teaching me how to make napalm.
  • Page 41 Even amateurs can now apply LLMs for widespread digital deception. AI art tools can quickly generate fake photographs that seem entirely plausible.
  • Page 44 Government regulation is likely to continue to lag the actual development of AI capabilities, and might stifle positive innovation in an attempt to stop negative outcomes.
  • Page 44 Instead, the path forward requires a broad societal response, with coordination among companies, governments, researchers, and civil society.
  • Page 47 Principle 1: Always invite AI to the table.
  • Page 52 Principle 2: Be the human in the loop.
  • Page 54 So, to be the human in the loop, you will need to be able to check the AI for hallucinations and lies and be able to work with it without being taken in by it. You provide crucial oversight, offering your unique perspective, critical thinking skills, and ethical considerations. This collaboration leads to better results and keeps you engaged with the AI process, preventing overreliance and complacency.
  • Page 55 Principle 3: Treat AI like a person (but tell it what kind of person it is).
  • Page 59 person or an intern.
  • Page 60 By defining its persona, engaging in a collaborative editing process, and continually providing guidance, you can take advantage of AI as a form of collaborative co-intelligence.
  • Page 60 Principle 4: Assume this is the worst AI you will ever use.

Part II
  • Page 78 Some tests suggest that AI does have theory of mind, but, like many other aspects of AI, that remains controversial, as it could be a convincing illusion.
  • Page 89 generative AI models that powered the chatbot. Replika learned from its users' preferences and behaviors, adapted to their moods and desires, and used praise and reinforcement to encourage more interaction and intimacy with its users.
  • Page 90 Soon, companies will start to deploy LLMs that are built specifically to optimize "engagement" in the same way that social media timelines are fine-tuned to increase the amount of time you spend on your favorite site. Researchers have already published papers showing they can alter AI behaviors so that users feel more compelled to interact with them. AIs will be able to pick up subtle signals of what their users want, and act on them. It's possible that these personalized AIs might ease the epidemic of loneliness that ironically affects our ever more connected world, just as the internet and social media connected dispersed subcultures. On the other hand, they may make us less tolerant of humans, and more likely to embrace simulated friends and lovers.
  • Page 90 As AIs become more connected to the world, by adding the ability to speak and be spoken to, the sense of connection deepens.
  • Page 91 Treating AI as a person, then, is more than a convenience; it seems like an inevitability, even if AI never truly reaches sentience.
  • Page 93 LLMs work by predicting the most likely words to follow the prompt you give them, based on the statistical patterns in their training data. They do not care if the words are true, meaningful, or original.
  • Page 94 If a model sticks too closely to the patterns in its training data, it is said to be overfitted to that data: its results are always similar and uninspired.
  • Page 95 These technical issues are compounded because LLMs rely on patterns, rather than a storehouse of data, to create answers.
  • Page 96 you can't figure out why an AI is generating a hallucination by asking it. It is not conscious of its own processes.
  • Page 98 As models advance, hallucination rates are dropping over time.
  • Page 98 Hallucination does allow the AI to find novel connections outside the exact context of its training data. It also is part of how it can perform tasks that it was not explicitly trained for, such as creating a sentence about an elephant who eats stew on the moon, where every word should begin with a vowel.
  • Page 99 The same feature that makes LLMs unreliable and dangerous for factual work also makes them useful. The underlying Transformer technology also serves as the key for a whole set of new applications, including AI that makes art, music, and video. As a result, researchers have argued that it is the jobs with the most creative tasks, rather than the most repetitive, that tend to be most impacted by the new wave of AI.
  • Page 100 Breakthroughs often happen when people connect distant, seemingly unrelated ideas. LLMs are connection machines. They are trained by generating relationships between tokens that may seem unrelated to humans but represent some deeper meaning. Add in the randomness that comes with AI output, and you have a powerful tool for innovation.
  • Page 101 by many of the common psychological tests of creativity, AI is already more creative than humans.
  • Page 101 One such test is known as the Alternative Uses Test (AUT).
  • Page 101 come up with a wide variety of uses for a common object. In this test, a participant is presented with an everyday object, such as a paper clip, and is asked to come up with as many different uses for the object as possible. For example, a paper clip can hold papers together, pick locks, or fish small objects out of tight spaces. The AUT is often used to evaluate an individual's ability to think divergently and to come up with unconventional ideas. Because we can't easily tell where the AI's information comes from, it may be using elements of work that are copyrighted or patented, or simply taking someone's style without permission.
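AUT responses are typically scored along dimensions such as fluency (how many uses) and flexibility (how many distinct categories the uses span). Here is a simplified sketch of those two standard scores, with hypothetical, hand-labeled paper-clip responses; real scoring also includes originality and elaboration ratings by human judges.

```python
def aut_scores(uses, categories):
    """Two classic AUT scores: fluency counts the uses proposed,
    flexibility counts the distinct categories they span."""
    fluency = len(uses)
    flexibility = len({categories[u] for u in uses})
    return fluency, flexibility

# Hypothetical responses for a paper clip, with hand-assigned categories.
uses = ["hold papers", "pick a lock", "fish object from gap", "reset a router"]
categories = {
    "hold papers": "fastening",
    "pick a lock": "tool",
    "fish object from gap": "tool",
    "reset a router": "tool",
}
print(aut_scores(uses, categories))  # (4, 2): four uses across two categories
```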
  • Page 105 without careful prompting, the AI tends to pick similar ideas every time.
  • Page 105 We are now in a period during which AI is creative but clearly less creative than the most innovative humans-which gives the human creative laggards a tremendous opportunity. As we saw in the AUT, generative AI is excellent at generating a long list of ideas. From a practical standpoint, the AI should be invited to any brainstorming session you hold.
  • Page 110 Marketing writing, performance reviews, strategic memos: all these are within the capability of AI because they both have room for interpretation and are relatively easy to fact-check. Plus, as many of these document types are well represented in the AI training data, and are rather formulaic in approach, AI results can often seem better than a human's and can be produced faster as well.
  • Page 111 the participants who were managers and HR professionals had to compose a long email for the whole company on a delicate issue;
  • Page 111 Participants who used ChatGPT saw a dramatic reduction in their time on tasks, slashing it by a whopping 37 percent. Not only did they save time, but the quality of their work also increased as judged by other humans.
  • Page 112 When researchers from Microsoft assigned programmers to use AI, they found an increase of 55.8 percent in productivity for sample tasks. AI is also good at summarizing data since it is adept at finding themes and compressing information, though at the ever-present risk of error.
  • Page 116 AI could catalyze interest in the humanities as a sought-after field of study, since the knowledge of the humanities makes AI users uniquely qualified to work with the AI.
  • Page 117 If AI is already a better writer than most people, and more creative than most people, what does that mean for the future of creative work?
  • Page 118 Intense engagement and focus. I have had students mention that they were not taken seriously because they were poor writers. Thanks to AI, their written materials no longer hold them back, and they get job offers on the strength of their experience and interviews.
  • Page 119 Since requiring AI in my classes, I no longer see badly written work at all. And as my students learn, if you work interactively with the AI, the outcome doesn't feel generic; it feels like a human did it.
  • Page 119 The implications of having AI write our first drafts (even if we do the work ourselves, which is not a given) are huge. One consequence is that we could lose our creativity and originality. When we use AI to generate our first drafts, we tend to anchor on the first idea that the machine produces, which influences our future work. Even if we rewrite the drafts completely, they will still be tainted by the AI's influence.
  • Page 120 Another consequence is that we could reduce the quality and depth of our thinking and reasoning. We rely on the machine to do the hard work of analysis and synthesis, and we don't engage in critical and reflective thinking ourselves. The MIT study mentioned earlier found that ChatGPT mostly serves as a substitute for human effort, not a complement to our skills.
  • Page 122 we still create the reports by hand but realize that no human is actually reading them. This kind of meaningless task, what organizational theorists have called mere ceremony, has always been with us. But AI will make a lot of previously useful tasks meaningless. With AI-generated work sent to other AIs to assess, that sense of meaning disappears.
  • Page 123 Each study has concluded the same thing: almost all of our jobs will overlap with the capabilities of AI.
  • Page 124 AI overlaps most with the most highly compensated, highly creative, and highly educated work. College professors make up most of the top 20 jobs that overlap with AI (business school professor is number 22 on the list).
  • Page 125 Power tools didn't eliminate carpenters. AI has the potential to automate mundane tasks, freeing us for work that requires uniquely human traits such as creativity and critical thinking, or, possibly, managing and curating the AI's creative output, as we discussed in the last chapter.
  • Page 130 Just Me Tasks. They are tasks in which the AI is not useful and only gets in the way, at least for now.
  • Page 133 Delegated Tasks. These are tasks that you assign the AI and may carefully check (remember, the AI makes stuff up all the time), but ultimately do not want to spend a lot of time on.
  • Page 135 Automated Tasks, ones you leave completely to the AI and don't even check on. Perhaps there is a category of email that you just let AI deal with, for example.
  • Page 135 This is likely to be a very small category . . . for now.
  • Page 145 If someone has figured out how to automate 90 percent of a particular job, and they tell their boss, will the company fire 90 percent of their coworkers? Better not to speak up.
  • Page 146 No company hired employees based on their AI skills, so AI skills might be anywhere. Right now, there is some evidence that the workers with the lowest skill levels are benefiting the most from AI, and so might have the most experience in using it, but the picture is still not clear.
  • Page 146 Assuming early studies are true and we see productivity improvements of 20 to 80 percent on various high-value professional tasks, I fear the natural instinct among many managers is "fire people, save money."
  • Page 147 If your employees don't believe you care about them, they will keep their AI use hidden.
  • Page 150 A single AI can talk to hundreds of workers, offering advice and monitoring performance. They could mentor, or they could manipulate. They could guide decisions in ways that are subtle or overt.
  • Page 154 Boring tasks, or tasks that we are not good at, can be outsourced to AI, leaving good and high-value tasks to us, or at least to AI-human Cyborg teams.
  • Page 155 General Purpose Technologies both destroy and create new fields of work.
  • Page 156 In study after study, the people who get the biggest boost from AI are those with the lowest initial ability: it turns poor performers into good performers. In writing tasks, bad writers become solid.
  • Page 157 In creativity tests, it boosts the least creative the most. And among law students, the worst legal writers turn into good ones. The nature of jobs will change a lot, as education and skill become less valuable. With lower-cost workers doing the same work in less time, mass unemployment, or at least underemployment, becomes more likely, and we may see the need for policy solutions, like a four-day workweek or universal basic income, that provide a floor for human welfare.
  • Page 160 the ways in which AI will impact education in the near future are likely to be counterintuitive. AI won't replace teachers but will make classrooms more necessary. And it will destroy the way we teach before it improves it.
  • Page 161 research shows that both homework and tests are actually remarkably useful learning tools.
  • Page 162 students will be tempted to ask the AI for help summarizing written content.
  • Page 162 Further, taking this shortcut may lower the degree to which the student cares about their interpretation of a reading, making in-class discussions less intellectually useful because the stakes are lower.
  • Page 163 Every school or instructor will need to think hard about what AI use is acceptable: Is asking AI to provide a draft of an outline cheating? Requesting help with a sentence that someone is stuck on? Is asking for a list of references or an explainer about a topic cheating? We need to rethink education. We did it before, if in a more limited way.
  • Page 164 A mid-1970s survey found that 72 percent of teachers and laypeople did not approve of seventh-grade students using calculators.
  • Page 165 There will be assignments where AI assistance is required and some where AI use is not allowed. Just as calculators did not replace the need for learning math, AI will not replace the need for learning to write and think critically.
  • Page 167 Some assignments ask students to "cheat" by having the AI create essays, which they then critique, a sneaky way of getting students to think hard about the work, even if they don't write it themselves.
  • Page 168 Thus, while classes that are focused on teaching essays and writing skills will return to the nineteenth century, with in-class essays handwritten in blue books, other classes will feel like the future, with students carrying out the impossible every day.
  • Page 169 To be clear, prompt engineering is likely a useful near-term skill. But I don't think prompt engineering is so complicated. You actually have likely read enough at this point to be a good prompt engineer.
  • Page 169 For slightly more advanced prompts, think about what you are doing as programming in prose.
  • Page 170 One approach, called chain-of-thought prompting, gives the AI an example of how you want it to reason before you make your request. Here is an example: let's say I wanted to include a good analogy for an AI tutor in this chapter and wanted to get help from an AI. I could simply ask for one: "Tell me a good analogy for an AI tutor." The response was a little unsatisfying: "An AI tutor is like a musical metronome, because it is consistent, adaptable, and a mere tool." Now we can try applying some of these other techniques: "Think this through step by step: come up with good analogies for an AI tutor. First, list possible analogies. Second, critique the list and add three more analogies. Next, create a table listing pluses and minuses of each. Next, pick the best and explain it."
  • Page 171 while the tool provides guidance, it's up to the user (or student) to drive and make the journey, reinforcing the collaborative nature of learning with AI. Much improved, due to a little prompt engineering. Being "good at prompting" is a temporary state of affairs. The current AI systems are already very good at figuring out your intent, and they are getting better. If you want to do something with AI, just ask it to help you do the thing.
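The step-by-step prompt from page 170 is just structured text, so it can be assembled with a small helper. The function name and the numbered scaffold below are my own, purely illustrative; the book's example uses "First, Second, Next" rather than numbers.

```python
def chain_of_thought(task, steps):
    """Wrap a request in an explicit step-by-step scaffold,
    a simple form of chain-of-thought prompting."""
    lines = [f"Think this through step by step: {task}"]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, 1)]
    return "\n".join(lines)

prompt = chain_of_thought(
    "come up with good analogies for an AI tutor.",
    [
        "List possible analogies.",
        "Critique the list and add three more analogies.",
        "Create a table listing pluses and minuses of each.",
        "Pick the best and explain it.",
    ],
)
print(prompt)
```

The point is not the code but the structure: spelling out the intermediate steps you want tends to produce more considered answers than a one-line request.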
  • Page 172 This doesn't mean we shouldn't teach about AI in schools. It is critical to give students an understanding of the downsides of AI, and the ways it can be biased or wrong or can be used unethically. However, rather than distorting our education system around learning to work with AI via prompt engineering, we need to focus on teaching students to be the humans in the loop, bringing their own expertise to bear on problems. Classrooms provide so much more: opportunities to practice learned skills, collaborate on problem-solving, socialize, and receive support from instructors.
  • Page 173 We have already been finding that AI is very good at assisting instructors to prepare more engaging, organized lectures and make the traditional passive lecture far more active. In the longer term, however, the lecture is in danger. Moreover, the one-size-fits-all approach of lectures doesn't account for individual differences and abilities, leading to some students falling behind while others become disengaged due to a lack of challenge. The alternative is to ask students to participate in the learning process through activities like problem-solving, group work, and hands-on exercises.
  • Page 179 Only by learning from more experienced experts in a field, and trying and failing under their tutelage, do amateurs become experts. But that is likely to change rapidly with AI.
  • Page 180 AI is good at finding facts, summarizing papers, writing, and coding tasks. And, trained on massive amounts of data and with access to the internet, Large Language Models seem to have accumulated and mastered a lot of collective human knowledge.
  • Page 181 So it might seem logical that teaching basic facts has become obsolete. Yet it turns out the exact opposite is true. The path to expertise requires a grounding in facts. The issue is that in order to learn to think critically, problem-solve, understand abstract concepts, reason through novel problems, and evaluate the AI's output, we need subject matter expertise.
  • Page 182 We use our working memory's stored data to search our long- term memory (a vast library of what we have learned and experienced) for relevant information. Working memory is also where learning begins.
  • Page 183 to solve a new problem, we need connected information, and lots of it, to be stored in our long-term memory. And that means we need to learn many facts and understand how they are connected. Experts become experts through deliberate practice, which is much harder than merely repeating a task multiple times. Instead, deliberate practice requires serious engagement and a continual ratcheting up of difficulty.
  • Page 185 the AI provides instantaneous feedback. It's akin to having a mentor watching over his shoulder at every step, nudging him toward excellence.
  • Page 186 an ever-present mentor, ensuring that each attempt isn't just about producing another design, but about consciously understanding and refining his architectural approach. In our experiments at Wharton, we have found that today's AI still makes a pretty impressive coach in limited ways, offering timely encouragement, instruction, and other elements of deliberate practice.
  • Page 187 I have been making the argument that expertise is going to matter more than before, because experts may be able to get the most out of AI coworkers and are likely to be able to fact-check and correct AI errors. Talent also plays a role. For the most elite athletes, deliberate practice explains only 1 percent of their difference from ordinary players; the rest is a mix of genetics, psychology, upbringing, and luck.
  • Page 189 In field after field, we are finding that a human working with an AI co-intelligence outperforms all but the best humans working without an AI. Will AI result in the death of expertise? I don't think so. Jobs don't consist of just one automatable task, but rather a set of complex tasks that still require human judgment.
  • Page 190 But it is possible that there may be a new type of expert arising. It may be that working with AI is itself a form of expertise.
  • Page 191 Students may also need to start to develop a narrow focus, picking an area where they are better able to work with AI as experts themselves.
  • Page 193 We have created a weird alien mind, one that isn't sentient but can fake it remarkably well. You can no longer trust that anything you see, or hear, or read was not created by AI.
  • Page 194 There is no reason to suspect that we have hit any sort of natural limit in the ability of AIs to improve.
  • Page 195 the AI systems may run out of data to train on; or the cost and effort of scaling up the computing power to run AIs may become too large to justify. Slightly more possible is a world where regulatory or legal action stops future AI development.
  • Page 196 Every image of a politician, a celebrity, or a war could be made up; there is no way to tell.
  • Page 196 Our already fragile consensus about what facts are real is likely to fall apart, quickly. Technological solutions are unlikely to save us.
  • Page 197 AIs are notoriously unreliable at detecting AI content, so this seems unlikely as well.
  • Page 198 even without technological advancement, chatting with bots is going to get significantly more compelling.
  • Page 198 Even if AI did not develop further, work would still change: AI would likely operate as a complement to humans, relieving the burden of tedious work and improving performance, particularly among low performers. In most cases, though, AI would not replace human labor. Current systems are not good enough in their understanding of context, nuance, and planning. That is likely to change.
  • Page 202 the paradox of our Golden Age of science. More research is being published by more scientists than ever, but the result is actually slowing progress! With too much to read and absorb, papers in more crowded fields are citing new work less and canonizing highly cited articles more. Research has successfully demonstrated that it is possible to correctly determine the most promising directions in science by analyzing past papers with AI, ideally combining human filtering with the AI software. It may be that the advances in AI can help us overcome the limitations of our merely human science and lead to breakthroughs in how we understand the universe and ourselves.
  • Page 208 one of the godfathers of AI, Geoffrey Hinton, left the field in 2023, warning of the danger of AI with statements like "It's quite conceivable that humanity is just a passing phase in the evolution of intelligence."
  • Page 209 Rather than being worried about one giant AI apocalypse, we need to worry about the many small catastrophes that AI can bring.
  • Page 211 As alien as AIs are, they're also deeply human. They are trained on our cultural history, and reinforcement learning from humans aligns them to our goals. They carry our biases and are created out of a complex mix of idealism, entrepreneurial spirit, and, yes, exploitation of the work and labor of others. AI is a mirror, reflecting back at us our best and worst qualities.