The Singularity Is Nearer: When We Merge with AI
Page 1
Eventually nanotechnology will enable these trends to culminate in directly expanding our brains with layers of virtual neurons in the cloud. In this way we will merge with AI and augment ourselves with millions of times the computational power that our biology gave us. This will expand our intelligence and consciousness so profoundly that it's difficult to comprehend. This event is what I mean by the Singularity. I use the term as a metaphor.
Page 2
Algorithmic innovations and the emergence of big data have allowed AI to achieve startling breakthroughs sooner than even experts expected, from mastering games like Jeopardy! and Go to driving automobiles, writing essays, passing bar exams, and diagnosing cancer. Now, powerful and flexible large language models like GPT-4 and Gemini can translate natural-language instructions into computer code, dramatically reducing the barrier between humans and machines.
Page 4
AI and maturing nanotechnology will unite humans and our machine creations as never before, heightening both the promise and the peril even further. If we can meet the scientific, ethical, social, and political challenges posed by these advances, by 2045 we will transform life on earth profoundly for the better. Yet if we fail, our very survival is in question. And so this book is about our final approach to the Singularity: the opportunities and dangers we must confront together over the last generation of the world as we knew it.
Page 5
As these technologies unlock enormous material abundance for our civilization, our focus will shift to overcoming the next barrier to our full flourishing: the frailties of our biology. First we will defeat the aging of our bodies, and then we will augment our limited brains, ushering in the Singularity. These breakthroughs may also put us in jeopardy, possibly leading to an existential catastrophe like a devastating pandemic or a chain reaction of self-replicating machines.
Page 8
With brains, we added roughly one cubic inch of brain matter every 100,000 years, whereas with digital computation we are doubling price-performance about every sixteen months. In the Fifth Epoch, we will directly merge biological human cognition with the speed and power of our digital technology.
The Sixth Epoch is where our intelligence spreads throughout the universe, turning ordinary matter into computronium, which is matter organized at the ultimate density of computation.
Page 9
A key capability in the 2030s will be to connect the upper ranges of our neocortices to the cloud, which will directly extend our thinking. In this way, rather than AI being a competitor, it will become an extension of ourselves.
What Does It Mean to Reinvent Intelligence? > Page 11
If the whole story of the universe is one of evolving paradigms of information processing, the story of humanity picks up more than halfway through. Our chapter in this larger tale is ultimately about our transition from animals with biological brains to transcendent beings whose thoughts and identities are no longer shackled to what genetics provides.
What Does It Mean to Reinvent Intelligence? > Page 11
We will engineer brain–computer interfaces that vastly expand our neocortices with layers of virtual neurons.
Page 12
In 1956, mathematics professor John McCarthy (1927–2011)
Page 13
McCarthy proposed that this field, which would ultimately automate every other field, be called "artificial intelligence."
Page 14
Minsky taught me that there are two techniques for creating automated solutions to problems: the symbolic approach and the connectionist approach. The symbolic approach describes in rule-based terms how a human expert would solve a problem.
Page 16
By the late 1980s these "expert systems" were utilizing probability models and could combine many sources of evidence to make a decision.[21] While a single if-then rule would not be sufficient by itself, by combining many thousands of such rules, the overall system could make reliable decisions for a constrained problem.
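To make that idea concrete, here is a minimal Python sketch of how many individually weak if-then rules can be combined probabilistically into one decision; the findings, likelihood ratios, and base rate are invented for illustration and are not taken from any actual expert system.

```python
# A minimal sketch of combining many weak if-then rules into one
# decision via naive Bayes-style odds multiplication. The findings,
# likelihood ratios, and 1 percent base rate are invented for
# illustration, not taken from any actual expert system.

# Each rule: (finding, likelihood ratio in favor of the diagnosis)
rules = [
    ("fever present", 3.0),
    ("rash present", 5.0),
    ("recent travel", 2.0),
]

def posterior_odds(prior_odds, observed_findings):
    odds = prior_odds
    for finding, likelihood_ratio in rules:
        if finding in observed_findings:
            odds *= likelihood_ratio   # each matching rule shifts the odds
    return odds

prior = 0.01 / 0.99                    # 1 percent base rate, as odds
odds = posterior_odds(prior, {"fever present", "rash present"})
probability = odds / (1 + odds)
print(f"P(diagnosis) = {probability:.2f}")   # ~0.13 from two weak rules
```

Systems of that era used various calculi (MYCIN's certainty factors, for example), but the principle of many individually weak rules combining into a reliable decision is the same.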
Although the symbolic approach has been used for over half a century, its primary limitation has been the "complexity ceiling."
Page 18
The connectionist approach, by contrast, entails networks of nodes that create intelligence through their structure rather than through their content. Instead of using smart rules, they use dumb nodes arranged in a way that can extract insight from data itself. One of the key advantages of the connectionist approach is that it allows you to solve problems without understanding them. Connectionist AI is prone to becoming a "black box": capable of spitting out the correct answer, but unable to explain how it found it.
This is why many AI experts are now working to develop better forms of "transparency"
Page 26
The goal is then to find actual examples from which the system can figure out how to solve a problem. A typical starting point is to have the neural net wiring and synaptic weights set randomly, so that the answers produced by this untrained neural net will also be random. The key function of a neural net is that it must learn its subject matter, just like the mammalian brains on which it is (at least roughly) modeled. A neural net starts out ignorant but is programmed to maximize a "reward" function.
It is then fed training data (e.g., photos containing corgis and photos containing no corgis, as labeled by humans in advance). When the neural net produces a correct output (e.g., accurately identifying whether there's a corgi in the image), it gets reward feedback. This feedback can then be used to adjust the strength of each interneuronal connection. Connections that are consistent with the correct answer are made stronger, while those that provide a wrong answer are weakened. Over time, the neural net organizes itself to be able to provide the correct answers without coaching. Experiments have shown that neural nets can learn their subject matter even with unreliable teachers. Despite these strengths, early connectionist systems had a fundamental limitation. One-layer neural networks were mathematically incapable of solving some kinds of problems.
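As a minimal sketch of this training loop, the Python below trains a single artificial neuron (a perceptron) on made-up feature vectors standing in for labeled photos; real image classifiers use deep networks over raw pixels, and every number here is an illustrative assumption.

```python
# A minimal sketch of the training loop described above: a single
# artificial neuron learning from labeled examples.
import random

random.seed(0)
weights = [random.uniform(-0.1, 0.1) for _ in range(3)]  # start random
LEARNING_RATE = 0.1

def predict(features):
    # The "neuron" fires if the weighted sum of its inputs is positive.
    return sum(w * f for w, f in zip(weights, features)) > 0

# Hypothetical pre-labeled data: (feature vector, is_corgi)
training_data = [
    ([1.0, 0.9, 0.1], True), ([0.9, 1.0, 0.0], True),
    ([0.1, 0.2, 1.0], False), ([0.0, 0.1, 0.9], False),
]

for _ in range(20):  # repeated passes over the training set
    for features, label in training_data:
        if predict(features) != label:
            # Reward feedback: strengthen connections that point toward
            # the correct answer, weaken those that point away from it.
            sign = 1 if label else -1
            weights = [w + LEARNING_RATE * sign * f
                       for w, f in zip(weights, features)]

print(all(predict(f) == y for f, y in training_data))  # True once trained
```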
Page 28
If a network had enough layers and enough training data, it could deal with an amazing level of complexity. Indeed, the tremendous surge in AI progress in recent years has resulted from the use of multiple neural net layers. Yet connectionist approaches to AI were largely ignored until the mid-2010s, when hardware advances finally unlocked their latent potential.
Page 32
These cerebellum-driven animal behaviors are known as fixed action patterns. These are hardwired into members of a species, unlike behavior learned through observation and imitation.
Page 33
When behaviors are driven by genetics instead of learning, they are orders of magnitude slower to adapt. While learning allows creatures to meaningfully modify their behavior during a single lifetime, innate behaviors are limited to gradual change over many generations.
In order to make faster progress, evolution needed to devise a way for the brain to develop new behaviors without waiting for genetic change to reconfigure the cerebellum. This was the neocortex.
Meaning "new rind," it emerged some 200 million years ago in a novel class of animals: mammals.[
Page 34
The neocortex was capable of a new type of thinking: it could invent new behaviors in days or even hours. This unlocked the power of learning.
Page 36
When humans are able to connect our neocortices directly to cloud-based computation, we'll unlock the potential for even more abstract thought than our organic brains can currently support on their own.
Page 40
Connectionist approaches were impractical for a long time because they take so much computing power to train. But the price of computation has fallen dramatically.
Page 41
But then, in 2015–16, Alphabet subsidiary DeepMind created AlphaGo, which used a "deep reinforcement learning" method in which a large neural net processed its own games and learned from its successes and failures.[78] It started with a huge number of recorded human Go moves and then played itself many times until the version AlphaGo Master was able to beat the world human Go champion, Ke Jie.[79]
A more significant development occurred a few months later with AlphaGo Zero. When IBM beat world chess champion Garry Kasparov with Deep Blue in 1997, the supercomputer was filled with all the know-how its programmers could gather from human chess experts.[80] It was not useful for anything else; it was a chess-playing machine. By contrast, AlphaGo Zero was not given any human information about Go except for the rules of the game, and after about three days of playing against itself, it evolved from making random moves to easily defeating its previous human-trained incarnation, AlphaGo, by 100 games to 0.[81] (In 2016, AlphaGo had beaten Lee Sedol, who at the time ranked second in international Go titles, in four out of five games.) AlphaGo Zero used a new form of reinforcement learning in which the program became its own instructor. It took AlphaGo Zero just twenty-one days to reach the level of AlphaGo Master, the version that defeated sixty top professionals online and the world champion Ke Jie in three out of three games in 2017.[82] After forty days, AlphaGo Zero surpassed all other versions of AlphaGo and became the best Go player in human or computer form.[83] It achieved this with no encoded knowledge of human play and no human intervention.
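The self-play idea can be sketched at toy scale. The Python below is nothing like AlphaGo Zero's architecture (no neural net, no tree search); it is a simple Monte Carlo self-play learner for a tiny Nim game, with all parameters chosen purely for illustration, showing how a program that plays only against itself and updates values from wins and losses can discover strategy.

```python
# A toy sketch of self-play reinforcement learning. The game (Nim:
# take 1 or 2 stones, taking the last stone wins) and all constants
# are illustrative assumptions.
import random
from collections import defaultdict

random.seed(0)
Q = defaultdict(float)            # value estimate for (stones_left, action)
ALPHA, EPSILON, GAMES = 0.5, 0.1, 20000

def legal(stones):
    return [a for a in (1, 2) if a <= stones]

def choose(stones):
    if random.random() < EPSILON:                            # explore
        return random.choice(legal(stones))
    return max(legal(stones), key=lambda a: Q[(stones, a)])  # exploit

for _ in range(GAMES):
    stones, history = 15, []
    while stones > 0:             # the two "players" share one brain
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    # The side that moved last took the final stone and won: push its
    # moves toward +1 and the loser's moves toward -1.
    for i, (s, a) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(s, a)] += ALPHA * (reward - Q[(s, a)])

# With enough games, the policy should rediscover the winning rule for
# this game: leave the opponent a multiple of 3 stones.
print({s: max(legal(s), key=lambda a: Q[(s, a)]) for s in (4, 5, 7, 8)})
```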
Page 42
The next incarnation, AlphaZero, can transfer abilities learned from Go to other games like chess.
The latest version as I write this is MuZero, which repeated these feats without even being given the rules!
But deep reinforcement learning is not limited to mastering such games.
The only exceptions (for now) are board games that require very high linguistic competencies. Diplomacy is perhaps the best example of this: a world-domination game that is impossible for a player to win through luck or skill alone, and that forces players to talk to one another.[87] To win, you have to be able to convince people that moves that help you will be in their own self-interest. So an AI that can consistently dominate Diplomacy games will likely have also mastered deception and persuasion more broadly.
But even at Diplomacy, AI made impressive progress in 2022, most notably Meta's CICERO, which can beat many human players.[88] Such milestones are now being reached almost every week.
Page 43
Yet while MuZero can conquer many different games, its achievements are still relatively narrow: it can't write a sonnet or comfort the sick. To go further, AI will need to master language. We can construct a multilayer feed-forward neural net and find billions (or trillions) of sentences to train it. These can be gathered from public sources on the web. The neural net is then used to assign each sentence a point in 500-dimensional space (that is, a list of 500 numbers, though the exact figure is arbitrary; it can be any suitably large number). At first, the sentence is given a random assignment for each of the 500 values. During training, the neural net adjusts the sentence's place within the 500-dimensional space so that sentences with similar meanings are placed close together, while dissimilar sentences end up far away from one another. If we run this process for many billions of sentences, the position of any sentence in the 500-dimensional space will indicate what it means by virtue of what it is close to.
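A minimal sketch of this idea, using the open-source sentence-transformers library; the model name below is one common public choice, not the book's system, and its vectors happen to have 384 dimensions rather than 500, but the principle is the same.

```python
# Sentences become points in a high-dimensional space; similar
# meanings land close together, unrelated ones far apart.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "The dog chased the ball across the yard.",
    "A puppy ran after a toy in the garden.",
    "Interest rates rose sharply last quarter.",
]
vectors = model.encode(sentences)   # one point in embedding space each

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors[0], vectors[1]))   # relatively high similarity
print(cosine(vectors[0], vectors[2]))   # relatively low similarity
```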
Page 44
AI learns meaning from the contexts that words are actually used in.
Page 46
One of the most promising applications of hyperdimensional language processing is a class of AI systems called transformers. These are deep-learning models that use a mechanism called "attention" to focus their computational power on the most relevant parts of their input data, in much the same way that the human neocortex lets us direct our own attention toward the information most vital to our thinking. As a scaled-down example, if I can use only one parameter to predict "Is this animal an elephant?" I might choose "trunk." So if the neural net's node dedicated to judging whether the animal has a trunk fires ("Yes, it does"), the transformer would categorize it as an elephant. But even if that node learns to perfectly recognize trunks, there are some animals with trunks that aren't elephants, so the one-parameter model will misclassify them. By adding parameters like "hairy body," we can improve accuracy. Now if both nodes fire ("hairy body" and "trunk"), I can guess that it's probably not an elephant but rather a woolly mammoth. The more parameters I have, and the more granular detail I can capture, the better predictions I can make.
These parameters are stored as weights between nodes in the neural net. And in practice, while they sometimes correspond to human-understandable concepts like "hairy body" or "trunk," they often represent highly abstract statistical relationships that the model has discovered in its training data.
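For readers who want to see the core mechanism, here is a minimal NumPy sketch of scaled dot-product attention, the computation introduced in the 2017 transformer paper; real models wrap this in learned projection matrices, multiple heads, and dozens of stacked layers.

```python
# Scaled dot-product attention: each query scores every key, and the
# scores become weights that focus computation on the relevant values.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(queries, keys, values):
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    weights = softmax(scores)          # each row sums to 1
    return weights @ values, weights

rng = np.random.default_rng(0)
tokens, dim = 4, 8                     # a 4-token input, 8-dim embeddings
x = rng.normal(size=(tokens, dim))
output, w = attention(x, x, x)         # self-attention: tokens attend to one another
print(np.round(w, 2))                  # who attends to whom, per token pair
```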
Page 47
Invented by Google researchers in 2017, this mechanism has powered most of the enormous AI advances of the past few years.[95]
This requires vast amounts of computation both for training and for usage.
With many billions of parameters, a transformer can process the input words in the prompt at the level of associative meaning and then use the available context to piece together a completion text never before seen in history. And because the training text features many different styles of text, such as question-and-answer, op-ed pieces, and theatrical dialogue, the transformer can learn to recognize the nature of the prompt and generate an output in the appropriate style. While cynics may dismiss this as a fancy trick of statistics, because those statistics are synthesized from the combined creative output of millions of humans, the AI attains genuine creativity of its own.
Page 48
Another capability unlocked by GPT-3 was stylistic creativity. Because the model had enough parameters to deeply digest a staggeringly large dataset, it was familiar with virtually every kind of human writing. Users could prompt it to answer questions about any given subject in a huge variety of styles, from scientific writing to children's books, poetry, or sitcom scripts. It could even imitate specific writers, living or dead.
Page 49
Another startling advance in 2021 was multimodality.
In general, models like GPT-3 exemplify "few-shot learning."
But DALL-E and Imagen took this a dramatic step further by excelling at "zero-shot learning."
They could create new images wildly different from anything they had ever seen in their training data.
Page 50
Zero-shot learning is the very essence of analogical thinking and intelligence itself.
It is truly learning concepts with the ability to creatively apply them to novel problems.
In addition to zero-shot flexibility within a given type of task, AI models are also rapidly gaining cross-domain flexibility.
Page 51
In April 2022, Google's 540-billion-parameter PaLM model achieved stunning progress on this problem, particularly in two areas fundamental to our own intelligence: humor and inferential reasoning.
Even more importantly, PaLM could explain how it reached conclusions via "chain-of-thought" reasoning, although not yet (at least as of 2023) as deeply as humans can.
Page 52
Then, in March of 2023, GPT-4 was rolled out for public testing via ChatGPT. This model achieved outstanding performance on a wide range of academic tests such as the SAT, the LSAT, AP tests, and the bar exam.[119] But its most important advance was its ability to reason organically about hypothetical situations by understanding the relationships between objects and actions, a capability known as world modeling.
Page 53
AI progress is now so fast, though, that no traditional book can hope to be up to date.
AI will likely be woven much more tightly into your daily life.
Page 54
We are well on our way to re-creating the capabilities of the neocortex.
Page 56
My optimism about AI soon closing the gap in all these areas rests on the convergence of three concurrent exponential trends: improving computing price-performance, which makes it cheaper to train large neural nets; the skyrocketing availability of richer and broader training data, which allows training computation cycles to be put to better use; and better algorithms that enable AI to learn and reason more efficiently.
Page 58
While a neocortex can have some idea of what a training set is all about, a well-designed neural net can extract insights beyond what biological brains can perceive. From playing a game to driving a car, analyzing medical images, or predicting protein folding, data availability provides an increasingly clear path to superhuman performance. This is creating a powerful economic incentive to identify and collect kinds of data that were previously considered too difficult to bother with.
Page 59
When AI researchers talk about human-level intelligence, they generally mean the ability of the most skilled humans in a particular domain.
Page 60
Once we develop AI with enough programming abilities to give itself even more programming skill (whether on its own or with human assistance), there'll be a positive feedback loop.
Page 61
With machine learning getting so much more cost-efficient, raw computing power is very unlikely to be the bottleneck in achieving human-level AI.
Page 62
Computers will be able to simulate human brains in all the ways we might care about within the next two decades or so.
Page 63
With AI gaining major new capabilities every month and price-performance for the computation that powers it soaring, the trajectory is clear. But how will we judge when AI has finally reached human-level intelligence?
Page 64
In 2018 Google debuted Duplex, an AI assistant that spoke so naturally over the phone that unsuspecting parties thought it was a real human, and IBM's Project Debater, introduced the same year, realistically engaged in competitive debate.[160]
And as of 2023, LLMs can write whole essays to human standards.
Page 65
As I write this, despite the great engineering effort going into curbing hallucinations,[162] it remains an open question how difficult this problem will be to overcome.
If different computational processes lead a future AI to make groundbreaking scientific discoveries or write heartrending novels, why should we care how they were generated?
And if an AI is able to eloquently proclaim its own consciousness, what ethical grounds could we have for insisting that only our own biology can give rise to worthwhile sentience? The empiricism of the Turing test puts our focus firmly where it should be.
Between 2023 and 2029, the year I expect the first robust Turing test to be passed, computers will achieve clearly superhuman ability in a widening range of areas. Indeed, it is even possible that AI could achieve a superhuman level of skill at programming itself before it masters the commonsense social subtleties of the Turing test. That remains an unresolved question, but the possibility shows why our notion of human-level intelligence needs to be rich and nuanced.
Page 66
As Turing said in 1950, "May not machines carry out something which ought to be described as thinking but which is very different from what a man does? … [I]f, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection."
Page 66
Today, AI's still-limited ability to efficiently understand language acts as a bottleneck on its overall knowledge. By contrast, the main constraints on human knowledge are our relatively slow reading ability, our limited memory, and ultimately our short life spans.
Page 69
When AI language understanding catches up to the human level, it won't just be an incremental increase in knowledge, but a sudden explosion of knowledge. This means that an AI going out to pass a traditional Turing test is actually going to have to dumb itself down! Thus, for tasks that don't require imitating a human, like solving real-world problems in medicine, chemistry, and engineering, a Turing-level AI would already be achieving profoundly superhuman results.
Page 69
Functional magnetic resonance imaging scans (fMRIs) measure blood flow in the brain as a proxy for neural firing.[167] When a given part of the brain is more active, it consumes more glucose and oxygen, requiring an inflow of oxygenated blood.
Page 69
Yet because there is a lag between actual brain activity and blood flow, activity can often be located in time only to within a couple of seconds, and rarely better than 400 to 800 milliseconds.
Page 69
Electroencephalograms (EEGs) have the opposite problem. They detect the brain's electrical activity directly, so they can pinpoint signals to within about one millisecond.[170] But because those signals are detected from outside the skull, it's hard to pinpoint exactly where they came from.
Page 70
Having a thought-to-text technology would be transformative, which has prompted research aiming to perfect a brain wave–language translator.
Page 70
Elon Musk's Neuralink,
Page 71
The Defense Advanced Research Projects Agency (DARPA) is working on a long-term project called Neural Engineering System Design.
Page 71
Ultimately, brain–computer interfaces will be essentially noninvasive.
A brain–computer interface doesn't need to account for the bulk of these computations, as they are preliminary activity happening well below the top layer of the neocortex.[181] Rather, we need to communicate only with its upper ranges.
And we can ignore noncognitive brain processes like regulating digestion altogether.
Page 72
At some point in the 2030s we will reach this goal using microscopic devices called nanobots. These tiny electronics will connect the top layers of our neocortex to the cloud, allowing our neurons to communicate directly with simulated neurons hosted for us online.
As this century progresses and the price-performance of computing continues to improve exponentially, the computing power available to our brains will, too.
Page 72
Remember what happened two million years ago, the last time we gained more neocortex? We became humans.
The result will be the invention of means of expression vastly richer than the art and technology that's possible today, more profound than we can currently imagine.
Page 73
But we might eventually have art that puts a character's raw, disorganized, nonverbal thoughts, in all their inexpressible beauty and complexity, directly into our brains. This is the cultural richness that brain–computer interfaces will enable for us.
Chapter 3: Who Am I?
Page 76
What is consciousness?
Page 76
One of these refers to the functional ability to be aware of one's surroundings and act as though aware of both one's internal thoughts and an external world that's distinct from them.
It is generally possible to judge the level of another person's consciousness from the outside. But a second meaning is more relevant: the ability to have subjective experiences inside a mind. When I say here that we can't detect consciousness directly, I mean that a person's qualia cannot be detected from the outside.
Page 77
In the twenty-first century, scientists have gained a better understanding of how even very primitive life forms can show rudimentary forms of intelligence, such as memory.[6]
Page 78
In 2012 a multidisciplinary group of scientists met at the University of Cambridge to assess the evidence of consciousness among nonhuman animals.
Page 78
Regardless of consciousness's origin, both poles of the spiritual–secular divide agree that it is somehow sacred.
Brains that can support more sophisticated behavior likewise give rise to more sophisticated subjective consciousness. Sophisticated behavior, as discussed in the previous chapter, arises from the complexity of information processing in a brain,[9] and this in turn is largely determined by how flexibly it can represent information and how many hierarchical layers are in its network.
Page 79
similar to that of our Neolithic ancestors. Yet when we can augment the neocortex itself, during the 2030s and 2040s, we won't just be adding abstract problem-solving power; we will be deepening our subjective consciousness itself.
Page 80
Subjective consciousness is qualitatively different from the realm of observable physical laws, and it doesn't follow that particular patterns of information processing according to these laws would yield conscious experience at all. Chalmers calls this the "hard problem of consciousness." His "easy questions," such as what happens to our mind when we are not awake, are among the most difficult in all of science, but at least they can be studied scientifically.
For the hard problem, Chalmers turns to a philosophical idea he calls "panprotopsychism."[13] Panprotopsychism treats consciousness much like a fundamental force of the universe-one that cannot be reduced to simply an effect of other physical forces.
Page 81
If there's a plausible chance that an entity you mistreat might be conscious, the safest moral choice is to assume that it is rather than risk tormenting a sentient being.
The Turing test would not just serve to establish human-level functional capability but would also furnish strong evidence for subjective consciousness and, thus, moral rights.
Page 82
A concept closely related to consciousness is our sense of free will.
Page 86
A statistical sampling of individual cells would make their states seem essentially random, but we can see that each cell's state results deterministically from the previous step, and the resulting macro image shows a mix of regular and irregular behavior. This demonstrates a property called emergence.[26] In essence, emergence is very simple things, collectively, giving rise to much more complex things.
We inhabit a world that is deeply affected by the kind of patterning found in such cellular automata: a very simple algorithm producing highly complex behavior straddling the boundary between order and chaos.
It is this complexity in us that may give rise to consciousness and free will.
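A minimal sketch of such a one-dimensional cellular automaton, using Wolfram's Rule 30 as one classic example of a trivially simple deterministic rule that yields a mix of regular and irregular behavior:

```python
# Each cell's next state is fully determined by itself and its two
# neighbors, yet the overall pattern mixes order and apparent chaos.
RULE = 30
WIDTH, STEPS = 64, 32

row = [0] * WIDTH
row[WIDTH // 2] = 1                    # start from a single "on" cell

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    # The 3-bit neighborhood (wrapping at the edges) indexes a bit of
    # the rule number to give the cell's next state.
    row = [
        (RULE >> ((row[(i - 1) % WIDTH] << 2)
                  | (row[i] << 1)
                  | row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```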
Page 88
"compatibilism"- We can make free decisions (that is, ones not caused by something else, like another person), even though our decisions are determined by underlying laws of reality. The human brain has multiple distinct decision-making units.
Page 90
If an electronic brain represents the same information as a biological brain and claims to be conscious, there is no plausible scientific basis for denying its consciousness. Ethically, then, we ought to treat it as though it is conscious and therefore possesses moral rights.
Page 95
To the extent that your identity hinges on the exact sperm and egg that made you, the odds of this happening were about one in two quintillion. Even if your father had produced two chromosomally identical sperm at age twenty-five and age forty-five, they wouldn't give precisely the same contribution to the formation of a baby.
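As a hedged illustration (not the book's own derivation), one way a figure on the order of two quintillion can arise is from the rough number of sperm a man produces in a lifetime multiplied by the rough number of eggs a woman is born with:

```python
# Illustrative assumptions only: lifetime sperm production on the
# order of 2 trillion, and roughly 1 million egg cells at birth.
sperm_lifetime = 2e12
eggs_lifetime = 1e6
combinations = sperm_lifetime * eggs_lifetime
print(f"{combinations:.0e}")   # 2e+18: about two quintillion pairings
```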
Page 98
The most common explanation of this apparent fine-tuning states that the very low probability of living in such a universe is explained by observer selection bias.[76] In other words, in order for us to even be considering this question, we must inhabit a fine-tuned universe-if it had been otherwise, we wouldn't be conscious and able to reflect on that fact. This is known as the anthropic principle. Some scientists believe that such an explanation is adequate.
Page 99
think there is something there that needs explaining."
Page 99
Even as of 2023, though, AI is rapidly gaining proficiency at imitating humans. Deep-learning approaches like transformers and GANs (generative adversarial networks) have propelled amazing progress.
Page 99
By combining these techniques, AI can thus already imitate a specific person's writing style, replicate their voice, or even realistically graft their face into a whole video.
Page 100
In 2016, The Verge published a remarkable article about a young woman named Eugenia Kuyda who used AI and saved text messages to "resurrect" her dead best friend, Roman Mazurenko.[82] As the amount of data each of us generates grows, ever more faithful re-creations of specific humans will become possible.
Page 101
Replicant bodies will exist mostly in virtual and augmented reality, but realistic bodies in actual reality (that is, convincing androids) will also be possible using the nanotechnology of the late 2030s.
Page 102
Eventually replicants may even be housed in cybernetically augmented biological bodies grown from the DNA of the original person (assuming it can be found).
Page 103
In the early 2040s, nanobots will be able to go into a living person's brain and make a copy of all the data that forms the memories and personality of the original person: You 2.
Page 104
This level of technology will also allow our subjective self to persist in After Life.
Page 105
The practical goal is to figure out how to get computers to interface effectively with the brain, and to crack the code of how the brain represents information.
Page 109
Yet despite my share of responsibility for who I am, my self-actualization is limited by many factors outside my control. My biological brain evolved for a very different kind of prehistoric life and predisposes me to habits that I would rather not have. It cannot learn fast enough or remember well enough to know all the things I would like to know. I can't reprogram it to free me of fears, traumas, and doubts that I know are preventing me from achieving what I would like to achieve. And my brain sits in a body that is gradually aging-although I work hard to slow this process-and is biologically programmed to eventually destroy the information pattern that is Ray Kurzweil. The promise of the Singularity is to free us all from those limitations.
Once our brains are backed up on a more advanced digital substrate, our self-modification powers can be fully realized.
Page 112
the law of accelerating returns
The LOAR describes a phenomenon wherein certain kinds of technologies create feedback loops that accelerate innovation.
Page 113
What makes the LOAR so powerful for information technologies is that feedback loops keep the costs of innovation lower than the benefits, so progress continues.
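A small sketch of what this compounding means in practice, using the roughly sixteen-month doubling time for computing price-performance cited earlier in the book; the doubling time is the only input, and the horizon years are arbitrary.

```python
# Exponential growth: performance per dollar doubles every 16 months.
DOUBLING_MONTHS = 16

def price_performance_multiple(years):
    return 2 ** (years * 12 / DOUBLING_MONTHS)

for years in (1, 5, 10, 20):
    print(f"{years:>2} years -> ~{price_performance_multiple(years):,.0f}x")
# roughly 1.7x, 13x, 181x, and 32,768x respectively
```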
Page 115
A modern version of a predator hiding in the foliage is the phenomenon of people continually monitoring their information sources, including social media, for developments that might imperil them.
Nostalgia, a term the Swiss physician Johannes Hofer devised in 1688 by combining the Greek words nostos (homecoming) and algos (pain or distress), is more than just recalling fond reminiscences; it is a coping mechanism to deal with the stress of the past by transforming it.
The Reality Is That Nearly Every Aspect of Life Is Getting Progressively Better as a Result of Exponentially Improving Technology > Page 122
Technological change is essentially permanent. Once our civilization learns how to do something useful, we generally keep that knowledge and build on it.
new technologies can have huge indirect benefits, even far from their own areas of application.
Page 128
Electricity is not itself an information technology, but because it powers all our digital devices and networks, it is the prerequisite for the countless other benefits of modern civilization.
Page 133
most of our progress in disease treatment and prevention to date has been the product of the linear process of hit-or-miss efforts to find useful interventions. Because we have lacked tools for systematically exploring all possible treatments, discoveries under this paradigm have owed a lot to chance.
Page 135
During the 2020s we are entering the second bridge: combining artificial intelligence and biotechnology to defeat these degenerative diseases. We are now utilizing AI to find new drugs, and by the end of this decade we will be able to start the process of augmenting and ultimately replacing slow, underpowered human trials with digital simulations.
medical nanorobots with the ability to intelligently conduct cellular-level maintenance and repair throughout our bodies.
Page 136
The core of a person's identity is not their brain itself, but rather the very particular arrangement of information that their brain is able to represent and manipulate. Once we can scan this information with sufficient accuracy, we'll be able to replicate it on digital substrates.
Page 146
Shifts in the kinds of jobs in demand have motivated millennials and Generation Z, more than other generations, to seek creative, often entrepreneurial careers, and have given them the freedom to work remotely, which cuts out travel time and expense but can lead to blurry boundaries between work and life.
Page 148
Increasing material prosperity has a mutually reinforcing relationship with declining violence. Where humans once identified only with small groups, communication technology (books, then radio and television, then computers and the internet) enabled us to exchange ideas with an ever wider sphere of people and discover what we have in common. The ability to watch gripping video of disasters in distant lands can lead to historical myopia, but it also powerfully harnesses our natural empathy and extends our moral concern across our whole species. Once humanity has extremely cheap energy (largely from solar and, eventually, fusion) and AI robotics, many kinds of goods will be so easy to reproduce that the notion of people committing violence over them will seem just as silly as fighting over a PDF seems today.
Page 159
The printing press is an excellent illustrative example of how the law of accelerating returns works for information technologies.
Page 160
Very broadly, the more ideas a person or a society has, the easier it is to create new ones; this includes technological innovation.
technologies that make it easier to share ideas make it easier to create new technologies: when Gutenberg introduced the printing press, it soon became vastly cheaper to share ideas.
The spread of knowledge brought wealth and political empowerment.
Page 163
History gives us reason for profound optimism, though. As technologies for sharing information have evolved from the telegraph to social media, the idea of democracy and individual rights has gone from barely acknowledged to a worldwide aspiration that's already a reality for nearly half the people on earth.
Page 164
The essential point to realize is that all the progress I have described so far came from the slow early stages of these exponential trends. As information technology makes vastly more progress in the next twenty years than it did in the past two hundred, the benefits to overall prosperity will be far greater-indeed, they are already much greater than most realize.
Page 169
As I will explain later in this chapter, we will soon produce high-quality, low-cost food using vertical agriculture with AI-controlled production and chemical-free harvesting.
Page 170
Much as the internet is an integrated and persistent environment of web pages, the VR and AR of the late 2020s will merge into a compelling new layer of our reality.
Page 171
Over the next couple of decades, brain–computer interface technology will become much more advanced.
Page 172
We need advances in materials science to achieve further improvements in cost-efficiency.
Page 173
Costs of solar electricity generation are falling quite a bit faster than those of any other major renewable, and solar has the most headroom to grow.
Page 177
A key challenge of the twenty-first century will be making certain that earth's growing population has a reliable supply of clean, fresh water.
3D printing allows manufacturing to be decentralized, empowering consumers and local communities.
Page 186
Each year the resolution of 3D printing is improving and the technology is getting cheaper. New research is applying 3D printing to biology. One potential drawback of 3D printing is that it could be used to manufacture pirated designs. All of this requires new approaches to protect intellectual property. Decentralized manufacturing will allow civilians to create weapons that they otherwise couldn't easily access.
Page 189
Material abundance and peaceful democracy make life better, but the challenge with the highest stakes is the effort to preserve life itself.
Biological life is suboptimal because evolution is a collection of random processes optimized by natural selection.
Page 190
We are beginning to use AI for discovery and design of both drugs and other interventions, and by the end of the 2020s biological simulators will be sufficiently advanced to generate key safety and efficacy data in hours rather than the years that clinical trials typically require.
Page 192
Nanorobots not only will be programmed to destroy all types of pathogens but will be able to treat metabolic diseases.
The fourth bridge to radical life extension will be the ability to essentially back up who we are, just as we do routinely with all of our digital information. As we augment our biological neocortex with realistic (albeit much faster) models of the neocortex in the cloud, our thinking will become a hybrid of the biological thinking we are accustomed to today and its digital extension.
Page 193
If you restored your mind file after biological death, would you really be restoring yourself?
Page 194
Information technology is about ideas, and exponentially improving our ability to share ideas and create new ones gives each of us-in the broadest possible sense-greater power to fulfill our human potential and to collectively solve many of the maladies that society faces.
Page 195
The convergent technologies of the next two decades will create enormous prosperity and material abundance around the world. But these same forces will also unsettle the global economy, forcing society to adapt at an unprecedented pace.
Page 197
Yet driving is just one of a very long list of occupations that are threatened in the fairly near term by AI that exploits the advantage of training on massive datasets.
Page 198
A 2023 report by McKinsey found that 63 percent of all working time in today's developed economies is spent on tasks that could already be automated with today's technology.
Page 207
Erik Brynjolfsson argues that, unlike previous technology-driven transitions, the latest form of automation will result in a loss of more jobs than it creates.
Page 208
Economists who take this view see the current situation as the culmination of several successive waves of change.
The first wave is often referred to as "deskilling."
One of the main effects of deskilling is that it is easier for people to take new jobs without lengthy training.
The second wave is "upskilling." Upskilling often follows deskilling, and introduces technologies that require more skill than what came before.
What makes AI-driven innovation different from previous technologies is that it opens more opportunities for taking humans out of the equation altogether.
Page 209
This is desirable not just for cost reasons but also because in many areas AI can actually do a better job than the humans it is replacing.
Yet it is important to distinguish between tasks and professions.
ATMs can now replace human bank tellers for many routine cash transactions, but tellers have taken on a greater role in marketing and building personal relationships with customers.[83]
Page 210
Yet one sticking point in this thesis has been a productivity puzzle: if technological change really is starting to cause net job losses, classical economics predicts that there would be fewer hours worked for a given level of economic output. By definition, then, productivity would be markedly increasing. However, productivity growth as traditionally measured has actually slowed since the internet revolution in the 1990s.
Page 214
The good news, though, is that artificial intelligence and technological convergence will turn more and more kinds of goods and services into information technologies during the 2020s and 2030s-allowing them to benefit from the kinds of exponential trends that have already brought such radical deflation to the digital realm.
Page 219
So, even as technological change is rendering many jobs obsolete, those very same forces are opening up numerous new opportunities that fall outside the traditional model of "jobs."
Page 221
People will be able to describe their ideas to AI and tweak the results with natural language until they fulfill the visions in their minds. Instead of needing thousands of people and hundreds of millions of dollars to produce an action movie, it will eventually be possible to produce an epic film with nothing but good ideas and a relatively modest budget for the computer that runs the AI.
Page 221
Most of our new jobs require more sophisticated skills. As a whole, our society has moved up the skill ladder, and this will continue.
Page 222
Real-time translation between any pair of languages will become smooth and accurate, breaking down the language barriers that divide us. Augmented reality will be projected constantly onto our retinas from our glasses and contact lenses.
Page 223
But on the way to a future of such universal abundance, we need to address the societal issues that will arise as a result of these transitions.
Page 226
Thanks to accelerating technological change, overall wealth will be far greater,
Page 226
and given the long-term stability of our social safety net regardless of the governing party, it is very likely to remain in place, and at substantially higher levels than today.
Page 227
We'll need smart governmental policies to ease the transition and ensure that prosperity is broadly shared.
Page 228
Considering the role of jobs in our lives forces us to reconsider our broader search for meaning. People often say that it is death and the brevity of our existence that gives meaning to life. But my view, rather, is that this perspective is an attempt to rationalize the tragedy of death as a good thing.
Page 229
One of the great challenges of adapting to technological changes is that they tend to bring diffuse benefits to a large population, but concentrated harms to a small group.
Page 233
I do think the specter of troublesome social dislocation, including violence, during this transition is a possibility that we should anticipate and work to mitigate.
Page 235
Turning medicine into an exact science will require transforming it into an information technology, allowing it to benefit from the exponential progress of information technologies. In 2023 the first drug designed end-to-end by AI entered phase II clinical trials to treat a rare lung disease.
Page 236
AI can learn from more data than a human doctor ever could and can amass experience from billions of procedures instead of the thousands a human doctor can perform in a career.
Page 237
In 2020 a team at MIT used AI to develop a powerful antibiotic that kills some of the most dangerous drug-resistant bacteria in existence. But by far the most important application of AI to medicine in 2020 was the key role it played in designing safe and effective COVID-19 vaccines in record time.
Page 241
There will likely be substantial resistance in the medical community to increasing reliance on simulations for drug trials, for a variety of reasons. It is very sensible to be cautious about the risks.
Page 243
In addition to scientific applications, AI is gaining the ability to surpass human doctors in clinical medicine.
Page 245
As Hans Moravec argued back in 1988, when contemplating the implications of technological progress, no matter how much we fine-tune our DNA-based biology, our flesh-and-blood systems will be at a disadvantage relative to our purpose-engineered creations.[45] As writer Peter Weibel put it, Moravec understood that in this regard humans can only be "second-class robots."[46] This means that even if we work at optimizing and perfecting what our biological brains are capable of, they will be billions of times slower and far less capable than what a fully engineered body will be able to achieve.
Page 253
Think of e-books. When books were first invented, they had to be copied by hand, so labor was a massive component of their value. With the advent of the printing press, physical materials like paper, binding, and ink took on the dominant share of the price. But with e-books, the costs of energy and computation to copy, store, and transmit a book are effectively zero. What you're paying for is creative assembly of information into something worth reading (and often some ancillary factors, like marketing).
Page 253
As all these components of value become less expensive, the proportional value of the information contained in products will increase.
In many cases, this will make products cheap enough that they can be free to consumers.
Page 254
This dramatic reduction of physical scarcity will finally allow us to easily provide for the needs of everyone.
While nanotechnology will allow the alleviation of many kinds of physical scarcity, economic scarcity is also partly driven by culture-especially when it comes to luxury goods.
Page 254
The nanotech manufacturing revolution won't eliminate all economic scarcity.
Page 256
(Longevity Escape Velocity)
Page 257
If you can live long enough for anti-aging research to start adding at least one year to your remaining life expectancy annually, that will buy enough time for nanomedicine to cure any remaining facets of aging.
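The arithmetic of longevity escape velocity is easy to sketch; the starting age, remaining expectancy, and annual research gain below are illustrative assumptions, not the book's projections.

```python
# Past "escape velocity" (gain > 1 year per year), remaining life
# expectancy grows instead of shrinking.
age, remaining = 60.0, 25.0    # a 60-year-old with 25 expected years left
gain_per_year = 1.2            # research adds 1.2 years of expectancy yearly

for _ in range(50):
    age += 1
    remaining += gain_per_year - 1    # live one year, gain 1.2 back
    if remaining <= 0:
        print(f"life expectancy exhausted at age {age:.0f}")
        break
else:
    print(f"at age {age:.0f}, still {remaining:.0f} years remaining")
```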
Page 260
Eventually, using nanobots for body maintenance and optimization should prevent major diseases from even arising.
Page 262
As AI gains greater ability to understand human biology, it will be possible to send nanobots to address problems at the cellular level long before they would be detectable by today's doctors.
Page 263
Nanobots will also allow people to change their cosmetic appearance as never before.
Page 264
A deeper virtual neocortex will give us the ability to think thoughts more complex and abstract than we can currently comprehend.
Page 267
just as this progress will improve billions of lives, it will also heighten peril for our species. New, destabilizing nuclear weapons, breakthroughs in synthetic biology, and emerging nanotechnologies will all introduce threats we must deal with.
Page 271
Advances in genetic engineering[25] (which can edit viruses by manipulating their genes) could allow the creation, either intentionally or accidentally, of a supervirus that would have both extreme lethality and high transmissibility.
Page 274
By contrast, biological weapons can be very cheap.
Page 276
Even if responsible people design safe nanobots, bad actors could still design dangerous ones.
Page 278
If AI is smarter than its human creators, it could potentially find a way around any precautionary measures that have been put in place. There is no general strategy that can definitively overcome that.
Page 278
There are three broad categories of peril: misuse ... outer misalignment, which refers to cases where there's a mismatch between the programmers' actual intentions and the goals they teach the AI in hopes of achieving them; and inner misalignment, which occurs when the methods the AI learns to achieve its goal produce undesirable behavior, at least in some cases.
While the AI alignment problem will be very hard to solve,[62] we will not have to solve it on our own: with the right techniques, we can use AI itself to dramatically augment our own alignment capabilities.
Page 284
With technologies now beginning to modify our bodies and brains, another type of opposition to progress has emerged in the form of "fundamentalist humanism": opposition to any change in the nature of what it means to be human.
Page 285
AI is the pivotal technology that will allow us to meet the pressing challenges that confront us, including overcoming disease, poverty, environmental degradation, and all of our human frailties. Overall, we should be cautiously optimistic.