Nick Bostrom’s new tome comes out today, and what can I say — it has a great cover, a number of interesting questions, and a subtitle hinting that it might address the meaning of life in a future where AI and robots can do everything. But alas, after much build-up and anticipation, he leaves that question unanswered, with an abrupt “oops, out of time” on page 472. Not even a 42 to make us chuckle.
My biggest frustration with the book is that he takes over 500 pages to convey what could be said more clearly in well under 50. I can’t wait to run the text through an LLM for the compressed summary (it’s a bit ironic to use an AI this way, producing something like Wittgenstein’s Tractatus). If you do decide to slog through it, I can save you some time: skip the first 118 pages. You won’t miss anything in that preamble that is not repeated elsewhere, and you’ll avoid a pedantic revival of Malthusian immortality.
My main concern with Bostrom’s overall framework is his baseline assumption that our future world has reached “technological maturity” (everything that can be invented or discovered has been), and that humans are, at the same time, almost immortal and able to edit themselves, their emotional responses and cognition, with precision. You could say we have philosophical differences here:
1) I don’t think there is an upper bound to ideas. Each idea is a recombination of prior ideas, and the space of possible idea combinations grows exponentially (Reed’s Law). The unfolding of certain iterative algorithms (evolution, ML, culture) is computationally irreducible, and learning its richness would require a simulator of comparable complexity. We will never finish exploring the universe, even if much of it ends up being in deep simulations. We will never accumulate all possible ideas. And this opens a window to a rich panoply of possible meaning and purpose for humanity that Bostrom’s presumption forecloses. His subtitle of a “solved world” presumes the computational complexity of our existence reaches some terminus, a solution to the questions of life. He concedes that increasingly insignificant discoveries may remain possible, but holds that “technology” approaches an upper asymptote, making it effectively a ceiling. It feels like cosmological doomerism.
2) I don’t think we will modify our core intelligence or achieve near-immortality in a timeframe of relevance to the core question: how do we find meaning in a post-AGI world of abundance? We will have to wrestle with the AI issues long before we get to edit our biology to such effect (and we may need AGI to have a chance). Reverse-engineering the human brain to enhance its core functionality is much more difficult than building an AGI of comparable complexity. Extending lifespan to near-immortality is a very long process, with regulatory restraints on experimentation and a century of waiting to see the effects, and side effects. The pace of AI advancement will make biological change look glacial. Unfortunately, the vast majority of the book explores the bizarre implications of editing our biological consciousness (e.g., editing out the feeling of boredom as a way to say boredom won’t be a problem) that will never happen in a timeframe of relevance. Very little of the book explores AGI on its own, before the magic biology. So, it fundamentally misses the mark as a thought exercise about our future in the AGI era, and many of the possibilities explored without intellectual commitment bear an irrelevance to reality that I thought was only possible in the field of economics. :)
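To make the combinatorial claim in point 1 concrete, here is a minimal Python sketch (the function names are mine, for illustration) comparing the number of pairwise combinations of n ideas, which grows only quadratically, with the Reed’s Law count of possible subgroups of ideas, which grows exponentially:

```python
from math import comb

def pairings(n: int) -> int:
    # Pairwise combinations of n ideas: n choose 2, quadratic growth.
    return comb(n, 2)

def reed_groups(n: int) -> int:
    # Reed's Law: all subgroups of size >= 2 among n ideas,
    # i.e. 2^n minus the n singletons and the empty set. Exponential growth.
    return 2**n - n - 1

for n in (10, 20, 40):
    print(n, pairings(n), reed_groups(n))
```

Even at n = 40, the subgroup count already exceeds a trillion while pairings remain in the hundreds, which is the sense in which the idea space outruns any finite catalog of discoveries.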
Also, there is no high-level organization to the book; the chapters are just days of the week of a long rambling lecture given by Bostrom to his fawning pupils, interspersed with some animal characters on side adventures that are pure filler (often 20–40 pages at a time), and in retrospect, they are best skipped. So, the overall experience is one of countless digressions without a clear arc of where we are going.
But Bostrom does scatter some lovely turns of phrase, jovial nuggets, and poignant prose on a random walk through a soporific diatribe of digressions... Some that caught my eye:
“The concept of deep utopia can serve as a kind of philosophical particle accelerator in which extreme conditions are created that allow us to study the elementary constituents of our values.” (3, made me hopeful)
“We need to develop a culture that is better suited for a life of leisure.” (119) “We would emphasize enjoyment and appreciation rather than usefulness and efficiency.” (129)
“As we look deeper into the future, any possibility that is not radical is not realistic.” (129)
“Some researchers have suggested that our Stone Age forbears had plenty of free time, that they may have worked as little as four hours a day.” (130)
“We have to remember that ‘interesting times’ are often horrible times for those who have to live through them. An uneventful and orderly future, in contrast, can be a great place to inhabit.” (152)
“For each category of utopia, there is a correlate category of dystopia… usually not as a prediction of the future but as a critique of some pernicious pattern in the author’s contemporary society. In classical governance & culture dystopias, for example, the problematic pattern might be oppressive totalitarianism (Nineteen Eighty-Four) or dehumanizing consumerism (Brave New World).” (203)
He tries to address meaty topics like, what keeps life interesting? What is our purpose and meaning when the struggle is gone? Can fulfillment get full? But in each case, the pedagogy is more of a survey of all possible answers versus the much more difficult task of making specific predictions. He even invokes multiple universes, unicorn breeding, Jupiter brains and interstellar travel to make sure he’s exhausting all possible scenarios.
“Why do you think people are interested in interestingness?
The learning and exploration hypothesis: The value we place on interestingness derives from a kind of learning instinct and/or an exploration bias. We seek out situations that present us with significant new information and novel varied challenges, because doing so led our ancestors to acquire more knowledge and skills, which was adaptive in our evolutionary environment.” (230, followed by three other, less compelling hypotheses)
“We can think of intrinsified values as motivational flywheels” (236)
And of course, if all ideas have been had already, “we seem bound to encounter diminishing returns quite quickly, after which successive life years bring less and less interestingness.” (253)
“And although the years may bring some modicum of understanding of the workings of the world, it tends to come late, often too late — making it seem like the only fruit that grows on the tree of experience is resignation. Wisdom withered on the bough.” (259)
“Objective interestingness will probably peak around the development of machine superintelligence. Depending how steep the takeoff is, interestingness might then remain at unprecedentedly high levels for a decade or so, before gradually trailing off to levels far lower than we have seen even in relatively stagnant period of human history. The most important things that can be discovered may by then already have been discovered.” (265, one of the few specific forecasts, and one I believe to be patently false)
“Usually the conclusion has been that the best and most fulfilling way to live a human life, and the most praiseworthy one, is in fact to be a philosopher.” (311)
“The value of fulfillment may be satiable in a way that the value of interestingness is not.” (314)
“I expect that virtual worlds will be experienced as decidedly more real than the physical world” (323)
“All of us have aims, many of us have goals, but relatively few have missions.” (325)
“Feelings of alienation could be easily banished with mature neurotechnology” (335, just one example of countless appeals to bio magic that really makes his whole thought experiment useless and non-actionable)
“Cultural and interpersonal complications might provide us with purposes in utopia beyond those which we may create for ourselves individually by setting ourselves challenging goals.” (336)
“Augment early and often,” I used to joke 20 years ago when discussing humanity’s futile attempt to stay relevant in a future of superintelligence. So, I chuckled at “To have a chance at being task-optimal at technological maturity, you would probably have to start your transformation early and proceed at close to maximal speed.” (355)
And finally, an observation I agree with, with the framework of Wolfram’s computational universe in mind: “To the extent that meaning can be derived from scientifically or artistically representing continually changing patterns in the world, as opposed to fundamental timeless conditions, there would be better prospects of never running out of material. Yet this would still leave us with the second problem, which is that other minds and mechanisms would far surpass ours in the efficiency with which they could discover these truths and patterns” (415)
P.S. I picked up the book at a cool gathering at the beginning of birthday month and will share the learnings when they become public.