If you woke up one day and discovered that you were living in a virtual world – that everything you'd ever known was, like the Matrix, a form of hyper-realistic simulation – what would this imply for your hopes, dreams and experiences? Would it reveal them all to be lies: deceptions devoid of authenticity?
For most people, the intuitive answer to that last question is "yes". After all, the Matrix movies depict a dystopian nightmare in which humanity has been enslaved by sinister machines. How else to think about the revelation that "reality" is nothing like it seems? For the philosopher David Chalmers, however, none of this necessarily follows. No matter what the status of your reality, he suggests, your thoughts and experiences remain as real as it gets. And the value and purpose of your life are similarly untouched. In fact, as Chalmers bluntly puts it in his new book, Reality+: Virtual Worlds and the Problems of Philosophy: "Simulations are not illusions. Virtual worlds are real. Virtual objects really exist." And the sooner we get used to these ideas, the sooner we'll be able to grasp some of the digital age's deepest tensions.
Chalmers didn't start out wanting to be a philosopher. When he was growing up in Australia in the 1970s, he identified with maths more than philosophy. He also read a great deal of science fiction and, by the time he'd finished a maths degree and spent six months travelling around Europe, found himself thinking obsessively about the science of the mind. How could something as remarkable as consciousness be part of the physical world? What might it mean to study consciousness scientifically?
The thing about science fiction and thought experiments, Chalmers told me when we spoke in late 2021, is that "they don't just illustrate a philosophical problem: they actually give you some impetus to work through it". And it was the impetus of his fascination with consciousness that led him, partway through a graduate degree in maths at Oxford, to write up some of the ideas swirling in his head and send them to Douglas Hofstadter – one of the world's most renowned researchers in cognitive science – at Indiana University. Hofstadter wrote back. "Why don't you come to Indiana and work with me?" Chalmers remembers him suggesting. "There's a group of people here who think about everything under the Sun: philosophy, artificial intelligence, cognitive science, humour, creativity. I thought, that sounds fantastic. So I ended up leaving Oxford and going to Indiana."
This was in 1989 – a time when, Chalmers recalls, researching consciousness was "not really fashionable… people said, why are you wasting your time thinking about this musty old topic when you could be working on artificial intelligence?" He was obsessed with it, however; and this meant throwing himself into a field where he had plenty of ideas but little formal expertise. "I suspect," he told me, "if I'd been through a more standard programme, I probably would have been told this is a little bit too ambitious, let's focus it. Nobody told me that. They let me think about these big issues." As it turned out, the world was more than ready to listen to what he had to say.
In 1994, soon after completing his doctorate, Chalmers submitted a talk proposal to a new conference in Tucson, Arizona, called Toward a Science of Consciousness. The conference organisers liked the proposal and put him in the first session; so, in due course, Chalmers found himself on stage asking his audience a question: "What is it we need to explain when explaining consciousness?" Establishing a science of consciousness was, you might have thought, a clear enough objective. But, Chalmers suggested, any attempt at explaining consciousness actually entails not one but two sorts of problem. On the one hand, there are the (relatively) "easy" problems of explaining behaviour: those objectively observable acts performed by a conscious being, such as reacting to stimuli, controlling actions and reporting mental states. On the other hand, there is the (extremely) "hard" problem of explaining subjective experience: what it is actually like to have particular thoughts and feelings.
This second problem is hard, above all, because it entails phenomena that are neither susceptible to direct measurement nor encompassed by most scientific models. Your perception of being you is something that you know exists, but that's also impossible for anyone else to experience, record or feel. This vantage point is at once real in the most self-evident sense and outside of everything else we take to constitute reality. As Chalmers put it in a 1995 paper based on his talk: "Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does." The link between mind and matter is perhaps philosophy's most ancient enigma – and one that, for all the scientific advances of the last few centuries, becomes if anything more mysterious if you don't believe in souls and spirits.
Chalmers came bearing questions rather than definitive answers, but something about his formulation of "the hard problem" resonated with his audience. The phrase was picked up by fellow researchers, then by Scientific American and the US press. The conference itself received international media coverage – and interest began to build around the book he was working on. "As I've said many times," he told me, "there's nothing original about the idea that consciousness poses a hard problem. But that phrase does turn out to be a very effective way of posing the problem as a challenge, and making it harder for people to avoid." He had brought a vast theme to life with a clarity that commanded attention – and that suggested several tantalisingly specific lines of investigation.
Foremost among these was that of information. The early 1990s was a golden time for the science of consciousness, with bestselling books by physicists and philosophers, as well as research by prominent biologists and neuroscientists, exploring its basis in newly empirical terms. Chalmers' own thinking drew on these insights, proposing a "non-reductive" approach to the hard problem via the suggestion that every form of information processing entails an irreducible component constituting the basis of conscious experience. According to this view, the relatively simple information-processing taking place in the brain of a mouse yields relatively simple experiences, the immensely complex information-processing taking place in a human brain yields immensely complex experiences, and, most provocatively of all, even the minimal kind of information-processing taking place in a device like a thermostat may yield a minimal kind of experience. Consciousness is, in other words, an inherent property of the Universe itself: something that cannot be explained merely in terms of matter.
It was a theory that melded scientific rigour with wildly imaginative theorising – and that explicitly invited further research and debate, not least via the Association for the Scientific Study of Consciousness, which Chalmers co-founded in 1996. By the time his first book, The Conscious Mind, appeared later that year, he was a fêted name in his field. And he was already working on another highly speculative theory; one that would both contribute a second significant phrase to contemporary philosophy and place scenarios of human enhancement and augmentation centre stage: the extended mind.
Between the Tucson conference and the publication of The Conscious Mind, Chalmers and the British philosopher Andy Clark began to explore a deceptively simple question together: "Where does the mind stop and the rest of the world begin?" Answering this may, once again, seem to require little more than common sense. The brain is connected via the nervous system to the body – so it's the body that marks the limits of mind and self. Beyond the skin and the skull lies the external world, knowledge of which can only be gleaned indirectly. Yet Clark and Chalmers proposed something very different: an "active externalism" based on the hypothesis that humans are able to outsource aspects of thought and memory to external artefacts – and thus that these artefacts can literally become parts of human minds.
This being the mid-1990s – a time when the web was in its infancy and mobile phones resolutely "dumb" – one of the central examples Clark and Chalmers used to illustrate their theory was a Filofax: a personal organiser wallet whose name is an amalgamation of the phrase "file of facts". To record or process information via an object that's always with you is, they suggested, to become a "coupled system" with that object: one whose capabilities can only accurately be described by treating you-plus-that-object as a single entity. As they put it in their joint paper titled The Extended Mind: "If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process… if the resources of my calculator or my Filofax are always there when I need them, then they are coupled with me as reliably as we need. In effect, they are part of the basic package of cognitive resources that I bring to bear on the everyday world."
Although they started work on it in 1995, the paper didn't actually appear until 1998 – because, Chalmers explained, "we tried to publish it in three journals and they all turned us down: they thought it was rather extreme speculation." The speculation turned out to be prescient to a degree that has impressed even its authors. "It's interesting," Chalmers told me, "how some of these things go from being totally implausible, to interesting but speculative, to totally obvious." What was once an extreme imaginative leap has, over the course of two decades, become an everyday reality. Consider the role that artefacts like smartphones play in cognition – and the ways in which, if our phones are taken away from us, both how and what we think is altered in non-trivial ways. "I often joke that an enormous chunk of my mind is constituted by Google, by Apple, by Facebook," he noted. "Maybe Google has 30% and Apple has 20%. And that's just ordinary mind extension. Once this applies to everything in our environment, I think the potential is that much more."
It's here that we reach the inflection point between speculative fiction and technological possibility that has recurred throughout Chalmers' career – a theme that Reality+ and his most recent work push towards its metaphysical limits. For, while it may seem self-evident that our minds access the "real" world directly and that any virtual or simulated environments are thus "unreal", the ways in which we interact with 21st-Century technologies belie this at every turn. Onscreen words and worlds inspire ferocious passions, loyalties and hates; virtual currencies and goods change hands for millions of dollars; augmentations of everyday experience, from mapping and search software to social media and advertising algorithms, shape the contours of consciousness. In each case the point is not that such things are unreal, but rather that they are differently real. They are, Chalmers emphasises, real virtual artefacts, real mediated experiences and real online encounters: entities whose nature and consequences can only be understood if we stop treating them as abstractions.
Near the start of the first Matrix movie, the character Neo (Keanu Reeves) faces a dilemma. He has just been told that his world is, in fact, a simulation within a larger reality. Now he has a choice. He can take a blue pill and keep on living forgetfully in the Matrix, as if nothing has happened. Or he can take a red pill and wake up into the "base" reality beyond it. What should he do? What would you do? Neo chooses the red pill – and goes on to save both the external and simulated worlds. But, as Chalmers pointed out in a 2003 article commissioned by the production company behind The Matrix, discovering that you've lived your entire life inside a simulation doesn't actually invalidate the "reality" of that life.
After all, if you were born and grew up in the Matrix, you would by definition never have encountered any non-simulated objects, or had any experiences prompted by non-simulated interactions. What you call "trees" are actually digital simulations. But since you've never seen a non-simulated tree, all this means is that everything you know about "trees" can technically be rephrased as being about "simulated trees". Unless you have suddenly been granted simulation-breaking new powers, this revelation is no different to discovering that what you have been calling "trees" are, technically, "accretions of subatomic particles" or "collapsed quantum waveforms" or "temporarily captured energy". In other words, Chalmers suggests, if I were to wake up one day and discover I'm living in a simulation, "I should not infer that the external world does not exist, or that I have no body, or that there are no tables and chairs… Rather, I should infer that the physical world is constituted by computations beneath the microphysical level. There are still tables, chairs, and bodies: these are made up fundamentally of bits, and of whatever constitutes these bits. This world was created by other beings, but is still perfectly real."
What follows from this? Among other things, Chalmers argues in Reality+, the question of whether we're living in a simulation has an unexpectedly theological dimension. A simulation operated by super-powerful entities is, in many ways, equivalent to a Universe created by a divine being. And it raises similar questions – not least if you turn out to be one of the super-powerful entities in question. What kinds of risks and responsibilities accompany the god-like powers associated with operating simulated worlds? Given that Facebook recently changed its name to Meta, in honour of the immersive environments it plans soon to unveil, the question of what it means for corporations to operate realms within which they're close to omniscient and omnipotent has a startlingly practical dimension.
"If you think that privacy and manipulation are already a problem on current social media," Chalmers told me, "they're obviously going to have the potential to be much more so when it comes to virtual worlds controlled and created by the same corporations." And this potential is even greater once we recognise that the values, experiences, objects and interactions at play in such worlds are real. In fact, the questions that matter most are not about reality and unreality at all, but rather about the kinds of experience, agency and opportunities afforded by any environment we are responsible for: "if these are genuine realities, ones where you can have meaningful experiences… what kind of meaningful experiences are we going to have?"
Plenty of philosophers and ethicists have made the case in recent years for the importance of principles like privacy, transparency, agency and explicability within information environments. Chalmers is unusual, however, in the intensity of his focus upon the technology's most distant horizons – and his quest for a non-naïve optimism when it comes to humans' relationships with and through their creations.
To see what such an optimism might look like in practice, consider an inexperienced user of a virtual environment who doesn't, for instance, know that the avatar they're chatting to is being controlled by a corporate AI rather than a human. This is a scenario in which an informational asymmetry – the fact that the user is profoundly deceived about the nature of the interaction – may be connected to all kinds of manipulation or exploitation. Contrast this with an experienced user of a virtual environment who is hanging out with some avatars controlled by (human) friends as well as an AI-controlled avatar that's telling them stories beside a virtual campfire. This is a very different prospect. What's playing out here is a potentially life-enhancing encounter in an artificial realm – its pleasures derived from a knowing combination of verisimilitude and fictionality.
In Reality+, Chalmers uses the phrase "a sense of virtuality" to describe the ways in which people know that an object or environment is simulated – and the importance of this awareness when it comes to rich, meaningful interactions with virtual environments. "I think knowledge is very important," he told me. "That, when you're interacting with something virtual, you know it's virtual; that, when you interact with something digital, you know it's digital. It wouldn't surprise me if this becomes part of the ethical regulation of virtual worlds. It's not to say that these virtual worlds are not real. You just want to know which reality you're in."
The knowledge that you bring to a simulated experience is, in other words, a vital component of that experience – something that applies equally to any "real" situation. In each case, to be under-informed or misinformed is to be vulnerable to various kinds of manipulation, while to possess meaningful options, agency and expertise is to be empowered.
This brings us to perhaps the most significant and sobering lesson of all: that, when it comes to consciousness, humans are at once brilliant and profoundly vulnerable. Countless artefacts, systems and environmental nudges are constantly altering and extending our minds. We do not and cannot access even "base" reality directly via our senses. And this means that any and every moment we experience is at once more open and more unknowable than intuition easily lets us believe.
What, I asked Chalmers, are some of the things that have most surprised or excited him within our growing understanding of consciousness? One example that comes to mind, he told me, is research into what's known as "change blindness". Change blindness describes the ways in which people can effectively be "blind" to even substantial changes in what's in front of them, unless they are specifically looking out for such changes.
In one 1998 experiment, for example, experimenters initiated a conversation with a pedestrian and then, halfway through the conversation, surreptitiously replaced the first experimenter with a different person who carried on the conversation. Only half of the pedestrians even noticed the change – a remarkable finding reinforced by a growing body of research suggesting that people may be conscious of, as Chalmers puts it, "much less than we thought". Our everyday awareness of the world seems detailed, smooth and constantly updated; in fact, this seamlessness is little more than a useful illusion. "We thought we were conscious of everything, all the details of a picture; but it turns out that maybe we're just conscious of seven blobs that we attend to. Whenever we attend to it, it's there. But [at least so far as consciousness is concerned] it's not always there."
Our minds and perceptions, in other words, are fundamentally non-literal in their readings of reality – while perception itself is a kind of evolved illusion, useful and accurate enough to safeguard our survival, but nothing like as comprehensive as it seems. Virtual worlds and technological mediation are, in this sense, already a kind of second nature so far as humanity is concerned: environments and encounters neither more nor less inherently meaningful than anything else we experience. In the end, information itself is the reality that matters.
What is Chalmers' own take when it comes to the status of his reality? Would he like to live in a simulation – or to know if he were already living in one? "I haven't quite made up my mind," he says. "On the one hand, there's something very cool about the idea of being in the base reality. There are all these simulations, but getting to be in base reality is a very interesting and special place to be. On the other hand, if we are in a simulation, then the Universe is much bigger and grander than we had thought."
It's a line of thought that feels, somehow, autobiographical – a version of the restless curiosity that took him halfway across the world, and that Reality+ maps across a succession of philosophical vignettes, provocations and parables. As he put it at the end of our conversation: "I grew up in Australia, and I discovered that at some point, oh my God, there's a whole world out there that I get to explore beyond this. I think that knowing there's a world outside our own Universe, perhaps even one that we could in principle explore, would open up horizons and possibilities that are exciting and interesting." And his ultimate verdict on that slippery word "reality" – and why it needs to be followed by "plus" to encompass everything he's trying to say? "I guess I would like to say… that reality is capacious. Space is big. But reality is bigger."