The following is excerpted from Beyond Zero and One by Andrew Smart, published by OR Books.
To address some of the reservations we might have about attributing consciousness to robots, I would like to ask a question similar to Rothblatt’s about thinking computers, one that may at first sound far-fetched: are we heading toward a point at which computers will have psychedelic experiences just as humans do? And does the ability to have psychedelic experiences tell us something important about brains? What if, to the self-awareness, morality, autonomy, and sentience that cyberconsciousness should have, we added the ability to have psychedelic experiences? Would a cyberconscious mindclone pass Ken Kesey’s acid test?
The very term psychedelic means “mind manifesting.” Many people now use the word entheogen to refer to drugs like LSD. Entheogen is derived from the Greek word entheos and means “god within.” The German toxicologist Louis Lewin used the term phantastica. Native Mexicans call magic mushrooms “Flesh of the Gods.” These terms suggest that LSD and other similar drugs connect us to a divine or spiritual nature that is within us. While these terms are not used in the scientific literature on LSD, they capture how LSD makes people feel. The scientific term most often used is hallucinogen, which is really a misnomer because at typical doses LSD does not in fact cause hallucinations in the traditional sense of the word.
If Rothblatt and Kurzweil are correct and an artificial mind can be achieved in a computer, would we be able to give this artificial mind LSD? If computationalism (which you will recall is the theory that the brain is a computer) is true, then the phenomenal consciousness that a reasonably accurate simulation of the brain produces in a robot is the same as human consciousness. This mind should also be perturbable by drugs like LSD, because, by extension, being perturbable by drugs like LSD is a property of human phenomenal consciousness. In other words, the psychedelic state in a biological brain is also made of computations. We know that human phenomenal consciousness on psychedelics is different because LSD dramatically alters our own conscious experience, and we know that LSD targets and changes the behavior of certain brain structures, such as serotonin receptors. These receptors are therefore likely also involved somehow in generating so-called normal experience.
I therefore propose raising the bar on the Turing Test and its governing body, the small group of human-consciousness experts. If the artificial mind is real, in the sense that our own consciousness is real, then it should be as affected by psychedelic drugs as humans are. Call this the Turing-Acid Test for machine consciousness.
A few important caveats to this proposal:
- We do not understand human consciousness.
- We do not understand how to create machine consciousness.
- We do not fully understand how or why LSD perturbs human consciousness. (See caveat 1.)
- We do know that LSD profoundly alters conscious experience.
LSD brings the subjective/objective divide into sharp relief. In other words, when we say that LSD alters consciousness, or that the drug is mind-altering, we mean that LSD alters first-person phenomenal consciousness. LSD and hallucinogens cut through a lot of philosophical handwringing about how a physical system like the brain can produce subjective qualitative phenomenal content. Manzotti again: “We swallow molecules of various kinds and, as a result, we feel the flavor of a delightful Brunello di Montalcino.” But rather than the molecules in a fine Italian wine somehow creating a nice taste experience, the LSD molecule fundamentally transforms your entire subjective qualitative phenomenal content. All of the things that you take for granted in your daily life for navigating the world can disappear: your own sense of self, your sense of time, and your sense of space.
We do not have any objective way to verify that a person’s experience is altered by LSD. We can only rely on reports by people who are tripping or have taken the drug. In fact, Albert Hofmann’s colleagues were skeptical upon hearing his fantastic report about the first acid trip, so he gave them some LSD and told them to try it. They were soon convinced.
People on LSD or other psychedelics typically see movement in things that aren’t actually moving. They report that things look strange. They feel unusual bodily sensations. Edges appear warped. They see geometric patterns, have a distorted sense of time and an extremely vivid imagination, feel like they are in a dream, and may feel that they merge with their surroundings. I experienced each of these things when I took LSD, as did Albert Hofmann, as did Steve Jobs and Brian Wilson.
The reason a conscious machine should be able to trip on acid follows from the premises of computationalism. According to computationalism, all mental phenomena (and potentially all physical phenomena), along with all simulations of these phenomena, are computations. As McDermott points out, a simulation of a computation is a computation. If consciousness is an algorithm that our brains are running, then a simulation of that algorithm is equivalent to the original running in our brains.
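To make McDermott’s point concrete, here is a toy sketch of my own (the miniature instruction set and the choice of Fibonacci are purely illustrative, not anything from the literature): an interpreter that merely simulates an algorithm, step by step, produces exactly the same input-output behavior as running that algorithm natively.

```python
def fib_native(n):
    """Run the computation directly."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The same algorithm, expressed as data and "simulated" by an interpreter.
FIB_PROGRAM = [
    ("init", ("a", 0)),
    ("init", ("b", 1)),
    ("repeat_n", ("a", "b")),  # n times: a, b = b, a + b
    ("return", "a"),
]

def simulate(program, n):
    """Step through a symbolic description of the algorithm."""
    env = {}
    for opcode, arg in program:
        if opcode == "init":
            name, value = arg
            env[name] = value
        elif opcode == "repeat_n":
            x, y = arg
            for _ in range(n):
                env[x], env[y] = env[y], env[x] + env[y]
        elif opcode == "return":
            return env[arg]

# Extensionally indistinguishable: the simulation *is* the computation.
assert all(fib_native(n) == simulate(FIB_PROGRAM, n) for n in range(25))
```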
Again, this means that a human brain tripping on LSD is also a computation. However, we would never expect a computer simulation of a hurricane to actually breach a levee. This is because the simulation of the hurricane exists entirely inside the computer and does not affect the external world. But crucially for my robots-on-acid argument, the simulated hurricane can breach a simulated levee inside the computer program. Furthermore, as I have argued, normal waking consciousness and hallucinations are both real perceptions. Hallucinations are usually not composed of anything particularly useful.
The unique thing about psychedelic drugs is that all of their dramatic effects upon consciousness happen inside the brain. One of the reasons we can be quite certain that consciousness arises or emerges from the brain is precisely because we know that ingesting a molecule like LSD produces such dramatic altered states of consciousness. Psychedelic drugs have been known to humans for thousands of years for their ability to perturb consciousness in a powerful way.
Put aside issues of inputs and outputs, such as how we would deliver LSD to a computer and what digital LSD would actually be made of. If the computer were intelligent in the sense that it was able to pass the Turing Test, claimed in natural human language that it was experiencing pain or joy, and was generally accepted by human society as having conscious experience, then a further way to probe this machine’s claims that it was conscious would be to give it the digital equivalent of LSD. The purpose is the same as when we give psychedelics to humans in clinical or scientific contexts: to perturb the system and observe the consequences. If the artificial intelligence were based on some kind of biology-inspired system, it might even be possible to introduce the LSD molecule directly.
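To give a flavor of what perturb-and-observe could mean computationally, here is a deliberately crude sketch. Nothing in it is digital LSD; the hypothetical dose is just Gaussian noise injected into the weights of a toy network, and the consequence we observe is how far the system’s behavior drifts from baseline.

```python
import math
import random

def tiny_network(weights, x):
    """A toy one-input, one-output network: y = w2 * tanh(w1 * x + b)."""
    w1, b, w2 = weights
    return w2 * math.tanh(w1 * x + b)

def perturb(weights, strength, rng):
    """Stand-in for a pharmacological perturbation: jitter every weight."""
    return [w + rng.gauss(0.0, strength) for w in weights]

def behavioral_divergence(baseline, dosed, probes):
    """Mean absolute difference in responses across a set of probe inputs."""
    return sum(abs(tiny_network(baseline, x) - tiny_network(dosed, x))
               for x in probes) / len(probes)

rng = random.Random(42)
baseline = [0.8, -0.1, 1.5]
probes = [i / 10 for i in range(-10, 11)]

# Larger "doses" produce larger departures from baseline behavior.
for strength in (0.0, 0.1, 0.5, 1.0):
    dosed = perturb(baseline, strength, rng)
    print(f"dose strength {strength:.1f} -> "
          f"divergence {behavioral_divergence(baseline, dosed, probes):.4f}")
```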
As with the simulation of a hurricane, we need to worry about the computer’s physical form. Even though the conscious machine would have a model of itself that need not be based on a robot body, it would have to be connected to the physical environment in a way that allowed it to experience and learn from that environment. To achieve human-level intelligence, it is almost certain that the machine would need some kind of embodiment, so that it would know what it is like to move around in the world.
At a very minimum, in order to test whether the machine is conscious, it should possess mechanisms similar to those that allow human brains to trip on acid. As we reviewed in the last chapter, recent brain imaging and molecular studies in animals are starting to unravel the precise neural mechanisms that LSD affects. These studies indicate that psychedelic molecules decrease activity and connectivity in the brain’s key connector hubs, enabling a state of unconstrained cognition. The connection between the prefrontal cortex and the posterior cingulate cortex is disrupted on psychedelics. In other words, and perhaps counterintuitively, hallucinogens partially disconnect certain brain regions from one another, allowing experiences to be generated freely from unusual combinations of alternative brain regions.
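As a toy illustration of this hub-disconnection idea (with invented regions and invented connectivity values, not real imaging data), we can model brain regions as nodes in a weighted graph and watch what happens to each region’s total connection strength when a simulated dose damps the edges touching the hubs:

```python
# PFC and PCC play the "connector hubs" in this made-up model.
regions = ["PFC", "PCC", "visual", "auditory", "motor"]

# Symmetric, correlation-like connectivity weights; all values invented.
normal = {
    ("PFC", "PCC"): 0.9, ("PFC", "visual"): 0.6, ("PFC", "auditory"): 0.6,
    ("PFC", "motor"): 0.6, ("PCC", "visual"): 0.5, ("PCC", "auditory"): 0.5,
    ("PCC", "motor"): 0.5, ("visual", "auditory"): 0.2,
    ("visual", "motor"): 0.2, ("auditory", "motor"): 0.2,
}

def strength(conn, node):
    """Total connection strength of one region (its 'hubness')."""
    return sum(w for pair, w in conn.items() if node in pair)

# Simulated psychedelic: damp every edge touching a hub by 60%.
hubs = {"PFC", "PCC"}
dosed = {pair: (w * 0.4 if hubs & set(pair) else w)
         for pair, w in normal.items()}

for node in regions:
    print(f"{node:8s} normal={strength(normal, node):.2f}  "
          f"dosed={strength(dosed, node):.2f}")
```

The hubs lose most of their strength while the peripheral regions keep their local connections, which is one crude way to picture how unusual region-to-region combinations could come to dominate experience.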
I have argued that normal everyday consciousness is a type of useful hallucination, but one that is constrained by sensory inputs. This allows us to navigate the world and come to a consensus about what is real with other minds that have similar sensory-constrained experiences. However, as we see in hallucinations and in schizophrenia, when the sensory constraints are removed the brain creates perceptions that are experienced by only one person.
Psychedelic drugs directly impact several key attributes of human minds that have, I believe, huge implications for building an artificial mind. Among these are cognitive unity, temporal integration, and emotional experience.
Cognitive unity is the idea that consciousness, from a first-person perspective, relies on the fact that we seem to have a single unified experience—our own unified point of view of our perception of the world from inside our heads. And we perceive our unified cognitive experience as separate from the external world. We even know more or less what brain regions are most involved in generating our unified sense of self as separate from the world.
What this means is that, from our own perspective, there is no way to disintegrate conscious cognition into its components; we are unable to break our experience down into independent parts.
LSD profoundly alters cognitive unity. Many people feel that the separation between the self and world dissolves when on LSD, and they begin to feel at one with everything. Conscious experience as a unified whole also breaks down on LSD, especially during the acute phase at high doses, so that perceptions that originate from inside are difficult to disentangle from those originating from outside. Experience itself becomes like movie frames slowed down so that each frame is perceivable. We know now that there are neurobiological reasons for this; hallucinogens have profound effects on global brain activity. Psilocybin, for example, decreases the connections between visual and sensorimotor networks, while it seems to increase the connectivity between the resting-state networks.
Temporal integration is related to one’s sense of the current moment. Conscious experience is somehow located in time. We feel like we occupy an omnipresent, widthless temporal point: the now. As Riccardo Manzotti says:
Every conscious process is instantiated by patterns of neural activity extended in time. This apparently innocuous hypothesis hides a possible problem. If neural activity spans in time (as it has to do since neural activity consists in trains of temporally distributed spikes), something that takes place in different instants of time has to belong to the same cognitive or conscious process. For instance, what glues together the first and the last spike of neural activity underpinning the perception of a face?
We know that neuronal oscillations at different frequencies act as this temporal glue. However, when you’re on LSD, this glue seems to dissolve. As Albert Hofmann and many others report, your normal sense of time vanishes on psychedelics. The famous bicycle trip on acid during which Hofmann reported that he felt he was not moving, and yet he arrived at home somehow, illustrates this distortion of the brain mechanisms that support our normal perception of the flow of time.
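The idea of oscillatory temporal glue can be made concrete with a toy calculation. The phase-locking value (PLV) is a standard measure of how consistently two signals hold their phase relationship; the signals below are invented, but they show how the measure behaves when the glue holds and when it dissolves:

```python
import cmath
import math
import random

def plv(phase_diffs):
    """Phase-locking value: magnitude of the mean unit vector over all
    observed phase differences (1 = perfectly locked, 0 = no relation)."""
    return abs(sum(cmath.exp(1j * d) for d in phase_diffs) / len(phase_diffs))

rng = random.Random(0)
n = 1000

# "Glued": region B lags region A by a fixed quarter cycle plus tiny jitter.
locked = [math.pi / 2 + rng.gauss(0, 0.1) for _ in range(n)]

# "Dissolved": the phase relationship wanders uniformly around the circle.
drifting = [rng.uniform(-math.pi, math.pi) for _ in range(n)]

print(f"locked PLV   = {plv(locked):.3f}")    # close to 1.0
print(f"drifting PLV = {plv(drifting):.3f}")  # close to 0.0
```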
As my own LSD experience illustrated, the relationship between conscious experience and our emotions is a very tight one, and it is often overlooked in AI research, perhaps because of the intrinsically illogical nature of emotions. We have seen that the serotonin system in the brain is a key mechanism in creating our emotions. For a computer to pass the Turing-Acid Test, it would have to be constructed with a sufficiently accurate simulation of our own emotion system. It is difficult to see how this might be done with current technology; the whole-brain-emulation approach might come closest to offering a theoretically possible avenue.
Finally, what if the psychedelic state could serve as a kind of biomarker for consciousness in artificial minds? A biomarker in medicine is an objectively measurable sign of a disease or condition: a molecule, gene, or characteristic that can be detected in a test. A condition like schizophrenia, for example, has no clinically validated biomarker, that is, no test that can objectively and quantitatively detect the presence of the disease. There is nothing (that we know of) in the schizophrenic’s blood, genes, or brain scans that definitively indicates the presence of the disorder. Rather, schizophrenia is diagnosed behaviorally by psychiatrists, based on observed symptoms and patient-reported experiences, so there is some variability in how people get diagnosed. With other diseases, such as Type 1 diabetes, by contrast, there are several objective, quantifiable blood tests that can determine whether you have the disease. A large body of ongoing research is trying to develop biomarkers for psychiatric disorders.
Some have suggested that there are biomarkers for human consciousness. Connectivity measures in EEG data, for example, can distinguish different levels of consciousness. Connectivity refers to how much information is being exchanged among different brain regions; when connectivity between brain regions is very low, consciousness is typically severely impaired. It is often diagnostically crucial to determine whether a patient who seems unconscious is in a coma, in a minimally conscious state, or in a vegetative state. The outward symptoms are almost identical: the patient is unconscious and unresponsive. However, the chances of recovery are much better in a minimally conscious state than in a vegetative state, so it is critical to determine which disorder of consciousness the patient has.
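As a hedged sketch of what a connectivity-style measure looks like in practice, the following computes the mean pairwise correlation across a few synthetic channels. Real consciousness biomarkers use far more sophisticated measures, and these recordings are fabricated for illustration only:

```python
import math
import random

def correlation(x, y):
    """Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mean_connectivity(channels):
    """Average correlation over every pair of channels."""
    pairs = [(i, j) for i in range(len(channels))
             for j in range(i + 1, len(channels))]
    return sum(correlation(channels[i], channels[j])
               for i, j in pairs) / len(pairs)

rng = random.Random(1)
t = [k / 100 for k in range(500)]

# "Conscious-like": channels share a common driving rhythm plus noise.
common = [math.sin(2 * math.pi * 10 * s) for s in t]
awake = [[c + rng.gauss(0, 0.5) for c in common] for _ in range(4)]

# "Unconscious-like": channels are independent noise, nothing shared.
flat = [[rng.gauss(0, 1.0) for _ in t] for _ in range(4)]

print(f"awake-like connectivity:       {mean_connectivity(awake):.2f}")
print(f"unconscious-like connectivity: {mean_connectivity(flat):.2f}")
```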
Once neuroscience works out precisely how psychedelics alter human or animal brain activity, could a test be developed for an artificial system claiming to have a mind, in which the equivalent of a psychedelic compound is administered and the artificial mind’s response is measured using the equivalent of a brain scan? Would the artificial consciousness be perturbed in a similar way?
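By way of a closing sketch, here is what the skeleton of such a test harness might look like. Everything in it is hypothetical: `dose` and `scan` stand in for the unknown digital-LSD delivery and brain-scan equivalents, and the human reference signature is an invented placeholder:

```python
def turing_acid_test(machine, dose, scan, human_signature, tolerance=0.2):
    """Dose the candidate mind, then compare how its scanned features
    shift against the human psychedelic signature, feature by feature."""
    baseline = scan(machine)
    dose(machine)
    dosed = scan(machine)
    response = {k: dosed[k] - baseline[k] for k in baseline}
    return all(abs(response[k] - human_signature[k]) <= tolerance
               for k in human_signature)

# Toy stand-ins: the "machine" is just a dict of measurable features,
# the "dose" damps them, and the human reference says roughly how much
# each feature should move under a psychedelic. All numbers are invented.
human_signature = {"hub_connectivity": -0.5, "cognitive_unity": -0.6,
                   "temporal_binding": -0.4}
machine = {"hub_connectivity": 0.9, "cognitive_unity": 0.8,
           "temporal_binding": 0.7}

def dose(m):
    m["hub_connectivity"] -= 0.50
    m["cognitive_unity"] -= 0.55
    m["temporal_binding"] -= 0.45

def scan(m):
    return dict(m)  # a "brain scan": read out the current feature values

print(turing_acid_test(machine, dose, scan, human_signature))  # True
```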