Cracking the code to consciousness

Neurotech@Berkeley
Aug 7, 2024


Some of the most fascinating questions in the world are the ones that seem impossible to answer: is there other sentient life in our universe? What makes us conscious human beings? The latter question has gnawed at humans for centuries; after all, what is more human than analyzing ourselves? Different schools of thought have offered theories and explanations. In the 1600s, the philosopher René Descartes argued that one's own conscious existence is the one thing a person cannot doubt, because the very act of doubting presupposes a thinker: cogito ergo sum, or "I think, therefore I am." Religion attributes consciousness to a divine spark. Science looks to neurotransmitters — chemicals in the brain that allow communication between neurons — and neuronal networks to explain the basis of behavior that signifies consciousness. However, despite extensive findings linking brain activity to observable human behavior, science has not yet been able to explain conscious experience itself. For example, if we mapped all human brain functions onto a computer, would it gain sentience? Probably not, so what are we missing?

To start tackling this problem, we must first define consciousness. To be conscious is to be aware of one's existence and to recognize the qualitative experience of something, also known as "qualia." In his landmark 1974 paper "What Is It Like to Be a Bat?," philosopher Thomas Nagel summarizes this concept as "there is something that it is like to be in a mental state"; there is a qualitative feeling to experience. A conscious state is classified by the "type" of consciousness, or state of awareness, one is experiencing, ranging from sleep to alertness. As a result, most consciousness studies compare different states of consciousness and the neural underpinnings of the observable differences between them, such as the brain during sleep, under various substances, or while alert. These studies do not uncover the root of conscious experience itself, which is considered the "hard problem" of consciousness.

Neuroscience has nonetheless attempted to probe the nature of consciousness itself, gargantuan as the task of explaining its emergence may be. In 1983, neuroscientist Benjamin Libet ran an experiment to test human free will and consciousness. Subjects were instructed to flex their wrist whenever they wanted to and to mark the moment they first became aware of their intention to act. However, the readiness potential, a gradual buildup of neural activity in the motor cortex, preceded both the movement and the subject's reported awareness of their intention. In other words, the brain began preparing the wrist flexion before the subject was even aware of deciding to act. This result seems to imply that we do not consciously initiate our actions, and that the feeling of control is an illusion. So, when are we actually conscious?

This experiment unfortunately suffers from several design and interpretation problems: the recorded readiness potential may not actually reflect an intention to act, the task does not resemble real-world decision-making, and its claim to measure free will is dubious. An important aspect of the search for consciousness is ensuring that we are all measuring the same thing: a tough feat, as there is no universally agreed-upon definition of consciousness. Still, the questions Libet posed remain crucial to future experimentation and to our search for answers.

Neuroscientists Roger Sperry and Michael Gazzaniga conducted a series of split-brain experiments in the 1960s, observing patients whose corpus callosum — the bundle of nerve fibers connecting the brain's two hemispheres — had been severed as an epilepsy treatment. Because the hemispheres could no longer communicate, a patient would claim not to have seen an object presented in their left visual field: the left hemisphere, responsible for language production, could not receive input from the right hemisphere, which had processed the object. As a result, the patient could not articulate that the object had been perceived, even though part of the brain clearly had perceived it. This puzzling behavior raises the question: are there now multiple distinct conscious minds in one brain? And if so, does a brain with an intact corpus callosum house multiple loci of consciousness that normally integrate seamlessly? The modern consensus is that more evidence is needed to answer these questions; still, some theories of consciousness rest on the assumption that consciousness is split in a split-brain.

All of these experiments lead to the elusive mind-body problem: defining the relationship between the mind and the body. The body exists in the physical world and is thus subject to the laws of physics. Older schools of thought held that the mind was a distinct, unrelated entity, but the current consensus is that the mind is a product of the brain and is therefore also grounded in physical law. Cognitive and behavioral neuroscience tries to pin down the precise relationship between mind and brain by finding the neurobiological basis of observable mental phenomena, but we still do not know how conscious experience emerges. We can correlate brain states to mental states, yet we cannot correlate brain states to our qualitative experience of those mental states.

Neural theories of consciousness have emerged to explain how conscious experience is encoded in the brain, a goal distinct from comparing different states of consciousness. Each theory seeks to identify a neural correlate of consciousness: the minimal set of neurobiological mechanisms whose activation is sufficient to generate conscious experience (though some argue there is no dedicated neural correlate, and that consciousness is instead bound to visual processing mechanisms themselves).

One such theory is the memory theory of consciousness, which suggests that the qualitative experience of consciousness we feel may be a memory processed extremely quickly. The theory also proposes that, instead of one unified neural correlate spanning the brain, different cortical areas contribute different aspects of consciousness and thus have different correlates. This would neatly account for the phenomenon in which events following a stimulus alter our conscious perception of it.

Alternatively, higher-order theories of consciousness suggest that an experience becomes conscious only after a "higher" area has created a brain state corresponding to a thought about the experience. In visual processing, for example, stimuli first reach the primary visual cortex, whose neurons respond to basic, simple features. As areas of increasing complexity process the stimuli, these basic features are combined into more complex ones, until the highest-level areas can represent an entire object, like a face. Similarly, higher-order theory claims that for conscious perception to arise, a higher-order network in the brain must process stimuli up this hierarchy and maintain a brain state corresponding to thinking about the experience. For a color to be consciously perceived, for instance, the visual system must represent the color, and there must also be a representation of the thought of perceiving the color — quite layered!

The global workspace theory posits that processed stimuli register as conscious only once they are broadcast to multiple regions that each process the information separately; the results are then synthesized into a single stream of consciousness.
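The broadcast idea can be caricatured in a few lines of code. This is purely a toy sketch, not any researcher's actual model: the module names and the salience threshold below are invented for illustration.

```python
# Toy sketch of the global-workspace "broadcast" idea.
# The threshold and module names are invented for illustration only.
ATTENTION_THRESHOLD = 0.5

def process(stimulus, salience, modules):
    """Return the modules that 'consciously' register the stimulus.

    A stimulus is processed locally first; only a sufficiently salient
    stimulus is broadcast to every module, entering the shared workspace.
    """
    if salience < ATTENTION_THRESHOLD:
        return []  # processed unconsciously; never broadcast
    return [f"{m} received '{stimulus}'" for m in modules]

modules = ["vision", "language", "memory", "motor-planning"]
print(process("faint flicker", 0.2, modules))  # [] -> stays unconscious
print(process("loud alarm", 0.9, modules))
```

In this caricature, the "stream of consciousness" is simply whatever crosses the threshold and gets copied to all modules at once; the real theory, of course, concerns competing neural coalitions rather than Python lists.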

However, both higher-order theories and the global workspace theory are computational models: they explain how consciousness functions, not what gives rise to the feeling of conscious experience. Understanding how physical states enable subjective experience may require more biological models, but scientists do not yet know how to close this explanatory gap.

To understand the precise relationship between mind and brain, we need to understand what mechanisms enable the qualitative experience of consciousness (the "hard problem" of consciousness), and identifying the neural correlates of consciousness may be the key to closing this knowledge gap. We cannot stop asking these tough questions, even when they feel too broad to tackle all at once.

Bibliography

Aru, J., Suzuki, M., Rutiku, R., Larkum, M. E., Bachmann, T. (2019). Coupling the State and Contents of Consciousness. Frontiers in Systems Neuroscience, 13. https://doi.org/10.3389/fnsys.2019.00043.

Block, N. (2009). Comparing the major theories of consciousness. In Michael Gazzaniga (ed.), The Cognitive Neurosciences IV. (pp. 1111–1123). MIT Press.

Chalmers, D. J. (2010). What Is a Neural Correlate of Consciousness? The Character of Consciousness (pp. 59–90). Oxford Academic. https://doi.org/10.1093/acprof:oso/9780195311105.003.0003.

de Haan, E.H.F., Corballis, P.M., Hillyard, S.A. et al. (2020). Split-Brain: What We Know Now and Why This is Important for Understanding Consciousness. Neuropsychology Review, 30, 224–233. https://doi.org/10.1007/s11065-020-09439-3.

Dehaene, S., Lau, H., Kouider, S. (2017). What is consciousness, and could machines have it? Science, 358(6362), 486–492. https://doi.org/10.1126/science.aan8871.

Friedman, G., Turk, K. W., Budson, A. E. (2023). The Current of Consciousness: Neural Correlates and Clinical Aspects. Current Neurology and Neuroscience Reports, 23, 345–352. https://doi.org/10.1007/s11910-023-01276-0.

Libet, B., Gleason, C. A., Wright, E. W., Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential): the unconscious initiation of a freely voluntary act. Brain, 106(3), 623–642. https://doi.org/10.1093/brain/106.3.623.

Nagel, T. (1971). Brain Bisection and the Unity of Consciousness. Synthese, 22(3/4), 396–413. https://www.jstor.org/stable/20114764.

Nagel, T. (1974). What Is It Like to Be a Bat? The Philosophical Review, 83(4), 435–450.

Northoff, G., Lamme, V. (2020). Neural signs and mechanisms of consciousness: Is there a potential convergence of theories of consciousness in sight? Neuroscience and Biobehavioral Reviews, 118, 568–587. https://doi.org/10.1016/j.neubiorev.2020.07.019.

Seth, A. K., Bayne, T. (2022). Theories of consciousness. Nature Reviews Neuroscience, 23, 439–452. https://doi.org/10.1038/s41583-022-00587-4.

Valencia, A. L., Froese, T. (2020). What binds us? Inter-brain neural synchronization and its implications for theories of human consciousness. Neuroscience of Consciousness, 2020(1), https://doi.org/10.1093/nc/niaa010.

This article was written by Arhana Aatresh, an undergraduate student at UC Berkeley majoring in Neuroscience and minoring in Science and Technology Studies. This article was edited by Aileen Xia and Jade Harrell.
