Abstract
This review article traces the historical development of understanding the brain and consciousness. It begins with ancient civilizations, highlighting the Egyptians’ first conceptions of the brain’s role and its connection to the heart, and progresses to Ancient Greece and the writings of Hippocrates. The paper then explores significant advancements in the study of consciousness throughout the Common Era, including contributions from Descartes, Locke, and James, and pivotal cases such as that of Phineas Gage, which transformed scientists’ understanding of the brain. Next, it focuses on Antonio Damasio’s critique of Cartesian dualism and his emphasis on the interplay between emotion and rational decision-making. The article concludes by addressing the philosophical challenges of neuroscience. It presents leading contemporary theories of consciousness, including the Somatic Marker Hypothesis, Global Workspace Theory, Integrated Information Theory, and Predictive Coding. While no one of these theories fully explains the neurobiology of consciousness, future technological advancements and research may eventually produce a universal understanding.
Introduction
The brain is one of the most complex functioning organs and, together with the spinal cord, makes up the central nervous system, all connected through a network of nerve cells, primarily neurons. Neurons are cells that fire electrochemical signals to support a person’s day-to-day functions. Behaviors like memory, problem-solving, speaking, learning, and sleep are typical processes the brain supports. Among these complex functions, consciousness, our subjective awareness, remains perhaps the most profound enigma.
Consciousness is a comprehensive idea that requires reflective study, and after many years of research and debate, it remains poorly understood even to this day. What is consciousness? While difficult to define, generally speaking, consciousness refers to the awareness of one’s sense of self and the world around them. The word “consciousness” derives from the Latin “conscius,” which means “knowing” or “aware”1. Because each person’s consciousness is private and particular, the concept is challenging to examine experimentally. The purpose of this article is to provide a scoping review detailing the evolution of humanity’s understanding and scientific investigation of consciousness. From there, the discussion will turn to four leading theories of consciousness in the 21st century.
Early Theories (Ancient Egypt, Greece)
During the Third Dynasty of Egypt’s Old Kingdom, from 2686 to 2613 BCE, the first documented report about the human brain was made on papyrus, an early form of paper. One recorded discovery was that certain brain and spinal cord injuries led to specific changes in behavior2. During this time, many ancient Egyptians connected the soul to the heart rather than the brain. They also concluded that the organs were controlled not by the brain but by the heart, which communicated with all the vital organs through a system of channels. This system of channels was called the metu3.
Of course, the heart does not control the organs and nerves; the brain does. These Egyptians proclaimed that the heart was where all feelings and thoughts came from. They also regarded the brain as less important than the other organs in death. For the mummification process, they would harvest only what was necessary. “Even the lungs, liver, intestines, and stomach received better treatment than the lowly brain.”4
The idea of consciousness had yet to be clearly described, or even conceptualized, during this time. No one understood why humans had a ‘sense of self.’ So, in this era, the concept of consciousness remained largely unknown and scientifically unexplored.
The golden age of Ancient Greece, during the 5th to 4th century BCE, became another fostering ground for more in-depth research about the brain. Many well-known figures, such as Plato, Isocrates, and Aristotle, emerged during this period and contributed significantly to philosophy and science. However, one man pursued questions about the brain that many others missed, and this man was Hippocrates.
Hippocrates was a Greek physician who studied the human body. Through his discoveries and observations, he became convinced that the brain was the source of the knowledge and morals that humans hold on to as truths.5 Few of his contemporaries understood how this could be. As Hippocrates wrote:
“Men ought to know that from nothing else but the brain comes joy, delights, laughter and sports, sorrows, griefs, despondency, and lamentations. And by this, in an especial manner, we acquire wisdom and knowledge, and see and hear and know what are foul and what are fair, what are bad and what are good…In these ways, I am of the opinion that the brain exercises the greatest power in the man”.4
This idea is remarkably accurate, given what we know today about how the brain functions. However, the open question of whether emotions are embodied in consciousness limited the information available to answer the question of consciousness. While insightful, Hippocrates’ focus on the brain as the seat of mind and emotion did not yet directly address the nature of subjective awareness itself, leaving the core question of consciousness largely unexplored. The discussion regarding the connection between the mind and soul is deeply rooted in the philosophy established by Hippocrates, a pioneering figure in ancient medicine. His belief in the close relationship between the body and mind represented a significant departure from the dominant philosophical and medical perspectives of his era, which often viewed the body as something separate and distinct from the mind. Hippocrates argued that physical health was intimately tied to mental well-being, a concept that laid the groundwork for holistic approaches to medicine. This perspective was also shaped by the prevailing religious beliefs of the time, which emphasized the role of conscience and morality, often attributing human experiences to the influence of deities who supposedly controlled these aspects of existence.6
In sharp contrast to Hippocrates’ views, the philosopher Plato strongly opposed the idea of an intertwined mind-body connection. He articulated a clear distinction between the mind and the soul, asserting that they are separate entities with different functions. According to Plato, the mind, or intellect, is closely associated with the physical body, acting as the driving force behind actions such as speaking and behaving. He believed that these actions were influenced by sensory experiences and rational thought. On the other hand, Plato posited that the soul is the profound source of human emotions, desires, and biases, serving as the essence of one’s character and moral compass. This dualism in Plato’s philosophy raises crucial questions about the nature of human existence and the interplay between rational thought and emotional experience, continuing to provoke thought and discussion in both philosophical and psychological realms today. This sharp distinction laid philosophical groundwork that would profoundly influence later thinkers, notably Descartes, in their considerations of mind and body.
However, this information was crucial for the modern understanding of consciousness. Philosophers and scientists at this time, lacking much of the necessary technology, were forced to rely on subjective, personal experience to draw conclusions about the brain. Without proper laboratory testing, nearly every claim was subjective, and this knowledge was unavailable to most people, making Hippocrates’ insight quite ground-breaking.
The Renaissance and Enlightenment Period (Descartes, Locke)
The exploration of consciousness eventually moved into the 17th century, when a French scientist, philosopher, and mathematician named René Descartes contributed to the study of consciousness. Descartes’ theory described the physiology of rational thought and decision-making, and it held that the mind’s mental and emotional capacities were separate from the body’s mechanical reactions.
Descartes argued for a form of dualism where the rational mind was distinct from the body and its passions (emotions). While he analyzed the mechanisms of emotions, he emphasized reason as the path to clear judgment, viewing strong passions as potential sources of bias that could cloud rational thought.7
René Descartes’ influential concept of dualism, which posits a distinct separation between the mind and body, played a crucial role in furthering the longstanding philosophical debate initiated by Hippocrates and Plato. This dualistic framework required a careful differentiation between rational thought and emotional experience, a distinction that was met with significant uncertainty and disagreement among scholars of the time. Even Descartes himself expressed skepticism about this separation, acknowledging that many viewed thoughts and emotions as either intricately intertwined or positioned at opposite ends of a continuum. This ongoing philosophical discourse highlights the profound complexities involved in understanding the human condition and the various interpretations of the intricate relationship between mind and body.8
Decades later, in 1690, John Locke, a British philosopher with a keen interest in medicine and the natural sciences, expanded the knowledge of the senses in the brain. He asked how humans reach certain conclusions before acting. For example: “How do we know that an object is cold or hot?” The simple answer was through experience.9
Experience is gained through trial and error, with future adjustments made as results are discovered. This became Locke’s tabula rasa theory, stipulating that humans start with no knowledge and build it through accumulated experience. So, through sensory experience comes proper knowledge. With this idea in place, it became necessary to explore the distinction between rational thought and sensory experience, ultimately igniting the debate between rationalism and empiricism.
Rationalism, as championed by Descartes, argues that individuals are born with innate knowledge—ideas and concepts that exist independently of sensory experience. He posited that as children grow, they build upon this foundational knowledge, refining their understanding through reflection and reasoning.
In contrast, empiricism, represented by philosophers like John Locke, contends that knowledge is primarily derived from sensory experience. According to this perspective, individuals start with a “tabula rasa,” or blank slate, where knowledge is accumulated through interaction with the world around them. Locke argued that all ideas and concepts begin as simple sensations that, when combined through experience, lead to more complex understandings of reality.
As the story of consciousness continued, Darwin’s theory of evolution swept through the Western world in the 1800s, and William James became one of the more profound contributors to theories of consciousness. Drawing on the concept of “survival of the fittest,” he stipulated that consciousness was a trait shaped by survival pressures and that the only way to identify consciousness is through one’s personal experience.10
In the 1800s, the case of Phineas Gage, a railroad construction foreman known for his nearly fatal accident, notably shook the world. His head was impaled by a large iron rod, destroying much of his brain’s left frontal lobe. Shockingly, he did not die. The case marked a pivot in society’s understanding: the brain controlled a person’s personality. Because of the significant alterations in Gage’s demeanor and personality after the impalement, many had new questions about what this meant for the study of the brain.11,12
During the late 19th and early 20th centuries, two main theories, the Reticular Theory and the Neuron Doctrine, tried to explain the structure of the nervous system and the brain. The Reticular Theory, proposed by Joseph von Gerlach in 1871, was a leading 19th-century attempt to describe the brain’s functions. It suggested that the nervous system, unlike most biological systems, was a single continuous interconnected web or net, without gaps, synapses, or individuated cells. However, as scientific understanding advanced, this theory was replaced by the more accurate Neuron Doctrine, which provided a better understanding of how the brain transmits information.13
The Neuron Doctrine, championed by Santiago Ramón y Cajal, contended that the nervous system comprises distinct individual cells known as neurons, which communicate through specialized junctions or synapses.14 Cajal’s use of Golgi’s silver staining technique to observe neurons provided compelling evidence supporting his theory, underscoring the crucial role of experimental methods in validating scientific theories. This emphasis on scientific rigor is a vital aspect of the Neuron Doctrine, which became the accepted model for how the nervous system functioned, leading to Cajal’s Nobel Prize in Physiology and Medicine in 1906.15,16
The Beginning of Technological Advancement
The Evolution of Neuroscience (Phineas Gage, EEG)
Prior to significant technological advancements, inquiry into the mind relied heavily on philosophical reasoning, introspection, and limited observation, lacking the empirical tools for systematic neuroscientific investigation. In 1924, building on Richard Caton’s earlier work, the German scientist Hans Berger created a way to measure electrical activity in the brain. This would become known as electroencephalography (EEG) and was appropriately deemed a “window to the brain”17.
EEG records electrical signals produced by the brain, allowing different brain waves to be graphed and plotted. Berger first investigated consciousness during sleep in various animals. Sleep is a uniquely useful period to study because ordinary waking consciousness is suspended, offering a window onto the brain in a simpler state. The sleep brain wave patterns were recorded, providing new information about the brain’s function during altered states of consciousness. This line of research, as one review puts it, is one
“[t]hat focuses on stimulus-related brain activity with a particular interest in how and when the brain processes specific types of information and generates decisions or actions in response to external stimuli in conjunction with mental operations.”18
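To make the idea of "graphing and plotting different brain waves" concrete, the following is a minimal illustrative sketch (not Berger's method, and using a synthetic signal rather than a real recording) of how modern EEG analysis quantifies the classic frequency bands from a raw trace:

```python
import numpy as np

# Illustrative sketch: estimating the power of classic EEG frequency
# bands from a raw signal. The "recording" here is synthetic: a 10 Hz
# alpha rhythm (typical of relaxed wakefulness) plus noise.
fs = 256                       # sampling rate in Hz (a common EEG rate)
t = np.arange(0, 10, 1 / fs)   # 10 seconds of signal
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)

# Power spectrum via the discrete Fourier transform
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

def band_power(lo, hi):
    """Total spectral power between lo and hi Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].sum()

bands = {
    "delta (0.5-4 Hz, deep sleep)": band_power(0.5, 4),
    "theta (4-8 Hz, drowsiness)": band_power(4, 8),
    "alpha (8-13 Hz, relaxed)": band_power(8, 13),
    "beta (13-30 Hz, alert)": band_power(13, 30),
}
dominant = max(bands, key=bands.get)
print(dominant)  # the alpha band dominates this synthetic signal
```

Comparing band powers in this way is, in spirit, what allowed Berger and later researchers to distinguish waking, drowsy, and sleeping states from the same electrical trace.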
This new way of seeing responses on charts was only the beginning; it was the closest human beings had come to understanding the brain. With the arrival of such technology, findings, discoveries, and theories about the brain and consciousness would multiply. The ancient Egyptians’ initial misconceptions were finally laid to rest: the brain controls the senses, emotions, behaviors, decisions, and the critical components of human consciousness. With the rise of technology in the 20th and 21st centuries, theories could be tested on the brain with more precise conclusions, opening the door for further fine-tuning and criticism of our current understanding.
Modern Theories of Consciousness
Antonio Damasio & Descartes’ Error
The first notable step in the 20th- and 21st-century study of consciousness was the rigorous re-evaluation of Descartes’ theory from centuries earlier. Descartes’ theory was well known, and many scientists began to analyze it in depth.
René Descartes’s argument for dualism emphasizes the distinction between the mind and the body as two fundamentally separate entities. He contends that the mind, which encompasses thoughts, beliefs, and emotions, operates independently of the physical body, which is governed by the laws of nature. This separation implies that emotional experiences do not directly influence physical states; rather, they exist as distinct phenomena. Consequently, this disconnection can lead to the notion that emotions may obstruct rational decision-making, as individuals might struggle to separate their feelings from logical thought processes when faced with important choices. Understanding this dualistic perspective sheds light on the complex interplay between emotional responses and rational decision-making in human behavior.
One scientist who analyzed and criticized Descartes’ theory was the Portuguese-born neuroscientist Antonio Damasio. Damasio gained significant insight into the brain and consciousness through various lesion studies conducted with his wife, Hanna. Damasio collected data and published his seminal work Descartes’ Error, which outlines the origins of consciousness in a digestible format for a broad audience. In the book, Damasio argues against the separation of the mind and body and advocates for the importance of emotions in rational decision-making. He states that emotions affect how humans react and arrive at conclusions: reason is heavily reliant on somatic feelings as well as emotions, and without such feelings, reasoning becomes unfruitful and obstructed. Emotions help humans produce good decisions. As Descartes’ Error puts it,
“Feelings do seem to depend on a dedicated multicomponent system indissociable from biological regulation. Reason does seem to depend on specific brain systems, some of which happen to process feelings. Thus there may be a connecting trail, in anatomical and functional terms, from reason to feelings to body. It is as if we are possessed by a passion for reason, a drive that originates in the brain core, permeates other levels of the nervous system, and emerges as either feelings or nonconscious biases to guide decision making.”19
This idea rests on two components: every human experiences feelings, and those feelings create mental biases that shape internal reasoning. This theory is one of the many backbones of the modern understanding of consciousness. Experiments can test it and support the claim that bias rooted in feeling and emotion guides how humans make decisions and choices. In Descartes’ Error, Damasio also outlines the Somatic Marker Hypothesis, one of the leading theories of consciousness. Alongside it, Global Workspace Theory, Integrated Information Theory, and Predictive Coding will be the remaining focus of this review.20
The Somatic Marker Hypothesis
The investigation of consciousness began to attract the interest of more scientists. In addition to Descartes’ Error, which led to a more fundamental understanding of consciousness, Damasio synthesized several experiments, namely lesion studies, to expand knowledge of the human decision-making process. Damasio’s resulting theory, developed in the early 1990s, became known as the Somatic Marker Hypothesis (SMH), the first of the leading theories of consciousness considered here.
The SMH addressed how humans have an emotional decision-making bias. People’s decision-making is assisted by what are known as somatic markers: bodily feelings tied to emotions, such as the rapid heartbeat created by anxiety or the visceral reaction of disgust. Many describe it as a ‘gut feeling.’ Damasio claimed that these somatic markers could even cause beneficial decisions to be made before a person consciously knows that the decision is advantageous. This countered the prevailing view, tracing back to Descartes, that emotions hurt decision-making.
One of the critical experiments was a simple card game that Damasio and his colleagues constructed to “mimic decision-making”21. This experiment was called the Iowa Gambling Task. As the game progressed, participants began to make advantageous choices before they consciously knew that those choices were good. Conversely, before participants made a disadvantageous choice, and before they were consciously aware of the risk, their skin conductance responses (SCRs) would rise. SCR refers to changes in the skin’s electrical and sweat gland activity in response to mental triggers.
This experiment also showed that lesions in the frontal regions of the brain caused some individuals to behave differently than others in a gambling context. During the experiment, the lesioned patients’ brains did not produce signals to stop the gambling behavior; these patients continued, hoping for a different outcome. Healthy participants’ brains, by contrast, continued to give off a signal to stop gambling in the face of repeated losses. The experiment suggested that when emotions are associated with decisions, they can shape how each person approaches a choice, and that if the wrong signals are generated, the brain will lack this emotional guidance. This raised many questions, some of which were answered by the subsequent leading theories of consciousness.
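The learning dynamic behind the Iowa Gambling Task can be caricatured in a few lines of code. The sketch below is a toy simulation under assumed payoffs (it is not Damasio's protocol or model): "bad" decks A and B pay large rewards but larger losses, "good" decks C and D pay small rewards with small losses, and a simple running value per deck stands in for the learned somatic marker:

```python
import random

# Toy Iowa Gambling Task (illustrative only; payoffs are assumptions).
# Decks A/B: +100 reward, 50% chance of a -250 penalty (net -25/draw).
# Decks C/D: +50 reward, 50% chance of a -50 penalty (net +25/draw).
random.seed(1)

def draw(deck):
    """Return the net payoff of one card from the given deck."""
    if deck in "AB":
        return 100 - (250 if random.random() < 0.5 else 0)
    return 50 - (50 if random.random() < 0.5 else 0)

marker = {d: 0.0 for d in "ABCD"}   # learned "gut feeling" per deck
alpha = 0.1                         # learning rate
choices = []
for trial in range(400):
    # mostly follow the best current marker, with occasional exploration
    if random.random() < 0.1:
        deck = random.choice("ABCD")
    else:
        deck = max(marker, key=marker.get)
    payoff = draw(deck)
    marker[deck] += alpha * (payoff - marker[deck])  # update the marker
    choices.append(deck)

late = choices[-100:]               # behavior after learning
good_rate = sum(c in "CD" for c in late) / len(late)
print(f"late-trial preference for good decks: {good_rate:.0%}")
```

The agent drifts toward the advantageous decks well before any explicit calculation of expected value, which loosely mirrors how participants' choices (and SCRs) shifted before conscious insight; frontal-lesion patients, in this caricature, would be agents whose markers never update.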
While SMH highlights the role of bodily feedback in decision-making, it is less explicit about the precise neural mechanisms generating the anticipatory signals, or ‘predictions’ about outcomes, that trigger these somatic states, a point addressed more directly by theories like Predictive Coding. As with the other theories discussed below, the location of this fundamental pattern-making process is unknown, making the theory harder to cement with clear evidence. This lack of clarity poses a significant problem for the understanding of consciousness, particularly its origin and the process of rational decision-making. On the one hand, the theory answers many questions about how rational decision-making happens; on the other, the lack of evidence about its origin raises the question of how much of it remains speculation.
Some proponents extend SMH into an evolutionary account, suggesting that consciousness evolved primarily as an adaptation for managing social interactions, emphasizing skills like theory of mind, communication, and cooperative behavior. This perspective aligns with the idea that complex social environments create strong selective pressures for advanced cognition, which could, in turn, lead to the emergence of consciousness. It is supported by findings that species with larger social groups tend to exhibit greater cognitive abilities, such as primates with advanced social hierarchies. This reading of SMH also provides an evolutionary explanation for self-awareness, arguing that understanding others’ mental states required a parallel ability to reflect on one’s own thoughts. The approach is particularly compelling because it connects neuroscientific, evolutionary, and behavioral perspectives, reinforcing the idea that consciousness serves an adaptive social function.
However, SMH faces significant challenges when attempting to account for non-social aspects of consciousness. While the hypothesis explains interpersonal cognition, it is less clear how it accounts for solitary conscious experiences, such as deep introspection, artistic creativity, or sensory perception in isolation. Consciousness is not only about navigating social environments but also about problem-solving, imagination, and sensory awareness, all of which can occur outside of social contexts. Furthermore, comparative research on highly intelligent but asocial species, such as octopuses, challenges the assumption that social complexity is the primary driver of consciousness. If intelligence and flexible cognition can evolve in solitary organisms, it raises the question of whether social interaction is necessary for consciousness or simply one potential contributing factor. Without stronger neurobiological evidence linking consciousness specifically to social cognition mechanisms, SMH remains a plausible but incomplete explanation.
Global Workspace Theory
Another leading modern theory of consciousness is the Global Workspace Theory (GWT), first proposed by Bernard Baars, a senior fellow in theoretical neurobiology, in the 1980s and elaborated with Stan Franklin in 2004. The name describes a mental “global workspace” that distributes information, or “activity,” across the brain through widespread, highly active neural assemblies. This, in turn, supports everyday conscious functions such as problem-solving and gives rise to qualia, the subjective human experiences produced in reaction to stimuli from the world.
The Global Workspace Theory (GWT) argues that consciousness emerges when information is globally broadcasted across multiple cognitive systems, allowing different neural processes to access and act upon it. This model aligns well with evidence from neuroscience, particularly studies using EEG and fMRI, which show that conscious perception correlates with widespread cortical activation. The strength of GWT lies in its ability to account for flexible cognition—by making information broadly available, the brain can integrate inputs from multiple sources, supporting complex decision-making, planning, and self-awareness. Moreover, GWT explains why some mental processes are unconscious, as it posits that only information that enters the global workspace is experienced consciously, while lower-level computations (such as early visual processing) or non-prioritized information remain unconscious as they operate within specialized, non-broadcast modules. This functionalist perspective makes GWT a powerful explanatory framework for the mechanisms of cognitive control, memory retrieval, and attention.
A metaphor that nicely captures the basic idea of GWT is a theater stage. Many important things are on stage, but attention puts a spotlight on the most important one. Backstage are all the subconscious processes that may eventually come on stage and be recognized by the brain. Many items can be on stage at once; however, the brain can focus on only so much at a time, and this selective spotlight is what GWT explains.
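The theater metaphor can be rendered as a minimal sketch (a toy illustration under assumed salience values, not a neural model): several candidate contents compete, the most salient one wins the spotlight, and only the winner is broadcast to every specialist process.

```python
from dataclasses import dataclass, field

# Toy sketch of GWT's core dynamic: competition for the workspace,
# then global broadcast of the single winning content.

@dataclass
class Module:
    """A specialist process (memory, planning, speech, ...)."""
    name: str
    received: list = field(default_factory=list)

    def receive(self, content):
        # Broadcast contents become available to this process.
        self.received.append(content)

def broadcast_cycle(candidates, modules):
    """One workspace cycle: the most salient content is globally broadcast."""
    content, _salience = max(candidates.items(), key=lambda kv: kv[1])
    for m in modules:
        m.receive(content)
    return content

modules = [Module("memory"), Module("planning"), Module("speech")]

# Competing contents with hypothetical salience (attention-weighted) values.
winner = broadcast_cycle(
    {"ringing phone": 0.9, "background hum": 0.2, "itchy sleeve": 0.4},
    modules,
)
print(winner)  # only "ringing phone" reaches every module
```

In GWT terms, the losing candidates remain unconscious: they were processed locally but never made globally available, which is exactly the stage-versus-backstage distinction of the metaphor.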
GWT originated in traditional cognitive science. Baars described the process as akin to a “blackboard architecture,” the basic premise being that knowledge is distributed throughout the cortices of the brain so a person can cooperatively solve problems. GWT generates predictions about emotions, motivations, learning, voluntary control, working memory, and self-regulating systems, such as the heartbeat and the flow of blood throughout the body.
GWT shares similarities with theories such as Neural Darwinism, which also aims to explain brain functioning. These theories cover cortical activity located in both the frontoparietal and medial temporal regions.22 Activity in the frontoparietal regions is typically reduced when the brain is unconscious during sleep stages, comas, and anesthesia. This consistency of findings is why many scientists favor this theory over other leading theories in consciousness research.
In these unconscious states, external stimuli evoke only local, feature-related activity in the sensory cortex, without global broadcasting. GWT suggests that consciousness facilitates “multiple networks” connected to create standard functions like problem-solving, and that these interactions arise from widespread rhythms or patterns in the brain. GWT primarily addresses the mechanism of conscious access, how information becomes available for report and control, rather than explaining the intrinsic nature of all complex cognitive processes like reasoning itself, although these processes can utilize information within the workspace. Consistent with this, disruption of only certain prefrontal regions, such as the orbitofrontal cortex and anterior cingulate cortex, disturbs conscious experience.
Stimulation of anterolateral prefrontal sites, by contrast, did not typically alter the conscious experience of immediate environmental stimuli. “Nevertheless, effects in the orbitofrontal and anterior cingulate cortices suggest a specific role for these PFC (prefrontal cortex) subregions in supporting emotional aspects of conscious experience.”23 In this way, GWT research attempts to establish scientifically where most conscious experiences are found.
GWT also tries to solve other problems from the perspective of the human mind. One of these is known as the frame problem: how cognitive processes attend to relevant information while filtering out irrelevant data.
The frame problem, which Fodor discussed in 1987, highlighted a difficulty for artificial intelligence (AI): how an AI system should react to an ever-changing environment. GWT offers a parallel ‘global broadcast’ that searches through information in a way reminiscent of how AI systems work, but in the minds of humans rather than in a manufactured creation.24
Specific processes in the mind, such as peripheral processes, work to encapsulate and transfer information, and this transfer of information is what makes the theory crucial. However, Fodor built his argument on several assumptions, one being that the Computational Theory of Mind is the only viable research methodology in cognitive science today, “the only game in town,” as he put it. This means that if other scientists did not accept this premise, the frame problem would not make sense and the theory would be dismissed immediately. This was, in fact, the case: others argued against it.
What made the frame problem so interesting to others was the complexity of the issue at hand and how the question was answered. The debate concerned how unencapsulated information works in the brain, in contrast to what scientists working with GWT took to be broadcast information distributed throughout the brain. It opened the door for new theories about how the brain spreads information from place to place.
This is what is called the architecture: parallel sources feed into a serial thread of computation, the execution of a single data item at a time. GWT suggests that consciousness is limited to one content at a time, with that content broadcast to different places to communicate its information. In humans, such limits are difficult to observe directly outside of states like sleep. Computational modeling, including AI approaches, provides a valuable tool for testing the feasibility and implications of architectures like GWT, helping researchers refine hypotheses about how information might be processed in the brain.
GWT states that once the information in the brain makes it to the unique global workspace, it is sent throughout the brain, making one conscious of something. This makes sense; however, the issue arises when addressing the exact location of the global workspace.
Many who argue against this theory raise the question of ‘where?’ Most agree that the theory makes sense, and the workspace is often associated with widespread activation, particularly in fronto-parietal networks, but the precise neural substrate of the global workspace remains debated. Critically, studies show that consciousness persists when these regions are deactivated using magnetic or electrical stimulation, making it challenging to locate the global workspace. This raises the further question of whether the global workspace even exists. The simple answer is that current technology is not advanced enough to prove or disprove the theory; the answer remains unknown.
Those who do not fully understand the concept of consciousness might argue about the significance of where activity occurs in the brain for it to be taken seriously. If we consider only one area, then it may seem most accurate based on current scientific understanding of consciousness. However, this perspective is limited. Without identifying the proper location of the global workspace in the brain, there could be significant implications for understanding other important areas, such as those responsible for neuron programming related to thought and creativity. Would changes in location affect these functions? Moreover, the global workspace might be situated differently for each individual, which further complicates the issue. Consequently, these questions highlight gaps in the theory that cannot be addressed without knowing the exact location of the potential global workspace.
Despite its strengths, GWT has been criticized for focusing on information processing rather than subjective experience itself. While it explains how information becomes widely accessible, it does not fully address why certain neural processes are associated with conscious awareness while others are not. Another significant challenge is the lack of precise neural correlates defining the “global workspace”: although widespread cortical activation is observed during conscious tasks, the specific mechanisms by which this activation produces subjective experience remain unclear. Additionally, GWT does not inherently solve the hard problem of consciousness; it describes the functional role of conscious access but does not explain why global broadcasting should feel like anything at all. Some critics therefore argue that GWT may be better suited as a theory of attention and working memory than as a comprehensive theory of consciousness, since it primarily explains conscious access to information, potentially conflating it with attention and reportability, rather than addressing the subjective quality of experience itself. Without addressing these fundamental concerns, GWT remains an influential but incomplete theory.
Nevertheless, theories continue to surface to answer this hard problem of consciousness. The ongoing nature of research in this field should keep people engaged and involved. This is only the beginning of what scientists will soon learn about the brain and its complex features, for this is not the only theory that tries to address this hard problem.
Integrated Information Theory
Another leading theory of consciousness today is the integrated information theory (IIT), proposed in 2004 by Giulio Tononi, an Italian neuroscientist and psychiatrist at the University of Wisconsin. As the name implies, the theory describes consciousness as information integrated throughout the brain, giving rise to unified subjective awareness.
Unlike most contemporary theories of consciousness, IIT attempts to capture the quality and quantity of experience in scientific and mathematical terms rather than taking a purely verbal, theoretical approach. The theory is built on a foundation of axioms.
Axioms, in the most basic terms, are foundational statements in mathematics and science that are accepted without proof and from which more complex arguments can be built. For example, 0 < 1 is an established axiom: it is agreed upon by everyone who understands arithmetic, and it can be used to derive further, more complex results. The same approach can be applied to science; however, unlike in mathematics, it is far more challenging to formulate starting assumptions that every scientist will accept.
This is where Tononi’s contribution comes in. He proposed five axioms to serve as the backbone of understanding consciousness, as follows:
“First, consciousness is real and undeniable; a subject’s consciousness has this reality intrinsically; it exists from its own perspective. Second, consciousness has composition. In other words, each experience has a structure. Color and shape, for example, structure visual experience. Such structure allows for various distinctions. Third is the hypothesis of information: how an experience distinguishes it from other possible experiences. An experience specifies that it is specific to certain things, distinct from others. Fourth, consciousness has the characteristic of integration. The elements of an experience are interdependent. For example, the particular colors and shapes that structure a visually conscious state are experienced together. As we read these words, we experience the font shape and letter color inseparably. We do not have isolated experiences of each and then add them together. This integration means that consciousness is irreducible to separate elements. Consciousness is unified. Fifth, consciousness has the property of exclusion. Every experience has borders. Precisely because consciousness specifies certain things, it excludes others. Consciousness also flows at a particular speed.”25.
This set of axioms distinguished the theory from its rivals, because the axioms serve as starting points from which an account of consciousness can be built. This presents a fascinating philosophical tension. Axioms are meant to embody common sense and resonate with intuitive understanding. At first glance this seems reasonable; examined more deeply, however, science is grounded in empirical proof and rigorous evidence, which suggests that appeals to common sense may be flawed or invalid, since they lack the concrete evidence necessary to substantiate their claims. Nevertheless, the axioms help explain why people experience consciousness and how certain neurons carry information to present a specific experience consisting of qualia.26
Despite Tononi’s efforts to formulate universal axioms, they have not achieved universal acceptance, and this is what many who do not support the theory argue: the theory works only if all scientists accept the axioms as true, which they do not.27
On this view, the particular pattern of cause and effect among a person’s neurons is what makes each person’s experience, or qualia, unique. Because consciousness depends on the complete set of causes and effects within this system of neurons, supporters of the theory treat consciousness as a structure rather than an idea, concept, or process: every property has a counterpart, and every experience corresponds to a structure. However, the reliance on axioms as the starting point, rather than empirically derived principles, is a key point of contention. The challenge lies in moving from these phenomenological axioms to empirically testable predictions about neural correlates.
Experimental work on the neural correlates of consciousness (NCC) uses the Greek letter Φ (phi), a mathematical measure of how much integrated information a system contains, to identify the physical substrate of consciousness (PSC). This cause-and-effect structure is taken to determine the quality and quantity of human experience.28
The NCC reflects the amount of sensory activity generated in different brain regions, such as the temporal, occipital, and parietal cortices of the cerebral cortex. Most importantly, the axioms and properties of IIT provide general principles for locating the PSC in the brain. The PSC supports a cause-and-effect structure that gives the system of neurons its intrinsic perspective, and IIT predicts that specific experiences become more likely as this cause-and-effect structure grows.
The clearest window onto this natural cause and effect is sleep. During sleep, consciousness is least active, making it easier to track neurons in the brain and reveal the typical extent of the PSC. Under unusual circumstances, such as seizures, the PSC changes drastically and neurons can no longer be tracked accurately. The information specific to each person’s qualia is far greater than the limited portion of consciousness that twenty-first-century tests can study.
However, the IIT provides a foundational explanation for many facts debated in the PSC. The IIT explains why the cerebral cortex, not the cerebellum, is essential for consciousness. “In general, the coexistence of functional specialization and integration in the cerebral cortex is ideally suited to integrating information.”29. This would also explain why consciousness does not depend on the cerebellum, even though it contains roughly four times as many neurons as the cerebral cortex. IIT also accounts for the fading of consciousness during sleep:
“When cortical neurons fire but, as a result of changes in neuromodulation…any input quickly triggers a stereotypical neuronal down-state, after which neurons enter an up-state and activity resumes stochastically.”27.
IIT also predicts the PSC in the brain through natural cause and effect, regardless of how many neurons are involved; much of the physical evidence of consciousness is found within the temporal cortices. The IIT framework breaks down some of humans’ most basic functions and pairs them with specific corresponding structures: features such as shape and color each correspond to a structure, and although some of these pairs overlap, they remain distinct enough to tell apart.
Along with daily experiences, IIT can predict the amount of information transferred throughout the brain. It shows the changes seen in the PSC but cannot alter them during testing. This indicates that consciousness differs between individuals and changes as the brain develops with age. It can also apply to animals, since they have similar brain structures and functions.
Despite that information, scientists have still been unable to find a physical way to access any animal’s experience. One reason is that current technology does not provide an accurate way of studying how any animal thinks. The second reason is that even if scientists had the technology to record an animal’s conscious patterns, a deeper barrier would remain, as Thomas Nagel explains in What Is It Like to Be a Bat?:
“One might try, for example, to develop concepts that could be used to explain to a person blind from birth what it was like to see. One would reach a blank wall eventually, but it should be possible to devise a method of expressing in objective terms much more than we can at present, and with much greater precision.”30.
This means that no matter what tests a person undergoes, it would be impossible to actually be a bat. The same applies to other people’s qualia and consciousness; it is simply impossible to inhabit another person’s consciousness.
“In summary, IIT is a theory of consciousness that starts from the self-evident, essential properties (axioms) of experience and translates them into the necessary and sufficient conditions (postulates) for the PSC. The axioms are intrinsic existence (my experience exists from my intrinsic perspective); composition (it has structure), information (it is specific), integration (it is unitary) and exclusion (it is definite).”27.
Just as many support Integrated Information Theory, many also find issues with it and disagree with its premises. The main objection is that IIT is not falsifiable: an idea, hypothesis, or experiment is falsifiable when it can, in principle, be proven wrong. A hypothesis that cannot be tested against evidence can be neither confirmed nor refuted by experiment, which places it outside standard scientific practice.
There may be philosophical barriers to understanding consciousness that persist even after significant technological advancements. Even if we can measure consciousness, we might still struggle to comprehend its true nature. Each person’s experience of consciousness is unique, making it difficult to achieve a universally shared understanding.
Experiments such as multimodal neuroimaging, cellular imaging, and correlations with subjective experiences help advance our understanding of consciousness. However, conducting all these experiments simultaneously poses a significant challenge with our current technology. As a result, we continue to face issues like the “Mary’s Room” thought experiment (where a scientist knows all physical facts about color but learns something new upon seeing color), which highlights the potential gap between physical knowledge and subjective experience, even with complete physical data. This is the case for IIT. Tononi’s hypotheses were not proven to be universally true due to the nature of consciousness. Consciousness and its study cannot simply be explained with rules, since scientists cannot even develop a universal definition. This is the first main problem with this theory.
“IIT proponents believe it may solve the ‘hard problem’ of consciousness of why and how physical processes can be accompanied by subjective experience. We argue that distinguishing a ‘weak’ from a ‘strong’ flavor of IIT can provide a useful theoretical umbrella for ongoing empirical work.”31.
A major criticism from many scientists is not the accuracy of neural tracking per se, but rather the current practical impossibility of calculating Φ for complex systems like the brain, the theory’s reliance on non-testable axioms, and concerns about its falsifiability. This matters for the theory’s actual mathematics: even if Tononi’s axioms were firmly established, today’s tools cannot track neural activity at the scale and precision the calculation requires. The testing needs to be narrower and more precise.32,33
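The scale of the computational problem can be made concrete. Evaluating integrated information involves comparing a system against its possible partitions, and the number of ways to partition n elements (the Bell number) grows faster than exponentially. The sketch below is a toy illustration of this combinatorial explosion, not any published IIT algorithm:

```python
# Why exact Phi is impractical for brains: measures of integration involve
# searching over partitions of the system, and the number of partitions of
# n elements (the Bell number B_n) grows super-exponentially.

def bell(n):
    """Bell number B_n, computed with the Bell-triangle recurrence."""
    row = [1]
    for _ in range(n - 1):
        new_row = [row[-1]]          # each row starts with the previous row's last entry
        for value in row:
            new_row.append(new_row[-1] + value)
        row = new_row
    return row[-1]                   # last entry of row n is B_n

for n in [5, 10, 20]:
    print(n, bell(n))
# 5 elements  ->             52 partitions
# 10 elements ->        115,975 partitions
# 20 elements -> 51,724,158,235,372 partitions
```

Already at 20 elements the partition count exceeds fifty trillion; with tens of billions of neurons, an exhaustive search is astronomically beyond any computation, which is why empirical work falls back on indirect proxies for Φ.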
The challenge of universal agreement on Tononi’s axioms arises from their abstract and philosophical nature, making it difficult to establish a consensus across different scientific disciplines and cultural perspectives. Unlike empirical theories that are grounded in measurable data, IIT begins with a set of a priori principles about consciousness, such as existence, composition, and exclusion, which are not directly testable. These axioms require an acceptance of specific philosophical assumptions about consciousness that may not align with all scientific frameworks. Additionally, interpretations of what constitutes information and integration can vary significantly between neuroscientists, AI researchers, and philosophers, leading to disagreements about the fundamental premises of the theory. Cultural perspectives further complicate matters, as different traditions and schools of thought conceptualize consciousness in ways that may not be easily reconciled with IIT’s formalism.
The critique regarding IIT’s lack of falsifiability is also crucial and could be further developed to highlight the implications for the theory’s scientific credibility. Falsifiability is a key criterion for a robust scientific theory, as it allows hypotheses to be tested and potentially disproven through empirical evidence. The primary issue with IIT is that while it proposes a mathematical framework to quantify consciousness through integrated information (Φ), it does not generate clear, testable predictions that can be empirically verified or refuted. This limitation raises concerns about whether IIT remains within the realm of scientific inquiry or if it veers toward metaphysical speculation. One potential way forward is to refine the theory in a way that allows for falsifiable predictions, such as identifying specific neural signatures of high-Φ states that can be experimentally tested. Without such advancements, IIT risks being seen as an elegant but ultimately unverifiable model of consciousness.
Nonetheless, IIT has revealed the profound significance of what scientists understand as consciousness. Like IIT and GWT, there are numerous other distinctive avenues through which scientists can explore this enigmatic phenomenon. Both IIT and GWT have contributed a piece to this intricate puzzle, but the journey of discovery is far from over. There are still pieces waiting to be unearthed and added to this ever-evolving picture, and the potential for discoveries is exciting and intriguing.
Predictive Coding Theory
The last leading theory of consciousness discussed in this review is predictive coding, also known as predictive processing. Predictive coding explains how the brain functions, constantly generating and updating a mental model of the environment. This means the brain controls how humans and animals respond to stimuli. Simple things like creating and having habits or having muscle memory in a sport or hobby are some examples of the most common predictive processing.
“A large body of research has arisen based on both empirically testing improved and extended theoretical and mathematical models of predictive coding.”34.
The origin of this theory dates back to 1860, when the German scientist and physicist Hermann Ludwig Ferdinand von Helmholtz described what he called unconscious inference: the brain’s construction of visual information to produce a coherent perception of a scene. Around the 1940s, the ‘New Look’ movement added further concepts, invoking needs, motivations, and experiences to explain a person’s perception. Jerome Bruner created this new branch of the theory.
Between the early 1980s and late 1990s, concepts such as prediction were added to this theory. Rajesh P. N. Rao, an Indian neuroscientist, and Dana Harry Ballard, a professor of computer science at the University of Texas, created a generative model that captured tendencies of the human brain and yielded consistent patterns of predictive measurements and calculations.
Predictive coding was built from the roots of the sensory system, where the brain solves the problem of modeling distal causes of sensory input through a version of Bayesian inference; this means that the brain uses both past experiences and evidence from the surrounding stimuli to come up with a conclusion of sorts.
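This Bayesian-inference idea can be illustrated with a toy example (the hypotheses and numbers below are invented for illustration, not drawn from any study): a prior belief about the distal cause of a sensory input is combined with the likelihood of the current evidence to yield an updated belief.

```python
# Toy Bayesian inference of the kind predictive coding attributes to the
# brain (hypothetical numbers): combine a prior belief about a distal
# cause with the likelihood of the current sensory evidence.

def posterior(prior, likelihood):
    """Bayes' rule over a discrete set of hypotheses."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Prior: past experience says a round red blur is usually an apple.
prior = {"apple": 0.7, "ball": 0.3}
# Likelihood: how well each hypothesis explains the current sensory input.
likelihood = {"apple": 0.9, "ball": 0.2}

belief = posterior(prior, likelihood)
# belief["apple"] = 0.63 / 0.69, roughly 0.913: the evidence sharpens the prior.
```

The "conclusion of sorts" in the text corresponds to the posterior: neither past experience nor current evidence alone, but their product, renormalized.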
Predictive coding inverts the conventional view of perception as a primarily bottom-up process, suggesting that prior predictions largely constrain it.
Finally, in 2004, Rick Grush and Anil Seth introduced their groundbreaking model of neural perceptual processing. Based on the brain’s constant generation of predictions from a generative model, this model is now officially known as ‘Predictive Coding.’
Predictive coding is not just a theoretical concept but a framework for explaining how the brain anticipates sensory, visual, and physical information before it arrives. On this view, the brain becomes conscious of something when predicting its outcome. Predictive coding allows humans to anticipate what they are about to see before they see it and then, based on previous experience, to check the prediction and correct it if it was wrong.
For instance, when a person looks at an apple, their brain predicts what it will look like in terms of color, size, shape, and other sensory information such as taste and smell. Once they see and recognize the apple, the brain sends a signal to the front of the brain to confirm if the apple they were seeing matches the one the person predicted.
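The predict-compare-correct cycle in the apple example can be sketched as a simple error-driven update. This is an illustrative toy, not a neural model: the "model" is a single expected value nudged toward the input in proportion to the prediction error.

```python
# Minimal predict-compare-correct loop, as in the apple example above.
# Illustrative sketch only: a single expected value is corrected toward
# the observation in proportion to the prediction error.

def update(prediction, observation, learning_rate=0.5):
    """One predictive-coding step: compute the error, correct the model."""
    error = observation - prediction           # bottom-up prediction error
    return prediction + learning_rate * error  # top-down model update

expected_redness = 0.2   # prior expectation (say, a greenish apple)
observed_redness = 1.0   # the apple actually seen is fully red

for _ in range(5):
    expected_redness = update(expected_redness, observed_redness)
# The prediction converges toward the observation: 0.6, 0.8, 0.9, 0.95, 0.975
```

When prediction and observation already match, the error is zero and nothing changes; a mismatch, as in the brain’s confirmation step described above, drives the correction.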
Most predictive coding is said to take place in the brain’s visual cortex. This makes sense, as all visuals come from simply seeing an object and predicting certain aspects. However, predictive coding is more intricate than it may seem. Studies have revealed that individuals who are blind are still able to perceive their environment despite their lack of sight. This is made possible by a specific part of the brain that is activated by predictive coding. Even without the ability to interpret their surroundings visually, blind individuals can anticipate the world around them based on sensory input.35
This process involves the visual cortex, located in the occipital lobe at the back of the brain. When information is generated (usually by the visual cortex), it is sent back as signals through the occipital lobe and toward the cerebellum; this, scientists believe, is where predictive coding starts.
From there, assumptions are made and returned to the frontal part of the brain, where all the pieces are assembled. This is how blind people can understand the environment around them: though they are visually impaired, the rest of the brain helps them interpret their surroundings. However, predictive coding, like the other theories, has limitations. It rests on the idea that humans predict things whenever they are conscious of them, meaning the brain is constantly predicting every possible scenario. This is made possible because predictive coding integrates information across all available senses: even without visual input, the brain uses auditory cues, tactile information, and proprioception to build and update a predictive model of the environment, allowing anticipation and interaction.
Yet generating every contingency of reality and filtering it down would require an enormous amount of information to cross the brain at once. Many scientists, such as Joseph LeDoux, Bernard Baars, Daniel Dennett, and Patricia Churchland, argue that transferring that much information throughout the brain is nearly impossible.
While PC posits that the brain is a hierarchical prediction machine, continuously generating and updating models based on sensory input, its specific neural correlates need clearer elaboration. Studies using electrophysiology, fMRI, and MEG suggest that higher cortical areas send top-down predictions, while lower areas transmit bottom-up prediction errors when reality deviates from expectations. For example, research using laminar recordings in the visual cortex (Bastos et al., 2012) has demonstrated distinct feedforward and feedback signaling patterns that align with PC’s hierarchical processing framework.
Further support comes from studies on mismatch negativity (MMN), an ERP component observed when unexpected stimuli violate predictions, reflecting prediction error signals in sensory processing (Friston, 2005). fMRI studies have also shown that brain regions such as the anterior cingulate cortex, insula, and prefrontal cortex are active during prediction error processing, further linking PC to conscious perception (Iglesias et al., 2013). Additionally, recent work in active inference extends PC to motor control and decision-making, suggesting that dopaminergic circuits in the basal ganglia encode prediction errors related to both action selection and sensory perception (Schwartenbeck et al., 2015). Strengthening the discussion with these specific neural findings would provide a more empirically grounded understanding of how PC operates in the brain.
As a result, predictive coding stands as a testament to the collective knowledge of the scientists who paved the way, contributing to the theory’s evolution from a single seed of thought to the complex framework it is today. Like GWT and IIT, predictive coding offers a fresh perspective on consciousness, challenging us to rethink what it means to be conscious and how consciousness is achieved. With EEG evidence and the data this framework has generated, predictive coding has illuminated another part of the complex problem of consciousness. It is only a matter of time before scientists understand the truth behind the mind’s most mysterious and complex functions.
Comparison of Modern Theories of Consciousness
While modern society possesses a wealth of information and advanced technology, this does not guarantee that contemporary scientists can effectively apply these resources to comprehensively describe human consciousness. Global Workspace Theory and Predictive Coding offer different but potentially complementary perspectives on consciousness. GWT emphasizes global broadcasting, where information that enters the workspace becomes available to multiple cognitive systems, enabling flexible, coordinated processing. In contrast, Predictive Coding posits that the brain is fundamentally a prediction machine, constantly generating models of the world and updating them based on incoming sensory data. Consciousness, under PC, arises when there is a prediction error significant enough to require high-level cortical intervention.
These models align in suggesting that consciousness involves large-scale neural integration and that attention plays a key role in determining what becomes conscious. However, they diverge in how they explain this process. GWT suggests that consciousness is a function of widespread availability, whereas PC suggests that consciousness emerges from the resolution of prediction errors. In GWT, information is made conscious when it is broadcasted across networks, while in PC, consciousness is needed when predictions fail and the brain must actively update its model of reality. This difference raises key empirical questions—does neural activity associated with consciousness align more with global availability (supporting GWT) or with error correction mechanisms (supporting PC)? A comparative analysis of their predictions, particularly through neuroimaging and computational modeling, could help determine whether these theories are fundamentally in conflict or whether they describe different aspects of the same underlying process.
Discussion
This paper considered four leading theories of consciousness. The Somatic Marker Hypothesis, created by Antonio Damasio, concluded that the brain requires emotions to make decisions. Global Workspace Theory, created by Bernard Baars, argued that once information reaches a specific part of the brain, it is broadcast everywhere, making one conscious of it. Integrated Information Theory, proposed by Giulio Tononi, offered a different approach, grounding consciousness in a set of axioms: statements about the reality of consciousness taken to be self-evidently true. The last theory, predictive coding, associated with Rick Grush and Anil Seth, explains that the brain predicts the outcomes of the stimuli around it; based on previous experience, the brain applies the most plausible model to reality. While GWT focuses on information broadcasting and PC on prediction error, IIT offers a distinct mathematical framework based on integrated information, and SMH emphasizes the specific role of embodied emotion in conscious processing and decision-making.
Connecting these diverse theories presents a significant challenge. They operate at different levels of explanation and possess internal inconsistencies and gaps, preventing simple unification. If they aligned perfectly, a single universal theory might emerge, but the complexity of consciousness suggests otherwise. The following discussion will explore key comparisons and contrasts between these frameworks.
All of the theories have one main thing in common: each depends on locating consciousness-related processes somewhere in the brain. For example, the Somatic Marker Hypothesis requires that rational perception and thinking be localized to particular brain regions, or the theory does not work.
These four theories do not address religious standpoints or the philosophical challenges posed to the public. “What Is It Like to Be a Bat?” addresses qualia, the qualitative feel of phenomenal distinctions within an experience (for example, seeing a color, hearing a sound, or feeling a pain).24 Philosophical thought experiments, such as Mary’s Room, help highlight specific difficulties in studying consciousness.
The Knowledge Argument, also known as the Mary’s Room Experiment, was proposed by Frank Jackson in 1982 and markedly shifted how consciousness is studied. Philosophical thought experiments like Mary’s Room are not designed for direct empirical testing in a lab; their purpose is to probe the conceptual limits of physicalist explanations of consciousness and highlight the challenge of accounting for subjective experience, irrespective of technological advancement. Scientists, at this time, simply do not know all the answers, and today’s technology must grow considerably more sophisticated to probe what these thought experiments point to. This suggests there will likely be a surge in understanding once better technology and universal criteria are established.
More advanced technologies, such as EEG, hyperscanning, and fNIRS, allow scientists to study the brain in more real-world contexts. Combined with simultaneous imaging and data-analysis techniques, such as AI and machine-learning analysis of multimodal datasets, neurologists can more quickly identify patterns across different levels of neuroscience, from the cellular level to cognition.
However, a benchmark has been proposed: universality. The concept comes from Anil Seth, one of the credited creators of predictive coding, who describes its purpose as follows:
“…often assumed in physics, posits that the fundamental laws of nature are consistent and apply equally everywhere in the universe, and they remain the same over time…The belief in universality means the knowledge and insights we gain from experiments on Earth apply to the entire universe.”36
Seth then explains that even his theory of predictive coding is not necessarily the complete, correct answer to what consciousness is or how it functions; rather, he argues that it is one of the only modern theories in the neuroscience of consciousness to meet this criterion of universality. He also reinforces that ‘correlation does not imply causation’: one cannot deduce a cause-and-effect relationship between two events or variables solely from an observed association between them. This fallacy is also known by the Latin phrase cum hoc ergo propter hoc, meaning ‘with this, therefore because of this.’ The practical consequence is that an error in one place does not invalidate everything around it: if part of a hypothesis is wrong, it does not follow that the entire experiment is wrong, since some experiments contain slight errors without being incorrect as a whole.
Due to the complexities of technology and the unknown factors influencing consciousness in each individual, the only way to potentially find a solution is to understand how the brain functions. This requires tracking the activity of neurons and understanding their pathways and actions. To achieve this, experiments must focus on actively monitoring brain cells. By doing so, we can identify the regions associated with consciousness and attempt to replicate its characteristics, leading to a deeper understanding of its mechanisms.
While significant progress has been made toward scientifically understanding the neurobiological mechanisms producing consciousness, no singular theory comprehensively captures it. Future research should build upon and potentially unify the Somatic Marker Hypothesis, Global Workspace Theory, Integrated Information Theory, and Predictive Coding.
Interdisciplinary collaboration could also play a key role in bridging theoretical divides. Philosophy could contribute by clarifying conceptual distinctions between different models of consciousness, helping refine their core assumptions. Neuroscience could provide empirical tests of these models through advanced techniques such as high-resolution fMRI, intracranial recordings, or optogenetics to manipulate and measure conscious states. AI research could contribute by building computational models that simulate different aspects of consciousness, testing whether elements of IIT, GWT, or Predictive Processing can be implemented and validated in artificial systems. By integrating insights from these diverse fields, future research may move closer to a unified and testable framework for understanding consciousness.
Limitations
Reproducibility remains a significant challenge in consciousness research, raising concerns about the reliability of key findings that support modern theories. Many studies investigating the neural correlates of consciousness (NCC) rely on small sample sizes, varied experimental paradigms, and complex statistical analyses that introduce inconsistencies. This issue is particularly evident in fMRI studies supporting Global Workspace Theory (GWT), where different studies report inconsistent patterns of widespread cortical activation during conscious perception. Problems such as thresholding variability, overfitting, and circular analyses have led to concerns that some findings may be false positives rather than robust neural signatures of consciousness (Eklund et al., 2016). Additionally, differences in task paradigms—such as whether participants report their awareness or whether researchers infer consciousness based on behavior—have led to contradictory results, making it difficult to establish definitive neural markers of conscious access.
Similarly, Predictive Coding studies face reproducibility challenges due to high intersubject variability and difficulties in isolating prediction error signals from general neural activity. Although mismatch negativity (MMN) and expectation suppression paradigms have been widely used to test PC’s claims, these effects have not consistently replicated across different experimental conditions (Stefanics et al., 2018). Integrated Information Theory presents an even greater challenge, as its core metric, Φ, lacks a standardized empirical measure that can be independently validated. Studies attempting to approximate Φ have often relied on indirect proxies, such as EEG entropy or brain complexity measures, but these approaches vary widely and lack a consensus framework for comparison. Addressing these reproducibility concerns requires larger sample sizes, pre-registered study designs, standardized experimental paradigms, and open-access data sharing. Moving forward, efforts to develop consistent methodologies will be critical in establishing which aspects of consciousness research are truly reliable and generalizable across studies.
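Because Φ itself lacks a standardized empirical measure, studies fall back on complexity proxies of the kind mentioned above. As a concrete illustration, the sketch below computes the Lempel-Ziv complexity of a binarized one-dimensional signal, the core ingredient behind several brain-complexity measures (such as the perturbational complexity index). The function names, the median-based binarization, and the log-normalization against a random sequence are illustrative assumptions for this sketch, not a standard taken from the cited studies.

```python
import numpy as np

def lempel_ziv_complexity(s: str) -> int:
    """Count phrases in an exhaustive-history Lempel-Ziv (1976) parsing:
    each phrase is the shortest prefix at position i that has not
    appeared anywhere earlier in the sequence."""
    i, n, count = 0, len(s), 0
    while i < n:
        j = i + 1
        # extend the phrase while it has already appeared earlier
        while j <= n and s[i:j] in s[:j - 1]:
            j += 1
        count += 1
        i = j
    return count

def normalized_lz(signal: np.ndarray) -> float:
    """Binarize a 1-D signal around its median, then normalize its LZ
    complexity by the asymptotic complexity n / log2(n) of a random
    binary string of the same length, so random noise scores near 1."""
    binary = ''.join('1' if x > np.median(signal) else '0' for x in signal)
    n = len(binary)
    return lempel_ziv_complexity(binary) * np.log2(n) / n
```

On this measure, a highly regular signal (e.g., a sine wave) scores near zero because its later samples are predictable from earlier ones, while unstructured noise scores near one; the reproducibility concern raised above is precisely that different studies pick different binarizations, normalizations, and complexity variants, making scores hard to compare.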
Acknowledgments
I would like to express my heartfelt gratitude to my mentor, Dr. Brock Pluimer, whose unwavering encouragement and insightful guidance have inspired me to embark on this transformative writing journey. With a keen eye for detail, he offers feedback that goes beyond the surface, prompting me to explore the depths of my thoughts and ideas. His thoughtful suggestions have shaped my work and pushed me to think critically and creatively, challenging me to refine my craft with each revision.
Additionally, I extend my deepest thanks to my parents, whose unwavering support and steadfast belief in my potential have been my most significant source of motivation. Their nurturing spirit surrounds me like a warm embrace, instilling a profound confidence that propels me forward. They have cultivated in me a deep love and passion for writing, one that fuels my creativity and drives my intellectual pursuits. Their encouragement has nurtured my growth, allowing my ideas to flourish and take form.
Finally, I sincerely appreciate the Ambrose School for providing a rich environment in which my understanding of rhetoric has grown and thrived. The supportive atmosphere and engaging discussions have significantly contributed to my academic development and inspired me to explore new horizons in my writing.
References
- Merriam-Webster.com Dictionary, s.v. “consciousness,” accessed September 29, 2024, https://www.merriam-webster.com/dictionary/consciousness. [↩]
- Feldman & Goodrich, 1999. The papyrus was acquired by Smith in 1862 and translated into English in 1930 by Arno Luckhardt. [↩]
- F. Fabbro, D. Cantone, S. Feruglio, and C. Crescentini. “Origin and Evolution of Human Consciousness.” In Progress in Brain Research, 250:317–43. Elsevier, (2019). [↩]
- E. Neumann. The Origins and History of Consciousness. First Princeton Classics edition. Bollingen Series XLII. Princeton, NJ: Princeton University Press, (2014). [↩] [↩]
- E. Crivellato and D. Ribatti. “Soul, Mind, Brain: Greek Philosophy and the Birth of Neuroscience.” Brain Research Bulletin 71, no. 4 (January 2007). [↩]
- P. Hansotia. “A Neurologist Looks at Mind and Brain: ‘The Enchanted Loom.’” Clinical Medicine & Research 1, no. 4 (October 2003). [↩]
- R. Descartes. Meditations on First Philosophy. (2008). [↩]
- C. B. Wrenn. “Naturalistic Epistemology.” The Internet Encyclopedia of Philosophy, ISSN 2161-0002, https://iep.utm.edu/, accessed April 4, 2025. [↩]
- R. Duschinsky. “Tabula Rasa and Human Nature.” Philosophy 87, no. 4 (October 2012): 509–29. https://doi.org/10.1017/S0031819112000393. [↩]
- W. James and M. E. Marty. The Varieties of Religious Experience: A Study in Human Nature. The Penguin American Library. Harmondsworth, Middlesex, England; New York, NY: Penguin Books, (1982). [↩]
- J. Juhansar. “John Locke: The Construction of Knowledge in the Perspective of Philosophy.” Jurnal Filsafat Indonesia 4, no. 3 (November 1, 2021): 254–60. https://doi.org/10.23887/jfi.v4i3.39214. [↩]
- J. Fleischman. Phineas Gage: A Gruesome but True Story about Brain Science. Boston: Houghton Mifflin, 2002. [↩]
- I. Mishqat. “The Formation of Reticular Theory.” Embryo Project Encyclopedia (June 19, 2017). ISSN 1940-5030. https://hdl.handle.net/10776/11773. [↩]
- R. J. Katz-Sidlow. “The Formulation of the Neuron Doctrine: The Island of Cajal.” Archives of Neurology 55, no. 2 (February 1, 1998): 237. https://doi.org/10.1001/archneur.55.2.237. [↩]
- R. Yuste. “From the Neuron Doctrine to Neural Networks.” Nature Reviews Neuroscience 16 (2015): 487–97. https://doi.org/10.1038/nrn3962. [↩]
- G. M. Shepherd. “Neuron Doctrine: Historical Background.” In Encyclopedia of Neuroscience, 691–95. Elsevier, (2009). https://doi.org/10.1016/B978-008045046-9.00988-8. [↩]
- A. Compston “The Berger Rhythm: Potential Changes from the Occipital Lobes in Man, by E.D. Adrian and B.H.C. Matthews (From the Physiological Laboratory, Cambridge). Brain 1934: 57; 355-385.” Brain 133, no. 1 (January 1, 2010): 3–6. https://doi.org/10.1093/brain/awp324. [↩]
- C. M. Michel and M. M. Murray. “Towards the Utilization of EEG as a Brain Imaging Tool.” NeuroImage 61, no. 2 (June 2012): 371–85. https://doi.org/10.1016/j.neuroimage.2011.12.039. [↩]
- A. R. Damasio. Descartes’ Error: Emotion, Reason and the Human Brain. 18th printing. New York: Quill, (2004). [↩]
- J. Moini, A. LoGalbo, and R. Ahangari. “Introduction.” In Foundations of the Mind, Brain, and Behavioral Relationships, 3–9. Elsevier, (2024). [↩]
- A. Bechara, H. Damasio, D. Tranel, and A. R. Damasio. “The Iowa Gambling Task and the Somatic Marker Hypothesis: Some Questions and Answers.” Trends in Cognitive Sciences 9, no. 4 (April 2005): 159–62. https://doi.org/10.1016/j.tics.2005.02.002. [↩]
- M. Shanahan and B. Baars. “Applying Global Workspace Theory to the Frame Problem.” Cognition 98, no. 2 (December 2005): 157–76. https://doi.org/10.1016/j.cognition.2004.11.007. [↩]
- R. VanRullen, and R. Kanai. “Deep Learning and the Global Workspace Theory.” Trends in Neurosciences 44, no. 9 (September 2021): 692–704. https://doi.org/10.1016/j.tins.2021.04.005. [↩]
- M. Shanahan and B. Baars. “Applying Global Workspace Theory to the Frame Problem.” Cognition 98, no. 2 (December 2005): 157–76. https://doi.org/10.1016/j.cognition.2004.11.007. [↩]
- G. Tononi, M. Boly, M. Massimini, and C. Koch. “Integrated Information Theory: From Consciousness to Its Physical Substrate.” Nature Reviews Neuroscience 17, no. 7 (July 2016): 450–61. https://doi.org/10.1038/nrn.2016.44. [↩]
- R. Kanai, and N. Tsuchiya. “Qualia.” Current Biology 22, no. 10 (May 2012): R392–96. https://doi.org/10.1016/j.cub.2012.03.033. [↩]
- M. Oizumi, L. Albantakis, and G. Tononi. “From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0.” Edited by Olaf Sporns. PLoS Computational Biology 10, no. 5 (May 8, 2014): e1003588. https://doi.org/10.1371/journal.pcbi.1003588. [↩] [↩] [↩]
- H. Kim, A. G. Hudetz, J. Lee, G. A. Mashour, U. Lee, and the ReCCognition Study Group. “Estimating the Integrated Information Measure Phi from High-Density Electroencephalography during States of Consciousness in Humans.” Frontiers in Human Neuroscience 12 (February 16, 2018): 42. [↩]
- M. Shanahan and B. Baars. “Applying Global Workspace Theory to the Frame Problem.” Cognition 98, no. 2 (December 2005): 157–76. https://doi.org/10.1016/j.cognition.2004.11.007. [↩]
- T. Nagel. “What Is It Like to Be a Bat?” The Philosophical Review 83, no. 4 (October 1974): 435. https://doi.org/10.2307/2183914. [↩]
- P. Mediano, F. E. Rosas, D. Bor, A. K. Seth, and A. B. Barrett. “The Strength of Weak Integrated Information Theory.” Trends in Cognitive Sciences 26, no. 8 (August 2022): 646–55. https://doi.org/10.1016/j.tics.2022.04.008. [↩]
- K. Frankish. “Integrated Information Theory: Pseudoscience or Appropriately Anomalous Science?” (2023). [↩]
- IIT-Concerned, S. M Fleming, C. D. Frith, M. Goodale, H. Lau, J. E LeDoux, A. L. F. Lee, et al. “The Integrated Information Theory of Consciousness as Pseudoscience,” September 16, 2023. https://doi.org/10.31234/osf.io/zsr78. [↩]
- A.K. Seth, K. Suzuki, and H. D. Critchley. “An Interoceptive Predictive Coding Model of Conscious Presence.” Frontiers in Psychology 2 (2012). https://doi.org/10.3389/fpsyg.2011.00395. [↩]
- R. Reichardt, B. Polner, and P. Simor. “Novelty Manipulations, Memory Performance, and Predictive Coding: The Role of Unexpectedness.” Frontiers in Human Neuroscience 14 (April 29, 2020): 152. https://doi.org/10.3389/fnhum.2020.00152. [↩]
- R. Kanai and I. Fujisawa. “Toward a Universal Theory of Consciousness.” Neuroscience of Consciousness 2024, no. 1 (May 31, 2024): niae022. https://doi.org/10.1093/nc/niae022. [↩]