The Observer Watching You From Within

A deep dive into metacognition and its implications for intelligent technology

Neurotech@Berkeley
22 min read · May 31, 2023

When you’re by yourself, content and at peace with whatever task you’re doing at the moment, are you ever interrupted by the intrusive feeling that someone is watching you? Or do you feel silently watched all the time? Perhaps you’re reading this and are actually very concerned by these questions — “What is this person talking about?” you think to yourself; “Are they experiencing the onset of schizophrenia?” If you truly do not understand what I mean, though, then you are simply ignorant of the reality occurring around you.

In fact, you are this mysterious “someone” I keep referring to: you are watching yourself. Maybe you are not watching yourself all the time, but rather in short moments littered throughout your day. To determine whether or not you were asking yourself the questions I proposed earlier, you had to enter your mind for a second and silently watch — waiting to see if these sorts of questions came up, or had come up before I suggested them. It is a fact that you are being watched often, although, comfortingly, you are likely only being watched by yourself.

This phenomenon of watching yourself, or more specifically, being aware of and thinking about your thinking, is called metacognition in the psychological literature. The concept of metacognition has been of interest to many in psychological and technological fields, as many believe it underpins mental growth and everyday functioning. If our models of metacognition are correct, it can serve as a very useful tool to uncover how we learn and what that implies about our neurocognitive function. This information can also be applied to computational modeling in self-learning systems (artificial intelligence).

Metacognition

Metacognition can be described as the higher-order thinking that you can use to investigate a first-order (lower-order) thought. To break this down, imagine you are walking around your neighborhood. While walking, you see an obnoxious black sports car speed right past you. This is a first-order thought — essentially an observation you make. In this first-order thought, you have identified whether or not a stimulus is present. If another person on the street were to ask whether an obnoxious black sports car had sped past you a few seconds ago, you could then make a claim about it depending on how confident you are — “Yes, an obnoxious black sports car just sped past me” or “I don’t know, I wasn’t paying attention.” This would be a second-order thought, as you are entering your mind and reaching back into your memory to see if this had happened, and you are also implicitly reporting how you feel about the information you retrieved (saying “yes” implies you are sure of yourself; saying “I don’t know” implies you are not). The metacognitive part of this story lies in how you are actively engaging with and thinking about another thought that you already had. You are climbing up the ladder of thoughts and, in your new thoughts, looking down at the preceding ones.

An interesting case of metacognitive impairment is illustrated by a condition dubbed “blindsight,” in which patients with lesions to the primary visual cortex are asked to report whether or not they see a stimulus, similar to whether or not you saw the obnoxious black sports car. Despite being able to respond to the stimulus (e.g., being startled by the presence of something only detectable by sight), therefore confirming that they are able to actually see it, the patients will report that they did not see anything. The patients do not lack the ability to see the object — they are able to have the first-order thought of observation — but rather, they lack the awareness of having seen it, demonstrating that their metacognition of sight is poor because they cannot process a higher-order thought about what they have seen.

The ability to understand and respond to one’s own thinking can occur in a multitude of ways — not just through visual observation and memory retrieval. Metacognition is broken up into three different “kinds” based on the subject to which the judgment of thought pertains: anoetic, noetic, and autonoetic. Anoetic metacognition deals with reflecting on objects in the world, like the car and blindsight examples from before. Noetic metacognition concerns the mental representations that a person holds, as metacognition can also be directed at the various schemas and models of things that we carry in our minds. For example, it would be involved in determining whether your personal idea of “obnoxious” is accurate when compared with other definitions of “obnoxious.” This is a very important process in learning: in order to learn, we must reflect on our current representation or understanding of a subject and see how it holds up to new information. Lastly, autonoetic metacognition describes judgments of oneself, similar to the concept of introspection. Unlike noetic and anoetic metacognition, where a fixed piece of information is processed in the mind, autonoetic metacognition focuses more on intuitively homing in on fluid features of the self and on our ability to act on those findings. Making a calculated decision about something troubling you would require the use of autonoetic metacognition.

Given these definitions, it is sometimes assumed that metacognition simply means introspection or consciousness, although this is not true. Consciousness and metacognition are fairly comparable, as consciousness describes the state of being aware of ourselves and metacognition describes us putting that awareness to use. One might think that metacognition is a manifestation of consciousness — and this would be true — but then we must remind ourselves that nearly all cognitive processes start with consciousness. To think of metacognition and consciousness as the same neglects the uniqueness of metacognition in being able to manipulate the mind. Consciousness is a state of awareness, whereas metacognition further uses this awareness to spur on future actions and thoughts. In a sense, consciousness allows for the possibility of metacognition, but not vice versa, so they do not describe the same phenomenon but are instead adjacent to each other. Similarly, introspection is also sometimes conflated with metacognition. While properties of introspection require the use of metacognition (in order to reflect on one’s actions, one must have mental representations of oneself to be analyzed by higher-order thought), metacognition is not just introspection. This can best be seen in the blindsight example from earlier, in which patients lacked the metacognition to identify that they were seeing something, but did not necessarily lack introspection in the subjective sense. They could still, theoretically, identify features of themselves like their personality — or they could even be subjectively aware of their cognitive deficit — but this does not change their inability to use metacognition with regard to visual perception. They can be self-aware, yet not in cognitive control of their minds.

Metacognition is intimately tied to the idea of cognitive control and monitoring. In order to use metacognition, one must be mentally monitoring, consciously or subconsciously. Further, through metacognitive monitoring, people gain more cognitive control and a greater capacity to deliberate over their thoughts, reasoning, and overarching mental skills. Though, as always, it is important not to conflate these ideas. In an experiment on the cognitive control and monitoring of typists, it was found that they could intuitively account for errors in moments of lapsed awareness — as in, they instinctively recognized that they had made a typing error even while not fully present, and corrected it — demonstrating their enhanced monitoring and control. However, when experimenters inserted random errors of their own, the typists noticed them and took responsibility for errors that they themselves did not make. Despite being aware, via monitoring, of when they actually did make mistakes, they lacked the metacognition to realize when they did not — suggesting that cognitive monitoring/control and metacognition are related but not inherently the same.

Digging deeper into the meaning of metacognition, it becomes apparent that its nature is rather subjective and difficult to quantify. Its definition overlaps with many other definitions and phenomena, and one cannot make a clear, distinct statement about what a metacognitive, higher-order thought must look like, as the thought can, quite frankly, be about anything. Further, higher-order thinking can go on almost continuously — building and building to look upon a previous thought indefinitely — so pinning it down in order to make clear statements about it would neglect the full nature of metacognition. Therefore, it is important to be delicate and nuanced when operationalizing metacognition for scientific studies. The definition must be strict and simple enough to be quantifiable, but abstract enough that the true nature of metacognition is captured within the study. Because of this, to measure metacognition we often restrict ourselves to measuring feelings that could only exist if metacognition did (or did not) take place.

For example, a key indicator of the presence of metacognition is confidence in one’s performance of a task. In this case, the task that a person performs is the object-level, first-order thought. The task can be anything that draws on individual knowledge and skills, such as answering a history or math question. After completing the task, the person’s metacognition can be measured by having them rate how confident they are in the accuracy of their answer. Judging how confident one is in one’s performance requires looking within oneself at one’s skills and knowledge — thereby thinking about past thinking, comparing that with the object-level thought, and “activating” metacognition. The person’s actual accuracy can then be compared with how well they thought they did when asked to report their confidence. If they perform correctly and are confident in their performance, they would be considered to have good metacognition, as their reflection on their skills/cognition aligned with their actual skills/cognition as measured by the task. However, if they perform incorrectly but were confident that they had performed correctly, then they would have poor metacognition, as their interpreted skills and actual skills did not align.

Confidence ratings and question-answering like this are easily self-reported and measured, so this is a popular way of operationalizing metacognition in research. This way of measuring does not come without flaws, though, because a mismatch between confidence and performance does not always mean the metacognition is poor. For example, accurately having low confidence on a question because the knowledge is not strong in one’s mind, and then getting the answer right anyway, does not mean that one’s metacognition is bad, but rather that the answer was right by chance or through great effort. An adjacent method of measurement is to have subjects make a decision, and then reflect on why they made that decision or how difficult it was to make. This method is better suited to looking closely at how metacognition works case by case between people, as opposed to just measuring the accuracy or strength of metacognition. These kinds of operationalization are referred to as retrospective metacognition, as they involve the present use of metacognition regarding something in the past.
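
To make this concrete, here is a minimal sketch of how confidence-accuracy alignment might be computed from trial data. The toy trials and the simple confidence-gap measure are my own illustrative assumptions; real studies use more sophisticated metrics, but the underlying comparison is the same.

```python
# A minimal sketch (not from any cited study) of how retrospective
# metacognition is often operationalized: compare confidence with accuracy.
# The trial data below are made up for illustration.

trials = [
    {"correct": True,  "confidence": 0.9},
    {"correct": True,  "confidence": 0.8},
    {"correct": False, "confidence": 0.2},
    {"correct": False, "confidence": 0.7},  # confidently wrong lowers the score
    {"correct": True,  "confidence": 0.4},  # unconfidently right also lowers it
]

def metacognitive_sensitivity(trials):
    """Mean confidence on correct trials minus mean confidence on incorrect ones.

    A large gap means confidence tracks performance well (good metacognition);
    a gap near zero means confidence says little about actual accuracy.
    """
    correct = [t["confidence"] for t in trials if t["correct"]]
    incorrect = [t["confidence"] for t in trials if not t["correct"]]
    if not correct or not incorrect:
        return None  # cannot compare without both kinds of trials
    return sum(correct) / len(correct) - sum(incorrect) / len(incorrect)

print(metacognitive_sensitivity(trials))  # roughly 0.25 for the toy data above
```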

Metacognition is also broken up into different, specific “acts,” including “judgments of learning” and “feelings of knowing.” Judgments of learning are a kind of metacognition because they require reflection on cognitive thought — recognizing how well, or whether, one has learned a subject adequately requires analyzing one’s own cognition of that subject. Feelings of knowing, on the other hand, are essentially “tip-of-the-tongue” states, in which one recalls that one knows something, like the name of an actor, but cannot remember it at the present moment. This, too, demonstrates metacognition, as it again requires awareness of one’s cognitive state (memory) and a report back about that state. It occurs to me here to wonder whether feelings of knowing imply the existence of unconscious metacognition since, technically, the piece of knowledge that one recalls knowing is not directly conscious to the person. Feelings of knowing confirm that information is constantly stored at the unconscious level, and that it can be deliberately accessed at any time (although with delays, and sometimes unsuccessfully). Perhaps, if metacognition is not unconscious in and of itself, it could be the gate into accessing unconscious mental processes. Not only does metacognition go up the metaphorical ladder of thoughts by pressing down on the thought directly beneath it — it can also climb down the ladder rapidly, go underground, and shoot back up while skipping steps along the way for efficiency. Feelings of knowing and judgments of learning are called prospective applications of metacognition.

Relating Metacognition to Neurobiology

Metacognition being as fluid and subjective as it is, it would be any neuroscientist’s fantasy to objectively pinpoint where and how it occurs in the brain. Unsurprisingly, metacognition is a multi-step process that involves many parts of the brain. Different “types” of metacognition are also processed in different regions, as shown by patients whose brain lesions impair only specific kinds or steps of metacognition. Despite being multifaceted, metacognition has predominantly been localized to the prefrontal cortex (PFC), the anterior cingulate cortex (ACC), and the hippocampus.

Activity in the ACC is mainly seen working in tandem with the lateral PFC and anterior insula when an individual experiences response conflict. Response conflict occurs when an individual receives contradictory or noisy information from a stimulus that they must sort through mentally before giving an answer, or “response.” The noise or contradiction causes a conflict in the response that the individual is inclined to give. For example, in the Stroop Test a person is shown a series of color words (e.g., “purple,” “green”) that are themselves printed in a color different from what the word says, and is told to report the color of the letters, not the word. This causes a response conflict, as the individual being tested is inclined to say the word rather than the color it is printed in. While experiencing response conflict, there is increased activity in the ACC, anterior insula, and lateral PFC. As we will discuss in a moment, the lateral PFC is correlated with behavior adjustment, increased caution, and cognitive control. Using this fact, it is theorized that the ACC detects when there is response conflict or a lack of confidence and then signals for the lateral PFC to be activated in order to heighten cognitive control and adjustment.

Figure 1. The Stroop Test: One must say aloud the color of the word, not the word itself. For example, the first “Orange” should be read “pink.”
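
To make the response-conflict setup concrete, here is a toy sketch of how Stroop-style trials could be generated in code. The color list and trial structure are assumptions for illustration, not the procedure of any particular study.

```python
# A toy generator for the kind of Stroop trials described above. The color
# list and structure are assumptions for illustration, not an actual protocol.
import random

COLORS = ["red", "green", "blue", "purple", "orange"]

def make_stroop_trial(incongruent=True):
    """Return a (word, ink_color) pair; incongruent trials create conflict."""
    word = random.choice(COLORS)
    if incongruent:
        ink = random.choice([c for c in COLORS if c != word])  # mismatched ink
    else:
        ink = word  # congruent trial: no response conflict
    return word, ink

for _ in range(3):
    word, ink = make_stroop_trial()
    # The correct response is the ink color; reading the word is the automatic
    # (first-order) response that the participant must override.
    print(f'Word "{word.upper()}" printed in {ink} -> correct answer: "{ink}"')
```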

Within the PFC, metacognitive processes are further split between the lateral PFC and the medial PFC. The lateral PFC has been seen to play a role in retrospective metacognitive acts. Dorsolateral PFC activation is associated with decreased confidence in a performance or a thought, suggesting that it is responsible for holding mental representations to be crunched by metacognitive processes — similar to working memory. In a study where subjects with rostrolateral PFC lesions were tested for their metacognitive ability, it was found that while their metacognition was intact, they required a greater change in confidence before being able to report the difficulty or ease of the metacognitive tasks they were doing. The rostrolateral PFC therefore likely plays a part in detecting and communicating confidence within metacognition. Supporting this further, individual differences in metacognitive ability are positively related to gray matter volume in the rostrolateral PFC. The rostrolateral PFC is also more noticeably correlated with increased confidence (greater activation with greater confidence, and vice versa), and stronger white matter integrity in this area is associated with stronger metacognitive accuracy. In other cases, the rostrolateral PFC is seen to integrate information from the ACC and anterior insula. Further, there appears to be a separation between the functions of the ACC and lateral PFC, as subjects with lesions to the lateral PFC could not report in what way the Stroop Test was difficult, yet were able to perform the test accurately.

With all of this information in mind, I think it is plausible that, if the ACC signals a lack of confidence to the whole lateral PFC (which includes the rostrolateral PFC), then the rostrolateral PFC (being associated with increased confidence) tries to make sense of the information it is given and find as much confidence as it can in the puddle of uncertainty and doubt gifted to it by the ACC. To picture this more intuitively, imagine you are trying to figure out whether a girl you’re interested in is also interested in you. This girl is sending you mixed signals — she texts you all the time but ignores you and is rather apathetic toward you in person. The conflict between these ideas is noticed and correctly sorted by your ACC: “Texting me all the time means she’s interested, but being apathetic means she’s disinterested, and she cannot be both interested and disinterested in me at the same time.” Then this information gets sent over to your rostrolateral PFC for confidence detection and communication: “Well, I know that she texts me all the time, so I am confident that she probably likes me at least a little, but maybe she’s on the fence about me. This is what I will tell my friend if they ask me about the situation with her.” At some point, as this information passes through the ACC, it is also sent to the dorsolateral PFC: “I am not confident about whether or not she likes me.” This relationship between the ACC, dorsolateral PFC, and rostrolateral PFC is only a theory of mine, but it is a fact that the ACC and dorsolateral PFC are associated with decreased confidence and the rostrolateral PFC with increased confidence.

On the other hand, the medial PFC has been shown to play a relevant role in prospective uses of metacognition. Damage to the ventromedial PFC leads to a decrease in the accuracy of feelings of knowing, and activity in the ventromedial PFC increases during accurate feelings of knowing. Activity in the ventromedial PFC also increases during accurate judgments of learning, implicating this region in the accuracy of prospective metacognition more broadly. Outside of prospective metacognition, the anterior medial PFC has been shown to have increased activity (and myelination) during moments of self-reflection/introspection. An interesting finding to note here is that, while anterior medial PFC activity increases in healthy individuals during self-reflection, the same is not true of individuals with mental health conditions such as schizophrenia when they try to introspect. While this could be because these individuals do not self-reflect, and thus there is nothing to be activated, it opens up another interpretation — those with schizophrenia lack the ability to engage their anterior medial PFC, and thus cannot produce accurate accounts of self-reflection. This new interpretation does not really tell us much that we did not already know, but it does expand the idea of schizophrenia being, in part, a cognitive issue, and it suggests that there is a process which must occur before the self-reflection process does (perhaps something needs to send a signal to the anterior medial PFC before introspection can start). This step-by-step thinking is supported by Paul Lysaker’s work in schizophrenia treatment, in which he breaks the self-reflection process down to its roots in object-level (first-order) cognitive awareness. This treatment has reportedly been successful when approached carefully as a step-by-step process of building cognitive awareness and subjective introspective ability!

Lastly, metacognitive ability has been linked to magnetization transfer and myelination in the retrosplenial cortex and hippocampus. In the retrosplenial cortex, stronger metacognition was correlated with less magnetization transfer; in the left hippocampus, stronger metacognition was correlated with weaker myelination. Both the retrosplenial cortex and the hippocampus are important components of the neural memory network, so their correlation with metacognition suggests that metamemory and memory work in conjunction with metacognition. This comes as no surprise, since in order to make retrospective metacognitive judgments one must remember the event being judged. What is striking about this finding is that the correlation is negative. A negative correlation suggests that sparser connectivity in the hippocampus leads to a strengthening of accurate encoding and recall of the decision-making “evidence,” or pieces of information, to be used in metacognition. I believe it is important to consider why this may be occurring; thinking back on the idea of response conflict and the ACC, it could be the result of noise. Perhaps having sparser connections ensures that redundant information is not encoded or recalled during metacognitive processes, which could otherwise cause conflict in cognition and lead to an unproductive, unnecessary lessening of confidence that would expend more mental energy than desired. For example, someone attempting to believe that they are a good person will not recall neutral things (neither bad nor good) that they have done, in order to avoid redundancy, and this is a natural process because the brain does not want to exert more energy than necessary on a given task. By not exerting energy on redundant ideas, more energy can be given to the few relevant mental connections made — therefore, stronger, more accurate metacognition occurs.

Metacognition in Artificial Intelligence

Considering the formulaic nature of metacognition, it is of interest whether metacognition can be described in computer algorithms. If we were, for example, to give an AI the capacity for introspection, it could take in information with more ease and efficiency, as it could recognize when new information contradicts the old information it is aware of having. At that point, we could give it a command to abandon all old information that contradicts the new information — in a sense, creating a self-updating program that requires less work from the programmer. Also, by giving an AI metacognition, we could theoretically give it a “goal” and the context of the “problem” that the goal is resolving, and it would be self-aware enough to know how to carry out that goal using knowledge of its own programming limitations. An example of this would be having an AI translate between programming languages. If we give the AI knowledge of a set of languages and what various commands in each language mean, then, with metacognition, the AI could identify the overall goals and limitations of a piece of code in one language, match that goal in another language, and translate the code while adhering to the new language’s own limitations.
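
As a rough illustration of that self-updating idea, here is a minimal sketch of a program that notices when new information contradicts a belief it already holds and discards the old one. The representation below (simple proposition-to-truth-value pairs) is an assumption chosen for clarity; real belief-revision systems are far richer.

```python
# A heavily simplified sketch of a "self-updating" knowledge base: it is
# aware of what it already believes and drops old beliefs that new
# information contradicts. The proposition/True-False representation is an
# illustrative assumption, not how real belief-revision systems work.

class SelfUpdatingKB:
    def __init__(self):
        self.beliefs = {}  # proposition -> bool

    def learn(self, proposition, value):
        """Store a belief; if it contradicts an existing one, replace the old one."""
        if proposition in self.beliefs and self.beliefs[proposition] != value:
            # The introspective step: the program recognizes the contradiction
            # between what it held and what it is now being told.
            print(f"Contradiction on '{proposition}': discarding the old belief.")
        self.beliefs[proposition] = value

kb = SelfUpdatingKB()
kb.learn("the black car is obnoxious", True)
kb.learn("the black car is obnoxious", False)  # triggers the self-update
print(kb.beliefs)  # {'the black car is obnoxious': False}
```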

To begin giving AI these abilities, one would need to deconstruct the various abstractions that define metacognition, such as “knowing of oneself” subjectively. One way of doing so is to give the AI a double-meta model of itself, in which the programmer gives the AI the knowledge/data of how it is programmed and of what it looks like for it to know how it is programmed. This may sound abstract, but it is a fairly intuitive concept. Take yourself, for example. You have an idea of yourself and of who you are, broadly speaking. Along with this, you also have an idea of what it looks like when you are analyzing — you are aware of, and can see, the reasoning that occurs when you think about yourself, someone else, or a concept. By giving an AI a model of itself, it would understand who it is and how it functions, but it would still lack the ability to make claims or reason about itself. Hence, if you give the AI a model of its model of itself, it can make claims about the way it thinks of itself. This double model accounts for most of the metacognitive ability an AI would need to complete the tasks we want it to do, but theoretically we could continue this process over and over again. Each new model of itself represents a higher order of metacognitive thinking. To describe this in a more computational way, we can also think of higher orders of thought as “dimensions” within the “space” of the AI, which makes our goals for AI metacognition more mathematically tangible. This way of creating metacognition for AI is called model-based reasoning.
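
To make the layering of models a bit more tangible, here is a hypothetical sketch of an agent that holds a model of itself and a second model for reasoning about that self-model. The class names, attributes, and tasks are invented for illustration; this is one possible way to express the idea, not a standard implementation.

```python
# A hypothetical sketch of "a model of a model of itself." The class names,
# attributes, and tasks are invented to make the layering concrete; they are
# not a standard API.

class SelfModel:
    """First layer: what the agent knows about its own makeup."""
    def __init__(self, skills, limitations):
        self.skills = skills            # e.g., languages it can handle
        self.limitations = limitations  # e.g., constructs it cannot handle

class MetaModel:
    """Second layer: the agent reasoning about its own self-model."""
    def __init__(self, self_model):
        self.self_model = self_model

    def can_attempt(self, task):
        # A higher-order judgment: compare the task against what the
        # self-model says the agent can and cannot do.
        if task in self.self_model.limitations:
            return False, "I know this is beyond my current programming."
        if task in self.self_model.skills:
            return True, "I know I can do this."
        return False, "I have no record of this skill, so I am not confident."

agent = MetaModel(SelfModel(skills={"python", "java"}, limitations={"cobol"}))
print(agent.can_attempt("java"))   # (True, 'I know I can do this.')
print(agent.can_attempt("cobol"))  # (False, 'I know this is beyond ...')
```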

Another important aspect of metacognition relevant to AI is decision-making and rationality. Metacognition is terribly important for decision-making, as one must engage with higher-order thinking to make more complex decisions. Generally, one wants to make a rational decision as opposed to an irrational one — which requires even more metacognition to decide what counts as irrational. Rationality is subjective, so rational decision-making is often operationalized to mean the decision with the least cost and the most benefit. This operationalization can then be computed with an anytime algorithm. An anytime algorithm is an algorithm that can be interrupted at any point — including by additional inputs arriving while it runs — and still produce a valid output, with the output improving the longer the algorithm is allowed to run. This way, new inputs are accounted for, which is imperative for rational decision-making, as what defines the most rational decision can change in an instant. Metacognitively, we are good at adapting to new information and accounting for it right away while making decisions, because we constantly have the information readily available to us by means of cognitive monitoring. For computers this is not the case, as commands are fixed and not adaptable — therefore requiring us to write an algorithm that forces adaptation when necessary.
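
Here is a minimal sketch of what such an anytime decision loop could look like, under the assumption that options (with made-up costs and benefits) can arrive while the algorithm is still running and that the current best choice must always be returnable.

```python
# A minimal anytime decision loop. The options, their made-up costs and
# benefits, and the "late arrival" queue are all illustrative assumptions.
import time

def anytime_decide(options, new_inputs, deadline_s=0.05):
    """Keep refining the best (benefit - cost) choice until the deadline."""
    best = None
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        if new_inputs:                          # fold in information that
            options.append(new_inputs.pop(0))   # arrived mid-computation
        for name, cost, benefit in options:
            value = benefit - cost              # operationalized "rationality"
            if best is None or value > best[1]:
                best = (name, value)
        # Interrupted here? `best` is still a valid (if rougher) answer.
    return best

options = [("walk", 1, 3), ("drive", 4, 6)]
late_arrivals = [("bike", 2, 7)]                # shows up while we deliberate
print(anytime_decide(options, late_arrivals))   # ('bike', 5)
```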

To do so, we first must collect our meta-data. Meta-data, in this case, essentially means the span of all possible decisions and decision-making outcomes. This span is fluid and changing, though, as more data gets added with each second of running, so we must make this an anytime algorithm from the very start of the process. Building on our meta-data, we must implement our decision-making process and the commands that define which option in our data has the least cost and most reward. This process could, theoretically, go on indefinitely as the system weighs the pros and cons of each decision over and over while the data continuously changes. Because of this, we must consider when enough “thinking” or running time has occurred before terminating the run of the algorithm (would it be beneficial, or a waste of time, to continue thinking about which choice is best?) — which would require another algorithm that monitors this decision-making process if we want it to function automatically in the AI. Such a decision-making monitor would also require another computation: determining when it is worth continuing to deliberate over a decision versus when it is better to abandon deliberation for the sake of efficiency. In humans, this process is natural and swift, as we intuitively know when we have “finished” the time required to make a decision — taking about 5 minutes to decide what to make for dinner versus 5 hours to decide what kind of car to buy.
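
A toy version of such a monitor might look like the sketch below: keep deliberating only while another round of thinking is expected to improve the decision by more than that round costs. The cost figure and the diminishing-returns evaluation function are my own stand-in assumptions for what a real meta-level controller would estimate.

```python
# A toy meta-level monitor: keep deliberating only while another round of
# thinking is expected to gain more than it costs. The cost figure and the
# diminishing-returns evaluation function are illustrative assumptions.

def deliberate(evaluate_round, cost_per_round=0.5, max_rounds=100):
    best_value = evaluate_round(1)               # always think at least once
    for round_number in range(2, max_rounds + 1):
        value = evaluate_round(round_number)     # one more "round of thinking"
        improvement = value - best_value
        if improvement < cost_per_round:         # meta-level decision:
            return best_value, round_number - 1  # further thought is not worth it
        best_value = value
    return best_value, max_rounds

# Each round of thinking helps less than the one before (10 - 10/n).
print(deliberate(lambda n: 10 - 10 / n))  # stops after 5 rounds, not 100
```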

Another way of implementing decision-making in AI would be to adopt something like Pavlovian learning strategies and condition the AI to make the kind of decision that has worked in the past. This way, one would only need to program the AI to detect similar features between new problems and problems it has solved before (using the problem-and-goal method described earlier), along with a means of applying past solutions to new situations. Humans do this constantly via experiential learning. This strategy is called case-based reasoning.
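
A bare-bones sketch of case-based reasoning might look like the following: store past problems with their solutions, retrieve the most similar stored case, and reuse its solution. The stored cases and the feature-overlap similarity measure are assumptions chosen for simplicity.

```python
# A bare-bones case-based reasoning sketch. The stored cases and the
# feature-overlap similarity measure are assumptions chosen for simplicity.

past_cases = [
    {"features": {"slow", "loops", "large_input"}, "solution": "vectorize the loop"},
    {"features": {"crash", "null", "missing_value"}, "solution": "add input validation"},
]

def solve_by_analogy(problem_features, cases):
    """Reuse the solution of the stored case sharing the most features."""
    def similarity(case):
        return len(case["features"] & problem_features)
    best = max(cases, key=similarity)
    return best["solution"] if similarity(best) > 0 else None

# Reuses what worked on the most similar past problem.
print(solve_by_analogy({"slow", "large_input"}, past_cases))  # 'vectorize the loop'
```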

Failure-driven learning is also being attempted in the programming of AI. For example, the Meta-AQUA project showcases an AI capable of processing a story, explaining what is going on in the story and why things occurred as they did, and finding faults in its own explanation. Meta-AQUA uses model-based reasoning and introspection to exhibit this metacognition: it first has a model of itself, then uses introspection to gather potential reasons why its explanation is incorrect or not yet up to par. The project is ongoing, and its rigorous methods of machine learning are likely to be fruitful in future discussions of technological metacognition.
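
The sketch below is not Meta-AQUA’s actual code; it is only a hypothetical illustration of the failure-driven loop just described: generate an explanation, check it against what actually happened, and record a learning goal when the explanation fails.

```python
# NOT Meta-AQUA's actual code: a hypothetical sketch of failure-driven
# learning. The canned explanations and event names are made up.

def explain(event):
    """Placeholder explainer: a lookup of canned explanations (assumption)."""
    canned = {"dog barked": "it saw a stranger"}
    return canned.get(event, "no explanation")

def failure_driven_learning(events, observed_causes):
    """Check each explanation against observation; failures become learning goals."""
    learning_goals = []
    for event in events:
        explanation = explain(event)
        # Introspective check: does my explanation match what was observed?
        if explanation != observed_causes.get(event):
            learning_goals.append(f"learn why '{event}' happened")
    return learning_goals

events = ["dog barked", "cat hid"]
observed = {"dog barked": "it saw a stranger", "cat hid": "vacuum was running"}
print(failure_driven_learning(events, observed))
# ["learn why 'cat hid' happened"]  (the failure becomes a learning goal)
```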

Conclusion

Metacognition is the process by which we use higher-order thinking to make a judgment about lower-order thoughts. This can happen in a number of ways, and about a number of things. Metacognition can take the form of making a statement about an outside object that one perceives. It can be seen in introspection that requires deep mental analysis, in thinking about choices in order to make a rational decision, or in determining whether one should be confident about one’s performance. Common examples of metacognition used in studies are feelings of knowing and judgments of learning. Cognitive control and monitoring are often mediated by metacognition. Neurologically, the ACC is thought to signal a lack of confidence to the lateral PFC, which can then communicate and “bring to life” the presence of that uncertainty via metacognition. The medial PFC is relevant for prospective metacognitive judgments and introspection, and the hippocampus for more accurate metacognitive recall. Researchers are attempting to replicate metacognition in AI for the sake of efficiency, because an AI that can use metacognition may become self-functioning and self-fixing. A number of metacognitive strategies are being worked on in AI, such as rational decision-making and failure-driven learning, performed on the basis of model-based and case-based reasoning.

Why is any of this important? I’m sure that much of this has felt like being pummeled with abstract information, and while I described many of these concepts through practical applications, I understand it can still be hard to internalize as something relevant. After all, this is just base-level information — how can we even move forward with applying metacognition? Metacognition seems like merely a fact of life. However, metacognition is sneakily very important for everyday health and functioning. It would be impossible to move through your day (safely, at least) without the ability to spontaneously “check” your mind and analyze the best next action. What’s more, metacognitive ability varies from person to person. Some have very good metacognition, while others do not — and those who struggle with, say, autonoetic metacognition are typically experiencing other problems that impact their quality of life (for example, people who have schizophrenia). This is why studying metacognition, how it works in the brain, and how to replicate it in technology is so important. By understanding how metacognition works, we can find who struggles with it and in what ways. With our neurological understanding, we can target these places in the brain and stimulate them for those who are struggling. To do so, we can use neurotechnology, or we could implant a “metacognitive device” that simulates the experience of metacognition if the brain is completely deficient in this ability. Or we can try a non-invasive approach, such as the introspective exercises done with schizophrenic patients described earlier. Metacognition holds the key to improving the lives of people with a range of cognitive health issues. So, if we were to ignore metacognition in further research for the sake of its nuance or abstractness, we would simultaneously be dismissing the needs of large populations of people who face the consequences of impaired metacognition and could desperately use it improved.

Bibliography

Fleming, S. M., Dolan, R. J., & Frith, C. D. (2012). Metacognition: computation, biology and function. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1594), 1280–1286.

Allen, M., Glen, J. C., Müllensiefen, D., Schwarzkopf, D. S., Fardo, F., Frank, D., … & Rees, G. (2017). Metacognitive ability correlates with hippocampal and prefrontal microstructure. NeuroImage, 149, 415–423.

Fleming, S. M., & Dolan, R. J. (2012). The neural basis of metacognitive ability. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1594), 1338–1349.

Lysaker, P. H., Buck, K. D., Carcione, A., Procacci, M., Salvatore, G., Nicolò, G., & Dimaggio, G. (2011). Addressing metacognitive capacity for self reflection in the psychotherapy for schizophrenia: a conceptual model of the key tasks and processes. Psychology and Psychotherapy: Theory, Research and Practice, 84(1), 58–69.

Cox, M. T. (2005). Metacognition in computation: A selected research review. Artificial intelligence, 169(2), 104–141.

This article was written by Jade Harrell, an undergraduate student at UC Berkeley studying Molecular and Cell Biology with a concentration in Neurobiology, as well as Psychology. This article was edited by Shobhin Logani and Jacob Marks.
