Radical Emergence in Brain and Behavior

14 April 2023

In my book, The Entangled Brain, I proposed that it is productive to view the brain as a complex, entangled system, one in which the parts dynamically assemble into coalitions that support complex cognitive–emotional behaviors (Pessoa, 2022a). An entangled system is a deeply context-dependent one in which the function of the parts (such as a brain region, or a population of cells within a region) must be understood in terms of other parts: an interactionally complex system. These notions are of course closely related to the concept of emergence, which has been the topic of much work and debate for over a century.

In a recent article, I proposed two types of emergent functional networks/circuits in the brain (Pessoa, 2022b). In Type I emergence, brain regions carry out (compute) fairly specific functions. Emergence, here, means that it is necessary to investigate the orchestration of multiple regions to understand how the regions, collectively, carry out the processes of interest. Importantly, however, the collective properties of the system are not accessible, or predictable, from the behavior of the individual regions alone: the multiregion function, F(R1, R2, …, Rn), is poorly characterized by considering f(R1), f(R2), and so on.

Now let us turn to Type II emergence, where areas do not instantiate specific functions. Instead, two or more regions working together instantiate the basic function of interest, such that its implementation is distributed across regions. It is easy to provide an example of Type II networks if we consider computational models where undifferentiated units are trained together to perform a function of interest. Pessoa (2022b) discusses a few examples that are closer to Type II emergence.

Here, I will discuss a more radical version of emergence, one that describes how functional circuits temporally assemble in complex and unpredictable ways in the brain to meet behavioral demands.

*****

Systems are frequently described in terms of states. Given a set of variables of interest x1, x2, …, xn, the state of the system is simply defined as the value of these variables at a given time t: S(t) = (x1(t), x2(t), …, xn(t)). This is useful because a system can be understood in terms of the set of admissible states.

These system variables, x1, x2, …, xn, are typically chosen because they are considered to be the most relevant or interesting to the particular system being studied. For example, if we were studying a car, we might choose variables such as speed, acceleration, fuel level, and engine temperature. In contrast, a description of the car in terms of its constituent atoms will typically not be very valuable.

The state of a system can be represented as a vector, S(t), consisting of the values of these variables at time t. This vector can be viewed as a point in state space (a Cartesian coordinate system). In some cases, certain regions of state space may not be relevant or accessible; if a variable cannot have a negative value, then the system will only occupy a subset of state space that does not include negative values for that variable.
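To make this concrete, here is a minimal sketch in Python of a state vector and an admissibility check; the car variables (speed, fuel, and so on) are purely illustrative:

```python
# Minimal sketch: a system state as a point in state space, plus an
# admissibility check. The car variables are purely illustrative.

def make_state(speed, acceleration, fuel, engine_temp):
    """Bundle the chosen variables into a state vector S(t)."""
    return (speed, acceleration, fuel, engine_temp)

def is_admissible(state):
    """Fuel level cannot be negative, so states with fuel < 0 lie
    outside the admissible region of state space."""
    _, _, fuel, _ = state
    return fuel >= 0.0

print(is_admissible(make_state(30.0, 1.2, 12.5, 85.0)))   # True
print(is_admissible(make_state(30.0, 1.2, -1.0, 85.0)))   # False
```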

Interestingly, the set of admissible states can also be represented by more exotic mathematical structures, such as manifolds: spaces that locally resemble Euclidean space but may be curved overall, and that can be embedded in higher-dimensional spaces. For example, the set of admissible states for a system with periodic boundary conditions may be represented by a torus.

As the system evolves over time, its state changes, and its trajectory can be thought of as a path through state space. This trajectory is determined by the system’s behavior and the underlying physical laws governing its motion. The types of trajectories that a system can undergo depend on the characteristics of the system itself. For example, a system that exhibits oscillatory behavior may trace out a closed loop in state space, while a system that converges towards a stable equilibrium will approach a fixed point.
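These two kinds of trajectories can be illustrated with a toy simulation: a harmonic oscillator in a two-dimensional state space of position and velocity (all parameters here are illustrative). Without damping, the trajectory traces a closed loop; with damping, it spirals toward the fixed point at the origin.

```python
import math

# Toy trajectories in a 2-D state space (position, velocity).
# Without damping, the oscillator traces a closed loop; with damping,
# it spirals toward the fixed point at the origin.

def trajectory(damping, steps=2000, dt=0.01):
    x, v = 1.0, 0.0                      # initial state S(0)
    path = [(x, v)]
    for _ in range(steps):
        # semi-implicit Euler keeps the undamped loop (nearly) closed
        v += (-x - damping * v) * dt
        x += v * dt
        path.append((x, v))
    return path

def radius(state):
    """Distance from the fixed point at the origin of state space."""
    x, v = state
    return math.hypot(x, v)

closed_loop = trajectory(damping=0.0)    # stays near radius 1: a closed loop
spiral = trajectory(damping=0.5)         # decays toward the fixed point (0, 0)
print(radius(closed_loop[-1]), radius(spiral[-1]))
```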

*****

Thinking in terms of trajectories helps conceptualize systems as inherently processual. Whereas it is valuable to characterize states, these individual snapshots are part of temporally extended “objects”—a process.

Neuroscientists often think in terms of neuronal spikes as a key determinant of a brain’s state, so brain states can be defined in terms of the average spiking activity across neurons in a given area, for multiple areas of interest, say areas A1, A2, and A3. But a description of a brain state can encompass many other variables thought to be important, for example, the concentration of one or multiple neurotransmitters (e.g., dopamine, serotonin, etc.). In this manner, the state description can be enriched with as many variables as deemed important.
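As a toy illustration (all region names and values here are hypothetical), such an enriched state description might look like:

```python
# Hypothetical brain-state description: average spiking rates for areas of
# interest plus neurotransmitter concentrations. All values are illustrative.

brain_state = {
    "mean_rate_A1": 12.4,   # average spiking rate in area A1 (Hz)
    "mean_rate_A2": 3.1,
    "mean_rate_A3": 27.8,
    "dopamine": 0.8,        # concentration (arbitrary units)
    "serotonin": 1.1,
}

# The state vector S(t) is just the values in a fixed variable order;
# adding a variable simply extends the vector.
variables = sorted(brain_state)
S_t = tuple(brain_state[v] for v in variables)
print(S_t)
```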

*****

Let’s now return to emergence. Type I emergence in the brain is relatively uncontroversial; neuroscientists believe that even relatively simple behaviors will engage multiple brain areas that jointly contribute to the behavior in question. (Some neuroscientists might prefer versions of “emergence” where network functions follow fairly clearly from the functions of the individual areas, which would constitute a rather weak notion of “emergence”.)

Type II emergence in the brain means that functional units should be understood in terms of distributed circuits whose functions are not attributable to the parts, and the parts do not compute well-defined functions. Thus, circuit functions are more strongly emergent properties. In addition, the circuits “assemble” and/or evolve temporally. In other words, we should think of the functional circuit as traversing a certain trajectory in a suitable state space.

I would like to propose that we need to consider a stronger version of emergence: radical emergence. The notion builds upon the concept of the adjacent possible developed by Stuart Kauffman (2000). To set the stage, consider a system at time t. What determines its future state, S(t+Δ)? Some systems have the benign property of being memoryless, such that a future state is determined by the current state alone. An example of such a so-called Markovian system is a simple random walk, where a particle moves randomly in discrete steps on a lattice. The probability of the particle being in a particular location at a future time depends only on its current location, and not on any previous locations.
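A minimal sketch of such a memoryless system, using the standard symmetric random walk:

```python
import random

# Simple symmetric random walk on a 1-D lattice: a Markovian system.
# The distribution over next positions depends only on the current
# position, not on the path that led there.

def step(position, rng):
    return position + rng.choice([-1, +1])

rng = random.Random(0)       # seeded for reproducibility
position = 0
history = [position]
for _ in range(10):
    position = step(position, rng)
    history.append(position)
print(history)
```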

I propose that many brain systems exhibit radical emergence in the sense that one cannot predict the future trajectory from the system’s current state. Importantly, this is not because the system contains some stochasticity; or rather, it is for reasons beyond those of probabilistic evolution. In a nutshell, it is due to (1) the combinatorial nature of the potential states (and hence trajectories); and (2) the inability to determine all the factors that would need to be considered to, in principle, determine the future trajectory. Another way to put it is that the system is radically context dependent.

*****

The notion of the adjacent possible helps describe the set of all the possible next steps or innovations that are available to a system at any given moment in time. A system can only explore and access the adjacent possible: the set of all possible next steps that are one step away from its current state. Kauffman argues that in some complex systems the adjacent possible is constantly expanding and evolving, as each innovation or step forward opens up new possibilities for the system to explore. This means that the system is constantly discovering new opportunities and adapting to its changing environment, even as it remains constrained by the limitations of its current state.
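As a toy illustration (states here are lattice positions, chosen only for concreteness), the adjacent possible and its expansion can be sketched as:

```python
# Toy sketch of the adjacent possible: the set of states one step away
# from the current state. States are positions on a 2-D lattice, purely
# for concreteness.

def adjacent_possible(state):
    x, y = state
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

# Each step taken opens up new possibilities: as the explored set grows,
# so does the frontier of states that are now one step away.
explored = {(0, 0)}
frontier = adjacent_possible((0, 0))
for _ in range(3):
    explored |= frontier
    frontier = set.union(*(adjacent_possible(s) for s in explored)) - explored
    print(len(explored), len(frontier))
```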

Overall, the concept of the adjacent possible suggests that the evolution and innovation of complex systems is not a linear process, but rather a complex and unpredictable exploration of the possibilities that are available within each system’s unique set of constraints and capabilities.

In complex systems that follow radical emergence, one cannot pre-state all of the conditions that will determine their future. In other words, from Actual (present) states, it is not possible to determine Future states without stepping into Adjacent Possible states, which open up possibilities for further system evolution.

*****

Let’s consider these ideas in the context of brain and behavior. From an Actual state, the system can flow into one of multiple Adjacent Possible (AP) states. But which one? By definition, all Adjacent Possible states can be reached, but whether the system steps into AP1 or AP2 depends on a (very) large number of factors. I assume that among these are the animal’s history, the state of the world, and the internal state of the animal. Here, this statement will be left deliberately vague: How much of the animal’s history? How much of the state of the world? And so on.

One reason it is not so important to fully develop answers to the previous questions is that animal behavior is always dynamically coupled with an environment (i.e., everything external to the animal) that is itself dynamic. This leads to an effectively dynamic animal-plus-environment coupling. The animal impacts the environment, which impacts the animal, constantly.

The combination of this open-ended coupling with the (very) large state space required to accommodate the animal’s history and its internal state implies that determining how the Actual state evolves into an Adjacent Possible state becomes practically impossible. In related scenarios, it can be shown mathematically that the combinatorics effectively “blow up”: when solving for the AP states, the solutions grow so rapidly as to effectively diverge to infinity in finite time (Cortês et al., 2022).
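To give a flavor of this blow-up, here is a hedged sketch of a TAP-style recursion in the spirit of Cortês et al.: a simplified form with a single discount parameter alpha (the formulation in their paper differs in its details). Combinations of existing objects yield new objects, and within a handful of steps the count explodes.

```python
from math import comb

# Sketch of a TAP-style recursion: combinations of i existing objects
# (i >= 2) yield new objects, discounted by alpha**i for the difficulty
# of combining many parts. Simplified form; parameters are illustrative.

def tap_step(M, alpha=0.5):
    return M + sum(alpha**i * comb(M, i) for i in range(2, M + 1))

M = 4
growth = [M]
for _ in range(4):
    M = round(tap_step(M))
    growth.append(M)
print(growth)   # growth is modest at first, then explodes
```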

*****

Let’s turn to an example of fear extinction. In fear learning, an initially neutral stimulus (e.g., a light) repeatedly paired with an aversive stimulus (e.g., a shock) acquires affective significance. When the conditioned stimulus (the light) no longer predicts the unconditioned stimulus with which it was paired in the past (the light is no longer followed by shock), the conditioned stimulus gradually stops eliciting the conditioned response. This process is called “fear extinction.”

The circuit above shows some of the areas involved in fear extinction (ACCUMBENS is part of the ventral striatum; BLA: basolateral amygdala; HIPP: hippocampus; MPFC: medial prefrontal cortex; PAG: periaqueductal gray; REUNIENS is a nucleus in the thalamus; VTA: ventral tegmental area). During the process of extinction, we can imagine a trajectory through state space. Although a “typical” trajectory based on laboratory studies can be described, trajectories “in the wild” will be variable and unpredictable a priori. This is because the exact trajectory depends on the animal’s history, the state of the world, and the internal state of the animal. In the laboratory, upon detecting the CS, the animal freezes in place (the only behavior that is viable). But in the wild, multiple behaviors are possible, and they are selected based on a myriad of factors. In the wild, the animal also doesn’t experience an uninterrupted sequence of “fear extinction trials” (conditioned stimulus followed by the absence of an unconditioned stimulus). Each occurrence (“trial”) is at least somewhat different, triggering a different neuronal trajectory every time. (See the diagram with “perturbation” above.)

Although there is evidence for the non-repeatability of animal behaviors (Latash, 2012), it could be argued that they fall into some relatively well-defined classes. For example, in interactions between adult female and male lions, one observes a diverse but finite set of “typical” behavioral interactions (while this is mostly true, reports of previously unseen behaviors, although anecdotal, are fairly common). But even here, one can only predict one or a few of the most likely behavioral patterns, each of which will evolve according to a neural-behavioral trajectory. Even more so, the claim is that specific behavioral instances cannot be predicted (at what point does a defeated lion, its hindlegs paralyzed from previous attacks, stop fighting back?).

*****

In conclusion, the working hypothesis advanced here is the following: dynamically assembled functional circuits in the brain are emergent (in the Type II sense). Not only are they emergent, but they are radically emergent. The way functional circuits assemble is not repeated from one occasion to the next; their future evolution is matched to the specifics of the moment in question. They are unpredictable because of the radical context-dependence of how complex systems evolve. In all, neural-behavioral trajectories evolve in ways that are constantly opening into new possibilities.

Coda

There are a number of objections that can be raised to the ideas above, including lack of predictive power, ignoring constraints, and vagueness, among others. Hopefully, these will be addressed in the near future.

The framework has been developed mostly in the context of evolutionary innovations, innovations of the biosphere, and technological innovations. In such cases, the innovations are objects of a given kind. Here, the proposal is that similar ideas can be applied to neural-behavioral “state innovations”. These innovations are of a different kind compared to new phenotypes or technological products, for example. What is being argued here is that the trajectory is unpredictable because determining it would require carrying “too much information” (in some sense that needs to be formalized).

A relevant discussion is provided by Montévil (2019) concerning the space of all possible musical symphonies (where one can define the set of all possible musical scores as the set of combinations of musical symbols).

References

Cortês, M., Kauffman, S. A., Liddle, A. R., & Smolin, L. (2022). The TAP equation: evaluating combinatorial innovation. arXiv:2204.14115.

Kauffman, S. A. (2000). Investigations. Oxford University Press.

Latash, M. L. (2012). The bliss (not the problem) of motor abundance (not redundancy). Experimental Brain Research, 217, 1-5.

Montévil, M. (2019). Possibility spaces and the notion of novelty: from music to biology. Synthese, 196(11), 4555-4581.

Pessoa, L. (2022a). The Entangled Brain: How Perception, Cognition, and Emotion Are Woven Together. MIT Press.

Pessoa, L. (2022b). The entangled brain. Journal of Cognitive Neuroscience, 35(3), 349-360.
