Wednesday 12 August 2009

The Small Network Argument

Not long ago, the problem of consciousness was thought to be scientifically intractable. But with psychophysical and brain-mapping data pouring in from studies of both normal subjects and people with brain disorders, the field has opened up. Over the last decade a variety of brain processes have been proposed to account for consciousness, and more keep coming, but as of today no concrete model exists. Typical examples include recurrent computation (Grossberg (1999), Lamme (2006)), synchronized neural activity (Bachmann (1994), Engel et al. (1999)), winner-takes-all computation (Grossberg (1999)) and closed-loop action-perception processing (O'Regan and Noe (2001)). In the midst of the madness comes a paper by Herzog et al. titled Consciousness and the Small Network Argument (Herzog, Esfeld and Gerstner (2007)). They propose that none of the current models can fully account for consciousness because of what they call the small network argument:

For each of the above models, there exists a very small neural network that fulfills the respective characteristics of the model but does not exhibit consciousness.

1) Recurrent Computation (Lamme (2006))

Two mutually connected neurons make up a recurrent system. Thus, if recurrence is sufficient for consciousness, a network of just two neurons must create consciousness (to which not many of us would accede).
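
To see how little "recurrence" formally demands, here is a minimal sketch of such a two-neuron system in Python. This is my own toy illustration, not Lamme's model; the weight and the tanh nonlinearity are arbitrary choices:

```python
import math

# A two-neuron recurrent system: each neuron's input is the other's output.
w = 1.5                 # mutual connection strength (arbitrary choice)
r1, r2 = 1.0, 0.5       # initial firing rates

for _ in range(50):
    # activity re-enters the circuit on every step; this feedback loop
    # is all that "recurrence" formally requires
    r1, r2 = math.tanh(w * r2), math.tanh(w * r1)

print(round(r1, 3), round(r2, 3))   # settles into sustained activity
```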

2) Winner Takes All Computation (Grossberg (1999))

The basic operation of a winner-takes-all (WTA) circuit is to suppress the activity of all neurons except the one having the largest input. Minimal models require only three neurons fully connected to three presynaptic neurons, plus a neuron for vigilance, taking the tally to seven. Such a network can develop states allowing learning and memory. And so are we willing to grant consciousness to a network of seven neurons?
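
A toy version of that seven-unit circuit is easy to write down. The sketch below is my own illustration, not Grossberg's model; the inhibition strength, the identity input weights and the vigilance threshold are made-up numbers:

```python
inputs = [0.3, 0.9, 0.5]       # the three presynaptic neurons

# three competing neurons, each fully connected to all three inputs
# (an identity weight matrix is assumed here for simplicity)
acts = list(inputs)

# iterated mutual inhibition: every neuron is suppressed by its rivals
for _ in range(10):
    total = sum(acts)
    acts = [max(0.0, a - 0.5 * (total - a)) for a in acts]

# the seventh unit, vigilance: flags whether any winner cleared threshold
vigilance = 1 if max(acts) > 0.1 else 0

print(acts, vigilance)         # only the largest input survives
```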

3) Synchronized Neural Activity (Bachmann (1994), Engel et al. (1999))

If synchronized firing in neurons is the main characteristic of consciousness, then a group of three interconnected neurons firing in synchrony is conscious.
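
Again, this is trivially easy to instantiate. A sketch under my own simplifying assumptions (three identical phase oscillators with all-to-all sine coupling, arbitrary parameters), which captures the bare property the synchronization accounts rely on:

```python
import math

phases = [0.0, 1.0, 2.5]    # three neurons firing out of phase (radians)
K, dt = 1.0, 0.1            # coupling strength and time step

# each oscillator is pulled toward the others until all fire in lockstep
for _ in range(500):
    phases = [p + dt * (K / 3) * sum(math.sin(q - p) for q in phases)
              for p in phases]

print([round(p % (2 * math.pi), 3) for p in phases])  # all (near) equal
```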

4) Closed-Loop Action-Perception Processing (O'Regan and Noe (2001))

Herzog et al. give the example of a thermostat with a temperature sensor ('perception') controlling a heater ('action'). This could be formulated as a two-neuron feed-forward network, with a sensory neuron connected to an output neuron controlling the switch.
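
The thermostat maps onto code just as directly. A minimal sketch, with a made-up set point and heating/cooling rates; note that the loop is closed through the simulated room, not inside the network:

```python
temperature, target = 15.0, 20.0

for _ in range(40):
    sensor = 1 if temperature < target else 0   # sensory neuron: 'perception'
    heater = sensor                             # output neuron drives the switch: 'action'
    temperature += 0.8 if heater else -0.4      # the room closes the loop

print(round(temperature, 1))   # hovers around the 20-degree set point
```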

Thus, they argue, if one does not want to attribute consciousness to such small networks, other components are needed, and additional characteristics are often imposed. Typical examples are attention, the number of neurons and the complexity of the network. Yet none of these additional components necessarily solves the small network argument.

1) O'Regan and Noe (2001) combined their sensorimotor contingency approach with attention-based processing. They state that perception occurs in a closed loop of action and information processing, but consciousness emerges only when attention comes into play. The most common example is a person driving a car in an unconscious, automatic mode (unconsciously perceiving other cars on the road and responding correctly), while conscious perception, e.g. of a traffic signal, arises only when attention is engaged. However, in this model attention can be incorporated with just one extra input arising from a second group containing a very small number of cells, so the augmented network remains small.

2) It is often proposed that consciousness may emerge once the brain exceeds a certain number of neurons. But consider a model with a simple linear arrangement of neurons, where each neuron is connected only to its left and right neighbors: there is no reason why a chain of 10^10 such neurons should be any more capable of generating consciousness than a chain of three. Hence size alone cannot account for consciousness.

3) Other approaches state that a certain level of complexity must be met before a network can yield consciousness. Herzog et al. give the example of Tononi and Edelman (1998), who proposed to measure complexity by defining a functional cluster that is only loosely connected to the rest of the network yet has a rich repertoire of internal states. But even this complexity criterion can be met by a small network. Herzog et al. propose a network of nine neurons, organized into clusters of three, with the following model of interaction. The state of each neuron is described by p bits, so each neuron can take one of 2^p possible states. In the first 100 msec, the three neurons of a cluster agree on the first bit by a majority vote; in the next 50 msec, on the second bit; and so on, until the last 200/(2^p) msec, when they decide on the last bit. Thus, for arbitrary p, three neurons agree on p bits within 200 msec, exhibiting a huge repertoire of states, and hence a high level of complexity, in a small cluster.
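
A quick check of that schedule (my own arithmetic, not code from the paper): bit k is settled in 200/2^k msec, and the geometric series always fits inside the 200-msec window no matter how large p gets:

```python
# sum the per-bit durations 200/2**k for k = 1..p
for p in (1, 4, 16, 64):
    total = sum(200 / 2**k for k in range(1, p + 1))
    print(p, round(total, 6))   # 100.0, 187.5, ... -> approaches 200 msec
```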

Hence, Herzog et al. argue that all the components mentioned above are important aspects of understanding consciousness, but they are only trivially necessary rather than sufficient. Finally, they note that one way out of the small network argument is to assume that each network, however small, has a vanishingly small consciousness, but this form of "panpsychism" has its own problems. What happens if two networks, each with its own vanishingly small consciousness, are connected? Do the two consciousnesses merge, do they stay separate (as proposed in split-brain patients), or do new coalitions of neurons make up a new consciousness?

Overall, the paper seems to have created a bit of a ripple in the community of neuroscientists working on consciousness. Taylor (2007), in "Commentary on 'the small network' argument", states:

...paper of this issue (Herzog, Esfeld, & Gerstner, 2007) has derived an interesting criterion for any neural model of consciousness, with the disturbing result that a whole raft of neural models of consciousness are deficient when looked at by this criterion. The criterion itself is that a neural model of consciousness is suspect if it can be shown to work using only a relatively small network of neurons (of the order of 10–100 or so)


Reference

Herzog, M., Esfeld, M., & Gerstner, W. (2007). Consciousness & the small network argument. Neural Networks, 20(9), 1054-1056. doi:10.1016/j.neunet.2007.09.001

2 comments:

JNI said...

Thanks for this post, it's really interesting! I've been interested in the problem of consciousness for a long time and have personally favoured some combination of the recurrent computation and closed-loop action perception processing (without knowing these formal names), but this argument indeed makes me rethink my position!

Varun said...

Thanks. Yes, the article does make me rethink my position too, especially with respect to the panpsychism arguments made towards the end.