Commentary on Frank van der Velde and Marc de Kamps

To appear 2005 in Behavioral and Brain Sciences (BBS). ©BBS, Cambridge University Press.

Word Counts:
Abstract: 60 words
Main Text: 911 words
References: 102 words
Total Text: 1200 words

Distributed neural blackboards could be more attractive

André Grüning
Cognitive Neuroscience Sector
Scuola Internazionale Superiore di Studi Avanzati (SISSA)
Via Beirut 2–4
34104 Trieste
Italy
gruening at sissa dot it
http://www.sissa.it/~gruening

Alessandro Treves
Cognitive Neuroscience Sector
Scuola Internazionale Superiore di Studi Avanzati (SISSA)
Via Beirut 2–4
34104 Trieste
Italy
ale at sissa dot it
http://www.sissa.it/~ale/limbo.html




Abstract: The target article demonstrates that neurocognitive modellers should not be intimidated by challenges such as Jackendoff’s, and should explore neurally plausible implementations of linguistic constructs. The next step is also to take seriously the insights offered by neuroscience, including the robustness afforded by analogue computation with distributed representations, and the power of attractor dynamics in turning analogue into nearly discrete operations.

Van der Velde and de Kamps offer a new perspective on the neural mechanisms underlying syntactic binding in human sentence processing (and, perhaps, in visual processing). The neural blackboard model implements binding as an active and dynamic process, and thus dispenses in a satisfying and elegant way with a homunculus observing synchronous network activity, as posited by some previous approaches. We regard this as an important first step, one that could lead to important breakthroughs if combined with an understanding and effective use of cortical attractor dynamics.

Van der Velde and de Kamps commendably aim at a non-symbolic representation of words by distributed neural assemblies (Pulvermüller 1999). In their implementation of the blackboard, however, they stop short of describing words, or items in semantic memory in general, in terms of local network attractors (O'Kane & Treves 1992). They therefore cannot exploit either the partially analogue nature of attractor states or the correlational structure of composite attractors, such as varying overlap with competing assemblies, context-dependence of assembly activity, or robustness to noise and partial disruption. Hence they use word assemblies in a monolithic, symbolic way, much like lexical entries in a purely symbolic approach. The activation of their “sub-assemblies” is all-or-nothing (although gradual activation is used to control binding in recursive structures), and it is hard to see how it could be modulated, e.g. by contextual information, even though the activation of different sub-assemblies could be interpreted, perhaps beyond the authors’ intent, as a coarse model of composite attractors. The symbolic nature of the proposed computations also emerges in the dichotomy between the blackboard, which temporarily stores a “parsing table”, and the central pattern generator (CPG) that serves as a finite control. The CPG hides much of the complexity required to operate the blackboard. What would a neurally plausible implementation of the CPG look like?
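What attractor dynamics buys, concretely, can be seen in a minimal autoassociative sketch of the Hopfield type. This is our own toy illustration, not the target article's model and not the composite attractors of O'Kane & Treves (1992): a distributed pattern corrupted by noise is pulled back to the stored, nearly discrete state, showing the robustness that purely symbolic, all-or-nothing assemblies forgo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store a few random binary patterns ("word assemblies") in an
# autoassociative network via the Hebbian outer-product rule.
N, P = 200, 5                      # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N    # Hebbian weight matrix
np.fill_diagonal(W, 0)             # no self-connections

# Corrupt one stored pattern: flip 25% of its units (noise / partial cue).
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 4, replace=False)
cue[flip] *= -1

# Attractor dynamics: iterated thresholding pulls the analogue state
# back towards the nearest stored, nearly discrete pattern.
state = cue.astype(float)
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1          # break ties deterministically

overlap = float(state @ patterns[0]) / N
print(f"overlap with stored pattern after retrieval: {overlap:.2f}")
```

At this low memory load the corrupted cue lies well within the basin of attraction, so the retrieved state overlaps almost perfectly with the stored pattern despite 25% of its units having been flipped.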

Van der Velde and de Kamps helpfully contrast combinatorial with recursive productivity, an important distinction that is not so transparent in the later Chomskyan view (Hauser et al. 2002). Their combinatorics, however, disappointingly appears to rely on an implausibly vast number of specialised processors, disguised behind the thin veneer of “word assemblies”. Assuming, for instance, that a third of all the words the cited English 17-year-old knows are nouns, and that there are 10 main noun assemblies, we are still left with 200,000 specialised gating assemblies (for nouns alone) that all do more or less the same job. While this number is certainly much smaller than the total number of neurons or synapses in cortex, it reflects an unimaginative view of assemblies as equivalent to local units (perhaps driven by an anxiety to respond to Jackendoff's challenges) that does not do justice to the idea of distributed processing in the brain. Composite cortical attractors, which allow for quantitative analyses of non-dynamic, long-term memory storage (Fulvi Mari & Treves 1998), also lead, with a simple adaptive dynamics, to combinatorial productivity without any recourse to specialised units, as in the latching model (Treves 2005).
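The arithmetic behind this estimate is elementary; the sketch below reproduces it, taking a vocabulary of 60,000 words for the cited 17-year-old as an assumed round figure (all numbers here are illustrative choices of ours, not figures from the target article).

```python
# Back-of-the-envelope count of the specialised gating assemblies implied
# by the blackboard architecture, under assumed round numbers.
vocabulary = 60_000        # words known by the cited 17-year-old (assumption)
noun_fraction = 1 / 3      # assume a third of the vocabulary are nouns
main_noun_assemblies = 10  # main structure assemblies reserved for nouns

nouns = int(vocabulary * noun_fraction)
gating_assemblies = nouns * main_noun_assemblies
print(gating_assemblies)   # specialised gating assemblies, for nouns alone
```

Each noun needs a gating assembly to every main noun structure assembly, hence the multiplicative blow-up; the count grows linearly in both vocabulary size and the number of structure assemblies.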

It is highly praiseworthy that van der Velde and de Kamps ground the blackboard operations in the concrete domain of neural network operations. A genuine challenge posed by neural mechanisms, however, is to explain how they could possibly be generated without recourse to innate knowledge or explicit programming. Associative synaptic plasticity is the dominant, although not the only, concept emerging from neuroscience to account for learning processes, and it suffices, for example, to generate combinatorial productivity in the frontal latching networks model (Treves 2005). It would be exciting, and perhaps not far-fetched, to see van der Velde and de Kamps’ blackboards self-organise through an associative learning paradigm.

Van der Velde and de Kamps also briefly address recursive productivity, making use of decaying activity in reverberating assemblies (Pulvermüller 1999). This fits in naturally with the blackboard architecture once there is a discrete set of structure assemblies. However, a more distributed mechanism of recursion, such as the one discussed by Grüning (2005) for trainable networks, would be less ad hoc. Perhaps aspects of distributed network dynamics can be integrated with composite attractor models so as to allow for distributed and compositional properties at the same time. A model that combines a distributed blackboard for combinatorial structure with a distributed push-down (or more powerful) storage for recursion would be most interesting.

Last, we would like to invite a dynamic interaction in a wider sense. Jackendoff (2002) calls for a conceptual innovation in cognitive neuroscience that would allow a more productive dialogue between neural network models and linguistic theory, and van der Velde and de Kamps appear to respond to the call. While meeting the challenge of the linguists, however, cognitive modellers should not neglect the conceptual advances in systems and theoretical neuroscience, especially those relating to assembly and attractor dynamics. Without them, it seems difficult to understand how linguistic and, in general, cognitive representations can emerge from the underlying neural dynamics, and what qualitative or quantitative changes in the properties of the cortical hardware, already shaped by 200 million years of mammalian evolution, led to the distinctiveness of human cognition.

In sum, we regard van der Velde and de Kamps’ article as important progress towards a fruitful interdisciplinary exchange, and we suggest following this programme through with the inclusion of distributed and dynamical attractor representations, thus avoiding some of the shortcomings and the neural implausibility of the current model.




References

Fulvi Mari, C. & Treves, A. (1998). Modeling neocortical areas with a modular neural network. Biosystems 48:47–55.

Grüning, A. (2005). Stack- and queue-like dynamics in recurrent neural networks. Connection Science 17. To appear.

Hauser, M. D., Chomsky, N., & Fitch, W. T. (2002). The faculty of language: What is it, who has it, and how did it evolve? Science 298:1569–1579.

Jackendoff, R. (2002). Foundations of Language: Brain, Meaning, Grammar, Evolution. Oxford University Press, Oxford.

O'Kane, D. & Treves, A. (1992). Why the simplest notion of neocortex as an autoassociative memory would not work. Network 3:379–384.

Pulvermüller, F. (1999). Words in the brain's language. Behavioral and Brain Sciences 22:253–336.

Treves, A. (2005). Frontal latching networks: a possible neural basis for infinite recursion. Cognitive Neuropsychology 21:276–291.




Acknowledgement

A.G. was supported by the Human Frontier Science Program grant RGP0047-2004/C during the write-up of this commentary.