
Why Language and Brain Still Talk Past Each Other

(image source: NYU)

This post grows out of a public conversation hosted by the Oxford University Linguistics Society with Professor David Poeppel, a leading figure in the neurobiology of language. The occasion is unremarkable; the problem it addresses is not. After decades of parallel progress, linguistics and neuroscience remain curiously misaligned. Both fields study language, both claim cognitive relevance, and yet their deepest insights rarely meet.


On the linguistic side, more than two millennia of inquiry, accelerated dramatically in the past sixty years, have produced an extraordinarily rich understanding of language structure. Modern linguistics offers a detailed ontology: sounds decomposed into features, words into morphemes, sentences into hierarchies, and grammars into systems of constraints and operations. These are not casual descriptions; they are hard‑won empirical generalizations, tested across languages and acquisition stages.


On the neuroscientific side, progress has been equally impressive but differently organized. Researchers now distinguish dozens of cell types, map intricate cortical and subcortical circuits, and record neural activity at millisecond resolution. Neuroscience, too, has a parts list of areas, pathways, rhythms, and mechanisms. In principle, these two inventories should align. Language is a brain function. Yet the alignment remains elusive.


What exists instead are correlations: activation here when a sentence is heard, damage there when speech falters. These correlations are informative but shallow. They do not yet explain how linguistic structure is implemented, nor how neural computation gives rise to grammatical knowledge. This gap is at once conceptual, methodological, and terminological, and it is the central problem this post addresses.


II: The Granularity Mismatch


One of the most persistent obstacles to integration is a mismatch of grain. Linguistic theory often operates at a fine grain of description: distinctions between types of movement, feature checking, scope, or syllable structure. Neuroscience, by contrast, has traditionally examined language at a coarse scale, asking whether whole regions or networks are involved in comprehension or production.


The result is an uneven dialogue. Linguists see neuroscientific findings as vague and underspecified; neuroscientists see linguistic theories as too abstract to test. Both reactions are understandable. The challenge is not to force one field into the mold of the other, but to identify an intermediate level of description, a shared vocabulary at which hypotheses can be meaningfully linked.


This is not a philosophical impasse but a practical one. Progress will likely be incremental: clarifying which linguistic distinctions matter for computation, and which neural distinctions matter for representation. Step by step, the granularity mismatch can be reduced, though never entirely eliminated.


III: Competence, Performance, and a Historical Detour


The competence–performance distinction, introduced in mid‑twentieth‑century generative linguistics, was a productive idealization. By separating knowledge of language from its use, linguists could study grammatical structure without being overwhelmed by memory limits, errors, and distractions. Like frictionless planes in physics, the abstraction was never meant to deny reality.


Yet the distinction had unintended consequences. Psycholinguistics and theoretical linguistics drifted apart. Processing research proceeded with little reference to grammatical theory, while syntactic theory often ignored real‑time computation. The result was intellectual fragmentation, reinforced by disciplinary boundaries.


Reintegration does not require abandoning idealization. It requires remembering that grammars are hypotheses about cognitive systems. They describe not only abstract well‑formedness, but the operations and representations that a mind must deploy. Seen this way, experimental data are not optional add‑ons; they are potential arbiters between competing theories.


IV: Computation, Representation, and Linguistic Primitives


Underlying much contemporary work is the computational theory of mind: the idea that cognition involves computation over mental representations. In language, this raises immediate questions. What are the representations? What are the elementary operations?


Some answers are already well motivated. In phonology, distinctive features function as plausible primitives—units that are both linguistically explanatory and neurally grounded. The syllable, too, emerges as a central unit, coordinating perception and production and supported by converging experimental evidence.
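
To make the idea concrete, here is a minimal Python sketch of features as primitives: each segment is a bundle of binary feature values, and a "natural class" is simply the set of segments that share a given value. The particular segments, feature names, and values below are illustrative assumptions of mine, not an analysis proposed in the conversation.

```python
# Illustrative feature bundles for a handful of segments.
# The feature inventory and values are simplified assumptions.
FEATURES = {
    "p": {"voice": False, "labial": True,  "nasal": False, "continuant": False},
    "b": {"voice": True,  "labial": True,  "nasal": False, "continuant": False},
    "m": {"voice": True,  "labial": True,  "nasal": True,  "continuant": False},
    "t": {"voice": False, "labial": False, "nasal": False, "continuant": False},
    "s": {"voice": False, "labial": False, "nasal": False, "continuant": True},
}

def natural_class(**wanted):
    """Return the segments whose feature bundles match the requested values."""
    return sorted(
        segment for segment, features in FEATURES.items()
        if all(features.get(name) == value for name, value in wanted.items())
    )

print(natural_class(labial=True))                    # ['b', 'm', 'p']
print(natural_class(voice=False, continuant=False))  # ['p', 't']
```

The point of the toy example is not the code but the ontology: once segments are treated as feature bundles, questions about neural implementation can be posed at the level of features rather than whole sounds.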


Beyond phonology, syntax posits operations such as concatenation and hierarchical combination. Whether labeled “merge” or otherwise, such operations must ultimately be implemented by neural circuits. Neuroscience has begun to show that surprisingly simple biological mechanisms can perform nontrivial computations, including logical functions once thought abstract. This invites a disciplined research strategy: specify the elementary computations linguistic theory requires, then ask how neural tissue could realize them.
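
Two toy sketches in Python, again assumptions of mine rather than anything proposed in the conversation, show what this strategy looks like at its simplest: a bare-bones merge that builds hierarchical structure by combining two objects into a nested pair, and a threshold unit in the McCulloch-Pitts spirit that computes logical functions such as AND and OR with nothing more than weighted summation.

```python
def merge(alpha, beta):
    """Combine two syntactic objects into one binary-branching constituent."""
    return (alpha, beta)

# Building "the dog barked" bottom-up: merge is the only structure-building step.
dp = merge("the", "dog")      # ('the', 'dog')
clause = merge(dp, "barked")  # (('the', 'dog'), 'barked')

def threshold_unit(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of inputs reaches the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# AND and OR as special cases of the same mechanism: only the threshold differs.
assert threshold_unit([1, 1], [1, 1], threshold=2) == 1  # AND fires on (1, 1)
assert threshold_unit([1, 0], [1, 1], threshold=2) == 0
assert threshold_unit([1, 0], [1, 1], threshold=1) == 1  # OR fires on any 1
```

Neither sketch is a claim about how cortex actually implements structure building; the purpose is only to show how small the elementary computations can be once they are stated explicitly.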


V: Classical Models and Their Limits


Any discussion of language and the brain must acknowledge the classical model associated with Broca and Wernicke. Historically, it was revolutionary. It established that language is lateralized and that specific brain damage yields specific linguistic deficits. Clinically, it remains useful.


Scientifically, however, the model is too simple. Neither Broca’s area nor Wernicke’s area is a single functional unit. Each comprises many subregions with distinct connectivity and roles. Language processing relies on multiple pathways, not a single cable. Later refinements, such as dividing syntax and semantics across frontal and temporal regions, improved matters but remained underspecified.


The lesson is not that localization is misguided, but that it must be nuanced. Language cannot be mapped onto a cartoon brain.


VI: Modularity Without Localization


Critics sometimes argue that because no single brain area corresponds to “the language faculty,” the notion of a language module is suspect. This inference is mistaken. Modularity is a functional concept, not an anatomical one. A system can be specialized without being spatially isolated.

Language draws on auditory, visual, motor, and conceptual resources. Its processing is necessarily distributed. What matters is whether there are circuits dedicated to particular kinds of representations and computations. In this sense, modularity concerns informational specificity, not physical boundaries.

Some computations may be generic, shared with other cognitive domains. Others, especially those involving lexical storage, may be more specialized. Distinguishing these cases is an empirical challenge, not a conceptual flaw.


VII: Methods, Models, and Modesty


Recent methodological advances have transformed the landscape. Electrophysiological techniques offer the temporal precision required to study language as it unfolds in real time. Imaging methods provide spatial context. Machine‑learning tools allow researchers to analyze richer, more naturalistic data.


Yet tools alone do not solve theoretical problems. Models must remain modest. Animal studies cannot reveal grammatical principles, but they can illuminate generic computational motifs conserved across evolution. Engineering‑style decomposition, breaking complex problems into simpler subproblems, offers a productive path forward.


Throughout, restraint is essential. Brains are complex, and explanatory success will be partial. The goal is not a final theory of language in the brain, but a progressively better understanding of how structure, computation, and biology constrain one another.


VIII: Reflections


The conversation that inspired this post ends, fittingly, with thanks and mutual appreciation. That ending is instructive. Progress in understanding language and the brain will not come from disciplinary conquest or theoretical bravado. It will come from sustained dialogue, shared problems, and a willingness to work at the interfaces.


Linguistics brings clarity about structure. Neuroscience brings constraints from biology. Between them lies an open field, difficult, uneven, and intellectually alive. That field is where the next generation of insights will emerge.


David Poeppel on Language and the Brain - Oxford University Linguistics Society
