THEORETICAL ARCHITECTURES OF CONSTRUCTION GRAMMAR
From Ontology to Neural Realization
Riaz Laghari
THESIS
This post argues that Construction Grammar (CxG) is not merely a usage-based alternative to generative grammar, but a competing architecture of mind whose viability depends on resolving three foundational challenges:
- Ontological precision — What exactly is a construction?
- Computational adequacy — Can constructions generate language systematically?
- Neurocognitive reality — Do constructions correspond to mental and neural representations?
The post situates CxG within the broader debate:
Is grammar fundamentally derivational and computational, or network-based and probabilistic?
CENTRAL CLAIM
Construction Grammar is not anti-formal.
It is a different formalism.
If properly articulated, it offers:
- A unified theory of idioms and syntax
- A cognitively grounded model of argument structure
- A network-based architecture of grammar
- A bridge between symbolic and statistical models
Significance
It would:
- Position the analysis within the generative vs usage-based debate
- Integrate syntactic theory with cross-linguistic comparison
- Contribute to theoretical linguistics beyond descriptive exposition
It would not merely explain CxG.
It would test whether CxG can be a full theory of grammar.
STRUCTURE OF THE POST
The architecture mirrors the intellectual progression of the field.
PART I- ONTOLOGICAL FOUNDATIONS
What Kind of Thing Is Grammar?
1- The Collapse of the Lexicon–Syntax Divide
- Core–periphery distinction in generative grammar
- Fillmore’s continuum thesis
- Idioms as gradient constructions
- The anti-modular shift
The Question of Boundaries
Few distinctions have structured modern linguistic theory as powerfully as the division between the lexicon and syntax. In generative grammar, this division is not merely organizational; it is ontological. The lexicon stores idiosyncratic information about words. Syntax computes structure through general rules. The lexicon is memory; syntax is computation.
Construction Grammar (CxG) challenges this architecture at its root. It proposes that the fundamental units of grammar are not lexical items on the one hand and syntactic rules on the other, but constructions, conventionalized pairings of form and meaning at varying levels of schematicity. Under this view, idioms, argument structure patterns, and abstract phrase structure schemas differ only in degree, not in kind.
This shift appears modest. It is not. It collapses one of the central ontological distinctions of generative linguistics and replaces it with a continuum model of grammatical organization.
The question that motivates this section is therefore foundational:
What kind of entity is grammar if the lexicon–syntax divide dissolves?
The Core–Periphery Distinction in Generative Grammar
Generative grammar traditionally distinguishes between:
Core grammar: rule-governed, productive, universal principles (e.g., phrase structure rules, X-bar theory, Merge).
Periphery: idiosyncratic constructions, idioms, lexical exceptions.
In Government and Binding and later Minimalism, the core grammar is computationally generative. It produces infinite expressions from finite means. The periphery contains irregularities that must be listed but do not define the architecture.
This distinction preserves explanatory economy:
- General principles belong to syntax.
- Exceptions belong to the lexicon.
- Productivity emerges from derivation.
The architecture is modular. Syntax operates independently of lexical idiosyncrasy except where features trigger computation.
However, idioms present a structural challenge. Expressions like:
- kick the bucket
- take advantage of
- What’s X doing Y?
cannot be reduced to lexical irregularities without referencing syntactic structure. They are not mere words. They are structured expressions with partially predictable meaning.
The periphery begins to look structurally organized.
Fillmore’s Continuum Thesis
Charles Fillmore’s work in Construction Grammar reframed this tension. Rather than treating idioms as peripheral anomalies, Fillmore proposed that they reveal a deeper truth:
There is no sharp boundary between lexical items and syntactic constructions.
Instead, grammar consists of a continuum:
- Fully fixed expressions (e.g., by and large)
- Partially schematic idioms (e.g., What’s X doing Y?)
- Argument structure constructions
- Fully abstract syntactic schemas
Each of these is a form–meaning pairing.
The crucial insight is this:
If idioms require structured representation, then syntax already tolerates stored pairings beyond single words. Once this is admitted, the conceptual wall separating lexicon and syntax begins to erode.
Fillmore’s continuum thesis thus destabilizes the core–periphery distinction. What generative grammar treats as marginal may instead reveal the architecture of grammar itself.
Idioms as Gradient Constructions
Idioms are not binary entities. They exhibit gradience:
- Some are semantically opaque (kick the bucket).
- Others are partially compositional (spill the beans).
- Still others are structurally constrained but semantically transparent (the X-er, the Y-er).
This gradience challenges categorical classification. If some idioms allow internal variation while others do not, the boundary between lexical storage and syntactic rule becomes porous.
Construction Grammar interprets this gradience as evidence that:
- All grammatical knowledge consists of stored pairings.
- The difference between idioms and rules is one of schematicity.
Under this view:
- Words are low-level constructions.
- Argument structure patterns are mid-level constructions.
- Phrase structure templates are high-level constructions.
The lexicon–syntax distinction is replaced by a hierarchy of constructions linked in a network.
The Anti-Modular Shift
The collapse of the lexicon–syntax divide entails a broader theoretical transformation: the rejection of strong modularity.
In generative grammar:
- The lexicon provides features.
- Syntax computes structure.
- Semantics interprets output.
Each module has relative autonomy.
Construction Grammar, particularly in its usage-based variants, adopts a non-modular stance:
- Form and meaning are paired at every level.
- Syntax and semantics are inseparable.
- Pragmatic constraints may be encoded within constructions.
Grammar becomes an inventory of symbolic units organized in a network rather than a derivational engine operating over abstract primitives.
This anti-modular shift has profound implications:
- There is no purely syntactic computation independent of meaning.
- Productivity arises from abstraction across stored instances.
- The architecture of grammar resembles a structured memory system more than a computational procedure.
The Strength of the Collapse
The dissolution of the lexicon–syntax divide achieves several theoretical gains:
- It eliminates arbitrary boundaries.
- It integrates idiomatic and productive phenomena.
- It accounts for gradience and partial productivity.
- It aligns with psycholinguistic evidence for stored multi-word units.
- It integrates frequency effects naturally.
In doing so, Construction Grammar resolves long-standing descriptive tensions in generative theory.
However, theoretical gain comes with ontological cost.
Ontological Inflation
If every conventionalized form–meaning pairing qualifies as a construction, then grammar risks becoming:
- An unbounded inventory.
- A network without clear inclusion criteria.
- A theory that predicts everything and therefore explains little.
This is the problem of ontological inflation.
When does a pattern count as a construction?
- Is any frequently occurring phrase a construction?
- Is every collocation stored?
- Are highly productive patterns also stored as wholes?
Without principled constraints, the theory risks trivialization:
If everything is a construction, then “construction” ceases to discriminate.
The lexicon–syntax collapse thus creates a new demand:
CxG must articulate explicit criteria for constructional status.
Toward Ontological Precision
The remainder of this post takes this inflation seriously.
To avoid triviality, Construction Grammar must specify:
- Thresholds of conventionalization.
- Evidence from productivity and frequency.
- Psycholinguistic indicators of storage.
- Formal representational constraints.
- Mechanisms of abstraction and compression.
Only then can the collapse of the lexicon–syntax divide serve as an explanatory advance rather than a descriptive expansion.
From Divide to Continuum
The collapse of the lexicon–syntax distinction marks one of the most significant architectural shifts in contemporary linguistics.
Generative grammar builds a theory of computation.
Construction Grammar builds a theory of structured memory.
Whether grammar is best modeled as derivation or as networked symbolic pairing remains an open question.
What is clear, however, is this:
Once idioms are admitted into the structural core, the boundary between lexicon and syntax cannot remain intact.
The task now is not to restore the boundary, but to discipline its absence.
The next section turns to the central question this collapse raises:
What exactly is a construction?
2- The Ontology of the Construction
- Form–meaning pairings
- Schematicity gradients
- Entrenchment and abstraction
- The Boundary Problem
- Conventionalization
- Frequency threshold
- Semantic non-compositional contribution
- Psycholinguistic evidence
From Inventory to Entity
Once the lexicon–syntax divide dissolves, grammar becomes an inventory of constructions. But this shift immediately raises a deeper question:
What kind of entity is a construction?
Is it:
- A stored chunk?
- A generalized template?
- A symbolic pairing?
- A usage-based abstraction?
- A mental schema?
- A descriptive convenience?
Without ontological clarity, Construction Grammar risks becoming a taxonomy rather than a theory.
This section argues that constructions must be treated as psychologically real, gradient symbolic schemas whose status is determined by principled criteria. The central task is to define when a pattern qualifies as a construction and when it does not.
Form–Meaning Pairings: The Minimal Definition
Across its dialects, Construction Grammar converges on a minimal definition:
A construction is a conventionalized pairing of form and meaning.
“Form” includes:
- Phonological structure
- Morphosyntactic configuration
- Prosodic contour
“Meaning” includes:
- Lexical semantics
- Argument structure roles
- Information structure
- Pragmatic constraints
This definition is deceptively simple. Its strength lies in its inclusivity: it allows words, idioms, abstract schemas, and discourse patterns to fall under a single representational category.
But this generality generates a theoretical risk: the definition is so broad that it may overgenerate.
If any pairing of form and meaning counts, then every utterance is a construction.
The ontology must therefore be refined.
Schematicity Gradients
Constructions differ in their level of abstraction.
Consider a gradient:
Fully fixed:
by and large
Partially schematic:
What’s X doing Y?
Argument structure schematic:
[Subj V Obj Obj₂] (ditransitive)
Highly abstract phrase structure schema:
NP → Det N
These are not categorically distinct objects but points on a continuum of schematicity.
Schematicity refers to the degree to which a construction:
- Contains fixed lexical material
- Specifies syntactic slots
- Constrains semantic roles
The gradient nature of constructions allows CxG to model:
- Partial productivity
- Analogical extension
- Prototype effects
However, gradience complicates ontological commitment. If constructions vary continuously in abstraction, what anchors their identity?
Entrenchment and Abstraction
Usage-based models introduce two central mechanisms:
Entrenchment
Entrenchment refers to the strengthening of mental representation through repeated exposure. Frequently encountered patterns become cognitively stable and more readily accessible.
Entrenchment predicts:
- Faster processing
- Reduced susceptibility to change
- Increased resistance to innovation
Abstraction
Abstraction refers to the generalization across instances.
From repeated exposure to:
- “give him the book”
- “send her a letter”
- “hand me the keys”
a learner abstracts the ditransitive schema.
The ontology of constructions thus depends on two interacting forces:
- Storage of experienced tokens
- Abstraction over those tokens
This dynamic allows CxG to explain productivity without derivation.
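To see how abstraction over stored tokens might work mechanically, consider a minimal Python sketch. The alignment procedure and pre-tokenized exemplars are strong simplifying assumptions; real learners must also align variable-length strings and semantic roles.

```python
# Minimal sketch of abstraction over stored exemplars: positions shared
# by all instances stay fixed; divergent positions open into slots.
# The pre-aligned token lists are a simplifying assumption.

def abstract(instances):
    schema = []
    for position in zip(*instances):
        schema.append(position[0] if len(set(position)) == 1 else "SLOT")
    return schema

ditransitives = [
    ["give", "him", "the book"],
    ["send", "her", "a letter"],
    ["hand", "me", "the keys"],
]
print(abstract(ditransitives))
# ['SLOT', 'SLOT', 'SLOT']: no shared lexical material survives,
# yielding a fully schematic frame like [V Recipient Theme]

wxdy = [
    ["what's", "he", "doing", "here"],
    ["what's", "she", "doing", "in my chair"],
    ["what's", "that", "doing", "on the floor"],
]
print(abstract(wxdy))
# ["what's", 'SLOT', 'doing', 'SLOT']: the partially schematic
# "What's X doing Y?" construction emerges from alignment
```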
But it introduces a new problem.
The Boundary Problem
If constructions emerge gradually through entrenchment and abstraction, then:
- When does a pattern cross the threshold from usage event to stored construction?
- Is every frequent phrase a construction?
- Is low-frequency but semantically distinctive structure a construction?
- Does storage require fixed lexical material?
Without clear criteria, the theory risks becoming unfalsifiable.
We therefore require principled diagnostics.
Criteria for Constructional Status
This post proposes four interlocking criteria for determining whether a pattern qualifies as a construction.
None is individually sufficient. Together, they constrain ontological inflation.
Criterion 1: Conventionalization
A construction must be socially shared.
It must:
- Be recognized across speakers.
- Exhibit stability over time.
- Function as part of communal linguistic knowledge.
Conventionalization distinguishes ephemeral co-occurrences from established linguistic patterns.
A creative metaphor is not automatically a construction. Repeated adoption across speakers may transform it into one.
Conventionalization anchors constructional status in social cognition rather than private inference.
Criterion 2: Frequency Threshold
Frequency alone does not define a construction. However, frequency contributes to entrenchment and abstraction.
Two types of frequency matter:
- Token frequency: repetition of specific strings.
- Type frequency: diversity of lexical fillers within a schema.
High token frequency promotes storage of fixed expressions.
High type frequency promotes schematic abstraction.
A construction typically emerges when:
A pattern crosses a usage threshold sufficient to stabilize representation.
Frequency is therefore probabilistic evidence, not categorical definition.
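The division of labor between the two frequency types can be made concrete in a short sketch. The usage events, pattern labels, and numeric thresholds below are illustrative assumptions, not empirical estimates.

```python
from collections import Counter, defaultdict

# Toy usage events: (surface form, the pattern it instantiates).
events = [
    ("give her a book", "DITRANSITIVE"),
    ("send him a letter", "DITRANSITIVE"),
    ("hand me the keys", "DITRANSITIVE"),
    ("kick the bucket", "KICK-THE-BUCKET"),
    ("kick the bucket", "KICK-THE-BUCKET"),
    ("kick the bucket", "KICK-THE-BUCKET"),
]

token_freq = Counter(form for form, _ in events)      # repetition of specific strings
fillers = defaultdict(set)
for form, pattern in events:
    fillers[pattern].add(form)
type_freq = {p: len(f) for p, f in fillers.items()}   # diversity of fillers per pattern

TOKEN_THRESHOLD = 3  # assumed: high token frequency -> holistic storage
TYPE_THRESHOLD = 3   # assumed: high type frequency -> schematic abstraction

stored = [f for f, n in token_freq.items() if n >= TOKEN_THRESHOLD]
abstracted = [p for p, n in type_freq.items() if n >= TYPE_THRESHOLD]
print(stored)      # ['kick the bucket']: stored as a fixed chunk
print(abstracted)  # ['DITRANSITIVE']: abstracted as a schema
```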
Criterion 3: Semantic Non-Compositional Contribution
A construction qualifies when it contributes meaning not predictable from its parts.
This contribution may be:
- Idiomatic (kick the bucket)
- Argument structural (caused-motion adds causation + path)
- Information-structural (clefts introduce focus)
- Pragmatic (incongruity constructions signal stance)
The key is that the configuration itself carries semantic force.
If a pattern adds no independent meaning beyond compositional syntax, its status as a distinct construction weakens.
This criterion prevents trivial multiplication of constructions.
Criterion 4: Psycholinguistic Evidence
A pattern gains ontological credibility if experimental evidence suggests:
- Independent storage
- Distinct priming effects
- Faster processing relative to novel combinations
- Neurological activation patterns specific to the configuration
Structural priming studies are particularly relevant. If exposure to a construction increases the likelihood of reusing that configuration independent of lexical overlap, this suggests representation at the constructional level.
Psycholinguistic evidence anchors the ontology in cognitive reality.
Constructions as Gradient Symbolic Schemas
Taken together, these criteria support a view of constructions as:
- Gradient in abstraction
- Socially conventionalized
- Frequency-sensitive
- Semantically contributory
- Psychologically real
They are not merely descriptive labels imposed by linguists.
They are emergent symbolic schemas shaped by usage but stabilized by representation.
Avoiding Ontological Inflation
The four criteria collectively prevent trivialization.
Not every frequent phrase is a construction.
Not every abstract pattern qualifies.
Not every co-occurrence demands storage.
Constructional status requires convergence of:
- Conventionalization
- Usage stability
- Semantic contribution
- Cognitive evidence
This multi-criterion approach transforms Construction Grammar from a descriptive framework into a constrained ontological model.
From Inventory to Architecture
The ontology of the construction cannot remain informal if Construction Grammar is to compete with derivational theories.
The central question (when does a pattern become a construction?) forces CxG to articulate:
- Its theory of representation
- Its theory of learning
- Its theory of cognitive storage
Grammar, under this view, is neither purely computational nor purely associative.
It is a structured network of symbolic schemas whose existence is determined by usage, meaning, and mental representation.
The next section addresses the remaining tension:
If constructions are stored, how does the system avoid representational overload?
This leads us to the Storage–Abstraction Paradox.
3- The Storage–Abstraction Paradox
- The Storage Explosion Problem
- Type vs Token frequency
- Network compression mechanisms
- Prototype effects
Construction Grammar makes a radical claim:
All levels of linguistic structure are constructions.
Words, idioms, partially schematic templates, argument structure frames, discourse patterns: all are stored form–meaning pairings.
But this generates a serious cognitive problem:
If every learned pairing is stored, how does the system avoid representational overload?
This is the Storage–Abstraction Paradox:
- Usage-based learning predicts rich storage.
- Cognitive plausibility demands compression.
CxG must reconcile these forces or risk theoretical collapse.
The Storage Explosion Problem
Consider the scale of linguistic input:
- Millions of tokens over development.
- Thousands of recurring constructions.
- Countless partially overlapping patterns.
If each encountered string leaves a distinct trace, the grammar would require:
- Massive redundancy
- Unbounded memory
- Inefficient retrieval
Naively interpreted, usage-based CxG implies storage of:
- Every token
- Every type
- Every sub-pattern
- Every abstraction
This leads to a combinatorial explosion.
Generative grammar avoided this problem by sharply separating:
- A compact rule system
- A lexicon of atomic items
Construction Grammar dissolves that boundary and must therefore provide an alternative compression strategy.
Type vs. Token Frequency
Frequency plays a dual role in CxG, but its effects differ fundamentally depending on whether we consider token or type frequency.
Token Frequency
High token frequency:
- Strengthens memory traces.
- Encourages chunking.
- Promotes direct retrieval.
Examples:
- “I don’t know”
- “at the end of the day”
These patterns may become stored holistically.
Type Frequency
High type frequency across a pattern:
- Encourages abstraction.
- Supports generalization.
- Weakens reliance on fixed lexical content.
For example, exposure to:
- give her a book
- send him a letter
- show them a picture
supports abstraction of a ditransitive schema.
The paradox arises because:
- Token frequency pushes toward storage of specific instances.
- Type frequency pushes toward schematic generalization.
A cognitively realistic model must integrate both forces without duplicating everything.
Redundancy vs. Efficiency
Natural language exhibits massive redundancy.
Consider:
- “kick the bucket”
- “spill the beans”
- “blow off steam”
Each could be stored separately.
But these idioms also share structural and semantic properties:
- Verb + determiner + noun
- Non-literal meaning
- Restricted substitution
The cognitive system must represent:
- Their individual identities
- Their shared structure
The paradox becomes architectural:
How can the grammar encode both similarity and specificity without exponential storage?
Network Compression Mechanisms
The solution lies not in abandoning storage, but in formalizing compression.
Constructions must be organized as a network with the following properties:
Inheritance Hierarchies
Higher-level schemas encode shared structure.
Lower-level constructions inherit features.
For example:
[Transitive Construction]
↓
[Ditransitive Construction]
↓
[Double Object Construction]
Shared properties are stored once at higher nodes.
This reduces redundancy.
Default Inheritance
Features need not be fully repeated.
A construction inherits default properties unless overridden.
This mirrors object-oriented systems:
- General schema specifies argument roles.
- Specific idiom overrides semantic mapping.
This prevents duplication of information.
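Since the section invokes the object-oriented analogy, a brief Python sketch can show default inheritance at work. The construction names and feature dictionaries are invented for illustration; they do not reproduce any published formalism.

```python
# Default inheritance as a class hierarchy: subclasses store only what
# they override; everything else is inherited from higher nodes.

class Construction:
    features = {"meaning": None, "roles": ()}

    @classmethod
    def spec(cls):
        # Walk the hierarchy from general to specific, letting more
        # specific constructions override inherited defaults.
        out = {}
        for klass in reversed(cls.__mro__):
            out.update(getattr(klass, "features", {}))
        return out

class Transitive(Construction):
    features = {"roles": ("Agent", "Patient"), "meaning": "X acts on Y"}

class KickTheBucket(Transitive):
    # Idiom: inherits the transitive frame, overrides only the
    # semantic mapping and fixes its lexical material.
    features = {"meaning": "X dies", "fixed_form": "kick the bucket"}

print(KickTheBucket.spec())
# {'roles': ('Agent', 'Patient'), 'meaning': 'X dies',
#  'fixed_form': 'kick the bucket'}
```

The idiom stores only what it overrides; its shared structure is represented once, higher in the hierarchy.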
Pattern Clustering
Constructions with overlapping properties form clusters.
Clusters allow:
- Partial sharing
- Prototype-based organization
- Gradient membership
Not every construction must be derived from a single parent.
Similarity-based links allow lateral compression.
Frequency-Based Pruning
Low-frequency patterns that do not stabilize may:
- Fail to consolidate.
- Remain weakly represented.
- Be subsumed under broader schemas.
The system does not permanently store every encountered pattern.
Stability requires repeated reinforcement.
Prototype Effects in Constructional Networks
Empirical evidence suggests that categories exhibit prototype structure.
Within the ditransitive family:
Prototypical example:
- “John gave Mary a book.”
Less prototypical:
- “John baked Mary a cake.”
Even less prototypical:
- “John allowed Mary a break.”
These differ in semantic centrality.
Prototype theory predicts:
- Faster processing for central members.
- Gradient acceptability judgments.
- Asymmetric generalization patterns.
Thus, constructions are not rigid rule schemas but radial categories.
Prototype structure contributes to compression:
- Central exemplars anchor abstraction.
- Peripheral cases attach via similarity links.
- The system avoids uniform duplication.
Toward Formal Abstraction Algorithms
To remain cognitively plausible, CxG must specify:
- How generalization occurs.
- How redundancy is minimized.
- How overlapping schemas compete.
This requires formal mechanisms.
Possible algorithmic principles include:
Statistical Generalization
Abstract a schema when:
- Type frequency crosses threshold.
- Variance within slots stabilizes.
Bayesian Inference
Learners update probabilistic expectations:
- P(schema | data)
- P(data | schema)
Schemas survive if predictive.
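As a toy illustration of Bayesian schema survival, consider two hypotheses scored against a novel-verb datum; the priors and likelihoods are assumed values, not corpus estimates.

```python
# H1: an abstract ditransitive schema exists; H2: only item-specific
# frames are stored. A novel verb in the frame ("She mooped him the
# ball") is far more probable under H1. All numbers are assumptions.

def posterior(prior, likelihood, evidence):
    # Bayes' rule: P(schema | data) = P(data | schema) * P(schema) / P(data)
    return likelihood * prior / evidence

prior_schema, prior_items = 0.5, 0.5
lik_schema, lik_items = 0.30, 0.02  # likelihood of the novel-verb datum

evidence = lik_schema * prior_schema + lik_items * prior_items
print(round(posterior(prior_schema, lik_schema, evidence), 3))
# 0.938: the schema hypothesis survives because it predicts the data
```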
Similarity-Based Clustering
Group constructions via:
- Shared semantic roles
- Shared morphosyntactic frames
- Shared discourse function
Competition and Entrenchment
Stronger schemas inhibit weaker overlapping ones.
High-entrenchment patterns dominate processing.
Without explicit algorithms, CxG risks remaining metaphorical.
Comparison with Generative Minimalism
Minimalism achieves compression through:
- A small set of operations (Merge)
- Feature checking
- Economy principles
Construction Grammar must achieve comparable economy via:
- Network architecture
- Inheritance
- Statistical consolidation
If it cannot, the generative critique, that CxG lacks formal compactness, gains force.
The Storage–Abstraction Paradox is therefore not peripheral; it is existential.
Thesis: Formalization Is Necessary
The central claim:
Construction Grammar must formalize abstraction algorithms to remain cognitively plausible.
Without:
- Defined learning thresholds
- Structured inheritance
- Statistical consolidation principles
the theory risks:
- Ontological inflation
- Representational redundancy
- Descriptive excess
With them, CxG becomes:
- A compressed symbolic network
- Usage-driven but computationally disciplined
- Compatible with cognitive constraints
Toward Neural Realization
The resolution of the Storage–Abstraction Paradox pushes the theory toward implementation-level questions:
- How are constructional networks encoded neurally?
- How are prototypes represented in cortical systems?
- How does frequency reshape connectivity?
These questions move us from ontology to cognitive architecture.
Later sections take up that transition:
From Symbolic Networks to Neural Substrates
PART II- ARGUMENT STRUCTURE & ARCHITECTURAL COMPETITION
4- The Goldbergian Revolution
- Caused-motion
- Ditransitive
- Resultative
- Constructional meaning
For much of late 20th-century syntax, argument structure was assumed to be:
- Lexically specified.
- Projected from the verb.
- Structurally realized through syntactic rules.
Under projectionist models:
- The verb encodes its argument frame.
- Syntax builds structure accordingly.
- Constructions are derivative, not generative.
For example:
- give selects three arguments.
- Syntax projects [Agent, Theme, Goal].
- The surface structure reflects lexical specifications.
Goldberg’s intervention fundamentally reverses this architecture.
The Core Claim
Goldberg (1995) proposes:
Argument structure constructions themselves carry meaning.
Verbs do not project argument structure.
Instead, verbs are inserted into constructional frames.
This shift has enormous theoretical consequences.
Under this view:
- Constructions contribute semantic roles.
- Verbs specify only event-type content.
- Argument structure is emergent from pairing.
This is not a minor adjustment. It is an architectural revolution.
The Caused-Motion Construction
Consider:
- “She sneezed the napkin off the table.”
- “He laughed the actor off the stage.”
- “They pushed the cart into the garage.”
Traditional projectionism struggles with verbs like sneeze and laugh, which are intransitive.
Yet they appear in a transitive, causative frame.
Goldberg’s analysis:
The Caused-Motion Construction carries the meaning:
X causes Y to move to Z.
Form:
[Subj V Obj Obl(path)]
Meaning:
Agent causes Theme to move along Path.
The verb need not lexically encode causation.
Instead:
- The construction contributes causation and motion.
- The verb integrates as a manner component.
Thus:
- sneeze provides manner.
- The construction provides causative transfer.
This elegantly explains productivity.
The Ditransitive Construction
Consider:
- “John gave Mary a book.”
- “She baked him a cake.”
- “He tossed her the keys.”
- “She knitted him a sweater.”
Projectionist accounts must treat give, bake, toss, and knit as lexically subcategorizing for two objects, or derive alternations.
Goldberg instead posits the Ditransitive Construction:
Form:
[Subj V Obj Obj₂]
Meaning:
X causes Y to receive Z.
The construction itself encodes transfer.
Verbs compatible with this meaning can appear within it.
Crucially:
- bake does not lexically encode transfer.
- The construction imposes a transfer interpretation.
The meaning is not derived from the verb alone.
It is constructionally supplied.
The Resultative Construction
Consider:
- “She hammered the metal flat.”
- “He wiped the table clean.”
- “They drank the pub dry.”
Resultatives pose severe challenges for lexical projection.
The verb’s lexical semantics does not specify:
- A resultant state.
- A scalar endpoint.
Goldberg’s solution:
The Resultative Construction carries:
Form:
[Subj V Obj Adj/Result Phrase]
Meaning:
X causes Y to become Z.
The construction contributes:
- Causation
- Result state
- Telicity
The verb contributes manner or process.
Thus argument structure is constructionally licensed.
Constructional Meaning
Goldberg’s most radical insight is that constructions possess:
- Independent semantic content.
- Independent argument structure.
- Independent productivity.
Constructions are not epiphenomena of syntax.
They are meaningful pairings with:
- Event templates
- Participant roles
- Constraints on compatibility
Meaning composition therefore becomes:
Verb semantics + Construction semantics = Integrated interpretation
This reconceptualizes compositionality.
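A compact sketch can show this division of semantic labor; the dictionaries below are toy representations, not a formal semantics.

```python
# The construction supplies the event frame; the verb supplies manner,
# as in "She sneezed the napkin off the table." Toy values throughout.

CAUSED_MOTION = {
    "event": "X causes Y to move along Z",
    "roles": {"Agent": "Subj", "Theme": "Obj", "Path": "Obl"},
}
SNEEZE = {"manner": "sneezing"}

def integrate(construction, verb):
    # Constructional semantics frames the event; the verb fills in manner.
    return {**construction, "manner": verb["manner"]}

print(integrate(CAUSED_MOTION, SNEEZE))
# Causation and path come from the construction, not from "sneeze".
```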
Evidence Against Pure Projection
Several empirical facts support the constructional view:
Novel Verb Insertion
Children accept novel verbs in constructions:
“She mooped him the ball.”
This suggests:
The construction licenses argument structure.
The verb need not pre-specify it.
Semantic Coercion
Certain verbs shift interpretation to fit constructional meaning:
“She sneezed the napkin off the table.”
The verb adapts to the frame.
Projectionist accounts require lexical rule expansion.
Construction Grammar predicts compatibility-driven insertion.
Cross-Verb Generalizations
Different verbs share:
- Transfer interpretation in ditransitives.
- Causation interpretation in caused-motion.
These shared meanings are not reducible to lexical entries.
They are constructionally generalized.
Architectural Competition
We now confront the theoretical stakes.
Projectionist Architecture
- Verb-centered.
- Argument structure lexically encoded.
- Syntax maps lexical frames.
Advantages:
- Compact lexicon.
- Clear derivational procedure.
Limitations:
- Verb-class proliferation.
- Ad hoc lexical rules.
- Difficulty with creative extensions.
Constructional Architecture
- Frame-centered.
- Argument structure emerges from constructions.
- Verbs integrate into existing schemas.
Advantages:
- Explains productivity.
- Accounts for coercion.
- Captures semantic clustering.
Risk:
- Proliferation of constructions.
- Potential redundancy.
- Need for formal constraint (see section 3).
Argument structure becomes the proving ground for which architecture better models linguistic reality.
Theoretical Consequences
If argument structure is constructionally emergent:
- Lexical entries are semantically thinner.
- Syntax is semantically enriched.
- Compositionality is distributed.
- Grammar becomes a network of event schemas.
This disrupts:
- Theta-role assignment theories.
- Lexical projection principles.
- Strict modular separation between lexicon and syntax.
It supports a non-modular, meaning-driven architecture.
Claim
The central claim stands:
Argument structure is not projected from verbs; it emerges from constructions.
This is the Goldbergian Revolution.
It relocates explanatory force from:
Verb lexicon → Constructional network.
Whether this relocation is sustainable depends on:
- The compression mechanisms discussed in section 3.
- The neural plausibility to be addressed in Part III.
Transition
Having established the constructional basis of argument structure, the next section deepens the competition:
- How do constructions interact?
- How are conflicts resolved?
- What governs alternations?
5- Projectionism vs Constructionism
- Theta theory
- RRG
- Lexical semantics
- Coercion
This section presents a hybrid model integrating lexical and constructional constraints.
The Architectural Conflict
The debate over argument structure is not merely empirical. It is architectural.
Two competing theses dominate:
Projectionism
Argument structure originates in the verb and projects upward.
Constructionism
Argument structure originates in constructions into which verbs are inserted.
The disagreement concerns:
- The locus of semantic roles
- The direction of explanatory flow
- The storage of event structure
- The nature of compositionality
This section examines both positions carefully and argues for a constrained hybrid architecture.
Theta Theory: The Projectionist Baseline
In classical generative grammar, Theta Theory asserts:
- Verbs assign thematic roles (Agent, Theme, Goal, etc.).
- Each argument must receive exactly one theta-role.
- Theta-role assignment determines syntactic structure.
Example:
“John gave Mary a book.”
Lexical entry for give:
give
⟨Agent, Theme, Goal⟩
The syntactic structure is built to satisfy this lexical specification.
The lexicon is therefore:
- Rich
- Structured
- Event-encoded
Syntax is a projection mechanism.
Strengths
- Clear argument-role accounting.
- Economy in structural generation.
- Avoidance of construction proliferation.
Weaknesses
- Proliferation of lexical rules.
- Difficulty with creative coercion.
- Poor explanation of cross-verb generalizations.
Role and Reference Grammar (RRG)
Role and Reference Grammar attempts to refine projectionism by integrating:
- Lexical semantics
- Logical structures
- Linking algorithms
RRG decomposes verbs into logical representations:
give
[do'(x, Ø)] CAUSE [BECOME have'(y, z)]
Argument realization follows from semantic decomposition.
RRG acknowledges:
- Semantic alternations
- Variable linking patterns
- Gradience in argument realization
However, it remains fundamentally projectionist:
- The verb’s semantic structure determines syntactic realization.
- Constructions are secondary mappings.
RRG softens projectionism but does not abandon it.
Lexical Semantics as Event Templates
Projectionist models increasingly adopt event decomposition:
- Activity
- Accomplishment
- Achievement
- State
Lexical semantics encodes:
- Causation
- Telicity
- Result states
But this raises an issue:
If verbs already contain detailed event templates, why do we observe:
- Coercion into new argument frames?
- Productivity across verb classes?
- Cross-verb constructional meaning?
The more semantics is packed into verbs, the harder it becomes to explain flexibility.
The Coercion Problem
Consider again:
- “She sneezed the napkin off the table.”
- “He laughed the actor off the stage.”
- “She baked him a cake.”
If verbs lack inherent transfer or causation, then:
- The construction adds meaning.
- The verb adapts.
But this introduces a theoretical puzzle.
If verbs are coerced:
Where is the semantic adjustment stored?
Three possibilities arise:
Option 1: Temporary Online Adjustment
Problem:
Cannot explain entrenchment of recurring coerced uses.
Option 2: Lexical Update
The coerced meaning becomes part of the verb’s lexical entry.
Problem:
- Leads to rapid lexical inflation.
- Duplicates constructional meaning.
Option 3: Constructionally Mediated Storage
The semantic shift is licensed and stored at the construction–verb pairing level.
This suggests:
- The verb retains core semantics.
- The construction contributes event structure.
- Compatibility constraints regulate integration.
This third path motivates a hybrid model.
Toward a Hybrid Architecture
A plausible model must integrate:
- Lexical constraints
- Constructional schemas
- Compatibility principles
Lexical Core
Each verb encodes:
- Basic event type
- Participant structure
- Semantic features (e.g., [+motion], [+contact], [+transfer])
This core is stable.
Constructional Templates
Constructions encode:
- Event schemas
- Role configurations
- Information structure patterns
These templates are independently stored.
Compatibility Mapping
Integration occurs via feature alignment:
A verb may enter a construction if:
- Its semantic features are compatible with the construction’s event template.
- No feature conflict blocks insertion.
For example:
Caused-motion requires:
- An event capable of causing displacement.
“Sneeze” lacks inherent causation but:
- Is itself a manner-of-action verb, the class the construction readily hosts.
- Can be interpreted as producing force.
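A minimal compatibility check might look as follows; the feature labels and the particular requirement and conflict sets are illustrative assumptions.

```python
# Caused-motion requires a force-emitting event and conflicts with
# stative semantics. Feature inventories are invented for illustration.

REQUIRES = {"+force"}
BLOCKS = {"+stative"}

VERB_FEATURES = {
    "push":   {"+force", "+contact"},
    "sneeze": {"+force"},            # emits force despite being intransitive
    "know":   {"+stative"},
}

def can_enter_caused_motion(features):
    # Insertion is licensed if required features are present
    # and no blocking feature conflicts with the frame.
    return REQUIRES <= features and not (BLOCKS & features)

for verb, feats in VERB_FEATURES.items():
    print(verb, can_enter_caused_motion(feats))
# push True, sneeze True, know False
```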
Where Is Semantic Adjustment Stored?
Under the hybrid model:
Semantic adjustment is not stored inside the verb.
Nor is it entirely ephemeral.
Instead, it is stored as:
Strengthened links between specific verbs and constructional schemas.
Over repeated exposure:
- “sneeze” becomes associated with the caused-motion construction.
- The association gains strength.
- Processing becomes faster.
This preserves compression while explaining coercion.
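This link-strengthening story can be stated as a simple update rule; the learning rate and the saturating form of the update are assumptions of the sketch.

```python
# Each exposure to "sneeze" in the caused-motion frame nudges the
# verb-construction association toward a ceiling of 1.0.

links = {("sneeze", "CAUSED_MOTION"): 0.0}
RATE = 0.2  # assumed learning rate

def expose(pair):
    links[pair] += RATE * (1.0 - links[pair])

for _ in range(5):
    expose(("sneeze", "CAUSED_MOTION"))

print(round(links[("sneeze", "CAUSED_MOTION")], 3))
# 0.672: the pairing entrenches without rewriting the lexical
# entry for "sneeze" and without duplicating the construction.
```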
Constraint Interaction
In this hybrid system, argument structure emerges from:
- Lexical semantic constraints
- Constructional semantic templates
- Competition among constructions
- Frequency-based entrenchment
Conflicts are resolved by:
- Semantic compatibility
- Processing efficiency
- Conventionalization
This produces:
- Alternations
- Gradience
- Acceptability variation
Without proliferating lexical entries or unconstrained constructions.
Reframing the Debate
The projectionism vs. constructionism debate has often been polarized.
But the empirical data suggest:
- Verbs are not empty.
- Constructions are not epiphenomenal.
- Meaning is distributed.
Argument structure is neither purely lexical nor purely constructional.
It is emergent from interaction.
Critical Conclusion
The central question:
If verbs are coerced, where is semantic adjustment stored?
Answer:
It is stored in strengthened associative links between lexical cores and constructional schemas, not in wholesale lexical redefinition.
This hybrid model:
- Retains lexical structure (against radical constructionism).
- Retains constructional meaning (against strict projectionism).
- Avoids lexical inflation.
- Avoids constructional overgeneration.
- Preserves cognitive plausibility.
PART III- FORMAL DIALECTS OF CONSTRUCTION GRAMMAR
6- Radical Construction Grammar (Croft)
- Typological anti-essentialism
- Construction-specific categories
- Rejection of Universal Grammar
The Radical Turn
If Goldberg relocates argument structure from verbs to constructions, Croft goes further.
Radical Construction Grammar (RCG) proposes:
Constructions are the only primitives of grammatical theory.
Not words.
Not categories.
Not universal syntactic templates.
Everything, including categories such as “noun,” “subject,” or “verb”, is construction-specific.
This is not merely usage-based grammar. It is typological anti-essentialism.
Typological Anti-Essentialism
Traditional linguistic theory assumes cross-linguistic universals:
- Noun vs verb distinction
- Subject as a grammatical relation
- Hierarchies of syntactic roles
RCG challenges this assumption.
Croft argues:
- Categories emerge from constructions.
- There are no universal syntactic categories independent of language-specific constructions.
- Cross-linguistic comparison must proceed via functional prototypes, not structural identities.
Thus:
“Subject” in English
≠ “Subject” in Hindi
≠ “Subject” in Tagalog
Each is defined internally within a constructional system.
This dissolves category essentialism.
Construction-Specific Categories
Under RCG:
Grammatical categories are:
- Defined relative to constructions.
- Emergent from usage.
- Non-universal in formal realization.
For example:
In English:
- Nouns appear in determiner constructions.
- Verbs appear in tense constructions.
In another language:
- The same lexical items may participate in different structural configurations without clear noun–verb partition.
Categories are therefore:
Relational, not ontological.
They are positions within constructions.
This reverses the traditional hierarchy:
Instead of:
Categories → Constructions
We have:
Constructions → Categories
Rejection of Universal Grammar
RCG explicitly rejects Universal Grammar as a formal syntactic blueprint.
Instead, Croft proposes:
- Universal tendencies emerge from communicative function.
- Typological patterns reflect cognitive and discourse pressures.
- No innate syntactic architecture constrains category formation.
Universality becomes statistical, not structural.
Cross-linguistic generalizations arise from:
- Shared cognitive constraints
- Interactional needs
- Processing biases
Not from a pre-specified syntactic module.
The Typological Advantage
RCG excels in domains where projectionist and generative theories struggle:
Non-configurational Languages
Languages with:
- Free word order
- Extensive morphology
- Fluid category boundaries
fit naturally into a construction-based model.
Split Alignment Systems
Ergative, active-stative, and fluid-S systems resist uniform “subject” analysis.
RCG allows:
- Role generalizations within constructions
- Without forcing universal syntactic roles
Category Fluidity
In languages where:
- Words shift between nominal and verbal functions
- Derivational boundaries blur
RCG accommodates category gradience.
Its descriptive reach is broad.
The Cost of Radicalism
However, this expansion comes at a theoretical price.
Weakening of Universality
If:
- Categories are construction-specific,
- No universal syntactic relations exist,
then explanatory generalization becomes limited.
Cross-linguistic comparison must proceed via:
- Functional analogy
- Prototype mapping
But without shared structural primitives, universality risks dissolving into typological cataloguing.
Loss of Formal Compactness
Generative grammar compresses diversity into:
- A small set of universal operations.
RCG instead posits:
Language-specific constructional inventories.
This enhances descriptive adequacy but reduces theoretical compression.
Predictive Limitations
Without universal structural constraints:
- What limits possible grammars?
- Why do certain patterns recur cross-linguistically?
- Why are some logically possible systems unattested?
RCG appeals to functional pressures, but these explanations can become post hoc.
Comparison with Other CxG Dialects
Radical Construction Grammar differs from:
Goldbergian CxG
- Retains broader cross-linguistic generalizations.
- Accepts partial universals.
Sign-Based Construction Grammar (SBCG)
- Strongly formalized.
- Preserves typed feature structures.
Fluid Construction Grammar
- Computationally implemented.
- Emphasizes dynamic adaptation.
RCG is the most anti-essentialist and typologically radical variant.
Evaluation
The evaluation of RCG must be balanced.
Strengths
- Captures typological diversity.
- Avoids category imperialism.
- Models language-specific organization faithfully.
- Aligns with usage-based and functionalist insights.
Weaknesses
- Weak explanatory universality.
- Reduced formal compression.
- Risk of descriptive pluralism without predictive constraint.
The Central Evaluation
The claim:
Radical Construction Grammar expands descriptive reach but weakens explanatory universality.
It successfully liberates linguistic theory from category essentialism.
But in doing so, it sacrifices:
- Strong cross-linguistic constraints
- Formal minimalism
- Compact generative power
The question becomes:
Is grammar best modeled as:
A constrained universal system?
or
A network of language-specific constructional ecologies?
RCG decisively chooses the latter.
Transition
The next section turns to a contrasting dialect:
Sign-Based Construction Grammar (SBCG)
Where Croft embraces typological fluidity, SBCG embraces formal precision.
We now move from anti-essentialism to typed feature structures and formal constraint systems.
7- Sign-Based Construction Grammar (SBCG)
- Typed Feature Structures
- AVMs
- Constraint-based formalization
The Formal Turn Within Construction Grammar
Construction Grammar began as a cognitively motivated and usage-based alternative to generative syntax. Early formulations, particularly Goldbergian CxG, prioritized descriptive insight and psychological plausibility over formal explicitness. Critics frequently argued that such models lacked the precision required for computational modeling, formal semantics, and predictive grammar design.
Sign-Based Construction Grammar (SBCG), developed primarily by Sag, Boas, and Kay, emerges as a response to this critique. SBCG integrates the core constructional insight, that grammar consists of form–meaning pairings, with the formal machinery of constraint-based grammar frameworks, especially Head-Driven Phrase Structure Grammar (HPSG).
SBCG thus represents an attempt to demonstrate that:
Construction Grammar can be formally generative without relying on derivational syntax.
This move repositions CxG as a serious competitor within formal linguistic theory.
The Concept of the Linguistic Sign
At the core of SBCG lies the notion of the sign. A sign is a structured bundle of linguistic information that simultaneously encodes:
- Phonological form
- Syntactic structure
- Semantic interpretation
- Pragmatic conditions
Unlike derivational models, SBCG does not construct sentences through sequential transformations. Instead, it models grammatical well-formedness as the satisfaction of constraints across interconnected levels of representation.
Grammar becomes:
A system of licensed sign structures governed by constructional constraints.
This architecture reflects a non-procedural view of grammar. Rather than generating sentences through operations, SBCG characterizes the set of well-formed linguistic objects through declarative constraints.
Typed Feature Structures
The formal backbone of SBCG is the Typed Feature Structure (TFS).
What Is a Typed Feature Structure?
A typed feature structure is a hierarchical representation consisting of:
- A type label
- Attribute–value pairs
- Recursive embedding
Each linguistic object belongs to a type within a formal type hierarchy. Types impose inheritance constraints and allow generalization across related constructions.
For example, a simplified sign structure may include features such as:
- PHON (phonological representation)
- SYN (syntactic properties)
- SEM (semantic content)
- CONTEXT (pragmatic constraints)
Each feature may itself contain nested substructures, enabling precise modeling of linguistic information.
The Role of Type Hierarchies
Type hierarchies allow SBCG to model schematicity and inheritance without invoking transformational operations. General constructions define shared properties, while sub-constructions inherit and refine those properties.
This architecture mirrors biological taxonomy:
General construction
→ Intermediate construction
→ Specific instantiation
Inheritance structures, therefore, replace rule application as the mechanism of grammatical generalization.
Attribute-Value Matrices (AVMs)
Typed feature structures are typically represented through Attribute-Value Matrices (AVMs). AVMs provide a visual and formal method for encoding complex linguistic constraints.
An AVM organizes grammatical information into:
- Attributes (features)
- Values (specifications)
For instance, a schematic ditransitive construction can be represented through constraints specifying:
- Two internal arguments
- Semantic roles (Agent, Recipient, Theme)
- Syntactic valence structure
- Constructional meaning of transfer
AVMs allow linguists to:
- Encode fine-grained structural relationships
- Model cross-construction generalizations
- Maintain formal precision required for computational implementation
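To make the AVM idea concrete, here is a sketch of a ditransitive sign as nested dictionaries, with a toy constraint checker standing in for declarative licensing. The attribute names follow the features listed above; the values and the checker are illustrative, not SBCG's actual formalism.

```python
# An AVM-style sign rendered as nested dicts, plus a recursive
# constraint check that mirrors declarative licensing.

ditransitive_sign = {
    "TYPE": "ditransitive-cxn",
    "PHON": ["give"],
    "SYN": {"CAT": "verb",
            "VALENCE": ["NP[Subj]", "NP[Obj]", "NP[Obj2]"]},
    "SEM": {"FRAME": "transfer",
            "ROLES": {"Agent": "Subj", "Recipient": "Obj", "Theme": "Obj2"}},
    "CONTEXT": {},
}

def satisfies(sign, constraint):
    # A sign is licensed if every constrained attribute matches,
    # recursing into nested feature structures.
    for key, value in constraint.items():
        if isinstance(value, dict):
            if not satisfies(sign.get(key, {}), value):
                return False
        elif sign.get(key) != value:
            return False
    return True

print(satisfies(ditransitive_sign, {"SEM": {"FRAME": "transfer"}}))  # True
print(satisfies(ditransitive_sign, {"SYN": {"CAT": "noun"}}))        # False
```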
Constraint-Based Formalization
SBCG rejects derivational syntax in favor of constraint satisfaction. Sentences are well-formed if they satisfy all relevant constructional constraints simultaneously.
Declarative Grammar
Grammar in SBCG is declarative rather than procedural. This means:
- Grammar specifies conditions linguistic expressions must meet.
- It does not specify step-by-step derivations.
This approach aligns SBCG with:
- Formal logic
- Constraint-based computational systems
- Non-transformational syntactic theory
Constructional Licensing
Constructions in SBCG function as licensing devices. Each construction specifies constraints over signs, determining which combinations of linguistic elements are permissible.
This allows SBCG to model:
- Idioms
- Argument structure patterns
- Morphological constructions
- Discourse-level templates
All within a unified formal system.
Generativity Without Derivation
The central theoretical contribution of SBCG is its reconceptualization of generativity.
Traditional generative grammar equates generativity with:
- Recursive derivations
- Transformation rules
- Computational operations such as Merge
SBCG instead demonstrates that:
Generativity can emerge from hierarchical constraint systems and inheritance networks.
The generative capacity of the grammar derives from:
- Recursive feature embedding
- Type inheritance
- Constraint interaction
Thus, SBCG retains strong formal expressive power without invoking derivational syntax.
Modeling Argument Structure in SBCG
SBCG provides a precise formal account of argument structure constructions. Instead of deriving argument structure through lexical projection or transformational mapping, SBCG encodes argument relations directly within constructional constraints.
This allows:
- Explicit representation of semantic roles
- Flexible interaction between lexical and constructional meaning
- Formal modeling of coercion and argument alternations
Argument structure becomes a property of licensed sign configurations rather than derivational mapping between lexical and syntactic levels.
Interface Integration
One of SBCG’s major strengths lies in its ability to integrate multiple linguistic interfaces.
Syntax–Semantics Interface
SBCG directly links syntactic structures with semantic representations through shared feature structures. This allows fine-grained modeling of compositional meaning without relying on separate derivational mapping rules.
Syntax–Pragmatics Interface
Contextual constraints can be encoded directly within constructional representations. SBCG therefore provides formal tools for capturing pragmatic restrictions within grammatical descriptions.
Morphology–Syntax Continuum
Because constructions operate across levels of linguistic organization, SBCG naturally accommodates morphological constructions alongside syntactic ones. This supports the anti-modular spirit of Construction Grammar while maintaining formal rigor.
Computational Implications
SBCG is particularly attractive for computational linguistics because:
- Typed feature structures can be implemented algorithmically.
- Constraint-based grammars are compatible with parsing systems.
- AVMs allow machine-readable representation of grammatical knowledge.
SBCG thus bridges theoretical linguistics and natural language processing, demonstrating that constructional approaches can scale computationally.
Theoretical Tensions
Despite its strengths, SBCG introduces several theoretical challenges.
Cognitive Plausibility
Highly structured feature representations raise questions about psychological realism. It remains uncertain whether human language processing operates with representations resembling formal AVMs.
Complexity of Representation
The precision of SBCG can lead to representational density. Complex constructions require elaborate feature specifications, which may obscure broader cognitive generalizations.
Usage-Based Integration
SBCG incorporates constructional inheritance but does not always integrate frequency effects, entrenchment, and statistical learning as centrally as usage-based models. This creates tension between formal precision and experiential learning accounts.
SBCG Within the Construction Grammar Landscape
Within the broader CxG family, SBCG occupies a distinctive position:
- More formally explicit than Goldbergian CxG
- Less typologically radical than Radical Construction Grammar
- More computationally implementable than many usage-based models
It represents a synthesis between constructional insight and formal constraint-based grammar.
Evaluation
The central claim of this section can be summarized as follows:
SBCG demonstrates that Construction Grammar can achieve full formal generativity without adopting derivational architecture.
Its major contributions include:
- Mathematical precision
- Interface integration
- Computational applicability
- Explicit modeling of inheritance and constraint interaction
However, SBCG also faces enduring challenges:
- Questions of cognitive realism
- Balancing formal rigor with usage-based learning
- Managing representational complexity
Theoretical Significance
SBCG transforms Construction Grammar from a descriptive framework into a formally competitive grammatical architecture. It challenges the assumption that only derivational systems can achieve generative adequacy.
In doing so, SBCG reframes the central debate in linguistic theory:
Is generativity fundamentally procedural, or can it emerge from declarative constraint systems?
8- Construction Grammar and Computational Modeling
- Fluid Construction Grammar
- FrameNet
- Constructional parsing
- Neural network parallels
This section bridges symbolic CxG with probabilistic machine learning.
From Theory to Computation
Construction Grammar, with its emphasis on form–meaning pairings, inheritance networks, and usage-based generalizations, presents a promising framework for computational modeling. However, historically, CxG has faced challenges when formalizing constructions for parsing, generalization, and probabilistic reasoning.
This section examines how computational implementations, from Fluid Construction Grammar to FrameNet and neural networks, operationalize constructional insights. It also raises a provocative question:
Are modern large language models (LLMs) implicitly constructionist?
Fluid Construction Grammar (FCG)
Overview
Fluid Construction Grammar (Steels, 2011) is a computational system designed to:
- Represent constructions as executable form–meaning pairings
- Enable dynamic grammar learning and adaptation
- Support agent-based communication experiments
FCG treats grammar as fluid, evolving with interaction and usage. Unlike traditional symbolic grammar, it is not statically precompiled.
Mechanisms
- Constructional Rules: Constructions act as bidirectional mapping rules between form and meaning.
- Parsing and Production: The same constructional inventory supports comprehension and generation.
- Conflict Resolution: Multiple applicable constructions are selected based on compatibility and frequency.
- Learning: Constructions are added, modified, or generalized based on communicative success and input statistics.
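The bidirectionality of constructional rules can be sketched in a few lines; the slot names and meaning template are invented, and real FCG uses far richer transient structures.

```python
# One construction, two directions: comprehension maps form slots into
# a meaning template; production runs the same inventory in reverse.

CXN = {
    "form": ["SUBJ", "VERB", "OBJ", "PATH"],
    "meaning": "cause-move(agent={SUBJ}, theme={OBJ}, "
               "path={PATH}, manner={VERB})",
}

def parse(chunks):
    # Comprehension: bind form slots, then fill the meaning template.
    bindings = dict(zip(CXN["form"], chunks))
    return CXN["meaning"].format(**bindings)

def produce(bindings):
    # Production: the same constructional inventory, run in reverse.
    return " ".join(bindings[slot] for slot in CXN["form"])

print(parse(["she", "sneezed", "the napkin", "off the table"]))
print(produce({"SUBJ": "she", "VERB": "sneezed",
               "OBJ": "the napkin", "PATH": "off the table"}))
```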
Cognitive Implications
- Models emergence of constructions from usage.
- Simulates constructionalization and entrenchment.
- Provides a testbed for hypotheses about lexical–construction interactions.
FrameNet and Constructional Parsing
FrameNet (Baker et al., 1998) operationalizes constructional insights in lexical semantics:
- Frames encode prototypical event structures.
- Lexical units are linked to semantic roles within frames.
- Constructions map arguments to frame roles.
Constructional parsing in FrameNet allows:
- Automatic argument labeling
- Semantic role resolution
- Pattern generalization across verbs and constructions
FrameNet thus demonstrates that constructional knowledge can be formalized for large-scale linguistic annotation.
Neural Network Parallels
Modern deep learning models, including transformers and LLMs, appear to implicitly encode constructional knowledge:
- Recurrent co-occurrence patterns correspond to form–meaning pairings.
- Layered attention mechanisms capture argument structure regularities.
- Distributional embeddings mirror entrenchment and frequency effects.
Key parallels with CxG:
| Construction Grammar Concept | Neural Network Parallel |
|---|---|
| Construction as form–meaning unit | Pattern of co-occurring tokens / embeddings |
| Inheritance network | Hierarchical representation in hidden layers |
| Entrenchment / frequency effects | Weighting via gradient descent |
| Coercion / argument adaptation | Context-dependent token predictions |
These observations suggest that LLMs may be implicitly constructionist, learning statistical regularities that correspond to constructions without explicit symbolic representations.
Constructional Parsing Algorithms
Constructional parsing operationalizes CxG for computational applications:
- Input: Surface string
- Output: Constructional analysis with role assignment and semantic interpretation
- Mechanism: Match input to constructional patterns
- Constraints: Type, semantic compatibility, token frequency
Fluid Construction Grammar provides a flexible implementation, supporting:
- Parsing under ambiguity
- Dynamic updates as constructions are learned or generalized
- Bidirectional production and comprehension
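A naive constructional parser along these lines might match an input against an inventory and prefer the most specific, most entrenched analysis. The patterns, frequencies, and preference rule are all assumptions of the sketch.

```python
import re

# Toy inventory: each construction pairs a surface pattern with roles.
INVENTORY = [
    {"name": "ditransitive", "freq": 900,
     "pattern": r"(?P<Agent>\w+) (?P<V>\w+) (?P<Recipient>\w+) (?P<Theme>.+)"},
    {"name": "transitive", "freq": 5000,
     "pattern": r"(?P<Agent>\w+) (?P<V>\w+) (?P<Patient>.+)"},
]

def parse(sentence):
    candidates = []
    for cxn in INVENTORY:
        m = re.fullmatch(cxn["pattern"], sentence)
        if m:  # structural match; semantic compatibility omitted here
            candidates.append((cxn["name"], cxn["freq"], m.groupdict()))
    # Prefer the more specific analysis (more roles bound), then the
    # more entrenched (frequent) one.
    return max(candidates, key=lambda c: (len(c[2]), c[1]), default=None)

print(parse("John gave Mary a book"))
# ('ditransitive', 900, {'Agent': 'John', 'V': 'gave',
#   'Recipient': 'Mary', 'Theme': 'a book'})
```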
Bridging Symbolic and Probabilistic Models
Construction Grammar now sits at the intersection of:
- Symbolic formalism: Typed Feature Structures, AVMs, constraint-based modeling
- Probabilistic computation: Frequency-driven learning, statistical generalization, gradient entrenchment
This hybridization enables:
- Formal guarantees for well-formedness
- Empirical alignment with language usage
- Modeling innovation, coercion, and productivity
It also positions CxG for integration with cognitive modeling, psycholinguistic simulation, and AI language systems.
Major Question: Are LLMs Implicitly Constructionist?
- LLMs are trained on vast corpora, capturing distributional patterns.
- They can generate and comprehend novel argument structures.
- They exhibit coercion-like flexibility in sentence interpretation.
- However, they lack explicit symbolic construction inventories.
This raises theoretical and empirical questions:
- Does pattern-based learning suffice for cognitive plausibility?
- Can symbolic and probabilistic constructional models be integrated?
- How can we extract explicit constructional knowledge from neural networks?
Evaluation
Construction Grammar and computational modeling jointly demonstrate:
Strengths:
- Formalization of constructions (FCG, FrameNet)
- Modeling productivity, coercion, and generalization
- Empirical tractability for NLP and AI
Challenges:
- Cognitive realism of neural embeddings
- Bridging symbolic and statistical representations
- Scaling inheritance networks to massive lexicons
Construction Grammar is no longer merely a descriptive linguistic theory. Its computational incarnations show that:
- Constructions can be algorithmically represented and manipulated
- Grammar can be adaptive, usage-based, and formally precise
- Statistical learning and neural models may capture implicit constructional regularities
CxG now occupies the conceptual space at the intersection of cognitive realism, formal rigor, and machine-learning applicability.
PART IV- LEARNING, STATISTICS & PRODUCTIVITY
9- Statistical Preemption
- Why “He fell the cup” fails
- Competition models
- Negative evidence
- Entrenchment mechanisms
The Problem of Overgeneralization
One of the classic puzzles in language acquisition is why children avoid, or quickly retreat from, ungrammatical forms despite receiving little direct negative evidence:
- Example: He fell the cup vs. He dropped the cup
- Both describe the same causative scenario: an agent makes the cup fall
- Only the attested form (drop the cup) is grammatical
This is known as the overgeneralization problem.
Construction Grammar offers a solution: Statistical Preemption.
Core Concept: Statistical Preemption
Statistical Preemption (Goldberg, 2006; Ambridge et al., 2011) posits:
- Multiple competing constructions exist for a given event or meaning.
- A frequent, attested construction blocks alternative, ungrammatical forms.
- Entrenchment of the attested form prevents overgeneralization.
Formally, let $\{C_1, \ldots, C_n\}$ be the set of competing constructions for a semantic event $E$. The probability of selecting $C_i$ is proportional to its frequency in the input:

$$P(C_i \mid E) = \frac{\mathrm{freq}(C_i)}{\sum_{j=1}^{n} \mathrm{freq}(C_j)}$$

Rare or unattested constructions receive vanishingly small probability and are preempted.
Competition Models
Statistical preemption is implemented as competition between constructions:
- Each construction carries semantic overlap with the target event.
- Candidate constructions are activated based on semantic compatibility.
- Activation is weighted by frequency and entrenchment.
Example: competing expressions of caused falling in English

| Construction | Semantic Fit | Input Frequency | Outcome |
|---|---|---|---|
| drop NP | ✔ | High | Licensed |
| make NP fall (periphrastic causative) | ✔ | Moderate | Licensed |
| *fall NP (lexical causative) | ✔ | Unattested | Preempted |

Children thus avoid forms like He fell the cup, even without explicit correction; a toy implementation of this competition follows.
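The sketch below implements the competition just tabulated, following the frequency-proportional selection rule given earlier. The frequency counts are invented for illustration.

```python
# Sketch of frequency-weighted competition between constructions,
# following P(Ci|E) proportional to freq(Ci). Numbers are invented.
def select(candidates):
    """candidates: {construction: (semantic_fit, input_frequency)}"""
    activation = {c: fit * freq for c, (fit, freq) in candidates.items()}
    total = sum(activation.values())
    return {c: a / total for c, a in activation.items()}

probs = select({
    "drop NP":      (1.0, 5000),  # attested, high frequency
    "make NP fall": (1.0, 800),   # attested periphrastic causative
    "*fall NP":     (1.0, 0),     # unattested: probability 0 -> preempted
})
print(probs)  # {'drop NP': 0.862..., 'make NP fall': 0.137..., '*fall NP': 0.0}
```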
Negative Evidence
Statistical preemption solves the poverty-of-stimulus problem:
- Traditional argument: Children do not receive enough negative evidence to block ungrammatical forms.
- CxG perspective: Negative evidence is indirect, inferred from frequency distributions.
- Preemption emerges naturally from input statistics.
Implications:
- Grammar is usage-based, not innate.
- Cognitive learners exploit distributional regularities.
- Acquisition depends on competitively weighted constructional inventories.
Entrenchment Mechanisms
Entrenchment is central to statistical preemption:
- Frequent constructions become strongly activated mental representations.
- Less frequent or unattested forms remain weak or non-existent.
- Learning models simulate this via connectionist weight updates or Bayesian inference.
For instance (sketched in code below):
- Repeated exposure to “He dropped the cup” strengthens the causative use of drop.
- Alternative forms (He fell the cup) are never sufficiently activated and thus fail to surface.
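Here is a minimal sketch of such an entrenchment dynamic: each exposure strengthens the observed construction, while all strengths decay slightly. The update rule and parameter values are assumptions for illustration, not a specific published model.

```python
# Minimal entrenchment dynamics: exposure strengthens a construction;
# gradual decay weakens unused forms. Update rule is an assumption.
def update(strengths, observed, rate=0.1, decay=0.01):
    return {c: s + rate * (1.0 if c == observed else 0.0) - decay * s
            for c, s in strengths.items()}

strengths = {"drop NP": 0.0, "*fall NP": 0.0}
for _ in range(50):                 # fifty exposures to "He dropped the cup"
    strengths = update(strengths, "drop NP")
print(strengths)
# drop NP climbs toward the fixed point rate/decay = 10; *fall NP stays at 0,
# so the unattested causative never accrues enough strength to surface.
```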
Cross-Linguistic Evidence
Statistical preemption is not limited to English:
- Saraiki & Urdu: Ditransitive alternations show high-frequency lexical–construction pairings blocking unattested forms.
- Japanese: Light verb constructions demonstrate preemption patterns where children avoid overgeneralized causatives.
This supports CxG’s claim that learning is statistical, not strictly rule-governed.
Computational Implementation
Preemption can be modeled computationally:
- Connectionist networks: Weighting constructions by frequency
- Bayesian models: Prior probabilities derived from attested constructions
- Corpus simulation: Predict which forms are licensed or blocked in acquisition
These models confirm:
- Type frequency correlates with productivity
- Token frequency strengthens entrenchment
- Preemption is a robust statistical phenomenon.
Argument: Preemption vs Poverty-of-Stimulus
Statistical preemption allows Construction Grammar to respond to classical linguistic challenges:
- Poverty-of-stimulus: No explicit negative evidence is needed; frequency suffices.
- Overgeneralization: Competing constructions block unattested forms.
- Acquisition of argument structure: Learners infer causative, resultative, and ditransitive constructions from patterns in the input.
Thus:
Preemption is arguably CxG’s strongest empirical and theoretical defense against nativist objections.
Theoretical Implications
- Grammar is emergent, not pre-specified.
- Constructions are competitively constrained networks.
- Learning is probabilistic, distribution-sensitive, and usage-based.
- Statistical mechanisms explain gradient acceptability, coercion, and constructional innovation.
Transition
The next section, Productivity, Schematicity, and Frequency, builds on preemption to explain:
- How constructions survive, adapt, or die in linguistic populations
- How token/type frequency shapes schematic abstractions
- Why some constructions are creative and others frozen
10- Productivity & Gradient Grammar
- Category formation
- Type frequency effects
- Probabilistic productivity
- Corpus modeling
Beyond Categorical Grammar
Traditional generative grammar treats grammatical categories as discrete and categorical:
- Nouns, verbs, transitive/intransitive constructions
- Well-formed vs. ill-formed sentences
Construction Grammar challenges this view. Usage patterns, frequency, and entrenchment show that category membership and grammaticality are gradient:
- Some constructions are highly productive
- Others are frozen or semi-productive
- Novel utterances reflect statistical tendencies, not hard rules
This section explores how gradient grammar emerges from frequency, usage, and abstraction mechanisms.
Category Formation in CxG
Constructions form the building blocks of grammar. Category formation arises via:
Schematicity:
- Abstractions over multiple specific instances
- Example: Ditransitive construction ⟨NP1 V NP2 NP3⟩ generalizes across many verbs
Entrenchment:
- Frequent constructions become strongly activated mental representations
- Weak or rare constructions remain peripheral
Similarity & Generalization:
- New forms are accepted if they resemble entrenched constructions
- Leads to probabilistic grammaticality judgments
Type Frequency Effects
Type frequency refers to the number of different lexical items a construction licenses:
- High type frequency → highly schematic constructions → high productivity
- Low type frequency → low productivity, often frozen
Example: English Ditransitive Construction
| Construction Type | Number of Verbs | Productivity |
|---|---|---|
| Ditransitive ⟨NP1 V NP2 NP3⟩, core transfer sense (give, send, hand, ...) | 50+ verbs | Highly productive |
| Ditransitive subtype with refusal semantics (refuse, deny) | 2–3 verbs | Low productivity / near-idiomatic |
Implications:
- Schematicity is driven by cross-lexical generalization
- Cognitive learners extract abstract patterns from multiple verb usages
- Frequency determines which constructions survive and generalize
Probabilistic Productivity
Productivity is not binary; it is probabilistic:
- Novel utterances are licensed based on statistical likelihood
- Type and token frequencies jointly shape that likelihood
- Psycholinguistic experiments show gradient acceptability:
  - High-frequency constructions → rapid, natural production
  - Low-frequency constructions → slower, more error-prone production
Formally, one natural formulation is:

$$P(\text{license}(u)) \propto \mathrm{sim}(u, C) \cdot \mathrm{freq}(C)$$

where $u$ is the novel utterance and $C$ is the best-matching constructional pattern.
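The sketch below instantiates this idea: the acceptability of a novel verb in a construction grows with its semantic similarity to attested verbs and with the construction's type frequency. The feature sets, the Jaccard similarity, and the combination formula are illustrative assumptions, not a published model.

```python
# Toy probabilistic licensing: acceptability of a novel verb in a construction
# rises with similarity to attested verbs and with type frequency.
VERB_FEATURES = {
    "give":   {"transfer", "caused_possession"},
    "send":   {"transfer", "caused_motion"},
    "email":  {"transfer", "communication"},
    "sneeze": {"bodily", "emission"},
}

def similarity(a, b):
    fa, fb = VERB_FEATURES[a], VERB_FEATURES[b]
    return len(fa & fb) / len(fa | fb)  # Jaccard overlap of semantic features

def acceptability(novel, attested_verbs):
    type_freq = len(attested_verbs)              # number of licensed verbs
    sim = max(similarity(novel, v) for v in attested_verbs)
    return sim * type_freq / (1 + type_freq)     # bounded, frequency-sensitive

ditransitive = ["give", "send"]
print(acceptability("email", ditransitive))   # moderately high: close fit
print(acceptability("sneeze", ditransitive))  # zero: no feature overlap
```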
Corpus Modeling
Corpus studies provide empirical support for gradient grammar:
- Token-based analyses: measure the frequency of constructional occurrences, revealing entrenched patterns and usage-based learning
- Type-based analyses: count lexical diversity within constructions, identifying highly schematic, productive patterns
- Computational modeling: simulate construction acquisition and productivity, testing hypotheses about probabilistic generalization, coercion, and innovation

Example:
- She dropped the cup → high frequency, entrenched, productive
- She sneezed the napkin off the table → low frequency, emergent, yet acceptable via schematic extension of the Caused-Motion construction

A sketch of such token- and type-based analysis follows.
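The sketch below computes token frequency (entrenchment) and type frequency (schematicity) per construction from an invented toy corpus of (construction, verb) annotations.

```python
# Sketch of corpus-based productivity measurement over toy annotated data.
from collections import Counter, defaultdict

# (construction label, verb) pairs as they might come from an annotated corpus
corpus = [("caused_motion", "put"), ("caused_motion", "put"),
          ("caused_motion", "throw"), ("caused_motion", "sneeze"),
          ("ditransitive", "give"), ("ditransitive", "give"),
          ("ditransitive", "give"), ("ditransitive", "send")]

tokens = Counter(cxn for cxn, _ in corpus)   # token frequency per construction
types = defaultdict(set)
for cxn, verb in corpus:
    types[cxn].add(verb)                     # distinct verbs = type frequency

for cxn in tokens:
    print(cxn, "tokens:", tokens[cxn], "types:", len(types[cxn]),
          "type/token ratio:", round(len(types[cxn]) / tokens[cxn], 2))
```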
Cognitive Implications
Gradient grammar explains several phenomena:
- Acceptability judgments: People rate sentences probabilistically rather than categorically
- Overgeneralization errors: Children temporarily produce forms like He falled (overregularized past tense) or He fell the cup (overgeneralized causative)
- Coercion: Constructions can shift meaning of verbs based on probabilistic association
- Innovation: New constructions emerge when novel patterns match entrenched schemata
Integration with Statistical Preemption
Section 9 introduced preemption; Section 10 extends it:
- Preemption + Gradient Productivity = statistical regulation of grammatical innovation
- Learners avoid unattested forms (preemption) while producing probabilistically licensed novel forms
- Grammar is thus dynamic, usage-driven, and adaptive
Computational Modeling of Gradient Grammar
- Probabilistic models capture frequency-weighted generalization
- Bayesian and connectionist networks simulate entrenchment, type frequency effects, and productive extension
- Corpus-driven parsing demonstrates gradient grammaticality prediction
Applications:
- NLP systems using constructional probabilistic grammars
- Cognitive simulations of child language acquisition
- Predictive models of constructional evolution over time
Claim: Grammar as a Statistical System
Grammatical knowledge is probabilistic and frequency-sensitive: productivity, acceptability, and innovation all follow from statistical properties of the constructional network.
Implications:
- Challenges strict generative assumptions
- Supports usage-based models of language learning
- Bridges formal representation (SBCG, AVMs) with probabilistic cognitive reality
- Explains cross-linguistic productivity and diachronic change
Transition
The next part, Part V- Networks, Polysemy & Diachrony, examines:
- How constructions are linked into networks
- Horizontal (polysemy) and vertical (inheritance) connections
- Diachronic evolution of grammatical patterns
Together, sections 9 and 10 provide the statistical-cognitive foundation for understanding dynamic, networked grammar.
PART V- NETWORKS, POLYSEMY & DIACHRONY
11- Constructional Networks
- Vertical vs horizontal links
- Family resemblances
- Network topology
Grammar as a Network
- Constructions are nodes
- Semantic, formal, and functional relationships are edges
- Both vertical inheritance (generalization) and horizontal polysemy (related constructions) coexist
This network perspective accommodates:
- Gradient categories
- Constructional polysemy
- Evolutionary flexibility
Vertical vs. Horizontal Links
Vertical (Inheritance) Links
- Capture taxonomic generalization
- Higher-level constructions provide schematic slots
- Lower-level constructions inherit argument structure, semantic roles, and constraints

Horizontal (Polysemy) Links
- Connect sister constructions sharing similar semantics or form
- Example: English ditransitive constructions

| Construction | Shared Roles | Polysemy Relation |
|---|---|---|
| She gave him a book | NP1 Agent, NP2 Recipient | Horizontal link |
| She sent him a letter | NP1 Agent, NP2 Recipient | Horizontal link |
| She threw him a glance | NP1 Agent, NP2 Recipient | Horizontal link (metaphorical extension) |

- Horizontal links capture semantic extension, metaphorical usage, and coercion
- Polysemy is structurally represented, not incidental

Family Resemblances & Cognitive Reality
Network analyses reveal that construction networks are scale-free:
- Hub constructions: Highly connected, highly frequent, highly schematic
- Peripheral constructions: Rare, lexically specific, specialized functions
- Small-world properties: Most constructions are reachable via a few links
- Implication: Efficient generalization and retrieval in the mental lexicon
Formal modeling tools:
- Graph-theoretic metrics: degree centrality, clustering coefficient
- Computational simulations: how new constructions attach to network hubs
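A brief sketch of such an analysis using the networkx library: build a toy construction network and compute the metrics listed above. The nodes and edges are invented for illustration.

```python
# Sketch of graph-theoretic analysis of a construction network with networkx.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("ditransitive", "give NP NP"), ("ditransitive", "send NP NP"),
    ("ditransitive", "throw NP a glance"),   # metaphorical extension
    ("caused_motion", "throw NP a glance"),  # shared neighbor across hubs
    ("caused_motion", "sneeze NP off NP"),
    ("ditransitive", "caused_motion"),       # horizontal link between schemas
])

print(nx.degree_centrality(G))             # hub constructions score highest
print(nx.clustering(G))                    # local interconnectedness per node
print(nx.average_shortest_path_length(G))  # small-world reachability
```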
12- Constructionalization and Grammaticalization
- Grammaticalization vs constructionalization
- Emergence of new constructions
- Diachronic layering
The Dynamics of Grammar
Grammar is not static. Constructions emerge, evolve, and sometimes disappear, reflecting both cognitive constraints and communicative pressures.
Two central processes in Construction Grammar and diachronic linguistics:
- Grammaticalization – lexical items acquire grammatical function
- Constructionalization – entirely new form–meaning pairings emerge as constructions
While related, these processes differ in mechanism, scope, and temporal dynamics.
Grammaticalization vs. Constructionalization
| Feature | Grammaticalization | Constructionalization |
|---|---|---|
| Origin | Lexical items | Multi-word patterns / constructions |
| Process | Semantic bleaching, phonological reduction | Conventionalization, abstraction |
| Scope | Typically functional | Any level of linguistic structure |
| Example | English going to → future marker | She sneezed the napkin off the table (novel Caused-Motion) |
- Grammaticalization often follows predictable paths (lexical → grammatical → affixal)
- Constructionalization can be innovation-driven, context-dependent, and network-mediated

Emergence of New Constructions
Mechanisms:
- Reanalysis of existing constructions: lexical verbs acquire new argument structures via analogy (e.g., English resultatives: hammer the metal flat)
- Semantic extension via polysemy: horizontal network links allow metaphorical transfer (e.g., throw a glance, from physical to abstract causation)
- Entrenchment and conventionalization: frequency drives consolidation, while low-frequency innovations may fade or remain peripheral
- Innovation through coercion: constructional slots force verbs into new argument structures (e.g., She sneezed the napkin off the table)
Diachronic Layering
- Constructions are layered historically: multiple competing patterns coexist
- Core constructions: entrenched, productive, central hubs in the network
- Peripheral constructions: emerging, rare, or highly specialized
- Obsolete constructions: losing frequency, eventually pruned from the network
- Network topology ensures robustness and adaptability: hub constructions anchor semantic schemata, while peripheral nodes allow innovation
Network Reconfiguration as a Model of Evolution
Grammar evolves via network reconfiguration, not rule replacement.
Mechanisms (sketched in code below):
- Node addition: new constructions enter the network
- Node deletion: low-frequency or obsolete constructions fade
- Edge formation: new horizontal or vertical links create semantic or formal generalizations
- Edge weakening: disused links lose cognitive salience

This reconfiguration yields:
- Stability of core grammar
- Flexibility for novel constructions
- Distribution of grammatical productivity across the network
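The sketch referenced above implements the four mechanisms as operations on a weighted networkx graph; the weights, decay rate, and pruning threshold are invented for illustration.

```python
# Sketch of diachronic network reconfiguration on a weighted graph (networkx).
import networkx as nx

G = nx.Graph()
G.add_edge("ditransitive", "give NP NP", weight=1.0)

def add_node_link(G, new, neighbor, w=0.1):
    G.add_edge(new, neighbor, weight=w)        # node addition + edge formation

def weaken(G, decay=0.05):
    for u, v, d in G.edges(data=True):         # edge weakening through disuse
        d["weight"] *= (1 - decay)

def prune(G, floor=0.01):
    drop = [(u, v) for u, v, d in G.edges(data=True) if d["weight"] < floor]
    G.remove_edges_from(drop)                  # edge deletion ...
    G.remove_nodes_from(list(nx.isolates(G)))  # ... then orphaned-node deletion

add_node_link(G, "email NP NP", "ditransitive")  # innovation attaches to a hub
weaken(G)
prune(G)
print(G.edges(data=True))
```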
Cross-Linguistic Evidence
- English: Resultative and Caused-Motion constructions evolve via coercion and analogy
- Saraiki & Urdu: Light verb constructions and causatives exhibit constructionalization patterns
- Japanese & Mandarin: Serial verb constructions and aspectual markers show network-mediated emergence

These patterns suggest that constructional network evolution is universal across typologically diverse languages.
Psycholinguistic Implications
Diachronic and Evolutionary Synthesis
Transition
The next part, Part VI — Extended Domains: Pragmatics & Processing, will explore how:
- Construction Grammar interfaces with discourse, context, and cognitive processing
- Psycholinguistic and neurocognitive evidence supports real-time activation of construction networks
- Constructions extend beyond sentence-level syntax into pragmatic, semantic, and processing domains
PART VI- PRAGMATICS, PROCESSING & NEURAL REALIZATION
13- Pragmatics Within Constructions
- Contextual licensing
- Discourse constraints
- Incongruity constructions
The Pragmatic Turn
Construction Grammar emphasizes form–meaning pairings. But meaning is not only semantic; it is often pragmatic and context-sensitive:
- Some constructions are licensed only in particular discourse contexts
- Examples include irony, humor, politeness, and incongruity constructions
This section asks:
Are pragmatic constraints encoded in the construction or inferred by context?
Contextual Licensing
Discourse Constraints
Constructions interact with discourse-level factors:
- Information structure: focus, topic, givenness
- Pragmatic functions: emphasis, irony, sarcasm
- Sequential context: previous utterances constrain possible constructions
Incongruity Constructions
Encoding vs. Inference
Key theoretical question: are pragmatic constraints encoded or inferred?

| Perspective | Mechanism | Consequences |
|---|---|---|
| Encoding | Construction specifies pragmatic conditions explicitly | Less flexible, more predictable |
| Inference | Pragmatics computed dynamically from context | Flexible, gradient, context-dependent |
Evidence:
- Experimental psycholinguistics: eye-tracking shows delayed processing when discourse expectations are violated; ERP studies indicate real-time inferencing of pragmatic content
- Corpus studies: certain constructions are consistently tied to specific discourse frames, suggesting partial encoding
- Cross-linguistic variation: languages differ in how much pragmatics is built into constructions vs. inferred

Constructionist synthesis:
- Some pragmatic constraints are schematized
- Others are flexibly inferred, allowing context-sensitive interpretation and creativity
Implications for Gradient Grammar
Research Questions & Future Directions
- What is the computational architecture for integrating pragmatics into constructions?
- How are discourse-licensed constructions stored in the mental lexicon?
- Can large language models simulate constructional pragmatics?
- How do incongruity constructions inform theories of creativity and innovation in language?

In sum:
- Pragmatics within constructions is dynamic, probabilistic, and context-sensitive
- Constructions encode partial pragmatic constraints, leaving inference to discourse and cognition
- Networked, gradient grammar provides a unified framework linking form, meaning, context, and cognitive processing
Next: Psycholinguistic Evidence, bridging constructionist theory with priming, eye-tracking, and neural data, completing Part VI.
14- Psycholinguistic Evidence
- Structural priming
- Eye-tracking
- EEG
- Argument structure activation
Construction Grammar in the Brain
Construction Grammar posits that form–meaning pairings are the core units of grammar. Psycholinguistic research seeks to test:
- Are constructions cognitively real units?
- How are argument structures activated during comprehension and production?
- Do constructions have distinct neural signatures?
This section evaluates the empirical evidence from structural priming, eye-tracking, EEG/ERP, and fMRI.
Structural Priming
Eye-Tracking Studies
Critical Evaluation
Strengths:
- Psycholinguistic data supports cognitive reality of constructions
- Evidence for abstract, generalized patterns beyond individual lexical items
- Interaction of frequency, entrenchment, and probabilistic activation is observable
Limitations:
- Neural distinctiveness of constructions remains debated
- fMRI resolution insufficient to resolve fine-grained network hubs
- EEG signals can be ambiguous between syntactic vs. semantic processing
Future directions:
- Multimodal neuroimaging (EEG + fMRI) to map constructional activation
- Cross-linguistic psycholinguistic studies to test universality of constructional networks
- Computational modeling linking network topology with neural activation patterns
15- Neural Realization of Constructions
- Distributed representation models
- Constructional activation patterns
- Embodied semantics
From Mind to Brain
Construction Grammar posits that form–meaning pairings are the core units of grammar. Section 14 reviewed psycholinguistic evidence; here we ask:
- How are constructions realized in the brain?
- Can neural systems encode abstract constructions and argument structure patterns?
This section integrates distributed representation models, neuroimaging evidence, and embodied semantics to offer a speculative but theoretically grounded proposal.
Distributed Representation Models
- Neural realizations of constructions are likely distributed, not localized
- Constructions may be represented as networks of interconnected neurons:
  - Form nodes → phonological and syntactic patterns
  - Meaning nodes → semantic and argument structures
  - Edges → associative links shaped by experience and frequency
- Cognitive plausibility: explains gradience, polysemy, and coercion; aligns with connectionist and deep learning approaches
- Computational parallels: neural network embeddings (word2vec, LLMs) encode constructional patterns implicitly; weight distributions mirror entrenchment and type frequency
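A purely illustrative sketch of the form–meaning node idea: links between co-activated form and meaning nodes are strengthened with each exposure, a Hebbian-flavored rule assumed here for illustration only.

```python
# Toy distributed representation: form and meaning nodes linked by weights
# that grow with co-occurrence (Hebbian-flavored, purely illustrative).
import itertools
from collections import defaultdict

weights = defaultdict(float)

def co_activate(form_nodes, meaning_nodes, rate=0.1):
    # Strengthen every form-meaning link that fires together.
    for f, m in itertools.product(form_nodes, meaning_nodes):
        weights[(f, m)] += rate

for _ in range(20):  # frequent exposure to the ditransitive pairing
    co_activate({"NP V NP NP"}, {"transfer", "recipient"})
co_activate({"NP V NP PP"}, {"transfer", "path"})  # rarer caused-motion pairing

strongest = max(weights, key=weights.get)
print(strongest, round(weights[strongest], 1))  # entrenched pairing dominates
```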
Constructional Activation Patterns
Evidence from ERP, MEG, and fMRI suggests distributed networks activate during comprehension:
- Broca’s area → syntactic schemata and argument structure retrieval
- Temporal lobes → semantic frame processing
- Parietal cortex → integration of action and event knowledge
- Motor regions → processing of action-related constructions (embodied semantics)

Activation is context-dependent:
- Familiar, high-frequency constructions → rapid, efficient activation
- Novel or low-frequency constructions → broader cortical recruitment, reflecting coercion and inference

Hypothesis:
Constructions correspond to dynamic neural assemblies in which form and meaning nodes co-activate, modulated by context and prior experience.
Embodied Semantics
Some constructions encode sensorimotor knowledge:
- Throw the ball → activates motor planning areas
- Sneeze the napkin off the table → combines abstract causal reasoning with sensorimotor simulation

Embodied semantics supports:
- Argument structure acquisition
- Coercion and polysemy
- Creative extensions of constructions

Implication: constructions are grounded in experience, linking linguistic, motor, and perceptual systems.
Network-Level Speculation
The scale-free network hypothesis (Section 11) extends to the brain:
- Hub constructions → highly entrenched, densely connected neural nodes
- Peripheral constructions → novel, sparsely connected assemblies
- Efficient retrieval and generalization correspond to small-world network dynamics in cortical circuits

Neural predictions:
- Hub constructions → rapid, automatic activation; minimal cortical recruitment
- Peripheral constructions → slower activation, more distributed cortical recruitment
- Polysemy → overlapping activation patterns across related constructions
Evidence from Computational Modeling
Recurrent and transformer-based neural networks replicate constructional effects:
- Implicit learning of argument structures
- Prediction of coercion effects (She sneezed the napkin off the table)
- Distributional frequency effects that mimic entrenchment

Implication:
- Large-scale models provide a bridge between symbolic CxG and neural computation
- Language networks in the brain and deep learning models may share analogous representational properties
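One concrete way to probe such effects, sketched below under the assumption that lower language-model loss tracks entrenchment: score an entrenched sentence and a novel coercion with an off-the-shelf causal LM (gpt2 via the Hugging Face transformers library).

```python
# Sketch: comparing model surprisal for entrenched vs. novel constructions
# with a small causal LM. Lower loss = higher probability under the model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_loss(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns mean token negative log-prob.
        return model(ids, labels=ids).loss.item()

print(sentence_loss("She dropped the cup."))
print(sentence_loss("She sneezed the napkin off the table."))
# Expect lower loss for the entrenched causative than for the novel
# caused-motion coercion, mirroring frequency/entrenchment effects.
```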
Open Questions and Future Directions
- How do frequency, entrenchment, and network centrality modulate neural activation?
- Can multimodal fMRI + MEG/EEG imaging reveal dynamic constructional assemblies in real time?
- How does embodied experience shape construction acquisition and generalization?
- Can computational models predict human neural responses to novel constructions?

In sum: embodied semantics anchors constructions in sensorimotor and perceptual experience, and constructions themselves are dynamic, probabilistic, and context-sensitive, yielding a cognitively and neurobiologically plausible account of grammar.
PART VII — THE GRAND DEBATE
16- Construction Grammar vs Minimalism
Comparison across:
| Dimension | Minimalism | Construction Grammar |
|---|---|---|
| Ontology | Derivational computation | Networked pairings |
| Learning | UG + parameter setting | Statistical abstraction |
| Productivity | Rule-based | Gradient |
| Universals | Innate | Emergent |
| Economy | Central | Peripheral |
Two Competing Paradigms
Construction Grammar (CxG) and Minimalism (MP) represent diametrically opposed approaches to understanding grammar:
- Minimalism: Grammar is derivational, computational, and economy-driven
- Construction Grammar: Grammar is a network of learned form–meaning pairings, usage-driven and probabilistic
This section provides a systematic comparison along multiple dimensions, highlighting theoretical and empirical tensions.
Comparative Dimensions
| Dimension | Minimalism | Construction Grammar |
|---|---|---|
| Ontology | Derivational computation; rules generate structures from a small set of primitives | Networked pairings; constructions are stored as units at multiple levels of abstraction |
| Learning | UG + parameter setting; poverty-of-stimulus problem central | Statistical abstraction; frequency, entrenchment, preemption guide acquisition |
| Productivity | Rule-based; categorical | Gradient; probabilistic, frequency-sensitive |
| Universals | Innate principles and constraints | Emergent from experience and typological patterns |
| Economy | Central organizing principle (e.g., minimal derivations) | Peripheral; economy is emergent through frequency and cognitive cost |
Ontological Tension
- Minimalism: Abstract operations like Merge and Move define core syntactic structures
- CxG: Structures emerge from conventionalized pairings; no derivational primitives exist
Critical Question:
Is grammar computationally reducible, or is it better understood as a probabilistic network?
Learning and Acquisition
- Minimalism: Children set parameters with limited evidence; UG provides universal constraints
- CxG: Children track frequency, distribution, and preemption
Empirical evidence favors CxG in gradient acquisition phenomena:
- Overgeneralization errors (He goed) are preempted by entrenched constructions
- Productivity is probabilistic, not categorical
Productivity and Gradience
- Minimalism: Categorical rules generate any grammatical combination
- CxG: Gradient acceptability; frequency modulates ease of use and generalization
Example:
- Ditransitive “She gave him a book” vs. caused-motion coercion “She sneezed the napkin off the table”
- Minimalism predicts both as derivable
- CxG predicts gradient acceptability, depending on constructional entrenchment
Universals and Typology
- Minimalism: Universal Grammar (UG) imposes constraints across languages
- CxG: Cross-linguistic generalizations emerge from shared cognitive capacities, usage patterns, and networked constructions
Implications:
- Typologically diverse languages (Saraiki, Urdu, Japanese) support construction-specific categories, challenging strict UG universals
- CxG provides a flexible, adaptive model for language diversity
Economy and Derivational Cost
- Minimalism: Economy is central; derivations must be minimal
- CxG: Economy is emergent, shaped by frequency, cognitive load, and entrenchment
- Constructions with high token frequency are retrieved more efficiently; low-frequency constructions require coercion or inferential effort
Critical Synthesis
For PhD-level research:
- CxG encourages corpus-based, experimental, and computational approaches
- Minimalism encourages formal proofs, derivational modeling, and parameterized cross-linguistic comparison

A modern research agenda may integrate:
- Psycholinguistic experimentation (priming, eye-tracking)
- Network modeling of constructions
- Computational simulations of grammar acquisition
- Cross-framework comparison studies
Construction Grammar and Minimalism represent two poles of linguistic theorizing: emergent, networked, and probabilistic versus derivational, universal, and economy-driven.
The PhD-level challenge is to navigate these paradigms critically, identifying where constructions suffice and where derivational insight is indispensable.
Future research requires formal precision, computational scalability, and cross-linguistic grounding.
The Future of Construction Grammar
CxG must resolve:
- Ontological precision
- Formal explicitness
- Computational scalability
- Neurocognitive validation
Its survival depends on integrating:
- Formal constraint systems
- Statistical modeling
- Neural plausibility
The Challenge Ahead
Construction Grammar (CxG) has transformed our understanding of grammar:
- Dissolving the lexicon–syntax divide
- Accounting for argument structure independent of verbs
- Explaining gradient productivity, coercion, and frequency effects
- Modeling grammar as a networked, evolving system
Yet, CxG faces crucial challenges that must be addressed to remain a leading theoretical framework:
- Ontological precision
- Formal explicitness
- Computational scalability
- Neurocognitive validation
This section outlines a roadmap for the next generation of CxG research.
Ontological Precision
The Boundary Problem remains unresolved: when does a pattern qualify as a construction?
Current practice risks ontological inflation: if everything is a construction, the theory loses predictive power.
Future directions:
- Define formal criteria: conventionalization, frequency thresholds, semantic non-compositionality
- Empirically anchor constructions using psycholinguistic and corpus evidence
- Integrate cross-linguistic and diachronic perspectives to distinguish universal from language-specific units

Goal: constructions as cognitively real, empirically grounded entities
Formal Explicitness
CxG has historically been stated informally, especially in Goldbergian frameworks. Formal dialects (SBCG, RCG) provide a pathway but often sacrifice cognitive elegance.
Needed advances:
- Typed feature structures for argument structure
- Constraint-based formulations that remain usage-driven
- Mathematical models of schematicity and generalization

Goal: a rigorous formalism that supports both computational implementation and cognitive plausibility
Computational Scalability
Large-scale language modeling demands CxG-compatible architectures:
- Fluid Construction Grammar (FCG)
- FrameNet-inspired parsing
- Neural embeddings and LLMs

Challenges:
- Scaling networked constructions to millions of patterns
- Integrating probabilistic learning, preemption, and frequency effects
- Modeling creative coercion and polysemy computationally

Goal: construction-aware AI and simulations of human grammar
Neurocognitive Validation
- Psycholinguistic studies (priming, eye-tracking, ERP) support constructional units
- Neural distinctiveness remains debated: are constructions distinct nodes or emergent network patterns?

Future research directions:
- Multimodal neuroimaging (EEG + fMRI) to map constructional assemblies
- Testing hub–periphery network hypotheses in the brain
- Investigating embodied semantics and sensorimotor grounding

Goal: link constructional theory with observable brain activity
Integrative Future: Bridging Theory, Computation, and Cognition
The survival and advancement of CxG depend on synergistic integration:
- Formal constraint systems → precision, predictivity, and generativity
- Statistical modeling → gradient productivity, frequency effects, and usage patterns
- Neural plausibility → anchoring in cognitive and neurobiological reality

Such integration will allow CxG to compete with Minimalism, HPSG, and other formalist paradigms at the highest level of theoretical and empirical rigor.
Vision
Construction Grammar is poised to become a comprehensive theory of human grammar if it can:
- Precisely define the ontology of constructions
- Combine formal rigor with cognitive realism
- Scale to computational and empirical demands
- Demonstrate neural plausibility through psycholinguistic and neuroimaging research
The next generation of CxG scholars will not only map form–meaning networks but also bridge mind, brain, and computation, creating a truly unified theory of language.
References
Antonopoulou, E., & Nikiforidou, K. (2011). Construction grammar and conventional discourse: A construction-based approach to discoursal incongruity. Journal of Pragmatics, 43(10), 2594–2609.
Baker, C. F., Fillmore, C. J., & Lowe, J. B. (1998). The Berkeley FrameNet project. In Proceedings of COLING-ACL ’98 (pp. 86–90).
Barðdal, J. (2008). Productivity: Evidence from case and argument structure in Icelandic. John Benjamins.
Barðdal, J. (2008). Appendix A: Predicates and case and argument structure constructions in the text corpora. In Productivity (pp. 191–198). John Benjamins.
Bencini, G. (2017). Speech errors as a window on language and thought: A cognitive science perspective. Altre modernità, 243–262.
Borin, L., & Lyngfelt, B. (2025). Framenets and constructicons. In The Cambridge handbook of construction grammar. Cambridge University Press.
Bybee, J. (2010). Language, usage and cognition. Cambridge University Press.
Bybee, J. L. (2006). From usage to grammar: The mind's response to repetition. Language, 82(4), 711-733.
Bybee, J., & Thompson, S. (1997, September). Three frequency effects in syntax. In Annual Meeting of the Berkeley Linguistics Society (pp. 378-388).
Bybee, J., & Scheibman, J. (1999). The effect of usage on degrees of constituency: the reduction of don't in English. Linguistics: An interdisciplinary journal of the language sciences, 37(4).
Bybee, J. (2003). Phonology and language use (Vol. 94). Cambridge University Press.
Croft, W. (2001). Radical construction grammar: Syntactic theory in typological perspective. Oxford University Press.
Croft, W. (2010). Construction grammar.
Diessel, H. (2019). The grammar network. Cambridge University Press.
Ellis, R. (1997). Second language acquisition. Oxford University Press.
Fillmore, C. J. (1988). The mechanisms of “construction grammar”. In Annual Meeting of the Berkeley Linguistics Society (pp. 35–55).
Fillmore, C. J., Kay, P., & O’Connor, M. C. (2014). Regularity and idiomaticity in grammatical constructions: The case of let alone. In The new psychology of language (pp. 243–270). Psychology Press.
Geeraerts, D., Dirven, R., Taylor, J. R., Langacker, R. W., & Geeraerts, D. (Eds.). (2006). Cognitive linguistics: Basic readings. Mouton de Gruyter.
Goldberg, A. E. (1995). Constructions: A construction grammar approach to argument structure. University of Chicago Press.
Goldberg, A. E. (2006). Constructions at work: The nature of generalization in language. Oxford University Press
Goldberg, A., & Suttle, L. (2010). Construction grammar. Wiley Interdisciplinary Reviews: Cognitive Science, 1(4), 468–477.
Goldberg, A. E., & Bencini, G. M. (2005). Support from language processing for a constructional approach to grammar. Language in use: Cognitive and discourse perspectives on language and language learning, 3-18.
Goldberg, A. E., Casenhiser, D. M., & Sethuraman, N. (2005). The role of prediction in construction-learning. Journal of Child Language, 32(2), 407-426.
Goldberg, A. E., Casenhiser, D., & White, T. R. (2007). Constructions as categories of language. New ideas in psychology, 25(2), 70-86.
Hare, M. L., & Goldberg, A. E. (1999). Structural priming: Purely syntactic? In Proceedings of the Twenty-First Annual Conference of the Cognitive Science Society (pp. 208–211). Psychology Press.
Haspelmath, M. (2004). Explaining the ditransitive person-role constraint: A usage-based approach. Constructions, 1.
Haspelmath, M. (2025). Construction-strategies. In Locative and existential predication: On forms, functions and neighboring domains, 9.
Haspelmath, M. (2025). Some comments on robustness in comparative grammar research: commentary on “Replication and methodological robustness in quantitative typology” by Becker and Guzmán Naranjo. Linguistic Typology, 29(3), 519-526.
Haspelmath, M. (n.d.). Beijing lectures 3: On the nature of the word.
Haspelmath, M. (2023). On what a construction is. Constructions, 15(1).
Hilpert, M. (2019). Construction grammar and its application to English. Edinburgh University Press.
Hoffmann, T. (2022). Construction grammar. Cambridge University Press.
Hoffmann, T., & Trousdale, G. (Eds.). (2013). The Oxford handbook of construction grammar. Oxford University Press.
Jackendoff, R. (2002). Foundations of language: Brain, meaning, grammar, evolution. Oxford University Press.
Langacker, R. W. (1987). Foundations of cognitive grammar: Volume I: Theoretical prerequisites (Vol. 1). Stanford University Press.
Leclercq, B. (2023). Linguistic Knowledge and language use: Bridging construction Grammar and relevance theory. Cambridge University Press.
Leclercq, B., Desagulier, G., & Glynn, D. (2024). Constructions and context(s). CogniTextes, 25.
Pappert, S., & Pechmann, T. (2013). Bidirectional structural priming across alternations: Evidence from the generation of dative and benefactive alternation structures in German. Language and Cognitive Processes, 28(9), 1303-1322.
Perek, F. (2021). Construction grammar in action: The English Constructicon project. CogniTextes, 21.
Perek, F., & Patten, A. L. (2019). Towards an English constructicon using patterns and frames. International Journal of Corpus Linguistics, 24(3), 354–384.
Pickering, M. J., & Ferreira, V. S. (2008). Structural priming: a critical review. Psychological bulletin, 134(3), 427.
Pine, J. M. (2005). Review of M. Tomasello, Constructing a language: A usage-based theory of language acquisition. Journal of Child Language, 32(3), 697–702.
Sag, I. A., Boas, H. C., & Kay, P. (2012). Introducing sign-based construction grammar. Sign-based construction grammar, 1–29.
Silvennoinen, O. O. (2023). Is construction grammar cognitive?. Constructions, 15(1).
Sommerer, L., Van de Velde, F., Fried, M., & Nikiforidou, K. (2025). Constructional networks.
Steels, L. (Ed.). (2011). Design patterns in fluid construction grammar. John Benjamins.
Tizón-Couto, D. (2021). Holger Diessel, The grammar network: How linguistic structure is shaped by language use. Cambridge: Cambridge University Press, 2019. Pp. xvii+ 289. ISBN 9781108671040. English Language & Linguistics, 25(3), 663-671.
Traugott, E. C., & Trousdale, G. (2013). Constructionalization and constructional changes (Vol. 6). Oxford University Press.
Van Valin, R. D. (2007). Review of Adele E. Goldberg, Constructions at Work: The Nature of Generalization in Language. Journal of Linguistics, 43(1), 234–240.
Xu, L., Wu, J., Peng, J., Gong, Z., Cai, M., & Wang, T. (2023). Enhancing language representation with constructional information for natural language understanding. arXiv preprint arXiv:2306.02819.