
(Image source: Paul Pietroski)

Language Inside the Head

Meaning, Mind, and the Hidden Architecture of Human Thought

This post is written for those who were told that philosophy is too abstract, linguistics too technical, and semantics too elite.


It rejects that verdict.

Language is not a museum artifact preserved in textbooks. It is a living cognitive capacity, executed silently, instantly, and creatively inside ordinary human minds. This post defends a radical but empirically grounded claim: to understand meaning, we must study the internal procedures that generate it, not the social products that result.


Inspired by the work of Paul Pietroski, this post offers a fully internalist theory of meaning, written for global readers beyond elite universities.


Preface


For decades, students have been taught that meaning is something sentences have because they match the world. Truth tables, models, and extensions became the gatekeepers of semantic theory.


But ordinary speakers never consult models. Children acquire language without truth conditions. Brains execute instructions, not logical proofs.


This post begins from a different place: meaning is what the mind does with linguistic form.


1: Two Ways of Thinking About Language


There are two radically different conceptions of language:

E-language: language as an external object, a set of sentences, inscriptions, or conventions.

I-language: language as an internal computational system implemented in individual brains.


Cognitive science has no choice. Only internal systems can explain acquisition, processing, creativity, and error. Languages do not float in the air; they run in heads.


2: Why Internalism Is Not Optional


If meaning were external, learning a language would require learning infinite facts about the world. Yet children learn language rapidly and uniformly.


Internalism explains this by treating meaning as part of a biological capacity, constrained by human cognition.


This section dismantles the myth that internalist semantics reduces meaning to “mere syntax.” Instructions are not empty; they are executable.


3: What Semantics Really Is (and Is Not)


The word semantics has been hijacked.


Originally, it meant the study of meaning. After Tarski, it came to mean truth conditions for formal languages.


Natural language does not owe allegiance to this technical invention. Treating truth-conditional semantics as mandatory for natural language was a historical gamble, not a discovery.


4: Meanings as Mental Instructions


A sentence does not describe the world. It instructs the mind.

Just as musical notation instructs performance, linguistic meaning instructs cognition.


This section develops the central metaphor:

Pronunciations are instructions for articulation.

Meanings are instructions for building concepts.

The result is not relativism, but explanatory power.


5: Meaning Is Not a Concept


One of the deepest confusions in semantics is the failure to distinguish:

meanings (instructions)

concepts (what gets built)


The same instruction can yield different conceptual outcomes in different contexts. This explains ambiguity, flexibility, and creativity without bloating lexical meaning.
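
To make the distinction concrete, here is a small sketch in Python. It is only an illustration, assuming a toy Concept record and made-up context labels that are not part of Pietroski's formalism: the meaning is the procedure, and the concept is whatever that procedure builds when it runs.

```python
# A toy model, not Pietroski's formalism: the meaning of "book" is an
# instruction (a procedure), and the concept is whatever that instruction
# builds when executed. The Concept class and context labels are invented.

from dataclasses import dataclass, field

@dataclass
class Concept:
    label: str
    features: dict = field(default_factory=dict)

def meaning_BOOK(context: dict) -> Concept:
    """One instruction; what it builds depends on the context of execution."""
    if context.get("frame") == "reading":
        # "The book was gripping": assemble a content-oriented concept.
        return Concept("BOOK", {"aspect": "informational"})
    # "The book is heavy": assemble a physical-object concept.
    return Concept("BOOK", {"aspect": "physical"})

# Same instruction, different conceptual outcomes.
print(meaning_BOOK({"frame": "reading"}))
print(meaning_BOOK({"frame": "carrying"}))
```

The flexibility lives in what gets built, not in a swollen inventory of stored senses.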


6: Lexical Meaning Without Encyclopedias


Words like cat, democracy, and book do not encode theories.

They instruct the cognitive system to retrieve concepts from a pre-existing conceptual repertoire.

This section explains why lexical meanings must be minimal: if they were not, communication would collapse under individual differences.
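
As a rough picture, with the repertoire entries invented purely for the example, a lexical meaning can be modeled as a bare retrieval instruction into a speaker's own conceptual stock rather than as an encyclopedia entry:

```python
# A toy model: a word's meaning is a minimal fetch instruction; the rich,
# speaker-variable content lives in the conceptual repertoire it points into.
# The repertoire entries below are placeholders, not claims about content.

CONCEPTUAL_REPERTOIRE = {
    "CAT": "this speaker's cat concept, however rich or idiosyncratic",
    "DEMOCRACY": "this speaker's democracy concept",
}

def lexical_meaning(address: str):
    """Return the retrieval instruction itself, not the retrieved content."""
    def instruction():
        return CONCEPTUAL_REPERTOIRE[address]
    return instruction

cat = lexical_meaning("CAT")
print(cat())
```

Two speakers can share the instruction even though their repertoires differ, which is how minimal meanings keep communication stable across individual variation.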


7: Functional Words and Cognitive Operations


Words like every, if, and most do not name things. They trigger operations.


This section explores how functional vocabulary interfaces with:

numerical cognition

relational reasoning

conditional thinking

Quantifiers reveal where language meets ancient cognitive systems shared with other animals.
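
Here is a deliberately simplified sketch. Encoding the restrictor and scope as Python sets is a convenience, not a claim about mental data structures, but it shows the kind of operation these words can be taken to trigger; the cardinality comparison for most echoes the verification procedure explored in experimental work on that word.

```python
# A simplified sketch: "every" and "most" as operations over sets of
# individuals, not as names for things. Using Python sets is a convenience
# for the example, not a hypothesis about the mind's data structures.

def every(restrictor: set, scope: set) -> bool:
    # "Every A is B": no A falls outside B.
    return restrictor <= scope

def most(restrictor: set, scope: set) -> bool:
    # "Most A are B": the As that are B outnumber the As that are not.
    # Experimental work on "most" suggests speakers verify it with a
    # cardinality comparison of roughly this shape.
    return len(restrictor & scope) > len(restrictor - scope)

dogs = {"rex", "fido", "bella"}
barkers = {"rex", "fido", "momo"}

print(every(dogs, barkers))  # False: bella does not bark
print(most(dogs, barkers))   # True: two of the three dogs bark
```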


8: What Compositionality Must Explain


Recursion alone is not composition.

True compositionality explains why longer expressions are systematically more informative and more restrictive than their parts.


Viewing meanings as instructions allows composition to be modeled as instruction embedding, not mere specification.
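
A minimal sketch, assuming a conjunction-style combination rule adopted here only for illustration, shows how embedding one instruction inside another yields something more restrictive than either part:

```python
# A minimal sketch: composition as embedding one instruction inside another.
# The conjunction-style combination rule is an assumption made for this toy.

from typing import Callable

Instruction = Callable[[dict], bool]  # an instruction, run against a mental "file"

def brown(entity: dict) -> bool:
    return entity.get("color") == "brown"

def cow(entity: dict) -> bool:
    return entity.get("kind") == "cow"

def combine(left: Instruction, right: Instruction) -> Instruction:
    # Embedding: the composed instruction executes both parts, so
    # "brown cow" applies to strictly fewer things than "cow" alone.
    return lambda entity: left(entity) and right(entity)

brown_cow = combine(brown, cow)
print(brown_cow({"kind": "cow", "color": "brown"}))  # True
print(brown_cow({"kind": "cow", "color": "black"}))  # False
```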


9: Why Meanings Are So Simple


Natural languages overwhelmingly build meanings from:

one-place properties

two-place relations


This section explains why three-place predicates are rare, and why complexity is deferred to higher levels of structure.

Simplicity is not a limitation. It is a design feature.
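
One way to picture this, borrowing the spirit of the event-based analyses Pietroski favors while inventing the predicate names for the example, is to replace a three-place GIVE with a bundle of one- and two-place predicates linked by an event variable:

```python
# An illustrative decomposition: instead of a primitive three-place
# GIVE(x, y, z), the meaning is assembled from one- and two-place predicates
# linked through an event variable. The predicate names are invented here.

def giving(event: dict) -> bool:              # one-place: e is a giving
    return event.get("type") == "giving"

def agent(event: dict, x: str) -> bool:       # two-place: x is the agent of e
    return event.get("agent") == x

def theme(event: dict, y: str) -> bool:       # two-place: y is the theme of e
    return event.get("theme") == y

def recipient(event: dict, z: str) -> bool:   # two-place: z is the recipient of e
    return event.get("recipient") == z

e = {"type": "giving", "agent": "Ann", "theme": "book", "recipient": "Bo"}

# "Ann gave Bo a book": a conjunction of small predicates, not one big relation.
print(giving(e) and agent(e, "Ann") and theme(e, "book") and recipient(e, "Bo"))
```

Nothing in the sketch requires a primitive three-place relation; the apparent complexity of giving is assembled from simpler pieces at a higher level of structure.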


10: Innateness Without Excess


Language is biologically real, but biology is conservative.

This section defends a minimalist nativism:

syntax provides structure

semantics maps that structure onto a restricted conceptual space

Language builds complexity by recombining simple inherited resources.


11: Where Semantics Ends and Pragmatics Begins


Semantics delivers instructions. Pragmatics uses them.

Some pragmatic phenomena are theoretically tractable. Others resist formalization.

This section argues that recognizing limits is a scientific virtue, not a failure.


12: Interfaces as Scientific Opportunity


The most promising research lies at interfaces:

language and number

language and action

language and perception

By studying where systems meet, we learn what language really contributes.


13: Does Natural Language Have Truth Conditions?


Truth-conditional semantics remains a powerful idealization.

But idealizations are tools, not truths.

This section shows how truth can be recovered at the level of use without being encoded in meaning.


14: Why This Matters Beyond Academia


Internalist semantics has ethical consequences.

If meaning lives in human cognitive capacity, then no accent, passport, or institution has privileged access to it.

Linguistic competence is universal, even when opportunity is not.


Epilogue: A Program, Not a Conclusion


This post does not close debates. It opens them.

The future of semantics lies in empirical rigor, conceptual humility, and global access.

Language belongs to minds, not to elites.


Share it; knowledge grows by circulation, not enclosure.


Paul Pietroski on Internalism and the Philosophy of Linguistics
