We know (as discussed in a previous post) that you don’t learn to make sentences by memorizing them. Rather, you pick up on general word-order patterns—patterns based on grammatical categories like Noun (N), Verb (V), and Adjective (Adj).
But even with grammatical categories, there are still two fundamentally different ways—each, on the surface, equally plausible—that things might work in your brain:
Model One – Full-Sentence Templates
It could be that each pattern gives you the word order for a full sentence, as in the example below (similar to the N V and N V N templates we used in the last post):
If this model were right, building a sentence would just be a matter of choosing a sentence template and filling words into the ‘slots’—though note that you’d have to learn a separate template for each type of sentence in your language.
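As a rough sketch of what Model One amounts to, here is a tiny lookup table of full-sentence templates in Python. The template names, categories, and vocabulary are invented for illustration; the point is just that every sentence type needs its own complete pattern:

```python
# A minimal sketch of the full-sentence-template model (Model One).
# Each template lists the categories for one complete sentence type;
# building a sentence means picking a template and filling its slots.
import random

templates = {
    "intransitive": ["N", "V"],         # e.g. "birds sing"
    "transitive":   ["N", "V", "N"],    # e.g. "dogs chase cats"
    "adjectival":   ["Adj", "N", "V"],  # e.g. "small birds sing"
}

words = {
    "N":   ["birds", "cats", "dogs"],
    "V":   ["sing", "sleep", "chase"],
    "Adj": ["small", "happy"],
}

def fill(template_name):
    """Fill each slot of the chosen template with a word of that category."""
    return " ".join(random.choice(words[cat]) for cat in templates[template_name])

print(fill("transitive"))  # e.g. "dogs chase cats"
```

Note that nothing here is shared between templates: a new sentence type means a whole new entry in `templates`, which is exactly the weakness discussed below.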
Model Two – Step by Step: A ‘Generative’ Grammar
An alternative way of understanding sentence building is that each pattern describes only part of a sentence. According to this model, you follow one pattern to build one part of the sentence, and another pattern to build another part (or to combine the pre-built parts together).
Linguists who believe this is right describe it in terms of ‘generative rules’—rules that ‘generate’ chunks of sentences according to certain patterns (or that combine chunks together). The examples below illustrate the idea (using simplified versions of the rules, which we will expand on shortly):
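To make the idea concrete, here is a minimal sketch of generative rules in code. The specific rules and vocabulary are invented for illustration (simplified stand-ins for the kinds of rules the next posts develop): each rule rewrites one category into smaller parts, and a sentence is assembled piece by piece rather than from a single full template:

```python
# A minimal sketch of the generative-rule model (Model Two).
# Each rule rewrites one category into a sequence of smaller parts;
# a sentence is built step by step, not from one full-sentence template.
import random

rules = {
    "S":  [["NP", "VP"]],          # a sentence is an NP followed by a VP
    "NP": [["N"], ["Adj", "N"]],   # a noun phrase: a noun, optionally with an adjective
    "VP": [["V"], ["V", "NP"]],    # a verb phrase: a verb, optionally with an object NP
}

words = {"N": ["birds", "cats"], "V": ["sing", "chase"], "Adj": ["small"]}

def generate(category):
    """Expand a category by applying one of its rules, recursively."""
    if category in words:                        # word-level category: pick a word
        return random.choice(words[category])
    expansion = random.choice(rules[category])   # pick one rule for this category
    return " ".join(generate(part) for part in expansion)

print(generate("S"))  # e.g. "small birds chase cats"
```

Notice that three small rules already cover intransitive, transitive, and adjective-modified sentences—the combinations fall out of the rules rather than being listed one by one.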
So which approach is right? Well, full-sentence templates might seem simpler, and they do (as we discussed in the last post) let you create very large numbers of sentences. Nevertheless, only the latter approach—the ‘generative rule’ approach—really works. Here’s why:
POWER – Generative rule systems are extremely powerful: a relatively small number of rules can generate a ginormous variety of sentences. To capture the full variety of sentence types in natural language with sentence templates, you’d need an impossibly large number of templates.
LEARNABILITY – Despite their power, generative rule systems are also far more learnable: there are relatively few things to learn, and learning a particular generative rule is often just a matter of choosing between a couple of pre-determined options. This fits with the fact that, despite their surface complexity, human languages are easily learnable—at least for kids.
STRUCTURE – The generative approach groups words into units, like the ‘NP’s and ‘VP’s illustrated above—and these units turn out to be linguistically ‘real’ in other ways, too. For example, most languages have processes, such as stylistic inversions, that move the words of an NP (or VP, etc.) as a single unit.
INFINITY – Finally, the generative approach captures infinity: with just a small set of generative rules (as few as three or four, if you’re clever) you can generate not just lots of sentences, but an infinite number—something that is impossible to do with non-generative templates. And (as discussed in a previous post) all human languages are, indeed, infinite.
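The infinity point can be made concrete with a single recursive rule. In the sketch below (the rules, nouns, and verb are invented for illustration), an NP may contain a PP, and a PP itself contains an NP—so noun phrases nest inside noun phrases without limit, and no finite list of templates could ever cover them all:

```python
# Recursion in a generative rule system yields infinitely many sentences.
# NP -> N PP and PP -> P NP let noun phrases nest inside noun phrases
# without limit; each extra depth adds one more "in the ..." phrase.
import itertools

nouns = ["bird", "tree", "garden", "city"]

def noun_phrase(depth):
    """Apply NP -> N PP (with PP -> 'in' NP) `depth` times."""
    parts = ["the " + n for n, _ in zip(itertools.cycle(nouns), range(depth + 1))]
    return " in ".join(parts)

def sentence(depth):
    """S -> NP VP, with the VP fixed as 'sings' for simplicity."""
    return noun_phrase(depth) + " sings"

for d in range(3):
    print(sentence(d))
# the bird sings
# the bird in the tree sings
# the bird in the tree in the garden sings
```

Since `depth` can be any whole number, the two rules define a different, longer sentence for every value—an infinite set produced by a finite system.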
For all these reasons, it seems that you must have learned—albeit unconsciously, and certainly not from your grammar teachers—a system for building sentences based on generative rules.
In the next two posts in this series, we’ll explore exactly how these rules work, and how they give you this infinite power.
This is the third of five posts on the topic ‘Capturing Infinity in Natural Language’. Here is the series TOC:
- Post 1 – World’s Longest Sentence? (Introduction)
- Post 2 – The Power of Grammatical Categories
- Post 3 – The Infinite Power of Generative Rules (this post)
- Post 4 – The Power of NP (coming soon)
- Post 5 – VP, S, and Infinite Syntax (coming in a while)