Knowledge of Language
People are language users: they read, write, speak, and listen; and they do all of these things in natural languages such as English, Russian, and Arabic. Many philosophers and linguists have been interested in knowing what accounts for this facility that language users have with their language. A language may be thought of as an abstract system, characterized either as a set of grammatical rules or as an axiomatic theoretical structure (think, for example, of the way one would characterize chess as a set of rules, or the way one conceives of geometry as an axiomatic system). So the question may be posed: What relationship do speakers of a language have to the abstract system that constitutes the language they speak? The most popular line of thought is to cast this relationship in terms of knowledge, specifically, knowledge about linguistic facts: those who have mastered English have knowledge about the syntax and semantics of English. Moreover, it is because they have this knowledge that they are able to read, write, speak, and have conversations in English. Though this view is widely accepted, it is not without its objectors, and in the present article we shall examine the arguments for attributing linguistic knowledge to speakers and shall also think about the nature of this knowledge.
Table of Contents
- What is it that Speakers of a Language Know?
- Why Think that Speakers of a Language have Knowledge about their Language?
- What Kind of Knowledge is Tacit Knowledge?
- References and Further Reading
Alex Barber puts the thesis we shall be investigating this way:
…ordinary language users possess structures of knowledge, reasonably so called, of a complex system of rules or principles of language. (2003b, 3)
And Robert Matthews characterizes what he calls the “Received View” similarly:
Knowing a language is a matter of knowing the system of rules and principles that is the grammar for that language. To have such knowledge is to have an explicit internal representation of these rules and principles, which speakers use in the course of language production and understanding. (2003, 188-9)
There are three major questions that need to be addressed. First, assuming that it is correct to say that masters of a language have knowledge about their language, there is the question of what, precisely, they know. Stephen Stich (1971), in a discussion of speakers’ knowledge of syntactic principles and concepts, distinguishes three alternatives. (A) Speakers of a language might be said to know facts about the particular properties of particular sentences and expressions of their language. Those who speak English, for instance, might be said to know that “Mary had a little lamb” is ambiguous, or that “Nancy likes Ben” and “Ben is liked by Nancy” are related as active and passive voice transformations. (B) More generally, speakers might be said to know the syntactic and/or semantic theory for their language. Speakers of English might be said, on this alternative view, to know the entire Davidsonian truth theory for English or to know, on the syntactic side, that NP → Det+Adj+N is a rule of the grammar of English. (Stich, 1971, 480). (C) Finally, and most generally, speakers might be said to know the principles and rules of what linguists call universal grammar. That is, they might be said to know “that all human languages have phrase structure and transformational rules, or that the grammar of every language contains the rule S → NP+VP.” (Stich, 1971, 480). In more recent discussions of this topic which have centered on knowledge of a Davidsonian truth theory for the language rather than on knowledge of syntactic principles, the issue has been whether speakers know only the theorems of the truth theory or the axioms as well.
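Rules like NP → Det+Adj+N can be given a concrete, if drastically simplified, rendering. The toy grammar below is a sketch only — a handful of invented rules and words, nothing remotely like the grammar of English that speakers are alleged to know — but it shows the sense in which phrase-structure rules determine which strings of words count as sentences:

```python
# A toy phrase-structure grammar. Nonterminal categories map to lists
# of possible expansions; anything not listed is a terminal word.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "Adj", "N"], ["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"]],
    "Adj": [["little"]],
    "N":   [["lamb"], ["girl"]],
    "V":   [["saw"]],
}

def expand(symbol, choose=lambda options: options[0]):
    """Rewrite a symbol by the grammar until only terminal words remain."""
    if symbol not in GRAMMAR:           # terminal: a plain English word
        return [symbol]
    expansion = choose(GRAMMAR[symbol])  # pick one rule for this category
    words = []
    for part in expansion:
        words.extend(expand(part, choose))
    return words

print(" ".join(expand("S")))  # the little lamb saw the little lamb
```

Attributing knowledge of claim (B) to speakers would be attributing knowledge of rules of roughly this shape — though vastly more numerous, abstract, and technical than this invented fragment.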
Second, why should we think that the relevant relationship is one of knowledge at all? The movements of a bicyclist who successfully rounds a corner are properly described by a complicated set of equations in physics, but there is certainly no need for the bicyclist to know these equations in order to keep her balance. In a similar vein, then, why can we not say that the linguistic behavior of a speaker of English is merely properly described by the semantic and syntactic rules of English? Why, in other words, must we say that speakers of English know the rules of English instead of merely saying that their linguistic behavior is correctly described by those rules in the way that the bicyclist’s behavior is correctly described by the laws of physics? This article will briefly look at some of the more prominent arguments for the thesis that masters of a language know the semantic and syntactic theories of their language.
Third, and perhaps most importantly, there is the question of what sort of knowledge linguistic knowledge is. All the participants in this debate agree that if masters of English have knowledge of the semantic and/or syntactic theory of English, this knowledge is importantly different from more ordinary sorts of knowledge. Among other important differences, those who allegedly have knowledge of language are rarely, if ever, able to say what it is they know, and the knowledge in question is largely, if not entirely, inaccessible to consciousness. The term “tacit knowledge” has been introduced to mark this distinction. Ruth, an English speaker, may know, in the ordinary sense of the term, that Chicago is the largest city in Illinois (if asked, for instance, what the largest city in Illinois is, she will answer correctly), but the knowledge she has of the semantic theory of English is best characterized as “tacit” since she is unable, among other things, to think about or tell someone else the content of what she knows. We shall discuss further the arguments for thinking that the knowledge we have of our language is tacit, the ways in which tacit knowledge differs from knowledge in the ordinary sense of the term, and the different conceptions of tacit knowledge that have been offered over the years.
The question of tacit linguistic knowledge has come up in connection with two separate issues in the philosophy of language. It first arose in the 1960s in connection with Noam Chomsky’s claim that every speaker of a natural language knows both the grammar of the language she speaks (English, Arabic, and so on) and the universal grammar which specifies linguistic universals, or grammatical properties of all natural languages. Chomsky’s claims drew the attention of philosophers not simply because of his claims of tacit linguistic knowledge, but because he claimed that knowledge of the universal grammar was innate to human beings. This claim, inasmuch as it seemed to revive certain key principles of 17th-century Rationalism, quickly attracted critical attention from the philosophical world. According to Chomsky’s view (at least as it was once expressed), human beings are born knowing the principles of universal grammar and, by deploying those principles in an environment of, say, English speakers, they come to learn the grammar of English. Knowing the grammar of English, Chomsky further claimed, is necessary for being able to read, write, speak, and understand English. Since Chomsky’s concern was primarily with the syntactic rules and principles of a language, the debate surrounding Chomsky’s nativism became a debate about whether or not speakers have syntactic (or, as it is frequently called, grammatical) knowledge of their language. In connection with this debate, philosophers have seen fit to think about three separate knowledge claims:
(a) That speakers of a language know the grammatical properties of individual expressions of their language;
(b) That speakers of a language know the particular grammatical rules of a natural language; and
(c) That speakers of a language know the principles of universal grammar. (See Stich, 1971, and Graves et al., 1973 for this taxonomy)
Most of our discussion here will focus on (a) and (b), though we will make some brief mention of claim (c). One of the central issues in this debate turns on the fact that the grammatical rules for any natural language are abstract, technical, and complex and, as such, are formulated in concepts that the average speaker does not possess. Because of these features of the grammatical rules, many philosophers are hesitant to ascribe knowledge of them to speakers.

In the second place, the issue of tacit linguistic knowledge arose in connection with the truth-theoretic semantics inspired by the work of Donald Davidson. Davidson was more concerned with semantics than with syntax, and was interested in the project of constructing a semantic theory for a natural language. These theories (known in the literature as “T-theories” or “Truth-theories”) have an axiomatic structure, with the axioms specifying the meanings of the atomic elements of the language (roughly, the words) and the theorems — which are logically derived from the axioms — specifying the meanings of the sentences. Here the question of a speaker’s linguistic knowledge is the question of whether competent speakers of a language must be said to know the truth theory for their language, and, if they do, whether they are to be credited with knowledge of the theorems alone, or with knowledge of the axioms as well (though Davidson himself was not interested in this particular question).
One of the central issues in the debate over knowledge of the axioms of a truth theory is the idea that there are multiple ways of axiomatizing the same set of theorems. If English speakers are said to know the axioms of the truth theory for English, which axiom set do they know? In addition to this problem of multiple axiomatizations, the issues of complexity and inaccessibility to the consciousness of speakers that arise in the Chomskian debate also surface here.
It is clear that speakers’ linguistic knowledge, if they have it, is an odd sort of knowledge. That is, such knowledge differs in significant ways from ordinary, everyday knowledge. Though a complete analysis of the conditions for knowledge is well beyond the scope of this article, Stich lays out some relevant features of ordinary knowledge:
Commonly when a person knows that p he has occasionally reflected that p or has been aware that p; he will, if inclined to be truthful and otherwise psychologically normal, assert that p if asked. More basic still, he is capable of understanding some statement which expresses what he knows. (1971, 485-6)
But these conditions are rarely, if ever, met in the case of language users’ knowledge of the grammatical principles of their language. Martin Davies (1989) identifies three significant differences between tacit knowledge and knowledge ordinarily so called: propositions that are tacitly known (i) are inaccessible to the knower’s consciousness, (ii) deploy concepts which the knower only tacitly possesses, and (iii) are inferentially isolated from other propositions that the knower may know. (The inferential isolation of linguistic knowledge will be discussed in Section IV below.) The upshot of these considerations is that the argumentative burden is on the advocates of linguistic knowledge. After all, without such an argument, an appeal to Occam’s Razor would seem to tell us that the simplest approach is simply to say that speakers’ linguistic behavior is merely accurately described by the principles of a semantic or syntactic theory, not that they actually know the theory itself. (Think back to our example of the bicyclist: given that most bicyclists could not tell us, or even bring to their own consciousness, the details of the physical equations that describe their cycling behavior, without an argument for attributing to them knowledge of those equations, we should say only that their behavior is accurately described by those equations.) In this section we shall look at some of the more prominent arguments for the attribution of linguistic knowledge to masters of a language.
There are some accounts of the nature of language learning that seem to imply that masters of a language have knowledge about their language. According to some accounts, a child learning a language is involved in much the same sort of activity as a field linguist who is trying to figure out the language of the natives she is studying. The field linguist is involved in constructing a theory of the native language: the linguist formulates hypotheses about what certain words and phrases mean, tests these hypotheses (perhaps by making predictions about what the natives would say in a certain situation, or by talking to the natives and making predictions about their replies to her), and modifies her theory in light of the results of those tests. The idea is that infant language learners are “little linguists” involved in the same sort of process: the infant is engaged in the formulating, testing, and revision of hypotheses about the meaning and structure of the language being spoken by those around him. Of course, on this picture of language learning as theory construction, the theory construction takes place at a subconscious level and the hypotheses are formulated in the so-called Language of Thought, which is distinct from any natural language.
If this account of language learning is true (Quine, for one, seems to be a proponent of it), then it must be the case that language learners have linguistic knowledge. For one, the language learners will know the results of their theory. In much the way that the linguist, at the end of the day, knows that “toktok” is the native word for “fire”, so the language learner will know the meanings of the words of the language he has learned. Second, the language learner must have knowledge of the concepts required for the formulation of his hypotheses. If, for instance, the hypotheses formulated by the language learner include claims like “‘The large box’ is a noun phrase” and “‘The box was painted by Nancy’ is in the passive voice”, then the language learner must know what noun phrases are and what it means for a sentence to be in the passive voice. To formulate hypotheses about noun phrases, the passive voice, and other semantic and syntactic categories, the language learner must have knowledge about those categories. Or, to put the point another way, the language learner must possess the concepts he deploys in the hypotheses he formulates in the process of learning the language.
This argument is not without its objections. For one, there are philosophers who reject the model of language learners as “little linguists”. Second, even if this account of language learning is true, it tells us nothing about whether linguistic knowledge (that is, knowledge of the semantics and syntax of a natural language) is involved in our everyday use of language. Perhaps, even if knowledge is involved in learning a language, such knowledge plays the same role that training wheels play in learning how to ride a bicycle: though necessary for learning how to cycle, they are jettisoned afterward. When mature cyclists ride, they are not using training wheels, and it might similarly be the case that when mature language users use their language they are no longer utilizing the knowledge which they made use of in acquiring it. What we are interested in here is whether using a language in everyday reading, writing, and conversing requires that the language users draw on linguistic knowledge, and so, the present argument is, taken by itself, incomplete.
Language users sometimes, though not frequently, reflect on the semantic features of their language. They may do so on their own or they may do it in the course of being interviewed by a linguist. In the course of such reflection, language users make judgments about the semantic and syntactic properties of, and relations among, sentences. So, presented with a set of English sentences, masters of English will be able to match up those in the active voice with their synonymous passive versions, or declarative sentences with the corresponding questions, and so on.
One might think that something about the explicit linguistic judgments that language users make in the course of this second order, metalinguistic reflection requires the attribution of linguistic knowledge. Perhaps the fact that language users are able to make explicit judgments about the semantic properties of sentences they have never encountered before is reason to say that they must have known semantic truths beforehand. Thomas Nagel (1969) has argued that a certain feature of the reflective process — the fact that when presented with certain propositions of semantic and syntactic theories, language users recognize them “from the inside” as correct — implicates prior linguistic knowledge.
As already mentioned, one of the large obstacles barring the way to ascriptions of linguistic knowledge is the fact that the propositions of the relevant semantic theories are highly complex and involve technical theoretical concepts. In light of these facts, Nagel wonders under what conditions it may be proper to attribute knowledge of such propositions to speakers. Nagel turns his attention to “unconscious knowledge in the ordinary psychoanalytic sense” for a clue.
The psychoanalytic ascription of unconscious knowledge, or unconscious motives for that matter, does not depend simply on the possibility of organizing the subject’s responses and actions in conformity with the alleged unconscious material. In addition, although he does not formulate his conscious knowledge or attitude of his own accord, and may deny it upon being asked, it is usually possible to bring him by analytic techniques to see that the statement in question expresses something that he knows or feels. That is, he is able eventually to acknowledge the statement as an expression of his own belief, if it is presented to him clearly enough and in the right circumstances. Thus what was unconscious can be brought, at least partly, to consciousness. It is essential that his acknowledgment not be based merely on the observation of his own responses and behavior, and that he come to recognize the rightness of the attribution from the inside. (1969, 175-6)
Nagel then offers the following proposal for attribution of unconscious or tacit knowledge:
…where recognition of this sort is possible in principle, there is good reason to speak of knowledge and belief, even in cases where the relevant principles or statements have not yet been consciously acknowledged, or even in cases where they will never be explicitly formulated. (1969, 176)
and claims that this sort of recognition exists in the linguistic realm:
…we may observe that accurate formulations of grammatical rules often evoke the same sense of recognition from speakers who have been conforming to them for years, that is evoked by the explicit formulation of repressed material which has been influencing one’s behavior for years. (1969, 176)
Accordingly, he concludes, we have reason to attribute linguistic knowledge to language users. Nagel has, it seems, found a phenomenon — recognition “from the inside” of the correctness of a rule or principle — which is adequately explained only by the ascription of prior knowledge. We cannot make adequate sense of this “Of course! That’s it! I knew it all along!” phenomenon unless (or so it is argued) we say that language users had knowledge prior to being questioned.
There are two objections to this argument. First, even if it is sound, we would need to hear more about how it applies to unreflective language use. In general, one may try to explain some feature of explicit linguistic judgments in terms of linguistic knowledge, but in order for us to conclude that first-order language use involves the active deployment of linguistic knowledge, we need an argument for the claim that first-order language use consists in making explicit linguistic judgments. To build on the earlier analogy of cycling, we may say that a cyclist has all sorts of knowledge of the mechanical workings of his bicycle — and we may show that he does by interviewing him before the race in his garage — but it does not follow that he is deploying or using that knowledge in the course of cycling.
Second, as Stich (1971) has claimed, it is doubtful that we can actually bring speakers to this sort of recognition. While it is certainly possible to do this with some linguistic rules, the fact that the rules which, according to linguists and philosophers, constitute any natural language are exceedingly abstract, complex, and technical would argue against the possibility of bringing speakers of a language to this “from-the-inside” recognition of the linguistic rules of that language.
The two arguments we have just examined fail to give us conclusive reasons for thinking that ordinary, everyday language use requires the attribution of linguistic knowledge to speakers. While they may take us some of the way toward that conclusion, they are, at best, incomplete. The Behavior Rationalizing Argument, by contrast, focuses precisely on everyday language use to establish its conclusion and is, for that reason, a stronger argument.
One common justification for ascribing knowledge to people is that such knowledge ascriptions are necessary to explain their behavior. So, to borrow an example from Ernest LePore, a proponent of this argument, if we see Cinderella running and seek to explain that behavior of hers, we will naturally ascribe to her a desire (say, to be home by midnight) and some beliefs (say, that it is almost midnight and that she won’t get home by midnight unless she runs). The only way to rationalize (that is, make sense of) Cinderella’s behavior is to ascribe some set of beliefs and desires to her. So far, this is merely standard belief-desire psychology and has nothing in particular to do with linguistic knowledge. LePore, however, has adapted this argument to make the case for linguistic knowledge, and it is that adaptation that constitutes the “Behavior Rationalizing Argument” for linguistic knowledge.
LePore asks us to imagine that Cinderella begins running because Arabella has yelled to her, “It’s almost midnight!” In this case, in order to make sense of Cinderella’s behavior, it seems we have to ascribe to Cinderella at least three additional beliefs:
(i) that Arabella uttered the sentence “It’s almost midnight”;
(ii) that “It’s almost midnight” means that it’s almost midnight; and
(iii) that Arabella is telling the truth.
Claiming that Cinderella has these three beliefs seems necessary to adequately explain why Cinderella believes, upon hearing Arabella, that it’s almost midnight. (And, given her belief that she can get home by midnight only if she runs and her desire to be home by midnight, we can understand why she is running.) Notice, however, that if this is the story to tell, we have, with (ii), ascribed to Cinderella a belief about the semantic properties of a particular English sentence. If Cinderella runs because Arabella yelled to her “It’s almost midnight,” it seems that rationalizing Cinderella’s behavior requires attributing to Cinderella a belief about the linguistic properties of a sentence of her language. Rationalizing Cinderella’s behavior, therefore, requires that we attribute linguistic knowledge to Cinderella.
The point can be further appreciated if we imagine that Cinderella does not understand English. Upon Arabella’s yelling “It’s almost midnight”, Cinderella may still form beliefs (i) and (iii), (belief (i), note, is just about the words that Arabella has uttered; even if she doesn’t understand English, Cinderella may still believe that Arabella has uttered certain words) but she will not begin running. The reason she will not is that she has not understood what Arabella has said. That is, she lacks belief (ii). This seems to be a strong case for conceiving of a speaker’s understanding of the language in terms of linguistic knowledge of the language itself. LePore puts the point this way:
What about understanding language justifies, for example, the belief that it is midnight, when this understanding combines with other attitudes, for example, the belief that Arabella uttered “It’s [almost] midnight”? It is hard to see how else we could justify such a belief without ascribing additional beliefs, knowledge, or other propositional attitudes the speaker might have but the non-speaker lack. (1986, 5)
Such, then, is the Behavior Rationalizing Argument for the conclusion that speakers of a language have beliefs about the meanings of particular sentences of their language. The behavior of language users (in particular, their reactions to the utterances of others) shows that they have beliefs about what sentences of their language mean. Upon noticing a sign in a shop window that reads “Free philosophy books inside!” Cinderella enters the shop. Rationalizing her behavior requires that we ascribe to Cinderella the belief that there are free philosophy books inside the shop. And the best explanation for how she came by that belief is that she knows what the English sentence “Free philosophy books inside!” means. And so on for her reactions to other sentences of English. It is only if we ascribe linguistic knowledge to English speakers that we can make sense of their behavior. One of the strengths of this argument is that it appeals to ordinary, everyday features of language use.
One of the limitations of this argument, however, is that it succeeds in attributing to speakers knowledge of the semantic properties of only particular sentences of their language. In terms of Davidsonian theories of meaning, in other words, it is an argument that Cinderella knows the theorems of those theories. For an argument that Cinderella knows more than this, we need to turn to the Novel Sentence Recognition argument.
This is perhaps one of the best known, and most relied upon, arguments for linguistic knowledge, and we can approach it by picking up where the Behavior Rationalizing Argument left off. That argument, if sound, has established that speakers’ understanding of the sentences of their language consists in their having beliefs about the meanings of those sentences. Now, philosophers and linguists have long been impressed by the fact that, after being exposed to only a small number of strings of language, masters of a language are able to understand a potential infinity of previously unencountered strings of language. After exposure to only a small number of English sentences, speakers are able to recognize, of just about any English sentence — including sentences they have never seen or heard before — what that sentence means. This is a remarkable feat, and cries out for explanation. As Crispin Wright characterizes it, the central project of theoretical linguistics is to “explain our recognition of the syntax and sense of novel sentences” (1989, 258), and, according to the Novel Sentence Recognition Argument, the best such explanation will appeal to cognitive states of language users.
The best explanation of speakers’ ability to have beliefs about the meanings of a potential infinity of sentences involves the claim that speakers are deriving their belief about the meaning of a sentence from other beliefs about (simplifying a bit) the meanings of the component words. The reason why Nancy has a belief about the meaning of a sentence she has never encountered before is that she already has beliefs about the meanings of all the words (and the semantic significance of the syntax) in that sentence. Since Nancy’s beliefs about the meanings of the sentences are viewed as beliefs about the theorems of a Davidsonian theory of meaning, we can view the conclusion of this argument as attributing to Nancy beliefs about the axioms of the theory.
It may help to think about the language itself, setting aside the question of speakers’ knowledge of the language. What is it that allows for the construction of novel sentences of English, sentences that have never before been constructed? Surely it is the fact that English is compositional: sentences are constructed out of words, to put it simply. A finite collection of words can be arranged in an infinite number of ways, generating the potential infinity of English sentences. This compositionality applies, then, to the structure of speakers’ knowledge of their language: their ability to understand (which, according to the Behavior Rationalizing Argument, consists in having a semantic belief) a potential infinity of sentences is rooted in their knowledge of the axioms of the theory of meaning.
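The axioms-to-theorems structure can be made vivid with a toy “truth theory” — a drastic simplification of anything Davidson had in mind, built for an invented two-name, two-predicate fragment of English. The finitely many axioms (the two dictionaries below) pair words with their contributions; T-theorems about whole sentences are derived from them rather than listed one by one:

```python
# Axioms of a toy truth theory: finitely many word-level assignments.
NAMES = {"Nancy": "Nancy", "Ben": "Ben"}
PREDICATES = {"runs": "runs", "sleeps": "sleeps"}

def truth_condition(sentence):
    """Derive the right-hand side of a T-theorem, recursing on 'and'."""
    if " and " in sentence:
        left, right = sentence.split(" and ", 1)
        return f"{truth_condition(left)} and {truth_condition(right)}"
    name, predicate = sentence.split()
    return f"{NAMES[name]} {PREDICATES[predicate]}"

def theorem(sentence):
    """State the T-theorem for a sentence of the toy fragment."""
    return f"'{sentence}' is true if and only if {truth_condition(sentence)}"

print(theorem("Nancy runs and Ben sleeps"))
# 'Nancy runs and Ben sleeps' is true if and only if Nancy runs and Ben sleeps
```

Because the clause for “and” is recursive, these four word-level axioms already determine theorems for infinitely many sentences (“Nancy runs and Ben sleeps and Nancy runs and …”) — which is precisely the structural point the Novel Sentence Recognition Argument exploits: knowledge of finitely many axioms would explain beliefs about a potential infinity of theorems.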
Inspired by Wittgenstein’s discussion in the Philosophical Investigations, there is a tradition according to which speaking a language is conceived of as a matter of following a set of rules: the language itself is conceived of as a set of rules (as chess is) and those who speak the language are following those rules in the course of their language use, much like chess players are following the rules of chess as they play. John Searle is a proponent of this view of language use:
Speaking a language is engaging in a (highly complex) rule-governed form of behavior. To learn and master a language is (inter alia) to learn and to have mastered these rules. This is a familiar view in philosophy and linguistics. (Searle, 1969, 12)
Somewhat later, and more simply, Searle says this: “speaking a language is performing acts according to rules.” (1969, 36) If we adopt this view, we can construct an argument for attributing linguistic knowledge to speakers of a language.
The first point to make is that there is an important difference between, on the one hand, following a rule or being guided by a rule, and, on the other hand, acting in accordance with a rule or having one’s behavior correctly described by a rule. Quine illustrates the distinction this way:
Imagine two systems of English grammar: one an old-fashioned system that draws heavily on the Latin grammarians, and the other a streamlined formulation due to Jespersen. Imagine that the two systems are extensionally equivalent, in this sense: they determine, recursively, the same infinite set of well-formed English sentences. In Denmark the boys in one school learn English by the one system, and those in another school learn it by the other. In the end all the boys sound alike. Both systems of rules fit the behavior of all the boys, but each system guides the behavior of only half the boys. (Quine, 1972, 442)
Only half of the boys are following the Jespersen rules (because only half the boys learned the Jespersen rules), but all the boys are acting in accordance with the Jespersen rules. That is, the behavior of all of the boys is correctly described by the Jespersen rules. Or, put differently, none of the behavior of any of the boys ever violates the Jespersen rules.
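Quine’s notion of extensional equivalence can be illustrated with a sketch, using two invented toy “grammars” that stand in for the Latin-based and Jespersen systems only in this respect: they are organized quite differently, yet determine exactly the same (here finite) set of sentences:

```python
from itertools import product

WORDS = {"Det": ["the"], "N": ["boy", "dog"], "V": ["sees"]}

def grammar_a():
    """'Old-fashioned' system: build noun phrases first, then combine
    them around a verb, subject + verb + object."""
    nps = [f"{d} {n}" for d, n in product(WORDS["Det"], WORDS["N"])]
    return {f"{s} {v} {o}" for s, v, o in product(nps, WORDS["V"], nps)}

def grammar_b():
    """'Streamlined' system: fill a flat five-slot word template.
    Differently organized rules, no noun-phrase level at all."""
    template = ["Det", "N", "V", "Det", "N"]
    return {" ".join(seq) for seq in product(*(WORDS[c] for c in template))}

assert grammar_a() == grammar_b()  # extensionally equivalent systems
```

Both systems fit a speaker who produces only these sentences, but nothing in the output alone tells us which system, if either, guides her — that is the gap the Rule-Following Argument must bridge.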
According to advocates of the Rule-Following Argument, fluent speakers of English are to be thought of as following the rules of English and not as merely acting in accordance with them. What is the difference between one who is following a rule and one who is merely acting in accordance with it? The Rule-Following Argument claims that drawing this distinction requires attributing knowledge of the rules to fluent speakers.
The argument goes like this. First, an agent is following a rule only if that rule is somehow involved in the explanation of her behavior. If we say that Nancy, while playing chess, is following the rule “Bishops may move diagonally only”, then we commit ourselves to the view that the explanation of why Nancy acted as she did will appeal to that rule. By contrast, that rule does not appear in the explanation of the behavior of someone who is merely acting in accordance with that rule. Second, the way in which the rule shows up as part of the explanation of Nancy’s rule-following behavior is that the rule appears as one of the causes of her behavior. Accordingly, the rule is not involved in the causal explanation of the behavior of someone who is merely acting in accordance with that rule. The most we can say of a rule with which an agent is merely acting in accordance is that the rule truly describes her behavior. The rule is among the causes of the behavior of an agent who is following that rule. Third, and finally, a rule features as a cause of an agent’s behavior because the agent knows, or somehow has present to mind, that rule. From these three claims, we get the conclusion that fluent speakers of a language (whose linguistic behavior is conceived of as rule-following behavior) have linguistic knowledge: they know the rules they are following. Rosenberg gives a nice description of this position:
Learning to behave according to certain rules is, presumably, learning to pursue or eschew certain activities. But it is not simply that. A pigeon who has been trained (conditioned) to peck at a key under certain circumstances has not learned to behave according to any rules. What more is required is that the activities in question be pursued or eschewed because they are enjoined or proscribed by the rules. If an agent is following a rule in the course of his activities, then the rule in question must, in some sense, be “present to the mind.” (1974, 31)
This Rule-Following Argument, with its talk of the difference between following a rule and acting in accordance with a rule, differs in its starting point from the Behavior Rationalizing Argument, which focuses on making sense of agents’ responses to their interlocutors’ utterances. But it ends up in much the same place: fluent language users have linguistic knowledge and make use of that knowledge in the course of their language use.
Jerry Fodor defends “intellectualist” accounts of psychology, and, in the course of so doing, provides another argument for the attribution of tacit knowledge to language users. Fodor is concerned with psychology generally, and not simply with the explanation of linguistic behavior, and so fully appreciating the argument requires that we briefly review his intellectualist position.
According to Fodor, the explanation for how people snap their fingers or tie their shoes is that there are instructions for how to do these things — descriptions, in terms of the elementary operations of our nervous, perceptual, and muscular systems — and that these instructions are encapsulated as information in our minds. Since, in snapping our fingers or tying our shoes, we are applying these instructions, we must know them. Fodor frequently uses the image of “little men in our heads”, but the cash value of this metaphor is simply that the information is somehow represented in our minds. Whenever we tie our shoes, little agents in our head (and in other parts of our nervous system) execute the instructions encapsulated in the “instruction manual” for shoe tying. To say that we know how to tie our shoes is simply to say that we know the instructions for doing so. What makes his position an intellectualist one is precisely this appeal to represented information as part of the explanation of our behavior. As Fodor himself puts it, “The intellectualist account of X-ing says that, whenever you X, the little man in your head has access to and employs a manual on X-ing; and surely whatever is his is yours.” (1968, 636)
Fodor is sensitive to the fact that those of us who possess this knowledge are unable to answer the question, “How does one X?” That is, Ruth may be unable to explain (in terms of nerve firings and muscle contractions and so on) how it is she snaps her fingers, but, all the same, she knows the instructions for finger snapping, which are formulated in terms of nerve firings and muscle contractions. Thus, Fodor acknowledges, this knowledge must be tacit, and he seeks to provide an argument for saying, despite her inability to say how she X-es, that Ruth knows the instructions for X-ing. His argument appeals to optimal simulations of an organism’s behavior, that is, to a machine, computer program, or other artificial device that would simulate the organism’s behavior.
Fodor sums up his position on tacit knowledge attributions thus:
…if X is something an organism knows how to do but is unable to explain how to do, and if S is some sequence of operations, the specification of which would constitute an answer to the question “How do you X?,” and if an optimal simulation of the behavior of the organism X-s by running through the sequence of operations specified by S, then the organism tacitly knows the answer to the question “How do you X?,” and S is a formulation of the organism’s tacit knowledge. (1968, 638)
If we build a robot that optimally simulates Ruth’s finger snapping behavior, and the robot runs through a series of instructions S1, S2, S3, and so on, then, according to Fodor, Ruth tacitly knows S1, S2, S3, and so on. A particularly odd feature of this proposal is that it draws a conclusion about Ruth upon noticing something about a robot. The fact that we can build a robot to simulate Ruth’s (or any human being’s) finger snapping shouldn’t give us any evidence at all about Ruth, should it? As Fodor puts it, “how could any fact about the computational operations of some machine (even a machine that optimally simulates the behavior of an organism) provide grounds for asserting that an epistemic relation [that is, tacit knowledge] holds between an organism and a proposition?” (638)
It is at this stage that Fodor deploys the following, seemingly reasonable, inductive principle: From like effects, infer like causes. Since the robot and Ruth are exhibiting similar effects, and we know the cause of the robot’s behavior — it is running through the instructions — we can infer (inductively, of course) that Ruth’s behavior has a similar cause.
If machines and organisms can produce behaviors of the same type and if descriptions of machine computations in terms of the rules, instructions, and so on, that they employ are true descriptions of the etiology of their output, then the principle that licenses inferences from like effects to like causes must license us to infer that the tacit knowledge of organisms is represented by the programs of the machines that simulate their behavior. (640)
So far we have spoken in general terms about the behavior of organisms (shoe tying, finger snapping, and so on), but, of course, we can apply Fodor’s argument to linguistic behavior. Speaking English, reading German, or having a conversation in Arabic are intelligent behaviors on a par with shoe tying and finger snapping. So, if we can (a) arrive at a specification of a set of instructions for how one does these things (a set of instructions which will, in all likelihood, make reference to the semantic and syntactic theories of these languages), and if we can (b) produce an optimal simulation that performs such language use by running through those instructions, then we can, by Fodor’s reasoning, conclude that human speakers of those languages have tacit knowledge of the semantic and syntactic theories of the languages they speak.
We have seen a number of arguments that attempt to establish that speakers of a language have knowledge of the semantic and syntactic properties of the words and sentences of their language. It is worth reiterating that the argumentative ball is in the court of the proponent of linguistic knowledge: the many ways in which linguistic knowledge, if it exists, differs from ordinary knowledge put the burden of argument on the philosopher who advocates the position that every ordinary speaker of a language has syntactic and semantic knowledge.
The arguments assembled here are, in one way or another, all arguments to the best explanation. There are some phenomena (language learning, novel sentence recognition, behavior in response to an utterance, and so on) which, according to the arguments, can best (or, perhaps, only) be explained by attributing knowledge to speakers. This is a perfectly legitimate form of argument, of course, and may ultimately carry the day. But, like all such arguments, these are vulnerable to the objector who thinks either that the phenomena in question do not need explanation or that they can be explained in simpler terms, that is, in terms that do not require the attribution of knowledge.
If, however, we accept the conclusion of these arguments, we need next to investigate the nature of tacit knowledge. In what respects is tacit knowledge like other, more familiar sorts of knowledge? In what ways is it different? Might it be so different as to not qualify as knowledge at all? These are some of the questions we shall be discussing in the final section.
What Kind of Knowledge is Tacit Knowledge?
If we accept the conclusion of the above arguments and, consequently, attribute tacit knowledge of a language to speakers of that language, the question that next presents itself is this: what sort of knowledge is tacit knowledge? How is tacit knowledge of a language like other sorts of knowledge that we ordinarily ascribe to people?
A common move by those who are somewhat skeptical of the attribution of tacit linguistic knowledge is to draw a distinction between propositional knowledge and practical knowledge, or, more colloquially, between “knowledge that” and “knowledge how”. (Ryle (1949) is credited with the original distinction, but also see Stanley and Williamson (2001) for a more recent treatment.) The distinction is meant to emphasize that not all knowledge should be regarded as a relationship between a knower and a proposition. So, for instance, when we say
(1) Sophie knows that Paris is the capital of France
we usually understand that attribution in terms of Sophie’s relationship to the proposition expressed by the sentence “Paris is the capital of France.” To possess that knowledge, accordingly, Sophie must bear some sort of cognitive relationship to that proposition. She must, in some sense, “have that proposition before her mind”. By contrast, were we to say
(2) Sophie knows how to swim
we would not thereby be attributing to Sophie any relationship to any propositions. There may be a good many propositions that accurately describe what Sophie is doing while she is swimming (“Sophie is kicking her feet 75 times a minute”, “Sophie is traveling 5 miles an hour”, and so on) but, the position holds, she need not bear any cognitive relationship to those propositions in order for us to truly assert (2). To say that Sophie knows how to do something is to attribute to Sophie a practical ability, but in doing so (if we accept the knowledge-that/knowledge-how distinction) we do not attribute to her cognitive relationships to a particular set of propositions.
Some have argued that the sort of knowledge that speakers have of their language should be conceived of as knowledge-how. Wittgenstein gives voice to the sentiment in the Investigations thus:
To understand a sentence means to understand a language. To understand a language means to be master of a technique. (1958, para. 199)
But it has been asserted more plainly, and more recently, by Anthony Kenny:
To know a language is to have an ability: the ability to speak, understand, and perhaps read the language. (1989, 20)
and by Michael Devitt who claims that we should view linguistic competence
not as semantic propositional knowledge, but as an ability or skill: It is knowledge-how not knowledge-that. (1996, 25)
To accept this line of thought is to conceive of the propositions that constitute the grammar or theory of meaning for a particular language as accurately describing the linguistic behavior of speakers; those propositions are not to be conceived of as the content of speakers’ propositional attitudes.
There are a number of reasons for accepting the view that linguistic knowledge is knowledge-how, but perhaps the most popular line of thought is this: Since, or so it has been claimed, propositional knowledge, or knowledge-that, requires that one understand a language (the language in which the propositions are represented), linguistic understanding cannot, on pain of regress or circularity, be analyzed in terms of propositional knowledge. We cannot, it is argued, analyze Cinderella’s understanding of English in terms of her knowledge of a set of English sentences of the sort found in, say, Davidsonian meaning theories, for example,
“Snow is white” is true if and only if snow is white
because knowing the propositions expressed by those sentences requires understanding English.
There are responses to this argument and there are, as mentioned, other reasons to endorse the view that linguistic knowledge should be viewed as knowledge-how. Moreover, and perhaps more importantly, there are arguments against the knowledge-how/knowledge-that distinction. Stanley and Williamson have argued that “all knowing-how is knowing-that” (2001, 444). If their argument stands up to scrutiny, it makes the project of trying to analyze linguistic knowledge as a species of practical knowledge much more difficult. The topic of practical knowledge and its relationship to propositional knowledge is a fascinating one, and the brevity of this discussion here should not be taken as a dismissal of the importance or complexity of the existing debate.
If we accept that speakers of a language have propositional knowledge of the grammar, or meaning theory, for their language, we need to think about the ways in which that knowledge is like other sorts of propositional knowledge. One condition that seems satisfied by ordinary beliefs (and states of knowledge) is the following:
Beliefs (and states of knowledge) are the sorts of states that interact with the believer’s desires and that can potentially be put at the service of many of the believer’s different projects.
Gareth Evans has endorsed this condition on beliefs:
It is the essence of a belief state that it be at the service of many distinct projects, and that its influence on any project be mediated by other beliefs. (1981, 132)
So consider Susie who believes that a pot of soup is laced with cyanide. According to this condition on beliefs, Susie counts as having this belief (and, if she meets other conditions, counts as knowing that the soup is laced with cyanide) only if it is possible for this cognitive state to serve a number of different projects. Susie’s belief might lead to her refusing to eat the soup herself, to her keeping her friends from eating the soup, to her serving the soup to her enemies, and, if Susie further believes that ingesting a bit of cyanide each day for a month renders one immune to its effects and desires to develop a cyanide immunity, her belief that the soup is laced with cyanide might lead to her taking a spoonful of it each day for a month. Susie thus stands in contrast to a laboratory rat to whom, given its conditioning, we might be tempted to attribute the belief that the soup is laced with cyanide. What makes it the case that the rat does not have a genuine belief is that this belief leads to only one kind of behavior: avoiding eating the soup. This putative belief of the rat’s does not help to explain anything else the rat does, and because of this, it does not count as a genuine belief.
The plausibility of this condition on our ordinary concept of belief emerges when we realize that these multiple projects are the result of multiple desires. Susie’s different desires — for her own health, for the health of her friends, for the demise of her enemies, for immunity to cyanide — are what interact with the belief that the soup is laced with cyanide to produce different behaviors. A belief is the kind of thing that can interact with multiple desires to produce behavior, and the same holds for states of knowledge. Beliefs (and thus states of knowledge) cannot be isolated to the degree that they are incapable of interacting with different desires to produce different behavior.
All of this is relevant to our discussion of linguistic knowledge because, according to many authors, the knowledge that speakers have of the grammar or meaning theory of their language is, or seems to be, isolated in the way that ordinary beliefs are not. A speaker’s linguistic beliefs (whose contents are the grammatical principles of her language or the clauses of the meaning theory for her language) seem to be inferentially isolated from the rest of her beliefs and from her desires. Such beliefs operate (especially if we are attracted to either the Behavior Rationalizing Argument or the Novel Sentence Recognition Argument above) simply to account for a speaker’s understanding of a string of the language. If we are convinced by the Novel Sentence Recognition Argument to ascribe to a speaker a belief about some syntactic structure, we do so only in order to explain the fact that the speaker is able to understand a sentence she has never encountered before. That belief interacts with none of the speaker’s desires and is at the service of one project alone: the comprehension of encountered sentences. Accordingly, if we accept Evans’ claim, we should conclude that while an English speaker may have some cognitive relationship to the grammar or meaning theory for English, that relationship is not a full-fledged belief. It is, perhaps, not even a belief at all. Investigation of the particular cognitive status of these subdoxastic states is an important topic not just in relation to tacit linguistic knowledge, but in cognitive science generally.
References and Further Reading
- Barber, Alex, ed. Epistemology of Language. Oxford University Press, Oxford and New York, 2003a.
- Barber, Alex. “Introduction” Epistemology of Language. Ed. Alex Barber. Oxford University Press, Oxford and New York, 2003b. 1-43.
- Davies, Martin. “Tacit Knowledge and Subdoxastic States.” Reflections on Chomsky. Ed. Alexander George. Basil Blackwell, Oxford and Cambridge, 1989. 131-52.
- Devitt, Michael. Coming to Our Senses. Cambridge University Press, Cambridge and New York, 1996.
- Evans, Gareth. “Semantic Theory and Tacit Knowledge.” Wittgenstein: To Follow a Rule. Eds. S.H. Holtzman and C.M. Leitch. Routledge and Kegan Paul, London, 1981.
- Fodor, Jerry. “The Appeal to Tacit Knowledge in Psychological Explanation.” Journal of Philosophy 65 (1968): 627-40.
- George, Alexander, ed. Reflections on Chomsky. Basil Blackwell, Oxford and Cambridge, MA, 1989.
- Graves, Christina, et al. “Tacit Knowledge.” Journal of Philosophy 70 (1973): 318-30.
- Kenny, Anthony. The Metaphysics of Mind. Oxford University Press, Oxford, 1989.
- LePore, Ernest. “Truth in Meaning.” Truth and Interpretation. Ed. Ernest LePore. Basil Blackwell, Cambridge, MA, 1986. 3-26.
- Matthews, Robert. “Does Linguistic Competence Require Knowledge of Language?” Epistemology of Language. Ed. Alex Barber. Oxford University Press, Oxford and New York, 2003. 187-213.
- Nagel, Thomas. “Linguistics and Epistemology.” Language and Philosophy. Ed. Sidney Hook. New York University Press, New York, 1969. 171-82.
- Quine, W.V. “Methodological Reflections on Current Linguistic Theory.” Semantics of Natural Language. Eds. Donald Davidson and Gilbert Harman. D. Reidel, Dordrecht, 1972. 442-454.
- Rosenberg, Jay. Linguistic Representation. D. Reidel, Dordrecht, 1974.
- Ryle, Gilbert. The Concept of Mind. Hutchinson, London, 1949.
- Searle, John. Speech Acts. Cambridge University Press, New York, 1969.
- Stanley, Jason and Timothy Williamson. “Knowing How.” Journal of Philosophy, 98 (2001): 411-444.
- Stich, Stephen. “What Every Speaker Knows.” Philosophical Review, 80 (1971): 476-96.
- Wittgenstein, Ludwig. Philosophical Investigations. G.E.M. Anscombe, trans. Macmillan, New York, 1958.
- Wright, Crispin. “Wittgenstein’s Rule-following Considerations and the Central Project of Theoretical Linguistics.” Reflections on Chomsky. Ed. Alexander George. Basil Blackwell, Oxford and Cambridge, MA, 1989. 233-64.
Andrew P. Mills
U. S. A.