Knowledge, The Priority of

One fairly specific understanding of the priority of knowledge is the idea that instead of trying to explain knowledge in terms of belief plus truth, justification, and some further condition, we should explain belief in terms of knowledge. This reverses the usual order of explanation between belief and knowledge. This fairly specific idea generalizes in two directions. (1) Perhaps we should explain other notions in terms of knowledge as well. Some possibilities include assertion, justification or evidence, mental content, and intentional action. (2) Perhaps we could explain other relatively internal states, like intentions, attempts, and appearances, in terms of their more obviously external counterparts: intentional action and perception.

That knowledge is prior to belief has historically been a minority opinion. The idea that a belief, and the mind more generally, is what it is regardless of any actual connection to the external world is still widely accepted. Accepting the priority of knowledge constitutes a rejection of the picture of the mind as a self-contained, inner realm.

Understanding Belief

Bernard Williams (1973) tries to explain the impossibility of believing at will in terms of the idea that belief aims at the truth. Suppose you are anxious about tomorrow's weather but have no access to a weather forecast or any other evidence. If you want to reduce your anxiety, then you might, if it were in your power to do so, simply decide to believe that it will be sunny tomorrow. But if you knew that this attempted belief was based not on evidence or any apparent connection to the facts, but on a decision, then it would be hard for you to see your attempted belief as aiming at the truth. It would also be hard for you to see it as a belief. So, perhaps, it could not be a belief.

Let us agree that in this particular case seeing your attempted belief as the result of a decision seriously casts doubt on the possibility of its being a belief. Is this best explained by the idea that belief aims at the truth? Since you have no evidence about the weather, the problem cannot be that you have reason to think the attempted belief will fail to achieve this aim. On the contrary, you have every reason to think it is at least possible that it will achieve this aim. So what keeps you from aiming at it?

If you merely guess that a flipped coin will come up heads, then you probably do not believe that it will. Guesses are not beliefs. But guessing aims at the truth. In guessing you are trying to get it right, and if you succeed, this is as good as a guess can get. When you see your attempted belief as the result of a decision, you may still be hoping, trying, or aiming to get things right. But you know that believing is not epistemically justified for you in that instance. Whatever practical reasons you may have for believing that p, you have no evidence that p. It is seeing your state as unjustified while remaining in it that seriously casts doubt on the possibility of its being a belief. To understand belief, we need a connection between belief and justification, not just between belief and truth.

Suppose that someone has an unjustified, true belief. If belief aims at the truth, then this belief has achieved its aim. Perhaps justification is a good guide or a means to the truth. But if truth is the aim, and this belief has achieved that aim by other means, then epistemic justification or lack thereof is irrelevant to the evaluation of this belief. So if belief merely aimed at the truth, then it would not be automatically subject to evaluation from the epistemic point of view. If belief aims at knowledge, however, instead of mere truth, then it is clear why it is subject to this kind of evaluation. Unjustified beliefs may be true, but they cannot constitute knowledge.

Perhaps this does not capture what is meant in saying that belief aims at the truth. When you believe that p, you do not merely hope or try to get things right. In some sense it seems to you as though you already have gotten things right. We do not want to say that if you believe that p, you believe that your belief that p is true. That requirement would generate an infinite regress of beliefs. You do not need beliefs about beliefs to have beliefs about the world. But if you do have a view about your views, it must cohere with those views, where coherence involves more than just logical consistency. You can think that your belief that p is true, but you cannot think that your belief that p is false. You cannot assert, "I believe that p, but not p," and you cannot believe it either. This "cannot" is probably a normative "cannot," rather than an expression of logical impossibility.

What goes for error goes for ignorance as well. There is something wrong with assertions of the form "p, but I do not believe that p." Whatever is wrong with these assertions, they would be just as bad in the privacy of your own mind. If you think about Moore-paradoxical statements from the normative perspective, then the same kind of incoherence that is involved in the standard cases also seems to infect the following: p, but I have no reason to believe that p; I believe that p, but I should not believe it; and p, but I am completely unreliable about these things. The belief that p not only conflicts with the belief that you are wrong or that you do not believe that p. It also rules out the belief that you are unjustified or not in a position to know. These first-person facts about belief can be explained by the idea that belief aims at knowledge, but not by the idea that belief aims at truth.

The idea that belief aims at knowledge is a normative claim. From the point of view of belief there is something wrong with false beliefs, but there is also something wrong with unjustified beliefs. There is something wrong with accidentally true beliefs, even when they are justified. But from the point of view of belief, there is nothing wrong with knowledge. For a belief, knowledge is as good as it gets.

Assertion and Evidence

Moore's paradox tells us not only about the nature of belief but also about the nature of assertion. Peter Unger (1975) and Timothy Williamson (2000) are both defenders of the priority of knowledge. Both agree that when you assert that p, you not only represent p as being true and represent yourself as believing that p; you also represent yourself as knowing that p. Propositions of the form "p, but I do not know that p" are unassertable because they violate the rule of assertion: assert only what you know. Unlike Williamson, Unger is a radical skeptic. When he tells you not to assert what you do not know, he is basically telling you to keep quiet. The consequences you draw from the priority of knowledge will depend on your general views about knowledge. But the basic idea does not discriminate against skeptics.

Unger and Williamson also agree that there is an important connection between knowledge and justification, though they articulate the connection in different ways. Unger's general idea is that if you are justified in believing that p, then you must know something. More specifically, he believes that if your reason for believing that p is that q, then you must know that q. According to Williamson evidence is knowledge. If your body of evidence consists of a set of propositions, then you must know each member of this set. Both of these views open up the possibility of merely apparent evidence. This is not a problem for Unger, since he thinks that all evidence is merely apparent.

Is there a problem for Williamson? Suppose you have a justified, false belief that p; you infer that q on the basis of this belief; and you think that p is your evidence that q. If evidence is knowledge, then you are simply mistaken in thinking that p is your evidence that q. You may even be mistaken in thinking that you have evidence that q. This can seem problematic if you think that evidence is such that, if you have some, then you are at least in a position to know that you have some; and if you do not have any, then you are in a position to know that you do not have any; and if p is or is not evidence for you to believe that q, then you are in a position to know whether or not this is so.

According to Williamson evidence is not this kind of thing, but neither is anything else. In Williamson's terminology a condition is "luminous" just in case one is in a position to know that the condition obtains, if it does. For example, you could easily be sleeping in a cold room without being able to tell that the room is cold. So the condition of one's being in a cold room is not luminous. But you might have thought that, if you feel cold, seem to see a red wall, or believe that there is life on Mars, then you are in a position to know that you feel cold, seem to see a red wall, or believe that there is life on Mars. In other words you might have thought that these conditions are luminous.

Williamson has a general argument designed to show that there are no nontrivial luminous conditions. Not even the condition that one feels cold is luminous. There is always a potential gap between the facts and your ability to know the facts, even when the facts are about your own present state of mind. So the fact that evidence would not be luminous if only knowledge were or could be evidence is no objection to the view: evidence would not be luminous regardless of what it was. If we do have some other form of privileged access to evidence or to the justification of our own beliefs, and if our having that access is incompatible with the idea that evidence is knowledge, then that must be shown.

Mental Content

Gilbert Harman (1999) believes that the basic mental notions are knowledge and intentional action. Belief and intention are generalizations of these that allow for error and failure. Harman therefore clearly endorses the priority of knowledge. He also believes that the content of a concept is determined by its functional or conceptual role: its typical or normal connections to perception, its role in practical and theoretical reasoning, and its connection to intentional action. Finally, he accepts content externalism: the view that it is possible for intrinsic duplicates to differ in the contents of their thoughts.

The first two of Harman's views explain why he holds the third. My concept of water is typically caused by perceptions of water and by hearing about water, and the concept is causally involved in my intentional interactions with water. Suppose that I have an intrinsic duplicate on Hilary Putnam's (1975) Twin Earth. On Twin Earth there is something that looks, smells, tastes, and feels like water but is not water. Call it XYZ. When I interact with water, my twin interacts with XYZ. This difference in our interactions does not influence our intrinsic natures. But it does influence the contents of our thoughts. Unlike me, my twin never perceives or interacts with water. Even if you dragged my twin into my kitchen, he would not intentionally interact with water, nor would he perceive that the water was running. My concept differs in content from my duplicate's concept because the functional roles of the concepts are different. The functional roles of the concepts are different because these roles must be understood in terms of knowledge, perception, and intentional action.

Harman is not the only philosopher to combine the priority of knowledge with a conceptual role account of content. Christopher Peacocke (1999) has a sophisticated version of this view. According to Peacocke some concepts are epistemically individuated: they are individuated, at least in part, in terms of the conditions under which certain judgments involving them would constitute knowledge. Furthermore, every concept is either epistemically individuated or individuated in part in terms of its relations to epistemically individuated concepts. If epistemically individuated concepts do in fact play this central role in our system of concepts, then the priority of knowledge may provide an explanation of this fact.

The conceptual role theory of content is, or is a descendant of, the idea that the content of a thought or concept is determined by what Wilfrid Sellars (1956) calls its place in a space of reasons. John McDowell (1996) argues, among other things, that if you take this idea seriously, then thinking of the space of reasons broadly enough to encompass not only beliefs but also knowledge is not an optional extra. There is no purely internal space of reasons. To understand how experience can be part of the logical space of reasons, and so how our thoughts can have any content at all, we need to understand how a subject can be open to the way things manifestly are, where this involves knowing about what is going on around you.

Setting aside conceptual roles, the priority of knowledge may provide an adequacy condition for the theory of content. Suppose there was a kind of content or a kind of representation that could not distinguish a situation in which water is wet from a situation in which something that merely looks, smells, feels, and tastes like water is wet. A picture in the head, qualitatively conceived, may be such a representation. Accepting this kind of representation or content could never constitute knowledge, since there would not be the right kind of distinction between justification and knowledge. If one of your beliefs about barns constitutes knowledge, then it matters whether or not there are fake barns in your neighborhood. If believing something about barns were a matter of accepting one of these phony propositions that cannot distinguish between real and fake barns, then it would not matter whether the barns were real or fake. According to the priority of knowledge, if these representations are not even candidates for knowledge, then they are not to be believed.

Contact

What justifies this preoccupation with knowledge? Each account of something in terms of knowledge must of course be judged on its own merits, but is there anything special about knowledge that holds them all together? A true belief will match or accurately represent the world, but knowledge seems to involve a kind of contact with the world. The recognition of the importance of this kind of contact is one of the underlying ideas that unifies these various approaches.

Edmund L. Gettier (1963) shows that justified, true belief alone is not sufficient for knowledge. If a justified, true belief is inferred from a false premise, then it will not constitute knowledge, even if that premise was justified. Not all cases of justified, true belief without knowledge involve inference from a false belief. Alvin I. Goldman (1992) imagines a case in which you look at a barn that is surrounded by realistic barn facades and form the justified, true belief that it is a barn. You do not know even though you are right because you just got lucky. Though you do get lucky, and it is just an accident, we cannot deny that your belief about the barn is causally connected to the barn. If we were trying to understand knowledge in terms of being in contact with the world, then we would need to specify the right kind of contact. But if you are using the notion of knowledge to explain other things, then it is easy to say what kind of contact you have in mind: you are connected in the right way to p if you know that p.

The presence or absence of this kind of contact matters in a variety of areas in philosophy. For example, as Unger argues, a factive propositional attitude either entails knowledge or entails the absence of knowledge. If you are happy that it is raining, or you notice that it is raining, then it follows that it is raining. These propositional attitudes are factive. Moreover, if you are in one of these mental states, then it also follows that you know that it is raining. By contrast, if you forget that it is raining, then it still follows that it is raining. Forgetting that p is just as factive as being surprised or embarrassed that p, but if you forget or are unaware that p, then it does not follow that you know that p. It follows that you do not know that p. Not all factive attitudes entail knowledge. But they do not leave the question of knowledge open. As Robert Gordon (1969) points out, the propositional emotions, even the nonfactive ones, do not leave open the question of knowledge either. If you fear, hope, or are worried that p, then it does not follow that p. But it does follow that you do not know whether or not p.

What matters in all these cases is genuine contact with the world, rather than merely a match between what is inside and what is outside the mind. You might be happy when it rains without being happy that it rains. You need the right kind of connection between the rain and the happiness for the happiness to be about the rain. If the disturbing sight of the rain leads to your taking certain kinds of medication, then the rain, and your knowledge thereof, may cause the happiness, but you will not necessarily be happy that it is raining. The rain is causally related to the happiness, but not in the right way. What is the right way? It looks like the happiness has to be connected to the rain in the same way that a belief has to be connected to a fact for the belief to constitute knowledge.

Whenever something interesting requires contact between the mind and the world, a causal theory of that thing will at least look plausible. But any such theory will be faced with deviant causal chains: cases where there is a causal connection, but not the right kind of causal connection. You might intend to run over your uncle, and this may lead you to back your car out of your driveway to drive to his house. But if, unknown to you, your uncle is napping behind the wheels of your car, you will run him over; your intention to run him over will cause you to run him over; but you will not, in this case, run him over on purpose. Your intention to A is causally related to your A-ing, but not in the right way, so you do not intentionally A. This is a deviant causal chain.

Here is one thing to notice about the case. You correctly believe that backing out of your driveway will lead to running over your uncle. Given your plan, the belief, we may say, is justified. But the belief does not constitute knowledge. If it is just an accident that your belief is true, and you act on that belief, then it will just be an accident that your attempts are successful, if they are successful at all. To get intentional action, your means-ends beliefs must constitute knowledge. This is one suggestion for ruling out causal deviance in action theory. If this is right, then it not only follows that we can explain particular actions in terms of particular states of knowledge; it also at least suggests that we can understand intentional action, one kind of contact between the mind and the world, in terms of knowledge. Unless we also have to understand knowledge in terms of action, it looks as though knowledge is the more fundamental notion.

See also Belief.

Bibliography

Adler, Jonathan E. Belief's Own Ethics. Cambridge, MA: MIT Press, 2002.

Burge, Tyler. "Individualism and the Mental." In Midwest Studies in Philosophy. Vol. 4, edited by Peter A. French, Theodore E. Uehling, and Howard K. Wettstein. Minneapolis: University of Minnesota Press, 1979.

Gettier, Edmund L. "Is Justified True Belief Knowledge?" Analysis 23 (1963): 121–123.

Gibbons, John. "Knowledge in Action." Philosophy and Phenomenological Research 62 (2001): 579–600.

Goldman, Alvin I. "Discrimination and Perceptual Knowledge." In Liaisons: Philosophy Meets the Cognitive and Social Sciences. Cambridge, MA: MIT Press, 1992.

Gordon, Robert. "Emotions and Knowledge." Journal of Philosophy 66 (1969): 408–413.

Griffiths, A. Phillips, ed. Knowledge and Belief. London: Oxford University Press, 1967.

Harman, Gilbert. "(Nonsolipsistic) Conceptual Role Semantics." In Reasoning, Meaning, and Mind. Oxford, U.K.: Clarendon Press, 1999.

McDowell, John. "Knowledge and the Internal." In Meaning, Knowledge, and Reality. Cambridge, MA: Harvard University Press, 1998.

McDowell, John. Mind and World. Cambridge, MA: Harvard University Press, 1996.

Moore, G. E. Commonplace Book, 1919–1953, edited by Casimir Lewy. London: Allen and Unwin, 1962.

Peacocke, Christopher. Being Known. Oxford, U.K.: Clarendon Press, 1999.

Putnam, Hilary. "The Meaning of 'Meaning.'" In Mind, Language, and Reality. Cambridge, U.K.: Cambridge University Press, 1975.

Sellars, Wilfrid. "Empiricism and the Philosophy of Mind." In Minnesota Studies in the Philosophy of Science. Vol. 1, edited by Herbert Feigl and Michael Scriven. Minneapolis: University of Minnesota Press, 1956.

Shoemaker, Sydney. "Moore's Paradox and Self-Knowledge." In The First-Person Perspective and Other Essays. Cambridge, U.K.: Cambridge University Press, 1996.

Sorensen, Roy A. Blindspots. Oxford, U.K.: Clarendon Press, 1988.

Unger, Peter. Ignorance: A Case for Scepticism. Oxford, U.K.: Clarendon Press, 1975.

Velleman, J. David. "On the Aim of Belief." In The Possibility of Practical Reason. Oxford, U.K.: Clarendon Press, 2000.

Williams, Bernard. "Deciding to Believe." In Problems of the Self: Philosophical Papers, 1956–1972. Cambridge, U.K.: Cambridge University Press, 1973.

Williamson, Timothy. Knowledge and Its Limits. Oxford, U.K.: Oxford University Press, 2000.

John Gibbons (2005)
