The Cloister of Cognition

I’ve been talking epistemology with a student here at Mises U. this week, and at one point I wrote up a couple of pages for him. So I thought I’d share them with the rest of you:


I take your view to be as follows: that genuine knowledge includes a) awareness of our own subjective mental states, and b) the grasp of a priori conceptual truths like mathematics and praxeology, but not c) sensory perception and the judgments based thereon – and that the reason for this is that it’s possible for (c) to be mistaken while it’s not possible for (a) and (b) to be so, and that while it may be appropriate to believe things that could possibly be wrong, we shouldn’t claim to know them. And part of your reason for this latter claim is that treating beliefs that might be wrong as cases of knowledge is equivalent to deciding what knowledge is by randomly throwing darts at a dartboard.

So here are some of my objections (some of these I talked about yesterday, others not):

knowledge

1. This is not how the word is used in ordinary language. In ordinary language, we regularly apply the word to fallible beliefs; and since use determines meaning, what we ordinarily mean by knowledge seems to be something that does not fit your criteria. So in effect you’re proposing to change the meaning of the word “knowledge,” or you’re introducing some special philosophical sense of knowledge different from the ordinary one (call it Knowledge-with-a-capital-K). And then the question is why we should care about Knowledge-with-a-capital-K, as opposed to (what I’m tempted to call) real knowledge.

2. I suspect the attractiveness of Knowledge-with-a-capital-K depends in part on a couple of fallacies. One turns on the ambiguity of “if I know something, then I can’t be wrong about it.” That’s true if it’s read as “NEC:(If I know that p, then I am not wrong about whether p)”; but there’s a tendency to shift illicitly from this claim to the stronger claim “If I know that p, then NEC:(I am not wrong about whether p).” But the latter claim doesn’t follow. (This is called a confusion of necessitas consequentiæ and necessitas consequentis.) The other fallacy is that of sliding from “If it’s infallible, then it’s certain” to “if it’s certain, then it’s infallible.” (That’s called affirming the consequent.)
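
In symbols (a standard formalization, with □ for NEC, Kp for “I know that p,” and Wp for “I am wrong about whether p”), the contrast is:

\[ \Box(Kp \rightarrow \neg Wp) \qquad \text{(necessitas consequentiae: fine)} \]
\[ Kp \rightarrow \Box\,\neg Wp \qquad \text{(necessitas consequentis: doesn’t follow)} \]

The first makes the whole conditional necessary; the second attaches the necessity to the consequent alone, and no rule of modal logic licenses the move from the one to the other.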

3. The difference between a priori and empirical knowledge is not that the first is infallible and the second not. A priori knowledge is fallible too. After all, we can make mistakes in math, for example. I might be wrong in thinking that 32794 + 85649 = 118443; maybe I forgot to carry a 2 or something. The difference lies not in whether it’s fallible or not, but rather in what kinds of evidence are relevant to showing it to be wrong. Objections to empirical claims appeal to empirical evidence; objections to conceptual claims appeal to conceptual evidence.
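
(For what it’s worth, a quick check – itself just another fallible calculation – bears the sum out:

\[ 32794 + 85649 = (32000 + 85000) + (794 + 649) = 117000 + 1443 = 118443. \]

The point stands either way: my confidence rests on a procedure that can misfire, not on the impossibility of error.)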

A related mistake is that of confusing the necessity of the fact stated by a claim with the necessity of our being right about the claim. If it’s really true that 32794 + 85649 = 118443, then it is necessarily true that 32794 + 85649 = 118443; but likewise if it’s really true that F = Gm₁m₂/r², then it’s also necessarily true that F = Gm₁m₂/r², even though the former is a conceptual claim and the latter is empirical. But we could be wrong about either one.

4. The principle that we can only know things that can’t possibly be doubted doesn’t seem to pass its own test; that is, it seems possible to doubt (indeed I do doubt) that we can only know things that can’t possibly be doubted – so by its own standards that claim doesn’t count as knowledge.

You tear men down like G. E. Moore

5. The distinction between what we can know and what it’s appropriate for us to believe for practical purposes seems difficult to maintain. First, if I can’t know that there’s a table in front of me, then I can’t know that I have good practical reason to act as though there’s a table in front of me either. Second, if a belief isn’t justified, then by definition we shouldn’t believe it; so there doesn’t seem to be room for a class of beliefs that are unjustified but that should be accepted for practical purposes. Third, it’s difficult to accept a belief and yet claim not to know it; “p, but I don’t know whether p” seems Moore-paradoxical.

6. I think your position makes sense-perception impossible. After all, if I look at a table and have a hallucination of a swan, my experience of the swan doesn’t count as my perceiving the table. Yet similarly, if I look at a table while simultaneously having a hallucination of a table, that doesn’t count as my perceiving the table either. But what if my sensory experience of a table is caused by the table? In that case, is it a genuine perception rather than a hallucination? On my view, sure; but I think your view requires you to say otherwise. For if you really think that a belief that’s only probably true is no better off, knowledge-wise, than throwing darts randomly at a dartboard, then I think you also have to say that as long as our experience of a table could be caused by something other than an actual table, then its status is equivalent to that of a hallucination even when, as chance has it, it’s caused by an actual table. And that means that we never make genuine cognitive contact with the world through perceptual experience at all; we’re always merely hallucinating, though some of our hallucinations are accidentally accurate. And as a result, all our knowledge of the world is hypothetical; we can know that if there are 2 + 2 bottles on the table, then there are four bottles on the table, but we cannot know whether there are actually any bottles on the table or indeed anywhere else.

the view from my head

I think what this view advocates then, is a kind of pathological alienation from the world. It means that you’ve never actually seen or touched a physical object; you’ve only theorised about them. Likewise you’ve never actually seen or touched another person; again, you’ve only theorised about them. The attitude your view seeks to inculcate has characteristics of mental illness.

More to the point, I think it’s incoherent. Here’s why. The ability to apply a concept (not exceptionlessly, but at least with reasonable reliability) is part of having the concept; we don’t count as having a concept unless we know how to apply it. After all, the process of acquiring a concept just is the process of learning to recognise and identify instances of it in our environment. But it’s an upshot of your view that we have no such ability to recognise and identify anything in our environment. But that would mean that we’d be unable not just to know but even to conceive of physical objects, or of minds other than our own; we’d be driven to solipsism.

For example, since we can identify agency only in our own case (I would claim we couldn’t even do that – since agency is a general concept, its possession requires the ability to apply it to more than one case – but never mind that for now), we can never apply the concept of interpersonal exchange, since that requires more than one agent. But that in turn would mean that we can’t even have the concept of interpersonal exchange, thus rendering praxeology impossible. (That’s what I meant in saying that we couldn’t even have praxeology unless our fallible empirical beliefs counted as knowledge.) Since in fact we do have the concept of interpersonal exchange, that shows that our ability to identify such exchanges is genuine even though it’s fallible. (Thus the skeptic’s inference from “you could be wrong in any particular case” to “you could be wrong in all cases simultaneously” doesn’t go through.)

A related point: you seem to accept uncritically the Humean empirical conception of perceptual experience. (Ditto for Hoppe when he says we can only perceive correlations and not causings.) The point of a Kantian approach is not to turn the realm of perception over to Hume but then retreat to a higher conceptual realm; rather it’s to claim that the perceptual realm is already conceptually ordered.


51 Responses to The Cloister of Cognition

  1. Wayne Adams July 27, 2011 at 7:16 pm #

    Posts like this inch me closer to going to grad school.

  2. Bob July 27, 2011 at 8:59 pm #

    Yeah, you would gratify many of us if you gave us more of this and less of Amy Winehouse. Just in case you’re interested in gratifying us.

    You say: ‘Thus the skeptic’s inference from “you could be wrong in any particular case” to “you could be wrong in all cases simultaneously” doesn’t go through.’ Should we understand you to imply here that radical skeptical scenarios like brains-in-vats, evil demons, and the like are straight up incoherent? I’m often tempted to say that although skeptical scenarios are perfectly coherent, we could not conceivably have good reasons to believe that they obtain. Hence they show us nothing more than what we can imagine and conceive without contradiction, but have no bearing whatsoever on whether we have knowledge. I’d be curious to know whether you take a stronger view than that.

    • Roderick July 28, 2011 at 9:24 pm #

      Should we understand you to imply here that radical skeptical scenarios like brains-in-vats, evil demons, and the like are straight up incoherent?

      Yes. For example: if we were brains in vats, then we would have no cognitive access to or ability to identify actual brains or vats, and so our terms for “brains” and “vats” would not refer to brains or vats, and so we could not entertain the hypothesis that we were brains in vats.

      • Bob July 29, 2011 at 9:14 pm #

        I think I see how your view makes it incoherent for us to believe that we are brains in vats. I don’t yet see, though, how it could show that we could not possibly be radically deceived about the world, such that none of our empirical beliefs are true and none of our language refers to objects, properties, or relations that actually exist. So far as I can see, there is nothing to make some such scenario inconceivable, even if we could never conceive exactly what the true scenario is because our language could never refer to real things. To be clear: I agree that it’s crazy to think that we aren’t in genuine cognitive contact with the world, I’m just having a hard time seeing how considerations about reference show that it could not possibly be true that we are radically deceived about the world.

        • Roderick July 31, 2011 at 12:26 am #

          But how is that scenario different from the brains-in-vats scenario?

    • Black Bloke July 28, 2011 at 10:47 pm #

      I, for one, appreciate the Amy Winehouse post among the many other posts that demonstrate the variety of interests Roderick has.

      • Bob July 29, 2011 at 9:19 pm #

        Hey, don’t get me wrong. I didn’t say that the Amy Winehouse post, or the Doctor Who posts (which are more to my own particular taste), or any other similar posts were worthless. I just said that I dig the philosophy posts a whole lot more.

  3. Matt July 27, 2011 at 11:45 pm #

    5. The distinction between what we can know and what it’s appropriate for us to believe for practical purposes seems difficult to maintain. First, if I can’t know that there’s a table in front of me, then I can’t know that I have good practical reason to act as though there’s a table in front of me either. Second, if a belief isn’t justified, then by definition we shouldn’t believe it; so there doesn’t seem to be room for a class of beliefs that are unjustified but that should be accepted for practical purposes. Third, it’s difficult to accept a belief and yet claim not to know it; “p, but I don’t know whether p” seems Moore-paradoxical.

    I’m not sure I fully understand Student’s view. Here are three things Student might believe:
    (1) skepticism about knowledge claims: we can never know (or, even stronger, justifiably believe) that we know something. We might know things. But we don’t know that we know them. This somewhat fits with the dart throwing statement. The dart statement (to me) doesn’t obviously preclude that we might know things. It just suggests that sorting fallible (might be mistaken) beliefs into ‘known’ and ‘unknown’ buckets is like throwing darts at a dartboard (especially the phrase “treating beliefs that might be wrong as cases of knowledge”).
    (2) skepticism about knowledge: we might have justified belief, but our level of justification is never sufficient to raise our beliefs to knowledge; we can believe things, and with justification to boot, but our beliefs are nevertheless not knowledge
    (3) skepticism about justification (given the presumably near-universal view that knowledge requires justification, this entails skepticism about knowledge as well): we don’t have knowledge and we can’t even justifiably believe things; not only do I not know that there is a table before me, my belief that there is a table before me isn’t justified either.

    “Second” in point 5 suggests that there is no room (on Student’s view) for a justified belief in the table being in front of us. (Student’s view here seems to be that it is ‘practical’ to act as though there is a table before me although I am not justified in believing that there is a table before me. And that does seem untenable.) So I presume Student accepts (the untenable) (3).

    But it might also be that you (Roderick) believe that there can be no justification without knowledge. And so the lack of knowledge (described in “First”) has the consequence that there is no justification. And so although Student only espouses (2), she is committed to (3). But that follows only if Student accepts the view that justification requires knowledge, and it’s not obvious that Student need accept that.

    So I’m not sure if I should think that “Second” is targeting Student because she accepts (3) or if “Second” is targeting Student because she accepts (2) and there is an implicit (non-Student) premise that justification requires knowledge.

    Later, there is the comment on perceptual experience:

    For if you really think that a belief that’s only probably true is no better off, knowledge-wise, than throwing darts randomly at a dartboard, then I think you also have to say that as long as our experience of a table could be caused by something other than an actual table, then its status is equivalent to that of a hallucination even when, as chance has it, it’s caused by an actual table.

    Here, it seems that it is allowed that Student countenances probably true beliefs (epistemically probable, I assume, which would suggest backed by evidence). So this suggests that Student does accept justified confidence in something’s being probably true (which I think we can safely handwave into partial justification or justification to a particular degree-of-belief that something *is* true). And that suggests that Student accepts (1) or (2), but rejects (3), in which case, she is immune to “Second” from point 5 above (putting aside the premise that justification requires knowledge, which doesn’t seem to be endorsed by Student). And if she accepts (1) and rejects (2), then it doesn’t seem very clear how her view falls into rampant skepticism about the external world.

    Back to point 5 and “First”:
    Why should knowledge that there is a table in front of me or knowledge that I have practical reason to act as though there were a table in front of me be necessary for having practical reason to act as though there were a table in front of me? It seems that “Is the ice safe to skate on? I don’t know, but I think so” is both reasonable and ordinary grounds for going skating on the ice. I don’t need to know that the ice is safe. I don’t need to ‘know’ that I have practical grounds. I just need to have (justified of course!) confidence that the ice is safe. (And the degree of confidence required for the choice to go skating to be practically rational will depend upon non-epistemic factors such as the danger of falling through. It’s one thing to skate on the ice over a shallow pond. It’s another to cross an ice bridge across a chasm.) It seems to me that we often make choices (and justified ones too) without ‘knowing’ which option is favored by the balance of reasons, and more than that, we know that we don’t know which option is favored by the balance of reasons. Yet we choose and are practically justified all the same (although not always, of course!).
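
    To put that in standard decision-theoretic terms (just the usual expected-utility gloss, with symbols I’m making up for the occasion): if p is my degree of confidence that the ice is safe, skating is practically rational just in case

    \[ p \cdot u(\text{skate, safe}) + (1 - p) \cdot u(\text{skate, unsafe}) > u(\text{stay off the ice}), \]

    so the threshold value of p rises as the downside u(skate, unsafe) gets worse – the shallow pond versus the ice bridge across a chasm.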

    Back to point 5 and “Third”:
    By “claim” do you mean “assert”? If so, is this an intuition about belief and knowledge or about the speech-act of assertion? Surely there are many propositions p such that it is reasonable for me to believe p and believe that I don’t know p and entertain both of those beliefs together.
    When it comes to assertion, the categorical assertion “p” plausibly connotes a stronger commitment than the qualified assertion “I think that p”. That is why Timothy Williamson argues that knowledge is the norm for assertion. I think that Williamson is wrong, but (even) he is not arguing that knowledge is the norm for belief. Surely there are many propositions that I am justified in believing but do not know. And that doesn’t seem to be an ineffable truth that I must ‘sidle up’ to, but rather something that I can easily recognize. “P” (I say to myself), “but I don’t know that p.” Whatever the merits of Williamson’s view, it seems that the Moorean paradox has a lot more grip when it comes to verbalized assertion to others rather than privately saying “p, but I don’t believe that p” to myself.

    The remark in 5 “Third” seems to defend what I earlier described as a (possible) implicit premise connecting knowledge and justification such that “Second” doesn’t criticize Student for accepting (3) outright but rather criticizes Student for accepting (2) and then being committed to (3) by an implicit premise that justification requires knowledge. The idea seems to be that ‘accepting’ a belief involves some commitment to *knowing* that what is believed is true. By Moore’s Paradox, you can’t even get so far as (justifiably) ‘accepting’ something to be true without taking yourself to know it. More development is necessary, but it’s easy to see how that would push in the direction of “justification always depends upon knowledge.” Although that could be correct, it is certainly controversial, and it does not appear to be part of Student’s espoused view.

    I would like to hear more about Student’s view and how point 5 applies. And I like the egg picture. It looks like Star Trek in his head. “Captain, we’re approaching a Class 2 breakfast.” “On screen, number 1!”
    But I am equally gratified by Amy Winehouse posts.

    • Roderick July 28, 2011 at 9:47 pm #

      His view seemed closest to (3) to me (subject to the caveat that he’s a skeptic only about empirical knowledge, not about all knowledge). It’s definitely not (1).

      But it might also be that you (Roderick) believe that there can be no justification without knowledge.

      No, I don’t think that. I think you can have justified beliefs that aren’t true, and also justified true beliefs that (for Gettier reasons) fall short of knowledge.

      However, I also think that from a first-person perspective you can’t coherently regard yourself as justifiably believing p without regarding yourself as knowing p.

      Here, it seems that it is allowed that Student countenances probably true beliefs (epistemically probable, I assume, which would suggest backed by evidence). So this suggests that Student does accept justified confidence in something’s being probably true (which I think we can safely handwave into partial justification or justification to a particular degree-of-belief that something *is* true).

      No, the student’s view as I understand it is that merely-probably-true beliefs get zero epistemological justification.

      Back to point 5 and “First”: Why should knowledge that there is a table in front of me or knowledge that I have practical reason to act as though there were a table in front of me be necessary for having practical reason to act as though there were a table in front of me?

      I think it has to be on his view, since he thinks merely-probably-true beliefs aren’t even partly justified.

      Surely there are many propositions p such that it is reasonable for me to believe p and believe that I don’t know p and entertain both of those beliefs together.

      Can you give an example? Because that sounds incoherent to me (at least if the two reasonablenesses are the same in degree).

      it seems that the Moorean paradox has a lot more grip when it comes to verbalized assertion to others rather than privately saying “p, but I don’t believe that p” to myself.

      I don’t see that.

      • Matt July 29, 2011 at 8:54 pm #

        Regarding the example of entertaining the belief that p and the belief that I don’t know p:
        “There’s going to be a deal on the debt before Aug 2, but I don’t know that.”
        Seems ok to me.

        Regarding the Moorean paradox, I miswrote in saying that the Moorean paradox of “p, but I don’t believe that p” had more grip when verbalized than said to myself. I find that odd whether verbalized or said to myself. I meant to say that saying privately to myself “p, but I don’t know that p” doesn’t seem odd for many p. The deal on the debt example above is such a p.

        • Roderick July 31, 2011 at 12:28 am #

          “There’s going to be a deal on the debt before Aug 2, but I don’t know that.”
          Seems ok to me.

          It sounds to me as though it’s making a move and then taking it back.

  4. Dan July 28, 2011 at 8:31 am #

    Third, it’s difficult to accept a belief and yet claim not to know it; “p, but I don’t know whether p” seems Moore-paradoxical.

    Doesn’t this assume something that is not obvious, namely that belief is the norm of assertion? Because without this assumption you get the much more natural sounding “I believe that p, but I don’t know whether p.”

    • Roderick July 28, 2011 at 9:48 pm #

      My thought was that if you’re justified in believing something, you’re also justified in asserting it.

  5. aretae July 28, 2011 at 1:08 pm #

    Since you’re in epistemology at the moment, could you outline why the concepts of truth & knowledge are useful at all? Don’t they necessarily get caught up in this kind of difficulty?

    Wouldn’t the concept of “effective prediction” substitute out ALL the crap associated with truth/knowledge while keeping all of the good stuff (that isn’t purely emotional attachment)?

    Sorry…not a philosopher, just a student of, and I’ve not found an answer since I started drifting that direction some years back. I’d take a book recommendation.

    • Roderick July 28, 2011 at 9:51 pm #

      Well, can you assent to “this is an effective predictor” without assenting to “it’s true that this is an effective predictor”?

      • aretae July 28, 2011 at 9:59 pm #

        My question was intended to be about usefulness, not truth. I specifically suggested that I don’t find the concept of truth useful (I predict that it predicts nothing).

        As far as I can tell, your response was about truth (it tried to switch the context of the statement).

        I don’t see that the truth statement that you made adds anything useful to my statement…and am inclined to think that the result holds generally.

        I would not be surprised to find that this was a standard position in philosophy…but I haven’t encountered it in my 20 years of amateur poking about.

        Is there a standard response that doesn’t just circle back to talking about truth? Am I missing something?

        • Roderick July 28, 2011 at 10:48 pm #

          My point was that any attempt to dismiss the concept of truth presupposes what it’s trying to dismiss.

          In other words, the “circling back to truth” comes not just in what I said, but in what you said.

          As for usefulness — well, just try expressing anaphoric reference without the concept of truth.

        • aretae July 28, 2011 at 11:08 pm #

          Indeed, I’m perfectly willing to acknowledge that attempts to deny truth are self-referentially false…and lead awfully quickly to an abandonment of argument. Still happy to have learnt that from Rand in 1990.

          I’m not denying truth…I’m attempting to dismiss it as not part of a useful discussion…which is, I think, the natural endpoint of the Humean/Positivist direction in epistemology. Let’s talk entirely about uncertainty and prediction…and abandon the pretense of truth.

          I was, up until a few minutes ago, unfamiliar with the grammatical term anaphoric reference.

          I am rereading your post, and looking up a few terms to attempt to understand your answer.

        • aretae July 28, 2011 at 11:08 pm #

          And thank you for taking the time to answer.

        • Roderick July 28, 2011 at 11:36 pm #

          Re anaphoric reference: suppose Julius Caesar says: “All Gaul is divided into three parts, one of which the Belgae inhabit, the Aquitani another, those who in their own language are called Celts, in ours Gauls, the third. All these differ from each other in language, customs and laws. The river Garonne separates the Gauls from the Aquitani; the Marne and the Seine separate them from the Belgae. Of all these, the Belgae are the bravest, because they are furthest from the civilization and refinement of Provincia, and merchants least frequently resort to them, and import those things which tend to effeminate the mind; and they are the nearest to the Germans, who dwell beyond the Rhine, with whom they are continually waging war; for which reason the Helvetii also surpass the rest of the Gauls in valor, as they contend with the Germans in almost daily battles, when they either repel them from their own territories, or themselves wage war on their frontiers.”

          What happens if you want to express agreement with him? Without the concept of truth, you can’t say “What Caesar just said is true.” Instead, you have to say, all over again: “All Gaul is divided into three parts, one of which the Belgae inhabit, the Aquitani another, those who in their own language are called Celts, in ours Gauls, the third. All these differ from each other in language, customs and laws. The river Garonne separates the Gauls from the Aquitani; the Marne and the Seine separate them from the Belgae. Of all these, the Belgae are the bravest, because they are furthest from the civilization and refinement of Provincia, and merchants least frequently resort to them, and import those things which tend to effeminate the mind; and they are the nearest to the Germans, who dwell beyond the Rhine, with whom they are continually waging war; for which reason the Helvetii also surpass the rest of the Gauls in valor, as they contend with the Germans in almost daily battles, when they either repel them from their own territories, or themselves wage war on their frontiers.”
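
          Put semi-formally (this is just the standard gloss on the generalizing role of the truth predicate): what we want is something like

          \[ \forall p\,(\text{Caesar asserted that } p \rightarrow p), \]

          but first-order quantifiers range over objects, not sentence positions. The predicate “is true” gives us the same effect while quantifying over Caesar’s assertions as objects:

          \[ \forall x\,(\text{Caesar asserted } x \rightarrow \mathrm{True}(x)). \]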

        • aretae July 28, 2011 at 11:44 pm #

          Re: your point 5 above.
          You say: “The distinction between what we can know and what it’s appropriate for us to believe for practical purposes seems difficult to maintain.”

          I say, self-referentially aware, that belief is always uncertain/conditional. A poker player believes his opponent holds two queens, or the next card will be a spade, but can’t even pretend to know that.

          The question seems to me to be whether one ought to approach the world like the poker player, who operates under conditions of profound uncertainty…or like the accountant who doesn’t. My claim is that the poker-player model is more effective for all complex discussions…and the accountant model is merely a simplification for purposes of discussion.

        • aretae July 28, 2011 at 11:53 pm #

          I could (not as easily) say:
          I predict (.9 < p < 1) that were you to test any one of Caesar’s pronouncements (such as the Belgae are the fiercest inhabitants of Gaul), your test would accord with his claim.

          But when I shorten to “What Caesar said is true”, I’ve lost quite a bit of information between what I believe and what I’ve said.

          Could we accept the concept of truth as a simplification in cases like: She is Bob’s mother? I see little harm. But when you extend to complex cases, all of which are uncertain at a reasonable level…the term itself causes more problems than it solves.

        • Dan July 29, 2011 at 1:18 pm #

          I don’t see that the truth statement that you made adds anything useful to my statement…and am inclined to think that the result holds generally.

          I would not be surprised to find that this was a standard position in philosophy…but I haven’t encountered it in my 20 years of amateur poking about.

          Look up so-called deflationist theories of truth… there is a massive literature (some of which gets quite technical) arguing about essentially these claims.

        • Matt July 30, 2011 at 12:39 am #

          Aretae, I am wondering whether your concern with the concept of truth is connected with a concern about absolute attitudes towards the world. You suggest that it would be better to confront the world like the poker player rather than the accountant. It seems that what makes the poker player right is his appreciation for the uncertainty of what we believe. You say, “A poker player believes his opponent holds two queens, or the next card will be a spade, but can’t even pretend to know that.” You also say, “Let’s talk entirely about uncertainty and prediction…and abandon the pretense of truth.” This comment also suggests that part of your concern with truth is a concern about uncertainty.

          I fully agree with the thought that we are generally uncertain of what we believe. But I would argue that this thought does not support or detract from a substantive concept of truth. The concept of truth suggests a kind of binary (on-off) standard of correctness for beliefs. Accepting that there is such a standard (or that it is useful) doesn’t force us to accept a binary account of beliefs (either you believe it or you don’t). It seems true that we believe some things more strongly than others. This leads to the idea that we don’t have just two attitudes ‘belief’ and ‘disbelief’. We have ‘belief-to-a-degree’. But ‘belief-to-a-degree’ isn’t the same as 100% belief in some probabilistic statement. In other words, it’s possible for us to characterize the poker player’s state of mind as a partial belief in the non-probabilistic proposition that the opponent has two queens, rather than as the 100% belief that there is a certain probability that the opponent has two queens.

          The view I’ve just described is called Bayesian Epistemology. You can find more information about it here:
          Stanford Encyclopedia of Philosophy Entry for Bayesian Epistemology
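
          For concreteness (a standard illustration, not anything peculiar to this thread): on the Bayesian picture, degrees of belief obey the probability calculus and get updated by conditionalization via Bayes’ theorem,

          \[ P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}. \]

          In the poker case, H might be the plain proposition that the opponent holds two queens – prior \( \binom{4}{2}/\binom{52}{2} = 6/1326 \approx 0.0045 \) – and E the observed betting behavior. What gets updated is the player’s partial belief in H itself, not a full belief in some probabilistic proposition about H.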

          The upshot of this view for the concern that I perceive you to have is this:
          Beliefs are often conditional, qualified, and uncertain. And some of our beliefs are certainly beliefs about probabilities. But the fact that our beliefs are often conditional, qualified, and uncertain doesn’t mean that the content of those beliefs is conditional, qualified, and uncertain. It is my attitude of belief that is qualified, not the content.

          So when I believe (with less than 100% certainty) that what Caesar said is true, what I believe (again with less than 100% certainty) is that what Caesar said is true. It’s not that I accept (with 100% certainty) some prediction with explicit error margin attached or some other probabilistic statement.

          This doesn’t do anything to settle debates over the substance or usefulness of the concept of truth. As Dan mentions, there is a large literature on the debate between ‘deflationists’ about truth and their opponents. And as for the ineliminability of the concept, one quick response to Roderick would be “I agree with what Caesar said.” You have anaphoric reference. And you succeed in expressing agreement. But you don’t (obviously at least) include the concept of truth. So you might argue that truth contributes nothing useful in itself since it contributes nothing above “I agree with what Caesar said.” Truth might be a valid concept (i.e. we *can* use it to say “What Caesar said is true”), but we can do without it, just as we can do without it when it comes to shortening “it’s true that this is an effective predictor” to “this is an effective predictor”.

          So the argument about truth can continue, but I think it’s important to separate the logical concept of truth (and arguments over its meaning and usefulness) from the epistemic concept of uncertainty.

          And there are arguments aplenty in epistemology as well. Some folks in the positivist part of the philosophy world think that all of our everyday talk is really a kind of ‘loose’ talk. Really all we have to go on is various perceptual observations. I see something that I normally call “lightning” do what I normally call “hit” something that I normally call “a tree”. But all of that is just some ‘working model’ that I (very) tentatively accept. I don’t really have much confidence in the proposition that I saw lightning hit a tree. I am not committed to the ‘truth’ of this proposition because my evidence is grossly insufficient to the task of justifying strong belief in it.

          That would be a kind of strong skepticism about the justification we have for everyday beliefs about empirical reality (like the Student that Roderick described in the post). And this skepticism could come in various degrees of strength. For instance, you say “Could we accept the concept of truth as a simplification in cases like: She is Bob’s mother? I see little harm. But when you extend to complex cases, all of which are uncertain at a reasonable level…the term itself causes more problems than it solves.” That seems to suggest that you distinguish between ordinary everyday cases like someone’s being Bob’s mother and more complex cases such as science, economics, and politics where uncertainty quickly takes hold, perhaps more quickly than most people realize.

          And so we need to be very careful in all cases, especially those cases where there is significant uncertainty. The idea that we will ‘get to the truth’ is just a dream. We should be content with much more tentative conclusions like “The evidence we have favors hypothesis H over H*” or “The evidence we have favors hypothesis H to degree .8”.

          But this is just skepticism again, of a less strong variety. This view’s beef is not with the concept of truth per se but rather with the idea that we are epistemically justified to the degree that we normally take ourselves to be justified. We are far less justified than we realize. Your concession that “She is Bob’s mother” is ok and does “little harm” seems to fall into this pattern. The concept of ‘truth’ isn’t mentioned here at all. Only the concepts of “she”, “Bob”, and “motherhood”. So unless you have worries about one of those concepts, it seems that your worry is about how we can be justified in actually *believing* that she is Bob’s mother (as if we are going out on a limb by so boldly committing ourselves). Perhaps you will be less worried if you accept a framework in which justification (and the attitude of belief itself) can be partial. Perhaps you think it suspect to say that I have 100% justification in believing that she is Bob’s mother. I agree! But asserting “She is Bob’s mother” expresses no such commitment, on the Bayesian view. It merely expresses some (relatively strong) *degree* of belief or confidence. Deciding whether to believe something is not a matter of deciding whether to ‘go out on a limb’ and believe wholeheartedly in something. It’s a matter of how *strongly* you want to believe something. And that can vary anywhere from 0 to 1. And Bayesianism allows for ranges in degrees of belief as well (e.g. I believe that she is Bob’s mother to a degree between .8 and .9).

          But whether or not any of this epistemological discussion applies to your concerns, it’s important to note how this skeptical worry (an epistemic concern) is not a concern with the concept of truth, either its validity, its substance, or its usefulness. A skeptical worry about our evidence is just that, a skeptical worry about our evidence (namely how much we can know or justifiably believe [and to what degree of strength we can justifiably have those beliefs]). It is not a worry about truth.

          And as for the usefulness of truth, it might be worth considering David Velleman’s article “The Aim of Belief.” This paper takes it for granted (as I recall) that belief ‘aims’ at the truth. The goal of the paper is not to argue for this feature of belief but rather to explain it. In what sense does belief aim at the truth? Some people say that belief ‘aims’ at the truth because you can’t believe that p without believing that p is true. But Velleman points out that this applies to hoping, wishing, and dreaming. You can’t hope that p without hoping that p is true. But hope doesn’t aim at the truth. It aims at what is good (or bad if you prefer the bad). Velleman suggests instead that belief aims at the truth in the sense that truth is a regulating norm for belief. Truth is what makes beliefs correct (or incorrect). One immediate worry would be that it at least seems like it is sometimes ‘correct’ to hold false beliefs, namely when the evidence justifies them. In such a case, it would be ‘incorrect’ to hold the opposite belief (even though it is true). I don’t remember the paper all that well. I assume he addresses these concerns. But whether his paper succeeds or not, that idea might be a fruitful way of exploring the usefulness of the concept of truth. Truth helps us understand what beliefs are. Of course, it might be argued in response that belief is better understood as regulated by predictive success than by truth.

        • aretae July 30, 2011 at 1:05 am #

          Matt and Dan,

          Thank you.

        • Roderick July 31, 2011 at 12:30 am #

          A poker player believes his opponent holds two queens, or the next card will be a spade, but can’t even pretend to know that.

          Well, he guesses that his opponent holds two queens. But does he believe it? If so, what criterion of knowledge does he take himself to lack? Truth? Justification? The Gettier condition? It seems like it had better not be any of those.

        • Roderick July 31, 2011 at 1:04 am #

          FWIW, I recommend Richard Miller’s critique of Bayesian epistemology.

  6. P. July 28, 2011 at 6:12 pm #

    What does “NEC” mean?

    • Matt July 28, 2011 at 6:47 pm #

      The necessity operator.
      “NEC p” means “It’s necessarily true that p.”
      Usually symbolized with a square, but much easier to type “NEC”.

      • Roderick July 28, 2011 at 9:53 pm #

        Yeah, I wasn’t sure the blog software would handle the symbol.

        • Brandon July 29, 2011 at 6:06 am #

          I’m not sure which character this is, but we use utf-8 here, and that means every character in Unicode — all 109k of them — is represented. If you’re not sure, you can save the post as a draft and look at it before you publish it for everybody else to see.

        • Rad Geek August 24, 2011 at 9:56 am #

          For possibility, the diamond operator (⋄) is U+22C4. It’s in the main Mathematical Operators range (in between the union operator and the dot operator).

          For necessity, the box operator (◻) is U+25FB. Unicode actually puts this in the “Geometric Shapes” range rather than “Mathematical Operators,” and it’s surrounded by a bunch of other white box characters that look basically the same, but there seems to be a (somewhat weak and fragile) consensus that U+25FB WHITE MEDIUM SQUARE is the one that is equivalent to the modal operator.
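
          A quick way to sanity-check those code points (a throwaway Python 3 snippet – nothing specific to this blog’s setup):

              import unicodedata

              # The possibility (diamond) and necessity (box) operators, by code point:
              diamond = "\u22C4"
              box = "\u25FB"

              print(diamond, unicodedata.name(diamond))  # ⋄ DIAMOND OPERATOR
              print(box, unicodedata.name(box))          # ◻ WHITE MEDIUM SQUARE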

        • Rad Geek August 24, 2011 at 9:59 am #

          However, my Unicode characters in the text box seem to have been replaced by question-mark characters in the comment as displayed. I don’t know if the same problem would apply to a post as applies to the comment edit box, but it may be a sign that there’s something in the character set support that needs fixin’.

        • Brandon August 24, 2011 at 11:26 am #

          This character does work in posts, but not in comments, which could be for a variety of reasons, but the mixture of old and new collations probably has something to do with it. I need to convert all the tables that aren’t utf8 already to it soon.

        • Roderick August 24, 2011 at 1:00 pm #

          There are a number of functions that work in posts and even in comment previews, but not in the actual comments themselves.

  7. P. July 28, 2011 at 6:30 pm #

    By the way, excellent post. I’ve missed this sort of post on your blog. It seems your first blog was way more filled with philosophical insights than this one.

    • Brandon July 28, 2011 at 9:16 pm #

      Well, he doesn’t have a lot of time since he got screwed by uncle sucker.

  8. P. July 29, 2011 at 6:13 pm #

    The ability to apply a concept (not exceptionlessly, but at least with reasonable reliability) is part of having the concept; we don’t count as having a concept unless we know how to apply it. After all, the process of acquiring a concept just is the process of learning to recognise and identify instances of it in our environment. But it’s an upshot of your view that we have no such ability to recognise and identify anything in our environment.

    This is one of your arguments that I’ve always been curious about. Why wouldn’t the hallucinating person be able to identify instances of concepts?

    Why wouldn’t he be able to identify chair-hallucinations, for example?

    Sure, he won’t be able to identify real chairs or whatever is actually in front of him… but why is that necessary for him to be able to apply concepts? Ain’t applying concepts to hallucinations enough to count as possessing a concept?

    • Brandon July 29, 2011 at 6:29 pm #

      I edited your comment because we don’t use bbcode here, as is clearly marked. We use html. Please use the blockquote html tag for such a long quotation.

      • P. July 29, 2011 at 6:39 pm #

        Sorry… I’m kind of “internet retarded”.

    • Roderick July 31, 2011 at 12:38 am #

      Ain’t applying concepts to hallucinations enough to count as possessing a concept?

      Well, imagine two people, Bonnie and Clyde. Bonnie’s a normally situated person and Clyde’s a brain in a vat. Bonnie’s concepts apply to actual external objects. Clyde’s concepts apply to (what we would call) hallucinations. I claim that while Bonnie’s concepts of “table,” “chair,” “brain,” and “vat” refer to actual tables, chairs, brains, and vats, Clyde’s concepts of “table,” “chair,” “brain,” and “vat” refer to hallucinations. In other words, Clyde does not and cannot have our concepts and cannot refer to tables, chairs, brains, and vats.

      • Roderick July 31, 2011 at 12:43 am #

        By the way, here’s a more detailed exposition (by St. Hilary) of the point I’ve been making about brains in vats.

      • P. July 31, 2011 at 3:30 am #

        Are you saying Clyde does in fact possess concepts, it’s just that he doesn’t possess *our* concepts, or are you saying we can’t even make sense of Clyde’s possessing any concept whatsoever?

        If it’s the last alternative, I don’t understand why.

        • Roderick July 31, 2011 at 3:49 am #

          I actually don’t think the notion of someone who is having only hallucinations makes sense. Mental states require transaction with extramental reality. But I was granting for the sake of argument that Clyde’s case was possible, in order to make the point that Clyde can’t coherently entertain the hypothesis that he’s a brain in a vat. Because he can’t so much as refer to what we call brains in vats — and what he calls brains and vats are what we call hallucinations.

        • P. July 31, 2011 at 3:14 pm #

          Ok, I think I understand you now. Thank you.

          After reading Putnam’s argument my only worry is whether his account of reference is widely accepted, since his argument presupposes that account (I think).

          I know that this guy has criticized some version of the so-called “causal theory of reference”.

          Are you aware of his criticism? Does it imperil Putnam’s argument?

        • Roderick August 1, 2011 at 2:06 am #

          From what I recall of Evans’ criticisms of the causal theory of reference, his complaint was that it didn’t go far enough in the direction of making reference depend on successful identification. So I think he would have to endorse Putnam’s argument even more strongly than Putnam would. But it’s been a while since I read Evans.

  9. Andrew July 30, 2011 at 7:58 pm #

    A lot of your points are inspired by Wittgenstein, no? I know you like to cite him, but I definitely see the influence here, especially since I read On Certainty somewhat recently and really enjoyed it.

    • Roderick July 31, 2011 at 12:40 am #

      A lot of your points are inspired by Wittgenstein, no?

      Guilty as charged.

      • Andrew July 31, 2011 at 4:58 pm #

        Aha! I always enjoy a little Wittgenstein here and there.

        What’s interesting about his views in On Certainty is that he seems to agree that in a sense, we don’t know certain empirical propositions, say, “I have two hands” or “The world is more than 50 years old,” in opposition to G.E. Moore, who said we did. His whole point was that many of these seeming empirical generalizations are not empirical, but are rules of our language-games. Thus, there is an oddity in saying “I know I have two hands.” It makes sense to say that we know it in some contexts (like if we suspect someone has a concussion and we are asking basic questions to test their knowledge), but it is usually not something we provide justification for; rather, as a constitutive rule, a statement like “I have two hands” justifies or licenses other statements of the game together with other such grammatical propositions. Since we usually reserve “know” for things we provide fallible evidence for and for which it is possible to provide evidence against, saying that we know a grammatical proposition like “I have two hands” seems nonsensical. He says that such a statement’s turning out to be wrong would be like an “annihilation of yardsticks.”

        Yeah, anyways, you might have known that, but I think it supports the strange feeling that people like the person at Mises U. get about saying we “know” some of these basic propositions. When looked at closely, we find it hard to justify them, opening the door for skepticism, which is the wrong move. Rather than simply assert that we know them as Moore did, Wittgenstein points out their grammatical function.

        • Roderick August 1, 2011 at 2:28 am #

          Yes. My Wittgenstein is a bit more laid-back than the real Wittgenstein; mine says “it’s fine to say you know them, so long as you don’t lose track of the ways in which that use of ‘know’ differs from more ordinary uses of ‘know.’” (Actually the real Wiggy says things like that too, sometimes.)

