Monday, January 20, 2014

Grounds

This sounds like the kind of book I would like to read. The review seems good, too, but I was struck by this sentence:
Heidegger and the later Wittgenstein agree that the basis of intelligibility, the groundless ground of our conceptual edifice, is the everyday shared reality in which we already find ourselves.
That can't be right, can it? The basis of intelligibility, if there is such a thing, must be independent of intelligibility, hence unintelligible. But how could we find ourselves in an unintelligible "everyday shared reality"? There's also this:
Both Heidegger and Wittgenstein intended to lay bare the background experiences, the forms of life, the worldhood of the world, that first make language meaningful.
How can language be made meaningful? If it isn't already meaningful then it isn't language.

A little charity might be in order. There isn't a background that makes language meaningful or a ground that is the basis of intelligibility. At least not if the words 'basis,' 'ground', etc. are being used in the normal way. Rather, when we look for a basis or background all we find is everyday reality, forms of life, etc. But these are not the foundation of language, however important they might be in or to language. Language is not the kind of thing that could have a foundation. Asking for its basis is like asking what matter (or whatever we call the fundamental stuff of material reality) is made out of, as if matter were a cake whose ingredients we could discover. We can ask what goes in the cake, and what baking powder, etc. are made of, and what atoms are made of, but at some point it's a mistake to think you can ask "And what's that made of?" The fundamental constituent of material things is not itself made of anything else. Similarly, language is what stories, arguments, etc. are made of. It is not itself made of anything else. (It isn't made of words, for instance, because a word is not a word unless/until it is part of a language. So we can say that language is made of words, but not in any foundational sense. The words don't come first even if sounds and marks that later become words do.)

23 comments:

  1. ha, yes having been thru earlier generations of the taming/institutionalization of anti-foundationalisms/post-structuralisms I can say most of the buzzing flies remain in the traps, darn cognitive-biases...
    -dmf

  2. Yes. I was disturbed to read that "It is now widely agreed that the writings of the period from 1946 until his death (1951) constitute a distinctive phase of Wittgenstein's thought" in the Stanford Encyclopedia of Philosophy entry on Wittgenstein, although it's not entirely clear that the authors support the foundationalist, "third Wittgenstein" idea. I hope a foundationalist reading of Wittgenstein is not widely agreed on.

    Replies
    1. the more times people can throw a citation to 'moyal-sharrock (2004)' into a footnote, the wider the 'agreement' will become.

  3. "The basis of intelligibility, if there is such a thing, must be independent of intelligibility, hence unintelligible. But how could we find ourselves in an unintelligible 'everyday shared reality'?"

    But saying that reality is not susceptible to intelligible explanation cannot be the same as saying that it is not, itself, intelligible, can it? Don't we just mean here that questions of intelligibility don't even apply in this case?

    The problem perhaps arises when we think of "ground" as naming some thing in the world (rather like the ground we hit when alighting from an airplane or a boat), when all we should take "ground" to be in this case is a term designating the underpinning of the variety of games we engage in -- the games in which it makes sense to speak of, and understand, things (i.e., to recognize them as intelligible)?

    Perhaps the problem here is that there is just no easy way to speak about the underpinnings in this sense. "Underpinnings" conjures a very concrete picture, of pylons and structural foundations, say -- in which case we find ourselves looking for something akin to these, when, in fact, the term "underpinnings" in this sense really designates nothing like these at all.

    I think there is an important sense in which the writer is saying something intelligible, i.e., that both Wittgenstein and Heidegger looked at the world (in which we find ourselves) as in need of no justification but as the basis for the games in which justification claims can be made, for these games are part of the world (which includes our being in it), too.

    But I am not very good on Heidegger, I'm afraid. I always found his work obscure, while Wittgenstein, in his approach, seems to make sense. But perhaps there is this particular link between their two projects, as the writer you quote seems to suggest (which would give me more respect for Heidegger than I have heretofore granted, I guess).

    Replies
    1. I'm not good on Heidegger either, but I think he's worth studying. Hardly easy though.

      Underpinnings" conjures a very concrete picture, of pylons and structural foundations, say -- in which case we find ourselves looking for something akin to these, when, in fact, the term "underpinnings" in this sense really designates nothing like these at all.

      This is well said.

  4. that last paragraph is fairly discouraging:

    'Heidegger and Wittgenstein both suggest a new direction for philosophy, but there are differences between them. If philosophers are to heed the lessons of Wittgenstein and Heidegger, how exactly should we proceed? Where Heidegger and Wittgenstein disagree, whom should we follow and why? While these questions remain, for the most part, unanswered in this anthology, I would be delighted if the editors offered us a second volume.'

    a new direction ~60-90 years old… which leaves us with the question 'tell me why i should adopt THIS GUY'S doctrines'.

    Replies
    1. It is discouraging when you put it like that. But if there were more Cavell/Rorty/Mulhall types around that wouldn't be a bad thing. It's just that that isn't a type. Perhaps it's a type that is taking a really long time to emerge.

    2. the wish for types is damaging in and of itself. what is the reviewer's question but a question about the quickest way to make oneself a type?

    3. the wish for types is damaging in and of itself.

      True.

  5. "Language is not the kind of thing that could have a foundation. Asking for its basis is like asking what matter (or whatever we call the fundamental stuff of material reality) is made out of, as if matter were a cake whose ingredients we could discover."

    But couldn't we speak of language having a foundation in a different sense, i.e., in terms of the kinds of mechanisms an organism must have to have/use language?

    In one sense, at least, isn't it right to say that language, as we find it in humans, has its foundation or basis in the signaling which other animals at a different point on the evolutionary scale engage in? In this case the idea of a foundation is not one of justification but of functional explanation, as if we might say: if you want to understand how language develops in particular kinds of organisms, look at the more rudimentary forms that precede it on the evolutionary scale. And then, if you do that, wouldn't we expect to come to a point where what we're looking at has become so different from language that we might not recognize it as such without a sense of the connections which link it, through increasingly complex exemplars, in various organisms moving up the evolutionary ladder? In that case we might have an instance of a "foundation" of language that doesn't actually have the look of a language.

    In this sense isn't it correct to think of language in foundational terms?

    Of course, that isn't what we mean when we think of the foundation as a justification for this or that use or instance of language, as if we could not speak about something unless we could also ground it first in something more basic. But that just strikes me as looking to play the wrong game here. Maybe that's why it's hard to see how one could think of the world as unintelligible when, of course, here we are within it and acting with understanding vis-à-vis a whole panoply of worldly events, things, etc. Of course the world is intelligible to us in this sense -- in a way, that is, in which it would not be if we suddenly stepped out of our house and found the laws of physics or familiar human practices gone haywire.

    One doesn't ask why the world is as it is (except in a plainly scientific sense in which the earth could have been different or the laws of physics might have been). Given the world as it is, there's just nothing to explain in terms of justification even if we can expect to explain the physical events which constitute it.

    Or is this too opaque to even make sense of?

  6. On the one hand, it seems as though language must have emerged out of something. So then meaning must have emerged, or grown, or been built out of something else, meaning-lite or sub-meaning. But that doesn't seem to make sense. Things have meaning or they don't. Meaning is like consciousness--it's hard to imagine a component of it that isn't the thing itself. Presumably consciousness evolved out of something else, but then that something else is something else, not part of consciousness in the way that the foundation is part of the house. You won't understand consciousness or meaning by studying something of a completely different kind, however necessary the latter might be to the former. It would be like studying chalkboards in order to understand arithmetic. (I think--it's all getting opaque to me too now.)

  7. I think you're right that consciousness (or, better, the array of features we find in ourselves, when we pay attention to that area of our lives, which, in the aggregate, we lump together under the rubric "consciousness") is sui generis, and has to be taken at "face value" in an important sense. But suppose the issue is to build a conscious machine (not outlandish in this day and age)?

    In that case, what we want to know is what the constituent elements of consciousness are: the features that underlie the features we encounter as our mental lives, as what it is to be conscious, such that, if we could construct them and put them together in the right way, we would produce something with consciousness.

    Isn't that an important element in trying to understand consciousness too and, if so, doesn't it also say something worth knowing about consciousness in ourselves?

    Just as language must be grounded in more primitive features if we accept that it developed in an evolutionary way, and there is little reason to think otherwise, doesn't it also make a kind of sense to suppose that consciousness (the features of our mental lives) is grounded in more basic features we would not call consciousness, too? But does granting something like that undermine our acceptance of the fact that consciousness is a phenomenon in its own right in the universe?

    It seems to me that there are different ways we use "foundation" and "grounded" and that, just because one way, the way that seems to involve trying to justify, makes no sense, some other way still might. I guess I'd want to say that there's room for more than one sense in using these kinds of words and maybe our job is just to point out which ones belong in the game and which don't?

  8. It seems to me that there are different ways we use "foundation" and "grounded" and that, just because one way, the way that seems to involve trying to justify, makes no sense, some other way still might.

    Yes. I think there are basically two ways we can use a word like 'grounded' (or two ways that we do use such words). There are literal grounds, which provide physical support for a building, and metaphorical grounds, providing justification for a belief. One has to do with causes, the other with reasons (and reasons not in the sense of "for what reason does the building not fall down?" but as in "what reason is there to believe that?", i.e. reasons in the sense of justification). I don't think it makes sense to talk about justifying language itself. But I also think it makes little sense to talk about anything like causes of language. It's the wrong kind of thing to have a cause, existing at the wrong level.

    The idea of trying to build a conscious machine seems similarly confused, although I think it makes perfect sense to try to make a machine that would be considered conscious. The thing to do would be to focus on getting its behavior, and probably its appearance, right. This could teach us something about "consciousness" and perhaps something about consciousness too, but it would be quite different from a scientific investigation into the causes of some phenomenon.

    Replies
    1. Would a machine in which we had got the appearance of consciousness (the behaviors) right, in certain situations, be one that we could expect to act consciously in the wide range of situations in which we do? If we agree (and I don't see how we cannot) that we have a mental life (i.e., all sorts of things -- including pictures in our heads -- going on subjectively when we act or decide to act), then could we expect a machine that lacked that to act with the appearance of conscious thought across such a broad array of circumstances as we confront in our lives?

      Can language be caused? I agree with you that that seems to make no sense in one very important way because language is not a thing that happens but something we do, it's a propensity not an event. But there are some uses in which it might make sense to speak of language as caused. We can cause someone to use language by prompting them in the right way. Language can be caused to occur in a device if we build it in the right way (though that seems like a somewhat strange turn of phrase). Language, as it occurs in creatures like ourselves, may have been caused by an evolutionary process which led to brains with language capacity. There may be particular evolutionary forces we can pick out and point to as causative of language in the mechanical (not the teleological) sense of causation. Certain operations of our brains are likely the proximate cause (again, in the mechanical sense) of our use of sounds or symbols in a linguistic way.

  9. Would a machine in which we had got the appearance of consciousness (the behaviors) right, in certain situations, be one that we could expect to act consciously in the wide range of situations in which we do?

    I wouldn't think so. We'd have to experiment, i.e. try it and see.

    If we agree (and I don't see how we cannot) that we have a mental life (i.e., all sorts of things -- including pictures in our heads -- going on subjectively when we act or decide to act)

    I agree that we have a mental life. I don't object to the rest of it either, as long as not too much is based on talk of "pictures in the head" and so on.

    could we expect a machine that lacked that to act with the appearance of conscious thought across such a broad array of circumstances as we confront in our lives?

    I'm not sure what to say here. Do we have a clear idea of what the 'that' in question is? I think it's a mistake to reify the subjective or to get too Cartesian about these things. It seems possible to me that a machine might act with the appearance of conscious thought across a broad array of circumstances. Whether a machine ever will do this is an empirical question. Could it do so if it did not really feel anything? Well, what does it mean to really feel? I'm not sure that we know. To me this seems like an ethical question, one that calls for judgment. What will we count as real in this area?

    We can cause someone to use language by prompting them in the right way.

    Certainly.

    Language can be caused to occur in a device if we build it in the right way (though that seems like a somewhat strange turn of phrase).

    Maybe. There is room for disagreement about whether devices really use language, isn't there? Although it depends what you mean by "language occurring" in something. If it occurs in books then it can occur in devices. Otherwise you have the whole Chinese room issue.

    Certain operations of our brains are likely the proximate cause (again, in the mechanical sense) of our use of sounds or symbols in a linguistic way.

    Presumably, yes.

    Replies
    1. I guess my main point is to suggest that it isn't Cartesian to recognize we have a mental life, even if we embrace the Wittgensteinian view that language doesn't really work in the same way with regard to it that it does when we speak of public phenomena. I know that some Wittgensteinians suppose that Wittgenstein's point about private language leads to rejection of any referential uses of language with regard to mental phenomena (what is private to each of us), but I think that is too strong an interpretation of his point. My view on this is, I have found, somewhat controversial among Wittgensteinians. Yet I don't see how a man like Wittgenstein, who was so thoughtful and aware of his own private musings and thought processes in general, could be construed to be saying that the mental has no place at all in referential language. Rather, I think, his view is better understood as a claim that referring language works differently in the private arena than in the public.

      Certainly the question of artificial intelligence (AI) opens this issue up, because any effort to construct an artificial entity with intelligence of the conscious variety such as we have must confront the question of what should be going on inside. I agree that it's an empirical question as to whether consciousness is a system level feature of certain kinds of contraptions, but the very fact that it is suggests that we must consider ways of speaking about the subjective interior of the system. Any effort to construct a conscious-behaving machine would have to address more than getting the right behaviors to occur, because you'd need more than some kind of canned outputs: reality is open-ended and unpredictable, and no humanly conceivable programmed approach could anticipate everything. So you'd need a system that has the capacity to self-program, at least to some degree, i.e., to do the kinds of things we do when confronted with unanticipated situations and events. That is, you'd need a system with the kind of mental life we find in ourselves: the ability to think about, reflect on, plan, imagine, and experience experiences. All of that is private/internal to the system just as it is in us.

      Perhaps the job of the philosopher here is to consider and propose just what we mean by these internal/private features, including things like "really feeling", etc.?

    2. We do have a mental life and can refer to its contents. That is, I can refer to (talk about) the headache I had this morning or the dream I had last night, etc. But this is quite different from referring to public, physical objects. There isn't, in the case of the mental, an object there that I can designate in the way that there is with sofas, trees, etc. Insofar as mental objects are objects at all, they are a very different kind of object. Language works differently in regard to them. So we can say that referring language works differently in this arena, but it's important to keep in mind that this isn't just a coincidence or quirk of language. And to keep in mind that the arena in question is a metaphorical one.

      I agree that it's an empirical question as to whether consciousness is a system level feature of certain kinds of contraptions

      I don't think this. If it is an empirical question at all then it is one of a peculiar kind. We might say, "Look at this machine and tell me that it is (not) conscious!" That's a kind of empiricism, but not the scientific kind. The observer has to make a judgment, and there is logical, conceptual room for disagreement, even if psychologically only one reaction is possible (e.g. I cannot bring myself to call a machine conscious or, on the contrary, I cannot deny that my robot carer is really upset when it cries, etc.).

      we must consider ways of speaking about the subjective interior of the system

      Probably, but we are going to be making these up, or discovering what we find natural and/or helpful, not discovering the (absolutely) right ways to speak about such things.

      An 'intelligent' machine would have to self-program or learn, but machines can do this to some extent already, as far as I know. There are computer games that 'learn', one of which was supposedly inspired by Wittgenstein's work. I don't know that they would need to feel in order to be thought of as conscious, although to the extent that we thought of them as conscious we would be more likely to think of them as feeling.

    3. I think we're on the same page re: language working differently when we apply it in the private sphere of our lives. Perhaps we really do differ on the empirical question as you suggest. By "empirical" I mean a scientific issue, to be resolved by attention to the facts as we discover them, not the idea that we need to think empirically about whether there are other minds, say, in order to assure ourselves there are.

      In fact, I think the Wittgensteinian account, that we recognize consciousness in others by behaviors, that what we mean by ascriptions of consciousness in others hinges on what they do, is right. Where perhaps we separate is on the question of what we mean by a term like "consciousness."

      I don't think the word is exhaustively accounted for by observed criteria alone. I think, in recognizing it in others, we also recognize a likeness with ourselves that is not simply the result of how we speak, though it is that, too, because language arises in us against that background. Before we have language, though, we have recognition of other subjects as subjects, as when the pre-linguistic infant reacts to the adults around it. It recognizes the behaviors of the adults on an instinctive level.

      Language develops in us and reflects that instinct in its forms and, eventually, we expand our usages, as we grow and learn about others and ourselves, learning to apply ideas that are formed in the public sphere in more personal ways, i.e., to include ideas about the other's subjectness based on our own. If I dream and feel pain I think that you do as well when you speak and act as if you do. I make that connection to my own subjectness.

      I guess what I'm saying is that I don't think the usual bare-bones Wittgensteinian account, that it's all in how we learn to speak, is enough. It's important and indispensable for our form of life, I think, but it's not the whole picture. The problem though is to see the picture in its entirety.

      If we ran into a machine that acted enough like us that, if it were one of us, we would treat it as conscious (behave toward it as if it were conscious, too), then why wouldn't we think it conscious as well, in roughly the same way we use that word for other creatures like ourselves, i.e., by supposing there is a mental life present in it?

      The problem, I guess, is what do we think consciousness consists of? It can't be behaviors alone because, from an empirical standpoint at least, we know that's not the case with ourselves. Not because we've done any kind of a study of this business in others but because we directly experience our own subjectivity, our mental lives. So we impute a mental life to the other because its behaviors mesh with ours in a way that we recognize at a very deep level.

      This doesn't mean we have to say we only know another has a mind by analogy though. It just means that something else comes into play which complicates the traditional Wittgensteinian picture driven by the realization of the impossibility of private language. So the conscious behaving machine would be no less conscious to us (have a mental life) than other humans or equivalent creatures.

      The problem at the empirical level then becomes: what would it take to get that kind of behavior in a machine? And here I'd say we need to think about what makes up our own mental lives.

      We think we can imagine a perfectly behaving automaton that fools us all the time, every time. But that's where the empirical problem kicks in because it's just an assumption that such a complete simulation would be possible without some level of subjectivity such as we have -- an assumption that looks more like an intuition than a valid conclusion. And intuitions can be notoriously wrong.

    4. Re-reading what I wrote above I see I made some typos, including the omission of the word 'not' at one point ("That's a kind of empiricism, but not the scientific kind."). Sorry about that. You seem to have managed to understand what I meant anyway.

      I agree with almost everything you say here. The tricky part--and I think you agree that it's tricky--is where you say that we need to think about what makes up our mental lives. What that means and how to go about doing it are very difficult questions to answer. If I were trying to create AI I think I would forget about this side of things and focus on producing the right kinds of behavior. As you say, this won't guarantee everything we want, but I'm not sure that anything will. It would at least give the project a practical focus, without the danger of heading into a philosophical dead-end.

    5. I largely agree though I'm not sure a "philosophical dead-end" matters much to the science of this. Scientists would have to concentrate on the practical part, of course, but I guess I'd say the practical includes whatever goes into a system to give it the kind of self-programming capacities we find in ourselves. Could that be done without the awarenesses (I use the plural because I think there are many different types of awareness as well as things we may be aware of) such as we have? Maybe.

      Perhaps a machine could do reasoning and evaluating and even a kind of thinking without the awarenesses we have, which seem to contribute to our sense of selfhood, of being a self. What goes into the awareness of self? A historical memory qua biography (both recallable explicitly, in a narrative way, and built into the ways in which we think about things and react to them based on learned information) would likely be necessary for a self at a minimum.

      Could a machine have some kind of consciousness without that though? I'd be inclined to think it could -- but it would not merely be severely limited, compared to ourselves (a given when you drop these items from the mix). It would also be incomplete conceptually and so it would not fully count as what we mean by "consciousness." But, as you say, there's a lot of room for variations on this word and some of this will involve becoming more cognizant of the range of applications over which it applies, testing it and perhaps broadening the senses in which we use that term.

      It does seem to me though that one could not expect to build a successful conscious-behaving machine without giving it the "internal" system features we have. It would need some capacity to respond to the unexpected, after all, because no one but a deity could ever program into it a set of responses so complete as to meet every contingency.

      Isn't that what being intentional finally is, having the ability to operate autonomously, to act from one's own understanding and, when relevant, interests? How could we expect a machine to do that if it had only a set of rules to follow mechanistically when faced with this or that set of inputs, this or that circumstance but no (or only a radically incomplete) awareness of itself as an entity? If it lacked that, what would it do when facing new situations which its programmer had failed to foresee?

    6. Right, philosophical dead-ends don't matter to the science as long as you don't wait to answer the philosophical questions before starting on the science. You would never get anywhere that way.

      As for autonomy, etc., those are tricky questions. Robots do act "autonomously" now. Whether they are really autonomous is another, and to some people uninteresting, question. Similarly, they can be self-aware through various self-scanning mechanisms. But is this real awareness? From a scientific point of view I don't see that it matters. As long as the machine can handle the unexpected. Then again, even people don't always deal with the unexpected very well.

  10. the desire for some master key/code that will secure meanings and order(s) is very old and obviously has not died despite the reported 'deaths' of many related figureheads...
    -dmf

    Replies
    1. I'm inclined to say that this is part of human nature, i.e. something that will never die.
