Wednesday, March 13, 2013

The consequentialist 'ought'

Shelly Kagan is visiting Lexington (thanks to Washington and Lee University), so I read his paper "Do I Make a Difference?", went to a talk he gave on why it's bad to be dead, and last night had dinner with him and a group of other philosophers. I had thought from his paper that he judged acts by their expected utility, and I have a long draft of a blog post about what this might mean. Can there be such a thing as the expected utility, rather than the utility that this or that person (or these persons) expects? Does judging acts not by the consequences they actually have but by the consequences they can reasonably be expected to have even count as consequentialism? So I asked him about this, and he told me that in fact he thinks the right act is the one that has the best consequences, not the one that has the best expected consequences. (If you have read the paper and don't see how this can be his position, see footnote 8.) This led to a very interesting discussion about what one should do in cases of imperfect knowledge about what the right thing to do is. In his view you should do what will have the best consequences, but since you don't know what this is you have to decide as best you can. But then should you act on that decision? In a sense yes, of course, but if your decision tells you to do something that is not in fact the act that will have the best results, then in a sense no. 

Yesterday I also happened to read Anscombe's essay "Must One Obey One's Conscience?" To deliberately act against one's conscience, she says, is to choose to do what is wrong (as you see it). That can't be right (as Kant might agree). But to act in accordance with a conscience that tells you to do the wrong thing (think of Huck Finn feeling guilty about helping an escaped slave) also cannot be right. So you are stuck. The only way out is to find out that your conscience is wrong and fix it. This is impossible for a consequentialist, though, it seems to me, because you cannot know the future with sufficient certainty, and that would be the only way to know that your conscience was wrong. You might find out that generally acting in such-and-such a way produces the best results, but you cannot know that it always will. 

I think that this is at least related to the reason why Anscombe, in "Modern Moral Philosophy," writes:
... if you are a consequentialist, the question "What is it right to do in such-and-such circumstances?" is a stupid one to raise. The casuist raises such a question only to ask "Would it be permissible to do so-and-so?" or "Would it be permissible not to do so-and-so?" Only if it would not be permissible not to do so-and-so could he say "This would be the thing to do." Otherwise, though he may speak against some action, he cannot prescribe any - for in an actual case, the circumstances (beyond the ones imagined) might suggest all sorts of possibilities, and you can't know in advance what the possibilities are going to be. Now the consequentialist has no footing on which to say "This would be permissible, this not"; because by his own hypothesis, it is the consequences that are to decide, and he has no business to pretend that he can lay it down what possible twists a man could give doing this or that; the most he can say is: a man must not bring about this or that; he has no right to say he will, in an actual case, bring about such-and-such unless he does so-and-so.
There seems to be a problem for consequentialism here, not so much to do with the difficulty of knowing the future (because that isn't always difficult to know) but to do with the coherence of the moral 'ought'. This apparent incoherence has nothing to do with giving a law to oneself. It has to do with what it might mean to say that one ought to do this or that particular thing. Shelly gave a very nice example, which I hope it's OK for me to use here: Imagine that a mine is flooding with people in it, and you can push one of three buttons that might close various doors and save people's lives. One button will save all the people from drowning, one will ensure that they all drown, and one will cause a few to drown but save most. You know that button C has the third of these effects, drowning a few but saving most, but you don't remember which of the other two buttons, A and B, does which of the remaining things. The right consequentialist thing to do is to push the button that saves all the people, but you don't know which one this is. The highest expected utility comes from pushing button C, even though this is definitely not the best button to push. What should you do? In one sense it seems you should push A or B (whichever is the one that will save all the people from drowning), in another sense you should push C. But what is the meaning of 'should' here? It doesn't seem as though it can be the same in both senses.
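To make the expected-utility point concrete (the numbers here are my own illustration, not part of Shelly's example): suppose 100 people are trapped, button C would save 90 of them, and, for all you can remember, each of A and B is equally likely to be the save-all button or the drown-all button. Then

$$EU(A) = EU(B) = \tfrac{1}{2}\cdot 100 + \tfrac{1}{2}\cdot 0 = 50, \qquad EU(C) = 90,$$

so C maximizes expected utility even though it is certainly not the button with the best actual consequences.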

And that seems like a problem for consequentialism as a moral theory, as a view that can tell us what we should or ought to do, what is right or permissible or wrong or impermissible, as a practical guide to action.

4 comments:

  1. Forgive me for putting this here but I have a quick question for you and couldn't find an email address.

    I'm currently writing a post linking the notion of nonsense in the Tractatus with the approach of apophatic mystical theologians such as Denys and Gregory of Nyssa.

    The similarity (using language to transcend language and see the world aright) seems strikingly obvious to me - so much so that I'm sure others must have written about the Tractatus along these lines. But if they have I've not come across it. Do you know of anything like this?

  2. No problem. I don't think there's really very much along these lines, but there is a recent book by Peter Tyler called The Return to the Mystical: Ludwig Wittgenstein, Teresa of Avila and the Christian Mystical Tradition. I saw it on amazon the other day. Looking for it again just now, several things came up when I searched within amazon for "Wittgenstein amazon". A bit obvious, perhaps, but that's the best I can do.

    1. Ta. As it happens I've got the free sample version of Tyler's book on my Kindle, but I've not got round to buying the whole thing yet. I'm pleased that there doesn't seem to be too much about this out there - although of course that increases the chances that I'm talking through my hat.

  3. That should have been "Wittgenstein mysticism", sorry. But you probably realized that.

    It's probably true that a lack of work in any area means there's a higher chance that there's nothing worth saying about it, but so much gets ignored or overlooked that it doesn't mean much. And the glory is all the greater if you're the first to open up a new area.
