Yesterday I also happened to read Anscombe's essay "Must One Obey One's Conscience?" To deliberately act against one's conscience, she says, is to choose to do what is wrong (as you see it). That can't be right (as Kant might agree). But to act in accordance with a conscience that tells you to do the wrong thing (think of Huck Finn feeling guilty about helping an escaped slave) cannot be right either. So you are stuck. The only way out is to discover that your conscience is wrong and fix it. But this, it seems to me, is impossible for a consequentialist: the only way to know that your conscience was wrong would be to know the future with sufficient certainty, and you cannot. You might find out that acting in such-and-such a way generally produces the best results, but you cannot know that it always will.
I think that this is at least related to the reason why Anscombe, in "Modern Moral Philosophy," writes:
... if you are a consequentialist, the question "What is it right to do in such‑and‑such circumstances?" is a stupid one to raise. The casuist raises such a question only to ask "Would it be permissible to do so‑and‑so?" or "Would it be permissible not to do so‑and‑so?" Only if it would not be permissible not to do so‑and‑so could he say "This would be the thing to do." Otherwise, though he may speak against some action, he cannot prescribe any – for in an actual case, the circumstances (beyond the ones imagined) might suggest all sorts of possibilities, and you can't know in advance what the possibilities are going to be. Now the consequentialist has no footing on which to say "This would be permissible, this not"; because by his own hypothesis, it is the consequences that are to decide, and he has no business to pretend that he can lay it down what possible twists a man could give doing this or that; the most he can say is: a man must not bring about this or that; he has no right to say he will, in an actual case, bring about such‑and‑such unless he does so‑and‑so.

There seems to be a problem for consequentialism here, not so much to do with the difficulty of knowing the future (which isn't always difficult to know) as with the coherence of the moral 'ought'. This apparent incoherence has nothing to do with giving a law to oneself. It has to do with what it might mean to say that one ought to do this or that particular thing.

Shelly gave a very nice example, which I hope it's OK for me to use here: Imagine that a mine is flooding with people in it, and you can push one of three buttons that might close various doors and save people's lives. One button will save all the people from drowning, one will ensure that they all drown, and one will cause a few to drown but save most. You don't remember which button does which thing, but you know that button C has the third of these effects, drowning a few but saving most.
The right consequentialist thing to do is to push the button that saves all the people, but you don't know which button that is. The highest expected utility comes from pushing button C, even though C is definitely not the best button to push. What should you do? In one sense it seems you should push A or B (whichever one will save all the people from drowning); in another sense you should push C. But what does 'should' mean here? It doesn't seem as though it can mean the same thing in both senses.
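The expected-utility claim can be made concrete with made-up numbers (nothing in the example fixes them): suppose 100 miners are trapped, button C certainly saves 90, and, for all you know, A and B are equally likely to be the save-all button or the drown-all button. A quick sketch of the calculation:

```python
# Hypothetical numbers for the mine case; the example itself fixes none of these.
miners = 100      # people trapped in the mine
saved_by_c = 90   # button C certainly saves most, drowning a few

# You don't know which of A and B saves everyone, so each is a
# 50/50 gamble between saving all and saving none.
expected_a = 0.5 * miners + 0.5 * 0
expected_b = 0.5 * miners + 0.5 * 0
expected_c = float(saved_by_c)

# C maximizes expected lives saved (90 vs 50 for A or B), yet C is
# certainly not the best button: the best button saves all 100 --
# we just don't know its label.
best_in_expectation = max(("A", expected_a), ("B", expected_b),
                          ("C", expected_c), key=lambda pair: pair[1])
print(best_in_expectation)  # ('C', 90.0)
```

The two senses of 'should' show up directly in the numbers: the 'should' of expected utility picks C, while the 'should' of actual outcomes picks whichever of A or B in fact saves all 100.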
And that seems like a problem for consequentialism as a moral theory, as a view that can tell us what we should or ought to do, what is right or permissible or wrong or impermissible, as a practical guide to action.