Wednesday, April 26, 2017
Wittgenstein's hut
There's a video about it here.
Monday, April 24, 2017
A problem with bullshit
In "On Bullshit" Harry Frankfurt says that, "the
essence of bullshit is not that it is false but that it is phony." This suggests that it is not the content of speech that makes it bullshit but something else, something like the motive for producing it. He also says:
The fact about himself that the bullshitter hides, on the other hand, is that the truth-values of his statements are of no central interest to him; what we are not to understand is that his intention is neither to report the truth nor to conceal it. This does not mean that his speech is anarchically impulsive, but that the motive guiding and controlling it is unconcerned with how the things about which he speaks truly are.
And this: "However studiously and conscientiously the bullshitter proceeds, it remains true that he is also trying to get away with something."
Generally when people lie it is because they are trying to get away with something. Perhaps some people simply enjoy deceiving others and lie for the sake of lying, but most lies are surely not of this type. People lie to get out of trouble, to impress others, to help make a sale, and so on. Their motive is unconcerned with how things are.
So how does lying differ from bull? Bull could be true, false, or half-true: the bullshitter does not care. Except that we never go around stating truths for no reason other than that they are true. We always mean to achieve something. (Hmm. Do we? At any rate, we do far more than state truths when we speak, and when we do state truths we rarely, if ever, do so simply because they are true.) So truthful bull seems not unlike normal truthful speech, except perhaps that the speaker has a vicious motive. And untruthful bullshit seems very much the same as normal lying. The clearest case of bull seems to be the half-truth. But then why do people tell half-truths? Isn't it usually because they care both about getting the result they want and the truth? (Even if they only care about the truth because they care about possibly getting caught in a lie.) Perhaps what Orwell calls "sheer cloudy vagueness" should be another category here, alongside half-truths. Like half-truths, though, a more or less meaningless cloud of words is what people produce when they want to say something (silence would be costly) but have nothing they want to say (because they are ignorant or because the truth is unpleasant).
We might need to distinguish between two kinds of bull: all speech (true, false, or nonsensical) by people who do not value truth or honesty as such (roughly: consequentialists), and carefully constructed half-truths or misleading whole truths told by people who do care about truth and/or honesty but also care about other things too, e.g. St Athanasius' saying "He is not far from here" when asked about his location and wanting to throw his ill-intentioned pursuers off the scent without actually telling a lie.
Jennifer Saul (here) argues that you might as well "Just go ahead and lie" as mislead in the way that Athanasius did, but I disagree. Here's the conclusion of a longer version of her paper:
The picture sketched in this chapter is a very complicated one, which both rejects and respects both traditions with which we began this chapter. The first tradition holds that method of deception is never a matter of moral significance: lying and misleading are equally bad or good, and the moral status of any particular deception depends on such things as its goal or its consequences. The second tradition holds that lying is always wrong, and that misleading is always better than lying (even if it isn't always morally acceptable). The view developed here firmly rejects both of these.
Against the second view, I maintain that misleading is not always better than lying. I showed this through an examination of cases in which misleading was not morally preferable to lying. Further, I argued that there does not seem to be any good justification even for a defeasible version of this view--on which misleading is, except in certain special cases, morally better than lying.
But I do not agree with the first tradition's claim that method of deception is never a matter of moral significance. I take method of deception to be of moral significance in all of the following ways:
(1) Whether an agent chooses to lie or merely mislead can make an important difference to moral evaluations of an agent.
(2) In an adversarial context like a courtroom, misleading is morally better than lying.
(3) Where there is a prior agreement that misleading is to be preferred to lying, misleading is morally better than lying.
And here's a key part of her argument, a case in which misleading is supposedly not morally preferable to lying:
Charla is HIV positive, but she does not yet have AIDS, and she knows both of these facts. Dave is about to have sex with Charla for the first time, and, cautiously but imprecisely, he asks (3).
(3) Do you have AIDS?
Charla replies with (4).
(4) No, I don't have AIDS.
Charla and Dave have unprotected sex, and Dave becomes infected with HIV. It is unquestionably true that Charla deceived Dave about her HIV status, and also unquestionably true that Charla did not lie--she merely misled him. Yet it seems completely absurd to suppose that Charla's deception was even a tiny bit better due to her avoidance of lying. In this case, misleading is in no way morally preferable to lying. If misleading was, quite generally, morally preferable to lying, it would be morally preferable in this case. Since it is not, we should reject the strong general claim (M).
This seems wrong to me. I agree that in terms of what Charla did to Dave her misleading response was as bad as a lie. It might even be worse, because of the apparent sadism of toying with him like this. But as well as Dave I think we should keep in mind both Charla herself and what we might call truth, or respect for the truth, or respect for the value of truthfulness. (I think it might be most useful to think in terms of truth or even the dreaded Truth, but I don't mean this in a way that cannot be translated into naturalistic talk about relations between people.)
Mill says that one should not tell lies even when doing so would maximize utility because lying is bad for one's character, reducing one's honesty or commitment to truth-telling, making it more likely that one will lie in other, less justifiable, cases later. (He might also say the opposite somewhere else, but never mind.) This sounds like an empirical hypothesis, and a dubious one at that. But it can be taken in a different way, having to do with Anscombe's remark about its showing a corrupt mind if one is prepared to accept "in advance, that it is open to question whether such an action as procuring the judicial execution of the innocent should be quite excluded from consideration." Mill might be thinking that it is corruption of mind (in a way that might have nothing much to do with consequences) to lie, or perhaps to be prepared (in advance) to lie, over relatively trivial (non-life-threatening) matters. This case is life-threatening, of course, but does Charla fully understand that? We don't know.
As well as thinking about cases in which misleading might seem just as bad as lying, it could be worthwhile to think about those in which there seems to be a difference. Consider Athanasius again. I don't know what really happened or why, but it is surely conceivable that he wanted to avoid lying while also helping to prevent (his own) murder. And that this was not because he wanted to keep his hands clean, although that is one possible motive, but because he wanted to act with due respect for God, who not only commanded that we not bear false witness but is also sometimes identified with Truth. Atheists won't be moved by such considerations, of course, but they might still care about truth and/or truthfulness.
Here's a quick run-through of Saul's paper.
"If the result is the same, and the motivation is the same, why should we have this moral preference?" for misleading over lying, Saul asks. The immediate motivation--deception--might be the same, but respect for truth might be part of the motivation in the case of misleading. (This is easier to imagine in Athanasius' case than Charla's.) Indeed the result would not be the same in this case either--if one merely misleads then one does not, apparently, disrespect the truth as much as if one lies. One has not violated the norm of not lying. Perhaps that is irrelevant, but it is hardly obvious that it doesn't matter.
Saul does accept that misleading is better than lying in some cases (see points 2 and 3 above). In a courtroom lying is perjury while misleading is not. And then there's this, slightly odd, example: "A couple with an open marriage might [...] agree that lying about affairs is not acceptable while misleading is." This seems odd because it sounds as though there is bad faith in the relationship. Why is misleading OK here if both partners are happy with the marriage's being open? The idea seems to be that whatever consenting adults agree to is OK, which I find hard to believe, for two reasons. First, is consent genuine when bad faith is involved? Does one partner 'agree' to the open relationship only out of fear of losing the other, and is agreement resulting from fear what we want to count as consent? (If so, what's so great about consent?) Second, is 'OK' the right evaluation of, say, consensual cannibalism?
And that's about it. (Although I have skipped the part where Saul discusses the "overwhelming majority of justifications for the belief that lying is worse than mere misleading" which "turn, in one way or another, on holding the audience more responsible for the falsehood they believe in the case of mere misleading." I don't see that as being a very promising line of attempted justification.) Saul draws the conclusion that I quoted above (from the longer version of the argument).
My objection is partly that her argument is based on unrealistic examples and partly that her focus is too narrow. The two concerns are related. Saul appears not to look far beyond questions about motivation and results, and the motivations and results she focuses on are those directly involving people and their immediate future: what counts as Charla's motivation is sex with Dave, what counts as the relevant result is Dave's infection. Any possible concern with honesty on Charla's part is not mentioned. (And what is Dave's deal? "Do you have AIDS?" is surely not credible dialogue in the circumstances.) What about motivation and results as far as norms/principles/values and their violation go? Don't people in fact think and care about such things as well as wanting sex, information, etc.?
Real life is not like the encounter we are presented with between Charla and Dave. And this matters because one thing that is missing from Charla is any concern with ethics. Perhaps she is meant to be a sociopath, like a dishonest version of Saga Norén. But if she were more normal she would surely have some thoughts, before or after, about her dishonesty. These might be in terms of the value of honesty or the badness of lying or deception. Or they might be about Dave and her abuse of his trust. We might then think about the value of general principles. Misleading and lying are both abuses of trust, but is one perhaps a greater abuse than the other? Might it be better if we all valued truth itself or honesty itself, in an abstract way, rather than only valuing concrete individuals such as Dave?
The moral universe of these examples seems to be one in which people want various things and are entitled to pursue them as long as they don't break various rules (violate Dave's rights, produce needless unhappiness, or whatever it might be). The rules all immediately concern people. Not only are they not concerned with God, they are also not concerned with any value independent of people, such as beauty or truth. And this lack of concern with truth is what makes speech bullshit, at least according to Frankfurt.
I'm not sure how far it's possible to live without some values of this kind, though. Roughly: is act-consequentialism possible in practice? Or must we internalize principles (rules) of some sort if we are to behave ethically? And without some values of this sort, how can we make sense of valuing people or their wants or rights? Aren't people worth caring about partly because they are rational, and rational beings can discover truths, and truth is good? Or because people can love and love is good? Maybe that's not quite right. Maybe the value of people is more basic than that. But there are things about people that we value. And there are other things that we value too, like the beauty of nature. Motivations and results, on the other hand, surely matter, but don't seem to be the right kinds of things to be valued as good. Dave's continued good health might be good, for instance, but only because Dave is good. (Not morally good, but a good thing (in a non-instrumental sense) for the universe to contain.) This seems to be missing from Saul's thinking. But I'm reading a lot into one short paper. I've also relied heavily on rhetorical questions.
With that in mind, let's have a go at Saul's seemingly preferred example:
Consider for a moment the story of Frieda, who suffers from a peanut allergy so dire that even the tiniest amount of peanut oil could be deadly. George knows that, and invites her to dinner, murderously preparing a stir-fry with peanut oil. Frieda, appropriately cautious (though not cautious enough), asks, ‘Are there any peanuts in the meal?’, to which George replies ‘No, there are no peanuts’. Frieda eats the stir-fry, and dies. If misleading is always better than lying, then George did something slightly less bad by choosing his true but misleading utterance rather than a false one like ‘No, it’s perfectly safe for you.’ This seems clearly wrong.
One problem here, it seems to me, is that George's murdering Frieda casts a huge shadow over the example, obscuring any difference there might be in the ethics of lying and misleading. Another, related problem is that we don't know why he chose to mislead rather than lie. In some cases someone might make this choice because of fear of God or love of truth or a desire to be an honest person, but it's hard to imagine any of these motives at work in a cold-blooded murderer like George. It does not follow that such motives are generally irrelevant to the ethics of the deeds they motivate though. It is, as Frankfurt argues, hard to make sense of the nature and badness of bullshit if we don't accept the value of concern with truth or honesty.
The point of Saul's examples, I take it, is to challenge the idea that misleading is always better than lying by finding cases in which it seems equally bad because the motive and result are the same either way. I think she's right that they are not always morally different. But I don't know how significant this is. Stealing $10 from a dying man doesn't seem significantly worse than stealing $5 from a dying man, but this does not show that the amount stolen is never relevant to how bad an act of theft is. Or take Mill's example of saving a man from drowning in order to torture him before he is killed. This might show that saving someone from drowning is not always better than not saving someone from drowning. But it still is generally better.
The idea of having a motive along the lines of murder-without-lying is hard to imagine (or to value), but Athanasius' possible intention to avoid-being-murdered-without-lying seems both conceivable and conceivably valuable. Nor is it the same motive as avoid-being-murdered-by-any-means-necessary. So if we really do take motives seriously, and not only consequences, then misleading can appear to be better than lying.
Finally, there is the question of consequences. Whether he lies or misleads, George still murders Frieda. And whether he lies or misleads, Athanasius still gets away unharmed. But there are other possible consequences that are not the same. If lying is worse than misleading then the consequence of having lied is worse than that of having misled. We can't prove that lying is not worse than misleading by assuming this not to be the case. How might having lied be worse than having misled? To have lied is to have betrayed commitment to truthfulness to a greater extent than to have misled is. (I mean this to be analytic.) Is it therefore worse? It is if commitment to truthfulness is a good thing (generally, even if perhaps not always). And such commitment does seem to be a good thing.
Or am I simply assuming that Athanasius-style 'bullshit' is better than student-who-hasn't-done-the-reading-but-won't-admit-it bullshit?
Friday, April 21, 2017
Friedlander II
Having said I was considering a series of posts on Eli Friedlander's "Missing a Step Up the Ladder," I find that I have little I want to say beyond recommending the paper. I might do one more post on it, but this looks like being a short series. Here is one more passage that I don't understand though:
The ethical will is the actualization of the capacity for being in agreement with the world. This is not an agreement with what you represent to yourself to be essential to life. For such an agreement is understood through the primacy of ends, and the highest reality cannot be represented as an end I strive for—it is manifest as a limit I recognize. One could then say that “seeing the world aright” or simple and sober clarity of vision is the ethical imperative. Acting right is being in agreement with what has the highest reality, acting wrongly is letting yourself remain unclear, one might say unrealistic. What Wittgenstein calls in the Notebooks the voice of “conscience” arises out of a sense of non-being in my existence in meaning. This is also why ethics is so closely related to the question of nonsense in language.
The part I find especially difficult is the part I have put in bold. It might be impossible to understand this without reading the whole paper, which I should probably do again, but if anyone has any other suggestions I'd be grateful.
Friday, April 14, 2017
Wiggling chairs
I loved reading Eli Friedlander's "Missing a Step Up the Ladder" (which I'm sure was open access when I got it, but which doesn't seem to be any longer). I might do a series of posts on it (cue nothing at all for three weeks followed by a "what I saw on TV last night" post instead).
Having said that I like the paper, there are some bits of it that I don't get. Perhaps blogging about them will help me understand.
For instance, on pp. 58-59 Friedlander says:
I can will to move my hand, but I cannot in the same sense will the chair to move. My act of will, I would like to say, cannot connect directly to the chair. I can only move my body which is the one to move the chair. But Wittgenstein asks himself what would it be like to find out that something is essentially not in the scope of my will. For this negation to make sense, one must be able to conceive of the possibility of trying to will such-and-such and not being able to do so. Someone asked, for instance, to try to will the chair to move, might concentrate on the chair intensely, fasten his gaze on it, narrow his eyes, and express determination. But would this count as trying to will the chair to move and finding out that it is the kind of thing that does not obey the will? There is no trying and discovering that the chair is out of the range of my will. It would be as nonsensical as trying to find out whether sounds can be colored.
The first sentence sounds plausible enough. Actually, I'm not sure that I can will to move my hand in any different sense than that in which I can will to move a chair by psychic means. I can, though, move my hand or, if it is restrained or paralyzed, try to move it. I cannot move a chair in the same way. Nor can I try to move a chair in the same way or the same sense. A doctor might ask me to wiggle first one hand then the other, and perhaps also ask me to wiggle each foot in turn to test for something or other. But if she then said, "Now wiggle the chair" she would either be kidding or else using 'wiggle' in some other sense, one that involves moving over to the chair and applying physical force to it. That's what I take the first part of this quoted passage to be getting at.
But the experiment with psychic powers seems perfectly intelligible to me. The chair won't move, of course. Telekinesis is not possible. Still, denying (or affirming) its possibility makes sense in a way that denying or affirming that sounds can be colored does not. Doesn't it? A magician might, after all, seem to move physical objects through sheer mental power, while no magician could ever even seem to color sounds. Doing so is unimaginable because the idea is unintelligible--the words make no sense (and 'because' here just means '=').
It's a minor point, if I'm right, but a) it's good to be right, and b) I wonder whether I'm missing something. Is attempting to use psychic powers that I know I haven't got really intelligible? Does my thinking that it is reveal some level of superstition on my part, a refusal or failure to rule out completely the possibility that people might have psychic powers? Is it like, or related to, the following question of the rationality of buying lottery tickets? It is often said that it is irrational to buy lottery tickets because the chances of winning are so small, but if you get a dollar's worth of pleasure from buying the ticket then it is rational to pay a dollar for a ticket. But then you only get that pleasure because you imagine that you might win, which is irrational of you. If you really comprehend the smallness of the odds of winning then having a ticket would give you no pleasure at all. And if you really understood how the world works, perhaps the very idea of psychic powers would seem not just false but nonsensical to you. The words 'psychic powers' (and others of the same kind) would be completely withdrawn from circulation in your conceptual economy.
That doesn't seem right though. My not thinking about, or even slightly believing in, psychic powers doesn't mean that these words have no meaning. And 'meaning for me' is not really a thing. "Those words have no meaning for me" just means I don't use those words. Or that's how it seems to me, anyway.