Saturday, September 13, 2014

Three days in Mexico City

I lost five pounds over ten days in Mexico, and I didn't get ill. I consumed less than I do at home, but I think most of the weight loss came from walking. Certainly I ached more than normal. So the following itinerary might not be for everyone, but it can be done. I was in the state of Veracruz for a week before I went to Mexico City, but that's somewhat off the tourist trail (although if you have a way to get around, it is a fantastic place to visit) so I'll leave that part of the trip out.

Sunday: My hotel was conveniently close to the Paseo de la Reforma, a two-mile boulevard dotted with statues and very friendly to cyclists, so I walked there and then up to the cathedral and other buildings on the Zocalo (central square). It's non-stop mass on Sundays but you can go inside and look around without joining in. Outside there are people selling things and native people dancing for tourists. Then on to the Templo Mayor, and then the Diego Rivera murals at the National Palace. Chocolate and churros for lunch before heading all the way down the Paseo de la Reforma to visit the anthropological museum in Chapultepec Park. Drank 1.5 litres of hibiscus-flavored water (not as good as it sounds, but how could it be?) on the way back to the hotel. Dinner at the surprisingly not bad chain restaurant Vips.


Monday: Almost all museums are closed in Mexico City on Mondays, so it's a good day to visit Teotihuacan. I decided to book a tour there, though, and it didn't go until Tuesday, so I headed by metro to the Coyoacan area, where my guidebook describes a walk, which I did. Walked past the closed Frida Kahlo and Leon Trotsky houses, and should have had lunch in this area but instead kept walking until I reached a metro station. Headed up to the Plaza de las Tres Culturas, which involved a bit of a walk through a not very prosperous neighborhood but no one mugged me and the threatened rain stayed away. The Aztec ruins here turned out to be open (and free), which was a nice surprise. Very similar to the Templo Mayor, but without the museum. The church was closed, but for all I know it always is. Then on by metro to the Basilica of Our Lady of Guadalupe, where there was, oddly, an unsuccessful snake charmer (outside the metro, not in the church), and another mass going on. Moving walkways behind and below the priest take you past the miraculous image of the Virgin Mary. Back to Vips for lunch at dinner time, then dinner almost immediately after at a place called Hellfish.


Tuesday: Bus trip to Teotihuacan (should have gone by public transport and walked to the murals at Tepantitla). Arrived not long after the place opened, but even later in the day it was not very crowded. Spent the morning looking around and climbing pyramids, then off to taste tequila (followed by too much time in a gift shop) and eat lunch. Back in the city I visited the murals in the Palacio de Bellas Artes and had dinner at the old-world-seeming Restaurante Danubio, whose walls are covered with framed cartoons and messages from, presumably, happy customers.



Tuesday, September 9, 2014

Differently abled?

So, this post of Jon Cogburn's about ableism. (See also here.) I wanted to try to sort the wheat from the chaff in both the post and the comments without spending all day doing so, but in an hour I didn't manage to get far at all into the whole thing. Instead of a point-by-point commentary, then, here's more of a summary with comment.

I take his key claim to be this:
all else being equal, it is better to be able. Speaking in ways that presuppose this is not bad, at least not bad merely in virtue of the presupposition 
That it is better to be able than disabled is surely close to being analytic, although of course one might disagree about its always being better to be "able" than "disabled." (That is, so-called abilities might not be so great and so-called disabilities might not be so bad, but actual disabilities can hardly fail to be at least somewhat bad.) Perhaps disability has spiritual advantages over ability (I don't think it does, but someone might make that claim) but in ordinary, worldly terms disabilities are bad. Hence the prefix 'dis'.

Cogburn makes two claims here. Not only that it is better to be able but also that it is OK to speak in ways that presuppose this. The English language comes very close to presupposing this, so Cogburn is more or less defending the language that even anti-ableist-language people speak. There is language and there is language, of course, as in English, on the one hand, and the language of exclusion, say, on the other. But the idea that it is undesirable to be blind, deaf, lame, weak, sick, insane, and so on and so on runs deep in ordinary English. Could this change? Surely it could. Should it? That is the question. Or one of the questions. Another is how bad it is to argue that speaking in such ways, ways that presuppose the badness of blindness, etc., "is not bad."  

A caveat is probably necessary, or even overdue, at this point. Cogburn has been accused, among other things, of defending hate speech, so I should address this thought. He is not defending attacks on disabled people. He is not attacking disabled people. He is defending some linguistic expressions of the idea that all else being equal it is better to be able. These expressions might harm disabled people (by perpetuating harmful prejudices) or fail to benefit them as much as some alternative expressions would (by countering those prejudices), but Cogburn's claim is that no use of language should be condemned simply on the grounds that it involves the presupposition that disabilities are generally bad things to have.

Roughly his claim is that the presupposition is true, and therefore ought to be allowed, and that it is patronizing to disabled people to think that they need protection from words that are only imagined to be hurtful to their feelings. The claim against him (again roughly, and there are multiple claims against him) is that some speech directed at disabled people really is hurtful, even when it's intended to be sympathetic, and that the kind of speech in question creates an environment, a kind of society, that is detrimental to the interests of disabled people whether they feel it or not, and whether it is intended or not. It is this that is the more interesting claim, I think, because Cogburn agrees that disabled people should not be insulted or patronized.

As I see it, two questions arise here. Is it lying to say that it is not better to be able (other things being equal)? And if so, is this a noble lie?

The idea that it is a noble lie would depend on a form of utilitarianism combined with faith in the possibility of linguistic engineering. There is something obviously Orwellian about this idea, but something seemingly naive too. If we start calling blind people 'visually impaired' instead, how much will change? I have no objection at all to making such changes if the people they are intended to help like them. Presumably Cogburn doesn't object to this kind of change either. But if all we do is to change the vocabulary we use without changing the grammar, then sooner or later 'visually impaired' will be used exactly the same way that 'blind' is now used, and will have exactly the same meaning, connotations, etc. These superficial linguistic changes, i.e. changes in vocabulary or diction only, will not effect deep grammatical change (by what mechanism would they do so?). Superficial changes can have deep effects, as when disfiguring someone's face leads to people treating them much worse, but it isn't obvious that changing labels will have good effects. Nor will they change anyone's ability to see.

Which brings us to the question whether that matters. Is it bad to be blind, or worse to be blind than to be sighted? In "Practical Inference" Anscombe writes:
Aristotle, we may say, assumes a preference for health and the wholesome, for life, for doing what one should do or needs to do as a certain kind of being. Very arbitrary of him.
Ignoring her irony, is it arbitrary? Is it bad or simply different to have 31 or 33 teeth rather than the standard 32? Are two legs better than one? One comment at New APPS says that it is not disability but suffering that is bad. Is that what we should say?

I don't think I would bother trying to do anything about a disability that did not lead to suffering, that much is true. But some conditions surely lead to suffering more often than not. Some of this will be fairly direct. My father's muscular dystrophy leads to his falling down from time to time. This hurts. And some of the suffering is less direct, involving other people's attitudes and reactions. Falling in public is embarrassing, but would not be if people were better. So should we fix the physical and mental problems (i.e. conditions that lead to suffering) that people have when we can, or should we fix other people, the ones who regard or treat the suffering badly? Surely so far as we can, other things being equal, we should do both. And medical advances are more dependable than moral ones.

We might argue at great length about what is and is not a disability, but that some people are more prone to suffering than most because of the condition of their bodies or minds is surely beyond doubt. Pretending to deny this isn't going to help anybody.    

I feel as though I've been rushing things towards the end of this post, but I also don't want to keep writing and writing without ever posting the results. One thing that encourages me to keep writing is the desire to be careful to avoid both genuine offensiveness (bad thinking) and causing offense by writing ambiguously or misleadingly (bad writing). But then I think that what I'm saying, or at least what I mean to say, is just so obviously right that no one could possibly disagree. What happened at New APPS shows that this is false. It also shows that this is very much a live (as in 'live explosive') issue, and one that brings questions about consequentialism, relativism, Aristotelian naturalism, and ordinary language together in ways that can be very personal and political. I don't mean that it's Anscombe versus the remaining New APPS people, as if one had to pick one of those two sides, but it would be interesting to see a debate like that.      

Friday, September 5, 2014

Poor fellow

I was surprised by the reaction to Robin Williams' suicide, which saw some people reacting as if to the death of a personal friend and others as if to the death of the very personification of humor. I didn't feel that way at all. Then people started saying he was killed by depression and that we all need to know and understand more about this disease. This bothers me, although I'm not sure I can put my finger on why. Partly it's that if someone is ill then the solution might seem to be technical and so we don't have to worry about treating them like a human being. Or rather, I suppose, we don't have to worry about treating their depression as an emotion. We don't need to cheer the depressed person up or attempt empathy. In fact it would be a mistake to do so, a symptom of blameworthy ignorance. So we just shove them towards a doctor and wait till they are fixed before relating to them as normal again. This strikes me as an uncaring form of 'concern', although I've seen it come from people who clearly do care as well as those who I can't believe really do. It is what we are taught to think.

One thing that pushes us this way is the desire to deny that people who kill themselves are being selfish or cowardly. It wasn't a choice, the pain made him do it, people say. But of course suicide is a choice. I think it's bizarre, at least in cases like Williams', to call it selfish or cowardly. How much unhappiness are people supposed to put up with? How much was he living with? More than he could take, obviously. But it's insulting to deny that he acted of his own free will. Why can't the decision to commit suicide be accepted and respected? I don't mean by people who really knew him--far be it from me to tell them what they can or should accept--but by strangers and long-distance fans.

Because it's so horrible, I suppose. And that's why it's considered selfish. You are supposed to keep the horribleness inside you, quarantining it until it can be disposed of by a doctor, or talk it out therapeutically. Not put it in the world. Don't bleed on the mat, as they used to say unsympathetically at the judo class my brother and sister went to. But that is selfish. If you have to bleed, bleed. People often say that we should not bottle up our emotions, that men should not be afraid to cry, and so on. This, I think, is partly right and partly based on a mistaken idea about the effects of expressing painful emotions, namely that once they are expressed they will be gone. But this is not true. They don't go away once let out of the bottle. It's partly also, though, a kind of hypocrisy. We don't actually want to see people's emotions, not the really bad ones. No doubt we do want to see some emotions, including painful ones, and quite possibly more than are often on display, but there is a reason why we don't wear our hearts on our sleeves.

It's hard to think well about suicide. Here's Chesterton:
Under the lengthening shadow of Ibsen, an argument arose whether it was not a very nice thing to murder one's self. Grave moderns told us that we must not even say "poor fellow," of a man who had blown his brains out, since he was an enviable person, and had only blown them out because of their exceptional excellence. Mr. William Archer even suggested that in the golden age there would be penny-in-the-slot machines, by which a man could kill himself for a penny. In all this I found myself utterly hostile to many who called themselves liberal and humane. Not only is suicide a sin, it is the sin. It is the ultimate and absolute evil, the refusal to take an interest in existence; the refusal to take the oath of loyalty to life.
I think he's right that there is something monstrous about suicide, and there is something really nightmarish about Archer's idea. But what I want to do is to say "poor fellow." Not to praise nor to blame. And not to regard suicides as anything other than fellows, with as much free will as the rest of us. And along with that sympathy to feel some relief that the person's suffering is over.

Monday, September 1, 2014

Understanding human behavior

A question that came up prominently during the seminar in Mexico has also been discussed recently by Jon Cogburn at NewAPPS. The question is what is involved in, or required for, understanding human behavior.

Jon Cogburn says that:
One could say that given a set of discourse relevant norms held fixed, understanding in general just is the ability to make novel predictions. For Davidson/Dennett, we assume that human systems are largely rational according to belief/desire psychology and then this puts us in a position to make predictions about them. We make different normative assumptions about functional organization of organs, and different ones again about atoms. But once those are in place, understanding is just a matter of being better able to predict.
I'm writing this as a post of my own rather than as a comment at NewAPPS partly because I suspect it will be too long for a comment and partly because I don't know what all of this means and don't want to appear snarky or embarrassingly ignorant. I genuinely (non-snarkily) don't know what it means to hold fixed a set of discourse relevant norms, nor what it means to put normative assumptions in place. But what I take Cogburn to be saying is, in effect, that "understanding in general just is the ability to make novel predictions." 

Winch says that being able to predict what people are going to do does not mean that we really understand them or their activity. He cites Wittgenstein's wood-sellers, who buy and sell wood according to the area covered without regard to the height of each pile. We can describe their activity and perhaps predict their behavior but we don't, according to Winch, really understand it. He surely has a point. Here's a longish quote (from pp. 114-115 of the linked edition, pp. 107-108 of my copy):
       Some of Wittgenstein’s procedures in his philosophical elucidations reinforce this point. He is prone to draw our attention to certain features of our own concepts by comparing them with those of an imaginary society, in which our own familiar ways of thinking are subtly distorted. For instance, he asks us to suppose that such a society sold wood in the following way: They ‘piled the timber in heaps of arbitrary, varying height and then sold it at a price proportionate to the area covered by the piles. And what if they even justified this with the words: “Of course, if you buy more timber, you must pay more”?’ (38: Chapter I, p. 142–151.) The important question for us is: in what circumstances could one say that one had understood this sort of behaviour? As I have indicated, Weber often speaks as if the ultimate test were our ability to formulate statistical laws which would enable us to predict with fair accuracy what people would be likely to do in given circumstances. In line with this is his attempt to define a ‘social role’ in terms of the probability (Chance) of actions of a certain sort being performed in given circumstances. But with Wittgenstein’s example we might well be able to make predictions of great accuracy in this way and still not be able to claim any real understanding of what those people were doing. The difference is precisely analogous to that between being able to formulate statistical laws about the likely occurrences of words in a language and being able to understand what was being said by someone who spoke the language. The latter can never be reduced to the former; a man who understands Chinese is not a man who has a firm grasp of the statistical probabilities for the occurrence of the various words in the Chinese language. Indeed, he could have that without knowing that he was dealing with a language at all; and anyway, the knowledge that he was dealing with a language is not itself something that could be formulated statistically. ‘Understanding’, in situations like this, is grasping the point or meaning of what is being done or said. This is a notion far removed from the world of statistics and causal laws: it is closer to the realm of discourse and to the internal relations that link the parts of a realm of discourse.
Winch talks elsewhere (see p. 89, e.g.) about what it takes for understanding to count as genuine understanding: reflective understanding of human activity must presuppose the participants' unreflective understanding. So the concepts that belong to the activity must be understood. Without such understanding all we will generate is (p. 88) "a rather puzzling external account of certain motions which certain people have been perceived to go through."
 
But does Winch get to say what counts as genuine understanding? This was a point we discussed when I was at the University of Veracruz. Several students there seemed to want less than what Winch would accept as true understanding of human behavior. They did not want to empathize. They wanted to identify patterns, and if they were able to do so well enough to be able to make accurate predictions then they would be very satisfied. A "rather puzzling external account of certain motions" is basically what they were hoping to produce, as long as it allowed them to make accurate predictions.

It looks to me as though Winch would resist or even reject such a desire, but can a desire be mistaken? And I don't know how to settle the apparent disagreement between Winch and others about what is and is not real understanding. Can we just say that as long as you see the facts you may say what you like? Or is that too easy?

Saturday, August 23, 2014

Anscombe Forum on Human Dignity

Conference to be held on March 13-14, 2015, at Neumann University, which is located in Aston, PA, in the greater Philadelphia area. 
The forum is an annual event designed to explore the work of G.E.M. Anscombe and topics in her work that are of continuing importance within the Catholic intellectual tradition.  In March 2014 the forum was initiated with a conference focused on the question of Anscombe's contributions to the Catholic intellectual tradition.  The March 2015 Forum will be dedicated to the subject of human dignity.
Featured speakers: Candace Vogler, David B. and Clara E. Stern Professor of Philosophy, University of Chicago; Nicholas Wolterstorff, Noah Porter Emeritus Professor of Philosophical Theology, Yale University; Duncan Richter, Professor of Philosophy, Virginia Military Institute.
We welcome all contributions on the subject of human dignity and are particularly interested in contributions that engage the work of Anscombe or Peter Geach, or that otherwise engage elements of the Catholic intellectual tradition.  For further information contact Dr. John Mizzoni at mizzonij@neumann.edu.  Submissions (full papers only; 20-30 minute reading time) should be emailed no later than December 30, 2014 to mizzonij@neumann.edu.  
More information will become available at www.neumann.edu/anscombeforum
Select papers from the conference will be published by Neumann University Press.  
I'm looking forward to this, but being in such distinguished company is a little intimidating. I'd better say something good.

Friday, August 22, 2014

The Ludwig Wittgenstein Chair at the University of Veracruz

I'm just about back and ready to catch up on emails, blogging, etc., having been in Mexico as a visiting professor at the University of Veracruz. Every 18 months or so they bring someone in for this position, and the purpose of this post is basically to describe what it involves. Robert Arrington was the first person to occupy the chair, and I was the second. In other words, it's pretty new, so who knows how the position might develop in future. What I can tell you is what I did.

My job was to give a public lecture on a Wittgenstein-related subject to an audience of roughly a hundred people and then to lead a seminar for two hours every day for a week, with between twenty and thirty people (mostly graduate students but also members of the faculty) in the seminar. All of this, apart from the last meeting of the seminar, was recorded, so perhaps it will be available online somewhere sometime. The subject of Winch's The Idea of a Social Science was suggested to me, so my lecture was directly about that book, and the seminar dealt with related topics: Wittgenstein's lecture on ethics (which perhaps is not all that relevant, in fact, but it seemed like a good idea at the time), the first part of the Philosophical Investigations, Wittgenstein's remarks on Frazer, selected parts of Winch's book, and Winch's "Understanding a Primitive Society."

I speak no Spanish, so language was an issue sometimes, but not a huge problem. More important, I think, was that most people in the audience at both the lecture and the seminar were not philosophers but psychologists and other social scientists. They were certainly interested in Wittgenstein, but particularly in how what he said might relate to social science. And for the most part their idea of the aims of social science is not the same as Winch's. More about this, perhaps, in another post.

What did I get out of it? A lot. It's a real joy to teach students who are genuinely interested in the subject and do not have to be manipulated with carrot and stick to read the assigned material and discuss it. It's also a pleasure to discuss philosophy with people who are more knowledgeable and sophisticated than the typical undergraduate. Not just a pleasure but an education too. I also had my expenses covered, so I got a plane ticket there and back, hotel, and meals, plus a car, driver, and interpreter to take me around during the day (the seminars were held in the late afternoon/early evening) to all the best sights in the area. The people there are extremely hospitable and gave me various gifts as well. In short, if you get the chance to do this I highly recommend it.

Sorry if this comes across as bragging, but I think I may have been annoyingly obscure about what I've been up to, and I loved it, so it's hard not to talk about it.

Friday, August 8, 2014

Consequentialism

'Consequentialism' and 'utilitarianism' are used pretty much interchangeably these days, but of course Anscombe coined the term 'consequentialism' in order to distinguish the view of Sidgwick and others from utilitarianism. It can be hard to see what difference she saw, and I might get it wrong, so I'll quote what she says:
Let us suppose that a man has a responsibility for the maintenance of some child. Therefore deliberately to withdraw support from it is a bad sort of thing for him to do. It would be bad for him to withdraw its maintenance because he didn't want to maintain it any longer; and also bad for him to withdraw it because by doing so he would, let us say, compel someone else to do something. (We may suppose for the sake of argument that compelling that person to do that thing is in itself quite admirable.) But now he has to choose between doing something disgraceful and going to prison; if he goes to prison, it will follow that he withdraws support from the child. By Sidgwick's doctrine, there is no difference in his responsibility for ceasing to maintain the child, between the case where he does it for its own sake or as a means to some other purpose, and when it happens as a foreseen and unavoidable consequence of his going to prison rather than do something disgraceful. It follows that he must weigh up the relative badness of withdrawing support from the child and of doing the disgraceful thing; and it may easily be that the disgraceful thing is in fact a less vicious action than intentionally withdrawing support from the child would be; if then the fact that withdrawing support from the child is a side effect of his going to prison does not make any difference to his responsibility, this consideration will incline him to do the disgraceful thing; which can still be pretty bad. And of course, once he has started to look at the matter in this light, the only reasonable thing for him to consider will be the consequences and not the intrinsic badness of this or that action. So that, given that he judges reasonably that no great harm will come of it, he can do a much more disgraceful thing than deliberately withdrawing support from the child. And if his calculations turn out in fact wrong, it will appear that he was not responsible for the consequences, because he did not foresee them. For in fact Sidgwick's thesis leads to its being quite impossible to estimate the badness of an action except in the light of expected consequences. But if so, then you must estimate the badness in the light of the consequences you expect; and so it will follow that you can exculpate yourself from the actual consequences of the most disgraceful actions, so long as you can make out a case for not having foreseen them. Whereas I should contend that a man is responsible for the bad consequences of his bad actions, but gets no credit for the good ones; and contrariwise is not responsible for the bad consequences of good actions.

The denial of any distinction between foreseen and intended consequences, as far as responsibility is concerned, was not made by Sidgwick in developing any one "method of ethics"; he made this important move on behalf of everybody and just on its own account; and I think it plausible to suggest that this move on the part of Sidgwick explains the difference between old‑fashioned Utilitarianism and that consequentialism, as I name it, which marks him and every English academic moral philosopher since him. By it, the kind of consideration which would formerly have been regarded as a temptation, the kind of consideration urged upon men by wives and flattering friends, was given a status by moral philosophers in their theories.
One difference between consequentialism so understood and old-fashioned Utilitarianism is surely that under consequentialism "you can exculpate yourself from the actual consequences of the most disgraceful actions, so long as you can make out a case for not having foreseen them." This means that one problem with consequentialism is that it is, in a sense, not consequentialist enough. There is too much scope for failure of imagination (by way of excess or deficiency) to exculpate. Hence, to give some examples, I might not be responsible for killing an innocent man if I genuinely felt (i.e., imagined that I was) mortally threatened by him and did not foresee that he might pose no real threat to my life, and I might not be responsible for plunging a country into violent anarchy if I sincerely expected my invading troops to be greeted as liberators. 

Another difference is that in consequentialism "the kind of consideration which would formerly have been regarded as a temptation [...] was given a status by moral philosophers." We see this in the following quotation from Benny Morris:
Even the great American democracy could not have been created without the annihilation of the Indians. There are cases in which the overall, final good justifies harsh and cruel acts that are committed in the course of history. 
He is talking about Israel, but many people in the United States think something like this. Or they think that the annihilation never happened, or it wasn't so bad because those were different times, or it was bad but it's in the past and therefore irrelevant, or they don't think about it at all. Or all of the above. What matters is the evasion of responsibility, not how the evasion is achieved.

But I don't mean to single out Americans. The Khmer Rouge were consequentialists too. Consequentialism is bad, ubiquitous, and not well understood. Philosophers can at least address the last of these. As I see it (going solely by this passage, which is probably a mistake), consequentialism holds (or at least implies) that the goodness or badness of an action depends entirely on the consequences expected by the agent. Whether these consequences are foreseen or intended does not matter. The intrinsic goodness or badness of the action is also irrelevant. And for these reasons consequentialism is doubly bad.