Tuesday, December 23, 2014

The best and worst of 2014

This blog has been quiet for a couple of weeks and is likely to stay that way for a couple of weeks more (lots of work, a little flu, lots of traveling), but in the meantime here's something. Looking at other people's lists of the top ten films and albums of the year, not to mention books, I realize that I haven't seen or heard (or read) as many as ten new movies/albums/books this year. So I won't even attempt a top ten, or top five, of anything. What I will do is make some negative comments and then end on a positive note.

The Guardian named Under the Skin the best movie of the year. It's not. People who like it talk about how haunting and visually stunning it is. It is memorable and visually striking. On the other hand, as the Rotten Tomatoes summary hints at ("Its message may prove elusive for some"), the whole thing seems pointless, despite very much appearing to want to have a serious point. The word for this kind of thing, I think, is pretentious. Another sci-fi disappointment was Snowpiercer. This is the kind of movie I really enjoy, just not a very good example of the kind. Maybe my expectations were too high, but any top ten list that includes it goes down in my estimation.

Now for the good news. I've mentioned some albums I've bought this year and I'm sure nobody cares, so I won't review them. But from other top ten lists I've recently discovered (a bit late) The New Pornographers (not really my cup of tea but sometimes very catchy indeed in a good way) and Allo Darlin' (presumably named after this). If I have an album of the year it's their We Come from the Same Place. Pitchfork calls them "bookish," which I suppose is my kind of thing.  

And I haven't seen enough new films to be sure, but my guess is that Locke was the best movie of last year. It's (even) better than Boyhood: a movie for grownups, and visually interesting too.

Wednesday, December 17, 2014

Nordic Wittgenstein Review

The new issue of the Nordic Wittgenstein Review is out now. There's lots of good stuff in it, as ever.

Thursday, December 11, 2014

Religion without God

Howard Wettstein's review of a new book by Ronald Dworkin is worth reading. Here's a taste:
In "Chapter 2: The Universe,"[3] Dworkin turns from the religious values that "fill the lives of ordinary people" to "the religious value of celestial beauty that intoxicated Einstein" (p. 47). He makes the point that evolution and the grand universe it has created is itself a source of beauty. This thought is not available to a naturalist. Only those parts of the universe that produce pleasure in our sight can be, for him, beautiful. He finds the universe as a whole an incalculably vast accident of gas and energy. Religion finds it, on the contrary, a deep complex order shining with beauty . . . . Theists find it obvious why the universe is sublime: it was created to be sublime. (p. 48)
Dworkin's naturalist denies -- why need she deny this? -- the existence of "a complex order shining with beauty," and his theist seems strangely logically inept.
Dworkin seems to have missed the beauty of the idea of the universe as "an incalculably vast accident of gas and energy." The "incalculably vast" part is at least impressive, and the thought that everything we see, care about, and understand is part of this huge accident is mind-boggling. In the case of things we like the accident is happy as well as vast. If anything inclines me to religion it's that, not the Apollonian thought of "order shining with beauty." There is a kind of order in nature, of course. Enough for science to be possible. But there's enough chaos to keep things interesting too.    

Monday, December 1, 2014

And another site to check out

A new philosophy of religion blog, featuring Martin Shuster and others, here.

Humane Philosophy

This looks like a good site. It includes a video lecture by Anthony Kenny, the text of a lecture by Stephen Mulhall, and another video lecture by Peter Hacker. And a lot more besides. 

Saturday, November 29, 2014

What should we read?

I've just (much too late) started reading Ian Hacking's The Social Construction of What? and think that maybe everyone should read it. It's written in a funny way though, being superficially accessible but assuming a fair amount of background knowledge and quick comprehension. He brings up relativism, for instance, but then refuses either to define it or to argue for or against it. He seems fairly sympathetic, but it's hard to tell. Mostly he seems to think that talking about it is a waste of time. But then why does he bring it up? Never mind, we're on to the next topic: the point of talk about social construction. As far as I can see (I'm on p. 16) the book is written for a general audience, but a general audience either with some familiarity with the people and ideas Hacking talks about or else without any concern to understand the references he makes. What kind of audience is that? Perhaps the rest of the book explains things more, or else avoids references to Sartre's early work, etc. It's relatively easy reading, and seems like a good aid to cultural literacy, but I think my students would be lost. Is there anything similar but better?

Speaking of books that everyone should read, Jon Cogburn writes:
I think Kaufmann is an underappreciated treasure, especially for nineteen year olds. His Nietzsche: Philosopher, Psychologist, Anti-Christ is up there with Ray Monk's The Duty of Genius, Marcuse's One Dimensional Man, and Magee's The Philosophy of Schopenhauer as easy to read philosophy books that would be required teen reading if I had my druthers.
And elsewhere (although I can't find it now) he has suggested that everyone should be familiar with the critiques of religion presented by Marx, Nietzsche, and Freud. (If he didn't say this then I will.) But what exactly should people read by Nietzsche? As far as I know he didn't write a nice 20-page "Right, here's what I think about religion" essay that teenagers could read and understand. If everyone ought to know what he thought, though, then it would be handy to have some version of it to give to people who won't (or haven't yet had the opportunity to) study his work more seriously.

Finally, and perhaps most importantly, what else should everyone read? I'm thinking especially of lucid, accessible, reliable critiques of influential ideas and ideologies. Partly I'm thinking what I should try to get my students to read, but partly also I'm wondering, if I've missed Hacking (whose book I was at least aware of), what else might I have missed? And someone with a slight knowledge of Derrida's work recently asked me how Wittgenstein's related to it. What should someone like that (an interested non-philosopher) read? I'm tempted to tell people like that just to give up, but that's not very friendly, and they aren't likely to listen. So is there a decent Philosophy of Language for English Professors book out there? (That's not an English professors = dummies joke. The friend who asked is an English professor, and he's not alone in being interested.) Or perhaps these books don't exist and I should be writing them.

Monday, November 24, 2014

Rape culture

[UPDATE: I expect everyone knows by now, but for the record see also this.]

If you haven't yet read this article about rape at US universities then you should (and then see this followup). (Also see this.) When I was an undergraduate a typical party involved bringing and guarding your own unrefrigerated beer (no one was rich or generous enough to provide free drinks for other people), possibly having a conversation or two with people who turned out to be no more interesting than the friends you had gone with, leaving once you had drunk your beer, and wondering why you had expected this party to be any different from the others. In the US there are fraternities that provide unlimited free drinks in a country where most students can't legally drink in bars. Drunkenness ensues. So does sex. And violence, especially sexual violence.

The Rolling Stone article focuses on the University of Virginia, where I got my PhD, but the problem is nationwide. We had a discussion this past week at VMI with students from Washington & Lee University about sexual harassment. Apparently it is common there. (It happens at VMI, too, but we don't allow fraternities or alcohol on campus, which I think makes it rarer.) At UVA when I was there some fraternities had a reputation for rape, but they won't be closed down unless specific allegations are proved. This is hard because rape is often hard to prove, because victims are especially discouraged from prosecuting in these cases (do you really want to harm the university's reputation?, do you know what this will do to your reputation?, do you want never to be invited to another party ever again?, etc.), and because the fraternity members all stick together in defense of their "brothers" and against women who go against them in any way. The only people I know who defend fraternities at all are people who were in one when they were in college and who point out that not all frat boys are rapists, that fraternities typically do charitable work as well as throwing parties, and that being in a fraternity provides an ineffable bond of brotherhood whose value can never be appreciated by outsiders. It is about as clear as it ever could be that the bad of fraternities outweighs the good. But universities won't get rid of them because parents and alumni support them, and these are the people who provide the money that keeps universities going. As with gun control, large numbers of people are prepared to accept violent crime against young people for the sake of the very dubious benefits of their own preferred way of life.

Lowering the legal drinking age to 18 would surely help, but something called rape culture is also said to be to blame. It is not clear what this idea amounts to. Part of it, I think, is that we live in a culture that is too tolerant of rape. Which is to say that rape happens in our culture, and happens far more often than it should. This is true. But part of the idea also seems to be that rape is a product of culture, so that to blame specific rapists is naive. It is more important, perhaps not more urgent but at least closer to addressing the problem at its root, to attack rape-friendly aspects of culture, such as rape jokes and pornography. This, I think, is less true, and perhaps not true at all.

I don't mean that rape jokes are OK. It's more that getting rid of rape jokes will not necessarily get rid of rape. Perhaps rape jokes make rape seem more acceptable to people who might commit or help cover up rape. But perhaps they don't. And perhaps it's more a case of actual rape making rape jokes, etc. more common than vice versa. Jokes about rape are neither funny nor sensitive to the suffering of rape victims. So I'm not defending them. But putting a stop to them will not necessarily do anything at all to reduce the amount of rape that occurs.

Focusing on rape culture rather than rape might therefore lead to efforts going in the wrong direction. It has other likely effects too. It makes the issue one about what might be called texts (jokes, movies, etc.) rather than people and how they treat each other. (Enter the theorists...) It also greatly increases the number of people who can be condemned. (Let slip the dogs of war...) In doing these things, i.e. making the issue one for theorists and one that directly involves far more people, there is a risk of watering the problem down. If it's a theoretical matter how concrete can it be? And if it's about offensive jokes, etc. then it certainly seems less serious than when it was about rape and only rape.

So I sympathize with this kind of statement from the Rape, Abuse & Incest National Network:
In the last few years, there has been an unfortunate trend towards blaming “rape culture” for the extensive problem of sexual violence on campuses. While it is helpful to point out the systemic barriers to addressing the problem, it is important to not lose sight of a simple fact: Rape is caused not by cultural factors but by the conscious decisions, of a small percentage of the community, to commit a violent crime.
While that may seem an obvious point, it has tended to get lost in recent debates. This has led to an inclination to focus on particular segments of the student population (e.g., athletes), particular aspects of campus culture (e.g., the Greek system), or traits that are common in many millions of law-abiding Americans (e.g., “masculinity”), rather than on the subpopulation at fault: those who choose to commit rape. This trend has the paradoxical effect of making it harder to stop sexual violence, since it removes the focus from the individual at fault, and seemingly mitigates personal responsibility for his or her own actions. 
I sympathize, but I'm not sure I agree. Amanda Marcotte responds to the statement I just quoted that:
This doesn't make sense. People who use the phrase "rape culture" do not deny that rape is a matter of individuals making the active choice to rape. "Rape culture" is a very useful way to describe the idea that rapists are given a social license to operate by people who make excuses for sexual predators and blame the victims for their own rapes. Instead of recognizing this, or, at the very least, just not bringing it up at all in its memo, RAINN instead bashes a straw man, arguing that the focus on "rape culture" diverts "the focus from the individual at fault, and seemingly mitigates personal responsibility for his or her own actions."
What's at issue here, it seems to me, is whether, and to what extent, the idea of "rape culture" is useful. Does it help us refer to the way that rapists are helped by those who make excuses for them, or does it move our focus from where it belongs? I suspect it does both, i.e. it has both good and bad effects. Whether it does more harm than good is an empirical question that I'm not in a position to answer. Marcotte points out some of the good the term does:
The bill addressing sexual assault in the military that passed in December demonstrates the impact that "rape culture" as a concept has had. Most of the provisions—disallowing commanders to overturn rape convictions, making it a crime to retaliate against accusers, and giving civilian defense officials more power in prosecuting rape—stem from a new understanding about the way that a rapist's friends and colleagues will often give him cover and protection and blame the victim for her disruptive accusations.
There is a difference between the kind of provisions listed here and the much broader cultural features that critics of rape culture often condemn. For instance, Marshall University's Women's Center lists both "sexually explicit jokes" and "refusing to take rape accusations seriously" as examples of rape culture. The former, it seems to me, are not necessarily bad at all, while the latter is extremely bad. Mixing both types of behavior together seems both likely to be unproductive (although I can only speculate about this) and confused (because it shows no recognition that there are different degrees of badness here). The same website says that "rape functions as a powerful means by which the whole female population is held in a subordinate position to the whole male population." I don't deny that there is some truth to this. But the primary victims of rape are rape victims, not all women. And it seems especially unfortunate that a kind of rape metaphor (holding in a subordinate position) is seemingly used to explain the badness of rape. The primary evil of rape is the evil done to its victims, not the consequent psychological and social effects on women in general. (Which is not to deny that those effects exist and should be taken seriously.) If we did not already understand the evil of literally holding someone in a subordinate position then we would not understand the metaphor apparently presented to explain the effects of rape on society at large. There is an implicit recognition here, in other words, that what is primarily bad about rape is not these effects. They might exist, and they might be a real problem, but we do not need to be warned about them nearly as much as we need the kind of reminder of the horror of rape that the Rolling Stone article provides.      

I've gone off track. My main point was meant to be simply the fact that our culture appears to be more rotten and dangerous than most of us realized. My secondary point, though, is that the way to fix this is surely to attack the most rotten points, not to retreat into language-reform and theory. By all means let's fight not only against rape but against all sexual assault, all sexual harassment, and all sexist behavior. But let's not pretend to know that cultural factors are the most salient cause of rape, or that these factors can be altered by conscious actions, or that we know how to carry out cultural surgery or social engineering successfully. A misguided consequentialism, I suspect, lies at the root of the insistence that tasteless jokes must cause violent crime in order to be rejected. And then certainty that such jokes are bad leads to unproved claims that they cause rape. We cannot prove causal claims like this. What we can do is take the kind of action that is starting to happen now because of the Rolling Stone article. And when we read essays like that we do not want to tell or laugh at sexist jokes.

Thursday, November 20, 2014

Vote for A Bag of Raisins

Here.

(If you want to, of course, and honestly think it's the best candidate among the other philosophy blog posts from the past year selected by 3 Quarks Daily.)

Wednesday, November 19, 2014

Hard questions

It's been a week since my last post, so I feel as though I ought to write something. I also feel that some time in my life I ought to have a go at answering some really big or hard or interesting problem. So here's a small first step in that direction. I'm not leaping into lion-taming directly but more moving towards it via a move from accountancy to banking. Still, here goes.

Four of the most amazing things are that there is something rather than nothing, that some of what there is is alive, that some of what is alive is conscious, and that some of what is conscious is also rational, i.e. capable of making sense. The first and third of these facts correspond with well-known metaphysical puzzles or research projects. The second (that some things are alive) does not appear to be regarded as much of a mystery, at least in comparison with the other questions, and is generally treated as a scientific question. Michael Thompson has shown that the related question of what life is, at least, is philosophically interesting. And the fourth amazing fact has to do with meaning or language. To answer why there is meaning we would seem to have to figure out what meaning is, and that gets us into the philosophy of language. So the amazing facts closely relate to a set of questions, and these questions are fundamental in ontology, philosophy of biology (actually I know nothing about the philosophy of biology, but it seems as though 'What is life?' ought to be the fundamental question there), philosophy of mind, and philosophy of language.

But the questions also seem to be closely related. Wittgenstein links the first and last of them when he says that:  "I am tempted to say that the right expression in language for the miracle of the existence of the world, though it is not any proposition in language, is the existence of language itself." With some reluctance and/or self-mockery Michael Thompson writes that:
a life form is like a language that physical matter can speak. It is in the light of judgments about the life form that I assign meaning and significance and point and position to the parts and operations of individual organisms that present themselves to me.  
This links the second fact with the fourth, if only by analogy. And the third (about consciousness) is surely related both to questions, or matters, of life and questions or matters of meaning or sense.

So, first point: the most amazing facts about the world are not just facts but important philosophical mysteries, the mystery being in each case why this fact is the case. And second point: the mysteries appear to be interrelated in some way (albeit I have not come close to proving that the apparent inter-relatedness is real or at all important). My third point is that the questions seem to be similar in nature. "Why is there something rather than nothing?" is, or appears to be, unanswerable. So does "What is consciousness?" So does "What is life?" if I'm remembering this correctly. And I don't think that "What is meaning?" has much of an answer either.

Are these questions all somehow the same question? Or are they not the very same thing but all equally nonsensical? Or are the similarities I am seeing all merely superficial?

I have no intention of working on any of these questions any time soon, or ever really. But having written this out I may as well post it. (My desire not to post rubbish is in danger of killing the blog completely, so I'm going to try to resist it. And at the very least I have linked to work by Thompson that is not rubbish at all.)

Tuesday, November 11, 2014

Lest we forget

It has seemed to me for a long time now that criticizing World War I has become too easy. People, especially in Britain, like to point to this war as an illustration of the pointlessness and badness of war. Of course it is that, but it's such a comfortable example that it encourages not thinking rather than thinking. The tendency to act thoughtful and sad, to do the things you are expected to do during a minute's silence, without being thoughtful or sad also produces empty words. I had a student a few years ago end an email with the words, "Lest we forget!" He seemed to think that these words meant "Let us not forget" rather than "So that we don't forget." The words need to be attached to a memorial to make sense, and using them in ways that don't make sense shows thoughtlessness, the very opposite of the thoughtful remembrance supposedly intended. Of course students will always make mistakes, but now here's John Quiggin making what appears to be exactly the same mistake. Of course people who are not students will always make mistakes too, but I take this as a sign that the rot has really set in.       

Wednesday, October 22, 2014

Teaching in groups

[UPDATE: thanks to dmf this post is now being discussed over at the Daily Nous. If you want to read some very interesting responses to the use of group work in philosophy classes head over there.]

I often wonder what the point of classes is. Not just rhetorically, but actually, the idea being that if we know why we have classes then this might help us do a better job with them. (I also sometimes wonder in the rhetorical sense, because when I was an undergraduate I was actively discouraged from attending lectures and yet many classes are so large that lecturing is more or less inevitable.)

The point, it seems to me, is for students to interact with an expert on the material they are studying. They get to ask questions about assigned readings that they did not understand. They get some kind of (possibly very short) lecture on this reading so that if they only think they understood it they will be corrected. They get to ask questions about the relevant issues, and probably some kind of lecture on these issues. And hopefully some kind of discussion of the issues will either happen of its own accord in the process of all this or, more likely, will be made to happen by the teacher. The point of this discussion being to get students to think more, more carefully, and in a more informed way, than they otherwise would, about either issues that matter or issues that it is somehow useful to think about. (For instance, an issue might not matter in itself but debates about it might be historically or politically important, or it might be an issue debating which is thought to develop certain intellectual skills. Nothing turns on whether the weights in the gym are up or down, but moving them up and then down again can be very beneficial. If there is an intellectual equivalent (a very big if, of course) then academic work, including discussion, might well be it, or at least an ingredient of it.) Ideally this will all happen in a way that feels natural to the students, so that it connects as seamlessly as possible with the rest of their lives. Then discussing ideas, asking questions, reading, and otherwise exploring the world intellectually can become greater parts of their lives.

I think its apparent unnaturalness is why group-work feels so wrong to me. As far as I can tell, though, it's becoming the norm. See comments here and here, for instance, and the reference to "structured activity" here. (And while you're at it, see this comment for some of the problems with group-work.) I'm also going by what I've seen other teachers do--increasingly it seems to involve group-work and student presentations. So, why do I think this is so bad?

The first thing to say is there is a real problem of ambiguity and possible misunderstanding here. Not all lectures, or things that people call lectures, are the same, and not all group-work or structured in-class activities are the same either. The second thing to say is that I'm not defending lectures. I think they are largely a waste of time. When I was an undergraduate we were told not to go to lectures on the grounds that you can learn more, and more efficiently, by reading. Lectures were presented as a remedial option. I think they can be useful in this way, and when my students just seem lost I do resort to lecturing. But I see it as a sign that some failure has occurred, not as a go-to option. Enough about me though. On to complaining about other people.

Here are problems with group-work that I have observed or heard about multiple times from students:
  • the members of the group (unless the group is the whole class) do not include an expert on either the topic for discussion or the assigned reading on it, so mistakes can go uncorrected and misunderstanding can be increased (if plausibly, confidently, or charismatically defended) 
  • there can be a tendency for everyone in a group to want to get along and agree, so that diversity of opinion (which is sometimes healthy and at least indicative of independent thought) can be replaced by a kind of groupthink, in which the better (or better-supported) ideas by no means always win out
  • neither every student nor even every group engages in the exercise seriously or at all (policing can help here, of course, but is not likely to be 100% effective, and brings its own problems simply by making the teacher take on the role of police officer)
  • groups can be dominated by loudmouths (although they might also be more comfortable environments for some students to speak in)
  • the whole thing can feel like a waste of time
The first of these problems is probably less serious at more selective places. If everyone in the group has a decent grasp of the issues, ideas, facts, etc. then the wisdom of the crowd might drive out individual kinks of ignorance and misunderstanding. But if enough students have not done the reading, or not done it carefully, or done it but without sufficient comprehension, then trouble lies ahead.

The problem of the whole thing feeling like a waste of time could be addressed by explaining why it isn't, but this would require being able to do that. It might be enough to say, "Trust me, the discussion will be much better afterwards." But why should students trust the person who says this? If they are an expert on philosophy, what do they know about educational psychology? And, in fact, what proof is there that discussion is valuable, let alone group-work intended to improve discussion? I think discussion is part of the examined life, but there's no evidence to support that claim. There might be evidence that it helps with remembering facts, but if it does, so what? Memorizing facts is not what the liberal arts claim to be about. It certainly isn't what philosophy is about, anyway.

The biggest problem, though, has to do with the suggestion made here that such activities feel forced and unnatural. They are, after all, forced and unnatural. They involve the teacher's going from being a resident expert there to help students in his/her area of expertise to being a classroom manager, manipulating students for their own good. Class is no longer (if it ever was) a place where a conversation takes place between people who (at least might) care about ideas and books. It is now a place where learning is facilitated. Of course the change is not from black to white, but students seem a bit more patronized in the new way of doing things, and the ideas (literature, arguments, whatever) being taught seem a bit more remote from life, a bit less like things that anyone might actually care about when off duty. It seems a shame to me.

Having said all that, I am a strong believer in doing what works, and I think that if we're qualified to judge work in our areas, as we (professional teachers) surely are, then we can also judge when a discussion is going well or not, and whether it is going better or worse than past discussions. So if a little bit of group-work really does improve discussion then I'm all for it. But there is a downside that should not be completely ignored. And I don't think that group-work should be done just because it's the latest thing or because it helps fill up the time we are required to spend in class (as I suspect is sometimes the case).

No doubt a thousand grumpy old men have said much the same thing before. What I hope might be new is the ethical angle. Patronizing and manipulating people should be avoided as much as possible. And there is a great evil in the world that might be called 'management' (or 'bureaucracy' or 'assessment' or whatever you like to call it), replacing freedom, individuality, and spontaneity with various systems of control. There is, it seems to me, a real danger that classroom management might be part of this problem.        

Friday, October 17, 2014

The concept of prayer

In case you don't always read everything at Jon Cogburn's blog I thought I'd draw your attention to the comments on this post. Thomas Carroll makes good points, in response to some of which Jon mentions Carroll's new book, which looks like essential reading. In the introduction Carroll writes:
The approach to reading Wittgenstein on religion advanced in this book is a variation on the ethical-therapeutic interpretations developed by Stanley Cavell, James Conant, Cora Diamond and Stephen Mulhall. 
Sounds good to me.

Thursday, October 16, 2014

Philosophers' Carnival #168

The new Philosophers' Carnival is here, featuring A Bag of Raisins, Jon Cogburn's Blog, and this blog, among others. Thanks to Tristan Haze for the link.

On my post ("The Truth in Relativism") Tristan writes:
It's not clear to me from the post what the scope of the relativism is that he has in mind (is he talking about the subject matter of philosophy? or all subject matter?), and I don't find myself resonating with much of it.
I've added some links to the post that might help clarify what I'm talking about, but I'll try to say a little about it here too. I had in mind a very general kind of relativism, what Russell calls "the view which substitutes the consensus of opinion for an objective standard." This view might be held with regard to ethics, say, or to anything else. It seems to me that it can seem that we really have nothing to go on but the consensus of opinion. Don't we decide both matters of empirical fact and ethical questions in this way? (I'm not saying that we do, or don't. I'm saying that it can seem this way.) And maybe everything else too. (Some people seem to talk as if they think this way, at least.) Hence the consensus of opinion is the measure of all things.

But I think that a better measure of opinion than taking a poll is looking at what people actually do. And this includes the way they use words. So step one towards an improved version of relativism is to judge matters of fact, ethics, or whatever it might be by how people ordinarily talk about such things. What is called a fact, what is called right, and so on. (And the "and so on" ought to cover a wide range, including what people actually do as well as what they say they think they ought to do.)

When we take this step we should see that 'right' does not mean (is not used synonymously with) 'considered by the majority to be right' and that 'fact' does not mean 'generally regarded as a fact'. If the consensus of opinion is our guide then we must speak with the vulgar, and the vulgar speak like realists. But they don't mean the philosophically objectionable things that realists mean. So we must speak and think with the vulgar in the sense of not reading philosophical mistakes like platonism into ordinary language, despite the learned temptation (compulsion, almost irresistible tendency) to do so. We must understand, that is, that language does not have--in itself--the metaphysical implications that we think it does (and that, therefore, it really does have, though only for us, not in itself). So step two is rejecting anything that it would make sense to call relativism and going back to ordinary ways of talking, but rinsed free of the problematic philosophical entanglements that we had found there.

The more I write about this the more convinced I am that it's right but also the more I think I'm just repeating Wittgenstein. (By which I don't mean everything Wittgenstein ever wrote.) So I don't mean to take credit for ideas that aren't mine. But I don't mean to saddle Wittgenstein with my mistakes either. If this sounds like a (crude, blog-post) version of Wittgenstein then good. Perhaps I still haven't explained what I mean well enough for anybody to be able to tell though.

Sunday, October 12, 2014

Philosophical training (and rankings)

In all the talk of rankings there are often references to good philosophers and the good training that they give to their students. This seems silly.

I don't mean to reject completely the idea that some people are better at philosophy than others. I am better than any of my students, if only because they are just starting out. I have had enough philosophical education and teaching experience to be qualified to judge what grades their work deserves, for instance. (Even though no two philosophy professors are likely to give exactly the same grades to every essay in any given stack of papers.) And there might be some geniuses so great that they are clearly the best philosophers of their age. This does not seem to be true right now, but it might have been true at some times. Still, the idea that we can compare ethicists with logicians, say, and objectively find a member of the former group to be better or worse than some particular member of the latter group in every case seems absurd. Even within the set of ethicists, who you regard as better is likely to depend on who you think is closer to the truth. It is possible to respect the knowledge and skill of someone with whom one disagrees, but it is easier when you agree. Utilitarians are likely to think more highly of Peter Singer, for instance, than non-utilitarians do. And presumably no one would think that we can sensibly rank philosophical positions. Any ranking of philosophers who have PhDs but are not as great as Kant or Plato is not going to be purely a matter of personal taste or conviction, but these factors will muddy the waters so much that the ranking would have little value.

And then there is the question of training. The best athletes and musicians do not always make the best teachers. The same is likely to be true of philosophers. And graduate school in philosophy is not really a matter of going through a period of training anyway. Most of the time you are on your own reading (or not), not being coached. You might well be put right on this point or that in a seminar or by a comment on a paper, but there isn't a lot of this, as I recall. You learn various arguments and positions, and you sharpen your ability to defend your own positions and attack others. But there isn't so much intense sharpening that a dull mind will become an expert in just a few years. A good adviser can help you a lot, but good philosophers need not be very attentive advisers. Mine was, but you couldn't tell that by reading her publications or looking at her CV.

Ranking philosophers seems a bit like ranking football players. You can't really compare a striker with a defender, and even if Ronaldo stands out from almost everyone else, the rest of the pack (as long as the pack is suitably defined) is much of a muchness. If players were ranked, the 13th ranked player would be unlikely to be significantly better than the 23rd player. Numbers like these are meaningless. To then think that if you study with the 13th best you will improve more than if you studied with the 23rd is to compound the mistake. Philosophy professors provide reading lists and a kind of model of how to do philosophy, but the demonstration might not work particularly well in every case.

Which is all to say that I think there is something very problematic about ranking graduate programs in philosophy. This has been said before, and perhaps is obvious. But it's funny how seriously rankings and pedigree are taken. If you graduated from a good program you must be good because of the great training you will have received. It's easy to think this way. I think it's very hard to justify it though.

I do think it's good that there is information about graduate programs available online. When I was looking for a place to do my PhD in the 1980s I asked an American friend, who named what he thought were the top ten universities in the USA (without special regard to their philosophy programs), and D. Z. Phillips, who told me about some people in the States who had recently done work that he admired (not all of whom taught at PhD-granting institutions). I decided not to apply to Illinois because I wasn't sure that Peter Winch would stay there. So I applied to the University of Virginia (because Cora Diamond was there) and Rutgers (because Rupert Read went there, and he surely knew what he was doing). It worked out for me, but I was pretty ignorant. I never even considered Pittsburgh, for instance, which would have been a good place given my interests. (Although I might not have got in, of course.) So I'm glad it's easier now for people to find out who is where, who does what, and, to some extent, which places are considered better than others. But quality, being so subjective, matters mostly because you don't want to go somewhere that guarantees you will never get a job. I know there are hardly any jobs to get any more, but that's no reason to shoot your prospects in the foot unwittingly. Which leads me to think that information about which places are especially good for certain types of interest and a placement ranking is all that's needed. If there must be rankings by overall quality or reputation, let there be as many as possible so that both what consensus there is and what wide disagreement there is are apparent.  

I'm tempted not to bother posting this because it all seems obvious or at least familiar. What is interesting to me is the combination of just how obvious at least some of it is and just how often it is (or at least appears to be) ignored in practice.    

UPDATE: This (a comment by Helen De Cruz) seems relevant to the above:
Unsurprisingly, the philosophers' beliefs (theism, atheism, agnosticism) predicted to a significant extent how strong they thought these arguments were. It's no surprise that philosophers who were theists thought the arguments for theism were strong, and that the arguments against theism were weak, and that the opposite pattern held for atheists. Correlations between religious belief and perceived strength of argument were quite strong, e.g., an r score of -.483 for the cosmological argument. If argument evaluation is objective, how can we explain these strong correlations?
Perhaps there is some flaw in her work, and perhaps a similar pattern would not be found among ethicists or philosophers of language, say. That is, maybe philosophy of religion is a special case. But I suspect it isn't. And I suspect that her findings are significant. There might be widespread agreement about who the top one or two Kant scholars or people working in x-phi are, but a) objectively evaluating Kant or x-phi seems to be a hopeless task, and b) below the very top, differences in ability are likely to be so small that they are either invisible or easily obscured by subjective noise (pedigree, personal acquaintance, etc.).
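De Cruz's figure of -.483 is a Pearson correlation coefficient. For readers who don't work with these, here is a minimal sketch of how such an r value is computed; the belief and argument-rating numbers below are invented for illustration only, not her data.

```python
# Pearson's r: covariance of two variables divided by the product
# of their standard deviations. Ranges from -1 (perfect inverse
# association) to +1 (perfect direct association).

def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical data: strength of theistic belief (0-10) paired with
# each philosopher's rating of an argument against theism (0-10).
belief = [9, 8, 7, 5, 4, 2, 1]
argument = [3, 2, 5, 6, 5, 8, 9]

r = pearson_r(belief, argument)
print(round(r, 3))  # a strongly negative r for this made-up sample
```

A value like -.483 means the inverse association is moderate rather than perfect: belief predicts a good deal of the variation in how strong the argument is judged to be, but far from all of it.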

If I think about Wittgenstein scholars I can easily think of several who are neither definitely in the top two nor merely part of an indistinguishable mass, so perhaps I'm exaggerating. But something like the following categories seem to exist: the one or two people who I imagine are the most sought after for anthologies, conferences, etc.; ten to twenty big names whom it would make no sense to try to rank against each other; a mass of other people who are more or less indistinguishable from one another in terms of objective merit; other people who work on Wittgenstein but are basically unknown, perhaps because they have not published anything yet. The second category (the top ten-to-twenty) might turn out to be larger than I think if I actually started naming names. Anyway, I think that just about every Wittgenstein scholar who teaches at a PhD-granting institution probably belongs in this category. And the best scholars are not necessarily the best teachers or advisers. So anyone wanting to study Wittgenstein (and the point goes for anything else you might want to study) would be better off looking at how Wittgenstein-heavy a department is than at how highly ranked its one Wittgenstein scholar is.              

Saturday, October 11, 2014

Sean Wilson's new book project

Sean writes:
I am seeking feedback on the enclosed proposal. I wonder if people think it looks like a viable project? Would the thesis of such a book interest you? Basically, the book is a bit personal: it's based upon an intellectual transformation that I went through and how I came to see the fields of political science, law, and philosophy so differently. The premise is that Wittgenstein did this to me. But the important part is not that -- it is: (a) what this "new thinking" is; and (b) why it is important for other scholars to think this way. The enthymeme here is that the fields of law, political science and philosophy need more Wittgensteinians.
The proposal is here.

Saturday, October 4, 2014

The truth in relativism?

[This post is a follow-up to this comment and subsequent comments here and here. Knowing this won't make what follows crystal clear, but it might help.]

Bertrand Russell writes (about Plato's political philosophy):
Two general questions arise in confronting Plato with modern ideas. The first is: Is there such a thing as "wisdom"? The second is: Granted that there is such a thing, can any constitution be devised that will give it political power?
"Wisdom," in the sense supposed, would not be any kind of specialized skill, such as is possessed by the shoemaker or the physician or the military tactician. It must be something more generalized than this, since its possession is supposed to make a man capable of governing wisely. I think Plato would have said that it consists in knowledge of the good, and would have supplemented this definition with the Socratic doctrine that no man sins wittingly, from which it follows that whoever knows what is good does what is right. To us, such a view seems remote from reality.
That seems right. As does this:
It should be observed, further, that the view which substitutes the consensus of opinion for an objective standard has certain consequences that few would accept. What are we to say of scientific innovators like Galileo, who advocate an opinion with which few agree, but finally win the support of almost everybody? They do so by means of arguments, not by emotional appeals or state propaganda or the use of force. This implies a criterion other than the general opinion. In ethical matters, there is something analogous in the case of the great religious teachers. Christ taught that it is not wrong to pluck ears of corn on the Sabbath, but that it is wrong to hate your enemies. Such ethical innovations obviously imply some standard other than majority opinion, but the standard, whatever it is, is not objective fact, as in a scientific question. This problem is a difficult one, and I do not profess to be able to solve it.
The first of these passages raises a question about what a philosopher might be. Surely not a lover of wisdom if wisdom does not exist. The first sentence of the second passage points out that the consensus of opinion is against the idea that the consensus of opinion is the standard of truth/rightness (i.e. the idea that I think of as relativism). In other words, as is well known, simple relativism undermines itself.

Rejecting platonism in favor of something like relativism leads to the idea that there is a standard other than majority opinion, but not an objective fact of the kind found in science. I think this is right. And it suggests also that philosophy is, or at least perhaps might be, the teasing out of this kind of implication and the clarification of its not being an instance of various kinds of mistake (such as platonism and relativism) despite appearances to the contrary. This "something like relativism" rejects views that "seem to us remote from reality" and that "few would accept." Not in a superficial way though. It does not, for instance, simply take platonism and then substitute the consensus of opinion for an objective standard. That would lead to a conservatism that few would accept. Instead it investigates our language from the inside, looking not so much at opinions as they might be reported to outside observers but at what we say and would say. And we can know what we would say because the 'we' in question includes us. Of course sometimes we might be divided, and then one can only speak for those who think like oneself, but this need not be a big problem in many cases.

This is what I mean when I say that relativism properly thought through leads to ordinary language philosophy. I can't say that I have properly thought this through myself, though, so I could be wrong. Or it might have been said much better long ago. Or both.

Sunday, September 28, 2014

Two star

It's not good to get only two stars out of five, but this isn't really such a bad review after all:
The amount of "Used" stickers on this book was a little much. Otherwise it is an amazing read with the author trying his best not to show his own opinions too often. It all honesty I prefer reading this book to listening to my professors lectures any day. This book also had a few annotations, but were in pencil so nothing serious! In conclusion, amazing book, was just sent quite an aged copy.

Good reason

This post was originally going to be called "Kids these days," so be warned that what follows is likely to be cranky nonsense. I'm writing in response to a couple of things. One is a student in my poverty course (we are discussing distributive justice at the moment) who said, as if to save us all a lot of wasted effort, that the thing about this stuff is that it's all normative. He seemed to mean that because it is normative there is no point discussing or thinking about it. De gustibus non est disputandum, and of course the normative is all a matter of gustibus. I say 'seemed' because he was unable to say what 'normative' meant. It was almost as if he had been taught that 'normative' means impossible (or at best pointless) to think or talk about. A colleague at another school who also teaches a course on poverty tells me that this is the biggest obstacle he faces. Students just don't accept that, or see how, one can reason about questions of justice, right and wrong, etc.

The other thing that has set me off is a visit to the theatre on Monday night. The performance was not the greatest I have ever seen, and some people in the audience got restless. One person near me laughed at all the most highly dramatic points, which was understandable but distracting. Perhaps she couldn't help it, or didn't realize that the play was not a comedy. The two people behind me talked throughout the second half of the play, seeming to think that it was OK as long as they whispered and so did not drown out the voices of the actors. This too was distracting, and the last thing you need when trying harder than usual to suspend disbelief. It was annoying, but it also struck me as stupid. Not just thoughtless, but betraying a kind of blindness to the reality of other people. No doubt they had not thought through what it would be like to try to enjoy a performance while people around you were whispering constantly. But also, and one reason for this thoughtlessness, they appeared not to have much sense of other people as subjects, as beings both sentient and mattering.

Sentience and mattering seem to me to go together. Not because the sentient can feel pain. But, roughly speaking, because sentience, consciousness, is a miracle. Failure to appreciate this is a kind of mental dullness. I ought to say more about this, though, which is not easy to do. One aspect of what I mean is related to the fact that anyone who appreciates a great work of art will care about its treatment. This, I take it, is analytic. And it's not that people are great works of art, but we are like great works of art. What a piece of work is a man! and all that. The moral importance of human beings is something like self-evident.

Another part of my idea is that we are people. Whatever other people are is what we are (so far as we are people). Tat tvam asi. That you are in that seat and I am in mine, or any of the other individuating differences between us, is obviously irrelevant at some, important, level. Moral equality is self-evident too, in other words.

So bad behavior is a kind of stupid behavior. And the normative is something we can reason about, however difficult it might be to do so. We just need the right sense of reason, one that has more to do with reasonableness than with means-end reasoning. It's a sense of reason that includes the ability to empathize and to see (but not exaggerate) the value of civility. (This point seems relevant to, but is not intended to be about, a bunch of debates going on elsewhere on philosophy blogs.)      

Thursday, September 25, 2014

Declining decline?

While the philosophy blogosphere collapses, or at least undergoes some kind of spasm, I wonder whether this is happening because of a general sense that we are in the last days of the philosophy profession (or what Brian Leiter called "the so-called philosophy 'profession' (it's mostly a circus)"--he seems to have removed the post, but if you search for "philosophy profession circus" you can find the fragment I have quoted here).

With that as background I was just sent an article designed to help college instructors whose students are not doing the assigned reading. It contains fourteen handy tips, the first three of which are as follows:
Tip 1: Not every course is served by requiring a textbook
Consider not having a required textbook...     
Of course textbooks are not always the best books or readings to assign. But this tip does not mention assigned readings other than textbooks. Recommended readings are suggested instead, at least for some courses. Guess how many students read merely recommended material?

Tip 2: “Less is more” applies to course reading
A triaged reading list should contain fewer, carefully chosen selections, thereby reducing student perception of a Herculean workload...
It's all about perception after all.
Tip 3: Aim reading material at “marginally-skilled” students
Assess reading material to determine the level of reading skill students need in order to read the text in a manner and for the ends that the instructor has intended. A text included in the course readings primarily for entertainment purposes, for example, will require a less-strong set of student reading skills than will a text included for content purposes.
So entertainment before content.

I'm not saying it's the end of the world, but if anyone ever wonders whether or how dumbing down occurs then this provides a pretty good example.

Tuesday, September 16, 2014

Metacognition

Colleen Flaherty at Inside Higher Ed writes:
Metacognition is the point at which students begin to think not just about facts and ideas, but about how they think about those facts and ideas.
What if they think about facts and ideas badly, and become aware of this? Does that count as metacognition? I suspect not. I don't think Flaherty (and other people who talk this way) really means what she says. She goes on:
Metacognition has always underpinned a liberal arts education, but just how to teach it has proved elusive. Hence the cottage industry around critical thinking – even in an era when employers and politicians are calling for more skills-based training, competency and “outcomes.”
Thinking about thinking, as metacognition is defined elsewhere in the article, has surely not underpinned a traditional liberal arts education. I think that what she means is something like what Dewey calls reflection, and that is basically being rational, or thinking for oneself, rather than swallowing beliefs uncritically. It means, in a nutshell, believing what there is good reason to believe, and knowing what the reasons are (and, perhaps, why they are good). It's the kind of thing that you learn by writing essays rather than taking short answer tests, by being grilled in class rather than lectured at, and by being held to (more or less) objective standards rather than being encouraged to think that every opinion is equally valid. In other words, if critical thinking skills are weaker than they used to be (and they might not be), then I suspect this is because of large class sizes (and the labor-saving pedagogy they encourage) and perhaps also some sloppy relativism floating around in classes where students do have to write and argue.
   
This bit is interesting too:
As a possible solution [to the problem of untenured professors being afraid to make their students think critically], Sheffield and his colleagues from the summer institute are hoping to talk with administrators about a way to offer some “immunity or amnesty” for professors who are taking chances to make their curriculums more rigorous, but who fear negative student reactions. 
Over at the Daily Nous dmf points out that we lack a definition of critical thinking, to which Matt Drabek responds that Sheffield appears at least to be familiar with Peter Facione's work on the subject. The essay by Facione that Flaherty links to seems to be aimed at a non-scholarly audience and deliberately avoids defining 'critical thinking', but ends with this as something like a working definition of the term:


(It really does end like this, I'm not just snipping badly.)

The idea seems to be intellectual virtue. But can such virtue be taught? I would think that "CT skills" can be taught, although not in big lecture classes with multiple-choice tests. And quite possibly not by people worried about being popular with their students. Nurturing the relevant dispositions is another matter. That seems like something that might have to be done outside as well as inside the classroom. It's a matter of socialization. And a kind of socialization that goes against the grain of much contemporary culture. 

Saturday, September 13, 2014

Three days in Mexico City

I lost five pounds over ten days in Mexico, and I didn't get ill. I consumed less than I do at home, but I think most of the weight loss came from walking. Certainly I ached more than normal. So the following itinerary might not be for everyone, but it can be done. I was in the state of Veracruz for a week before I went to Mexico City, but that's somewhat off the tourist trail (although if you have a way to get around it is a fantastic place to visit) so I'll leave that part of the trip out.

Sunday: My hotel was conveniently close to the Paseo de la Reforma, a two-mile boulevard dotted with statues and very friendly to cyclists, so I walked there and then up to the cathedral and other buildings on the Zocalo (central square). It's non-stop mass on Sundays but you can go inside and look around without joining in. Outside there are people selling things and native people dancing for tourists. Then on to the Templo Mayor, and then the Diego Rivera murals at the National Palace. Chocolate and churros for lunch before heading all the way down the Paseo de la Reforma to visit the anthropological museum in Chapultepec Park. Drink 1.5 litres of hibiscus-flavored water (not as good as it sounds, but how could it be?) on the way back to the hotel. Dinner at the surprisingly not bad chain restaurant Vips.


Monday: Almost all museums are closed in Mexico City on Mondays, so it's a good day to visit Teotihuacan. I decided to book a tour there, though, and it didn't go until Tuesday, so I headed by metro to the Coyoacan area, where my guidebook describes a walk, which I did. Walked past the closed Frida Kahlo and Leon Trotsky houses, and should have had lunch in this area but instead kept walking until I reached a metro station. Headed up to the Plaza de las Tres Culturas, which involved a bit of a walk through a not very prosperous neighborhood but no one mugged me and the threatened rain stayed away. The Aztec ruins here turned out to be open (and free), which was a nice surprise. Very similar to the Templo Mayor, but without the museum. The church was closed, but for all I know it always is. Then on by metro to the Basilica of Our Lady of Guadalupe, where there was, oddly, an unsuccessful snake charmer (outside the metro, not in the church), and another mass going on. Moving walkways behind and below the priest take you past the miraculous image of the Virgin Mary. Back to Vips for lunch at dinner time, then dinner almost immediately after at a place called Hellfish.


Tuesday: Bus trip to Teotihuacan (should have gone by public transport and walked to the murals at Tepantitla). Arrived not long after the place opened, but even later in the day it was not very crowded. Spent the morning looking around and climbing pyramids, then off to taste tequila (followed by too much time in a gift shop) and eat lunch. Back in the city I visit the murals in the Palacio de Bellas Artes and have dinner at the old world-seeming Restaurante Danubio, whose walls are covered with framed cartoons and messages from, presumably, happy customers. 



Tuesday, September 9, 2014

Differently abled?

So, this post of Jon Cogburn's about ableism. (See also here.) I wanted to try to sort the wheat from the chaff in both the post and the comments without spending all day doing so, but in an hour I didn't manage to get far at all into the whole thing. Instead of a point-by-point commentary, then, here's more of a summary with comment.

I take his key claim to be this:
all else being equal, it is better to be able. Speaking in ways that presuppose this is not bad, at least not bad merely in virtue of the presupposition 
That it is better to be able than disabled is surely close to being analytic, although of course one might disagree about its always being better to be "able" than "disabled." (That is, so-called abilities might not be so great and so-called disabilities might not be so bad, but actual disabilities can hardly fail to be at least somewhat bad.) Perhaps disability has spiritual advantages over ability (I don't think it does, but someone might make that claim) but in ordinary, worldly terms disabilities are bad. Hence the prefix 'dis'.

Cogburn makes two claims here. Not only that it is better to be able but also that it is OK to speak in ways that presuppose this. The English language comes very close to presupposing this, so Cogburn is more or less defending the language that even anti-ableist-language people speak. There is language and there is language, of course, as in English, on the one hand, and the language of exclusion, say, on the other. But the idea that it is undesirable to be blind, deaf, lame, weak, sick, insane, and so on and so on runs deep in ordinary English. Could this change? Surely it could. Should it? That is the question. Or one of the questions. Another is how bad it is to argue that speaking in such ways, ways that presuppose the badness of blindness, etc., "is not bad."  

A caveat is probably necessary, or even overdue, at this point. Cogburn has been accused, among other things, of defending hate speech, so I should address this thought. He is not defending attacks on disabled people. He is not attacking disabled people. He is defending some linguistic expressions of the idea that all else being equal it is better to be able. These expressions might harm disabled people (by perpetuating harmful prejudices) or fail to benefit them as much as some alternative expressions would (by countering those prejudices), but Cogburn's claim is that no use of language should be condemned simply on the grounds that it involves the presupposition that disabilities are generally bad things to have.

Roughly his claim is that the presupposition is true, and therefore ought to be allowed, and that it is patronizing to disabled people to think that they need protection from words that are only imagined to be hurtful to their feelings. The claim against him (again roughly, and there are multiple claims against him) is that some speech directed at disabled people really is hurtful, even when it's intended to be sympathetic, and that the kind of speech in question creates an environment, a kind of society, that is detrimental to the interests of disabled people whether they feel it or not, and whether it is intended or not. It is this that is the more interesting claim, I think, because Cogburn agrees that disabled people should not be insulted or patronized.

As I see it, two questions arise here. Is it lying to say that it is not better to be able (other things being equal)? And if so, is this a noble lie?

The idea that it is a noble lie would depend on a form of utilitarianism combined with faith in the possibility of linguistic engineering. There is something obviously Orwellian about this idea, but something seemingly naive too. If we start calling blind people visually impaired instead, how much will change? I have no objection at all to making such changes if the people they are intended to help like them. Presumably Cogburn doesn't object to this kind of change either. But if all we do is to change the vocabulary we use without changing the grammar then sooner or later 'visually impaired' will be used exactly the same way that 'blind' is now used, and will have exactly the same meaning, connotations, etc. These superficial linguistic changes, i.e. changes in vocabulary or diction only, will not effect deep grammatical change (by what mechanism would they do so?). Superficial changes can have deep effects, as when disfiguring someone's face leads to people treating them much worse, but it isn't obvious that changing labels will have good effects. Nor will they change anyone's ability to see.

Which brings us to the question whether that matters. Is it bad to be blind, or worse to be blind than to be sighted? In "Practical Inference" Anscombe writes:
Aristotle, we may say, assumes a preference for health and the wholesome, for life, for doing what one should do or needs to do as a certain kind of being. Very arbitrary of him.
Ignoring her irony, is it arbitrary? Is it bad or simply different to have 31 or 33 teeth rather than the standard 32? Are two legs better than one? One comment at New APPS says that it is not disability but suffering that is bad. Is that what we should say?

I don't think I would bother trying to do anything about a disability that did not lead to suffering, that much is true. But some conditions surely lead to suffering more often than not. Some of this will be fairly direct. My father's muscular dystrophy leads to his falling down from time to time. This hurts. And some of the suffering is less direct, involving other people's attitudes and reactions. Falling in public is embarrassing, but would not be if people were better. So should we fix the physical and mental problems (i.e. conditions that lead to suffering) that people have when we can, or should we fix other people, those who regard or treat the sufferers badly? Surely so far as we can, other things being equal, we should do both. And medical advances are more dependable than moral ones.

We might argue at great length about what is and is not a disability, but that some people are more prone to suffering than most because of the condition of their bodies or minds is surely beyond doubt. Pretending otherwise isn't going to help anybody.

I feel as though I've been rushing things towards the end of this post, but I also don't want to keep writing and writing without ever posting the results. One thing that encourages me to keep writing is the desire to be careful to avoid both genuine offensiveness (bad thinking) and causing offense by writing ambiguously or misleadingly (bad writing). But then I think that what I'm saying, or at least what I mean to say, is just so obviously right that no one could possibly disagree. What happened at New APPS shows that this is false. It also shows that this is very much a live (as in 'live explosive') issue, and one that brings questions about consequentialism, relativism, Aristotelian naturalism, and ordinary language together in ways that can be very personal and political. I don't mean that it's Anscombe versus the remaining New APPS people, as if one had to pick one of those two sides, but it would be interesting to see a debate like that.      

Friday, September 5, 2014

Poor fellow

I was surprised by the reaction to Robin Williams' suicide, which saw some people reacting as if to the death of a personal friend and others as if to the death of the very personification of humor. I didn't feel that way at all. Then people started saying he was killed by depression and that we all need to know and understand more about this disease. This bothers me, although I'm not sure I can put my finger on why. Partly it's that if someone is ill then the solution might seem to be technical and so we don't have to worry about treating them like a human being. Or rather, I suppose, we don't have to worry about treating their depression as an emotion. We don't need to cheer the depressed person up or attempt empathy. In fact it would be a mistake to do so, a symptom of blameworthy ignorance. So we just shove them towards a doctor and wait till they are fixed before relating to them as normal again. This strikes me as an uncaring form of 'concern', although I've seen it come from people who clearly do care as well as those who I can't believe really do. It is what we are taught to think.

One thing that pushes us this way is the desire to deny that people who kill themselves are being selfish or cowardly. It wasn't a choice, the pain made him do it, people say. But of course suicide is a choice. I think it's bizarre, at least in cases like Williams', to call it selfish or cowardly. How much unhappiness are people supposed to put up with? How much was he living with? More than he could take, obviously. But it's insulting to deny that he acted of his own free will. Why can't the decision to commit suicide be accepted and respected? I don't mean by people who really knew him--far be it from me to tell them what they can or should accept--but by strangers and long-distance fans.

Because it's so horrible, I suppose. And that's why it's considered selfish. You are supposed to keep the horribleness inside you, quarantining it until it can be disposed of by a doctor, or talk it out therapeutically. Not put it in the world. Don't bleed on the mat, as they used to say unsympathetically at the judo class my brother and sister went to. But that is selfish. If you have to bleed, bleed. People often say that we should not bottle up our emotions, that men should not be afraid to cry, and so on. This, I think, is partly right and partly based on a mistaken idea about the effects of expressing painful emotions, namely that once they are expressed they will be gone. But this is not true. They don't go away once let out of the bottle. It's partly also, though, a kind of hypocrisy. We don't actually want to see people's emotions, not the really bad ones. No doubt we do want to see some emotions, including painful ones, and quite possibly more than are often on display, but there is a reason why we don't wear our hearts on our sleeves.

It's hard to think well about suicide. Here's Chesterton:
Under the lengthening shadow of Ibsen, an argument arose whether it was not a very nice thing to murder one's self. Grave moderns told us that we must not even say "poor fellow," of a man who had blown his brains out, since he was an enviable person, and had only blown them out because of their exceptional excellence. Mr. William Archer even suggested that in the golden age there would be penny-in-the-slot machines, by which a man could kill himself for a penny. In all this I found myself utterly hostile to many who called themselves liberal and humane. Not only is suicide a sin, it is the sin. It is the ultimate and absolute evil, the refusal to take an interest in existence; the refusal to take the oath of loyalty to life.
I think he's right that there is something monstrous about suicide, and there is something really nightmarish about Archer's idea. But what I want to do is to say "poor fellow." Not to praise nor to blame. And not to regard suicides as anything other than fellows, with as much free will as the rest of us. And along with that sympathy to feel some relief that the person's suffering is over.

Monday, September 1, 2014

Understanding human behavior

A question that came up prominently during the seminar in Mexico has also been discussed recently by Jon Cogburn at NewAPPS. The question is about what is involved in, or required for, understanding human behavior.

Jon Cogburn says that:
One could say that given a set of discourse relevant norms held fixed, understanding in general just is the ability to make novel predictions. For Davidson/Dennett, we assume that human systems are largely rational according to belief/desire psychology and then this puts us in a position to make predictions about them. We make different normative assumptions about functional organization of organs, and different ones again about atoms. But once those are in place, understanding is just a matter of being better able to predict.
I'm writing this as a post of my own rather than as a comment at NewAPPS partly because I suspect it will be too long for a comment and partly because I don't know what all of this means and don't want to appear snarky or embarrassingly ignorant. I genuinely (non-snarkily) don't know what it means to hold fixed a set of discourse relevant norms, nor what it means to put normative assumptions in place. But what I take Cogburn to be saying is, in effect, that "understanding in general just is the ability to make novel predictions." 

Winch says that being able to predict what people are going to do does not mean that we really understand them or their activity. He cites Wittgenstein's wood-sellers, who buy and sell wood according to the area covered without regard to the height of each pile. We can describe their activity and perhaps predict their behavior but we don't, according to Winch, really understand it. He surely has a point. Here's a longish quote (from pp. 114-115 of the linked edition, pp. 107-108 of my copy):
       Some of Wittgenstein’s procedures in his philosophical elucidations reinforce this point. He is prone to draw our attention to certain features of our own concepts by comparing them with those of an imaginary society, in which our own familiar ways of thinking are subtly distorted. For instance, he asks us to suppose that such a society sold wood in the following way: They ‘piled the timber in heaps of arbitrary, varying height and then sold it at a price proportionate to the area covered by the piles. And what if they even justified this with the words: “Of course, if you buy more timber, you must pay more”?’ (38: Chapter I, p. 142–151.) The important question for us is: in what circumstances could one say that one had understood this sort of behaviour? As I have indicated, Weber often speaks as if the ultimate test were our ability to formulate statistical laws which would enable us to predict with fair accuracy what people would be likely to do in given circumstances. In line with this is his attempt to define a ‘social role’ in terms of the probability (Chance) of actions of a certain sort being performed in given circumstances. But with Wittgenstein’s example we might well be able to make predictions of great accuracy in this way and still not be able to claim any real understanding of what those people were doing. The difference is precisely analogous to that between being able to formulate statistical laws about the likely occurrences of words in a language and being able to understand what was being said by someone who spoke the language. The latter can never be reduced to the former; a man who understands Chinese is not a man who has a firm grasp of the statistical probabilities for the occurrence of the various words in the Chinese language. 
Indeed, he could have that without knowing that he was dealing with a language at all; and anyway, the knowledge that he was dealing with a language is not itself something that could be formulated statistically. ‘Understanding’, in situations like this, is grasping the point or meaning of what is being done or said. This is a notion far removed from the world of statistics and causal laws: it is closer to the realm of discourse and to the internal relations that link the parts of a realm of discourse.
Winch talks elsewhere (see p. 89, e.g.) about what it takes for understanding to count as genuine understanding: reflective understanding of human activity must presuppose the participants' unreflective understanding. So the concepts that belong to the activity must be understood. Without such understanding all we will generate is (p. 88) "a rather puzzling external account of certain motions which certain people have been perceived to go through."
 
But does Winch get to say what counts as genuine understanding? This was a point we discussed when I was at the University of Veracruz. Several students there seemed to want less than what Winch would accept as true understanding of human behavior. They did not want to empathize. They wanted to identify patterns, and if they were able to do so well enough to be able to make accurate predictions then they would be very satisfied. A "rather puzzling external account of certain motions" is basically what they were hoping to produce, as long as it allowed them to make accurate predictions.

It looks to me as though Winch would resist or even reject such a desire, but can a desire be mistaken? And I don't know how to settle the apparent disagreement between Winch and others about what is and is not real understanding. Can we just say that as long as you see the facts you may say what you like? Or is that too easy?

Saturday, August 23, 2014

Anscombe Forum on Human Dignity

Conference to be held on March 13-14, 2015, at Neumann University, which is located in Aston, PA, in the greater Philadelphia area. 
The forum is an annual event designed to explore the work of G.E.M. Anscombe and topics in her work that are of continuing importance within the Catholic intellectual tradition.  In March 2014 the forum was initiated with a conference focused on the question of Anscombe's contributions to the Catholic intellectual tradition.  The March 2015 Forum will be dedicated to the subject of human dignity.
Featured speakers: Candace Vogler, David B. and Clara E. Stern Professor of Philosophy, University of Chicago; Nicholas Wolterstorff, Noah Porter Emeritus Professor of Philosophical Theology, Yale University; Duncan Richter, Professor of Philosophy, Virginia Military Institute.
We welcome all contributions on the subject of human dignity and are particularly interested in contributions that engage the work of Anscombe or Peter Geach, or that otherwise engage elements of the Catholic intellectual tradition.  For further information contact Dr. John Mizzoni at mizzonij@neumann.edu.  Submissions (full papers only; 20-30 minute reading time) should be emailed no later than December 30, 2014 to mizzonij@neumann.edu.  
More information will become available at www.neumann.edu/anscombeforum
Select papers from the conference will be published by Neumann University Press.  
I'm looking forward to this, but being in such distinguished company is a little intimidating. I'd better say something good.