Friday, March 29, 2013

Leather elbows on a tweed coat, oh is that the best you can do?

Or: girls go to college to get more knowledge. Or: varsity blues. Or something. What is college or university for? Perhaps to provide students with certain knowledge, skills, and character traits. Perhaps to provide employers with suitable employees, society with suitable citizens, and students with suitable abilities to live a happy life. Perhaps to allow young people to have a good time for a few years before settling down. Perhaps we don't really know what it's for, but it's what we've done for hundreds of years (in various forms) and changing that might spell disaster.

Wanting to blog about this but knowing that I should prepare for class, I picked up my notes for today's class in my Poverty & Human Development course only to find myself reading this:
It is widely accepted that more education leads to better jobs and bigger paychecks. But more education also correlates to better, happier, and longer lives for individuals and pays big dividends for all of us in the form of increased civic engagement, greater neighborhood safety, more tolerance, and a more competitive economy. Globalization and technological change have made it extraordinarily difficult for poorly educated Americans to achieve economic self-sufficiency, self-respect, and resilience in the face of adversity. 
I can't escape it! So, what is the point of higher education? To some extent it is a rite of passage, more about football games and keg parties, and about proving that you belong to a certain class and can jump through the required hoops, than it is about learning anything. (Of course this applies primarily to the American college experience. Football played no part in my college experience, although there was rowing, i.e. getting drunk in the afternoon while other people rowed. Anyway, I am talking mostly about the United States, and the rest of the world has a tendency to copy what they do, so it has broader relevance too.) But there has to be an academic curriculum too, so what should it be?

In comments here j. writes:
i took the 'third mode' to be envisioned as a successor ('one and two must combine') meant to organize the split humanities traditions, rather than replacing them, necessarily. and to organize by trying to select and emphasize developments that have already taken place within existing traditions. of course, that is a bit of a power move.

i'm not sure what in the description given would make us think that rigor or depth would necessarily be lacking. are rigor and depth to be found in the humanities now?
And:
don't the disciplines mentioned under the third mode suggest a plurality of contested points, which is basically the same as the humanities now? and there is evidently some focusing of the political point intended (necessarily, since environmental questions seem to have been elevated compared to their non-institutionalized place during the twentieth century?), but i'm not sure how it's any more propagandistic than the presumptions favoring liberal individualism (as in liberal-arts-ism) throughout existing or older configurations of the humanities. and those older configurations did find room for -some- alternatives that questioned the prevailing arrangement and the point of it.
He's responding to things I said about things Scott McLemee said about Toby Miller's argument for “a blend of political economy, textual analysis, ethnography, and environmental studies such that students learn the materiality of how meaning is made, conveyed, and discarded,” a blend that would merge literature, philosophy, and history with media and communications studies. 

My concern about depth and rigor has to do with specialization. If students specialize in philosophy, say, or political economy, then they can learn more about it, get better at it, and go from being able to take introductory courses to being able to take more demanding, higher-level courses. Perhaps that is what Miller has in mind, but I would worry if someone wanted to replace a major in, for instance, history (or any other specialized discipline) with a blend of different subjects within the humanities. I fear that no one would learn much about anything in that case. If media studies, etc. are simply being added to philosophy, etc. then I have no great objection, as long as resources are not diverted from the more worthwhile to the less worthwhile. For instance, by all means let's study mass media, but let's not have a whole degree program in it if there isn't that much to be said (I don't know whether there is or not), or if it means no longer studying Shakespeare. 

As for propaganda, I think some propaganda is OK. Part of teaching physics is encouraging students to care about physics, to see its value and to see it as valuable. And the same goes for philosophy, media studies, and every other subject. Propaganda on behalf of academic disciplines is fine. As is propaganda on behalf of facts, e.g. facts about evolution, climate change, the state of the economy, and so on. Maybe that isn't really propaganda, but I mean that there are subjects on which students are likely to have false beliefs because of propaganda, and I think it is perfectly OK (if not better than that) for academics to make a point of teaching their students the truth about these things. But it should not, I believe, be any professor's mission to make students more left-wing or more right-wing on any particular issue or in general. Perhaps everyone would agree with that, but Miller's website says that he sees the goal of the humanities as being to produce "an aware and concerned citizenry." Actually that is OK with me, but it sounds like something that could easily fall into fairly naked propaganda of the bad kind. If the awareness and concern is all of the right-wing kind then I don't want it. If it's all of the left-wing kind then I think it could be counter-productive. Students might resist it (or not need it), and politicians are even more likely to want to scrap the humanities completely if they are taught as more-or-less openly left-wing propaganda. Taking Miller at his word I don't disagree, but I have concerns about what lies behind (or before) those words.

Now for Philip Cartwright's concerns:
if you claim the ultimate value of humanities is the by-product of rigorous thinking then you are tacitly admitting that the subject matter has no intrinsic value. You might as well study chess problems or old episodes of Are You Being Served?
Yes, and unfortunately this is what some people do. Hence McLemee's reference to Angry Birds studies. Things of that general kind might be very good, but they seem to me to be less likely to be good than, say, Shakespeare studies, and they seem to politicians, parents, and voters like a very bad idea. So I don't (completely) deny their value, but I have doubts about it and I certainly think it is a bad move politically to allow the humanities to be seen to be encouraging this kind of thing.

Philip continues:
Now, it is by no means easy to demonstrate that studying literature (or, indeed, history) has any intrinsic value. Does it make you a "better person" (whatever the hell THAT means)? Does it turn callous oafs into caring, sensitive individuals? No, or certainly not often enough to justify its existence - assuming, of course, that the production of caring, sensitive individuals is itself a good thing (and that's not at all obviously true).
I'm reluctant to agree with this, but I probably do. That is, I don't think that studying literature or history will turn a callous oaf into a caring, sensitive individual. Nor do I think that it makes you into a better person. But I do think that it can make you into a better person, that it does so with some people. Me, for instance. Is that enough to justify its existence? Yes. But 'it' here can't be anything mass and compulsory. You don't make sensitive flowers even more sensitive by forcing things down their throats.

Having abandoned the intrinsic value argument, he goes on to describe two alternatives:
The first will cut little ice in today's market-driven world but it at least has the merit of honesty. It runs as follows: "Studying literature (and/or history) helps promote values that I believe in, and if you don't support those values then you can fuck off". I think we could label this "The Kulturkampf Defense".
The second defence is more pragmatic (ie, consequentialist). It says that a lot of employers actually don't want graduates who've spent three years learning a load of theory concerning their area of business - they'd rather teach that stuff to recruits themselves. In fact, they have to "un-teach" their vocationally-trained recruits, which is annoying and costly. They would rather have people who've proved that they're intelligent and hard-working but who haven't learnt a load of theoretical clap-trap that'll have to be jettisoned before they can become useful employees. Subjects like literature and history fit the bill very nicely - especially as many bright people show a marked inclination to study in those areas. So why not give them what they want and thereby give the employers what they want too? It's win/win. 
I have a lot of sympathy with the bracingly-expressed first point, and think the second is very important. I have nothing to add to it. On the first point, though, I would want to add (though perhaps I can't) that it isn't just a matter of what I believe in. If we have a common culture (and what other kind of culture could there be?) then there must be some sort of canon, some points of cultural significance. These might be studied just because they are good, but they might also be studied as elements of cultural literacy. Knowing them will help you understand and communicate with others, and keep the culture together. Giving up on this idea seems to involve a kind of cultural Thatcherism ("there is no such thing as society. There are individual men and women, and there are families"). Perhaps Shakespeare should not be central in American studies of literature, but someone should be: Emily Dickinson or Emerson or Toni Morrison or Walt Whitman or all of the above (or someone else). There's plenty of good stuff there, and I'm not sure how much one can separate seeing intrinsic value in these writings and valuing American culture. What does it mean to care about or value (the United States of) America but not to care about American history or literature? (And what would it mean to value American literature but not think that any of it was any good?) 

I sound more conservative (and more American) than I am, but anyone who has read the books in question knows that they are not essentially conservative. I do see value in American society supporting (yes, paying for) the study of its own culture and heritage, its history, literature, and philosophy, not as museum pieces but as the organs of a living body. By all means study other things too--European philosophy, East Asian religion, African history, contemporary mass media, etc., etc.--but the idea that the humanities should be discarded as old-fashioned or no longer relevant, an idea that comes from the (politically-minded) left and the (business-minded) right, seems like a form of despair, even suicide, to me. Almost no one says that we should discard them, but any move to scale them back (and Miller's book is called Blow Up the Humanities, after all) worries and depresses me. And if you don't support those values then you can fuck off. (I'm just kidding, of course. Comments welcome, etc. etc.)

  

Wednesday, March 27, 2013

Value and culture

Robert Musil's Törless concludes that "things are just things and will probably always be so. And I shall probably go on for ever seeing them sometimes this way and sometimes that, sometimes with the eyes of reason, and sometimes with those other eyes. . . ." The eyes of reason are those of the engineer, the practical person who sees no great mystery in anything. Those other eyes see "one open chasm of the sky above our heads and one slightly camouflaged chasm of the sky beneath our feet," while the reasonable person feels "as untroubled on the earth as in a room with the door locked." The image calls to mind (but is of course not the same as) Norman Malcolm's quotation of Wittgenstein:
A person caught in a philosophical confusion is like a man in a room who wants to get out but doesn’t know how.  He tries the window but it is too high.  He tries the chimney but it is too narrow.  And if he would only turn around he would see that the door has been open all the time!  (p. 44--I believe this is the only idea of Wittgenstein's that Heidegger ever mentions)
Wittgenstein also distinguishes two ways of thinking about or seeing things in the Lecture on Ethics. One he associates with science, and labels trivial, the other he associates with religion, ethics, and aesthetics, and regards as incapable of fitting into language (any more than a gallon of tea will fit into a tea cup). The natural and the supernatural, the worldly and the otherworldly, are simply not compatible. There is no translating the language of one into the language of the other. At any rate, this seems to be the general idea here.

That's what comes to mind when I read this and this. The first is a reasonable-sounding suggestion that we make the relevance of the humanities to the real world evident in our teaching:
For the past year, I have been a member of several nursing search committees. In their teaching presentations, the candidates almost always stated clearly “Now we’re going to learn to think critically. Let’s begin by defining it.” I am wondering whether doing something similar in humanities classes might not help our students more clearly see the value of the humanities...
I don't mean to reject this kind of thing completely (perhaps I should), but it has some problems. One is that it seems likely to become tedious very quickly. Why not just always insist on defining terms that need to be defined, so that students get into the habit of doing this? From time to time we could point out the practical value of this habit, but almost always stating that you are about to learn to think critically before doing so would surely drive everyone mad. Secondly, what are students going to think when you don't preface your remarks or activities with words like these? That this part of the class is a waste of time? That seems quite likely to me. A friend of mine recently suggested that the point of teaching Victorian literature is to teach students how to construct an argument. That surely is not the whole point of it. The point has to do with the value of (some) Victorian literature, and its importance in and to our culture. If we give up on the idea of intrinsic aesthetic value and the importance of culture, then we give up on the humanities.

My second linked "this" above refers to a piece by Scott McLemee at Inside Higher Ed about a book by Toby Miller called Blow Up the Humanities. It sounds dreadful, but I haven't read it and so should bite my tongue. In McLemee's words:
What we must recognize, his argument goes, is that there are two forms of the humanities now. What the author calls "Humanities One" (with literature, history, and philosophy at their core) is just the privileged and exclusionary knowledge of old and dying elites, with little value, if any, to today’s heterogeneous, globalized, wired, and thrill-a-minute world. By contrast, we have studies of mass media and communications making up “Humanities Two,” which emerged and thrived in the 20th century outside “fancy schools with a privileged research status.”
In the future we must somehow establish a third mode: “a blend of political economy, textual analysis, ethnography, and environmental studies such that students learn the materiality of how meaning is made, conveyed, and discarded.” Enough with the monuments of unaging intellect!
According to his website:
Miller ultimately insists that these two humanities [One and Two above] must merge in order to survive and succeed in producing an aware and concerned citizenry.
So literature, philosophy, and history are irrelevant and must be replaced by the study of "how meaning is made" so that citizens become "aware and concerned." Students are to be made aware without being made aware of history, presumably, and to study meaning without studying the philosophy of language. And the people who fund public (non-elite) education are expected to pay for this? The non-privileged students who study it are expected to be able to get jobs with these skills (or this awareness and concern)? Art for art's sake I can buy. The same goes for raising political consciousness through the study of history and philosophy. But political goals without history or philosophy? The humanities without literature, or with some literature but with ethnography and environmental studies taking the place of some literature? This doesn't sound like the humanities at all any more. It sounds like a doomed attempt to replace the humanities with some hodgepodge of amateur science or pseudo-science. I feel the open chasm beneath our feet.

Monday, March 25, 2013

What is love?

The VMI Ethics Club is meeting tonight to discuss this question. I may or may not present some thoughts, but I might as well develop some before I go. It seems to me that a good way to start thinking about a question like this is to gather the thoughts of poets and philosophers (people who think carefully about things, but (often) in different ways) and to gather up one's own thoughts and experiences.

Poets often treat love as if it were a person. Auden, for instance, ("Are its stories vulgar but funny?", etc.) and Saint Paul ("Love is patient, love is kind"). Another common idea is that you can't say what love is. See, e.g., Al Green ("I can't explain this feeling") and Buddy Holly ("love is strange"). Above all poets and singers go on and on about love all the time. It's a bit like this fact about Wittgenstein:
In the end, in the 'Lecture on Ethics', the existence of language itself is what Wittgenstein wishes to say expresses what he seeks to express by describing the various experiences he puts forward in his examples (wonder at the existence of the world or the miraculous, a feeling of absolute safety, a feeling of guilt).    
It might be nice to be able to say that in the end the existence of songs itself is what expresses what we want to express by describing various experiences of what we call love. And these songs, of course, include hymns, anthems, and football songs as well as love songs happy and sad. But that might not be maximally informative.

Larkin has something to say about the relation between love and selfishness. His poem "Love" ends with this stanza:
Still, vicious or virtuous,
Love suits most of us.
Only the bleeder found
Selfish this wrong way round
Is ever wholly rebuffed,
And he can get stuffed.  
That's a complicated sentence: Only the bleeder found selfish this wrong way round is ever wholly rebuffed. A bleeder is a person (as in Ian Dury's "There ain't half been some clever bastards (lucky bleeders, lucky bleeders)"), but "this wrong way round" sounds self-referential, the words themselves grammatically awry. I suppose it must refer to a "wrong way round" that has been recently mentioned, and that would be: "My life is for me. As well ignore gravity." This kind of 'plain truth' might be irrefutable (or it might not, but I don't want to try to refute it), but it is the attitude of a selfish bleeder who can get stuffed. (Larkin thus seems to reject himself in roughly the way that Schopenhauer rejects solipsism: you would have to be crazy to think that way.)

The philosophers are another matter. There is something depressing about the prospect of reading what philosophers have to say about love. It isn't surprising to find this sort of thing:
To summarize: if x loves y then x wants to benefit and be with y etc., and he has these wants (or at least some of them) because he believes y has some determinate characteristics ψ in virtue of which he thinks it worth while to benefit and be with y. He regards satisfaction of these wants as an end and not as a means towards some other end.
I think this is obviously false, but I also fear that even getting into it will damage me in some way. Quickly, then, I don't so much want to benefit the people and things I love as I want them to be benefited. I wish my family well and want to be with them, for instance, but as long as they all do well then, other things being equal, I don't care whether the goods they receive come from me or someone else or their own efforts. (The ceteris paribus clause is important here. I don't want someone else moving in and taking my place.) And I do not at all think that any member of my family has some determinate characteristic in virtue of which I think it worthwhile to benefit and be with them.

Kant is more interesting:
Love as inclination cannot be commanded; but beneficence from duty when no inclination impels us and even when a natural and unconquerable aversion opposes such beneficence is practical and not pathological love. Such love resides in the will and not in the propensities of feeling, in principles of action and not in tender sympathy; and only practical love can be commanded.
This, it seems to me, is half true and half false. The falseness comes in the implicit distinction between actions and feelings. The will is not as distinct as all that from propensities of feeling. Caring for someone, taking care of them, is not simply a matter of ensuring certain outcomes (clean sheets, clean plate, medicines administered, etc.) nor of performing certain behaviors (smile, say 'hello,' etc.). Better care is taken by those who do these things in a caring way (which does not mean with a certain inner accompaniment). No doubt that can be faked to a certain extent, but this extent is not clearly determinate, it seems to me, and I don't see why really caring, or at least trying to really care, can't be commanded.

Love is a feeling, I don't deny it, but the best way to understand it might be to do so by way of thinking about the behavior that exhibits it rather than trying to get the feeling itself into phenomenological focus. I used to think that I loved the people in my family in different ways but then I realized that I was describing differences between the people rather than differences in my feelings towards them. Love is like a lens that colors our perceptions of what we love, but it is also like a gravitational field that pulls us in the direction of what we love, or pulls our attention that way. We think and talk about what we love, perhaps seeing it in great focus or perhaps seeing it blindly, or with selective blindness. "Every man thinks he has the prettiest wife at home," as Arsene Wenger once said (insightfully, if perhaps also naively). Socrates might have had love in mind when he said that:
the greatest good of man is daily to converse about virtue, and all that concerning which you hear me examining myself and others, and [...] the life which is unexamined is not worth living     
People who converse daily about virtue surely love virtue. The unexamined life is the life of someone who does not love wisdom, who cares little or nothing for it. Or so I sometimes think. To love is to see (think of, treat, feel about) as good. Now if we can just figure out what thinking and goodness are, we'll have cracked it.

Sunday, March 24, 2013

Inequality

This piece by Joseph E. Stiglitz is interesting and touches on some points that Tommi Uschanov has raised before about what people in the United States think about inequality (see also this video, which was all over Facebook recently). The stuff about Singapore put me off a little, since I don't think of Singapore as being the most democratic or free place in the world, but it gets onto Scandinavia towards the end and reads less like a puff piece for one government's policies.

Thursday, March 21, 2013

Wisdom

"Things happen: that's wisdom in its entirety," says Törless.

There's a nice essay on the novel here. It could be interesting to compare Musil's philosophy with Wittgenstein's. A couple of quotes:
By exercising great and manifold skill we manage to produce a dazzling deception by the aid of which we are capable of living alongside the most uncanny things and remaining perfectly calm by it. […] We are capable of living between one open chasm of the sky above our heads and one slightly camouflaged chasm of the sky beneath our feet, feeling ourselves as untroubled on the earth as in a room with the door locked. 
(That's Ulrich from the second volume of The Man Without Qualities.)
Every work of art offers not merely an immediate experience but an experience that can never be completely repeated. […] The person dancing or listening, who yields himself to the moment of the music, the viewer, the person transported, is liberated from everything before and after […] This condition is never of long duration except in pathological form; it is a hypothetical borderline case, which one approaches only to fall back repeatedly into the normal condition, and precisely this distinguishes art from mysticism, that art never entirely loses its connection with the ordinary attitude. It seems, then, like a dependent condition, like a bridge arching away from solid ground as if it possessed a corresponding pier in the realm of the imaginary.
That's Musil himself. Both quotations taken from Achille C. Varzi's essay.

Wednesday, March 20, 2013

We're hiring again (but not in philosophy)

Job ad here:

Assistant or Associate Professor of English

March 14, 2013

Multiple tenure-track positions, beginning August 2013, to teach in an English major that integrates fields in the humanities and the arts - rhetoric and writing, literature, fine arts, and philosophy - through a central focus on rhetorical studies. 

We seek candidates who demonstrate a strong interest in engaging in the opportunities that an interdisciplinary department like ours presents for teaching, scholarly engagement, community outreach, and mentorship of students. Assistant or associate professor rank possible. Preference will be given to candidates with Ph.D. by August 2013, but ABDs will be considered. 

Candidates must both fill out an on-line application at the Virginia jobs site and submit required supporting materials (a letter of application, résumé, official transcript, and three letters of recommendation) either by e-mail to RCSearch@vmi.edu or by mail to Dr. Emily Miller, Head, Department of English and Fine Arts, Virginia Military Institute, Lexington, VA 24450. 

Review of applications will begin immediately and continue until the positions have been filled. Interviews via Skype and/or at CEA. 

In a continuing effort to enrich its academic environment and provide equal educational and employment opportunities, VMI encourages women, minorities, disabled individuals and veterans to apply. 

Virginia Military Institute is a state-supported undergraduate military college of engineering, the sciences, humanities, and the social sciences with a strong emphasis on teaching excellence. 

Military experience is not required for faculty.

Sunday, March 17, 2013

How to succeed

Marcus Arvan has some good advice on how to be a successfully published philosopher. If you're interested in that kind of thing then you've probably seen it already, but here are his main points:

  1. Habits that foster a positive daily attitude and emotional well-being are of paramount importance.
  2. Write first thing in the morning, without any form of self-censorship, setting a firm 3-5 page requirement for yourself, which you assiduously keep to and do not go over.
  3. Give yourself a couple hours a day of "alone time" outside away from the computer if you can.
  4. Send stuff out; don't sit on your work.
  5. When your work gets rejected, send it out again immediately.
  6. Don't work "in secret", but don't seek out so much feedback that you begin to doubt yourself.
  7. Follow every "rule of publishing" as a rule of thumb, but only a rule of thumb.
  8. The best way to have a good idea is to have a lot of ideas.
  9. Only work on weekdays--take weekends off.
#1 sounds much easier said than done, but it's closely related to #2. If you write a bunch of stuff every day you will feel good about your work. But that brings us to #2, which seems to require a couple of hours first thing in the morning. Don't people have classes to teach or kids to take to school? Are you supposed to get up at 4 or 5 in the morning? I just can't see how I, to take a random example, could do this. The same goes for #3. 

#4 is probably a good idea, although I suspect one of the things I might regret the most at the end of my career is sending stuff out too quickly. I often think things are done that I later realize I could have improved significantly if I had just sat on them a bit longer and read them over with fresher eyes. #5 is probably a good idea too, but sometimes surely work is rejected because it isn't that good. It might be worth considering this possibility. Knowing when something is ready, or as good as you will ever be able to make it, is much easier said than done. Something similar could be said about points 6, 7 (which is #2 again in different words), and 8.

I like #9.

All in all I'm not sure how useful this advice is except for graduate students working on their dissertations and early career academics trying to get tenure at less-than-stellar places. If you're at a place that cares a lot about quality then I doubt you can afford to churn work out like this. And unless you are desperately seeking employment and/or tenure (which of course a lot of people are) then you probably owe it to the world not to churn. I also can't see this plan working for anyone with children, unless they live in a very 'traditional' my-wife-takes-care-of-that-side-of-things way. I do think it's good advice, but not for everyone. And it helps to show what it takes to make it these days in the profession, what we have come to. 

Friday, March 15, 2013

What is called thinking?

Two things: first, this list of world thinkers (you can vote on them!) struck me as odd (for instance, are Nussbaum, Sandel, and Zizek really the most important philosophers in the world?), and second, Lars Hertzberg has written that:
There may be many forms of thoughtlessness. One form of many is that in which people are thoughtless about the sense of the words they use (as when they confuse “accidentally” and “inadvertently”); even so, what they are *trying* to say may be perfectly in order.
This is undoubtedly true. But it reminds me that philosophers sometimes, perhaps often, are thoughtless about the sense of the words they use even when they try to be most thoughtful about them. (Over-dramatic italics, sorry.) I think this is what people mean by 'logic-chopping.' Imagine a less careful version of Aristotle dividing goods up into three categories (external, of the body, and of the soul), virtues into two (intellectual and moral), and so on. But, being less careful, getting it wrong and overlooking important points. This isn't simple ignorance of the meanings of words, but it is a kind of ignorance (a kind of ignoring). It seems to be a kind of weakness of will, like someone drunkenly forgetting what they already, usually, know. It's something that I think Wittgenstein tries to discourage (take your time!). Being slower to jump to conclusions, being more aware of what we already know, is not thinking in the usual, active sense preferred by philosophers. It's more like sobriety, not-forgetting, remembering both in the sense of not forgetting and in the sense of putting back together what the logical butcher has hacked, or is about to hack, apart.  

Wednesday, March 13, 2013

The consequentialist 'ought'

Shelly Kagan is visiting Lexington (thanks to Washington and Lee University), so I read his paper "Do I Make a Difference?", went to a talk he gave on why it's bad to be dead, and last night had dinner with him and a group of other philosophers. I had thought from his paper that he judged acts by their expected utility, and I have a long draft of a blog post about what this might mean. Can there be such a thing as the expected utility rather than the utility that this or that person (or these persons) expects? Does judging acts not by the consequences they actually have but by the consequences they can reasonably be expected to have even count as consequentialism? So I asked him about this, and he told me that in fact he thinks the right act is the one that has the best consequences, not the one that has the best expected consequences. (If you have read the paper and don't see how this can be his position, see footnote 8.) This led to a very interesting discussion about what one should do in cases of imperfect knowledge about what the right thing to do is. In his view you should do what will have the best consequences, but since you don't know what this is you have to decide as best you can. But then should you act on that decision? In a sense yes, of course, but if your decision tells you to do something that is not in fact the act that will have the best results, then in a sense no. 

Yesterday I also happened to read Anscombe's essay "Must One Obey One's Conscience?" To deliberately act against one's conscience, she says, is to choose to do what is wrong (as you see it). That can't be right (as Kant might agree). But to act in accordance with a conscience that tells you to do the wrong thing (think of Huck Finn feeling guilty about helping an escaped slave) also cannot be right. So you are stuck. The only way out is to find out that your conscience is wrong and fix it. This is impossible for a consequentialist, though, it seems to me, because you cannot know the future with sufficient certainty, and that would be the only way to know that your conscience was wrong. You might find out that generally acting in such-and-such a way produces the best results, but you cannot know that it always will. 

I think that this is at least related to the reason why Anscombe, in "Modern Moral Philosophy," writes:
... if you are a consequentialist, the question "What is it right to do in such‑and‑such circumstances?" is a stupid one to raise. The casuist raises such a question only to ask "Would it be permissible to do so‑and‑so?" or "Would it be permissible not to do so‑and‑so?" Only if it would not be permissible not to do so‑and‑so could he say "This would be the thing to do." Otherwise, though he may speak against some action, he cannot prescribe any‑for in an actual case, the circumstances (beyond the ones imagined) might suggest all sorts of possibilities, and you can't know in advance what the possibilities are going to be. Now the consequentialist has no footing on which to say "This would be permissible, this not"; because by his own hypothesis, it is the consequences that are to decide, and he has no business to pretend that he can lay it down what possible twists a man could give doing this or that; the most he can say is: a man must not bring about this or that; he has no right to say he will, in an actual case, bring about such‑and‑such unless he does so‑and‑so.  
There seems to be a problem for consequentialism here, not so much to do with the difficulty of knowing the future (because that isn't always difficult to know) but to do with the coherence of the moral 'ought'. This apparent incoherence has nothing to do with giving a law to oneself. It has to do with what it might mean to say that one ought to do this or that particular thing. Shelly gave a very nice example, which I hope it's OK for me to use here: Imagine that a mine is flooding with people in it, and you can push one of three buttons that might close various doors and save people's lives. One button will save all the people from drowning, one will ensure that they all drown, and one will cause a few to drown but save most. You don't remember which button does which thing, but you know that button C has the third of these effects, drowning a few but saving most. The right consequentialist thing to do is to push the button that saves all the people, but you don't know what this is. The highest expected utility comes from pushing button C, even though this is definitely not the best button to push. What should you do? In one sense it seems you should push A or B (whichever is the one that will save all the people from drowning), in another sense you should push C. But what is the meaning of 'should' here? It doesn't seem as though it can be the same in both senses.
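The arithmetic behind the clash of 'should's can be made explicit. Here is a minimal sketch, using illustrative numbers that are not given in the example itself: say 100 miners are trapped, button C saves 90 of them for certain, and one of A and B saves all 100 while the other saves none, with no way to tell which, so each gets credence 0.5:

```python
# Illustrative expected-utility calculation for the flooding-mine case.
# Assumed numbers (hypothetical, not from the original example):
# 100 miners; button C saves 90 for certain; one of A and B saves all
# and the other saves none, each with probability 0.5.

miners = 100
saved_by_c = 90

# Expected number saved by pressing A (or, by symmetry, B):
expected_ab = 0.5 * miners + 0.5 * 0   # = 50.0
expected_c = float(saved_by_c)         # = 90.0

# By expected utility, C wins decisively...
best_by_expectation = max([("A or B", expected_ab), ("C", expected_c)],
                          key=lambda pair: pair[1])
print(best_by_expectation)  # ('C', 90.0)

# ...yet by actual consequences, C is certainly not the best button:
# whichever of A and B saves everyone saves 100, and C saves only 90.
```

So the 'should' of expected utility points at C while the consequentialist 'should' points, unknowably, at A or B, which is the tension the paragraph above describes.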

And that seems like a problem for consequentialism as a moral theory, as a view that can tell us what we should or ought to do, what is right or permissible or wrong or impermissible, as a practical guide to action.

Sunday, March 10, 2013

Ethics, aesthetics, and sport

Ooh, this is good: Coetzee and Auster on sport. Some good bits:
Coetzee: One starts by envying Federer, one moves from there to admiring him, and one ends up neither envying nor admiring him but exalted at the revelation of what a human being—a being like oneself—can do.
Which, I find, is very much like my response to masterworks of art on which I have spent a lot of time (reflection, analysis), to the point where I have a good idea of what went into their making: I can see how it was done, but I could never have done it myself, it is beyond me; yet it was done by a man (now and again a woman) like me; what an honor to belong to the species that he (occasionally she) exemplifies!
And at that point, I can no longer distinguish the ethical from the aesthetic.…
Coetzee: Winning or losing—who cares? How I judge whether or not I have done well is a private matter, between myself and what I suppose I would call my conscience.
Auster: By trying to win the game you are playing, you forget that you are running and jumping, forget that you are actually getting a healthy dose of exercise. You have lost yourself in what you are doing, and for reasons I don’t fully understand, this seems to bring intense happiness. There are other transcendent human activities, of course—sex being one of them, making art another, experiencing art yet another, but the fact is that the mind sometimes wanders during sex—which is not always transcendent!—making art (think: writing novels) is filled with doubts, pauses, and erasures, and we are not always able to give our full attention to the Shakespeare sonnet we are reading or the Bach oratorio we are listening to. If you are not fully in the game you are playing, however, you are not truly playing it.
I'm not sure how much commentary I can offer that would have the slightest value, but there's a lot to think about here. Connections come to mind with Geoff Dyer on the Olympics, the possible relation between not being fully in the game and language going on holiday (if philosophical problems arise when we take our heads out of the game, so to speak), and Stephen Mulhall on freedom through subjection to law.

And then there's Wittgenstein on rule-following in matters that call for appreciation (is this what unexceptional but high-level professional athletes do?) in contrast with the tremendous (is this what a Federer or a Beckham occasionally produces?). Lots to think about. 

UPDATE: on Facebook I just came across this, which seems relevant: “Talent hits a target no one else can hit; genius hits a target no one else can see.” - Schopenhauer

Saturday, March 9, 2013

The importance of elsewhere

In case you missed any of this, there's a lot of discussion at Reshef Agam-Segal's Notes and Half-Thoughts on the duty to tell the truth, a new and longish post at Philosophical Investigations, and a post that ought to be prompting more discussion (and is now starting to do so) at Language is Things We Do.


The Importance of Elsewhere

Lonely in Ireland, since it was not home,
Strangeness made sense. The salt rebuff of speech,
Insisting so on difference, made me welcome:
Once that was recognised, we were in touch

Their draughty streets, end-on to hills, the faint
Archaic smell of dockland, like a stable,
The herring-hawker's cry, dwindling, went
To prove me separate, not unworkable.

Living in England has no such excuse:
These are my customs and establishments
It would be much more serious to refuse.
Here no elsewhere underwrites my existence.
Philip Larkin

Friday, March 8, 2013

"Language is Sermonic"

This is a famous essay by Richard M. Weaver. Some selected passages:
the course itself [i.e. rhetoric, aka, I guess, freshman composition] has been allowed to decline from one dealing philosophically with the problems of expression to one which tries to bring below-par students up to the level of accepted usage.
the most obvious truth about rhetoric is that its object is the whole man.  It presents its arguments first to the rational part of man, because rhetorical discourses, if they are honestly conceived, always have a basis in reasoning.  Logical argument is the plot, as it were, of any speech or composition that is designed to persuade.
Rationality is an indispensable part to be sure, yet humanity includes emotionality, or the capacity to feel and suffer, to know pleasure, and it includes the capacity for aesthetic satisfaction, and, what can be only suggested, a yearning to be in relation with something infinite.  This last is his religious passion, or his aspiration to feel significant and to have a sense of belonging in a world that is productive of much frustration. These at least are the properties of humanity.
When we think of rhetoric as one of the arts of civil society (and it must be a free society, since the scope for rhetoric is limited and the employment of it constrained under despotism) we see that the rhetorician is faced with a choice of means in appealing to those whom he can prevail upon to listen to him.  If he is at all philosophical, it must occur to him to ask whether there is a standard by which the sources of persuasion can be ranked.  In a phrase, is there a preferred order of them, so that, in a scale of ethics, it is nobler to make use of one sort of appeal than another?  This is of course a question independent of circumstantial matters, yet a fundamental one.  We all react to some rhetoric as “untruthful” or “unfair” or “cheap,” and this very feeling is evidence of the truth that it is possible to use a better or a worse style of appeal.  What is the measure of the better style?  Obviously this question cannot be answered at all in the absence of some conviction about the nature and destiny of man.  Rhetoric inevitably impinges upon morality and politics; and if it is one of the means by which we endeavor to improve the character and the lot of men, we have to think of its methods and sources in relation to a scheme of values.
every use of speech, oral and written, exhibits an attitude, and an attitude implies an act. 
The implication of all this seems to be that it is not just advisable but essential and fundamental to rhetoric to study the philosophy of human nature, ethics, logic, and philosophy of language. The rest might be studying the techniques of great writers (and maybe some artists, film-makers, etc.), and probably some psychology too. And I don't really see what would be left to cover, other than more philosophy, more literature, etc., plus plenty of practice in persuasive writing, speaking, and so on. It sounds pretty good to me. And a far cry from some of the things I've heard about rhetoric.

Morality and religion

Via 3quarksdaily I found "Godless yet good" by Troy Jollimore at Aeon. As Jollimore says:
when actual arguments (not just good plain ‘common sense’) are offered against the possibility of secular morality, they tend to be deeply unconvincing. One common argument is that if there is no God, moral views are merely subjective opinions and nothing more: God is said to be required to make morality objective. A second argument is that divine authority is necessary to give morality its motivational force: without the threat of reward or punishment hanging over them, people will supposedly murder, rape, rob, and in every other way give in to their inherently sinful natures.
Neither of these arguments should persuade us.
They should not persuade us because atheists are not particularly immoral, and because:
The idea that murdering innocent people is perfectly fine unless there is a God and he disapproves is not only deeply implausible, but positively immoral in its own right.
This seems right. So why do so many people so often claim that morality requires God? I think two ideas get mixed up here. One is that religion is needed as a justification for morality, the other is that religion is something like a cause of morality. The former of these ideas seems wrong for the reasons already given. The latter seems wrong if it means that religion makes people good and atheism makes them bad (because whatever truth there might be in this idea, it isn't the truth, the whole truth, and nothing but the truth--religion can be a powerful force for good, but it is neither necessary nor guaranteed to work, and it can also be a powerful force for evil). But the historical or causal thesis might be simply that we generally get our moral beliefs from within a religious framework and/or tradition. And it's hard (though by no means necessarily impossible) to imagine a successful alternative. This relates to Jollimore's observation that, "no system of secular ethics has managed to displace religious approaches to ethics in the contemporary popular imagination."

He wonders why this is, and suggests that the impersonal nature of utilitarianism and Kantianism is part of the problem. As an alternative, Jollimore recommends Iris Murdoch's emphasis on attention, and John McDowell's particularism. But these are very sophisticated ideas. How will we teach children this kind of ethics? There are stories we can tell that aren't religious, or that we can tell even without accepting the religion from which they come. But religion provides both a shared set of stories, lessons, and values, and a context or framework into which to fit these stories, etc. It gives us a way to prioritize our values. At least the great religious traditions do, if only because so many people within them have wrestled with the problem. Their values might be wrong, but at least they have a sense of what is more or less important, and of how everything is supposed to fit together. No secular ethics has either the broad acceptance or the literary and philosophical tradition of any great religion. And that seems like a problem to me.

In his lectures on aesthetics Wittgenstein is reported to have said:
A child generally applies a word like ‘good’ first to food. One thing that is immensely important in teaching is exaggerated gestures and facial expressions. The word is taught as a substitute for a facial expression or a gesture. The gestures, tones of voice, etc., in this case are expressions of approval.
This sounds right. So one question is how we get from this use of words like 'good' to the use one makes of them when one is twenty, which, Wittgenstein reportedly says, is not the same use but is related, the child's use being a rough approximation or primitive version of the adult's. I suppose we evolve through a combination of experience (our own and others') and socialization. And the problem is that without something like a great religious tradition this process will be unguided by the wisdom that such traditions offer (not that wisdom is all they offer) and rather haphazard, since we live among people with different values and different ways of expressing and thinking about them. Perhaps I'm not saying anything that hasn't already been said (by, say, Nietzsche) but it seems quite pressing to me, and no amount of atheist temples or 'good books' will solve the problem.

Thursday, March 7, 2013

Look out honey, 'cause I'm using technology

At New APPS Jon Cogburn is inviting people to identify the best rock song ever. That and this piece on Roadrunner got me wondering about what makes a great punk rock song. Supposedly punk means something like scumbag: "As Legs McNeil explains, "On TV, if you watched cop shows, Kojak, Baretta, when the cops finally catch the mass murderer, they'd say, 'you dirty Punk.' It was what your teachers would call you. It meant that you were the lowest."" So (maybe 'so' is an exaggeration) a great punk rock song ought to be some sort of transformative celebration, a slave rebellion in music. 

A couple of people at New APPS have suggested "Sister Ray" by the Velvet Underground, a droning, hypnotic account of banal debauchery. It's good, but a bit too obviously experimental. "Roadrunner" is based on it, but much happier. Jonathan Richman takes the Velvet Underground's reversal of the usual '60s themes ("I'm sick of trees, take me to the city" and the taunting "all you protest kids") and embraces their more positive implication: "Doesn't anyone love the dark? And Route 128 out by the industrial park?" He loves driving around at night with the radio on. It's not postmodern irony but love. Which perhaps disqualifies it as punk rock. More appropriate might be something that runs on anger, like the live version of "Boss Hoss" by the Barracudas. I don't know the story behind this recording, but I think the Barracudas were supporting the Stray Cats, and were about as popular as supporting acts usually are. Here they're clearly having some technical problems, and it sounds as though the crowd decides to make things worse rather than wait any longer for them to get their act together. I'm probably imagining things, but I've always felt as though the song becomes an expression of pure frustration, or a pure expression of frustration. Which makes it sound great to my ears, but it's not universally hailed as a classic, and there's probably a reason for that. 

The best rock song of all time is the one you're listening to right now, as long as it amazes you with how good it is. There are lots of perfect songs ("Crash" comes to mind), so any of those that you haven't heard for a while (e.g., for me, "Cool Guitar Boy" when it came on while I was driving the other day) would be it. But if you want something more definite it probably has to be Iggy Pop, either "The Passenger," which claims triumphant ownership of the entire world, or "Search and Destroy," which combines violent fantasy ("I'm a street-walking cheetah with a heart full of napalm"), self-pity ("I am the world's forgotten boy"), lust ("love in the middle of a fire-fight") and poetic nonsense ("I'm the runaway son of a nuclear A-bomb") into something as close to the ultimate punk rock song as you can possibly get. 

Lynch on Horwich

Michael P. Lynch's response to Paul Horwich is good. He writes:
maybe truth doesn’t have just one nature, or none, but more than one. If so, then the sin of traditional views is precisely taking a good idea and overgeneralizing it. But that doesn’t mean that there is nothing more to say about truth. Far from it — there is lots to say, it just depends on what we are talking about. So no uniform reductive explanation perhaps, but illumination just the same.
And:
Locke’s view that there are human rights, for example, didn’t leave the world as it was, nor was it intended to. Or consider the question of what we ought to believe – the central question of epistemology. [...] In getting more people to adopt new evidence-based standards of rationality — as the great enlightenment philosophers arguably did —philosophers aren’t just leaving the world as they found it. And that is a good thing. 
And then:
[Horwich's Wittgenstein], I suspect, would grant all this,
So Wittgensteinian philosophy, or Lynch's Horwichian Wittgensteinian philosophy, rejects over-generalization, is nevertheless potentially illuminating, and allows for visionary reforms of language use, especially in matters of value. Sounds good to me. Where's the snag?
In order to free us from [certain bad] thoughts, the philosopher must not only show the error in such definitions. She must also take conceptual leaps. She must aim at revision as much as description, and sketch new metaphysical theories, replacing old explanations with new.   
Lynch does not say why. To take his example, if we show the error in defining truth as Authority, why is this not enough to free people from that idea? Especially if we show the error in the kind of illuminating, lots-to-say way that he described earlier?

There are two tasks described here: the illuminating exploration of diverse uses of language that shows the error of over-generalization, and the leaping creation of new uses of language. The former requires mastery of a certain kind of method or methods, but the latter requires creation ex nihilo, or something close to it. There can be no method for that. And there is no obvious reason why people who are good at the one task will also be good at the other. Nor why the creative types (I might call them poets or propagandists) must come from the ranks of the explorer types (call them philosophers). So I don't think that Wittgenstein would include 'poetry' among the tasks of the philosopher, but there is no reason why one could not do both.

Monday, March 4, 2013

Was Wittgenstein right?

Paul Horwich has an interesting essay at The Stone. It simplifies things, of course, but three weak points are perhaps worth noticing:

  1. Horwich refers to Wittgenstein's "extreme pessimism about the potential of philosophy." There is something to this, of course, but clearing away houses of cards is a bit like ridding oneself of fat or removing blur in one's visual field. It is misleading to characterize it as purely negative. Horwich doesn't make this mistake, but I'm not sure he avoids it by as wide a margin as he might.
  2. He goes on to say: "But what is that notorious doctrine, and can it be defended? We might boil it down to four related claims." There is something deeply unWittgensteinian about boiling things down. He is quoted in the lectures on aesthetics as saying: "If we boil Redpath at 200 degrees C all that is left when the water vapor is gone is some ash, etc. This is all Redpath really is." Saying this might have a certain charm, but would be misleading to say the least. Summaries of Wittgenstein are likely to be equally misleading. It's hard to avoid them when addressing a general audience about Wittgenstein, of course, but the boiling down metaphor might be better eschewed. 
  3. At the end Horwich says that: "These radical ideas are not obviously correct, and may on close scrutiny turn out to be wrong. But they deserve to receive that scrutiny — to be taken much more seriously than they are." I sort of agree, but I'm not sure about the idea that Wittgenstein's ideas might turn out to be wrong on close scrutiny. It isn't clear, after all, what his ideas are. At least sometimes it seems as though what Wittgenstein offers is a method rather than a set of ideas. Of course this method, or set of methods, is not arbitrary and is based on certain ideas, but Wittgenstein doesn't really offer a theoretical defense of his methods. The proof can only be in the pudding. Or so I'm inclined to think. In which case we have to try the method(s) and see how it goes, rather than scrutinize the method(s) or the ideas on which it is (they are) based. And also look out for the puddings of other philosophers. Are the ideas rejected by Wittgenstein proving fruitful after all? Or are the same debates still going on, albeit perhaps in new forms? 

The Moral Philosophy of Elizabeth Anscombe

Conference announcement here.
On 27th and 28th September 2013 the Centre will host a major international conference on the moral philosophy of GEM Anscombe, after whom the Centre is named, at St Hugh's College, Oxford (where G.E.M. Anscombe was an undergraduate student). 

Participants will include:
Christopher Coope, 
Dr Mary Geach,
Prof Luke Gormally, 
Dr Edward Harcourt,
Dr David Albert Jones, 
Prof Anselm Müller (Anscombe Memorial Lecturer 2013),
Dr Matthew O’Brien, 
Prof Thomas Pink,
Prof Duncan Richter, 
Dr Roger Teichmann,
Prof Jose Maria Torralba and 
Prof Candace Vogler. 

Wittgenstein on aesthetics

What I think of as Wittgenstein's lectures on aesthetics are actually just notes taken by students, but they are pretty good notes, and the material is rich. There are some useful quotes here and a nice account of the lectures here.

The part that most interests me at the moment is this (edited) passage (my emboldening):
5. One thing we always do when discussing a word is to ask how we were taught it. Doing this on the one hand destroys a variety of misconceptions, on the other hand gives you a primitive language in which the word is used. Although this language is not what you talk when you are twenty, you get a rough approximation to what kind of language game is going to be played. Cf. How did we learn ‘I dreamt so and so’? The interesting point is that we didn’t learn it by being shown a dream. If you ask yourself how a child learns ‘beautiful’, ‘fine’, etc., you find it learns them roughly as interjections. (‘Beautiful’ is an odd word to talk about because it’s hardly ever used.) A child generally applies a word like ‘good’ first to food. One thing that is immensely important in teaching is exaggerated gestures and facial expressions. The word is taught as a substitute for a facial expression or a gesture. The gestures, tones of voice, etc., in this case are expressions of approval. What makes the word an interjection of approval? {2.1} It is the game it appears in, not the form of words. (If I had to say what is the main mistake made by philosophers of the present generation, including Moore, I would say that it is that when language is looked at, what is looked at is a form of words and not the use made of the form of words.) 
6. If you came to a foreign tribe, whose language you didn’t know at all and you wished to know what words corresponded to ‘good’, ‘fine’, etc., what would you look for? You would look for smiles, gestures, food, toys. ([Reply to objection:] If you went to Mars and men were spheres with sticks coming out, you wouldn’t know what to look for. Or if you went to a tribe where noises made with the mouth were just breathing or making music, and language was made with the ears. Cf. “When you see trees swaying about they are talking to one another.” (“Everything has a soul.”) You compare the branches with arms. Certainly we must interpret the gestures of the tribe on the analogy of ours.) How far this takes us from normal aesthetics [and ethics—T]. We don’t start from certain words, but from certain occasions or activities.
8. It is remarkable that in real life, when aesthetic judgements are made, aesthetic adjectives such as ‘beautiful’, ‘fine’, etc., play hardly any role at all. Are aesthetic adjectives used in a musical criticism? You say: “Look at this transition”, {3.2} or [Rhees] “The passage here is incoherent”. Or you say, in a poetical criticism, [Taylor]: “His use of images is precise”. The words you use are more akin to ‘right’ and ‘correct’ (as these words are used in ordinary speech) than to ‘beautiful’ and ‘lovely’. {3.3}
And here's another bit:
23. We talked of correctness. A good cutter won’t use any words except words like ‘Too long’, ‘All right’. When we talk of a Symphony of Beethoven we don’t talk of correctness. Entirely different things enter. One wouldn’t talk of appreciating the tremendous things in Art. In certain styles in Architecture a door is correct, and the thing is you appreciate it. But in the case of a Gothic Cathedral what we do is not at all to find it correct—it plays an entirely different role with us. {8.1}  The entire game is different. It is as different as to judge a human being and on the one hand to say ‘He behaves well’ and on the other hand ‘He made a great impression on me’. 
Two things that strike me are the difference between articulate appreciation and just saying "Ah!", for one thing, and the difference between art that gets things right and art that is tremendous, for another. I think Wittgenstein has a similar distinction in mind when he says that "the house I built for Gretl is the product of a decidedly sensitive ear and good manners, an expression of great understanding... But primordial life, wild life striving to erupt into the open - that is lacking." Culture and Value (1980 edition, p. 38e, from 1940). He also says somewhere that really good architecture is like a gesture, like someone saying something. Speaking (in this sense) is then quite different from following rules, which is perhaps not what one would expect to find Wittgenstein saying. And it is not what he says, it's my gloss, but he comes interestingly close to it.

Why does it matter that a child first uses the word 'good' in connection with food or toys or something of the sort? We no longer speak the language of our childhood, and yet there is a sense, an obvious sense, in which we do. We do still use the word 'good' and it is the same word, even if we now use it in more sophisticated ways. It makes little sense to say, "This food is good, but I don't want to eat it," or "This music is good but I don't like it." Those sentences can make sense, of course, but the latter sounds as though it perhaps means, "I know I'm supposed to like this stuff, but I just don't." The word 'good' is in quotation marks. Which suggests that a question like "Why be good?" really makes no sense. The real question is what we actually regard as good, not why we should pursue what we so regard. Although even that might not be much of a question, since we know pretty well what we regard as good. The challenge, or one challenge, is, so to speak, maximizing the good. This sounds too consequentialist to be quite right, but I mean making sure that we act consistently with our priorities. For instance, I like money but if my pursuit of money costs me things that I love even more (my family, the local wildlife, etc.) then I have behaved foolishly. This is obvious, but I think it's quite difficult in practice to avoid this kind of folly. Another challenge, if I can call it that, would be to do the tremendous. Socrates in the Crito might be an example of this. 


I'll be returning to this. I think there could be a lot to think about in connection with ethics that I haven't really thought about before.