Monday, February 25, 2013

I'm one of three people kicking off a student discussion tonight on paternalism. Here's a draft of what I intend to say:
One way to think about people is as vehicles. The Indian philosopher Nagasena and the Greek philosopher Plato compared a person to a chariot, and whoever said that life is a highway had a similar idea. Paternalism can be understood as taking over someone else’s steering wheel, forcing them to go in a direction that is not the one they have chosen. Put like this, it sounds like a form of carjacking or kidnapping, something very bad indeed. It’s hard to articulate exactly why kidnapping people is wrong, or what is so bad about it, just as it is hard to say why imprisoning people is so bad (assuming that prisoners are fed and housed, and not subjected to unlawful assaults). But we know that it is very bad to be in prison or to be kidnapped. Freedom seems to be a fundamental value, not something to violate without a very good excuse.
Paternalism is not literally kidnapping, but it is forcing people to live as if you and not they are the boss of them, the person in charge or ruler of their life. So it is like kidnapping, and perhaps almost as bad. It does seem to some people as though it needs a very good excuse, and perhaps it can never really be excused at all. That is, perhaps the only time when paternalistically forcing someone to do, or not do, something without their consent is excusable is when they actually do consent in some sense. Most obviously this would be if they actually did consent retroactively. For instance, I save you from a burning building despite the fact that you insist you want to stay and watch the pretty flames. Once your acid trip is over, you thank me for saving your life. You have given me retroactive consent, and my paternalism is forgiven.
Another kind of consent would be hypothetical consent. Here you never actually do agree to what I have made you do, but the idea is that you would consent if you were more enlightened. The word ‘paternalism’ comes from the Latin for father, and it is widely accepted that parents do have the right to make their children eat their broccoli, go to school, etc., because children are not yet fully rational. We might feel the same way about forcing mentally ill people to take their medicine. And so, perhaps, when someone behaves irrationally (driving drunk, taking cocaine, gambling away their life savings, etc.) we might be justified in acting paternalistically toward them. When they have sobered up they might acknowledge that we did the right thing. But even if they don’t, they would do so if they were rational. Irrational people have no right to be left to their own devices. That’s the idea, anyway.
This is tricky though, because who is rational and who is not is a judgment call. Is it irrational to snort cocaine, or just dumb? Do we have a right to intervene every time anyone makes a bad decision? Should large sodas be illegal? Should stupid haircuts be illegal? It’s hard to know where to draw the line, and so it’s tempting to say that every adult should be allowed to do whatever they like as long as they don’t harm anybody else.
There are two problems with this. One is that people make some really bad decisions. Can we really stand by while people destroy their lives? Friends don’t let friends… My neighbor is not my friend, but I don’t have to be inhuman in order to respect other people’s autonomy (self-rule, being the boss of themselves). That is, it seems like a funny kind of moral code that demands we let people suffer when we could, perhaps, easily prevent their suffering. Another problem is that very little that you do affects only yourself. We’re living in a society, as George Costanza said. If you die, someone else has to clean up your body. If you lose all your money, say by gambling, someone else will have to provide for you. We could, arguably, just leave bodies to rot and the poor to starve, but a) this is simply unacceptable to many people, and b) it would still affect others anyway. Rotting corpses are a health hazard, and starving masses are a recipe for revolution or at least crime. People who do not take care of themselves put others at risk.
Does this give those others a right to insist that people not gamble too much, not take dangerous drugs, not drive without a seatbelt, etc.? I think so. (Just don’t ask me where to draw the line. The point of democracy is to draw lines like this.) But perhaps this isn’t really paternalism any more. It isn’t saving people from themselves, but saving others from the irresponsible.
What about non-legal cases? There is a kind of passive paternalism when you don’t allow someone to make their own informed decisions, by keeping relevant information from them. A doctor might do this by not telling you that you have only a few weeks to live. Or you might decide not to tell a friend that their spouse is cheating on them. In these cases there isn’t the same kind of risk to the general public that I described before, and I don’t think the answer is simple. Some people prefer to know things like this, others don’t. So retroactive consent is hard to predict. What is rational is also hard to know. It might seem that being fully informed is the rational choice. But information might make certain courses of action very hard to take. If they are the right courses to take, then maybe it would be more rational to choose not to know the awful truth.
Take the case of someone who has little time left to live. If he carries on as normal because he does not know that he is soon going to die then his life is kind of absurd, almost ridiculous. The same goes for a man whose wife is cheating on him. It might be cruel to allow someone to live in this absurd way, despite the pain that they would feel on finding out the truth. But what if this pain threw them off track? If the dying man now spends his last days so terrified that he feels unable to get out of bed, was it really kind to tell him he is dying? He has autonomy but no life. Generally I think it is better to know and to let other people know the truth, even when it is painful. But I don’t think I can at all prove that it’s better.
There is a kind of heroic ideal that believes in brave honesty, but I don’t know how honest (or brave) we can be. It is not possible to know everything, or to pay attention to everything. So ignoring some things is not cowardly or dishonest, just inevitable. Death might be one of these things. I don’t mean that we should just pretend it isn’t going to happen, but I doubt we can fully anticipate it either. If that’s right, then the choice isn’t between full honesty and cowardly fantasy, but between degrees of ignorance. And some attempts at maximum honesty might amount to little more than masochism. If you want to do that to yourself it’s up to you, but it would be hard to say that you would always be justified in inflicting it on someone else. So passive paternalism might sometimes be justified, when the truth is just too painful. When in doubt, though, I’d say we let people know the truth so they can steer their lives accordingly.
Sunday, February 24, 2013
The horror, the horror
Why do people enjoy being scared? It surely is not the case that scary movies are not really scary, nor that the pleasure comes from the relief when the fear stops. It seems at least possible that people enjoy scary films for the same kind of reasons that they enjoy sad movies, songs, and stories. And why is that? The Verve's "Bittersweet Symphony" suggests a kind of answer: "I need to hear some sounds that recognize the pain in me. Yeah." If you're feeling sad you're likely to want to have your feelings articulated, and recognized. Not necessarily shared, that is, but acknowledged as something that people do experience. It's a kind of validation, as well as sympathy. Fear might also need to be articulated and validated. For instance, if it is usually only very vaguely felt, or if attempts to articulate it are quickly censored because they sound crazy or evil. If this is right then popular movies might be very revealing of a society's psychological state. And that sounds plausible anyway.
Last night I watched two horror films: Nosferatu and [REC] 3 Genesis. Both are kind of silly, but in different ways. Nosferatu is, not surprisingly, very dated, and the acting seems terrible. On the other hand, even now the images (some of them, anyway) are powerfully striking and, appropriately, haunting. [REC] 3 (the "rec" is for the letters used to indicate that a camcorder is recording: think The Blair Witch Project) is much more forgettable, and the weakest of the [REC] series, though still, to my mind, worth seeing. When the bride picks up a chainsaw to fight off the zombies you know you are being catered to by crowd-pleasers.
Nosferatu reminded me of the links between the plague and legends of vampires. It also seems portentous to watch an angry German mob run through a town after a scapegoat. Creepy. But what could the contemporary fascination with zombies mean? Perhaps nothing, of course, but my pet theory is that we are afraid that our own society is being taken over by people who are dead on the inside, and that this internal death is contagious. That we are already succumbing to it. This might be thought of as part of the death of God (freedom and immortality die along with him, and so we become mere things), or as the death of the Overman. The disenchantment of the world means, among other things, the disenchantment of human beings. Free will might be thought of as something like a compliment that we cannot help but pay to each other, but it isn't so hard to deny the compliment to those we don't really interact with (Descartes's hats and coats below in the street, "the they"), and these seem to be increasing in number and influence (see here, for instance). And much the same goes for consciousness. (Doesn't it? I haven't thought this anything like all the way through.)
We can, and perhaps cannot but, have an attitude toward a soul when it comes to people we really live with. But all those others out there, and people we live only virtually with, can't really be treated the same way. What kind of attitude can I have toward someone on the other end of an internet connection? I realize that means most of the people who will read this, i.e. the ones I will never meet, and I don't mean to be rude. But we cannot literally see eye to eye like this, or make any use of facial expression or bodily posture. I can only have an attitude toward you in a limited sense of the word. And what about people who are not themselves even in person, only representatives of some corporate position, say, or speakers of jargon? I am not of the opinion that they have souls, to misquote Wittgenstein. And then Schopenhauer's argument against solipsism comes up: you can't prove that you are fundamentally different from everyone else, but you would have to be crazy to think that you are. This goes two ways. It's a kind of argument against zombies, but also an argument against one's own non-zombie-ness. That is, if they are all zombies, and I must be the same as them, then I must be a zombie too. Just being surrounded by them, even if you are still human, is bad enough. And however incredible that idea might be, it's still scary.
Saturday, February 23, 2013
Daniel D. Hutto reviews Wittgenstein at His Word
I knew that Daniel D. Hutto had reviewed Wittgenstein at His Word but I had not seen the whole review before. Here it (or a draft of it) is at academia.edu. The reviews I remember reading struck me as having missed the point I was trying to make in the book, which of course shows that I did not make my point clearly enough. Hutto seems to have seen through the murk to what I was trying to say. His conclusion that "while the book makes a strong case for taking Wittgenstein at his word and gives an interesting view of what this means, it hardly supplies the final word" strikes me as fair, or even generous.
Wednesday, February 20, 2013
Midway
In the words of Stephen Clark:
[T]his is indeed a (short) film that everyone should see:

The trailer, to which the second link leads, is less than four minutes long, and is incredible. It's like an exhibition of art photographs, but showing birds killed by garbage. The point, as far as I can tell from such a short preview, is about environmentalism and the need for us to throw less garbage into the sea. But it feels as though the point is somehow (even) bigger than that, as though the birds' dying is a symptom of something else, or a metaphor. Greed, laziness, sloppiness, thoughtlessness, short-term thinking, short-range thinking (here matters, there does not) lead to consequences we don't like. Lead even to what we want least of all, the poisoning of paradise. Even though we might be so dimly aware that this (paradise, unpoisoned) is what we want most of all that we don't even recognize our desire for it. (They are only birds, after all.)
https://www.facebook.com/photo.php?v=377714522327361
http://midwayfilm.com/
Friday, February 15, 2013
Ethics and the Philosophy of Culture
I don't think it's out yet, but it looks as though this collection will be published soon. It includes papers by Olli Lagerspetz, Alice Crary, Don Levi, Mikel Burley, Sergio Benvenuto, Lars Hertzberg, Pär Segerdahl, Joel Backström, Tove Österman, Anniken Greve, and me, plus an introduction by Ylva Gustafsson, Camilla Kronqvist and Hannes Nykänen.
David Cockburn says:
This is a very worthwhile collection. All of the essays are informed by a strong knowledge of, and sympathy for, Wittgenstein, and develop strands in his thinking in fresh and interesting ways. They also, however, bring a distinctive angle to issues of broad philosophical interest in a way that should make the collection of value to anyone who is concerned with the fundamental questions of ethics, culture and religion that are its focus. One might add that (in contrast to much contemporary philosophy) most of these essays are a real pleasure to read.

Which is nice of him. The people you would expect to be good are indeed on good form here, but there are some very nice surprises from people whose work I didn't know before too. (I am not including myself in either category, by the way.)
Friday, February 8, 2013
Metacognition
Teach Philosophy 101 links to a "great article" about metacognition.
Metacognition has to do with the process of reflecting on learning. Often, however, as we try to fit our material into the semester, we don't leave time for students to reflect on what they have learned. But the studies show that metacognition is an important part of learning. This suggests that we should leave more time for students to reflect on what and how they have learned.

Any time I read that "studies show" or "research shows" I get suspicious. What research? What studies? Of course, some research does show things, but in this case it does not seem to me that it shows that metacognition is important.
Here is a kind of summary of the article (which is indeed pretty good):
Throughout the process, as reported in San Francisco, the group found that metacognition was by no means a "silver bullet" for improving student learning, but nonetheless was an effective tool for focusing students' attention more consciously on their learning and, ultimately, providing a means to encourage students to think about the larger purpose of their education. Perhaps as important, the collegium group found that by asking metacognitive questions of students, they became both more aware of their students' learning and increasingly self-reflective about their own teaching practices and effectiveness.

In other words, reflection or metacognition (getting students to think about what they have learned and how they learned it) does not significantly improve students’ learning. It does make them more conscious of what they learn and how they learn it (surprise surprise!), and can help teachers become more aware of what their students are learning (or not) and how, so that problems can be addressed.
Two things that seem to help, especially for students who are doing poorly, are finding out from them before a test what they think they know and what they think they don’t know (and then, presumably, spending time in class going over the stuff they don’t feel confident about), and having them do brief writing exercises about assigned readings before class. This increases the amount they read (again: surprise!). Both of these exercises are considered to be forms of reflective learning. But they seem more like surveying students and making them read, respectively. Each of which is no doubt potentially beneficial, though there will be opportunity costs that ought to be taken into account.
Thursday, February 7, 2013
Was Wittgenstein sexist? Part II
Another reason to get on Facebook if you aren't already is Wittgenstein Day-by-Day. Today's entry tells us:
Friday 7th February, 1913: LW turns up at Pinsent’s rooms, and stays for tea until 5.30pm, at which point Pinsent goes to attend Russell’s lecture. At 6.30 Pinsent goes to LW’s rooms, and the two stay talking there until Hall at 7.45. They talk about Women’s suffrage (http://en.wikipedia.org/wiki/Women%27s_suffrage), and Pinsent reports that LW “is very much against it – for no particular reason except that ‘all the women he knows are such idiots’”. LW expresses his view that at Manchester University the female students spend all their time flirting with the professors, which disgusts him, since he dislikes half-measures of all sorts, and ‘disapproves of anything not deadly in earnest’. Pinsent comments: “Yet in these days, when marriage is not possible till the age of about 30 – no one is earning enough until then – and when illegitimate marriages are not approved of – what else is there to do but philander?” (Pinsent, pp.44-5).

I don't know what to make of this. If he opposes women's suffrage because all the women he knows are idiots then this would not be opposing it for no reason. It would be opposing it for a bad reason, of course, but not no reason at all. What Pinsent reports is open to interpretation, but I wonder whether Wittgenstein perhaps did oppose it, even "very much", for no particular reason. It seems very much in the same spirit as his remark to Russell that he would prefer a Society for War and Slavery to one for Peace and Freedom. And his suggestion (as I recall) that the atom bomb was likely to be good because the people who opposed it were so bad (although what he actually writes is that just because the opponents are bad it doesn't follow that the bomb must be good, suggesting to me that the contrary thought had occurred to him and needed to be corrected). In short, a certain kind of liberal progressivism seems to have irritated him so much that he opposed whatever it favored, even when he had no particular reason to do so. I don't mean that this is a good thing. I'm much more on Russell's side here than Wittgenstein's, which, at least as I am imagining it, is purely reactionary. This probably won't help, but his position seems vaguely like some things Nietzsche says, which I also find hard to understand. Maybe there is a common thread, maybe there isn't. It's interesting though (Wittgenstein's apparent reactionary tendency, that is, not any possible similarity with Nietzsche in this regard, which may or may not turn out to be interesting if it exists at all).
His take on flirting is curious too. Pinsent's view is much more reasonable, but in a way it seems to be reasonableness itself that Wittgenstein is rejecting. (Again probably unhelpfully, in a way that reminds me of Chesterton's rejection of Aristotle's reasonableness.) Half-measures, half-heartedness, compromise do not seem to have been at all to his taste, at least when he was young. In which case he might not have supported anyone's having the vote, as people on Facebook have suggested.
I probably sound as though I'm twisting myself into knots to avoid accepting the fact that he was sexist. But I think he was sexist. What I'm not sure about is whether this was one facet of some larger anti-reasonableness. And then whether this is best understood as immaturity, romanticism, existentialism, religiousness without theism, aristocratic arrogance, or what. Not that a mere label will be much use on its own, of course. But I'm curious about the nature of his seemingly reactionary thinking.
Wednesday, February 6, 2013
Did Wittgenstein Disagree with Heidegger?
I just found that the version of this paper of mine that shows up on academia.edu is not the best version. So I have made the better version available there too, and hopefully people will stop reading the old one.
Tuesday, February 5, 2013
Comments
I have just added what I hope will be some sort of barrier to spam in comments here. I had removed almost all such barriers, but too many advertisements for non-philosophical websites are getting through. If you find yourself having to type the words shown below where the "words" are a photograph of a bridge and some illegible jumble of blurry typewriter parts, please let me know.
Rhetoric
If you're wondering about my allergy to rhetoric (or 'rhetoric,' at any rate), try watching this video. Some of the people in it come across as intelligent and make good points (there's one doctoral candidate in particular who seems sound to me), but a lot of what is said seems to me to go badly wrong.
If you would rather not watch it all, here are some parts transcribed. People don't always speak in perfectly formed sentences, of course, but I have done my best to write accurately what I think they mean to say.
- An education of rhetoric enables communicators in any facet of any field to create and assess messages effectively.
- What rhetoric does is to help you to become more self-conscious about your practices so that you can tailor them for a wide variety of individual situations.
- Rhetoric [...] is language.
- When we decide which browser we'll use we're making a rhetorical decision. Which is going to be the best for my career? Which is going to be the best for my research? ... Take digital photography. We make decisions of how I will crop the picture, what lighting I will use, ... These are just as important to the person who is doing visual rhetoric as the person who is using oral rhetoric would consider how loudly they speak, what terms they use. It's the same principle at work.
- How people make purchase decisions can be rhetorically informed or not. [The speaker then points out that people tend to be much more careful and systematic when buying a car or choosing a university than they are when choosing which toothpaste to buy.] That's epistemic rhetoric.
- [Next comes a story about two young women going to college. One does a lot of research before choosing a school, and is therefore said to have made an informed decision, while the other picked the school that most of her friends were going to, which is said not to be an informed decision. After this we are told that many people do not question their lives at all because they don't have the capacity to do so. The implication is that studying rhetoric will teach you not to be so foolish as to pick a college without careful research, e.g. into what majors each college offers, and will, in general, help you lead a more thoughtful, rational life. The narrator later tells us that "Julie" used epistemic rhetoric and made an informed decision, while "Kate" based her decision on someone else's opinion.]
- Rhetoric, among other things, is epistemic, that is, it creates realities. It's not going to create that wall over there, but it's going to create your perception of that wall or understanding of what that wall means to you, your understanding that the wall actually is there. So when I say it creates reality what I mean is not that it creates some physical world but it creates our understanding of the physical world and our place in the physical world.
- The one sentence definition of epistemic rhetoric that I like to use is that rhetoric is a means of adjudicating between competing knowledge claims.
- To say that rhetoric is epistemic is to say that rhetoric is a way of knowing.
- Rhetoric is one of the processes we use to create facts, to construct facts.
- A fact is raw data plus interpretation.
There's a lot going on, and going wrong, there, it seems to me. Passages 1-4 suggest that one problem is trying to do too much. How can one discipline enable people to assess messages effectively in any facet of any field? I can't assess a chemistry textbook without knowing any chemistry, for instance. Of course, I might check the spelling and grammar, or I might be a design expert and check the layout, but there cannot be a single discipline that specializes in every aspect of every kind of communication. And making people more self-conscious about communication might make it harder for them to communicate effectively. Point 4 suggests that rhetoric is the study of presentation, which perhaps could be a discipline. But I doubt that one can reliably guide people in matters of presentation without some knowledge of what is to be presented. Otherwise the presentation might distort the content or mislead the audience.
Points 5-11 seem almost like a philosophical nightmare. 5 and 6 suggest that "epistemic rhetoric" is (the study of) rational decision-making. This has nothing to do with presentation, though, except in the sense of point 3, that it all has to do with language. This is about as much as I can get from point 7. Point 8 suggests that, if rhetoric is language, epistemic rhetoric is reason. And then 10 and 11 seem like garbled Nietzsche.
In short, I'm not impressed. The idea seems to be that rhetoric is somehow both the art of presentation in any medium and of any content, and that rhetoric is philosophy, or something very like it. If you really wanted to learn this stuff I'd suggest something like the following curriculum:
- freshman composition
- graphic design
- critical thinking
- cognitive psychology
- epistemology
- philosophy of language
- Nietzsche
- Wittgenstein
That doesn't sound so bad, although it might be a challenge to make it all fit together and to teach the courses lower on the list (philosophy of language, etc.) to undergraduates. I wonder whether anyone does something like this. Maybe it's the future of philosophy at VMI.
Sunday, February 3, 2013
MOOCs and doom
I've spent so much time discussing MOOCs and The Future of the University today that I think I might as well put my thoughts together in one place. Here goes.
It started with this and this. The first is an article by Thomas Friedman arguing that MOOCs (massive open online courses) will be great for spreading cheap, high quality education, thereby lifting people out of poverty. The second is a piece by Nathan Harden arguing that MOOCs will mean the end of the university as we know it. So what's my view?
MOOCs face some problems. The fact that they are open means that anyone can take them, and lots of people do. If all those students write papers and do other assignments, who is going to grade them all? How will the grader know that the students did not cheat, say by having someone else do the assignments for them? And if the courses are open to anyone, how can anybody make a profit from them? Presumably the goal is to use some combination of software and badly paid academics to grade work and detect cheating. Then credit can be given for passing the course, and students will pay for this credit. Since there are so many more students able to take each course than at a traditional college or university, and the overheads are relatively low, it should be much more profitable to teach like this than in the traditional way. Cheaper for the students, too. Hence the good news that Friedman is so happy about, and the doom that Harden predicts.
It's too early to tell what will happen, but it does seem likely that MOOCs will become a cheap way to spread pretty good education around the world. They don't seem ideal for hands-on engineering courses, or theatre, or subjects like philosophy where tutorials would probably be the ideal means of instruction. But they do seem to be a perfectly decent way to teach anything that can be taught through large lectures, and that includes almost everything at the introductory level, as well as perhaps some whole subjects. Would a small introductory ethics course taught by me really be much better than one in which students watched Michael Sandel's lectures and had access to someone like me online for questions and discussion? I like to think it would be, is, better, but I doubt it's much better, and I don't know how anyone could measure (or simply discern) the difference in quality at all reliably. At levels above the introductory, though, I think you need more individual attention and less lecture, at least in subjects like philosophy (by which I might just mean the liberal arts, but I don't know other subjects well enough to say). And I think this is widely recognized.
I expect universities will find a way to survive, even though they are the homes of the largest lecture courses, which seem to me to be the most vulnerable to competition from MOOCs. After all, they are also home to college sports, to the professors who will teach the MOOCs (although how many of these will we need in future?), and to the graduate students who are learning how to become either superstar MOOC professors or else badly paid MOOC graders and discussion-leaders (how many of them will there be in future?). But maybe it will become standard to graduate in just two or three years after transferring in a year or two of MOOC credit. Maybe some subjects will be so MOOC-friendly that they will disappear from college curricula, there being no market for these on actual campuses. And perhaps non-MOOC-friendly subjects will be confined to a much smaller number of old-fashioned liberal arts colleges than exist now, populated by the children of wealthy parents who want more individual attention and the prestige of non-vocational education for their offspring, even if they themselves have little sense of the value of literature, philosophy, etc. That's what seems likely to me.
It will mean better value in higher education for many people, but an even worse job market for liberal arts PhDs. Given that the job market is already terrible, and that some students go to community college first and then transfer credit to four-year colleges and universities, it would basically mean more of the same stuff we are seeing already. Which makes it all the easier to believe.
Friday, February 1, 2013
Contemporary moral debate
I'm developing an allergy to the word 'rhetoric', but reading Don Levi's "In Defense of Rhetoric" got me thinking. He takes an interest in the actual arguments people present regarding such issues as abortion. Arguments like this, the kind you find in real life, can be simply bad in one way or another (they may be based on ignorance, fallacious, expressions of prejudice, etc.) but they can also be revealing. Sometimes people misunderstand their own beliefs, their own values, and can come to see them more clearly when pressed to be consistent and to express more accurately exactly what they mean. (I don't mean to suggest that pressing for consistency can never go too far.) This is an example, I think, of how studying philosophy can be useful. It can help us to understand both ourselves and some of the issues that shape and divide our society.
Studying only sophisticated arguments about, say, abortion might not be especially likely to have this effect, though, because philosophers (and legal theorists, et al.) tend to respond to other philosophers more than to ordinary people. There are exceptions. Ronald Dworkin has studied popular opinions about abortion and euthanasia, trying to make the most sense possible of the various things people say they believe about them. Jonathan Haidt does something like this, too, and somebody or other (Greg Pence?) has gone through public statements by politicians and political groups about cloning and tried to make sense of the moral arguments contained in them. But a lot of philosophy is not like this. Often people will start with far-from-ordinary concerns with personhood and rationality, and go from there. I think we ought to assign wise work to our students, expose them to serious attempts to get at the truth as well as the rubbish we read and hear every day on tv, for instance, but I wonder whether we should also have them study the rhetoric of more ordinary arguments too, to make more explicit the connection between careful philosophical argument and the chatter and journalism that surrounds, and sometimes fills, our heads. After all, often the bad arguments are just corrupted versions of good ones, and perhaps we should point out the corruption explicitly rather than leaving students to see it for themselves.
One source of such arguments would be the students themselves, and in discussing issues like abortion in class we might hope to identify and then work on various kinds of confusion and ignorance. But this limits the class to the opinions its members happen to have. What if none of them is confused in some particular common way that you would like them to understand? For instance, they are all on the same side of the abortion debate but you want them to understand both sides (or more), not only the best version of their own opinions. If part of the point is to help them understand society, maybe opinions and arguments from outside the classroom should be studied. They might even poll other people and then (try to) do a Dworkin on the results. I can see students enjoying this, and college administrators lapping it up (how innovative!, x-phi!, inquiry-based learning!, interdisciplinary!), but perhaps that is precisely what makes me suspicious.
First let me spell out what I have in mind a bit more. It would have at least two parts. One would be looking at specific arguments, such as the ones you hear about making sure that only good people and not bad people have guns (do good people exist as an ontological category?). Another would be looking at a range of opinions to see how consistent they are with each other. This is where conducting surveys might come in. But would there actually be anything to be gained from something like this that students would not get from a normal course in contemporary moral issues? Is there anything obviously bad about the idea? Has anyone tried it and found that it did or did not go well? Or have I failed to describe the idea well enough for anyone to answer these questions? I'm curious to know.
I can easily imagine people I know thinking that this was a great idea. But I also know and respect people who would hate it. If nothing else, I think I could use a reminder about why this kind of thing seems so bad to some people. My own opinion is not so much torn as just in the middle. I want to take my students' minds and acquaint them with those of great philosophers, so I want the attention in my courses to be on both the students and the great philosophers. Is that impossible? Or too indulgent of the students? There is a danger that the course would turn into an exercise in amateur sociology, but if we applied the principle of charity, wouldn't it be philosophy too? That is, even in the parts of the course that focused on the beliefs of ordinary people, we wouldn't only be trying to figure out what they believe or how they think, but what they ought to believe and how they ought to think, given what they say and do, on the one hand, and truth and logic, on the other. I sort of like the idea, but I also suspect I am being, or have already been, corrupted by administrative pressures.