Sunday, August 31, 2008

Poker

“Hey,” Joel [Coen] said, his voice brightening, “didn’t Karl Popper go after Wittgenstein with a poker?”
—From an article in Friday’s New York Times

Saturday, August 30, 2008

The violence of philosophy. (Are some values “objective”?)


Philosophy can be about anything, and so it can be about “values.” We step back from the world of nations and civilizations and inevitably puzzle at differences and tensions that continually arise there. And this brings us to the difficult question of whether and how there can be “absolute” or “objective” values.

Now, if there were such things, and if we were confident that we could identify them, then, naturally, we would “wield” them. Often, having values (having a morality) is a matter of trying to make the world better—and this involves attempting to eliminate or lessen “bad things” in the world: pain, suffering, injustice. And, sometimes, especially bad things seem to be happening far from the borders of our own peculiar (i.e., distinctive) society and its way of life.

OK, but what about moral relativism?

Interestingly, two important 20th Century writers, one a leftist (socialist), the other a rightist (conservative), agreed in their rejection of moral relativism. Or so says David Lebedoff, author of The Same Man. The book was reviewed in yesterday’s New York Times: Two of a Kind:

…[George] Orwell conjured up the nightmarish dystopia of “1984.” [Evelyn] Waugh’s best-known work, “Brideshead Revisited,” was a reverie about a vanished age of Oxford privilege, titled Catholic families, large country houses and fastidious conscience. Orwell was … a socialist with an affinity for mineworkers and tramps. Waugh was a short, plump, florid social climber and a proud reactionary…. Orwell fought on the loyalist side in the Spanish Civil War. Waugh announced, “If I were a Spaniard I should be fighting for General Franco.” … Orwell thought “good prose is like a window pane,” forceful and direct. Waugh was an elaborate stylist whose prose ranged from the dryly ironical to the richly ornamented and rhetorical. Orwell was solitary and fiercely earnest. Waugh was convivial and brutally funny. And, perhaps most important, Orwell was a secularist whose greatest fear was the emergence of Big Brother in this world. Waugh was a Roman Catholic convert whose greatest hope lay with God in the next.

Dissimilar though their causes may have been, Orwell and Waugh were both anchored by “a hatred of moral relativism”; that, Lebedoff claims, is what set the two men apart from their contemporaries. Yet in stressing this similarity, the author elides [omits] a deeper difference. Although Waugh despaired about the future, he saw the Catholic Church as an enduring bulwark against chaos. His moral order was backed by divine authority. Orwell too was a passionate believer in objective truth, including moral truth. But unlike Waugh, Orwell did not attribute transcendent power to the truth; indeed, he feared that it might ultimately prove impotent in history. Hence his terrifying vision in “1984” of a future of totalitarian sadism, of “a boot stamping on a human face — forever.”

…The two men admired each other — up to a point. Orwell thought Waugh was about as good as a novelist could be while holding “untenable” beliefs. “One cannot really be Catholic & grown up,” he wrote. Waugh thought Orwell was as good as a thinker could be while neglecting nine-tenths of reality: the supernatural part. He wrote to Orwell apropos of “1984” that “men who love a crucified God need never think of torture as all-powerful.”….


In class, I often note that those on both ends of the political spectrum do seem to approach the world as moral objectivists—people who suppose that there exists some set of values that apply equally to all of humanity. It is obvious that conservatives do: the more primitive among them often seem to view the beliefs and practices of foreign cultures essentially as 16th Century Europeans (or late 19th Century Americans) did.

Perhaps it is less obvious that leftists/liberals are often entrenched objectivists as well, for surely a willingness to wield “human rights” across cultures assumes that there is some objective standard of conduct and moral belief to which people around the world may appeal!

But I am a philosopher. And so I ask, “OK, what justifies that idea?” I mean, how is this supposed to work exactly? Is it that those nasty cultures that practice female genital mutilation and the like (i.e., violations of human rights) are somehow blind to facts? Do they lack reason? Are the members of such cultures brain-damaged? Did God neglect to send them a Moses?

My guess is that most who read the above review think that they are clear in their minds about the nature of “moral relativism” and its opposite. But I have my doubts.

We need to philosophize a bit.

Just what is a “moral relativist”? Why, it is someone who supposes that morality is “relative.” —Relative to what? The likely answer (coming from most, I suppose): one’s culture.

But the statement is problematic. One can believe that “morality is relative (to culture)” and mean very different things.

One kind of moral relativism is “descriptive” and probably uncontroversial. For instance, in saying that “morality is relative,” one might be saying merely that, as one examines the cultures of the world, one will discover differences, some of them significant. For example, some cultures (our own perhaps) emphasize the notion that individuals are entitled to be left unmolested by others, while other cultures place no such emphasis on the self and its entitlements. Perhaps they emphasize the health and survival of the community.

(More grossly: you’ve got your headhunters and you’ve got your non-headhunters; you’ve got your patriarchal societies and you’ve got your egalitarian societies; etc.)

Now even a moral absolutist like Pat Buchanan embraces this kind of “moral relativism.” Sure, he says, different cultures have different moralities. Who would deny that? But, he adds, some of those cultures are in the dark, morally speaking. Ours (i.e., our “Judeo-Christian” culture) is not.

So, in one sense, Pat is a relativist. In another, he is an absolutist.

Another kind of moral relativism is the odd idea that “rightness” is whatever one’s culture defines as right. For us, female genital mutilation is wrong, wrong, wrong. But, in some cultures, it is right. It is “wrong for us,” but it is “right for them.”

Now, try to resist the temptation to “beg the question” here.

DIGRESSION: among logicians (and verbal conservatives!), “begging the question” does not mean “raising a question”; rather, it means something else entirely: committing the error of assuming the truth of X in one's argument for the truth of X. Suppose that a theist argues that God exists on the basis of references to God in the Bible. But why should we regard the Bible as reliable? Because (we are now told) the Bible is "divinely inspired." But the idea that the Bible is "divinely inspired" is the idea that God exists and inspires the Bible. That is, the theist is assuming the truth of the very claim that he is supposed to be establishing. He is "begging the question." —END OF DIGRESSION.

Some of you may be thinking: “It is ridiculous to suppose that the practice of female genital mutilation could be anything but wrong, wrong, wrong! Anyone who views the matter otherwise is obviously beyond the pale!”

Well, yes, I sympathize. FGM strikes me as “wrong, wrong, wrong” too. But, really, that’s what is at issue here. Are we entitled to regard such practices as absolutely wrong? On what basis exactly? How does the universal “wrongness” of this practice go exactly? These are very difficult questions to answer (for those who do not beg questions, if you can find such people).

It is clear, I think, that, as members of a culture, we are raised to think and behave in such a way that certain practices will seem right and good to us. And it is clear, too, I think, that those same practices will sometimes be regarded with horror by those who are members of other cultures and who are raised in very different ways. And so, clearly, some practice P will seem absolutely right and good to me while, for people in some other culture or cultures, P will seem absolutely wrong and wicked.

But just what sort of thing is being said when someone asserts that “for us” mutilating girls is wrong but “for them” it is right? And remember: this second kind of relativist doesn’t mean merely that, from our perspective, this practice will seem wrong; from their perspective, it will seem right. The latter idea is perfectly intelligible and likely true.

The sort of relativist we are now examining means something else entirely. They mean there is this thing—rightness (or wrongness)—and it is one thing for us; it is quite another thing for them.

What on earth are they talking about?

Many philosophers, I think, doubt the coherence or meaningfulness of such talk. Me too.

There’s a third kind of relativism that is very different from these first two. Perhaps it should not be called “relativism” at all, although I think we can see why it is sometimes called that. It is the view that, when we step back from the various moralities/cultures of the world, and we seek some standard against which to evaluate them (for correctness or truth or “validity”), we seem unable to locate that standard.

Again, I must warn you against question-begging. You may be inclined to say: “Well, obviously, some moralities are barbaric! They are immoral! They offend reason!”

Again, I sympathize with such remarks. I can think of any number of practices of foreign cultures (and a few home-grown ones too) that strike me in exactly that way. But to reason in this way is, I think, to beg the question. It is to assume the very matter that is at issue. Our question is: on what basis may we regard “values” embraced by some foreign cultures as right or wrong, valid or invalid? To insist now that these girl-mutilators are “plainly wicked” is to assume that we have a basis, that we may start from the "fact" that girl-mutilating is wrong.

If we have a basis for supposing that our assessment of girl-mutilating is correct and the others' assessment is incorrect, then just what is it?

“Why, anyone with reason can see that mutilating these little girls is wrong!”

Well, yes, but, obviously, mature adults of these other cultures do not see things in this way. On what basis exactly are we entitled to judge that these people are irrational (or blind or…)?

That’s the question.

“Why, it is self-evident that mutilating little girls is wrong!”

But it is not self-evident to these people in the other cultures. How come?

“Well, that is because they are backward!”

Well, perhaps so, but could you please explain that to us? On what basis may you judge that anyone who does not share your sense of the “self-evident” is backward? Please explain to us how you are not simply exhibiting ethnocentrism.

Such questions are not easily answered (again—among those who refuse to beg questions).

I do not think that philosophers—people who make it their business to deal with these matters—are in agreement about the answers to these questions, or even about whether they can be answered. (Don’t kid yourself; these thinkers aren’t missing something obvious about which you can easily enlighten them.) In my view, we need to take seriously the possibility that we just don’t have—not yet at least—a justification for the assessment that the disturbing foreign moralities are somehow mistaken or benighted.

Perhaps I have lost you. I left you in a state of bewilderment and annoyance a few paragraphs back. If so, try this. Suppose that you encounter a group of Chinese people playing mahjong. You are accompanied by your cousin Ralph, who has never encountered the game. He watches the people playing it. At one point, he declares, “That’s a stupid game!”

Let’s suppose that Ralph isn’t joking. He is quite serious. He is used to checkers. In his mind, checkers is a good game. But this mahjong—to him, it’s simply ridiculous.

Naturally, Ralph is a dolt. “Relative to what,” we now ask him, “is mahjong a stupid game?” There is no standard against which to evaluate mahjong as compared with, say, checkers. What would it be? Obviously, Ralph simply assumes that his perspective and his traditions are the standards for the universe. Well, maybe so. But if he’s going to take that view, he’ll have to defend it, and the chances of his being able to do so successfully seem slim.

Matters are different if we look at somebody’s car and judge it to be small. We can measure the size of cars relative to the objective standard of interior cubic footage and the like. Saying that someone’s car is “small” is not at all like saying that their game is ridiculous.

And so, again, what are the standards against which we can assess some culture’s morality (compared to our own)? Appeals to “self-evidence” are just question-begging. Here as elsewhere, if we allow appeals to "self-evidence," we're going to get nowhere.

I suppose that this third view is sometimes called “relativism” (metaethical relativism) because, in suggesting that there is no basis for distinguishing between “moralities” in terms of their correctness or soundness, it is saying that these moralities are all on a par. None is superior to any other. (Perhaps each is equally ungrounded.) “Relativism”—in some sense—is often seen as the notion that has arisen to challenge or replace the often unreflective idea that “our culture” is the true and correct (or enlightened) one, the superior one, the standard for the universe, provided by God (or reason). Well, this third kind of “relativism,” too, rejects the “superiority” thesis, at least with regard to morality.

But notice what it does not do. It does not endorse the second kind of relativism, which asserts that right and wrong (and not just what is regarded as right and wrong) differs from culture to culture. A person who embraces (perhaps reluctantly) this third relativism might well reject the second kind.

And observe that this third view is not based on the fact (if it is a fact) that different cultures have different moralities (the modest and largely uncontroversial thesis of the first kind of relativism). Nothing much follows from this fact. After all, cultures of the world differ in their views about the physical nature of the Earth. It doesn’t follow that there is no objective standard to judge claims about the nature of the Earth, does it? Nope.

What if the history of the world were different and only one human culture existed on earth? In this imaginary world, there are no different cultures, no different moralities. There is just one.

But, in that world, it would be possible to imagine different moralities. And we could still ask, “On what basis may we judge one morality—for instance, our own—as more correct or valid than these other, imagined moralities?”

And the same problem would arise.

In the grounding department, we seem to have bupkis.

I know that some of you are still thinking, “Well, obviously, any culture that permits or endorses something like female genital mutilation is wrong! It is backward!”

And, again, it isn’t as though I don’t fully share your horror at that practice. But my question to you is this: how can you defend that judgment (without appealing to the worthless standard of “self-evidence,” without assuming exactly what you are obliged to establish—that those who view the practice with horror are correct and those who embrace it are mistaken)?

Wednesday, August 27, 2008

Damn! Boggled again!

I worked in construction one summer when I was 18. I recall encountering various workers working very hard, day after day. They carried, pulled, pushed, hammered, shoveled, etc. They did this for hours on end. I was very impressed. Actually, I was somewhat horrified.

My father was in construction, an electrician. I recall his telling me once that he got lots of exercise on the job, and that seemed right to me, though it also seemed to me that other tradesmen and construction workers worked even harder than the electricians.

But, in general, despite their intense daily exertions, these people didn’t look like they worked out. Most of them were overweight. And they didn’t appear to be healthy.

It was puzzling.

On Saturday, The Guardian’s Ben Goldacre wrote about a recent study that focused on work and exercise (Healthy mind, healthy body). According to Goldacre, two Harvard psychologists studied 84 hard-working female hotel attendants. The psychologists observed that, while working, the attendants were getting lots of exercise. Nevertheless, two-thirds of the workers reported that they did not exercise regularly, and one-third reported that they did not exercise at all.

The health of each worker was carefully measured. The psychologists then divided the workers into two groups: “One group got a one hour presentation on what a fabulous amount of exercise they were getting, how they were meeting and clearly exceeding recommendations for an active lifestyle.” It was made clear to them that, whatever they may have thought, in truth, in the course of doing their jobs, they were burning lots of calories and working lots of muscles.

Meanwhile, the other group went about their work unburdened by this happy information.

Four weeks later, the health of each worker was again measured. The workers who had been enlightened about their actual levels of exercise experienced clear improvement in health. The other workers did not.

“It’s an outrage,” jokes Goldacre. Funny guy.

It is possible, of course, that the study was flawed. Maybe the “enlightened” workers subtly changed their work habits, wielding their vacuum cleaners more quickly, scrubbing toilets more forcefully.

Time will tell.

But what if the study was sound and the workers’ attitudes really made all the difference? What, then, are we to make of this study? Is it that the workers’ exercise was healthful all right but its benefits were blocked by “negative” attitudes? Or is it that the exercise had nothing to do with these health benefits—it’s “positive attitude” that did the trick?

The literature on the placebo effect is disconcerting. It appears to me that, contrary to popular opinion, it is not clear that the placebo phenomenon ever occurs. I suppose that I hope that it does. But, right now, the matter is clouded.

Still, there are many studies of the kind described above. They unsettle me. I sense that massive folly is afoot. But I don’t know what it is.

I can just imagine people fifty years from now shaking their heads at our time and the appalling spectacle of tens of millions of people taking medicines, undergoing procedures, suffering through taxing regimens—all of them inefficacious. All placebos. Powerful ones.

On the other hand, I can equally imagine the future looking back at this dismal earlier period of gross and absurd oversimplification of the subtle complexities of health and disease. “These clueless bastards,” they’ll say, “were so narrow in their thinking that they resorted to the notion that just thinking you’ll get better sometimes causes you to get better!”

“Ha ha ha. Ha ha ha!”

I’ve been following the science of diet and disease for many years now, though not closely. By two or three years ago, I was strongly under the impression that the relevant authorities were fairly sure which diets were “healthy” and which were “unhealthy.” All of the population studies seemed to point to the same culprits: saturated fat, etc.

But then a study came along that questioned all of that. Evidently, it was very impressive in scope and methodology. It was better than the earlier population studies. But it painted a different picture.

People were taking this seriously.

What could this mean? I asked. How can this be?

I hate being boggled.

But I also like it.

Sunday, August 24, 2008

Tomorrow!

Well, the semester begins tomorrow morning. That's likely to kick start this here blog into life again. You wait and see.

The heat makes me stupid.

Saw my brother's brats yesterday. They climbed all over me, as per usual, as though I were a carnival or a magic mountain. Wore me out. But it was good.

Tomorrow!

Monday, August 18, 2008

Try to believe that a monkey is a pumpkin

THIS ONE'S ABOUT RISK—TINY RISKS OF HUGE CATASTROPHES. THAT'S US ALL OVER HERE IN CALIFORNIA, WHAT WITH EARTHQUAKES, FIRES, AND TSUNAMIS. HOW ABOUT DISBELIEVING IN GOD?

I visited my parents today. It turns out my mother experienced a minor emergency last night. She had been feeling poorly for weeks but then, late yesterday, she experienced palpitations. My dad drove her to Kaiser. The two of them remained in the emergency room for several hours.

My mom told me that her palpitations were caused by “drug interactions.” I asked her which drugs had interacted. She said, “It was my antibiotics.” I said, well, OK, that’s one drug, but what about the other one or ones?

Eventually (you have no idea), I gleaned from my parents (aka the Costanzas) that mom has been on thyroid medication for decades, and when she started on a course of antibiotics for a dental problem about a month ago, she immediately became ill. Two weeks later, she started another course of antibiotics—for another ailment—and, again, she felt ill, until Friday, when she broke out in a rash. Then came the palpitations last night.

“How do you know it isn’t just the antibiotics?” I asked. My parents stared at me. My parents (aka the Bickersons) then argued for a while about I-know-not-what, but, in the end, they seemed to say that both of these drugs were in mom’s system when she had the palpitations. So there you are.

Naturally, I explained that, no, in fact, there are several possibilities that could explain the palpitations. First, it could indeed be the interaction between the thyroid medicine and the antibiotics. Second, it could be the antibiotics alone. Third, it could be neither the above-mentioned interaction nor the antibiotics; possibly, coincidentally, something else might have caused the palpitations.

“Well,” said dad, “that’s unlikely.” I agreed, but I said that that didn’t mean that the possibility should be ignored. After all, I said, for all we know, there could be something wrong with mom's heart. Even a 1% chance of that is something you need to consider, I said.

(You might wonder why I didn’t just ask them what the emergency room doctor told them. But you’ve got to know my parents to know how unlikely it is that one will find out what the doctor told them by asking them what the doctor told them. Tea leaves or gopher entrails are much better indicators.)

* * * * *

This reminds me of something I wrote a while back about the danger of a tsunami wreaking havoc on the coast of Orange County (So Cal tsunamis?). I had come across a study that had been done, I believe, for the state (Evaluation of Tsunami Risk to Southern California Coastal Cities (pdf)). According to the study, there is a small but significant chance of a tsunami hitting the OC, not only because of earthquakes, but also because of undersea landslides on the Catalina side of the channel, which is very deep. According to the report, there is evidence of historical tsunamis of serious significance along the OC coast. These, said the report's author, are “infrequent.” Nevertheless, “the hazard posed by locally generated tsunami attack [!] is very serious and should be appropriately mitigated.”

The general point here is that a risk might still be worrisome even when it is very unlikely. It is significant if what could occur would be catastrophic.
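
To see the arithmetic behind that point, here is a tiny illustrative sketch. All of the numbers are invented for the example—none come from the tsunami report:

```python
# Expected loss = probability x magnitude. A huge magnitude can swamp a tiny
# probability, so a very unlikely catastrophe can out-worry a near-certain nuisance.
# (All figures below are made up for illustration.)

def expected_loss(probability: float, loss: float) -> float:
    return probability * loss

rare_catastrophe = expected_loss(0.001, 10_000_000_000)  # 1-in-1,000 chance of $10B in damage
common_nuisance = expected_loss(0.9, 1_000_000)          # near-certain $1M in damage

print(rare_catastrophe)  # 10000000.0
print(common_nuisance)   # 900000.0
```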


This seems to be the core idea of “Pascal’s Wager,” one of the most famous arguments “for believing in God.” Blaise Pascal (1623-1662) argued, not that God exists, but that it is rational to believe in God. There’s a difference.

Suppose that I have been falsely accused of blasphemy, and I am now in the clutches of Inquisitors. Suppose, further, that the Inquisitors will torture me until I both confess to blasphemy and believe my confession (let us assume that I know that they will kill me in any event). Clearly, under the circumstances, it is rational for me not only to confess but also to believe in my confession, if I can manage that, for it is better not to be tortured than it is to be tortured. That is, even though the belief that I blasphemed is false, it is nevertheless rational for me to adopt that belief (if, again, I can manage that).

A similar situation is said to arise re belief in God. According to Pascal, I will either believe in God or not.

I BELIEVE IN GOD. We’ll start with my believing in God. In that case, there are two possibilities, for either God exists or He doesn’t. If God does exist, that’s good, for (reportedly) God, who is omnipotent, is pleased by those who believe in Him. But what if God does not exist? In that case, nothing good or bad happens. One simply has a false belief. One already has lots of those. No big deal.

The upshot: one takes no chances, really, in believing in God. There’s no downside of any significance.

I DO NOT BELIEVE IN GOD. Now consider what happens when one doesn't believe in God. It’s totally different. Again, either God exists or He doesn’t. If He doesn’t, then, if I don't believe in Him, then I have a true belief. So what? I've already got lots of those. I don’t get a prize or anything.

But now suppose that God does exist. Surely there is some possibility that that is the case. Who could deny it? But if God does exist and I do not believe that He exists, then things go very badly for me. The word is that, in that case, I'm in for eternal torment.

The upshot: in not believing in God, one takes one hell of a risk. True, given the poverty of the arguments for God’s existence and the tension in the notion that a perfectly good creator has created an evil-drenched world such as ours, God’s existence may well be judged unlikely. But even a 1% chance of eternal torment is a chance that, as a rational being, one must not take. Meanwhile, there really is no downside to believing in God. So, naturally, the only rational thing to do is to believe in God! (I have taken some liberties here. Pascal emphasized the good of eternal reward, not the bad of eternal punishment.)

Once again, the key motivator here isn’t the likelihood of the negative outcome. One might well argue that that outcome (eternal torment) is very unlikely. The key motivator is the magnitude of the catastrophe involved in the negative outcome.
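
For those who like to see the wager's structure laid out, here is a minimal sketch of it as an expected-utility calculation. The probability and the finite payoff numbers are invented stand-ins chosen only for illustration; Pascal's own argument trades on an infinite reward:

```python
# Pascal's Wager as a toy decision table. The probability and payoffs are
# invented stand-ins (Pascal reasoned with an infinite reward); the point is
# only to show how magnitude, not likelihood, drives the result.

P_GOD = 0.01  # suppose you judge God's existence very unlikely

# Utility of each act under each possible state of the world.
payoffs = {
    "believe":        {"god_exists": 1_000_000,  "no_god": 0},  # reward vs. one harmless false belief
    "do not believe": {"god_exists": -1_000_000, "no_god": 1},  # torment vs. one more true belief
}

def expected_utility(act: str) -> float:
    # Weight each outcome by its probability and sum.
    return P_GOD * payoffs[act]["god_exists"] + (1 - P_GOD) * payoffs[act]["no_god"]

for act in payoffs:
    print(act, expected_utility(act))
# believe: 10000.0; do not believe: -9999.01
```

Swap in any small probability you like; so long as the possible downside is enormous, "believe" comes out ahead on this accounting.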

This makes a lot of sense—until you consider:

First, isn’t God liable to get ticked off if you show up believing in Him, not because He reveals Himself in His fine and wondrous workmanship and love, but because, well, you believe in insurance? Imagine finally meeting Him and saying, "Oh, great! It was a real leap believing in you, dude—I mean, c'mon!—but like I always say, you gotta consider all contingencies! Ha ha ha!"

Second, if this reasoning works for the Christian God, isn’t it likely to work for other religions’ gods too? If so, does reason then compel us to believe in all of the gods? But do the religions allow you to employ a “cover your bets” strategy? I think not. For one thing, gods tend to be “jealous”—at least that’s the word on the street.

Finally, are beliefs really things you can choose? Go ahead, choose to believe that a monkey is a pumpkin, I dare you!

Can’t do it, eh?

Well, I guess if you had enough time to really mess yourself up, you could probably just manage it.

I do wish theists would be clearer about just what they expect from people with regard to belief. Some theists seem to think that nonbelievers are ipso facto sinners. Occasionally, I call them on this. I say, "OK, just what am I doing wrong? It's not like I don't want to believe in God. I do. (I want justice as much as anyone, and, well, there doesn't seem to be justice in this realm.) But, dude, I've got to have a reason, and it's got to be a decent one. It can't be a leap of faith, 'cause I am a rational being, and I am not going to make some totally daffy move that places me in exactly the same category, rationally speaking, as Shirley MacLaine or Kathryn Kuhlman."

Unfortunately, at this point, I usually start getting the pseudoscience shuffle. That's when I'm told that "if my heart is truly open," if I will just set aside my pride, then God will somehow enter me and do the do. Yeah, so, later, when it doesn't happen, they've got that one covered. I'm a sinner; I'm not worthy.

Reminds me of the physicist who was pretty sure that psychic power is real. He found some kids who were reputed to have psychic ability, so he tested them. But no abilities were revealed.

So naturally, he backed off, started changing his tune. Right?

Wrong. He did the shuffle. He had a "Eureka!" moment. He had discovered, he said, that psychic ability is shy. That is, when you test it, it goes away!

So, again, I want to believe in God, but I am aware of no good reason to believe in Him. What am I supposed to do here? Tell me that! And don't be giving me the shuffle!

Sunday, August 17, 2008

News media

The ever-reliable Ben Goldacre has a nasty piece (in yesterday's Guardian) about, well, the moronic media: From the mouths of morons in the media. It's veddy, veddy good.

I was watching Countdown on Friday, with Rachel M subbing for Keith Olbermann. She interviewed the two Georgia hikers who claimed to have the body of a Bigfoot. Check out the video (below); the guy on the left looks like he can hardly keep from cracking up. Rachel also has a rough time keeping her dubiousness off of that sleeve of hers.

Countdown video. [Luddites, click on the pretty red words.]

On Saturday, however, MSNBC reported that two genetic samples taken from the alleged Bigfoot proved to be "from a human" in one case and "from an opossum" in the other. 

Too bad. Like most skeptics, I want to believe!

See 'Bigfoot' fails DNA test

Saturday, August 16, 2008

The supposed war between reason and the emotions

So much of our thinking seems to be defined by simplistic and misleading slogans or caricatures. Consider the way we talk about "Reason versus Emotions." The thinking seems to be that:

• These "things" are distinct (two entities)
• One excludes or corrodes the other (they are enemies; they are opposites)
• Reason is superior to emotion (in the healthy mind/soul, reason rules the emotions—Plato, Kant)

Sometimes, we encounter a kind of rebel position:

• "The heart has its reasons, of which the mind knows nothing." —Pascal

We run with some of these ideas, but should we?

OVERWHELMING EMOTION

The term “emotionalism” is sometimes used to refer to a state in which emotions—anger, fear, joy, etc.—are so strong that reason is impaired. Obviously, this sort of thing does happen. But just what may we conclude from this phenomenon? Certainly none of the ideas above.

Observe that, in recognizing the phenomenon thus understood, we seem to be thinking that emotions are not intrinsically corrosive of reason, for otherwise we would not specify cases in which emotions are “so strong.”

This matches the common-sense observation that people often feel emotions while reasoning perfectly well. If I feel joy when I reunite with my long-absent cat, there is no reason to suppose that my reason is impaired. Heck, it could be optimal.

I sometimes watch those military aircraft shows on TV. Actually, I love 'em. There, I encounter depictions of fighter pilots in highly stressful circumstances—dogfights and “furballs.” These pilots’ survival depends on their employing rational faculties well. Do we think that these pilots are necessarily rationally impaired because of their stress and fear?

Or is it that, somehow, they block out or thwart their emotions in order to function well and thus survive? But they give every indication of not doing that, and, typically, they acknowledge their fear.

In what sense then can emotions be thought of as “opposed to” reason?

UNDERSTANDING EVENTS

Sometimes, our reason—or, more broadly, our understanding—requires, or essentially includes, emotions. If college kid Ivy works on a research paper about the Holocaust, learning for the first time that millions of people, including women and children, were rounded up, horribly abused, and then killed, we expect her to be horrified.

Being horrified, even if it is the sort of “horror” that we might experience as we hover quietly above a book in a library, is an emotional state.

Imagine that Ivy is in no sense horrified. She is reading about and thinking about these events, but she feels no differently than she might feel while thinking about, say, lint. Here, we would, I think, be inclined to say that, since she has no feelings about these events, she obviously does not understand them.

Among the states of incapacitation that we talk about are states of numbness and the absence of emotions. Here, as in so many cases, “being rational” involves, among other things, the presence of feelings and emotions, not their absence. Emotions aren’t opposed to reason; emotions are a part of being rational.


THE LOGICAL MR. SPOCK

Imagine an episode of Star Trek in which Captain James T. Kirk has been captured by the Romulans and it is now up to Mr. Spock, the supposedly “unemotional” half-Vulcan second-in-command, to save Jim from a fate worse than death. Suppose that Spock does his very best to command the ship and its crew and, in the end, he saves Kirk. Later, Spock explains to Dr. McCoy that his efforts on behalf of Kirk were “logical.”

But it is difficult, I think, to picture Spock’s diligence in saving Kirk without attributing to Spock some element of feeling or emotion, at least in a dispositional sense. I mean, if Spock doesn't give a damn about Kirk—and doesn't give a damn about the “Prime Directive” and the Federation either—then in what sense is it “logical” for him to strive and strain to save Captain Kirk? In that case, it appears to be illogical.

YOU'RE JUST BEING EMOTIONAL

Often, foes of abortion or animal exploitation are accused of being “emotional” rather than logical. Most of the time, I think, the charge is confused. Surely the accusers cannot be thinking that the anti-abortionists fail to be logical because they feel emotions, for emotions—e.g., concern for the welfare of poor pregnant women and unwanted children—are as essential to the pro-choice position as they are to the pro-life position. In general, having strong concerns is intrinsically a matter of emotions and feelings, and we certainly do not want to say that having concerns is incompatible with being rational or using reason.

It is easy to anthropomorphize—that is, to erroneously project the capacities and aspects of our own mental life onto beings who are not thus capable. We do that with animals when we suppose that they are indignant or treacherous. We sometimes do it with infants and with fetuses. Now, if Auntie A supposes that an 8-week-old fetus is mentally very much like an infant, and, on that assumption, she is utterly horrified by film of an abortion of an 8-week-old fetus, then she is making a mistake. But her mistake has nothing to do with her emotions. For if she felt the same emotions while observing similar violence done to an infant, we would regard her feelings as appropriate and healthy. The problem here is not that her emotions impair reason; rather, the problem is that she misconstrues the facts, for an 8-week-old fetus does not have a mental life and thus is not at all like an infant. Misconstruing the facts is not about emotions.

SURPRISING EMPIRICAL TESTIMONY

Have you read Antonio R. Damasio’s book Descartes’ Error? Damasio is not a philosopher, but rather a neurologist at the U of Iowa. He researches brain functions. In Descartes’ Error, working largely from his research, he argues that emotions play a central role in human reasoning. According to Damasio, people who, congenitally or owing to injury or disease, cannot feel emotions also cannot make good decisions.


WORDS WITH, OR WITHOUT, EMOTIONAL MEANING

People are sometimes accused of using “emotional” language as though that were illogical or fallacious. Here's a concrete case.

Suppose that Auntie A insists on describing an 8-week fetus as a “baby.” Now she’d never get away with that among a group of pro-choicers, for pro-choicers would immediately ask her to justify her tacit assumption that 8-week-old fetuses are similar to or equivalent to infants. It would quickly become clear that Auntie A is, as we say in logic, “begging the question,” i.e., she is assuming the truth of the very proposition that she needs to establish—namely, that a fetus is the sort of being who ought to be regarded in the same way that we regard a baby. She cannot simply assume the proposition. She must argue for it.

Just who would be led astray by the “tactic” (if that is what it is) of calling a fetus a “baby”? I find it hard to picture this process. But I guess that I can just barely imagine someone, Mr. B, listening to Auntie A and being persuaded of her pro-life position in part because he fails to notice the question-begging in Auntie A’s rhetoric. Mr. B's mistake doesn’t strike me as about his emotions. Someone who falls into Auntie A's trap strikes me as rationally untrained and unsophisticated, but they don't strike me as having defective emotions or as applying emotions "incorrectly."

Just what would it be to have a “wrong” or “illogical” emotion anyway? Laughing at a rock or a bean, I guess. Feeling joy at the sight of a wall crack.

Fallacy!

And, again, simply having emotions is not in itself a fallacy. Far from it.

Sometimes, people use rhetoric that obscures the appropriateness of strong emotions. If Jack Ripper teaches his recruits about the My Lai “incident,” he might manage to leave the impression that it was not a violent event in which many women, children, old people, and animals were slaughtered. It is much more “logical,” it seems to me—it is more accurate and honest—to refer to the event as a massacre, something that by its very nature is violent and dreadful.

Sometimes, being logical is a matter of bringing emotions into play.

ERRONEOUS WAR

I often find students who are hostile to reason, or hostile to the advocates of reason, exactly because they suppose, as we are endlessly encouraged to do, that reason and emotions are like oil and water or are distinct and opposite. Because these students rightly sense that there is nothing wrong with emotions—that, indeed, there is something very important about emotions—and because they accept that reason and emotions are "opposites," they naturally become enemies of reason.

But their assumption that reason and emotions are separate things and that they are opposed to each other is unwarranted, and it is a mistake. And so, therefore, is the war they seem to feel they must wage against reason.

Thursday, August 14, 2008

"It's all subjective," he said

I was watching the Olympic Games today, and these two network talkers—a young woman and a young man—were yacking about some wrestler who, earlier, had been so angered at a judge's call that, during the subsequent medal ceremony, he threw down his bronze medal and stalked off. He had done some other unpleasant things, I guess.

Naturally, as the video played, the network talkers started yapping about the guy's poor sportsmanship. They said the usual things.

And then, as though he were correcting the indecorous athlete, the young guy said, "Well, I mean, it's all subjective." He shrugged.

Huh? The other talker, the young woman, didn't seem to like that. Then the first talker, the guy, said it again. Well, I had to go do something else, so I don't know what happened next.

I wonder what people like this guy mean by such remarks.

In my mind, there's a spectrum of judgments from the "clearly subjective" to the "clearly objective." I judge that the moon is in the sky. That's objective. I judge that plain vanilla ice cream is the best. That's subjective. Nobody's gonna disagree with that (i.e., that the first is objective and that the second is subjective).

But how does it go in the middle area?

First of all, there's no clear line to be drawn between subjective judgments and objective judgments, that's for sure.

When we say that a question or issue is objective, we mean at least that there exists (and there can be applied) some procedure for determining the truth of the matter. That the moon is in the sky is an objective judgment, for there exists a way to determine whether what is depicted by the statement corresponds to reality (i.e., whether it is true). That vanilla is the best ice cream is a subjective judgment, for there exists no test like that and nothing remotely of the sort. In the end, the judgment, if it can even be called that (for only a knucklehead would assert it as a truth), expresses mere personal taste.

Now here's the crucial point: the having or not having of such a test is a matter of degree, isn't it?

There simply is no conceptual backdrop that allows a determination of the truth of my vanilla judgment. (Even if a poll revealed that most people prefer vanilla, I don't think we'd conclude that "vanilla is the best ice cream.") But the same cannot be said for, say, the question of whether the 1st violinist in our orchestra is good. There are criteria, and they are widely recognized. These criteria figure into our shared understanding of what is desired and what is not in the performance of music. Now, this does leave room for some disagreement—for instance, there is no clarity about the relative importance of the different criteria. But it would be absurd to conclude that the judgment that this violinist is "good" is "just subjective." Well, it is somewhat subjective. But it is also somewhat objective. It is more objective than it is subjective.

My guess is that the kind of judgment that Olympic judges are called on to make is usually much more like the "violinist" judgment than like the "vanilla" judgment. If so, we wouldn't want to be running around saying that these judgments are "just subjective."

So what on earth was that shrugging network guy trying to say?

Maybe that the judgments that the Olympic officials are asked to make are not entirely objective, that there are unavoidable elements of subjectivity in such judgments.

I don't see how saying that would help here. I think it might help if the call had been, as we say, "close."

This doesn't seem to be one of those cases.

I'm no expert, but from what I've seen, it seems to me that the thing to say here isn't that the judging is "subjective." No, the thing to say here is that this wrestler flat lost and that he's an asshole, and so he threw a fit.

Sorry about the technical terms. But you get my meaning.



P.S.: "I'm no expert" is quite the understatement. Perhaps Abrahamian had good reason to be angry. I don't know. See this. In any case, my point here concerns the thinking of the network talker, not the conduct of Olympic wrestling judges. Obviously, bias (corruption) is a potential defect of judging (beyond "subjectivity").

P.P.S.: I assumed that it goes without saying that the degree of "subjectivity" in athletic judging varies from sport to sport and that, within a given sport, it varies depending on the kind of call. Obviously, some calls are more subjective than others in judging wrestling. The Olympic judges in this case evidently had available to them (after the ref's crucial call) appeal to an instant replay camera. For whatever reason, they chose not to avail themselves of that option, which may suggest that, in the judgment of the officials, the call was not close.

But, getting back to the subject: I still don't know what the network talker thought he was saying exactly. There are many things he could have said that would have made sense to me, e.g., "when you enter into competition, you (tacitly) accept the refereeing judgments (including appeals) for, after all, in the end, subjective elements in judging are ineliminable, and so we must accept that and go forward." --OK, but one does not express all that by saying, "It's subjective." Besides, in a given case, it might not be subjective at all, as when, say, a wrestler is tossed around the mat like a rag doll or, say, he decides to take a nap during the match.

I still think that Abrahamian is an "asshole," but not because he was angry about the crucial call, whatever it was. Don't know enough to judge that. It is hard to justify his messing up the other medalists' big moment (receiving medals) as he did. If there had been bias or error, it certainly wasn't those guys' fault.

Clueless

I don’t know about you, but I want to look at the universe and have some clue what I’m looking at.

But history throws a wet blanket on people like me. I look back along it and always find the same thing: people who are sure that they understand things. But no. At best they’ve advanced our understanding. But they always seem to overestimate their take on things. Later generations always end up looking back and saying, “God, imagine being that clueless.” But of course GenN+1 is almost as clueless as GenN.

So it’s an endless chain of foolish confidence. That’s what history is. The best we can hope for is advancing our understanding a little bit. But we’re still essentially clueless.

I guess I can see being one of the workmen building a pyramid, but only if I get to see the finished product. Imagine working on a pyramid knowing that the damned thing won’t be finished until long after you’re dead! I don’t know about you, but I’d have trouble getting up in the morning to go to work.

* * * * *
I am impressed by the universe, but, for all that I know, I shouldn’t be.

In his Dialogues Concerning Natural Religion (published posthumously in 1779), David Hume has us imagine that we are examining a magnificent ship. It’s big, it’s complex, it’s beautiful, and it’s powerful.

“Wow,” we say. “Whoever designed and built this ship must be a genius!”

But no:

[W]hat surprise must we feel, when we find him a stupid mechanic, who imitated others, and copied an art, which, through a long succession of ages, after multiplied trials, mistakes, corrections, deliberations, and controversies, had been gradually improving?

Dave has got a point. That ship can be explained without bringing any Poindexters into the story. All we need is a series of Felixes. Or even Vavooms.

But of course the same point can be made about the universe itself. It is grand, complex, impressive. Maybe we’re inclined to attribute all this whiz-bangery to a Creator. The fellow must be a genius!

Nope. For all its grandeur, the universe can be accounted for through the efforts of a very long series of divine knuckleheads:

Many worlds might have been botched and bungled, throughout an eternity, ere this system was struck out; much labour lost, many fruitless trials made; and a slow, but continued improvement carried on during infinite ages in the art of world-making.

Dave’s pal Adam Smith started us down the road to explaining “designed” things without anyone’s intending them. And that brought us to Darwin eighty years later. So, now, we don’t even need a knucklehead to explain grandeur. It can be done with utterly mindless processes.

Can such processes impress us? I don’t know. I do think that's a good question, though. I'll be thinking on it.

Hume goes on to say:

In such subjects, who can determine, where the truth; nay, who can conjecture where the probability lies, amidst a great number of hypotheses which may be proposed, and a still greater which may be imagined?

That’s right, Dave. Once again, we’re clueless.

Tuesday, August 12, 2008

To the undergraduate ear (tiny philosophical adventures)

1. Philosophy is often a matter of stepping back from something and asking fundamental and general questions about it: “What do we mean by ‘a person’?” “What is a ‘law’?” “What is an ‘object’?” I tell my students that such questions are inevitable, unavoidable. I mean, you're not going to make much headway understanding, say, the nature of scientific knowledge unless you delve into the meaning of "law," or "law-hood," as we might say.

There’s no use scoffing at such questions, even though they are highly abstract, seemingly ridiculously so.

One of my old professors used to tell this story. He was taking a philosophy course at a certain Ivy League University—this would have been in the early 50s—and the lecturer was discussing the general idea of a “property.” You know, a quality, a characteristic. (See Properties.)

Philosophers (at least in the Anglo-American tradition) routinely use or borrow from formal logic, and, in logic, when one chooses a letter to represent a property, one chooses among A, B, and C—or P, Q, and R. Don’t know why. That’s just the way it is.

A, B, C...
P, Q, R...

That reminds me. A couple of days ago, my best friend asked me, “How come you called your new blog ‘Contra PalaVerities’?”

“I dunno. I just called it that. It sounds good, I guess.”

“But what does ‘Contra PalaVerities’ mean?”

“Mean? It doesn’t mean anything, dude. It’s just a name.”

Philosophers are the only people guaranteed to understand that answer. (See Rigid designators.)

Anyway, this philosophy professor decided to refer to “some property” using the phrase “A-ness.” The thinking, here, was that “-ness” is a suffix for properties—e.g., redness, tallness, knuckleheadedness, etc.

So he commenced referring to A-ness, writing it on the blackboard. "A-ness, A-ness, A-ness," he said. Unfortunately, to the undergraduate ear, there is no difference at all between the sound of the word “A-ness” and the sound of the word “anus.” So students commenced tittering and murmuring and whispering like they do.

After a while, the prof swung around and asked what all the commotion was about.

“Well, you said ‘anus,’” explained some brave soul, pointing at his rear end.

The prof was mightily embarrassed. But he soon recovered. Naturally, he erased all the A-nesses from the board, replacing each one with a “P-ness.”


2. I’m a bit deaf, owing to an incident that occurred maybe twenty-five years ago. I went out to the desert with my crazy little brother Ray, and when we got there, Ray pulled out a Saturday Night Special. He said, "Let's shoot at somethin'." Well, I was always looking for opportunities to do things with my black sheep bro, so, despite my utter lack of interest in guns (I support lots and lots of gun control), I joined him in shootin' up a cactus or something. (In those days, we didn’t know any better.)

He gave me the little pistol and I squeezed the trigger a few times. Boy did my ears hurt. And they rang. I said, "Is it supposed to be so loud?" Ray laughed.

Well, that was over twenty years ago, and my ears have never stopped ringing.

Excuse me, I've gotta get the phone. —Well, no. That's just the ringing in my ears. Huh? Did you say something? D’oh!

Which reminds me. In grad school, I had a colleague named Fong or Fang. I like to think it was Fang, but I suppose it was Fong. He was Chinese, and, as it turns out, his English was terrible.

Then there's my deafness.

So, we were kind of friends, but I never understood a thing he said. You see, judging by his body language and facial expressions, he was a great guy. I was raised by wolves (i.e., German immigrants), and so body language is important to me. (I sometimes find myself not listening to what a person is saying at all. And yet, in some sense, I am listening intently.)

Well, one day, I asked him what his dissertation was about. We were both in the philosophy doctoral program over there at UCI. And, again, philosophers tend to focus on seriously abstract issues. I think my brother (my non-crazy brother, Ron), who got a doctorate in philosophy from UCLA, did his dissertation on the idea of a "property." Or was it on "somethingness"? Not sure.

So I asked Fong what his thesis was about. Without hesitation, he asserted: "WHAT DUH FUK!"

Huh? What was that again?

"WHAT DUH FUK!"

Ok, Ok. That sounds pretty good I guess.

Well, judging by his expression, he still seemed like a nice guy, so I figured I just didn't understand how that particular phrase could be associated with a dissertation in philosophy. Whadoo I know? Could be, I guess.

A few months later, I found a copy of a draft of Fong's dissertation on somebody's desk. I picked it up. Its title:

WHAT'S A FACT?

I laughed pretty hard about that one, boy.

Somebody get the goddam phone!

(Part 2 adapted from my Help the hearing challenged.)

Monday, August 11, 2008

"Tell me more, Herr Bauer"


Yesterday, I gave a little party for my old friend Ken, who, these days, is a professor of philosophy in the California State University system. He was a student in my very first philosophy class when I started at Irvine Valley College twenty-two years ago. Back then, Ken was kind of wild, a "headbanger," he said. But he seemed to love what I was doing in class—"I couldn't get rid of him," I reminded him—and he was as smart as a whip.

I recall bringing young Ken to hear the communitarian philosopher Alasdair MacIntyre at nearby CSU Fullerton. On another occasion, I took him to a colloquium at the University of Redlands. He and I somehow ended up going to dinner with the honored guest along with the usual departmental suspects. I think Ken got a bit drunk and started throwing bagels around or something. Good grief.

He's still a bit of a wild man, I guess, but he's a damned good philosopher and a great teacher too. I'm proud of 'im.

Yesterday, he arrived with his fiancé, who is a wonderful person, and young Mortimer, who is also a wonderful person, albeit of the canine variety. I was feeling good about all of this wonderfulness. It was a warm summer night, and we enjoyed the quiet and our view of the Santa Ana Mountains.

I've been corresponding with a student from last semester—another big talent, it seems to me, as a writer. He's been in the Army (more than once in Iraq), and now he's going to school and loving it. He has a great little family, and lots of dreams.

He was there, too.

He brought his wife, another survivor of Army life, who teaches and who hopes to continue her education. We're all very pleased that that new "GI Bill of Rights" passed. That will be a great help to so many deserving young people.

I recall being an undergraduate, back in the 70s, hanging out with a group of students that played volleyball on Friday afternoons. One of the regulars was a very well-known philosopher of religion, who loved to be around young people. Often, a gang of us would end up at his funky little place in Laguna Beach. We'd drink and eat and sing and talk about philosophy—and everything else. Very incorrect, I suppose. He had an old Gibson guitar on which he'd sing his "talkin' blues," if we insisted.

I also recall when the famous German philosopher, Carl Hempel, visited the university for a few months—a guest, I recall, of the philosophy club, which managed to snag some kind of grant. He seemed to me at the time to be a very old and very wise man. (I now realize that he was only 73. That ain't so old!)

I recall telling him about Bob Dylan, on a balcony high above Laguna Beach, on a warm summer night. "Tell me more, Herr Bauer," he would say. He seemed genuinely interested.

I do hope that I have been as encouraging to my students as some of my old professors were to me.


Saturday, August 9, 2008

Manifest falsity: why do we embrace stupid ideas?

Aristotle was insightful about moral development. As I briefly explained yesterday, Aristotle views each of us as responsible for his character because he believes that we form our character over time through the actions that we choose to perform. If, for instance, Little Suzie allows herself (or is allowed) always to run away from fearful things and to indulge her fears, she will fall into the habit of running away and feeling fear. It will become a settled disposition.

If, however, Little Suzie is encouraged to stand her ground (within reason!) and to combat her fears, insofar as she succeeds in repeatedly doing so, she will develop the habit of standing her ground despite fear. No doubt, over time, her fear will become less severe. Eventually, she will establish a firm disposition to stand her ground (when appropriate) without undue fear.

I have chosen the example of courage/cowardice, but the analysis works for all virtues and vices. (I’m assuming the existence of “free will” here. Whatever else might be said about free will, it is the sine qua non of morality, of responsibility. Of course, that doesn’t show that free will exists or that the idea even makes sense….)

According to this (highly plausible) way of viewing moral development, each of us is the author of his character, although, naturally, parents can FUBAR the deal. I think it would be absurd simply to hold teen-aged Ralph responsible for his character—say, his tendency to deal with conflict using violence—if virtually everyone in his formative environment routinely dealt with conflict using violence. It seems to me that some people imagine that the nature of virtue (and vice) is like a magical light that glows within all of us—the light of reason?—and it reliably guides us if only we would allow ourselves to be guided.

I can think of no more groundless idea than that one. What on earth would inspire it? Like the idea of a “self” within us that cannot be identified or explained but that is the final answer to who we are, the notion of a “clear light of reason” that burns inside us to direct us, even if we are being raised by wolverines, is nonsense. You might as well believe in ghosts or “chi” or the memory of water molecules. C’mon!

ARISTOTLE AND ADULTS:

The Aristotelian “model” seems right about moral development understood as the progress of a person from infancy to maturity, but it also provides a fine model of moral life for the mature moral agent. None of us is perfect—far from it—and so we need to examine ourselves (at least periodically) in relation to the virtues and vices. Our characters are somewhat plastic; we can slowly mold ourselves in the right directions.

Sadly, our culture has abandoned the language of morality—virtues such as “magnanimity” and vices such as “pusillanimity”—in favor of the vague and stupid word “values.” (As Gertrude Himmelfarb explains, the modern word “values” comes to us from Nietzsche!) But, even so, most of us have a rich enough moral vocabulary to catch the big stuff—e.g., one’s impatience with others, one’s endless concern for oneself at the cost of mindfulness about others’ welfare, one’s tendency to loll about.

Thanks to Aristotle, we know how to address these defects. Just start—and keep—doing the right thing! "Do as the virtuous person does." Maybe doing it will never become “second nature,” but it will surely grow easier.

I have to say, I think this self-molding is a beautiful and admirable process; and it is available to all of us. Or so I tell my students. Mostly, they buy it.

THE DISMAL "SELF-ESTEEM" PHILOSOPHY

Why do we so often say stupid things? I have in mind such remarks as, “You can achieve anything; all you need to do is want it!”

Good Lord!

The remark has an embarrassing feature: manifest falsity. Why say this stupid thing when we can say something true and helpful instead? Whatever our current standing as moral beings (or as athletes, conversationalists, or underwater basket-weavers), we can do better if we try. And if we try hard and persistently, we will achieve improvement (at least in the moral realm), and that will be a great and admirable thing. It will be a good and honest reason to feel good about ourselves.

Telling a kid that he can be a professional basketball player or a major architect if he just wants it badly enough is almost guaranteed to produce one disappointed and resentful kid.

The psychologist Robyn Dawes explains that, years ago, the state legislature in California set up a task force to study “self-esteem” and its supposed efficacy in promoting children's well-being. The team got to work identifying all of the studies that showed that, by fostering “self-esteem” (or banishing low self-esteem), children would be much less likely to use drugs, become pregnant, or become couch potatoes.

But there was a problem. No such studies existed. None. In the end, that silly task force acknowledged as much.

Why do we so often go in for stupid ideas? Children (and people generally) should not simply be encouraged to feel good about themselves. Rather, they should be encouraged to try to do well and to achieve what they can. All of them can do that. And if they do that—that is, if they really try—then, of course, they should, and they will, feel good about what they have achieved.

And if they drop the ball, and they keep dropping the ball? Well, in such cases, the natural feeling can be disappointment, shame, guilt, etc. (Please note: disappointment, shame, guilt—just because these things can be excessive or neurotic doesn’t establish that they are intrinsically so.)

The solution to the pain of being an underachiever or screw-up? Tell them: "stop screwing up, kid. Start doing as you should. That’s the road to self-esteem."

Now, obviously, this should be done with sensitivity (which is not to say that we should treat children as though they were mental patients). And not every kid has the advantage of having been taught what to seek in the first place. Telling a kid who has no conception of what goodness or excellence might be to straighten up and fly right would just be cruel and stupid.

I love myself


SHOULDA, OUGHTA

Philosophers often say, “ought implies can”—that is, the statement “X ought to do Y” assumes that X can do Y. Thinking along the same lines, one might suppose that, if one is called upon to do Y, but one cannot do Y, then one should not be blamed or held responsible for failing to do Y. That is, one should feel no guilt or shame for having failed to do Y.

But that last idea doesn’t work.

I had a friend—I’ll call her Mindy—who had a terrible fear of birds. She was very smart, but she had this one stunning phobia. I won’t go into details.

Now, we all know how one might go about addressing such a phobia. It ain’t rocket science. You’d just expose yourself to birds, degree by degree. Eventually, if you were to persist, you’d achieve some degree of success in quieting your fear.

When I was hanging around Mindy, it would occasionally occur to me that she was a kind of walking time bomb. It did not require much imagination to think of situations in which her (yes) paralyzing fear of birds might prevent her from doing something that desperately needed to be done.

Suppose that a child is drowning in the pond at the park, and only Mindy is there to save him, but Mindy can’t do that because: birds.

Let’s just grant that, as a matter of psychological fact, Mindy’s fear of birds is so great that she literally cannot save the drowning child in this case.

Nevertheless, it is not absurd to suggest that, in this case, she would be somewhat blameworthy. Why? Well, we can appeal to Aristotle’s model. To some extent, Mindy is responsible for her continued phobia. She may not be responsible for acquiring it--it was that nasty crow that landed on her stroller and bit her on the nose that’s to blame. But she should recognize the potential hazard in her continued phobia. Having done that, she should have addressed it (if, that is, she didn’t have other, distracting concerns and obligations, etc.--life gets complicated).

STUDENTS AND ACADEMIC VICE

Most of my students, as students, drop the ball bigtime. Most of them have no conception of how much homework they should be doing. They resent my efforts to clue them in. They seem not even to understand that they should be concentrating in class. Unless I take steps, they just come and go as they please. I spend much of my time trying to get my students to do as they should, or at least to approach that level of commitment to the course.

One is tempted simply to view their conduct as abject ball-droppage.

One thing is clear: the complex of parents, teachers, FaceBook, YouTube, iPod, etc.—the students' Enviro-Parent—has surely dropped the ball bigtime in rearing these kids. For many of them, I wonder not only whether they can do the work for the course; I wonder whether they were ever taught how to be a student, a being who studies.

As Zorba said, it's "the full catastrophe!"

Still, each semester, we try to make the best of it. And, often, at the end of the semester, we have every right to feel good about ourselves.

"Teach me to dance!"


Friday, August 8, 2008

Acting just like a vicious human (some notes on Free Will)


A peculiarity of our language—or of our thinking—is that, when we find someone’s behavior to be especially heinous, we often call him or her an animal. “He’s no more than an animal,” we’ll say. Or he’s a “beast” or a “brute”—more words meaning “animal.”

Usually, the “beastly” behavior we are condemning cannot be found among animals—i.e., among nonhuman animals, for, obviously, humans are animals too. Isn’t it clear that human beings do terrible things that are never done by animals? Think of torture, genocide, parking one's car on the lawn.

How come we don’t accuse Dick Cheney of behaving like some kind of goddam human?

Another peculiarity is our tendency to speak of “vicious” animal attacks. Well, the one thing that an animal attack cannot be is vicious. My Merriam-Webster dictionary defines “vicious” as “having the nature or quality of vice or immorality….” The word “vicious,” of course, comes from the word “vice.”

Consider the case of a cougar attacking a hiker. In the cougar’s mind, the hiker invaded her territory. So she attacked the invader. Was the cougar behaving immorally? Did the cougar exhibit vice?

Of course not. She did not do these things because cougars are not (as we say in philosophy) moral agents. That is, they are not beings who are capable of moral or immoral behavior. For one thing, they have no understanding of right and wrong.

Human infants are not moral agents either. They matter morally—it would be murder to kill an infant. But they are not moral agents, for they are incapable of right or wrong actions.

I know a man who routinely attributes moral agency to nonhuman animals. For thirty years, he has carried on a war with gophers on his land. He hates the little buggers. Sometimes, he’ll explain how a gopher could have dug in direction A, but instead he dug in direction B, evidently for the sole purpose of antagonizing him. He oozes contempt for these gophers. Do you know such people?

Sometimes I kid him. I say, "Well, those gophers were here before you were. If an all-out war breaks out here, I'm sorry, but I've gotta take the gophers' side."

There seem to be many TV programs devoted to describing animal attacks on humans. Invariably, the victim of the attack will at some point explain, “I don’t blame the bear (or weasel or jackalope). She was just protecting her young.”

But what if the bear was just hungry? I guess they’d condemn the bear in that case. “Goddam bear.”

But to do so is to engage in anthropomorphism, “the attribution of human characteristics or behavior to a god, animal, or object” (according to my Mac’s dictionary). Bears know no morality. They are not the kinds of beings who can be immoral or blameworthy. If a bear eats someone because she’s hungry, she isn’t vicious. She’s just doing what bears do because they’re built that way.

I recall an episode of a TV legal drama in which a gentle man becomes violent and starts to hurt people. He’s arrested. Eventually, doctors discover that he has a brain tumor that has somehow altered his personality and caused him to be violent and erratic. I thought, “yes, I get it. This tumor changed the guy into someone who could not help having uncontrollable violent impulses.” And so he was violent.

I figured, “great, they’ll just remove the tumor and send him home.” But no. In this drama, the case became a real puzzler. People debated the man’s responsibility for his attacks. In the end, a judge decided to convict the man of his crimes and send him to prison. As I recall, she said something like, “if we don’t hold this man responsible for his actions, how can we hold others responsible for theirs?”

To me, this just seems confused. In the story, we were led to believe that anyone who had such a tumor would be caused to do what this man did. Suppose that’s true. Then it wasn’t the man but the tumor that accounted for his violent behavior. Indeed, in the story, the man had a long history of being moral and nonviolent. So, if we need to subtract the tumor from the man, we can do that, and when we do, we have a man, a moral agent, who is not violent or immoral.

It seems to me that many people refuse to reflect about when people are or are not responsible for their actions. Often, they seem to gravitate to a very simple picture: you have a man and you have his action. He is responsible for his action. End of story.

I’m amazed that such people don’t regard gophers and bears and infants in the same light.

Maybe they do. Good Lord.

"Goddam brat!"

• • • •

I have never understood the concept of free will. I want to understand it. I want to believe in it. But what sort of thing is it supposed to be?

Among those who believe that we humans are "free agents" and that we possess a will that is free, some will insist that their conscious decisions are the occasion and the phenomena of freedom.

If so, then a recent study should give them something to fret about. Several months ago, the journal Nature Neuroscience published a German study that involved people performing actions while hooked up to some kind of MRI. The study seems to show that, several seconds before subjects make a conscious decision, the "brain" has already made one.

From the Boston Globe (Free will? Not as much as you think):

"It seems that your brain starts to trigger your decision before you make up your mind," said the study's lead author, John-Dylan Haynes of the Max Planck Institute ... in Germany. "We can't rule out free will, but I think it's very implausible. The question is, can we still decide against the decision our brain has made?"
...
Employing both functional magnetic resonance imaging and pattern recognition statistical techniques, the researchers were able to predict which button people would choose before they made their conscious decisions—as much as 10 seconds early, "an eternity," Haynes said.

Haynes believes that delay suggests the absence of free will as most people define it.

The physical brain apparently starts shaping the decision long before the conscious mind does. He speculated that the frontopolar cortex encodes the decision, while a section of the parietal cortex stores it and coordinates the decision's timing.


The study does not settle the issue of whether there is free will. But, for those who suppose that their conscious decisions are the captain of their ship, things aren't looking very good right now.

• • • •

If Haynes is right, then conscious decisions are epiphenomena—that is, each is "a secondary effect or byproduct that arises from but does not causally influence a process" (as my Mac dictionary would have it).

Think of a futuristic robot (Ralph) that seems and behaves exactly like a human being. Ralph is a deterministic mechanism—that is, like my Mac (more or less), Ralph is a system in which nothing happens by chance; everything is programmed and caused.

Ralph is not, however, sentient (i.e., he has no mental life).

Suppose that a clever engineer finds a way to modify Ralph so that he does have a mental life. Thus, now, he no longer only seems to think and feel; he really does think and feel.

But Ralph's new feature amounts to a series of epiphenomena. That is, the thoughts and feelings and decisions of Ralph's "mind" do not cause anything. In fact, the older mechanism within Ralph causes his thoughts. And so Ralph imagines that his decision to lift his hand caused him to lift his hand, when, in fact, his mechanism caused both the lifting of his hand and his "decision" to do so.
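If it helps, here is a tiny toy sketch of Ralph's predicament, in Python. It is a made-up illustration, not anyone's actual theory; the class, the names, and the numbers are all mine. The point is only structural: the mechanism settles what Ralph does, and it also generates a "felt decision" that nothing downstream ever consults.

```python
# A toy sketch (a made-up illustration, not anyone's actual theory) of Ralph's
# predicament: a deterministic mechanism settles every action, and it also
# produces a "felt decision" that plays no role in settling anything.

class Ralph:
    def __init__(self, seed: int):
        self.state = seed            # the "older mechanism": deterministic internal state
        self.felt_decisions = []     # the epiphenomena: generated, never consulted

    def _mechanism(self) -> str:
        # Only the mechanism determines what Ralph does next.
        self.state = (self.state * 1103515245 + 12345) % (2 ** 31)
        return "lift hand" if self.state % 2 == 0 else "keep hand still"

    def step(self) -> str:
        action = self._mechanism()                             # this causes the behavior
        self.felt_decisions.append(f"I decided to {action}")   # this causes nothing further
        return action


ralph = Ralph(seed=42)
actions = [ralph.step() for _ in range(3)]
print(actions)                 # what Ralph actually did
print(ralph.felt_decisions)    # what Ralph "decided," after the fact, as it were
```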

Does Ralph have a free will?

Blade Runner: "I've seen things"


Aristotle argued that we are responsible for our character because our character forms through habit. A young person who continually "does as the courageous person does" will grow accustomed to behaving in that way; he will develop a firm disposition to stand his ground. But it works for vice, too. A young person who is allowed to (and allows himself to) run away whenever something fearful arises will form a disposition to run away. His cowardice—his disposition to run away—will be of his own making.

As far as it goes, this makes a great deal of sense and is, I think, insightful.

For those who are inclined to press the question of how it is that a person is responsible for his actions, this account promises some answers. If John runs away like a coward, we can attribute that to his cowardice (his disposition to run away), and since he chose the actions that, repeatedly performed (owing to his choices), led to the formation of that disposition, he is responsible for his cowardice.

But this does seem to raise a puzzle. We have asked, why is John responsible for his cowardice? The answer: he chose the actions that led to his cowardly character.

OK. But what is the nature of the "self"—the John—who chose those actions that are now at the bottom of our explanatory scheme? We cannot appeal to his character to explain those choices, since we have already appealed to those very choices to explain his character.

So just what is this self then? Isn't it something that reveals character by its choices (of actions)? How could it be otherwise? But now we're just going in circles, for we are appealing to character to explain actions that explain character.

Suppose that one ends one's efforts to explain moral responsibility there. If so, then I suppose that one must regard the "self" that makes the choices (the choices that develop the character that issues in actions for which one is responsible) as a kind of "character" that just comes into being: a given, a brute fact. The self is not responsible for the "character" of this original self. It just is what it is.

Does it make sense to view responsibility in this stark way? If someone is born bad, does it make sense to hold him responsible for being bad? (Surely not, unless we have an account of how this entity that is born is the way it is because of some process that occurred previously [now we're getting metaphysical!] that includes some way that the self is responsible for how he ends up being. We seem to be in an infinite regress here.)

We do seem to think in that stark way sometimes. We are told that child molesters are almost always the product of child molestation. When I hear this, a part of me thinks: it is not at all clear to me that we can hold someone responsible for their pedophilia if that aspect of their personality is virtually guaranteed by their having been molested as a child.

And yet we seem not in the slightest bit reserved about condemning and loathing pedophiles.

Same goes for sociopaths. If they are born that way (as we seem to be told), how are they responsible for their sociopathic ways? How does this work exactly?

And if we are willing to hold Dexter responsible for just arriving in this world a sociopath, why not hold animals responsible for what they are and what they do? And infants?

(Compare with G. Strawson: Living without ultimate moral responsibility.)

Thursday, August 7, 2008

Human folly is so very entertaining


After many years as a teacher of “critical thinking” (I hate that phrase, but what are you gonna do?), if one thing is clear to me, it is that it’s easy to make mistakes when judging about causation, i.e., X causing Y.

Most people grossly underestimate these difficulties.

Not long ago, I spoke with a highly intelligent colleague (known for being sharp) who explained to me that he takes his horses to a chiropractor. I gave him a look. He said, “No, really. I’ve taken horses to this guy and the adjustments really work.”

If there were a course called “Causation 101,” among its targets would be the old “it works for me” (or “it works for Trigger”) rationale. No doubt it would be discussed along with the famous “post hoc ergo propter hoc” (after this, thus because of this) fallacy.

Just because A happens (Trigger gets tweaked) and then B happens (Trigger eats more hay), it doesn’t follow that A caused B. Right?

Yeah, but there’s a pattern. A then B, A then B. What about that!

I won’t launch into the whole lecture. I just want to make the point that, unless you study the fallacies concerning causation carefully, you’re very liable to commit them.
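Still, here is a quick picture of why "it works for Trigger" is so seductive: a little toy simulation with invented numbers (no real horses were studied). Suppose a horse's soreness just drifts up and down at random, and suppose the owner calls the chiropractor only on unusually bad days. Bad days tend to be followed by better ones no matter what anyone does, so the "treatment" looks like a roaring success.

```python
# A toy simulation (invented numbers, no real data) of why "it worked for Trigger"
# is so seductive. Soreness drifts at random; the owner "treats" only unusually
# bad days; soreness then tends to fall back toward average all by itself.

import random

random.seed(0)

def soreness_today() -> float:
    # Day-to-day soreness is just noise around an average of 5 (on a 0-10 scale).
    return max(0.0, min(10.0, random.gauss(5, 2)))

days = [soreness_today() for _ in range(10_000)]

bad_days = 0
improved_after_bad_day = 0
for today, tomorrow in zip(days, days[1:]):
    if today >= 8:              # the owner only calls the chiropractor on bad days...
        bad_days += 1
        if tomorrow < today:    # ...and tomorrow is usually better anyway
            improved_after_bad_day += 1

print(f"'Improvement' after a treated bad day: {improved_after_bad_day / bad_days:.0%}")
# Typically prints something like 90%+, even though nothing causal happened at all.
```

None of which shows that the adjustments do nothing; it only shows that "I tried it and Trigger got better" can't settle the question.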

The truth is that the history of humanity is a history of people coming to believe things—ideas about causation included—on poor or nonexistent grounds. Example: two hundred years ago, doctors purged and bled patients. In fact, they were killing them. But this didn’t lead to much skepticism about purging and bleeding.

History (including the present) is littered with examples of poor causal thinking and unfortunate and erroneous causal believing.

But here’s the kicker. It’s not as though someone has gone through society with some kind of anti-virus program, spotting and fixing all the causal goof-ups. We are tempted, I think, to view society as a place in which, more or less, reason rules, and glaring folly has largely been identified and discarded. But nothing could be further from the truth. I often tell my students, “Go anywhere and point in any direction. It would be amazing if you were not to find yourself pointing at some instance of routine foolishness.”

In a way, journalism amazes me by how bland it usually is. It seems to me that one need only enter some random building and one will find something absurd or alarming.

Imagine that we point at the world of sports and athleticism. There, one hears endlessly about the importance of “stretching.” We listen to this and think, “Guess so. They’re the experts.”

But if people can find a way to puke and bleed themselves to death while supposing they’re on the road to health, then they sure as hell are capable of believing just about anything about stretching. I mean, it’s not as though anybody’s really studied the matter.

In yesterday’s New York Times, Filip Kwiatkowski asks Is Stretching All It’s Cracked Up to Be?

Kwiatkowski first notes that trainers (et al.) are very passionate about the issue of stretching. That is, they are very sure that stretching is important—something you'd better get right.

But, in fact, advice about stretching is remarkably different around the world: “In Norway, people stretched after they exercised; in Australia, they stretched before exercise.” Further, the nature and purpose of stretching differs. In some parts of the world, athletes stretch to prevent soreness, but that’s not why people stretch in this country.

But it sure is important.

Something doesn’t add up here. If stretching is so all-fired important, how come people don’t agree on when to stretch and how to stretch and why to stretch? Tell me that!

Well, two large studies have been launched to try to get to the bottom of this. You can read about them in the Times.

I have my doubts about these studies. I don’t think this “stretching” business will get cleared up right away—not based only on two studies, even if they are large (they are). One hopes that further studies will be done and results compared. In time, the truth will emerge.

In the meantime, think of all the fallacies that will be committed, all the conclusions jumped to, all the misconceptions and passions generated.

Humanity is so very entertaining.

Gotta go. I need to take a quick walk. It's good for the digestion. Everybody knows that.