Saturday, June 27, 2009

Folly is manifest

manifest: clear or obvious to the eye or mind
     I am embarrassed and troubled to admit that most people mystify me. For instance, I seem to be surrounded by people who view the having of religious “faith” as some sort of virtue.
     But, on its face, faith looks like a vice.
     My Merriam-Webster dictionary offers several definitions of faith, including this one: “firm belief in something for which there is no proof.” That’s the sense of the word that interests me. Many people seem to think that it is good and meritorious to have faith in God and God’s big project: to believe in this stuff despite the absence of proof or even strong evidence that any of this stuff is real.
     Now, most of the time, we are inclined to ridicule people who believe in this way. A dozen years ago, those Heaven’s Gate people believed that the time had come to leave Earth, since it was about to be “recycled.” They left Earth by killing themselves, taking drugs and putting plastic bags over their heads. As near as I can tell, they are now just plain dead and the Earth is just plain unrecycled.
     Why did the Heaven’s Gateians hold those beliefs? Not sure, but the zany convictions of HG leader Marshall Applewhite had something to do with his alleged near-death experience after a heart attack.

     “That’s silly,” we say. “You’re bound to be a bit addled while recovering from a heart attack!”

     But when more ordinary people explain their religious beliefs—e.g., belief in Christ as our Lord and savior—they don’t seem to have anything better to offer. They’ll refer to feeling transported while singing hymns at church or experiencing some kind of transcendent moment whilst looking into the night sky. (I can relate to that one.) Stuff like that.
   “Well, why then (I ask the Christian) should you feel any more confident in your beliefs than that Applewhite guy? How are you different from him?”

   “Shut up. Applewhite was a nut. HG was a cult.”

   “Yeah, but that’s just about membership size, right? There are lots of people like you and there are few people like Apple Boy.”
     These conversations never seem to get me anywhere.

     Yesterday’s Schott’s Vocab (Weekend Competition) is soliciting definitions of the word “faith.”
     But wait. Words mean what speakers mean by them—a meaning that survives (for as long as it survives) because it is useful to us. And if there weren’t general agreement about word meanings, language wouldn’t work.
     So what’s this business about asking people for “their” definitions? That’s like asking a guy how he uses his chair or his comb. I don’t ask such questions.
     It would make more sense to me to ask whether the meaning of a word is such that we ought to have some important belief that uses that word. Thus, for instance, given that “faith” is believing without evidence, we can ask: should one ever have beliefs based on faith?
     Maybe some who answer the “what’s your definition?” question really mean to answer the latter question. Dunno.

     Here are some entries to Schott’s solicitation:
• The suspension of reason and rationality for a dream.
• Faith is knowing something should be true, being certain it is, and having no insight into one’s collisions with reality.
• Faith: Security in numbers.
• Faith is the tenacity with which a belief or myth is adhered to, regardless of any proof for its veracity.
• Faith is a socially acceptable insanity in the same way that alcohol is a socially acceptable drug.

     Since Schott’s readers are ipso facto New York Times readers, you’ve gotta expect entries that are witty or that are show-offy or that are snidely opinionated (I skipped over some of the worst offenders in this regard).
     So most of these definitions are just what we’d expect, I suppose.
     The stuff about a “collision with reality” is funny, I guess. (To me, that phrase is always funny.) Most of the rest strike me as little more than variations on the dictionary definition, plus some 'tude.

     For me, two of these stand out a little bit. “Faith,” says one wag, is “security in numbers.” I suppose the point is that most people manage to avoid being embarrassed by their failure to apply minimum standards of rationality to their religious beliefs because such beliefs are so “normal” and time-honored and thus they must be true--or at least it wouldn't be too embarrassing if they turned out to be false.
     My own view is that human beings are capable of almost anything (i.e., any atrocity or idiocy), as long as it can be said that “we’ve always done things like this.” Even now, tradition and normalcy are much more powerful than reason. It's pretty disheartening.
     The last definition is somewhat interesting: “Faith is a socially acceptable insanity in the same way that alcohol is a socially acceptable drug.” This definition strikes me as more earnest than clever.
     I guess it’s pretty obvious what the definer means by calling faith “insanity.” Faith is some sort of extreme rational error or failing. I get it.
     But it’s one that is somehow acceptable. Yes, I get that too.

     Like drinking alcohol? Here, I get lost. I suppose the obvious points to make about alcohol are that (1) you shouldn’t drink too much of it too often and that (2) it is silly to prohibit other drugs but not alcohol.
     But our definer seems to be thinking (am I wrong?) that drinking alcohol per se is some sort of madness, one that is tolerated.
     –A teetotaler, I guess. I’ve known people who seem unwilling to recognize that one can enjoy alcohol without abusing it. Is that who we’re dealing with here? Wadda nut.
     I’m sympathetic to this “definition,” but I suppose I’d prefer to use another example: “Faith is a socially acceptable insanity in the same way that the notion that we have a right to bear arms is socially acceptable [insanity].”
     The problem here is that one is trying to make a somewhat controversial point by relying on another controversial point.
     Probably, the core of the point is just that, if one steps back to take a clear and objective look at “us,” one cannot avoid noticing that this “faith” thing that we do, like a few other things that we do, is plain hogwash. It's indefensible.
     Aha! Like many insights, this one turns out just to be a variation of the Emperor’s new clothes allegory.

     If I were to write a book (don’t worry, I won’t) that captures Roy’s wisdom, it would include a handful of propositions, one of which would be: most folly is manifest.
     (But how can that be, Master?!)
     Yes, yes, exactly. Now run along and think about that, Grasshopper (and stay out of that damned closet!).

Friday, June 26, 2009

Snap out of it!

     Michael Jackson, an enormously talented and influential pop star of mixed and increasingly dubious accomplishment, has died a premature and miserable death, as anybody with half a brain thought he likely would. And so now he’s gone.
     No doubt this is a terrible time for his family and friends.
     The rest of us: surely we can see that his death deserves little attention. It isn’t particularly meaningful or important, now is it?
     Snap out of it!
   We cannot say that we are being fooled. It is not entirely inaccurate to say that we are being "informed." … The efficient mass production of pseudo-events—in all kinds of packages, in black-and-white, in technicolor, in words, and in a thousand other forms—is the work of the whole machinery of our society. It is the daily product of men of good will. … The people must be informed!
– Daniel Boorstin, 1961

Wednesday, June 24, 2009

How can one be responsible for the self that is responsible for the choices that create the self??

What or who is this self that does the choosing of actions that leads to the formation of one’s character? One might suppose that, for Aristotle, this self has no moral character (it is the pre-character self, at least in the normal sense of character) and thus this being’s choices do not reflect the self’s character at least in the early stages of action-choosing (i.e., moral development). How then explain the “badness” or “goodness” of this self who, it seems, Aristotle holds responsible for the course of actions taken, leading to a particular character? (He seems to be assuming that this self is responsible for his or her choices.) Taking his view, we seem compelled to recognize a kind of moral agent sans moral character that nevertheless makes choices that can be moral or immoral. Are we to imagine some Calvinistic fated self who simply comes into being disposed to right- or wrong-choosing as moral development commences? (This is absurd/appalling.) Or does this initial self in some sense “choose” its “character*”—because of a peculiar proto-character not accounted for by my later action-choices that yield the dispositions that are my ultimate character?  
– RB, 4/18/16

I have always been interested in the free will debate, though, years ago, I decided essentially to stop thinking about it because continuing to do so seemed at the time to threaten my ability to function as a normal human being (an ability that was already pretty feeble in my case).

(My "decision": there’s a kind of irony in this that I won’t go into but that you can guess.)

Among the views that I have always found attractive is the notion that the very idea of free will is incoherent. I suppose I understand those who insist that free will is a possibility despite the correctness of some sort of mechanistic view of nature, but it has always seemed to me that these thinkers have a conception of free will that is foreign and that, in any case, is not the notion I came to the philosophical table with years ago. To me, these philosophers seem to come into the room wanting to play chess, whereupon they commence bringing out baseball gear. WTF?, as they say.

On my (pre-reflective and reflective) notion of free will, it is, of course, a kind of causation: the causation of a "self." But I can’t make any sense of this "self" if it is viewed à la the Libertarian, in part because that view seems unable to explain how the self is in some sense responsible for itself. The Humean view (alluded to above) seems even worse in this regard.

How can you make sense of the self as something for which one is responsible--and it does seem to me that this is a requirement for any sense of “freedom” and “responsibility” that is even remotely like my pre-reflective notions? The enterprise seems unpromising in the extreme. At least to me.

Today, I happened to come across a reference to an essay by Martin Heisenberg (yes, the son of the great physicist) that appeared in a May issue of Nature. Heisenberg, it seems, was defending free will, but the blogger I was reading (who offered a critique of H's article) rejected Heisenberg’s argument owing (he said) to Heisenberg’s conceiving free will as “randomness.”

Following that up, I ran into the view called “Pessimism.” I had come across it before. It is associated with the contemporary philosopher Galen Strawson (yes, the son of the great philosopher Peter S).

To make a long story short, S’s “pessimism” sounds right to me.

Here’s a brief Wikipedia account of that view:
The contemporary philosopher Galen Strawson agrees with Locke that the truth or falsity of determinism is irrelevant to the problem. He argues that the notion of free will leads to an infinite regress and is therefore senseless. According to Strawson, if one is responsible for what one does in a given situation, then one must be responsible for the way one is in certain mental respects. But it is impossible for one to be responsible for the way one is in any respect. This is because in order to be responsible for the way one is in some situation "S", one must have been responsible for the way one was at "S-1". In order to be responsible for the way one was at "S-1", one must have been responsible for the way one was at "S-2", and so on. At some point in the chain, there must have been an act of origination of a new causal chain. But this is impossible. Man cannot create himself or his mental states ex nihilo. This argument entails that free will itself is absurd, but not that it is incompatible with determinism. Strawson calls his own view "pessimism" but it can be classified as hard incompatibilism.

The view is explained in more detail in this entry of the Routledge Encyclopedia of Philosophy.

I’ve got to run, but I wonder what others think about this view.

* * * * *

Just got back from my lunch date with my pal Jan. I want to return to the earlier matter.

I recall studying Aristotle’s ethical views in graduate school. Aristotle says that one is responsible for one’s moral character, for one’s moral character is (more or less) one’s set of dispositions, and those arise via the actions one chooses to perform which, when repeatedly performed over time, produce a habit or tendency—a second nature.

This is, I think, more or less correct and importantly so. For what it is worth, I try to live by this view in my own "moral saga," at least as far as my own moral agency is concerned.

Nevertheless, it has always seemed to me that something about this account doesn't add up, for, though it is surely correct that one’s moral personality is a matter (largely or entirely) of one’s set of dispositions (to do things, want things, etc.) and that one’s dispositions arise (largely) via repeated actions that one “chooses” to perform, this now focuses all explanatory attention on that self who makes those choices. Who’s that guy? Where did he come from? How do we account for his fortunate (or unfortunate) choices or decisions to act as he does (repeatedly over time)? One cannot return to the “habituation” account--that this self is the product of earlier choices and processes of habituation leading to his moral character--for one then enters a vicious circle or an infinite regress.

But then—what?

I have never understood the familiar notion—familiar, at least, among those who watch TV shows and movies—that criminal psychopaths are evil and deserve to be punished for their terrible crimes and even for being the monsters (in intention and desire) that they are even before they’ve acted. Always in the background, it seems, is the further assumption that they are “born,” not made. (Am I wrong? This does seem to be the thinking, at least sometimes.) Evidently, in the minds of these screenwriters--and of the audience--this further idea does not erode the notion that the psychopathic monsters are blameworthy for their evil. But surely if criminal psychopaths are born as they are (lacking certain capacities essential for empathy, etc.), then they are no more responsible for their criminality than wolves are responsible for their predation.

Well, I suppose one could argue that, though they lack the usual emotions, etc., they still have some sort of capacity to choose what is right. That is, their psychology, being what it is, does not guide them or incline them to do (and want to do) what is right (and eschew what is wrong) as normally occurs in people. But that does not mean they cannot learn what is right and wrong and act accordingly.

I guess I will have to enter into (not today) why this defense strikes me as ridiculous. It seems to me that having those "normal" desires and aversions (etc.) is very, very important in being a moral agent--the sort who can be held responsible for what he does, etc. It does seem to me to be a leap to suppose that, since "obviously" one can "understand" that stealing is bad and charity is good, then one is off and running as a moral agent with all the benefits and burdens. But no: it strikes me as more natural to suppose that one lacking the usual capacity for empathy (etc.) will not likely get the point of the moral game. It will never "take" with him. To lump such a person together with "normal" humans with regard to the availability of morality just seems absurd to me. It strikes me as being as absurd as expecting a blind person to enjoy the shape of a building that he cannot see but that, nevertheless, can in some sense be "explained" to him.

Well, I'm going to think more about this.

(Note: it would be a mistake to infer from my position that I am in favor of letting dangerous psychos run loose. One can suppose that such people are dangerous and should be "restricted" without also supposing that they deserve to be thus restricted.)

I think I should add that, for me, the point I am making is a matter of justice. I feel that a kind of gross injustice is involved in blaming and punishing beings for being a certain way—ugly, or monstrous, or unpleasant. It angers me. Something in me wants to rise up against it, as one would rise up against gross racial discrimination or bigotry toward, say, short or fat people.

In the end, it seems to me that the Humean thinkers (compatibilists and soft determinists alluded to above) somehow seem to be fine with that injustice (as I would put it), for they do not deny that, given the mechanisms of nature, Joe Schmo had to have the personality and character that he has--this state that led to his decision, in that fateful moment, to kill Sam the innocent shopkeeper. And (they argue), since nobody slipped Joe a mickey or monkeyed with his brain or plied him with threats (etc.), and he “simply” decided to kill Sam, his action was free, something for which he is responsible.

I just don’t get it.

Friday, June 19, 2009

Marilynn Marchione, we salute you

     I don’t often get a chance to praise the work of journalists, but it appears that AP reporter MARILYNN MARCHIONE is posting a series of excellent articles about alternative medicines, which, she says, are increasingly popular despite the growing evidence that they just don’t work.
     Here are links to a few of her recent pieces:

Alternative medicine goes mainstream
     Ten years ago, Congress created a new federal agency to study supplements and unconventional therapies. But more than $2.5 billion of tax-financed research has not found any cures or major treatment advances, aside from certain uses for acupuncture and ginger for chemotherapy-related nausea. If anything, evidence has mounted that many of these pills and therapies lack value.
     Yet they are finding ever-wider use….
$2.5B spent, no alternative med cures
…Echinacea is an example. After a large study by a top virologist found it didn't help colds, its fans said the wrong one of the plant's nine species had been tested. Federal officials agreed that more research was needed, even though they had approved the type used in the study….
60% of cancer patients try nontraditional med
     Some people who try unproven remedies risk only money. But people with cancer can lose their only chance of beating the disease by skipping conventional treatment or by mixing in other therapies. Even harmless-sounding vitamins and "natural" supplements can interfere with cancer medicines or affect hormones that help cancer grow.
     Yet they are extremely popular with cancer patients, who crave control over their disease and want to do everything they can to be healthy — emotional needs that make them vulnerable to clever marketing and deceptive claims. Studies estimate that 60 percent of cancer patients try unconventional remedies and about 40 percent take vitamin or dietary supplements, which do not have to be proved safe or effective and are not approved by the federal Food and Drug Administration.
Cancer patient learns herbals can interfere

Thursday, June 18, 2009

On the alleged importance of presenting "opposing points of view"


Yesterday, John Dean, author of the Watergate exposé “Blind Ambition” and a guy who nowadays appears on liberal TV and radio shows, gave a lecture at the Nixon Presidential Library & Museum. The Nixon Foundation, which earlier pledged $150,000 to support Library events, is, well, not liberal, for it is dominated by Richard Nixon fans and Nixon was a Republican. Further, since Dean was the guy who blew the whistle on Nixon’s “Watergate” excesses, it's fair to say that the Nixon-loving foundation folks are particularly peeved about Dean and his writings.

They’re unhappy about the Dean visit, so much so that they’ve even decided to withdraw their $150,000. That’s their right.

According to the OC Reg (Last straw: John Dean still riles Nixon group), “Foundation officials” say that they do not object to Mr. Dean’s appearing at the Library.


According to the Reg, they object “to a lack of opposing viewpoint.”

Much fun could be had attempting to describe those viewpoints. Pro-corruption? Anti-Constitution? Pro-cocker spaniel? Anti-telegenic?

It’s clear that at least some foundation officials do object to Mr. Dean and his ideas, and not just to his failure to be accompanied by, say, G. Gordon Liddy, or maybe Beelzebub. For instance, Sandy Quinn, the foundation’s assistant director, is quoted as saying:
"He's disgraced and has been disbarred…He's so controversial … and [Blind Ambition] is not a new book. It's 33 years old. It would have been more serving and non-partisan it [sic] would (have been) point-counterpoint.”

That’s a mighty strange thing for Quinn to say.

It is by no means clear that Dean is still a “disgraced” figure. He’s definitely changed since his bad old Nixon days. And isn’t Dennis “Abramoff” Hastert, a former Library guest, also a “disgraced” figure? (Hastert has been involved in numerous scandals and controversies going way back.) Did Quinn carp about Hastert’s visit too? Doubt it.

More importantly, if ever a President were “disgraced,” it would be Richard Nixon himself. He had to resign, remember? (Plus, he said all those nasty things about blacks and Jews and Jane Fonda.)

And don’t forget: Dick Cheney was invited to speak at the Library. Surely that fellow's standing by now is lower than a barefoot rattlesnake. He's a sub-disgrace—a subsgrace.

You’d have to agree that Dean is a “controversial” figure—at least among Republicans. But consider this: according to the Foundation’s Nixon Library fact sheet, the “Nixon Library’s ongoing Distinguished Speakers Series has brought lively lectures and discussions from such leaders as Vice President Dick Cheney; Nobel Peace Prize recipient and former Secretaries of State Henry Kissinger and George Schultz; former Secretaries of State and White House Chiefs of Staff Alexander Haig and James Baker; former Vice President Dan Quayle; Speaker of the House Dennis Hastert; former Secretary of State Madeleine Albright; former U.N. Ambassador Vernon A. Walters; Sean Hannity, Bill O’Reilly, Larry King, Laura Ingraham, Mike Deaver, Charlton Heston.”

Obviously, the views of at least some of these "leaders" are as controversial as Dean’s views. Gosh, did Quinn object to the appearance of all these folks too?

Quinn obviously doesn’t care about a speaker’s being “controversial.”

That “Blind Ambition” is 33 years old is irrelevant. It is an important work in the oeuvre of works about Nixon’s White House.


But what about this “counter-point” business? Is Quinn suggesting that inviting a speaker without throwing in a contemporaneous “counter-point” speaker is somehow inappropriate? That’s absurd. And I doubt very much that the Library has bothered to follow that goofy practice in the case of past speakers. Did Quinn object to Cheney’s showing up sans opponent? I don’t know, but I can guess.

There is such a thing as “balance,” I suppose, though it can never be more than a rough approximation that will inevitably be judged unsuccessful by some. Now, in most situations, balance is not achieved by imposing it on discrete things (a visit, a seminar, a lecture, a purchase, etc.). Rather, it is achieved by imposing it on a series of discrete things spread over time, or a set of discrete things, viewed as a whole, spread across a zone.

For instance, though, as an academic, I am no fan of instruction that emphasizes advocacy of controversial positions (I’ve discussed this previously), I do not object to “advocacy” instruction per se. In itself, an instructor’s “teaching” liberal (or conservative or radical) views is not a problem for me (though I would recommend a somewhat different approach). On the other hand, I might object to it if, upon surveying the pattern of instruction at the college, I find that it adds up to a strong bias in favor of some controversial position. (Here, I am referring to controversy relative, not to society, but to academia and expert [“discipline”] communities.)

For instance, if one finds that every instructor in an economics department expressly or tacitly presumes a strong laissez-faire stance regarding the economy and this perspective is incorporated in their teaching, one might worry that students who take economics courses at the college will leave with an “unbalanced” or one-sided view regarding that important matter. One would be relieved were the next hire to recognize regulation of industries as necessary or prudent.

If the Nixon Library were interested in “balance,” it would seek a schedule of lectures including guest speakers representing a range of perspectives. Until two or three years ago, the Nixon Library was private, and it clearly made little effort to provide “balance.” Look at the list of guest speakers above. With two exceptions, they’re all Republicans.

As of two years ago, the Library is a federal facility, and so, nowadays, its director is obliged to pursue guest speakers with an eye to "balance."

It seems to me that, in objecting to the Dean visit, the Nixon Foundation people no longer have a leg (or a pumpkin) to stand on.


Remember when then-trustee Steve Frogue arranged a “forum” or “seminar” on the Warren Commission Report on the JFK assassination? Frogue invited four speakers, each of whom took an arguably incompetent, and certainly a marginal, position regarding the assassination. Further, some of these speakers inspired moral outrage. One speaker was the chief reporter for the notoriously anti-Semitic “Liberty Lobby,” and another speaker contributed to that organization’s publications.

To make a long story short, Frogue had organized a crackpot forum. This was explained to the board on the night that the vote approving traveling expenses was taken, but the board majority (John Williams, Steve Frogue, Teddi Lorch, and Dorothy Fortune) were unmoved. The forum went forward. The press learned of the details, and, owing to a public outcry, the forum was soon cancelled and abandoned.

I believe that, when the board approved this daffy Nutcake Forum, they erred. In my view, it is not a simple matter to argue that they erred, for colleges are supposed to be bastions of free speech.

On the other hand, colleges are supposed to have high standards--they do not regard all opinions as equally valid--and so, if one seeks to enrich the community’s reflections on some event such as the JFK assassination, one ought to organize forums comprising competent experts, not crackpots. So this was a case of conflicting values (or desiderata). Which value should prevail?

At the time, many who objected to Frogue's forum explained their objection by appealing to the need for “balance.” As I recall, then-Chancellor Robert Lombardi took that view, though he waffled a bit. He found fault with the forum owing to its lack of differing points of view. On the other hand, eventually, he said that “free speech” means that the forum should be allowed. “I think it’s terribly important to allow differences of opinion to be voiced,” he said.

Yes, but that can’t mean that a college should provide a forum for any group expressing any idea—for instance, that the Earth is flat or, say, that President Obama is a Manchurian candidate or a mujāhid. Remember: Frogue and his guest speakers were crackpots from Hell (crells). Even among conspiracy theorists, this crew was viewed as subpar. (I recall that that point was nicely made by noted author Gerald Posner.)

In an editorial that appeared in August of 1997, the LA Times weighed in on the issue of whether the forum should be allowed:
The seminar was thrown off campus only after the district received more than 200 calls of protest Thursday. The callers had more wisdom than the trustees. There is a difference between airing seemingly crackpot ideas in an intellectual, substantive manner on a campus devoted to academic freedom and giving legitimacy to bigoted ravings with no balance from opposing speakers. [My emphases.]

One thing is clear: one does not achieve “balance”—or at least one does not achieve a rationally desirable kind of balance—simply by having “opposing speakers” at one's forum. In fact, Frogue's four "experts" had strongly divergent views. One attributed the assassination to renegade Nazis. (Yep.) Another blamed Israel's Mossad. Another favored Jim Garrison's long-discredited views. Frogue’s pick was the CIA or the ADL or the Brownies (my memory is fuzzy). These people were clearly not on the same page.

OK, so the forum already had a kind of “balance” in that it presented opposing views. On the other hand, it did not include the view that the Warren Commission was correct: there was no conspiracy. That’s “imbalance,” I suppose.

It’s clear, then, that presenting opposing views does not by itself ensure the quality of the discussion/forum. The views presented might all be lousy. And one can assemble opposing views while leaving out the view that is in some sense best or most defensible among alternatives.

Further, sometimes, rationally speaking, an issue is not controversial and so opposing views don’t exist. For instance, though there are people who insist that the Earth is flat, among those who can reason, there is no controversy regarding the shape of the Earth. It is spherical. Case closed. Imposing “opposing sides” to the “discussion” of the shape of the Earth (or to whether there was a Holocaust or to whether AIDS is caused by HIV) would actually diminish the quality of the discussion.

That's right. Sometimes, bringing in the "opposing views" just muddies the waters, rationally speaking.

I think that the JFK assassination verges on being an uncontroversial “issue” (or non-issue) that really has only one “side.” Evidently, amongst the general public, how JFK was killed is controversial. But, among the relevant experts, the basic facts are not disputed, and the consensus (namely, that the Warren Commission essentially got it right) seems to have grown stronger over the years.

Still, I would not be opposed to a forum on JFK assassination conspiracy theories at a college. That so many Americans—a handful of whom are intellectually estimable, I suppose—suspect a conspiracy might be a reason to arrange or allow a college “forum.”

But you'd better be darned careful who you invite. No crackpots. (That a guest "expert" is anti-Semitic doesn't strike me as relevant to the quality of his view about the JFK assassination. You wouldn't want your sister to marry the guy, but, hey, even a bigot can have a good theory.)

Should the forum be “balanced”? (In this case, we’re talking about a one-shot event. Hence, any balance will have to be a balance within the event, not some pattern the event fits into.)


I think “balance” is overrated. I would not object to inviting one solitary speaker on a controversial topic if the speaker were sufficiently impressive in his/her intellectual attainments and abilities. After all, whether a position should be accepted is not ultimately a matter of comparing views. A view is worth believing if and only if the grounds for it are logically compelling. (This, of course, is often a matter of degree.)

There’s room for talk of “balance,” I think, for those who seek to promote sound discourse. But surely one betrays misjudgment—a kind of error in proportions—if one approaches a “forum” about X by focusing on “presenting opposing points of view.” Ultimately, we should be judging, not the winner of a debate, but evidence and arguments. There are standards for such things, and, by those standards, some thinkers and some positions just don’t rate. So don't include 'em.

We need to find a way to encourage “forums” that honor the authority of those standards. That's the crucial thing. If we do that, we won't be yammering so much about "balance" and the need to present "opposing views" and debates.

But, of course, I’m talking as though the public would even know the difference.

Silly me.

(Note: I'd next like to consider J.S. Mill's view concerning the benefits of allowing expression and advocacy of even false and absurd views.)

Wednesday, June 17, 2009

Neither a Luddite nor a philistine be


The Third Man (1949)
Director: Carol Reed
Orson Welles (as Harry Lime)
Joseph Cotten (as Holly Martins)
Alida Valli (as Anna Schmidt)

Post-war Vienna: Author Martins learns that his old friend Harry Lime is dead. He isn't. In fact, Lime has become a black-market racketeer who has made a fortune stealing and diluting penicillin, with tragic consequences. When Martins and Lime meet (on Vienna's famous Ferris wheel), Lime offers his, well, philosophy. Welles' Lime embodies evil and charm.

This is a great movie. The cinematography and music alone make the movie worth seeing, over and over.

To see what I mean, check out the film's famous final scene:

To you young people out there: Be thou not a knucklehead. See what you've been missing!

Tuesday, June 16, 2009

More bullshit (ethics and the blues)

Just got through reading an annoying article in this morning’s Inside Higher Ed (Do as I Say, Not as I Do) about “ethicists” not being particularly moral people.

I don’t hang out with academic philosophers much these days, but I know the field, and my guess is that most philosophy professors would simply shrug if they found out that ethicists are less moral or only as moral as the average person. That’s because Ethics is a field within philosophy—along with Metaphysics and Epistemology and whatnot—and philosophers tend to approach philosophy as an attempt to arrive at understanding, not goodness.

Sure, goodness is important, but philosophy is essentially an intellectual enterprise; its works do not belong in the self-improvement aisle.

The fictional Hannibal Lecter had understanding; he understood people very well. But he was nuts and seriously wicked.

Still, with his intellect, he’d likely be an ace ethicist.

I dunno. It seems obvious to me. Understanding is one thing; being good is another. Aristotle cast much light upon morality. I'm particularly impressed with what he had to say about moral development and how one acquires virtues and vices. But what if we learned that he was in fact a cowardly fellow who often succumbed to temptation and did bad things? It would be disappointing, but would it mean he was wrong about, say, the nature of moral development?

It would not.

Some philosophers of religion are atheists, you know. And some ethicists come to lose their initial sense of the importance or absoluteness of morality. They become a little unhinged about morality. Does this make them bad philosophers? Nope. Whether they’re good or bad philosophers depends on the strength of their arguments and insights.

But the philosopher who is the focus of the IHE article doesn’t make that point. He doesn’t say, “Look, philosophy is mostly about understanding, not about ‘being good.’” Instead, he seems to go along with the misconception.

I can think of two reasons why he might do that.

First, though there is no reason to suppose that ethicists must be moral, there is, I think, a reason to suppose that most ethicists regard morality as an important part of leading a life. Historically, this has been the moral philosopher’s starting point: “we care so much about behaving well, but what’s that all about, anyway?” (But I don't see that having that starting point is essential to "doing ethics.")

So, I suppose, it would be puzzling were it a fact that moral philosophers are not particularly moral. If morality is important to them, why don’t they show that in their lives? But still, I say, the quality of their philosophy depends on the soundness of their arguments. That they are moral midgets (an idea, by the way, I’m inclined to dispute) is philosophically irrelevant.

Second, as Harry Frankfurt has noted, we live in a world of bullshit. I won’t rehearse Frankfurt’s reasoning.

Yes, existence is highly bullshitty. There’s no use denying it. You’ve got to deal with it. So when communicating with the public, the specialist is, I think, faced with a Bullshit Mountain; confronted with so great a heap, he or she will be tempted to play with, not to refute, the inevitable preposterous caricature of his or her field. “Philosophers are wise,” says the common man. That, of course, is bullshit (I know few philosophers who concern themselves with wisdom). “Philosophers are deep,” thinks John or Jane Doe. Well, maybe, but that’s not what philosophers think they are. (They might acknowledge pursuing fundamental issues that are hard to think about. Is that deepness? Why would anyone call it that?)

People tend to go with the flow, bullshitwise, it seems to me. That's because it's always a tsunami.

"The blues ain't nothin' but a low down heart disease"
—A traditional blues lyric

THIS REMINDS ME of how people talk about the blues. Yes, the blues—the music of Muddy Waters, Howlin’ Wolf, Charley Patton, Robert Johnson, and Lightnin’ Hopkins, to name a few of its great practitioners.

Blues, of course, is mostly a kind of sexy, good-time dance music.

But then there’s the vast and deep bullshit about blues that we simply cannot escape or remove. Even dictionaries fling it. My computer’s dictionary defines blues as “melancholic music of black American folk origin, typically in a twelve-bar sequence.” Merriam-Webster informs us that a blues is “a song often of lamentation characterized by usually 12-bar phrases [etc.].”

Melancholy? Lamentation? Don’t think so.

Well, as they say, a video is worth a billion words: watch this 1964 performance by the great bluesman Robert Nighthawk:

That's the blues.

The blues first developed within rural black communities, where rowdy and impious young men and women would enjoy debauched entertainments at seedy juke joints. Musicians were expected to keep things sexy and thumpin’.

That’s right: they were playing dance music.

When this music moved to the cities, it filled bars and then clubs. Perhaps the pinnacle of this phenomenon was the Chicago blues scene of the 50s and 60s. Those clubs were funky, raucous, and wild.

No lamentations were allowed.

NEVERTHELESS, I don’t know how many times I’ve heard old (and young) blues performers dish out that malarkey about blues being an expression of pain and sorrow for a downtrodden and oppressed people. “You’ve got to suffer to play the blues,” they say, staring sadly at the ground. “You’ve got to know what it’s like to be hungry and without a dime.” They look weary, old.

Yeah right. Check out this video of a 1966 performance by the great Howlin’ Wolf. First, the Wolf provides a few thick slices of that ol’ baloney. —Then he blows some classic blues, blasting that stinky sausage clean out of the room!

As you watch his typically sexy, funny performance, ask yourself: “Gosh, just how melancholy is this fellow? What manner of lamentation is this?”

The next video presents the equally great Muddy Waters, trying to explain the blues to a clueless Norwegian in 1976. (Really.)

It’s about “hard times,” he says. It’s about being poor; it's about not being free.

The Norwegian wants more. Muddy is not inclined to go further down the baloney highway. He becomes uncomfortable.

Well, says Muddy, I've had lots of trouble with "womens" and "money." There. That's the basis of my blues.

The Norwegian wants more.

Muddy hasn't got any more. So he finally says: well, it’s “a good time thing.”


I’m not saying that the regrettable facts about African-Americans in our history have nothing to do with the blues, its preoccupations, and its themes—sure—but the music itself, when it is performed for real audiences, is not about those things.

It’s entertainment—often sexy, always passionate, sometimes dark 'n' dangerous.

By the way: I found this redolent chunk of mega-baloney on a website for a band:
"This is going to hurt some, but it'll be worth it, I promise you. You're going to experience not just our pain; you're going to feel your own pain deeper than ever before. But feeling it, really feeling it, and then letting it go will give you a sense of renewal like no other. And that, my friend, is the purpose behind the Blues. It's what makes the Blues different than everything else. And when you hear this band play, you're going to hear Blues the way it was meant to be felt!"
Man, that’s some fine bullshit.

Thursday, June 4, 2009

Anthropomorphizing cats

[The following appeared in Dissent the Blog. I include it here owing to its (admittedly meager) philosophical content and nature.]

RECENTLY, I added two cat images (see) that I thought were funny. I especially liked the commando-cat image. I thought the other image was iffy, humorwise.

One of our readers (MAH) objected to the commando-cat image. In a note to another reader (BS), she opined (in passing): “the cat with [the] gun is horrible!”

BS responded, noting (in passing) that he liked the “cat with a gun,” judging it to be a “creative” use of a yawning cat.

Later, MAH shared her usual thoughtful reflections about things, but she added:
All right: just to show my grumpiness at the June gloom, I don't even like the cat with mouse and laptop. I never did care for that kind of anthropomorphized imagery of animals. Remember those godawful commercials in the old days that made cats appear (not very well) to speak words? Hated 'em more than I can say. The magnificent creatures are interesting enough in their own right not to need "enhancement" with faked human activities. [I added the dictionary link.]

I briefly responded, arguing (good-naturedly, I hope) that the “commando” image either did not anthropomorphize or that it did, but in an acceptable way. I said that, at least for me, the humor of the image depends in part on the manifest absurdity of placing a cat in these settings—not on the idea that cats are like Rambo (or are like computer geeks). (No doubt MAH will explain to me that she doesn't need me pointing this out.)

I’m not sure, but I think that the commando-cat image is “fun” for me in part because it vaguely ridicules the embarrassing and unsophisticated “Rambo/Hollywood-commando” fantasy or mindset. It does not ridicule cats or suggest that cats are anything like one of these stupid cartoon-commandos. (I’m passing no judgment on real commandos.)

OK, so one point is this: I do not object to portraying cats as people per se, just as I do not object to portraying, say, babies as adults per se. (No doubt, MAH will agree.) I noted that the current E-Trade commercials (portraying a baby/toddler as a kind of hip young male stock speculator) are funny and unobjectionable (at least re our attitudes toward babies). (See below.) Their creator is clever and understands the creepiness and absurdity of viewing babies as hip young traders. These commercials are generally striking and entertaining, to me. (On the other hand, they utterly fail to cause me to buy what E-Trade sells. In fact, I had to look up whose commercials these were!)

Along with MAH, I did not like those old Meow Mix talking-cat commercials and the like. (See below for the "meow" commercial, although MAH perhaps had some other commercials in mind.) These kinds of commercials don’t strike me as objectionable. To me, most of them are simply unclever and stupid. Do they imply anything about the nature or quality of cats? I don’t see how. I think they pander to a common capacity to be dazzled and entertained by stupid things. (OK, I am now flashing my “elitist” card.)

MAH is of course correct: some commercials anthropomorphize animals in the sense that they in some sense impute human thoughts and attitudes to nonhuman animals. And some of these commercials are stupid precisely on that score.

My candidate: the “Morris the cat” (9 Lives) commercials. (See below.) There’s something too easy, and somehow just stupid, about the "haughty cat" stereotype. Cats can be finicky, of course, but I don’t think they are ever haughty, and that's what these commercials suggest. Are they sometimes indifferent to our desires and actions? Well, no, for indifference implies awareness, but it seems to me that on those occasions that inspire talk of feline “haughtiness,” cats are not aware of our efforts or wishes. Are they disdainful of us? Well, again, no, for they seem to proceed as though we are not present. There’s nothing really present for them to be contemptuous or disdainful of. It seems obvious (to me) that cats are never contemptuous or disdainful, though they share some of the behavior associated with those attitudes among humans.

No doubt some of you will now reveal your claws.

As any cat person knows, it is possible to “connect” with a cat. With some effort, one can make a cat aware of one’s presence and needs or desires (to some extent). When prompted, cats have no trouble looking into a person's eyes and paying attention to them. Such occasions seem to interrupt the general flow of feline obliviousness to others. I think cats are more oblivious of others than humans are. It's just their way.

I guess I “object” to Morris commercials more or less in the way that I object to silly and crude stereotypes generally. Most of the time, crude stereotypical thinking strikes me as stupid more than wrong. But, obviously, it can be wrong, too.

BTW: it turns out that Morris the Cat is a fairly decent guy. According to Wikipedia,
Morris has appeared in [various] media over the years. He starred in the movie Shamus with Burt Reynolds and Dyan Cannon in 1973. He also appears as a "spokescat" promoting responsible pet ownership, pet health and pet adoptions through animal shelters. To this end, he has "authored" three books: The Morris Approach, The Morris Method and The Morris Prescription.

In 2006, Morris was depicted as adopting a kitten from a Los Angeles animal shelter, L'il Mo, who represents the first in a campaign known as Morris' Million Cat Rescue.

Yes, I know. It would be naive to conclude that the "9 Lives" people are focused on the welfare of pets.

Morris the cat “9 Lives” commercial:

E-Trade “talking baby” commercial:

“Meow Mix” singing cat commercial: