
Saturday, August 30, 2008

Jitters Bugged?

Old joke:

Two Roman Catholic theologians, one a Jesuit and the other one a Dominican, are arguing about prayer and smoking. (Hey, I said it was an old joke. This was before smoking became a secular sin only slightly less heinous than child abuse.)

So, anyway, the Jesuit says there’s nothing wrong with praying and smoking at the same time, while the Dominican is equally adamant that it’s disrespectful to God and thus sinful. The argument goes on and on and finally they decide to submit the question to the Vatican, which they both do.

Months pass as, left to their own devices, months will, and finally the Jesuit and Dominican meet. As they see each other big smiles break out on both their faces. “I told you so!” the Dominican almost shouts. “What are you talking about?” the Jesuit says, “I just got word back from Rome recently that I was right.” “That’s impossible,” the Dominican says. “I just got word back from Rome telling me that I was right.” The two theologians stand there silently and bewildered.

Finally, the Jesuit smiles. “Wait a minute,” he says. “What exactly did you ask?” “I asked exactly what we were arguing about. I asked if it was a sin to smoke while you were praying.” “Ah ha!” the Jesuit exclaimed. “I thought so! That’s the problem. You see, I asked if it was a sin to pray while you were smoking!”

To borrow from Wittgenstein, while we may not constantly be bewitched by language, we are always in danger of being misled by some sort of linguistic stage magic, and this is true even though much of it is unintentional and some is even self-inflicted. How we characterize something (e.g., “pro-choice” or “pro-life”) already inclines us to one sort of judgment versus others.

But that’s not simply to note that words have emotional connotations as well as objective denotations. Wittgenstein, again: “Can one play chess without the queen?” What question is being asked? Certainly not whether one can continue playing chess after one or both queens are captured. What then? Whether one could play a game like chess except without queens? Again, ignoring how good a game it might be, the answer fairly obviously is yes. What about whether such a game still ‘really’ was chess or still ‘should’ be called chess? Is that a factual question? One that perhaps still requires more data to resolve or, as is typically true in philosophical disputes, one that calls more for a decision which, in turn, will depend on how we go about weighing this consideration versus that?

So, also, are performance-enhancing drugs in athletic competitions per se unfair? Doesn’t it depend on how and why they enhance performance? Philosopher/physician Carl Elliott raises that question in a current Atlantic piece, arguing that, at the very least, what counts as performance affects our answer to that question. Is the ability to perform in public under intense pressure an integral part of the very athletic ability being judged, or should an otherwise gifted athlete’s greater sensitivity to pressure and higher state of anxiety be considered irrelevant?

Beta-blockers (a common class of anti-hypertension drugs), for example, tend to reduce the physiological effects of anxiety. Not the anxiety itself, mind you, but only some of its outward effects, such as hand tremors. Thus, their use is banned in some competitive sports, but the validity of the rationale for their ban depends on whether we’re talking about smoking while at prayer or praying while having a smoke. Elliott:
Beta blockers are banned in certain sports, like archery and pistol shooting, because they're seen as unfairly improving a user’s skills. But there is another way to see beta blockers—not as improving someone’s skills, but as preventing the effects of anxiety from interfering with their skills. Taking a beta blocker, in other words, won’t turn you into a better violinist, but it will prevent your anxiety from interfering with your public performance. In a music competition, then, a beta blocker can arguably help the best player win.... The question is whether the ability to perform the activity in public is integral to the activity itself.

I have no dog in this fight. (By way of Truth In Bloggistry disclosure, it happens that I take beta blockers for hypertension, but I’m not inclined to public performance anxiety and, besides, there are no performance enhancers of any sort that would make me an athlete. If instead of Dr. Bruce Banner I’d gotten the gamma rays, the Hulk would have been an overgrown but still uncoordinated doofus.) I don’t care whether either amateur or professional athletes are permitted to take beta blockers or, for that matter, any other performance-enhancing drugs. My only point here is that how one answers these sorts of questions depends in large measure on how one frames the questions in the first place.

That settled, feel free to take out your prayer beads now and, oh, yeah, smoke ‘em if you’ve got ‘em.

Sunday, August 24, 2008

Selfishness, Egoism and Altruistic Libertarianism

It is a cliché among many psychologists and economists that human beings behave self-interestedly. Moreover, since Adam Smith’s somewhat theological, somewhat anthropomorphic “invisible hand” metaphor, it has been almost an article of faith within the latter discipline that the collective, societal result of individual self-interested behavior is ironically salubrious.

It is a faith to which I also subscribe, although, like all but the most zealous of religious fanatics, I season that faith with the occasional heresy here and there. Crucially, however, it needs to be noted at the outset that not just any sort of self-interested behavior contributes to the common wealth and greater good. Specialization and trade, voluntary association, bargained-for exchanges, common rules and some sort of enforcement mechanism to address rule breaking are all necessary elements for human society to flourish economically, for the invisible hand to prove, as it were, optimally dexterous.

Most importantly, “self-interested” is not synonymous with “selfish.”

Discussions about selfishness elsewhere on this blog got me thinking about these things. I am no Ayn Rand scholar, nor do I purport to be an Objectivist. Undoubtedly, however, Rand’s followers constitute a significant and vocal segment of the libertarian community. (It’s a non-gated community, after all, noted for its lack of zoning regulations, restrictive covenants or entrance requirements.) Anyway, given that Rand published a collection of essays entitled The Virtue of Selfishness: A New Concept of Egoism, it should be clear just from the title’s use of the word “egoism” that she or Nathaniel Branden, as the case may be, intended to give the word “selfishness” a special, technical meaning in the overall context of Rand’s worldview.

But selfishness and egoism are two separate things, a fact I assume Rand understood perfectly well when she deliberately invoked the apparent contradiction of selfishness as a virtue for its rhetorical impact. Whatever Rand’s standing as an intellectual and participant in the history of political philosophy, she was also certainly a polemicist with a particular political agenda in opposition to what she correctly perceived as the 20th century’s greatest threat to humankind; namely, the threat of collectivism. You simply cannot read Rand fairly without bearing that in mind.

The important point is that selfishness is a common language concept, not a technical term. Anyone fluent in English knows what it means and knows, more importantly, that it entails a negative moral judgment. Selfishness is by definition a bad thing. It’s using up all the hot water in the shower when others are waiting, eating up all the cookies instead of sharing them with friends or family, and so forth. (Except, perhaps, at the Ayn Rand School for Tots, although Ms. Sinclair couldn’t have really been much of an Objectivist since the first thing she did was violate Maggie's pacifier property rights.)

Selfishness, moreover, logically entails and presupposes that there is some preexisting community to which the individual belongs and some moral commitment to that specific community. I, for example, live with my family in a household where there is a finite supply of hot water and cookies. If I stand in the shower for an hour shoving one increasingly soggy chocolate chip cookie after another into my mouth until both supplies are exhausted, I am acting selfishly relative to my family. It is less clear that I am being selfish when I buy the last package of cookies at the store, thus depriving the next cookie junkie of his or her fix, or when I purchase the big, heavy-duty water heater for my house. It is less clear, still, that it is properly called selfishness to eat any of those cookies or use any of that hot water knowing that many millions of people across the globe have neither cookies to eat nor any hot water to shower with.

To be sure, there are those who claim that the last is selfish, although the overwhelming majority don’t really believe it based on how they, themselves, actually live. The notion that we as individuals have moral obligations to humanity at large is, to put it mildly, very problematic. The point, in any case, is that we wouldn’t be inclined to call all sorts of behavior like eating a cookie selfish simply because every cookie eaten is, necessarily, a cookie no one else can eat. The morality of sharing does not require splitting my cookie into several billion pieces so everyone can have some.

Egoism, by contrast, is not an ordinary language word or concept. Mothers don’t scold their children for being egoists when they selfishly eat the last cookie. Indeed, if you peruse its Stanford Encyclopedia of Philosophy entry you will discover that there is not even a single technical sense of the term.

We pause now while I grind a philosophical axe for a moment. There is a critical difference between, on the one hand, psychological egoism, the theory that claims it is simply a fact that human beings always and under all circumstances behave self-interestedly and, on the other, ethical or rational egoism. The latter two contend that morally right behavior or rational behavior, respectively, simply is self-interested behavior.

These latter may be right or wrong and are certainly subject to criticism, but at least they both admit of the possibility of unethical or irrational behavior. That is to say, the ethical egoist acknowledges that people are capable of behaving other than self-interestedly; she simply argues that they shouldn’t. So, too, the rational egoist doesn’t claim that we always act rationally, i.e., self-interestedly, but only that we should or that it is only when we do that our actions deserve the appellation “rational.”

Psychological egoism, by contrast, obliterates the normative force of self-interested behavior, whether for good or bad. Indeed, it obliterates normative considerations in the same way all strong forms of determinism do: if “ought” implies “can” but one cannot act differently than one does then it is absurd to claim that one ought to have acted differently. Moreover, if all behavior is, by definition, self-interested, then it is a fair question to ask of this non-falsifiable metaphysical theory what sort of substantive claim, if any, it really is making.

Axe grinding concluded, I’m reasonably confident that Rand was an egoist in both the ethical and rational egoism senses. In retrospect, however, it is perhaps unfortunate that she chose to use “selfishness” as a rhetorical device to describe her egoism because it opens both Objectivism in particular and libertarianism in general to the sort of prejudicial criticisms Mr. Hanley recently bemoaned.

In fact, Rand aside, there is nothing at all incompatible about libertarianism and altruism. Not, at least, if altruism is understood not as Rand technically used the term but simply as the opposite of mere selfishness. It is hardly altruistic, in the ordinary sense of the term, to coerce other people to behave in supposedly selfless ways in order to achieve your personal vision of the greater collective good even if that greater good is thereby realized. But it is unarguably immoral to coerce others using that rationale when, in fact, it becomes painfully obvious that the exact opposite results.

Indeed, if we’re looking for a single lesson from the history of the 20th century, we could do much worse than conclude that, no matter how noble their advocates’ intentions may have been, collectivist social and economic orders yield disastrous results. Obviously, therefore, noble intentions are no guarantee of success. Libertarianism has never claimed that in a libertarian world order everyone will win and "all must have prizes." In fact, as far as I know, only utopian collectivists and Lewis Carroll's Dodo have made that claim.

But then Carroll, of course, knew he was talking nonsense.

Tuesday, July 17, 2007

Barnett on Libertarianism, the War and Ron Paul

Today's online Wall Street Journal OpinionJournal includes a column by Georgetown Law professor Randy E. Barnett entitled Libertarians and the War. He is especially keen to make the point that Ron Paul's opposition to the Iraq War is not the 'official' libertarian position and that one can be, as many libertarians were and some still are, supportive of the war without grave violation to what they hold to be the essence of libertarianism.

This is certainly true, though not entirely for the reasons Mr. Barnett articulates, and the key word is "entirely." There simply is no single set of libertarian principles shared by all who define their political views as such, so Barnett's unqualified claim that "libertarians believe in robust rights of private property, freedom of contract, and restitution to victims of crime ... that ... define true 'liberty'" is not, strictly speaking, true. Some do, some don't, and that is a point at least as worth making as his point that Dr. Paul does not speak for all libertarians.

There is something structurally odd about that quoted assertion (the literal text of which I have edited but the sense of which remains intact); namely, his unqualified assertion of certain rights as definitional of (the oddly scare-quoted) liberty. The strong implication here is that libertarianism rests on some sort of natural rights theory, which indeed it does for many but does not for others, and that such a view is the only (possible?) theoretical foundation of libertarianism. That is certainly wrong, and for several important reasons.

First, it is always important to distinguish moral claims of rights from legal rights. Legal rights exist, if at all, by operation of government including a legal system established to enforce such rights. I may or may not have a moral right to hold you to your promise to pay me for painting your house, but it is my legal right under the law of contracts that makes commerce possible. (Whether the mechanisms of legal rights enforcement must be governmental or can be privatized is a matter of dispute among libertarians but is irrelevant here.) So, too, whatever Lockean or other natural rights in property one might argue in philosophical debate, it is the law of property, essentially a creature of the state, that gives the contemporary concept of property most of its useful substance.

Natural rights theories have been out of fashion among academic philosophers for some time now. It is true that academic philosophers have a long and notorious history of changing their minds but being wrong both before and after, but that is not to deny that they have analyzed natural rights theories down to the subatomic level and found them wanting for good and serious reasons. My own view is that any theory of natural rights weak enough to be conceptually defensible is unlikely to be sufficiently robust to get most libertarians where they want to go. That said, I also think that if any natural rights do "exist" (I trust my use of scare-quotes makes sense here), they include the moral right under most circumstances to be left the hell alone. (That is, I take individual autonomy to be presumptively legitimate and the moral burden on those who would violate it, but that does not quite equate to a theory of natural rights.)

In any case, while one can attempt to defend libertarianism in terms of rights and duties (to use the philosophical jargon, on deontological grounds), many prefer a purely consequentialist, usually utilitarian, approach. They argue, in effect, that libertarianism, by maximizing individual liberty, results in or at least affords the greatest good for the greatest number or at least the greatest opportunity for the greatest good or some such. Barnett inches toward that justification in the same paragraph, claiming that his rights-defined concept of liberty:
... provide[s] the boundaries within which individuals may pursue happiness by making their own free choices while living in close proximity to each other. Within these boundaries, individuals can actualize their potential while minimizing their interference with the pursuit of happiness by others.

This formulation, interestingly enough, is a "virtue ethics" approach; that is, an ethical justification that goes to the goal of individual self-actualization or flourishing in the Aristotelian sense rather than the objective of collective good that tends to be the focus of most political theory.

It isn't so much that I disagree with what I think is Barnett's rather muddled one paragraph justification of libertarianism (it is, after all, only one paragraph and in an opinion column at that), as that it needs to be said that libertarianism as a living political movement is more about its generally shared conclusions than its specific theoretical justifications. That said, it is certainly true that Ron Paul's current fifteen minutes of fame could well misrepresent libertarianism in general and Barnett is correct to point that out. I might add that Paul's position on abortion, which I largely share, is even less widely held by libertarians.

Finally, I suspect Barnett is mistaken in his implied belief that most of the Americans who have taken note of Paul identify him as a libertarian. Whether they do or not, the presence of an elected official and major party presidential candidate voicing libertarian themes and receiving even modestly positive reactions among the public at large is surely of greater value than the loss of any prospective converts to libertarianism because of Paul's anti-war position. On that point he happens to be in the majority at the moment, a fact that augurs well for the prospects of liberty in post-Bush America.

Tuesday, June 19, 2007

Science, Sanity and the Law

If you have never entertained, however fleetingly, the prospect of killing your children, you're probably not spending enough time with them. Fortunately for the species, few of us ever act on such feelings. So few, in fact, that the rare parent like Andrea Yates, who in 2001 killed her four small children, strikes us immediately as monstrous or insane or both.

Reason's Brian Doherty posts a very fine article today about our struggles as a society with the notions of sanity, responsibility, free will and the law. The legal so-called insanity defense continues to fascinate us precisely because it touches so many deep mysteries about life, typically arising under the most gruesome and horrifying of situations. "Insanity" is a term long ago abandoned by the psychiatric profession, but the relationship between what is, at bottom, a legal defense justified on moral grounds and what purports to be increasing scientific evidence against the notion of free will of any sort continues to lie at the heart of the issues raised.

It is a basic tenet of ethics that ought implies can; that is, that holding someone blameworthy (or praiseworthy) for an act can be meaningful and justifiable only if that person could have done other than he did, in fact, do. Logically, it must also hold that "ought not" implies that the person could have refrained from doing whatever was done. Those who deny the existence of volitional or intentional human agency (i.e., free will) but contend that society must nonetheless indulge in the useful fiction of contending otherwise and holding people 'responsible' for their 'acts' are not, I think, all that far removed from those who hold that belief in God is necessary for there to be any moral order. Of course, by their own theory they are incapable of holding contrary views, so perhaps we can forgive them this conceptually muddled attempt to have their determinist cake and freely eat it, too.
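
Spelled out, the logical skeleton of the argument looks like this. A minimal sketch in deontic-modal notation (the formalization is mine, not Doherty's): write $O\varphi$ for "one ought to do $\varphi$" and $\Diamond\varphi$ for "one can do $\varphi$."

\begin{align*}
(1)\;& O\varphi \rightarrow \Diamond\varphi && \text{ought implies can}\\
(2)\;& O\lnot\varphi \rightarrow \Diamond\lnot\varphi && \text{ought-not implies could have refrained}\\
(3)\;& \lnot\Diamond\varphi \rightarrow \lnot O\varphi && \text{contrapositive of (1)}\\
(4)\;& \lnot\Diamond\varphi && \text{hard determinism: no act other than the actual one was open}\\
(5)\;& \lnot O\varphi && \text{from (3) and (4)}
\end{align*}

Hence the absurdity the argument trades on: under hard determinism, "you ought to have acted differently" can never be true of anyone.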

In fairness, one can make a case for the notion that in society at large 'pretending' that criminals could refrain from committing their crimes so as to justify 'punishing' them may well have a general deterrent effect. That is to say, 'punishing' certain acts raises the known consequences of their commission and people in general respond to such incentives and disincentives, whether freely or not.

But at the fringes of "people in general" lie those whose minds are so deranged (or, if you will, whose brains are so dysfunctional) that the notion of general deterrence breaks down completely. These are, ironically, the people who are the most likely candidates for the insanity defense. Put simply, punishing the truly psychotic is unlikely to have any effect on the behavior of other truly psychotic persons. Indeed, it is almost definitional that such persons do not respond to the world as you and I do.

If we are only play-acting at a belief in free will in our criminal justice system as it deals with ordinary people, then surely we must be indulging in a play within a play when we go through the motions of a criminal trial with such persons, grappling with questions such as the (in)famous M'Naghten test whether "...at the time of the committing of the act, the party accused was laboring under such a defect of reason, arising from a disease of the mind, as not to know the nature and quality of the act he was doing, or, if he did know it, that he did not know what he was doing was wrong."

Society must, of course, remove or restrain those who, for whatever causes or reasons, pose a deadly threat. But what possible difference can knowing what one is doing or knowing it is deemed wrong by others make if one cannot act otherwise anyway?

[EDIT: The first posted version read "scientific evidence of free will" and should have read and now does read "scientific evidence against the notion of free will."]

Sunday, June 10, 2007

Richard Rorty, R.I.P.

Richard Rorty, one of the preeminent American philosophers of the 20th century, died on June 8 at the age of 75. Obituaries can be found here and here.

Rorty was among the faculty at Princeton with Donald Davidson and others when it was the unquestionably reigning philosophy department in the nation and was a principal figure in the Anglo-American analytic tradition, especially as infused and influenced by American pragmatism. Rorty’s work and later career moved to what both critics and admirers might have called a “post-philosophical” perspective, a view influenced by Wittgenstein and others that there was, if you will, less there than meets the eye in philosophy as traditionally understood and practiced in the academy. However his philosophical views may have shifted over time, he remained committed to a progressive political perspective which nonetheless at least had the salubrious advantage of finding serious fault with the likes of Foucault.

Much as I like to criticize contemporary academic philosophy, to some extent for the same reasons Rorty found the field confused and wanting, I remain convinced that philosophers have shaped human society and even human thought, itself, more than anyone else throughout history. There are no emperors or generals whose influence compares to the influence, for better or worse, of Plato or Aristotle; and in more modern times I continue to find, however bastardized, misunderstood or unacknowledged, the works of Kant, Wittgenstein and a very few others continually influencing our “original” thinkers in virtually every other field of thought. Many contemporary scientists are scornful of philosophy, but philosophy gave birth to science and, my criticisms aside, it is far easier to find a philosophically naïve scientist than a scientifically naïve philosopher.

Few people will have ever heard of Richard Rorty. I didn’t know him but I did meet him once during his days at the University of Virginia. He struck me as someone who had followed Socrates’ admonition that the unexamined life is not worth living and was more than happy with the career and the life that advice led him to pursue. We should all be that fortunate.

The Myth of the Rational Voter: Why Democracies Choose Bad Policies, by Bryan Caplan

Insanity is doing the same thing over and over expecting different results. -- attrib. Albert Einstein

I am fond of writing, even if you are not fond of reading it again, that I know just enough about law, philosophy and economics to be dangerous, mostly to myself. As it happens, I blogged twice about Bryan Caplan’s new book The Myth of the Rational Voter, acknowledging on both occasions that I hadn’t read the book but was responding only to comments written about it.

Of course, that not only doesn’t pass the minimal standards for scholarship, it doesn’t even pass the minimal standards for journalism. My plea in mitigation is that I am neither a scholar nor a journalist, a fact I make abundantly clear all the time by what and how I write here without needing to confess the fact as well. Still, having wasted so much virtual ink on the subject already (admittedly, a sunk cost), I’m happy to report that I have now, oddly enough, read the book.

This would not usually have been the case so quickly because I do not review books professionally and must therefore either buy them or wait for a library copy. I like buying books, but I am cheap and thus usually wait until they have been remaindered, which means I'm always behind the power curve on the cocktail party chat circuit. In this case, however, I received a free copy courtesy of my older son who, serendipitously, just attended a seminar in which Mr. Caplan was one of the speakers. Indeed, Caplan even inscribed the book to me, which was kind of him but which raises a problem. He wrote in it that I am “a rational man in an irrational society.”

This would be puzzling if I took it as more than a gracious sort of inscription to a stranger. Aside from his lack of evidence about my rationality – we’ve never met and I seriously doubt he reads this blog (we’ll get to society’s irrationality in a minute) – there is the definitional problem of rationality, itself, and it is a problem that underlies Caplan’s argument. Rationality, as economists understand it, is tied very closely to their concept of efficiency, the layman’s version of which is getting the maximum bang for your buck. Caplan also imports the notion that truth seeking, perhaps even apart from its usual connection to efficiency, is an integral component of rationality as well. No doubt, under many conditions these are good operational criteria of rationality. But if the bang you are buying with your buck or your vote is your own subjective sense of well being, not only can ignorance be bliss, so can false beliefs. There is, at least, certainly nothing illogical about such a possibility. Indeed, not only the possibility but the fairly high likelihood that I am, by his criteria, at least somewhat irrational is at the heart of Caplan’s contention.

I move now to the jargon-laden version: Caplan contends that the collective effect of individuals’ systematic biases resulting from preferences over beliefs, when combined with an understanding of democracy not as a market but as a commons, undermines the Public Choice concept of rational ignorance as a means of understanding the workings of democracy. If I tried to explain all these terms, I would fail, so you should read the book instead. (And since you're not cheap, you should buy a copy now.) Still, I'll take a shot at a concept or two.

The rational ignorance hypothesis is easy enough for even dumb guys like me to understand – acquiring knowledge is time consuming and, at least in that sense, expensive; therefore, people tend not to acquire more knowledge than they believe they need. People know, despite idiotic “Your Vote Counts” propaganda campaigns, that even if their votes get counted those votes never really count in the sense of altering the results. Thus, they tend to vote, if at all, largely out of ignorance and, as a result, essentially randomly. If so, this is either good or bad, depending on whether you believe the Miracle of Aggregation (the ignorant masses cancel each other’s votes out while the very few people who know beans from bacon make the real choice) or the Gucci Gulch theory that self-serving weasels, leeches and parasites (read: politicians, lobbyists and special interests) seek and get the goodies while no one else notices or cares.
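
The Miracle of Aggregation, by the way, is easy to see in a toy simulation. Here is a minimal sketch (the electorate size, informed share and trial count are all illustrative assumptions of mine, not numbers from Caplan): a small informed minority votes for the better of two options while everyone else flips a coin; the coin flips mostly cancel out and the minority carries the day.

import random

# Toy Miracle of Aggregation: informed voters all pick the better option,
# uninformed voters flip fair coins and so (mostly) cancel each other out.
def election(n_voters=10_001, informed_share=0.05, trials=500):
    informed = int(n_voters * informed_share)
    uninformed = n_voters - informed
    wins = 0
    for _ in range(trials):
        coin_votes = sum(random.random() < 0.5 for _ in range(uninformed))
        if informed + coin_votes > n_voters / 2:  # better option gets a majority
            wins += 1
    return wins / trials

print(election())  # ~1.0: a 5% informed minority all but decides the outcome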

But Caplan argues that it isn’t rational ignorance but an ironically ‘rational’ sort of irrationality that is the key to understanding why, for example, voters continue to prefer protectionist policies despite the fact that such policies are not in their or the nation’s best interests. Let's call it the "We can't all be that dumb so we must be crazy" thesis. If democracy is really a commons and not a market, then as with most commons situations there is negligible cost to the individual voter resulting from casting his vote as a method of indulging his preferred beliefs over his desire to opt for objectively rational (that is, efficient) economic policies. Moreover, if voters’ motives are altruistic (or, and here’s your vocabulary building word for today, sociotropic), they have that much less motive to correct their (objectively) mistaken beliefs about the relative efficacy of one economic policy versus another.
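
The commons point lends itself to a back-of-the-envelope calculation. A minimal sketch, every number an illustrative assumption of mine rather than anything from the book: the private price of indulging a cherished false belief at the polls is the probability of casting the decisive vote times the personal cost of the foolish policy actually winning.

# Illustrative numbers only: the private "price" of voting a false belief.
p_decisive = 1e-8         # assumed chance one vote flips the election
policy_cost = 10_000.0    # assumed personal cost ($) if the foolish policy wins
psychic_payoff = 5.0      # assumed $-equivalent warm glow of indulging the belief

expected_cost = p_decisive * policy_cost
print(expected_cost)                    # 0.0001 dollars
print(psychic_payoff > expected_cost)   # True: the indulgence is a bargain

At a price of a hundredth of a cent, even a tiny psychic payoff makes the indulgence rational in the economist's sense, which is exactly the irony in "rational irrationality."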

Caplan contends that because irrationality and not the prevailing Public Choice hypothesis of rational ignorance better explains these voting patterns and habits, we have “neither well functioning democracies nor democracies hijacked by special interests [but] democracies that fall short because voters get the foolish policies they ask for.” More succinctly, he invokes Mencken’s famous observation that “Democracy is the theory that the common people know what they want, and deserve to get it good and hard.”

As Caplan notes, “recognizing irrationality is typically equated with rejecting economics.” Even so, if it is true that people have preferences over beliefs and that, as a result, they deem irrationality a good among other goods, then they will in some cases prefer irrationality; that is, they will demand a certain amount of irrationality depending on its price. What he perhaps fails to stress sufficiently is that this is important only in the prescribed sense of rationality economists employ. In fairness, however, he does address the issue:
Many escape my conclusions by redefining the word rational. If silly beliefs make you feel better, maybe the stickler for objectivity is the real fool. But this is why the term rational irrationality is apt. Beliefs that are irrational from the standpoint of truth-seeking are rational from the standpoint of individual utility maximization. More importantly – whatever words you prefer – a world where voters are happily foolish is unlike one where they are calmly logical.

I think this is a critical acknowledgment. Whether it is rational under Caplan’s or most economists’ definition of the term or not, it has been my experience that more people would prefer to be happily foolish than calmly logical.

Efficiency, while a dandy instrumental good, is neither an intrinsic nor an exclusive good. (I am using the term good here as a normative term and not as an economic term of art.) That is, there is nothing per se irrational (in the sense of being internally contradictory, as opposed to merely being less than optimal in terms of efficiency) about a normative outlook that holds efficiency to be only one of sometimes competing values. More broadly, I’d simply note that consequentialism is not the only ethical game in town. Say what you will of Kant, you'd be hard pressed to call him irrational.

Caplan makes a few other claims I find questionable; for example, that the Self-Interested Voter Hypothesis is false. It may really be false, but I found his confidence regarding that claim unconvincing and his examples capable of alternative explanations that would still support the basic (and, I think, generally useful) intuition that people, including people acting as voters, act in their own perceived self-interest. However, these sorts of disputes can run a high risk of having the very concept of self-interest turned into mere tautology and, in any case, the point may not be critical to his case. He also invokes his epistemically privileged understanding of his own preferences in evidence against the revealed preference thesis, a sort of introspectionist attack on the nearly universal operational behaviorism of social science. I think there's a good point lurking there, but it needs more development.

Finally, there is the question of who the intended audience is for this book, and I’m guessing it isn’t the likes of me. As mentioned before (and, once again, made abundantly clear), I am not an economist; but I found the claim that democracy is a commons and not a market, once made, both insightful and uncontroversial to the point of being obvious. That is, it is obvious in that "Oh, of course!" way someone else's hard work seems simple once the work is done for you. But Caplan seems to think that both economists and political scientists will find the claim highly controversial, so what do I know? Further, perhaps again because I simply don’t get it because I’m not an insider, it isn’t clear to me what difference there would be for practicing economists between the rational ignorance hypothesis, tweaked a bit here and there, and Caplan’s alternative account. Then again, I’ve always found the “rational” part of phrases like “rational self interest,” “rational maximizer,” etc. a bit suspect. On balance, though, this book is an extended theoretical argument aimed at, or at least best suited to, Caplan's fellow theoreticians.

Theory aside, human society orders itself in all sorts of ways. There is, if you will, a market for markets and a market for democracy. In many cases markets and democracy can and perhaps should compete to determine which works best for people. Caplan contends that “democracy overemphasizes citizens’ psychological payoffs at the expense of their material standard of living.” But assuming there are to be tradeoffs at all, what is the right balance?

Generally speaking, I prefer markets to government and so, obviously, does Caplan. However, in my own case my preferences go at least as much to what I believe would be the psychological benefits of the greater freedom of markets as they do to my belief that I would be better off materially, though I happen to think that would usually be the case as well. But the fact is that I probably would rather be happy than right just as I probably would rather be more happy and less well off materially than vice versa.

Indeed, any other choices would strike me as irrational.

Thursday, June 7, 2007

Faith & Reason on the Campaign Trail

It’s a variation of an old Groucho Marx line, I know, but when I was in graduate school years ago I heard of a young philosophy PhD who went interviewing for a job at a small Quaker college. He was certainly well enough qualified for the job and they didn’t expect or require their candidates to be members of the Society of Friends, but he went through a very protracted interview process addressing what we might call questions of character and ethics. As some philosophers are wont to do, he danced around most of these questions, giving carefully nuanced and qualified answers requiring more parsing than the school officials cared to do. Finally, one of them said with some exasperation, “What we want to know is whether you are a man of principles.” “Don’t worry, sir,” came the too clever reply, “I have principles I’ve never even used.”

He didn’t get the job.

Fast forward. Polls show that some 40% of the voting population attend some sort of formal worship service weekly. No doubt that is way down from generations previously, and one might question further whether many of those in attendance take it as seriously as the non-churched (to use an insider’s phrase) suspect they all do. In fact, there is a substantial body of evidence that many congregants of various faiths and denominations routinely hold ethical positions contrary to the official teachings of their faith and, indeed, many who regularly attend some sort of organized religious community do so more for the community than for the religion.

The point is that 40% of American voters doing anything is a big deal, especially if you’re trying to get them to vote for you. I suspect few Americans are so parochial in their religious views (or lack thereof) that because they are, say, Baptists or Roman Catholics or atheists they will vote only for fellow Baptists, Roman Catholics or atheists. But it does appear to matter to many voters that prospective candidates, especially for president, pay some sort of deference to religion, preferably of the organized variety. And it obviously is some sort of a disadvantage, though no one can say for sure how much, not to claim to be a mainstream Christian. No one asks whether someone can win the presidency because he’s a Presbyterian or she’s a Methodist, so the mere fact that the question is raised whether a Jew or a Mormon is electable means that it is, to some extent, a real issue.

Thus we have the rather peculiar spectacle, for example, of Mitt Romney attempting simultaneously to downplay the specifics of his faith while emphasizing, when needed, that he is a man of faith. Meanwhile, the Democrats, in their efforts to woo back religious voters without alienating those who view any profession of religious belief at all with varying degrees of scorn and skepticism, are groping for ways of expressing (or at least professing) a political perspective infused by religious faith.

There is no little irony in this latter fact. If liberalism and the Democratic party had any legitimate claim to the “high moral ground” of 20th century politics, it was on the subject of civil rights. But the civil rights movement from the 1950s onward was motivated extensively by religious liberals and “secular humanists” whose ethics were significantly informed by their Judeo-Christian tradition and culture. Of course, that is a tradition and culture also informed by the Enlightenment and, for that matter, by philosophical developments antithetical to the tenets of theism, especially including socialism; but the fact remains that liberalism of the 1950s and 1960s and the civil rights movement in particular found some of their most passionate and persuasive spokesmen and dedicated activists from believers.

But that was then. The number of nonbelievers in America has grown to the point where they now publicly complain about being a discriminated-against minority. (No doubt they always felt that way, but getting burned at the stake is a pretty harsh result for speaking out. How, one wonders, will nonbelievers treat the faithful if and when nonbelievers are in the majority?) Today, the candidate who believes in the Median Voter Theorem (and they all do) has the growing dilemma of a sort of naïve Kantianism among both secular and religious voters. It may not be enough for a candidate to claim to believe X is right or wrong; he may also have to try to convince voters that he believes it for the same reasons they do. Those who care about such things as logical rigor and intellectual integrity might find this an impossible challenge. Luckily for them, politicians care about neither. Unluckily for us, one or more of them always do get the job.

Tuesday, June 5, 2007

What's Red and Green and Blue All Over?

Libertarians of a certain sort are notoriously obsessive when it comes to political theory and economics. The too easy trope for such political obsession, whether of a libertarian bent or not, is to call it 'religious' and for some, at least, the label is apt. There are Articles of Faith, after all, about which any contrary evidence is brushed aside, studiously ignored or defined away. In full rant mode, the true believer's gaze takes on a disquietingly feral intensity of the sort that, in another setting, would prompt mental health professionals to go running for the industrial strength Haldol. You know the type; otherwise you wouldn't be reading this.

Still, when it comes to a fanatical devotion usually reserved for the unexpected minions of the Spanish Inquisition, nothing beats your dyed-in-union-label Marxist. I fondly remember listening back in the 60s and 70s to these fellows at school, standing behind tables strewn with ink smudged CPUSA pamphlets printed on paper too flimsy to make credible toilet tissue. As it became my avowed purpose in life by the mid 70s to distance myself as far as possible from my proletarian origins, their efforts were ultimately wasted on me. Still, like Scientologists, Lyndon LaRouche's followers and cultists of all sorts, one had to admire their almost inexhaustible capacity for cognitive dissonance.

They're getting harder and harder to find these days, even in American universities. Harder but, by golly, still not quite impossible. Here then, by way of Arts & Letters Daily, is a bit of newly-minted nostalgia (hey, that phrase sounds vaguely dialectical!), eco-socialist John Bellamy Foster's latest searing indictment of The Imperialist World System.

Let me just stir your own memories with the opening paragraph:
The concept of the imperialist world system in today’s predominant sense of the extreme economic exploitation of periphery by center, creating a widening gap between rich and poor countries, was largely absent from the classical Marxist critique of capitalism. Rather this view had its genesis in the 1950s, especially with the publication fifty years ago of Paul Baran’s Political Economy of Growth. Baran’s work helped inspire Marxist dependency and world system theories. But it was the new way of looking at imperialism that was the core of Baran’s contribution. A half-century later it is important to ask: What was this new approach and how did it differ from then prevailing notions? What further changes in our understanding of imperialism are now necessary in response to changed historical conditions since the mid-twentieth century?

Oh, yeah... good times!

Thursday, May 31, 2007

What Sam Brownback Thinks About Evolution

Today's New York Times includes an op-ed column by Kansas Senator Sam Brownback explaining in more detail his views on evolution.

Brownback is a lawyer and not the dumbest guy in Congress, but as public intellectual credentials go, not only is he no Daniel Patrick Moynihan, he's not even Newt Gingrich. Whatever good it may do him politically, the column is an intellectual muddle.

First, Brownback sets the stage by asserting "the complexity of the interaction between science, faith and reason." That's a nice touch, actually. The problem is that "faith" is a very ambiguous term. Faith of what sort and in what, exactly? We might reasonably claim that scientists, themselves, have faith in reason (and in evidence and so forth), but it is certainly not the sort of faith of which Brownback writes. What he means to imply but does not outright say is faith in the truth of certain specific doctrinal beliefs he happens to hold to be true, so it isn't the existence or nonexistence of faith, per se, that is at issue here but faith as belief in the correctness of certain substantive claims. Brownback writes:

The heart of the issue is that we cannot drive a wedge between faith and reason. I believe wholeheartedly that there cannot be any contradiction between the two. The scientific method, based on reason, seeks to discover truths about the nature of the created order and how it operates, whereas faith deals with spiritual truths. The truths of science and faith are complementary: they deal with very different questions, but they do not contradict each other because the spiritual order and the material order were created by the same God.

Let's dissect that. Why can't we "drive a wedge between faith [as Brownback understands it] and reason"? Because he wholeheartedly believes they cannot be contradictory? Why is that? Here he starts out reasonably well, noting that science and (Brownback's) faith address different questions; namely, questions about how nature operates and what he calls "spiritual truths." That's not so bad so far. If he had gone on to claim that their areas of concern were not complementary but incommensurable or merely that they bore no relationship to each other at all, rather like, say, there is no overlap between questions about auto mechanics and questions about music, I'd gladly agree with him. But he doesn't. What he does instead is simply assert his belief in God's agency. I don't happen to disagree with that belief as such, but the belief itself is no evidence or argument that scientific assertions and theological assertions cannot or do not contradict each other. As Brownback states it, it is merely a conclusion, an assertion of faith, actually, without any supporting argument or evidence. Viewed as a purported argument, it is entirely question-begging.

Brownback then shifts from the notion that science and faith are complementary to the notion that faith supplements the scientific method "by providing an understanding of values, meaning and purpose." Certainly, religious beliefs can provide a context for and even, insofar as they are believed, a rationale for one's values, etc. But they are not the only possible such contexts or rationales, nor is it at all clear how any of these things supplements the scientific method any more than my discussing jazz with my mechanic supplements his ability to fix my car. What Brownback might have said is that, just as a knowledge of both mechanics and jazz leads to a fuller life, a life focused only on the sorts of questions science can answer is a less full life than one that includes other concerns. But that isn't what he said and what he did say, insofar as it is intelligible, is false.

Brownback tips his hand when he writes, "If belief in evolution means simply assenting to microevolution, small changes over time within a species, I am happy to say, as I have in the past, that I believe it to be true." Implicit in this statement is that he does not believe, in particular, that our species evolved from other species (whether accidentally or not). This, of course, belies his purported belief that science and faith do not contradict each other, but we'll let that alone for now.

He goes on with the fairly typical Intelligent Design gambit of dropping the punctuated equilibrium hypothesis as evidence that real scientific questions remain unanswered in evolutionary theory. As others have written extensively, this is both true and irrelevant. What is especially relevant is Brownback's rejection of "arguments for evolution that dismiss the possibility of divine causality." This is the real heart of the matter and Brownback gets it exactly wrong.

I don't know a single scientist who believes as a matter of science that divine causality is impossible. I know some who do entirely reject the notion of divine causality as I know some who believe in it, but in neither case are they making a scientific claim and in neither case are their views at all relevant to evolutionary theory. The critical point here is that as far as the science of evolutionary theory is concerned, (1) its working hypothesis that divine causation is not necessary to explain how nature works has so far proved successful and (2) it is impossible, in any case, to either verify or falsify divine causation as we have come to understand what that assertion entails. I note, in passing, that some would claim the assertion is unintelligible or incoherent and, thus, incapable of being either true or false, but we'll leave that for another time.

Let's return to my mechanic friend who does not want to discuss why Miles Davis was one of the all-time jazz greats but wants me to understand, instead, why I should get the oil changed regularly in my car and so explains how internal combustion engines work. He describes the pistons moving up in their cylinders, compressing gas vapors, then how the vapors are ignited, causing a controlled explosion pushing those pistons back down and, at the same time, others on the crankshaft up, etc. He describes how the gears and such transfer that power from the rotating drive shaft to turn the wheels and so forth and why, therefore, the engine must be properly lubricated. Let's pretend I follow him but insist at every point in his explanation that this all happens because my personal deity, Mechano, makes it happen.

My mechanic, a more philosophically astute fellow than most politicians, points out to me that, while Mechano may indeed be the unseen force behind the workings of engines and motors, he does not need to believe in Mechano's existence to understand or to repair automobile engines. Maybe he does believe in Mechano, but none of the repair manuals and none of his training and experience have mentioned Mechano. In that sense, neither my nor his beliefs one way or the other about Mechano either complement or supplement his work. Whether they complement or supplement his or my life outside the area of auto repairs is another matter. Maybe they do, maybe they don't. In any case, this is the rough equivalent to evolutionary theory vis-à-vis Sen. Brownback's faith and this is why, again only roughly speaking, efforts to include non-evolutionary accounts of the origin of man in biology class curricula are met with the same sort of reaction I would get if I tried to pressure General Motors to include a chapter on Mechano in their service manuals.

There are any number of other problems with Brownback's column, but I'll just make one final point. He writes, "While no stone should be left unturned in seeking to discover the nature of man’s origins, we can say with conviction that we know with certainty at least part of the outcome. Man was not an accident and reflects an image and likeness unique in the created order."

Whether the final sentence of that claim is true or not, it is worth noting that one can be certain in one's convictions but nonetheless entirely wrong.

* * * * *

P.S. -- John Derbyshire likens Brownback on evolutionary biology to Paris Hilton on partial differential equations. The Derb goes on, as I did not, to do a nice job of tearing apart the implicit "science" of Brownback's weaseling over "micro" versus "macro" evolution.

Monday, May 21, 2007

"There are no vegan societies for a simple reason: a vegan diet is not adequate in the long run."

So writes Nina Plank in an op-ed column in today's New York Times, discussing the death of 6-week-old Crown Shakur, whose vegan parents fed him mainly soy milk and apple juice. The infant weighed 3.5 pounds and died of starvation. His parents were subsequently convicted of murder, involuntary manslaughter and cruelty.

I don't know Plank's qualifications, though I assume the Times vetted her before running the piece. Further, while I am definitely omnivorous, I have no problem with vegetarians of any sort so long as the ones who refrain from eating meat on what they believe to be moral grounds refrain as well on prudential grounds from trying to argue the point with me. If free-range tomatoes are your cuisine of choice, more power to you as long as we're talking only about what you, personally, choose to eat.

The case of Crown Shakur's death, however, points dramatically at what are and properly should be the limits of social tolerance of parental authority over children. That isn't a controversial notion among either liberals or conservatives; they differ for the most part only in the sorts of personal liberty they enthusiastically wish to prohibit. It is, however, a controversial notion among too many libertarians.

Too bad. Reasonable people can reasonably disagree whether, say, corporal punishment ought to be illegal or what minimal level of education parents should be responsible to ensure their children receive or even whether certain vaccinations or other medical attention should be required.

But there is no such thing as a reasonable case for permitting parents to starve their infant children to death. None whatsoever. And those who would argue on ideological grounds for complete parental control over children, unfettered by state interference, are on an equal moral footing with those whose merely different ideology could result in this sort of senseless and entirely avoidable death.

Sunday, May 20, 2007

Giving Less Than Your All

As promised or, depending on your point of view, threatened, I want to revisit Steven E. Landsburg’s new More Sex Is Safer Sex, this time addressing his contention that, given certain assumptions, it is preferable for a person to give his entire charitable contribution to whatever he deems the most worthy charity rather than parcel out his charitable contributions among various worthy charities. (Title reference: Landsburg discusses this in his chapter, "Giving Your All.")

Here is the basic logic of the argument. Begin with the key assumption that the various charities under consideration are all sufficiently large and address sufficiently large problems that, however large your contribution may be, it will nonetheless represent only a very small increase in their endowments and, when spent, similarly address only a very small part of the problem they seek to solve. Landsburg uses CARE and the American Cancer Society as examples, so I will, too. The thinking here is that your $10 or $100 or $1,000 isn’t in and of itself going to be the determining factor in finding a cure for cancer, nor will it feed all the hungry children in the world.

Let’s say you plan on contributing $1,000 to charity and, as a preliminary matter, thought you’d make your contributions in $100 increments. If you deem feeding hungry children a better cause than cancer research, then your first $100 will go to CARE. Landsburg’s argument, in the proverbial nutshell, is that however much good your $100 did to feed one or more hungry children, the number of hungry children is vastly larger, the other children (metaphorically) waiting in line to be fed next are equally deserving of your charity and so your next $100 should go to CARE for the same reasons your first contribution did.

The size of the problem and of the charity is critical. Looking at small-scale charitable contributions, e.g., whether you should contribute $100 toward fixing up a playground for children or toward fencing in a neighboring dog park (my examples), even if you like dogs more than children, at some determinate point the fencing gets paid for and it makes sense to contribute to the playground as well. That is, as Landsburg claims, you can make a real dent in small-scale problems whereas your contribution, viewed in isolation, cannot make such a dent in the overall problem of world hunger or medical research.

So far, so good. Of course, we’re simplifying matters here by considering only two charities, whereas the world is filled with other possible objects of your charitable attention. (The Ridgely Early Retirement & World Cruise Fund springs to mind here.) In principle, however, you could rank the worthiness of every such charity and one would eventually come out on top. If you really couldn’t decide which of your top two charities was worthier, Landsburg says “flip a coin and give everything to the winner. If the two causes are equally worthy, sending $200 to either is just as good as sending $100 to each – and it will cost you just one postage stamp instead of two.”

Well, no. Landsburg “does the math” in an appendix to make his point. The math is good; the assumptions underlying the math, not so good.
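
For concreteness, here is a hedged reconstruction of the sort of math involved (my notation; Landsburg's appendix may differ in detail). Split a budget $B$ between the two charities, $x$ to CARE and $B - x$ to the American Cancer Society, with worthiness weights $w_1, w_2$ and impact functions $f, g$:

\[
\max_{0 \le x \le B} \; U(x) = w_1 f(x) + w_2 g(B - x)
\]

The key assumption is that both charities are so large that any individual gift stays on an effectively linear stretch of $f$ and $g$, say $f(x) \approx c_1 x$ and $g(y) \approx c_2 y$ over the relevant range. Then $U(x) \approx (w_1 c_1 - w_2 c_2)x + w_2 c_2 B$ is linear in $x$, so the maximum sits at a corner: give everything to CARE if $w_1 c_1 > w_2 c_2$ and everything to the Cancer Society otherwise. Splitting can be optimal only when diminishing returns bite within the range of your own gift ($f'' < 0$ at your scale), which is the playground-and-dog-park case above.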

Landsburg’s argument depends on distinguishing between the satisfaction, however derived, one gets by giving to charity and the good such contributions do for others. Analytically, that makes perfect sense. Insofar as we are capable of drawing that distinction and focusing solely on the latter, the math works out just fine. Unfortunately, however, his “defense of pure reason” (which is more Spockian than Kantian) presupposes that people are capable of arriving at moral conclusions by reason alone; that is, that they are capable of and should be willing to set aside the self-serving motives of charity and to do the research required to crunch the numbers.

In fairness, Landsburg acknowledges both the reality and usefulness of self-serving motives and the limits on how far one can, should or will go in incurring the search costs of ferreting out charitable bang for the buck. But it seems to me he significantly underestimates them both. As with voting, information costs can be formidable, and so there is a real element of rational ignorance involved in deciding among charities for most of us. But, okay, let’s say that putting in some time and effort sorting out charities is legitimately a part of our overall charitable contribution.

It may be true that, having thus invested a reasonable amount of time and effort into investigating not only, as in Landsburg’s oversimplified model, the endowments of the American Cancer Society and CARE but also their relative overhead costs, their likely other sources of income and so forth, I determine that $200 to one is as good as $100 to each. (Landsburg says it shouldn’t matter if I know you are also going to give $100 to CARE, and he’s right. But what if I discover that some billionaire left his entire estate to CARE the day before I write the check? Might not that matter? Assuming you were dumb enough to contribute to NPR in the first place, might not Joan Kroc's $200 million contribution a few years ago have rationally swayed your coffee mug purchasing "membership" elsewhere?) Oh, and forget the stamp; they send pre-franked envelopes, and there’s always the internet to give through, anyway.

Well, then, it would be irrational (and in that strictly utilitarian sense, immoral) at that point for me not to consider self-serving reasons why I might wish to split my contributions. I would, as economists say, have failed to maximize utility, would in effect have decreased the net wealth of the world by not taking my own happiness into account. To tart up the point with a bit of slightly misused economics jargon, once I truly am indifferent regarding the two charities in terms of the good they will do for others, it certainly doesn't follow that I should be indifferent as to other distinguishing factors.

I further question the underlying assumption that there is anything approaching an objective answer to the question: which is better, curing cancer or feeding hungry children? Landsburg blithely sets up the dilemma as one of blind instinct versus logical analysis, but logical analysis gets us to interpersonal utility comparisons and all sorts of other messy concerns. There is such a thing as the illusion of objectivity, too; and one of the most notorious sorts of such illusions is the mathematical formula which, upon close enough inspection, turns out to be using unmeasurable or incommensurable factors. I admit, however, that these concerns would require a more extended consideration than I am giving them here.

Viewed as a matter of economic logic, Landsburg’s key insight is that, as between two unequally worthy major charities, the marginal utility of one’s subsequent contributions to the more worthy would not decrease sufficiently to justify giving to the second charity instead. Sure. It’s a great exam question, but it may still be highly questionable considered as real ethical advice.

Thursday, May 10, 2007

Nothing, In Particular

While dallying earlier today over at Urkobold® (your one-stop shop for all things internet trollish), I did a bit of research (read: "typed in a Google search") and came upon an unauthorized posting of an article by the late philosopher Peter L. Heath. My high respect for intellectual property notwithstanding, having some personal knowledge of Professor Heath's sense of humor, I cannot help but think that nothing would please him more. Herewith, then, a link to what may very well be the all-time definitive short article on the subject of "nothing."

Put a bit differently, you will find a better article on nothing in particular nowhere, but what are the chances of ever finding yourself there? Oh, sure, many philosophers have written extensively about nothing in particular or at least nothing that was especially interesting, and the number of philosophical treatises about nothing worth reading is legion. Still, although nobody has written more cogently about nothing than Professor Heath, nobody's work wasn't as readily available. Nothing ventured, nothing gained, as no one I can remember at the moment once said.

Sadly, Professor Heath's other great work of philosophical whimsy, The Philosopher's Alice, a (serious) philosophical look at Lewis Carroll's Alice's Adventures In Wonderland and Through the Looking-Glass, appears to be out of print. Should you run across a used copy or find it in your local library, I strongly recommend it to you.

Monday, May 7, 2007

A Case of Wrongful Life? (Notes on Facts and Values)

Old joke: A doctor tells his patient, "I'm sorry but you only have six months to live." The patient takes the news stoically and asks the doctor how much he owes him. "Five thousand dollars," the doctor says. "But I'll never be able to come up with that much money in six months, Doc!" "Okay, then," says the doctor, "you've got a year."

I said it was an old joke, not a good one. Meanwhile, while I look for better material, John Brandrick, 62, was told two years ago that he had terminal pancreatic cancer and only months to live. Brandrick quit his job, sold his possessions and spent what he thought was his brief remaining life taking vacations, eating in swank restaurants and such. A year later, his doctors revised their diagnosis. Brandrick was suffering from non-fatal pancreatitis.

Oops!

The AP reports:

"My life has been turned upside down by this," Brandrick said. "I was told I had limited time to live. I got rid of everything — my car, my clothes, everything."

Brandrick said he did not want to take the hospital to court, "but if they have made the wrong decision they should pay me something back."

The hospital said there was "no clear evidence of negligence" on its part.

"Whilst we do sympathize with Mr. Brandrick's position, clinical review of his case has not revealed that any different diagnosis would have been made at the time based on the same evidence," the hospital said in a statement.

Personally, I think the mere fact that the hospital used "whilst" in its denial is pretty clear evidence of negligence. No, not really. It's an interesting case, though. Here's this poor guy in his sixties, naked and carless, expecting to shuffle off this mortal coil any moment now, probably stuffing himself with fatty foods, gadding about in cabs instead of taking the Underground and tipping big all the while, when suddenly his imminent demise is snatched from his grasp, no doubt just as the money was running short.

Does he have any legal recourse against the hospital? I haven't a clue. Setting aside my ignorance of how the British courts deal with the various potential tort or contract remedies that any first year law student could think of scribbling down on an exam, together with all the likely defenses to those causes of action, the more interesting question is whether he should have some sort of legal remedy here.

I don't know whether there is settled case law on this particular situation, but something like it must have happened somewhere before, and it would be mildly interesting to know how courts or juries have resolved similar situations. Aside from being interesting at that level, however, it is also interesting as a good example (regardless of what, if any, law there is on point) of how knowing all the facts of a situation does not necessarily resolve a dispute arising from that situation.

Moreover, it isn't just a straightforward case of the difference between facts and value judgments, either. It is a case of that, to be sure, but of more as well. There are also applicable legal rules, or at least legal rules that we want to say are not "mere" value judgments and that should apply even though we may not know how to apply them. Learning the formal elements of negligence, for example, is easy: the defendant must have owed a duty to the plaintiff, must have breached that duty, and that breach of duty must have proximately caused the plaintiff harm. Of course, it can get much more complicated than that; "proximate" is a special bit of legal jargon, and so forth, but that's the nutshell version.
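Just to make the nutshell concrete, here is a deliberately naive sketch of the elements as a bare conjunction. It is my toy, not anything from a hornbook, and the predicate names are my invention; the point it illustrates is that the rule itself is trivial to state.

```python
# A deliberately naive sketch (mine, not legal doctrine made precise):
# the formal elements of negligence as a conjunction. Everything hard
# lives inside deciding whether each input is True, which is exactly
# the sort of weighing Wisdom describes below.

def negligent(owed_duty: bool, breached_duty: bool,
              proximately_caused_harm: bool) -> bool:
    """Nutshell test: liability only if all three elements hold."""
    return owed_duty and breached_duty and proximately_caused_harm

# Stipulate a duty and a breach; the live question in Brandrick's case
# would be the third input.
print(negligent(True, True, False))  # False: no proximate cause, no negligence
```

The code, of course, decides nothing; whether the hospital's conduct makes any of those inputs True is the whole dispute.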

Even so, learning the mere rules tells you next to nothing about how to apply them in a particular situation, how they should be applied in this situation. And if we face a new and somehow different set of facts from the facts to which the rules have previously been applied, then we must decide which facts are relevantly similar and which are relevantly different from those prior cases and how much weight to give to those similarities and differences. Herewith, the late philosopher John Wisdom approaching the matter a bit differently:
In courts of law it sometimes happens that opposing counsel are agreed as to the facts and are not trying to settle a question of further fact, are not trying to settle whether the man who admittedly had quarreled with the deceased did or did not murder him, but are concerned with whether Mr. A who admittedly handed his long-trusted clerk signed blank cheques did or did not exercise reasonable care, whether a ledger is or is not a document, whether a certain body was or was not a public authority.

In such cases we notice that the process of argument is not a chain of demonstrative reasoning. It is a presenting and representing of those features of the case which severally co-operate in favour of the conclusion, in favour of saying what the reasoner wishes said, in favour of calling the situation by the name by which he wishes to call it. The reasons are like the legs of a chair, not the links of a chain. Consequently although the discussion is a priori and the steps are not a matter of experience, the procedure resembles scientific argument in that the reasoning is not vertically extensive but horizontally extensive – it is a matter of the cumulative effect of several independent premises, not of the repeated transformation of one or two. And because the premises are severally inconclusive the process of deciding the issue becomes a matter of weighing the cumulative effect of one group of severally inconclusive items against the cumulative effect of another group of severally inconclusive items, and thus lends itself to description in terms of conflicting ‘probabilities’. This encourages the feeling that the issue is one of fact – that it is a matter of guessing from the premises at a further fact, at what is to come. But this is a muddle. The dispute does not cease to be a priori because it is a matter of the cumulative effect of severally inconclusive premises. The logic of the dispute is not that of a chain of deductive reasoning as in a mathematical calculation. But nor is it a matter of collecting from several inconclusive items of information an expectation as to something further, as when a doctor from a patient’s symptoms guesses at what is wrong, or a detective from many clues guesses the criminal. It has its own sort of logic and its own sort of end – the solution of the question at issue is a decision, a ruling by the judge. But it is not an arbitrary decision though the rational connections are neither quite like those in vertical deductions nor like those in inductions in which from many signs we guess at what is to come; and though the decision manifests itself in the application of a name it is no more merely the application of a name than is the pinning on of a medal merely the pinning on of a bit of metal. Whether a lion with stripes is a tiger or a lion is, if you like, merely a matter of the application of a name. Whether Mr. So-and-So of whose conduct we have so complete a record did or did not exercise reasonable care is not merely a matter of the application of a name or, if we choose to say it is, then we must remember that with this name a game is lost and won and a game with very heavy stakes.

(John Wisdom, "Gods," reprinted in Philosophy and Psycho-Analysis, 1969.)

We would like to say, or at least some of us sometimes think we would, that facts and values and the rules we use to apply the latter to the former have some sort of determinate and separate logic to them -- "No ought from an is!" or "Ought implies can!" we might proclaim. If we are very sophisticated indeed, perhaps we pull out some bit of philosophical legerdemain like supervenience to bridge our tidy looking dichotomy between facts and values. At the end of the day, however, whether we come equipped with theory or not, we must decide whether the hospital was negligent or breached some contractual duty and whether Mr. Brandrick's spending-spree was proximately caused by a breach of some such duty or implied promise and thus constituted harm to him now that he will likely live much longer and so forth. That, in turn, requires the application of rules which are neither facts nor values or, if you like, are both.

How should we decide?

Thursday, May 3, 2007

“It is seldom that liberty of any kind is lost all at once.”

"That sentence from Hume," writes Roger Kimball in The New Criterion, "stands as an epigraph to The Road to Serfdom. It is as pertinent today as when [Fredrich] Hayek set it down in 1944."

Harkening back to my cyber-discussion with Mona yesterday, I don't know whether The New Criterion could properly be called neocon or not, but it certainly is conservative in its perspective, at least in matters aesthetic and frequently in matters social as well. Whatever my libertarian tendencies may be, since I happen to share much of their aesthetic and some of their social conservatism, I rarely have much of a problem with what I read at TNC.

As Reagan observed, my 80 percent ally is not my 20 percent enemy, and while where the 80/20 divide falls varies, I find that true across most of the political spectrum. My point is only that there are plenty of conservatives of a certain sort who have not been corrupted by the Borg Bush Administration (just as there are many liberals of a certain sort) with whom I share all sorts of common cause. Besides, that leaves plenty to bitch about on both sides.

Be that as it may, the Kimball article is well worth a read, though it will tell Hayek fans (like Mona) nothing they didn't already know.

Thursday, April 26, 2007

"No, no! Not the war, but this war!"

Unlike “the present King of France,” who, Bertrand Russell once argued, was not a logically proper subject straightforwardly (and thus meaninglessly) denoting a nonexistent entity but a disguised description, “the war in Iraq” would seem to refer to, well, the war in Iraq. Thus, when Senate Majority Leader Harry Reid recently said that the war in Iraq is lost, we might reasonably assume (1) that we understood what he meant and (2) that it was either a very bold or a very stupid thing to say. Or both. Given the reaction so far, the consensus seems to be leaning toward stupid, enough at least to prompt his Senate colleague, Chuck Schumer, to engage in a bit of philosophical analysis himself; to wit:
What Harry Reid is saying is that this war is lost -- in other words, a war where we mainly spend our time policing a civil war between Shiites and Sunnis. We are not going to solve that problem. . . . The war is not lost. And Harry Reid believes this -- we Democrats believe it. . . . So the bottom line is if the war continues on this path, if we continue to try to police and settle a civil war that's been going on for hundreds of years in Iraq, we can't win. But on the other hand, if we change the mission and have that mission focus on the more narrow goal of counterterrorism, we sure can win.

So, too, as David Broder also recalls that other great political metaphysician, Bill Clinton, famously wondering whether existence is a predicate as he grappled with the definition of “is,” we should be comforted by the highly nuanced appreciation of the vagaries of language our elected officials are capable of explicating at the drop of an overnight opinion poll, let alone a subpoena.

Of course, the war in Iraq isn’t the sort of ‘thing’ that has tidy and discrete definitional or physical boundaries, so perhaps we can forgive Schumer this sort of semantic backpedaling. Whether the war in Iraq is lost or not does, after all, depend on how one defines the phrase “war in Iraq,” not to mention “lost,” doesn’t it? And I, for one, would be happy to learn exactly how we “sure can win” that “more narrow goal of counterterrorism."

Care to give us the details, Senator Schumer?

"Far out, Moonbeam! They've co-opted our co-op!"

With a hat tip to Right Reason, I came across an Economist article (reprinted here in Financial Express) entitled “Postmodernism is the new black.” The article suggests that contemporary retail “niche” marketing has been influenced by the likes of such so-called postmodern thinkers as “[Jean-François] Lyotard, Roland Barthes, Michel Foucault and Jacques Derrida [who] were all from the far left ... [and who] wanted to destroy capitalism and bourgeois society.”

It seems to me, however, that the article makes more, for example, out of Foucault's suggestion that his followers read Hayek than is really there to be made. (I’d have to say, by the way, I know more people who started with Hayek and eventually got around to reading Foucault than vice versa.) Richard Branson, mentioned in the article, and other modern marketers may have been exposed to a bit of PoMo musings back in their school days and may even subsequently have co-opted some PoMo pet phrases, but I doubt that sort of thing is much different from the almost obligatory references to Wittgenstein, Einstein or Freud some thirty or forty years ago.

I wrote recently about the irrelevance of who gets to anchor the CBS Evening News, noting the proliferation of alternative news sources in contemporary society. This, too, is a sort of PoMo “fragmentation” example. However, we might just as easily say that new technology has simply facilitated an expanded market in response to a previously frustrated demand for alternatives. There may be counterexamples here and there, but the Leftist suspicion that producers create demand, as opposed to merely (if they’re lucky) responding to it, is itself suspect. The problem isn’t that we don’t really want all the things producers would sell us but that producers don’t have nearly as many things to sell us as we want.

That is not to say there isn’t now or hasn’t always been a great deal of crap offered for sale, including not only material crap but aesthetic and intellectual crap, and it is as true of expensive crap as it is of cheap crap. That is, long before Rodeo Drive there were the shops at St. Mark’s Square in Venice and today's Dollar Store is simply this generation’s Five & Dime. Quick Quiz: Other than Christopher Marlowe and Ben Jonson (and without using the internet), name a few of Shakespeare’s fellow Elizabethan playwrights. Nope, I can’t either. Much of what was produced in every age or generation was quite properly soon discarded and forgotten. Why should our age be any different?

The contemporary enclosed mall is in many respects merely the climate controlled downtown of an earlier generation, itself the shop-by-shop enclosed equivalent of the agora. If anything, the most depressing thing about the contemporary mall is its lack of diversity. The same chain stores are to be found in every mall from coast to coast and the anchor department stores offer the same depressingly few currently popular designer goods also sold at the designers’ own boutique shops, often in the same malls. The internet holds the promise of being the ultimate mall, but even it is content starved by comparison to our desires, the virtual equivalent of Moscow's old Soviet-era GUM department store, vast beyond comprehension but with largely empty showcases and shelves.

Having lived through (and participated in) the counter-culture of the 1960s, I have some visceral lingering sense of the frustration among some of those who never quite made it out of the ’60s over what they take to be the bourgeois capitalist co-opting of their anti-capitalist, anti-bourgeois worldview. But that is to say only that I was once blissfully unaware that all those beads and tie-dyed shirts and bongs and black lights and “underground” records and such were even then just a market response to demand, no different from how PBS has now replaced most of its airing of symphonic music with Jimi Hendrix at Woodstock or reunion tours of once skinny, long haired rockers who now stand paunchy and bald on stage to play their decades old greatest hits. Even “the French philosophers whose interest in accessories was limited to a Gauloise” needed not only the local tabac but also the huge commercial network that stocked its shelves. Should we go all PoMo and call that a Niche-sche market?

Friday, April 13, 2007

Sandel on Embryo Ethics

With a hat tip to both Reason's Ronald Bailey and Arts & Letters Daily, herewith, with commentary, are excerpts from Harvard professor Michael J. Sandel’s recent Boston Globe column on the subject of embryo ethics. First, Sandel fairly states what is, more or less, my own position, as follows:

Human beings are not things. Their lives must not be sacrificed against their will, even for the sake of good ends, like saving other people's lives. The reason human beings must not be treated as things is that they are inviolable. At what point do we acquire this inviolability? The answer cannot depend on the age or developmental stage of a particular human life. Infants are inviolable, and few people would countenance harvesting organs for transplantation even from a fetus. Every human being -- each one of us -- began life as an embryo. Unless we can point to a definitive moment in the passage from conception to birth that marks the emergence of the human person, we must regard embryos as possessing the same inviolability as fully developed human beings.

Then:

This argument can be challenged on a number of grounds. First, it is undeniable that a human embryo is "human life" in the biological sense that it is living rather than dead, and human rather than, say, bovine. But this biological fact does not establish that the blastocyst is a human being, or a person. Any living human cell (a skin cell, for example) is "human life" in the sense of being human rather than bovine and living rather than dead. But no one would consider a skin cell a person, or deem it inviolable. Showing that a blastocyst is a human being, or a person, requires further argument.

True. The issue isn’t whether a human blastocyst is merely human life but whether it is a human life. As a matter of ordinary language, what we call a human life is a human being. Moreover, what we typically call a human being is a person. Of course, convention and ordinary language do not settle the matter. Strictly speaking, they are not even arguments in support of one view versus the other. But neither are they wholly lacking in probative value. How we weigh that probative value is another matter, of course, but there are likely to be good reasons why we pre-reflectively sort out the world the way we do just as there are likely to be good reasons why biologists may choose to differentiate the life cycle of complex living organisms by deeming one stage a blastocyst, another a fetus or embryo, another as immature and yet another as mature or adult. Just as ordinary usage is far from dispositive insofar as ethical considerations are concerned, so too are biological terms of art.

Sandel continues:

Some try to base such an argument on the fact that human beings develop from embryo to fetus to child. Every person was once an embryo, the argument goes, and there is no clear, non-arbitrary line between conception and adulthood that can tell us when personhood begins. Given the lack of such a line, we should regard the blastocyst as a person, as morally equivalent to a fully developed human being.

Well, some may make that argument. I don’t. I think the better argument and the real point is rather who should bear the moral burden of proof. Regarding a human blastocyst (and note how those who hold Sandel’s position not only employ the distancing language of biology but also avoid as much as possible using the morally critical adjective "human") as a person calls for a moral decision. But so, dear reader, does calling you or me or Prof. Sandel a person.

That we typically have neither factual nor normative grounds to deny the personhood of, well, of another person means only that accepting or acknowledging such personhood is the standard condition and paradigm of our experience. If you accept the proposition that under ordinary circumstances those other beings you encounter every day are not only human beings in the biological sense but in the morally significant sense, i.e., persons, then the moral force of the so-called non-arbitrary line or slippery slope argument derives from personhood being a defeasible claim. That is, X (where X might be you or me or a child or infant or crowning pre-born or human blastocyst or even a Harvard professor) is a person unless, well, unless what?

Answers to the “unless what” question can be and have been offered to claim personhood in some such cases and deny it in others, but consideration of the soundness or persuasiveness of such answers and arguments is beyond the scope of this post which is intended only to respond to Sandel’s column. He considers the non-arbitrary line argument, as he phrased it, unpersuasive. I agree. But if the burden of proof falls, as I believe it morally must, on those who would contend “This X is not a person; therefore, we may harvest its cells or organs,” then it falls to Sandel’s side of the dispute to provide the morally significant criteria that make the line, wherever it may be drawn, not arbitrary.

More Sandel:

Consider an analogy: although every oak tree was once an acorn, it does not follow that acorns are oak trees, or that I should treat the loss of an acorn eaten by a squirrel in my front yard as the same kind of loss as the death of an oak tree felled by a storm. Despite their developmental continuity, acorns and oak trees differ. So do human embryos and human beings, and in the same way. Just as acorns are potential oaks, human embryos are potential human beings.

The acorn analogy seems to be very popular among academics, but I fail utterly to grasp its persuasiveness. How we should regard persons or, for the sake of argument, even putative persons is qualitatively different from how we should regard other entities or beings, and that is so regardless of how our taxonomy of such other beings might play out. The moral rights of non-human animals, sentient machines or intelligent space aliens aside, whatever my reasons for regarding an acorn one way and an oak tree another way may be, my relationship will be, to use Martin Buber’s distinction, an I-it relationship and not an I-thou relationship.

The analogy, in other words, is simply not relevant, for precisely the reason the dispute arises in the first place; namely, that persons are different from non-persons in a moral sense. How we distinguish between persons and non-persons is therefore necessarily a matter of providing morally significant criteria. There may be all sorts of differences between oak trees and acorns, but there aren’t any morally significant ones, at least none that I can think of. Again, that isn’t to claim that developmental differences in the lives of human beings are of no moral significance at all. Sometimes they are, sometimes they aren’t.

For that matter, it is one thing to note that there are morally significant differences among persons that are a function, for example, of their age or stage of development but that do not go to whether they are persons, another to assert that the existence of such distinctions is itself evidence in support of denying the status of personhood in other cases. Arguing that a three year old child shouldn’t be allowed to do whatever it wants or be given access to dangerous weapons, for example, is irrelevant both to whether that child is a person and to whether it was a person some three and a half years ago.

Sandel again:

The distinction between a potential person and an actual one makes a moral difference. Sentient creatures make claims on us that nonsentient ones do not; beings capable of experience and consciousness make higher claims still. Human life develops by degrees.

This is completely question-begging. The matter in dispute being precisely what is a person and what isn’t, of course potential persons, as a subcategory of the vast universe of things that are not persons, are, well, not persons. Peter Singer and the PETA crowd aside, whether merely being a sentient creature suffices to “make claims” on us is, to put it mildly, not yet a settled matter, and Sandel’s mere assertion takes us no closer to settling it. Human life does indeed develop by degrees, but that is not to say we are clueless as to when and how it begins. Again, the question is whether that point in the life of a human being or some later point establishes that human being’s personhood.

Those arguing the later point typically assert something along the lines of Sandel’s “beings capable of experience and consciousness make higher claims still.” I agree. But what counts as being a “being capable of experience and consciousness” remains to be fleshed out, as it were. Must such a being be capable at present? Certainly, that can’t be the relevant criterion; otherwise we would not be people while asleep or unconscious (say, under anesthesia). What level of consciousness is necessary to make those higher claims? Does a newborn’s suffice, or is a neonate still merely a potential person? Mind you, there are logically consistent and ethically defensible arguments to support the neonate’s merely potential personhood. Whether they comport with our moral sentiments is another matter. It takes a highly developed level of intellectual and ethical sophistication to believe, for example, that non-human animals have rights but third trimester prenates don’t. I don’t claim that Sandel believes that – I don’t know one way or the other – but some people do believe it.

The rest of the Sandel article raises objections to President Bush’s supposedly morally inconsistent position on embryonic stem cell research on grounds that it is, um, morally inconsistent. I am no apologist for the Bush Administration, but I will offer a couple of observations in response. First, it is true that the logically consistent view of those who contend that there is something immoral about embryonic stem cell research because human embryos are human beings must be “that the 400,000 excess embryos languishing in freezers in US fertility clinics” are also human beings. Whether “they should also be leading a campaign to shut down what they must regard as rampant infanticide in fertility clinics” is another matter.

Whether what Sandel calls Bush’s “don’t fund, don’t ban” policy is morally inconsistent or not, legitimate moral distinctions can be made regarding the proper use of federal funds without raising the underlying moral objection to a Kantian categorical imperative. So, too, a robustly utilitarian ethos of the sort all too familiar at, say, Harvard and Princeton can quite reasonably agree, for example, to permit abortions in cases of rape or incest as the regrettable price to pay in a political compromise to save other human lives. There is no per se moral inconsistency or failing in saving however many people one can from a burning building despite not being able to save all the others. Not even from an ivy-covered building at Harvard.

Tuesday, April 10, 2007

Notes on Reductionism

Not counting theology, there probably is no more inherently hubristic academic discipline than philosophy. The “love of wisdom,” as it literally translates, affords the practitioner the salubrious or, depending on your perspective, dubious privilege of sticking his nose into the business of literally every other intellectual endeavor and more than a few that don’t even rise to “intellectual.” Take philosophers of science, for example. Often not scientists themselves, they nonetheless presume to analyze and comment on what it is that scientists, themselves, do, how they go about doing it and what sense or, occasionally, nonsense they speak when they, themselves, talk about what it is they do.

Case in point (by way of the indispensable Arts & Letters Daily), a book review in American Scientist by University of Exeter Professor John Dupré of Darwinian Reductionism: Or, How to Stop Worrying and Love Molecular Biology by Duke University philosopher Alex Rosenberg. Put in its simplest terms, Dupré contends that Rosenberg subscribes to a sort of physicalism termed reductionism, a view that is in general disfavor among philosophers of science although, in one version or another, it is almost certainly the naïve view of most practicing scientists themselves.

(By “naïve,” I don’t mean to say that most practicing scientists are ill informed or unsophisticated about the nature and practice of science but only that, being practicing scientists, they spend the bulk of their time doing science, not philosophy of science. Similarly, to say of a philosopher of science that he is not a scientist is not a way of discrediting what it is that he does (my somewhat caustic introduction aside) so much as a way of putting what it is he does into some sort of perspective. There are, to be sure, philosophically sophisticated scientists and scientifically sophisticated philosophers.)

Part of the problem here, as readers of the linked Stanford Encyclopedia of Philosophy entry will learn, is that both physicalism and reductionism are variously defined concepts. What sort of physicalism and what sort of reductionism one is espousing makes a great deal of difference. Here, anyway, is Dupré’s account of Rosenberg’s position:

[Rosenberg’s] new idea is that recognition of the pervasiveness of Darwinism in biology will enable us to assert reductionism after all. Rosenberg is an admirer of Dobzhansky's famous remark that nothing in biology makes sense except in the light of evolution:

Biology is history, but unlike human history, it is history for which the "iron laws" of historical change have been found, and codified in Darwin's theory of natural selection. . . . [T]here are no laws in biology other than Darwin's. But owing to the literal truth of Dobzhansky's dictum, these are the only laws biology needs.

The suggestion is that something Rosenberg calls "the principle of natural selection" is actually a fundamental physical law. Natural selection, according to him, is not a statistical consequence of the operation of many other physical (or perhaps higher-level) laws, as most philosophers of biology believe. Rather, it is a new and fundamental physical law to be added to those already revealed by chemistry and physics.

Well, maybe. I’ll leave it to Rosenberg and Dupré to fight this one out, commenting only that I suspect most scientists would find it a bit overreaching for a philosopher to assert what is or is not a fundamental physical law, let alone that natural selection counts among them.

Let’s back up a bit. It seems fairly uncontroversial to say that, whatever biology is, its foundation is to be found ultimately in chemistry and, similarly, that whatever chemistry is, its own foundation is to be found, ultimately, in physics. But that is only to say that biological organisms and their component parts are composed of chemical molecules, themselves composed of atoms composed of elementary and sub-atomic particles. That is, after all, what we believe about the material world. Moreover, unless we subscribe, for whatever reason, to an ontology that includes a sort of being or substance other than the material substance of the physical world, we are likewise committed to the view that whatever it is that mental activity and the mind may be must also find its foundation in the material; namely, the brain.

Fair enough. However, what exactly it is that we mean here by “foundation” (my rough term, by the way) is unclear. To be sure, all the evidence points to the notion that human beings are, in purely physical terms, “nothing but” highly complex combinations of elementary particles doing what elementary particles, thus combined, happen to do. We remain a very long way, however, from being able to close all the gaps from here to there, and so the question remains, both scientifically and philosophically, whether in principle we shall ever be able to do so. Philosophers’ use of such concepts as supervenience and qualia is an attempt to bridge those gaps conceptually even as science attempts to bridge them scientifically.

And, no, I’m not talking about evolution here. Think, instead, of the physicist’s understanding of heat as motion and, let's say, an imaginary philosopher's at least facially reasonable question in response: whether the proposition "I feel heat" is, or must therefore be, semantically or logically or referentially equivalent or identical or translatable to the proposition "I feel motion." Yes, I know I'm conflating the physical phenomenon of heat with the psychological sensation of heat here, something a good (even imaginary) philosopher wouldn't do. What I am groping after, however ineptly, is only that the nexus between the world as science explains it and the world as we experience it is not nearly as straightforward as we are sometimes inclined to believe, and that this isn't obviously or simply because science hasn't yet "finished the job."

Science, ultimately, is a human endeavor, an endeavor to understand and explain the world as it is, but such understanding is necessarily human and, as such, necessarily bound by the conceptual and linguistic apparatus of human thought. That is to say, among other things, that science operates within and only within our linguistic, conceptual schema and, as a result, is bound to the limitations, complications and confusions of that schema just as is, analogically, the work of the artist whose representation of the world is necessarily limited by the limits of human perception.

Is, for example, logic, the sine qua non of rationality, a fact about the world or “merely” an incident of human cognition – an accidental fact about how the brain happens to operate? Are elementary particles popping into and out of existence and “action at a distance” aspects of the world as it “really” is or merely necessary posits in our best, current working model of that world? (And what work is “really” doing in that question?) Is what we naïvely - there’s that word again - take to be intentional thought and action necessarily illusory in a world comprised of “nothing but” those particles “randomly” doing their thing, or is some viable sense of genuine human freedom compatible with such a physical world even as human consciousness self-evidently is? Would science itself be possible except in the context of our ordinary and decidedly nonscientific and imprecise language? I think it is clear that the answer to the last question, in particular, is no and, further, that if that is so, it raises all sorts of legitimately philosophical, non-scientific questions about what it is science can and cannot hope to accomplish in terms of affording a complete and consistent understanding of the world and our relationship to it.

I have neither the hope nor the intent of addressing these concerns in detail here, let alone resolving them. (Nor, for now, to argue them in great detail in comments, thanks anyway.) Rather, I want for now simply to point, however obliquely, to several of them and to nudge the reader into considering some of the complications involved in sorting them out. For what it’s worth, my own experience in trying to sort out such questions to my own satisfaction has been that a frontal attack tends not to work very well. Then again, maybe that’s just me. After all, I'm just a guy doing this blog thingie here.

Wittgenstein famously said that philosophy "leaves everything the way it is." (Philosophical Investigations, § 124.) So, I think, does science. It is only we, ourselves, in our better understanding of how things are, that are changed.