Saturday, August 30, 2008

Jitters Bugged?

Old joke:

Two Roman Catholic theologians, one a Jesuit and the other one a Dominican, are arguing about prayer and smoking. (Hey, I said it was an old joke. This was before smoking became a secular sin only slightly less heinous than child abuse.)

So, anyway, the Jesuit says there’s nothing wrong with praying and smoking at the same time, while the Dominican is equally adamant that it’s disrespectful to God and thus sinful. The argument goes on and on and finally they decide to submit the question to the Vatican, which they both do.

Months pass as, left to their own devices, months will, and finally the Jesuit and Dominican meet. As they see each other big smiles break out on both their faces. “I told you so!” the Dominican almost shouts. “What are you talking about?” the Jesuit says, “I just got word back from Rome recently that I was right.” “That’s impossible,” the Dominican says. “I just got word back from Rome telling me that I was right.” The two theologians stand there silently and bewildered.

Finally, the Jesuit smiles. “Wait a minute,” he says. “What exactly did you ask?” “I asked exactly what we were arguing about. I asked if it was a sin to smoke while you were praying.” “Ah ha!” the Jesuit exclaimed. “I thought so! That’s the problem. You see, I asked if it was a sin to pray while you were smoking!”

To borrow from Wittgenstein, while we may not constantly be bewitched by language, we are always in danger of being misled by some sort of linguistic stage magic, and this is true even though much of it is unintentional and some is even self inflicted. How we characterize something (e.g., “pro choice” or “pro life”) already inclines us to one sort of judgment versus others.

But that’s not simply to note that words have emotional connotations as well as objective denotations. Wittgenstein, again: “Can one play chess without the queen?” What question is being asked? Certainly not whether one can continue playing chess after one or both queens are captured. What then? Whether one could play a game like chess except without queens? Again, ignoring how good a game it might be, the answer fairly obviously is yes. What about whether such a game still ‘really’ was chess or still ‘should’ be called chess? Is that a factual question? One that perhaps still requires more data to resolve or, as is typically true in philosophical disputes, one that calls more for a decision which, in turn, will depend on how we go about weighing this consideration versus that?

So, also, are performance enhancing drugs in athletic competitions per se unfair? Doesn’t it depend on how and why they enhance performance? Philosopher/physician Carl Elliott raises that question in a current Atlantic piece, arguing that, at the very least, what counts as performance affects our answer to that question. Is the ability to perform in public under intense pressure an integral part of the very athletic ability being judged, or should an otherwise gifted athlete’s greater sensitivity to pressure and higher state of anxiety be considered irrelevant?

Beta-blockers (a common class of anti-hypertension drugs), for example, tend to reduce the physiological effects of anxiety. Not the anxiety itself, mind you, but only some of its outward effects, such as hand tremors. Thus, their use is banned in some competitive sports, but the validity of the rationale for their ban depends on whether we’re talking about smoking while at prayer or praying while having a smoke. Elliott:
Beta blockers are banned in certain sports, like archery and pistol shooting, because they're seen as unfairly improving a user’s skills. But there is another way to see beta blockers—not as improving someone’s skills, but as preventing the effects of anxiety from interfering with their skills. Taking a beta blocker, in other words, won’t turn you into a better violinist, but it will prevent your anxiety from interfering with your public performance. In a music competition, then, a beta blocker can arguably help the best player win. ... The question is whether the ability to perform the activity in public is integral to the activity itself.

I have no dog in this fight. (By way of Truth In Bloggistry disclosure, it happens that I take beta blockers for hypertension, but I’m not inclined to public performance anxiety and, besides, there are no performance enhancers of any sort that would make me an athlete. If instead of Dr. Bruce Banner I’d gotten the gamma rays, the Hulk would have been an overgrown but still uncoordinated doofus.) I don’t care whether either amateur or professional athletes are permitted to take beta blockers or, for that matter, any other performance enhancing drugs. My only point here is that how one answers these sorts of questions depends in large measure on how one frames the questions in the first place.

That settled, feel free to take out your prayer beads now and, oh, yeah, smoke ‘em if you’ve got ‘em.

Sunday, June 15, 2008

It's a Bug and a Feature!

Today's (UK) TimesOnline reports "Scientists find bugs that eat waste and excrete petrol." The scientists in question are the Silicon Valley variety and the bugs in question are the genetically engineered variety. The report goes on:
Unbelievably, this is not science fiction. Mr Pal holds up a small beaker of bug excretion that could, theoretically, be poured into the tank of the giant Lexus SUV next to us. Not that Mr Pal is willing to risk it just yet. He gives it a month before the first vehicle is filled up on what he calls “renewable petroleum”. After that, he grins, “it’s a brave new world”.

So it is, or will be. How soon, however, is another question, and a month is, to put it mildly, a tad optimistic.

But who knows? That's the thing about technological revolutions. While they do, indeed, build on what has been discovered or invented before, there really are "Eureka!" moments that change everything forever, too. I have little doubt that as physics, engineering, electronics and computer science were the motive forces of 20th century technology, genetic engineering and genetic medicine will be the big stories of the 21st century, certainly revolutionizing medicine and quite possibly revolutionizing energy production, too.

Meanwhile, no word so far on whether scientists have had any luck bioengineering bugs that eat politicians and excrete productive people.

Sunday, June 8, 2008

The Ethics of Public Health and Safety Officials

Today’s (UK) Independent Online runs a story entitled "Threat of world Aids pandemic among heterosexuals is over, report admits." While the story notes that the now more than 25-year-old disease continues to kill “more than all wars and conflicts,” the far more newsworthy (in the sense of new and unusual) part is as follows:
In the first official admission that the universal prevention strategy promoted by the major Aids organizations may have been misdirected, Kevin de Cock, the head of the WHO's department of HIV/Aids said there will be no generalized epidemic of Aids in the heterosexual population outside Africa.
This is, to be sure, not good news for homosexuals or Africans; but it is, that sad fact notwithstanding, well past time the epidemiological realities of HIV/Aids risk were acknowledged. Just in case there is an outbreak of candor going on among public officials (yes, I know), perhaps someone could say the same thing about the resources misspent on generalized screening for possible terrorist suspects in order to avoid profiling.

This is a delicate topic. The “pink disease” was first detected among a handful of homosexual men in Los Angeles in the early 1980s, but the illness, originally named Gay Related Immune Deficiency, began to attract serious general public attention in the U.S. only after cases of heterosexuals contracting it (e.g., female sexual partners of AIDS patients and blood transfusion recipients) were documented. I speak here purely anecdotally, but my impression in the early to mid 1980s was that the U.S. shifted rapidly from a state of almost complete indifference to the plight of homosexuals and IV drug users to a state of panic among the general public over its own risk.

Of course, the medical community was mostly ignorant of the nature of HIV/Aids, itself, in the 1980s. But a decade later we had a much better understanding of the retrovirus and, thankfully, much better available treatments. Most relevant here, however, we also had ample epidemiological evidence leading to an almost overwhelmingly obvious conclusion: white, heterosexual male, non-IV drug users -- in other words, the demographic group who wielded the most power in the U.S. and, indeed, in the world -- faced just about the smallest real risk of contracting HIV/Aids possible.

Counter-factual arguments being what they are, there is no way of telling whether public support and, more to the point, public funding for HIV/Aids research would have been nearly as extensive in the past quarter century if the general public had known that claims of the universal risk of contracting HIV/Aids were, although true, highly misleading.

Certainly, however, it is at least not unreasonable to suspect that support and funding would not have been as extensive, and perhaps not nearly as extensive, which raises the following interesting ethical question: Is misinforming or misleading the public ever ethically justified on grounds of public health and safety?

By way of addressing this issue somewhat obliquely, let’s ignore for now concerns about giving undeserved ammunition to homophobes and drug warriors whose worldview continues to include the belief that HIV/Aids is God’s punishment for being gay or using drugs. (In passing, I have yet to hear from those who hold that view how it is that God is so piss-poor at punishing junkies and queers that all He can manage to do is put them in a higher risk category?!?) Let’s consider Africa, instead.

A month or so ago, the Onion ran an almost throwaway one-liner in the crawl below one of their Onion News Network videos. It read:
ABC cancels new reality show Who Wants To Save Africa? after second episode.
Indeed. (And, yeah, it’s so painfully true that it is funny.)

Of course, you’d be hard-pressed to come up with ways in which sub-Saharan Africa isn’t a basket case, and even if you could magically eliminate HIV/Aids from the continent, Africa’s public health record would still be abysmal. But, no doubt about it, HIV/Aids has been epidemic in Africa’s general population to an extent seen nowhere else. Why?

Dr. de Cock (I know, I know!) says:
It is the question we are asked most often – why is the situation so bad in sub-Saharan Africa? It is a combination of factors – more commercial sex workers, more ulcerative sexually transmitted diseases, a young population and concurrent sexual partnerships.

Sexual behavior is obviously important but it doesn't seem to explain [all] the differences between populations. Even if the total number of sexual partners [in sub-Saharan Africa] is no greater than in the UK, there seems to be a higher frequency of overlapping sexual partnerships creating sexual networks that, from an epidemiological point of view, are more efficient at spreading infection.

Which is to say that there are not only political and economic differences but also social differences in much of African culture that make the spread of HIV/Aids that much more intractable.

Here is the reality, though. As terrible as HIV/Aids is, it is only one of the terrible ways people die needlessly in Africa or, for that matter, around the world. As Reason’s Ronald Bailey recently wrote in a report on the 2008 Copenhagen Consensus Conference, “[T]he number 1 priority identified by the experts in the 2004 Copenhagen Consensus was combating HIV/AIDS. That dropped to number 19 in the 2008 ranking."

Ceteris paribus, the same must be said of the U.S., as well.

There are, to be sure, all sorts of objections that can be raised in good faith to that perspective. I wouldn’t be a bit surprised if the medical research focusing on a cure for HIV/Aids didn’t yield important findings for other diseases and disorders. I suspect that the rise of HIV/Aids in the U.S. actually contributed positively to the struggle for gay civil and human rights, ironically enough. Whether disingenuous or not, suggesting that the entire population was similarly at risk for HIV/Aids diminished the stigma unfairly attached to those who, for whatever reason, contracted it. These are certainly collateral benefits to the emphasis in HIV/Aids research and public health policy in the past twenty-five or so years.

But every benefit has a cost, and every tradeoff is susceptible to the reasonable question, was that a good deal? Put differently, only progressives – and not very bright progressives, at that – whine at this point “Well, it shouldn’t be a case of ‘Either / Or.’ We should be able to support HIV/Aids research and treatment and address all those other health and safety problems, too. You’re arguing a false dilemma.”

It may be a false “dilemma,” but it is a very real tradeoff. A dollar spent on X is necessarily not a dollar spent on Y.

So, too, with our most recent insanity, the War On People Living In Caves Terrorism and its most strikingly absurd manifestation in commercial air travel. Randomly searching the luggage and persons of geriatric Lutheran women from Minnesota will not increase air safety any more than police All Points Bulletins advising officers to be on the lookout for suspects “of no particular demographic characteristics” will help apprehend the bad guys. To all intents and purposes, such women are the statistical equivalents of the white, heterosexual male, non-IV drug users in the case of HIV/Aids.

Yes, there’s a real and vitally important difference between describing someone who has actually committed a crime and targeting people simply because there is a statistically significant correlation between their demographic characteristics and the commission of a potential crime. (And, yes, police engage in the sort of racial profiling that no court can prohibit because, for better or worse, it’s the same sort Jesse Jackson and Chris Rock engage in. And, yes, it’s a bad thing and one of the reasons why, comparatively speaking, being black in America still sucks.)

And there’s “always the possibility,” the ever incompetently vigilant TSA will tell you, that Osama Bin Laden could recruit some Prairie Home Companion grandmother to pack some C-4 up her, well, you know, to blow up that puddle jumper from Omaha to Ft. Worth, too. Absolutely true. Here are some other possible occurrences: invasion by space aliens, commercially viable cold fusion energy using ordinary household products, George W. Bush winning the Nobel Peace Prize, my wife finally unpacking and sorting the stuff in the garage (Ouch! Sorry, dear!), a Pauly Shore movie not sucking, and, well, you get the picture.

Exaggerating the risk from or to Group A while discounting the risk from or to Group B always has attendant costs, consuming resources that could otherwise be used to address some of those other, perhaps even more important, health and safety issues. In some cases, those attendant costs have been unconscionably, obscenely high.

So I return to the original question. Is misinforming or misleading the public ever ethically justified on grounds of public health and safety? When public support for a policy objective, any policy objective, depends on deliberately misinforming the public, part of the non-economic attendant costs of that lie must surely be harm to the very core of popular sovereignty.

It remains to be seen whether we will abandon the rewards and risks of genuine popular sovereignty for the promise of health, safety and happiness from our paternalistic nannies. Reality is always a mixed bag, but many recent trends suggest we are well down the road toward making a very bad tradeoff.

Wednesday, July 11, 2007

Not What The Doctor Ordered

Today's New York Times is running a fairly scathing report on Administration attempts "to weaken or suppress important public health reports because of political considerations" during Dr. Richard H. Carmona's four-year tenure as Surgeon General.

According to the Times, those reports included such topics as embryonic stem cell research and "a landmark report on secondhand smoke [that] concluded that even brief exposure to cigarette smoke could cause immediate harm."

Reasonable people can reasonably disagree about the moral status of human embryos. As such, that issue itself is a legitimate policy matter for the Bush Administration. What is not a legitimate policy matter but a purely scientific question is whether or to what extent embryonic stem cells are essential or vitally important in medical research. Again, one might hold that, human embryos being persons, no medical advances would justify their intentional destruction, but we should at least know as best we can what the likely payoff of such research would be and what, if any, alternatives exist.

So, too, many have questioned the scientific validity of the aforementioned study on secondhand smoke. I haven't read the study or the criticism, so I won't offer an opinion beyond acknowledging skepticism about the sweeping nature of some of the claims reported in the press. Be that as it may, scientific research needs to be made available precisely so it can be challenged scientifically. Ethical and policy concerns remain, but we should at least have the benefit of a full examination of the scientific merits of such studies first.

Beyond that, though, it seems clear to me that if the nation is going to have a Surgeon General at all, the office must be accorded greater independence from both political branches of government. As matters stand, the office falls under the Department of Health and Human Services which is, frankly, a prime target for political manipulation regardless of the party controlling the White House.

Alternatively, we could simply eliminate the office or relegate it to its primary function as head of the United States Public Health Service Commissioned Corps and create some independent medical advisory authority in its place. Science and health policy is properly the responsibility of policy makers, not of scientists and physicians themselves. But it is equally true that the soundness of the scientific or medical research required to make those policy decisions must remain in the unfettered province of the scientific and medical community itself.

Oh, and for goodness sakes, drop the silly uniforms.

Monday, June 25, 2007

"Wii Admitted We Were Powerless Over Video Games -- That Our Lives Had Become Unmanagable."

Given the choice between, oh, say, saving his life by fleeing a burning building or staying a bit longer to reach the next level in whatever video game he was playing at the time, I'm reasonably sure my younger son would, reluctantly, flee. Lower the stakes, however, and the video game would almost certainly win. So, is he addicted?

At least for now, the American Medical Association is saying no. That's the right call, though there is no guarantee it will remain the AMA's position or that Video Game Addiction won't find itself next to alcoholism and drug addiction in the next edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM) five years from now.

Whatever one thinks about the science underlying the notion of addiction as a medical disease or disorder, it is unlikely that the supposed Video Game Addiction "sufferer" will be affected greatly one way or the other by the medical experts' decision. What would change is that "[s]uch a move would ease the path for insurance coverage of video game addiction." Follow the money, in other words.

With the usual disclaimer that, among the myriad things I am not, medical expert ranks high on the list, here's my take. Issues of actual chemical dependence aside (we'll get back to those later), when you strip the concept of addiction of its negative connotative baggage, it reduces to little more than the notion that people like doing what they like to do and, as a corollary, that when they like doing something very, very, very much, it is hard for them to refrain from doing it. So hard, in fact, that in many cases those who have decided they would be better off not doing it so much find it much easier not to do it at all than to try to do it in moderation.

I'm not arguing that a life spent shooting heroin or smoking crack cocaine or drinking a quart or more of whiskey a day or gorging on food constantly or betting the rent money or even playing video games for six to eight hours a day is a life well spent. It is unquestionably dysfunctional by any reasonable objective standard. Also, some addictions are more dysfunctional than others, posing serious health and even life threatening risks.

But we don't manage our own lives by reasonable objective standards. We might think we do, but the best we can do is to view ourselves by reasonable subjective standards, the reasonable part being our adaptive success at learning from our own and others' experience and changing, when necessary, accordingly. I can't say how much satisfaction you get from your preferences, and you can't really know how much satisfaction I get from mine. We can, of course, figure out what each other's preferences are, and maybe even their ordinal ranking, by how we in fact behave; but it doesn't follow that your number one preference, say, smoking crack, isn't so far ahead of number two and the rest that you're not making a rational decision by hanging out at the crack house, no matter what I may think of your decision.

That said, many people who come to believe that their lives are being ruined by their alcohol or drug habits or whatever can and do manage to stop. They probably need help doing so in most cases, but I don't know of a single person who stopped what others deemed an addictive habit solely because those others wanted him to do so. Society -- family, friends, employers, etc. -- can raise the stakes, but that's it. As the old psychology joke goes, the light bulb has to want to change.

Chemical dependency, a reality in some addictions despite what some naysayers claim, isn't really the issue. If it were, alcoholics and drug addicts could quite cheaply and quickly be detoxed and that would be that. It is the psychological dependence that is the far tougher nut to crack. No doubt that, too, is physiological. Perhaps the brain gets hard wired with "memories" of how much fun the alcohol or cocaine or, let's not forget, nicotine is. That's why the patient has to want not only to quit but to keep quit. That, too, is why there is some logic to lumping, say, a gambling addiction into the same psychiatric category as alcoholism or drug addiction. The problem is, the logic leads to precisely the wrong conclusion. What we should be learning from our ever expanding list of "addictions" is that, medical assistance with the chemical dependence aspect of certain such addictions during withdrawal aside, they are no different than the rest, which is to say that they're not really diseases or, in the medical sense, disorders at all.

Which, in turn, is why it isn't ongoing (expensive) medical treatment but recourse to Twelve Step programs and their like that tends to be the most effective method for people who want to quit and keep quit of whatever their addiction may be. My title here is a play on the First Step of such programs: substitute "Wii" with "We" and "Video Games" with "Alcohol" and you have the version used by the granddaddy of all such programs, Alcoholics Anonymous. What they claim, at least for themselves, is that coming to believe that they really did have a problem and couldn't fix it by themselves was the first, essential step toward its solution. There may be many other ways for people who want to stop drinking or gambling or playing video games incessantly to do so. But at least in this one celebrated and successful (and, sure, highly criticized and controversial) way we're back to the psychologist and the light bulb; that is, it is the individual himself who must want to change; otherwise, all is for naught.

Meanwhile, my son is indulging his "addiction" at the moment playing some sort of video game in the other room and temporarily safe from being clinically labeled even as I type this (indulging in my, um, computer "hobby"). Just as we don't set out a big bowl of candy in the living room and tell our kids to help themselves as much as they want, we limit his access to the video games. He plays other, i.e., non-electronic, games and sports, does his homework (grudgingly) during the school year and so forth. I'm sure he'd rather play video games than do his math homework and, left to his own devices, would do precisely that. But he's a child, our home is ruled by (benign) despots and he doesn't get that choice. Not yet. Libertarianism is for grown-ups.

It would be bad enough in the case of adults that we infantilize them by telling them they can't do what harms only themselves because they have a disease that gives us power over them "for their own good," but it's worse than that. It doesn't even work until, ironically, they want to change anyway. Unlike light bulbs, there's little point in screwing with them until the light has already come on.

Tuesday, June 19, 2007

Science, Sanity and the Law

If you have never entertained, however fleetingly, the prospect of killing your children, you're probably not spending enough time with them. Fortunately for the species, few of us ever act on such feelings. So few, in fact, that the rare parent like Andrea Yates, who in 2001 killed her four small children, strikes us immediately as monstrous or insane or both.

Reason's Brian Doherty posts a very fine article today about our struggles as a society with the notions of sanity, responsibility, free will and the law. The legal so-called insanity defense continues to fascinate us precisely because it touches so many deep mysteries about life, typically arising under the most gruesome and horrifying of situations. "Insanity" is a term long ago abandoned by the psychiatric profession, but the relationship between what is, at bottom, a legal defense justified on moral grounds and what purports to be increasing scientific evidence against the notion of free will of any sort continues to lie at the heart of the issues raised.

It is a basic tenet of ethics that ought implies can; that is, that holding someone blameworthy (or praiseworthy) for an act can be meaningful and justifiable only if that person could have done other than he did, in fact, do. Logically, it must also hold that "ought not" implies that the person could have refrained from doing whatever was done. Those who deny the existence of volitional or intentional human agency (i.e., free will) but contend that society must nonetheless indulge in the useful fiction of contending otherwise and holding people 'responsible' for their 'acts' are not, I think, all that far removed from those who hold that belief in God is necessary for there to be any moral order. Of course, by their own theory they are incapable of holding contrary views, so perhaps we can forgive them this conceptually muddled attempt to have their determinist cake and freely eat it, too.

In fairness, one can make a case for the notion that in society at large 'pretending' that criminals could refrain from committing their crimes so as to justify 'punishing' them may well have a general deterrent effect. That is to say, 'punishing' certain acts raises the known consequences of their commission, and people in general respond to such incentives and disincentives, whether freely or not.

But at the fringes of "people in general" lie those whose minds are so deranged (or, if you will, whose brains are so dysfunctional) that the notion of general deterrence breaks down completely. These are, ironically, the people who are the most likely candidates for the insanity defense. Put simply, punishing the truly psychotic is unlikely to have any effect on the behavior of other truly psychotic persons. Indeed, it is almost definitional that such persons do not respond to the world as you and I do.

If we are only play-acting at a belief in free will in our criminal justice system as it deals with ordinary people, then surely we must be indulging in a play within a play when we go through the motions of a criminal trial with such persons, grappling with questions such as the (in)famous M'Naghten test whether "...at the time of the committing of the act, the party accused was laboring under such a defect of reason, arising from a disease of the mind, as not to know the nature and quality of the act he was doing, or, if he did know it, that he did not know what he was doing was wrong."

Society must, of course, remove or restrain those who, for whatever causes or reasons, pose a deadly threat. But what possible difference can knowing what one is doing or knowing it is deemed wrong by others make if one cannot act otherwise anyway?

[EDIT: The first posted version read "scientific evidence of free will" and should have read and now does read "scientific evidence against the notion of free will."]

Tuesday, May 29, 2007

And for Christmas, Everyone Gets a Pony!

Barack Obama is now making a plan for universal health care a campaign promise in his bid for the presidency.

Details, such as they are, are reported by the AP (with my emphasis added) as follows:
Under Obama's proposal, everyone would be able to obtain health insurance, and the Illinois senator would create a National Health Insurance Exchange to monitor insurance companies in offering the coverage. In essence, Obama's plan retains the private insurance system but injects additional money into the system to pay for the expanded coverage.

Those who can't afford coverage would get a subsidy on a sliding scale depending on their income, and virtually all businesses would have to share in the cost of coverage for their workers. The plan that would be offered would be similar to the one covering members of Congress.

His package would prohibit insurance companies from refusing coverage because of pre-existing conditions.

"My plan begins by covering every American. If you already have health insurance, the only thing that will change for you under this plan is that the amount of money you will spend on premiums will be less," Obama said. "If you are one of 45 million Americans who don't have health insurance, you will after this plan becomes law."

In addition to broadening coverage, Obama called for a series of steps to overhaul the current health care system. He would spend more money boosting technology in the health industry such as electronic record-keeping, put in place better management for chronic diseases and create a reinsurance pool for catastrophic illnesses to take the burden of their costs off of other premium payers.

His plan also envisions savings from ending the expensive care for the uninsured when they get sick. That care now is often provided at emergency rooms. The plan also would put a heavy focus on preventing disease through lifestyle changes.

In all, Obama said, the typical consumer would save $2,500 a year.

Obama conceded that the overall cost of the program would be high, while not providing a specific number.

"To help pay for this, we will ask all but the smallest businesses who don't make a meaningful contribution to the health coverage of their workers to do so to support this plan," said Obama. "And we also will repeal the temporary Bush tax cut for the wealthiest taxpayers."

Sounds great, doesn't it? Hey, if you already have insurance, your rates as a consumer (never mind your rates as a taxpayer) will go down, and if you don't have it the government will pay for it. Where's the (yet uncalculated) extra money going to come from? Why, from rich and greedy businesses (don't worry, their prices and profits will magically remain the same) and the "wealthiest" taxpayers (everyone but the poor) and from everybody's lifestyle changes!

My only question is this: Who is the Hillary Clinton operative planted inside Obama's headquarters working to ensure his defeat?

Tuesday, May 22, 2007

Does This Mean Stallone Will Be Banned From Competitive Acting?

"Nothing is over! Nothing!" -- John Rambo

Among the many things Sylvester Stallone and I have in common must be counted a very limited talent for acting, aging flesh and the desire to self-medicate. The last, alas, cost Stallone fines and court costs amounting to around $13,000 after a guilty plea in Australia to possession of 48 vials of the human growth hormone Jintropin and four vials of testosterone.

I, by contrast, only wanted some antibiotic eye drops but ended up instead with a lingering eye infection and several unnecessary trips to an ophthalmologist.

Stallone, like several other old lions of his generation's action hero stars (notably, Bruce "I see old people" Willis's soon-to-be-released Take the Blue Pills and Die Hard, or something like that), has been racing the reaper to complete his valedictory outing as Rambo. While his recent Rocky Balboa wasn't nearly as bad as I, in my affected and uncredentialed role of Constant Viewer, expected it to be, there was still something mildly bathetic about the sixty-year-old Stallone lumbering into the ring for one last round. Well, time and tide and all that.

"I will not be without these. I cannot be without these," Stallone said when discovered with the goods, and I can well understand why. Why the hell should he be without them? If the man is vain enough and deluded enough to want to pump himself with steroids and such to play the heroic lout one more time, I say more power to him. It's his career and his life, fergawdsakes!

And by the way, while Stallone's geriatric action heroics are easy to ridicule, Stallone is a very good screenwriter and director in his genre and the original Rocky easily deserved its Oscars as much as any of Frank Capra's legendary melodramas ever did. It is worth remembering that Stallone became the third person ever to be nominated for both acting and writing in the same year for Rocky, following Chaplin for The Great Dictator (1940) and Orson Welles for Citizen Kane (1941). Call that declining standards, if you will, but that's pretty damned good company. So, also, Bruce Willis turned out to be (or become) a much better actor than his Moonlighting mugging or early John McClane machismo would have led me to believe.

Anyway, enough of this Hollywood hoopla. Let's get back to the real topic which is drugs and me. (Me! It's all about me!) Being both lazy and stupid, I left a pair of extended wear contact lenses in for too long and ended up with an infection in one eye. Now, I admit it might not have been a mere infection. All sorts of things could have been wrong with my left eye, but an infection was by far the most likely problem, it having happened to me before and the prescribed treatment being antibiotic eye drops and refraining from wearing contacts for a while.

As it happened, being lazy and stupid and knowing I was due for an eye exam shortly anyway, I did nothing and, as will more often than not happen, my immune system kicked in, my eye felt better though not entirely well, and I decided to just leave the contacts out and wait until it was time for the regular exam. Now, had I been able to run down to the pharmacy to buy a bottle of antibiotic eye drops in the first place, the infection would have healed faster and that would have been that. Of course, as I said, it might not have been a bacterial infection, in which case the eye drops would have done no good (but no harm, either) and I would have known to seek medical attention at once. But you can't buy antibiotics without a prescription, dagnabit!

I know, I know. Antibiotics abuse is a public health problem, and some people are allergic to some antibiotics and so on and so forth. But if you can buy topical antibiotic creams and soaps and if you can buy tetracycline for tropical fish, ferchristsakes, then you damned well ought to be able to buy antibiotic eye drops without a prescription.

When I finally saw the doctor some weeks later, he noted the still mildly infected area and guess what? He prescribed antibiotic eye drops! Plus, of course, a couple of return visits to check the course of the treatment -- treatment which I could easily have self-administered weeks earlier at far lesser risk to my eye.

Okay, so I'm stupid and lazy and cheap and physician-resistant, as well. But they're my eyes and I don't need or want to be saved from myself. Or, if I do, it still isn't the business of the state to do so. (Who, oh who will save me from the state?) If I want to run the risk of self-prescribing the wrong medication, it's my own lookout. Yeah, I know. Literally, in this case.

Same with Stallone. If it's that bloody important to him to have one last fling as an action hero and it takes controlled substances to permit him to do it, why the hell should Australia or the U.S. or any other state prohibit him from doing so?

Monday, May 7, 2007

A Case of Wrongful Life? (Notes on Facts and Values)

Old joke: A doctor tells his patient, "I'm sorry but you only have six months to live." The patient takes the news stoically and asks the doctor how much he owes him. "Five thousand dollars," the doctor says. "But I'll never be able to come up with that much money in six months, Doc!" "Okay, then," says the doctor, "you've got a year."

I said it was an old joke, not a good one. Meanwhile, while I look for better material, John Brandrick, 62, was told two years ago that he had terminal pancreatic cancer and only months to live. Brandrick quit his job, sold his possessions and spent what he thought was his brief, remaining life taking vacations, eating in swank restaurants and such. A year later, his doctors revised their diagnosis. Brandrick was suffering from non-fatal pancreatitis.

Oops!

The AP reports:

"My life has been turned upside down by this," Brandrick said. "I was told I had limited time to live. I got rid of everything — my car, my clothes, everything."

Brandrick said he did not want to take the hospital to court, "but if they have made the wrong decision they should pay me something back."

The hospital said there was "no clear evidence of negligence" on its part.

"Whilst we do sympathize with Mr. Brandrick's position, clinical review of his case has not revealed that any different diagnosis would have been made at the time based on the same evidence," the hospital said in a statement.

Personally, I think the mere fact that the hospital used "whilst" in its denial is pretty clear evidence of negligence. No, not really. It's an interesting case, though. Here's this poor guy in his sixties, naked and carless, expecting to shuffle off this mortal coil any moment now, probably stuffing himself with fatty foods, gadding about in cabs instead of taking the Underground and tipping big all the while when suddenly his imminent demise is snatched from his grasp no doubt just as the money was running short.

Does he have any legal recourse against the hospital? I haven't a clue. Aside from not knowing how the British courts deal with the various potential tort or contract remedies that any first year law student could think of scribbling down on an exam together with all the likely defenses to those causes of action, the more interesting question is whether he should have some sort of legal remedy here.

I don't know whether there is settled case law on this particular situation, but something like it must have happened somewhere before and it would be mildly interesting to know how a court or jury resolved similar such situations. Aside from being interesting at that level, however, it is also interesting as a good example (regardless of what, if any, law there is on point) of how knowing all the facts of a situation does not necessarily resolve a dispute arising from that situation.

Moreover, it isn't just a straightforward case of the difference between facts and value judgments, either. It is a case of that, to be sure, but of more as well. There are also applicable legal rules, or at least legal rules that we want to say are not "mere" value judgments and that should apply even though we may not know how to apply them. Learning the formal elements of negligence, for example, is easy: the defendant must have owed a duty to the plaintiff, must have breached that duty, and that breach of duty must have proximately caused the plaintiff harm. Of course, it can get much more complicated than that, "proximate" is a special bit of legal jargon and so forth, but that's the nutshell version.

Even so, learning the mere rules tells you next to nothing about how to apply them in a particular situation, how they should be applied in this situation. And if we face a new and somehow different set of facts from the facts to which the rules have previously been applied, then we must decide which facts are relevantly similar and which are relevantly different from those prior cases and how much weight to give to those similarities and differences. Herewith, the late philosopher John Wisdom approaching the matter a bit differently:
In courts of law it sometimes happens that opposing counsel are agreed as to the facts and are not trying to settle a question of further fact, are not trying to settle whether the man who admittedly had quarreled with the deceased did or did not murder him, but are concerned with whether Mr. A who admittedly handed his long-trusted clerk signed blank cheques did or did not exercise reasonable care, whether a ledger is or is not a document, whether a certain body was or was not a public authority.

In such cases we notice that the process of argument is not a chain of demonstrative reasoning. It is a presenting and representing of those features of the case which severally co-operate in favour of the conclusion, in favour of saying what the reasoner wishes said, in favour of calling the situation by the name by which he wishes to call it. The reasons are like the legs of a chair, not the links of a chain. Consequently although the discussion is a priori and the steps are not a matter of experience, the procedure resembles scientific argument in that the reasoning is not vertically extensive but horizontally extensive – it is a matter of the cumulative effect of several independent premises, not of the repeated transformation of one or two. And because the premises are severally inconclusive the process of deciding the issue becomes a matter of weighing the cumulative effect of one group of severally inconclusive items against the cumulative effect of another group of severally inconclusive items, and thus lends itself to description in terms of conflicting ‘probabilities’. This encourages the feeling that the issue is one of fact – that it is a matter of guessing from the premises at a further fact, at what is to come. But this is a muddle. The dispute does not cease to be a priori because it is a matter of the cumulative effect of severally inconclusive premises. The logic of the dispute is not that of a chain of deductive reasoning as in a mathematical calculation. But nor is it a matter of collecting from several inconclusive items of information an expectation as to something further, as when a doctor from a patient’s symptoms guesses at what is wrong, or a detective from many clues guesses the criminal. It has its own sort of logic and its own sort of end – the solution of the question at issue is a decision, a ruling by the judge. But it is not an arbitrary decision though the rational connections are neither quite like those in vertical deductions nor like those in inductions in which from many signs we guess at what is to come; and though the decision manifests itself in the application of a name it is no more merely the application of a name than is the pinning on of a medal merely the pinning on of a bit of metal. Whether a lion with stripes is a tiger or a lion is, if you like, merely a matter of the application of a name. Whether Mr. So-and-So of whose conduct we have so complete a record did or did not exercise reasonable care is not merely a matter of the application of a name or, if we choose to say it is, then we must remember that with this name a game is lost and won and a game with very heavy stakes.

(John Wisdom, "Gods," reprinted in Philosophy and Psycho-Analysis, 1969.)

We would like to say, or at least some of us sometimes think we would, that facts and values and the rules we use to apply the latter to the former have some sort of determinate and separate logic to them -- "No ought from an is!" or "Ought implies can!" we might proclaim. If we are very sophisticated indeed, perhaps we pull out some bit of philosophical legerdemain like supervenience to bridge our tidy looking dichotomy between facts and values. At the end of the day, however, whether we come equipped with theory or not, we must decide whether the hospital was negligent or breached some contractual duty and whether Mr. Brandrick's spending-spree was proximately caused by a breach of some such duty or implied promise and thus constituted harm to him now that he will likely live much longer and so forth. That, in turn, requires the application of rules which are neither facts nor values or, if you like, are both.

How should we decide?

Tuesday, May 1, 2007

Adult Sudden Death Syndrome

With a hat tip to the nice folks over at grylliade, The Australian reports that Chinese officials, um, believe this was the cause of the recent death of a prisoner. "'Li Chaoyang's sudden death conforms with adult sudden death syndrome,' said Mr Shi, citing a forensic report."

I don't want to sound overly cynical here; but, personally, I ain't buying it until I see a case of Adult Sudden Death Syndrome show up on House.

Still, it does raise the question, do we have some sort of exchange program between U.S. military "holding facilities" guards and Chinese prison guards? Just askin', mind you.

Friday, April 20, 2007

Herbal Medicine vs. Comfort Food Politics

We are all tattooed in our cradles with the beliefs of our tribe; the record may seem superficial, but it is indelible. You cannot educate a man wholly out of superstitious fears which were implanted in his imagination, no matter how utterly his reason may reject them. – Oliver Wendell Holmes, Jr.

My former Inactivist co-blogger Jennifer Abel writes in the Hartford Advocate about Mark Braunstein, a college librarian whose injuries from a diving accident in 1990 resulted in partial paralysis below the waist. Braunstein makes no secret of his occasional non-medicinal use of marijuana in his youth, a fact common to many people of his generation, but since the accident he has found that marijuana is an effective treatment for his recurring leg spasms and pain. Unfortunately, such treatment is illegal.

Though it would have no effect on federal law, which does not recognize the medicinal use of marijuana, Connecticut is considering legislation, the Compassionate Use Act, that would make Braunstein’s use of marijuana legal, at least under state law. However, at least one state legislator interviewed by Abel, Republican Toni Boucher, is an adamant opponent of the pending legislation. Before I quote from Abel’s interview with Boucher, here is the legislator’s official web page motto:

Listening to our fellow citizens and responding to their concerns with common sense solutions to protect individual freedoms and provide a better quality of life for all of Connecticut.

So why is Boucher opposed to legislation that would decriminalize what seems to be a common sense solution to Braunstein’s quality of life problems?
Boucher fears if the Compassionate Use Act passes, marijuana will make sick people sicker and snare children into self-destruction. That’s why, where medical marijuana is concerned, “there’s more harm than good in promoting it.”

And so it went: why should it be illegal for Braunstein to smoke? Because marijuana’s bad for you. So bad those who smoke it should go to jail? Yes, because it’s against the law. Why? Because it’s bad for you.

How long did Boucher think [Braunstein] should spend in prison?

There followed a long silence broken by Boucher’s response: “That’s a ludicrous question.... We’re not the judiciary.”
Given how easy it is to find federal officials willing at the drop of a hat to make fools of themselves, finding the same at the state level is almost unsportsmanlike. Still, Rep. Boucher’s mindset is deserving of the exposure here. Indeed, I have little doubt she represents the attitude of many of her constituents on this point, more’s the pity.

We can easily call attention to the folly of Boucher’s comments: she has no medical expertise; contrary to her protestation, it most certainly is the legislature’s business to decide which acts are to be crimes under the laws of her state and what the penalties for those crimes are to be; and the angle of incline of this particular slippery slope is equal to, if not less than, zero. Children will not be rushing to paralyze themselves so they can smoke pot legally.

But let’s give this slippery slope argument its due here and acknowledge that however sincere advocates of medical marijuana may be in their belief that it should be a medical option in some instances, their opponents are correct in surmising that legalizing marijuana use in such instances would also serve to undermine the general prohibition.

In that sense, they are perched atop the same sort of precarious absolutism as abortion rights advocates whose reaction to Carhart suggests we are now mere days away from the return of back-alley butchery.

It simply cannot be repeated often enough: politics is at least as much, if not far, far more, about what we fear and what we desire than about what we think makes sense. Of course, no one knows anyone whose life has been damaged by smoking marijuana (or crack cocaine, for that matter) to the extent that thousands of lives have been damaged by conviction and incarceration for same.

The reason this sort of “destroy the village in order to save it” logic makes no sense and yet is heard, one way or another, over and over again is because we are not, in fact, concerned about the lives of those other people. We are concerned about our own lives and those of our families and friends and collateral damage be damned as long as we believe ourselves to be safe and sound.

The sort of deep, visceral fear involved here is no more rational than the fear of a mother bear whose cubs one has accidentally and innocently approached. It is not only bereft of reason, it is immune to reason. It will not be swayed by mere facts. It is my young daughter frightened by thunderstorms, who cannot be comforted by reason but only by being held and told, over and over again, that she is safe now and will always be safe in the arms of her mother.

Wednesday, April 18, 2007

Supreme Court Upholds Federal Partial Birth Abortion Technique Ban

The slip opinion has not yet been posted, so I can comment only on the breaking AP report for now.

Abortion opponents, among whom I include myself, should properly greet any retreat from the Court's abortion cases beginning with Roe v. Wade as good news of a sort. (And, of course, vice versa for abortion rights advocates.) Still, it is worth noting, first, that the surgical technique under consideration here is used in only a handful of the over one million abortions performed in the U.S. every year and, second, that as the AP reports:
The procedure at issue involves partially removing the fetus intact from a woman's uterus, then crushing or cutting its skull to complete the abortion.

Abortion opponents say the law will not reduce the number of abortions performed because an alternate method -- dismembering the fetus in the uterus -- is available and, indeed, much more common.

That latter is, of course, a matter of dispute. (When it comes to abortion, what isn't?) Still, it must be taken as small comfort to the near-term human being involved that, while her death may no longer be effected by having her skull crushed, getting the job done by dismemberment remains a "viable" option.

More, no doubt, later.

Sunday, April 15, 2007

[Insert "That sheep's a damned liar!" joke here]

The (U.K.) Daily Mail recently reported that Professor Esmail Zanjani, of the University of Nevada, has created the world's first human-sheep chimera, having 15 percent human cells and 85 percent sheep cells. The research is aimed at eventually being able to create sheep organs capable of being transplanted into human patients. The hoped for process is described as follows:
The process would involve extracting stem cells from the donor's bone marrow and injecting them into the peritoneum of a sheep's foetus. When the lamb is born, two months later, it would have a liver, heart, lungs and brain that are partly human and available for transplant.

Excuse me, but... the brain?

[Note: An earlier version of this entry was posted at Inactivist on March 25, 2007.]

Thursday, April 5, 2007

"... I'd type a little faster."

Reason’s science correspondent Ronald Bailey expects to die on September 4, 2027. Well, no, not really. He derived that date from a whimsical little internet site called the Death Clock, according to which I died on July 18, 1995. ("Either this man is dead or my watch has stopped." - Groucho)

Less whimsically, Bailey wrote recently (and spoke today on NPR) about emerging diagnostic technology that will permit individuals to learn their likely life expectancy with far more than mere actuarial probability. Some of these tests for some sorts of likely fatal illnesses exist already. More such tests for more such diseases are on the horizon.

Bailey’s not unreasonable position about such advances is that they are a good thing; that the average individual should both want and have access to such information. The medical community, on the other hand, is a bit more conflicted. Physicians regularly tout the life saving advantages of routine examinations of various sorts designed to provide early detection of illnesses for which, if caught in time, viable treatment options and a more successful prognosis are possible. But doctors don’t like giving bad news, whether the bad news is that you’re going to die in a matter of months or, assuming you would otherwise likely live longer, some five or ten years from now because of some fatal disease or disorder.

I admit to being somewhat conflicted myself about having access to such information. As Bailey correctly argues, the individual who is aware of the time and nature of his likely demise is better equipped to make the most of what time remains. In some cases, he is also better equipped to take whatever measures may be available, if not to avoid, at least to postpone that fate. Eventually, perhaps, a more advanced medical technology will allow us not only to identify such genetic diseases but also to cure them. "Eventually", however, may well be a long time away. The question for now is whether or how to use the information in the meanwhile.

Would the average person really profit from having such information available to him? I don’t know. I remember a television series from the 1960’s called Run For Your Life about a man named Paul Bryan who, being told by his doctors he had at most a year or two to live, set off to squeeze out every moment of his remaining time. Of course, it helped Bryan that he had a lucrative law practice to cash in and make the run possible. A show based on the remaining days of Paula Bryan, former librarian, would have suffered from far fewer exotic locations. (The show, by the way, lasted three seasons. Apparently you can get a temporary reprieve from death with high enough ratings.)

Knowing in one’s youth that one had a shorter than normal life expectancy, say only twenty more years, would indeed drive career, lifestyle and even family decisions. But how would the typical person really react to such news? As matters stand, the only two sorts of people who know the exact time and place of their death are suicides and death row inmates. Neither are role models to which the typical person aspires. Indeed, when doctors do have to break the bad news to patients, their doing so is commonly referred to as “delivering a death sentence.” No wonder physicians shy away from that part of the job.

If the only question is whether people have a right to discover, for better or worse, such information about themselves, I agree wholeheartedly with Bailey that they should. As to whether they should seek such information, well, that’s a bit trickier.

Far more troubling, however, as the technology for such information becomes available (and it increasingly will), is the question of who besides the patient will have, or indeed require, such information. Will life and health insurance companies demand it as a precondition of coverage? It certainly seems likely that they will and, moreover, that those of us who have already come up losers in the genetic lottery will find ourselves doubly stricken: unable to secure health insurance sufficient to cover our needs when the inevitable occurs, or life insurance sufficient to provide for our survivors afterward.

Insurance is, after all, about pooled risks. It is, if you will, a sensible way of dealing with ignorance. Perhaps it will not remain a viable way to finance health care or survivor needs in a future in which such ignorance is increasingly replaced with knowledge. For better or worse, however, this is the system we have for now and these are issues that must be confronted and resolved.
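For the arithmetic-minded, here is a minimal sketch of the pooling point, with invented numbers (nothing below is actuarial data) and written in Python purely for illustration:

    # Toy illustration of risk pooling vs. individually priced genetic risk.
    # All figures are invented for the example.
    payout = 500_000          # benefit paid if the insured event occurs
    low_risk_prob = 0.01      # annual probability for a genetically lucky insured
    high_risk_prob = 0.10     # annual probability for a genetically unlucky insured
    pool = {"low": 900, "high": 100}   # a pool in which most members are low risk

    # Under shared ignorance, everyone pays the same premium, set by the pool's average risk.
    expected_claims = (pool["low"] * low_risk_prob + pool["high"] * high_risk_prob) * payout
    pooled_premium = expected_claims / (pool["low"] + pool["high"])
    print(f"Pooled premium for everyone: ${pooled_premium:,.0f}")      # $9,500

    # Once each applicant's risk is knowable, premiums split by genotype.
    print(f"Low-risk premium:  ${low_risk_prob * payout:,.0f}")        # $5,000
    print(f"High-risk premium: ${high_risk_prob * payout:,.0f}")       # $50,000

The invented numbers matter less than the shape of the result: once insurers can price each applicant’s known risk, the genetically unlucky face premiums they may be unable to pay at all, which is precisely the worry.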

(Title quote from Isaac Asimov: "If my doctor told me I had only six minutes to live, I wouldn't brood...")

Tuesday, April 3, 2007

"Mother's Little Helper" Revisited

The Washington Post reports today that as many as one in four patients diagnosed with clinical depression may in fact merely be, well, unhappy. How depressing.

Psychiatry and clinical and abnormal psychology are, as Thomas Szasz will gladly tell you, scientifically suspect endeavors, at best. That’s not to say, as Szasz in fact has said, that there is no such thing as mental illness. It is beyond serious debate at this point that there are psychological disorders which are, in fact, biochemical disorders or imbalances and treatable as such. So-called bipolar disorder and clinical depression are examples, if only in the sense that the symptoms of such disorders can typically be mitigated, eliminated or controlled by medication.

In fact, we are confident in calling so-called cognitive, affective or behavioral disorders genuine illnesses only when, and precisely because, they are amenable to medication. They fit the ruling ontology of the day; namely, that everything, including the mental, is ultimately physical and therefore susceptible, at least in principle, to physical (i.e., medicinal or surgical) treatment.

By contrast, historically and still in large measure today, psychiatry and clinical psychology are mostly taxonomic arts – disorders are diagnosed by noting behavioral signs and symptoms and checking them against the latest version of the Diagnostic and Statistical Manual of Mental Disorders, currently in its fourth edition and known simply as DSM-IV. As far as an underlying etiology or cause of the current list of disorders goes, theories abound but little is actually known.

Getting back to the depressing business of depression, the WaPo article explains the current diagnostic attitude as follows:

Diagnoses are currently made on the basis of a constellation of symptoms that include sadness, fatigue, insomnia and suicidal thoughts. The diagnostic manual used by doctors says that anyone who has at least five such symptoms for as little as two weeks may be clinically depressed. Only in the case of someone grieving over the death of a loved one is it normal for symptoms to last as long as two months, the manual says.


The problem here, of course, is one of differentiating merely situational depression (how the “sh*t happens” facts of life affect us all) from clinical depression, reserving the prescription of, for example, selective serotonin reuptake inhibitors (SSRIs) like Prozac for the latter. A new study suggests, however, that “extended periods of depression-like symptoms are common in people who have been through other life stresses such as a divorce or a natural disaster and that they do not necessarily constitute illness.” (Emphasis added.)

As the kids say, “Well, D’uh!” I haven’t checked on the rest of the symptom list for depression, but chronic recurrent sadness, fatigue, insomnia and suicidal thoughts are also symptomatic of parenthood, a condition that typically (tragically) lasts considerably longer than two weeks. And who the hell said you get just two months to grieve for a lost loved one?

DIGRESSION #1: A guy is playing golf with his friends one beautiful Saturday afternoon. As a funeral procession passes the course, the man stops, bows his head and crosses himself. A friend says, “Hey Joe, I didn’t know you were so religious,” and the man answers, “I’m not really, but what the hell? I was married to the woman for nearly twenty years.”

The problem here, well, one of them anyway, is that in the absence of a specific biochemical or pathogenic etiology psychologists are pretty much free to slice up the whole gamut of human emotions and behavior any way they choose. Hence, for example, back when I was in college the DSM still defined homosexuality as a psychological disorder, and schizophrenia came in any number of varieties as a grab-bag of serious psychoses that didn’t fit another category. Rumor has it that the immediately prior edition had just two differential diagnoses: whacko and possessed. I don’t deny that subsequent research and reaction to criticism have had an effect on later editions of the DSM, but how many and which symptoms a patient must experience, and for how long, to qualify for this diagnosis or that is, to put it mildly, ultimately just a tad arbitrary.
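To see just how mechanical, and how tweakable, such checklist criteria are, consider this toy rendering of the two-week, five-symptom rule quoted above. To be clear, it is my own simplification for illustration, not the actual DSM-IV decision procedure, which specifies nine particular symptoms, various exclusions and a measure of clinical judgment:

    # A deliberately naive sketch of the quoted checklist, not the real DSM-IV procedure.
    MIN_SYMPTOMS = 5             # move this constant and the boundary of "illness" moves with it
    MIN_DURATION_DAYS = 14       # ditto
    BEREAVEMENT_GRACE_DAYS = 60  # grief gets a two-month pass, per the quoted rule

    def meets_criteria(symptom_count: int, duration_days: int, bereaved: bool) -> bool:
        """Return True if the toy checklist would flag possible clinical depression."""
        if bereaved and duration_days <= BEREAVEMENT_GRACE_DAYS:
            return False
        return symptom_count >= MIN_SYMPTOMS and duration_days >= MIN_DURATION_DAYS

    # A new parent three weeks in: five "symptoms," twenty-one days, no bereavement.
    print(meets_criteria(symptom_count=5, duration_days=21, bereaved=False))   # True

Nudge either threshold and a different slice of the population becomes, by definition, ill; that is exactly the arbitrariness at issue.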

DIGRESSION #2: Giving my medical history to a new primary care physician recently, I had already told him I was adopted at birth and thus had no biological family history. Moments later, he asked if there had ever been any suicides in my family. Why, I asked. Well, he said, there have been studies suggesting that even in the case of adopted children, the example of a suicide in the family correlates with an increased likelihood of suicide. Maybe. And maybe, like too many physicians, he just couldn’t bring himself to say “Oops, I forgot.” (Actually, there was a suicide in my family and a pretty damned amusing one at that. Remind me to tell you about it some time on a slow news day.)

Ignoring the backstage role of ‘Big Pharma’ in all of this, a case can be made for keeping clinical depression diagnostic criteria flexible enough to ensure that genuine cases don’t go undiagnosed and thus untreated, a point made in the Post article. On the other hand, as that article also quotes Rutgers sociologist Allan Horwitz, “People are starting to think that any sort of negative emotion is unnatural, that they can take medication and feel better. [Psychoactive drugs can] make it less likely for people to make real changes in their lives that might be better than medications.”

As psychopharmacology inevitably improves, the line between drugs that correct chronic affective disorders and drugs that merely enhance mood becomes increasingly problematic. It is one thing to be depressed when everything is going well in one’s life, another to be depressed when it isn’t, and yet another to be as happy as can be even when one’s life is a shambles. By analogy, it’s great that we have analgesics to treat pain. But the ability to experience pain is a highly useful, possibly even essential, survival trait. Happily enough, so is the ability to feel sad.

* * * * *

BONUS: Speaking of old Rolling Stones' song titles and, well, drugs, celebrity corpse Keith Richards is reported today as claiming (a claim now being denied by his manager) that he once "snorted his father's ashes mixed with cocaine." What a drag it is getting old.