The Man Who Mistook His Life For A Hat

by Jacob Dahlke, Bioethics Program Alum (MSBioethics 2012)

Our society tends to put celebrities on pedestals, particularly upon their deaths. For author Oliver Sacks, it is no different, except that he is not yet dead. He did, however, recently announce in the New York Times that metastatic tumors had been found in his body; his diagnosis is terminal. There is an ease and confidence with which he declares, “It is up to me now to choose how to live out the months that remain to me.” These are not the words of a man who plans to rage against the dying of the light; he simply plans to confront, as Hume did, the difficulty of becoming “more detached from life than I am at present.”

We could certainly expect – from his own energetic accounts – Dr. Sacks to plunge aggressively into treatment. He describes himself in the NY Times piece with powerful, vivid words, as a “man of vehement disposition, with violent enthusiasms, and extreme immoderation in all (his) passions.” One might presume that such a person, when faced with the prospect of a terminal illness, would opt for Dylan Thomas’ strategy. And yet, when given the opportunity to consider his path, he has chosen differently. Dr. Sacks decided to forgo aggressive treatments – choosing the quality of his life over its quantity. His path is not the one less traveled, but rather one recently reflected in the perspectives of healthcare providers and patients alike.

What is so striking about Dr. Sacks’ writing, then, is how he elucidates the transition from a life with one set of goals and purposes to another, with goals and purposes all its own. People outside the shadow of a terminal illness may see this transition as a loss of the will to live out one’s remaining days – hence the outcry over stories such as Brittany Maynard’s, or the recent news in Canada about assisted suicide. But Dr. Sacks cuts through such presumptions with surgical precision: “This is not indifference but detachment.” It is not a loss of the will to live, but a loss of the will to live as before. It’s not that issues like global warming are no longer important; they’re just not important to him individually anymore.

It is as though Dr. Sacks’ life as a public figure is a sort of hat. Something worn, but not entirely revealing. And now that he is nearing home, his hat has become less useful. It is, perhaps, time to take his hat off.

I am not sure we will hear much more, if anything, from Dr. Sacks between now and his death in the coming months. He has decided to focus on those close to him, and that is certainly to be respected and praised. But Oliver Sacks’ potential withdrawal from the public sphere is a loss for our society as a whole. It remains difficult to translate thinking about end-of-life decisions well, and his work – informed by his experience and skill – does so with remarkable clarity.

His accomplishments are many, but for health care professionals whose patients approach death’s door, this short piece will be perhaps the most useful. These health care professionals themselves wear many hats, and, increasingly, one of these pushes patients, families, and the public to confront what it means for their light to fade. Of course, they may choose to fight to the very end. But the brutal honesty of Oliver Sacks’ reflections opens a space for these patients to say (and for their families to hear) that they, like Dr. Sacks, have decided not to rage. Instead, they have chosen to look back and say: “Above all, I have been a sentient being, a thinking animal, on this beautiful planet, and that in itself has been an enormous privilege and adventure.”

[The contents of this post are solely the responsibility of the author alone and do not represent the views of the Bioethics Program or Union Graduate College.]

The Case of Cassandra C: Finding Clarity and Responsibility as a Mom and a Bioethicist

by Amy Bloom, Bioethics Program faculty

I have been reading the latest news regarding Cassandra C., the teen with Hodgkin’s lymphoma who refused treatment but was forced into receiving it by a Connecticut Supreme Court ruling. As a mother and a bioethicist, these are the times when reconciling my personal opinions with my professional experience can be most challenging. Many of my “mom” friends were shocked and horrified by the image of a young woman being restrained to a bed and forced to undergo treatment. Visions of a screaming, pained girl, a mother helpless to save her child, and “big brother” dispensing poison to an innocent whirled through our collective minds.

From an ethics standpoint, it is generally wrong to force medical treatment on anyone, particularly when there are cultural and religious factors to be taken into consideration.  I am reminded of cases involving Christian Scientists who believe that any “traditional” medical intervention is contrary to their cultural and religious views.  Oftentimes, in cases involving a seriously ill child, parental rights are legally overruled and children are “forced” into treatment. Sometimes, the state may assume its parens patriae rights and substitute its own control over children when the natural parents appear unable or unwilling to meet their responsibilities, or when the child poses a problem for the community. Further still, the state can mandate treatment in order to assure proper care, as established by Jacobson v. Massachusetts in which the US Supreme Court upheld compulsory vaccination laws.

So, on the one hand we argue it is unethical to force treatment. On the other hand, we do sometimes make the decision to mandate care, particularly when children are involved. The question becomes: how does one manage the rights of children and of parents, while also maintaining the responsibility of the state to protect children?

First, we must consider what is in the “best interest” of the patient while still considering individual choice. Such cases are clearer when an outcome like death is imminent. The case of the 29-year-old woman with terminal brain cancer who refused treatment and moved to Oregon to end her life is a good example. Most ethicists supported her decision, although some disagreed with “ending one’s life” so directly. In her case, this was a quality-of-life issue. Treatment provided no long-term benefit; it only prolonged her pain and suffering while delaying the inevitable. From a legal perspective, she was also an adult and capable of making her own decisions.

Cassandra’s case is different. Chemotherapy has a very good chance (~85%) of curing her. I personally struggle to understand how, when faced with these scientific facts, she chose to refuse care. I am troubled by the daughter’s decision-making process, and I wonder about the relationship between the mother and daughter. Some of the words and the reactions make me wonder what, in fact, the young woman believes to be true. As far as I can tell, there were no religious or cultural beliefs behind her renouncing medical care.  Seeing chemotherapy as “poison” is a bit odd, truthfully, and her claim to be “ready to die at 17” is even more disturbing, especially given that there is treatment available.

My ‘gut’ tells me that there is something askew in Cassandra’s belief system. The things she claimed to fear as a result of chemotherapy – loss of fertility, side effects to other organs – may not happen. Moreover, if she’s dead then these are no longer an issue. These side effects are also manageable. She can prevent a potential loss of fertility by freezing some eggs. The emotional and psychological effects of chemotherapy can similarly be managed with proper medical and palliative care.

Some bioethicists have suggested this was a missed opportunity for an ethics consultation. I agree, and then some. This was not just a missed opportunity for an ethics consultation; it was a missed opportunity for education, communication, support and compassion. This was a missed opportunity to reach out, inform, and support a teen navigating the difficulty of deciding how to treat a life-threatening illness. This was a missed opportunity to understand how she came to the notion that “chemo is poison,” and to acknowledge that her claim of “being ready to die at 17” is something worth talking about.

For argument’s sake, let’s assume Cassandra made her decision to refuse treatment with all the facts. Let’s assume that the medical providers explained all the details to Cassandra and she still chose to renounce care. We then have to ask about Cassandra’s mother, the woman who is still legally responsible for her care. Why would she not choose the treatment most likely to cure her daughter? Some claim Cassandra’s mother showed great bravery, love and compassion in standing by her daughter’s decision to refuse care. I struggle with this. I feel that a mother’s responsibility is to advocate for the best care for her child. Unlike in the case of the 29-year-old with terminal brain cancer, this treatment will save Cassandra’s life.

This case has caused me to reflect on the implications of a government that mandates the care I give (or choose not to give) to my child, under the assumption that I am of sound mind and can make proper choices about my child’s health.  I trust science, and I trust myself to be a critical thinker.  I believe that there are certain health care issues that should be mandated – vaccinations, for one – because the science is clear (and the information against it is completely faulty and warped by media sensations like Jenny McCarthy). I also believe that I have a moral responsibility to take care of my community, and that includes my child. Sometimes that will require me to do things that are uncomfortable, against my nature, and that may even cause my daughter pain, but it is still the right thing to do. Not for me, but for her.

So, in this case, I come back to a single question: Why?  If I could understand why Cassandra chose to forego chemotherapy, and if I could believe that her mother was thinking “in the best interest” of her child, then I would be more comfortable with the decision to refuse care. Until then, I hope that Cassandra lives a long and healthy life. I also hope that Cassandra, her mother, the medical establishment, and the bioethics community continue to have this conversation because our work here is certainly far from complete.

[The contents of this post are solely the responsibility of the author alone and do not represent the views of the Bioethics Program or Union Graduate College.]

Striking the Balance Between Population Guidelines and Patient Primacy

by Susan Mathews, Bioethics Program Alumna (2014)

Breast cancer is the second leading cause of cancer death among North American women. Although routine mammography decreases the risk of death by about 15 percent, research on the effectiveness of wide-scale screening programs shows that 2,500 people would need to be screened to prevent one cancer death among women ages 40-49. Given this, the US Preventive Services Task Force (USPSTF) updated its population guidelines in 2009 to advise against routine screening mammography for women under 50.

These new recommendations were met with controversy and confusion, with many questioning the ability of “experts” to weigh the potential benefits and harms of screening for individuals.

But how should population data like this, along with other epidemiologic, social, psychological and economic factors, be considered in medical decision-making?

To read more, click here.

[This post is a summary of an article published on Life Matters Media on November 25, 2014. The contents of this blog are solely the responsibility of the author and do not represent the views of the Bioethics Program or Union Graduate College.]

Fear and Loathing in Liberia

by Sean Philpott-Jones, Director of the Center for Bioethics and Clinical Leadership

Two weeks ago, I wrote a commentary decrying the current hysteria in the US over Ebola. It was ironic, I argued, that so many people were demanding the federal government take immediate steps to address the perceived threat of Ebola while simultaneously ignoring the real public health threats that we face.

A seasonal disease like influenza, for example, takes the lives of tens of thousands of Americans every winter. Still, far too many people refuse to get an annual flu shot. Similarly, outbreaks of preventable (and potentially deadly) diseases like measles, mumps and whooping cough are becoming more and more common as childhood vaccination rates plummet.

Moreover, the politicians and pundits calling on the Obama administration to take radical steps to combat Ebola are the same individuals who have repeatedly criticized efforts to combat the main causes of mortality in the US. Plans to tax junk food or limit the size of sugary sodas are seen as unwelcome government intrusions into the private lives of Americans, despite the fact that over 300,000 Americans die of obesity-related illness every year.

This isn’t to say that Ebola shouldn’t be a concern for public health officials in the US. I previously criticized both the US Centers for Disease Control and Prevention (CDC) and US Customs and Border Protection for their initially tepid response to the crisis.

CDC officials, for instance, were slow to update guidelines for treating patients with Ebola, initially recommending a level of training and use of protective gear that was woefully inadequate. As a result, two nurses who cared for an Ebola patient in Dallas are now infected with the virus. Thankfully, these women are likely to recover.

The CDC has now released new guidelines for clinicians that are similar to those used by Doctors Without Borders, the charitable organization at the forefront of combatting the Ebola epidemic in West Africa. These guidelines, along with new screening procedures for travelers arriving from countries affected by the Ebola epidemic, make it even more unlikely that we will have a serious outbreak here in the US.

Unfortunately, our public response to Ebola is marked by ignorance, fear and panic. Parents of students at Howard Yocum Elementary School, located in a bucolic suburb of Philadelphia, recently protested the fact that two students from Rwanda were enrolled. Rwanda is a small East African country that is 3,000 miles away from the epicenter of the Ebola crisis, and has no reported cases of the disease. Nevertheless, frightened parents threatened to boycott classes. In response, school officials asked the parents of these two young children to “voluntarily” quarantine their kids.

What happened at Howard Yocum Elementary School is not an isolated case. A teacher in Maine was put on mandatory leave simply for attending a conference in Dallas, where the first US cases of Ebola were reported. A middle-school principal in Mississippi was suspended after returning from a family funeral in Zambia, a southern African country located many thousands of miles from the heart of the Ebola outbreak.

Cruise ships have been put on lock down, subway stations closed, family vacations cancelled, and buses and planes decommissioned because of public fear about Ebola and the risks it poses.

The sad thing is that much of this irrational fear is driven by xenophobia and racism. Since the Ebola outbreak began, over 4,500 people have died in West Africa. However, the mainstream Western media only began to report on the epidemic once an American doctor became infected. The level of care and treatment offered to infected patients from the US and Spain – including access to experimental drugs and vaccines – is also far greater than what is provided to patients in affected countries.

Finally, African immigrants to the US are being increasingly ostracized and stigmatized, even if they come from countries unaffected by Ebola. Their kids are being denied admission to school, their parents denied service at restaurants, and their friends potentially denied entry to this country.

Many US politicians, mostly conservative lawmakers but also some progressive policymakers facing tough reelection campaigns, have called for a travel ban to affected countries in West Africa. This is despite statements from the World Health Organization, Red Cross and CDC that such a travel ban will be ineffective. This is also rather disproportionate compared with lawmakers’ reactions to past outbreaks of mad cow disease in England, SARS in Canada and bird flu in China. No travel bans were proposed in those situations.

Rather than fear West Africans, now is the time to embrace them. We could learn a lot from them. Consider the recent piece by Helene Cooper, a New York Times correspondent and native of Liberia. In that country, where over 2,000 people have died, few families have been left untouched by Ebola. At great personal risk, Liberians have banded together to fight the disease rather than isolating and ostracizing those who are sick. Unlike the average American, they are responding not with fear and loathing but with compassion and love. It’s time for us to do the same.

[This blog entry was originally presented as an oral commentary on Northeast Public Radio on October 22, 2014, and is available on the WAMC website. The contents of this post are solely the responsibility of the author alone and do not represent the views of the Bioethics Program or Union Graduate College.]

Income Inequality and Health: Can the Poor Have Longer and Better Lives?

by Sean Philpott-Jones, Director of the Center for Bioethics and Clinical Leadership

The issue of income inequality has been in the news a lot lately. The gap between rich Americans and poor Americans has grown considerably since the 1970s. The United States now ranks first among the developed nations of the world in terms of income inequality as measured by the Gini coefficient, a way of describing the distribution of wealth in a society. Globally, we’re fourth overall, surpassed only by Lebanon, Russia and Ukraine.

Income inequality is a serious problem, so much so that Nobel Prize-winning economist Robert Shiller called it, “the most important problem that we are facing today.” Income inequality negatively affects economic growth, social mobility, political stability and democratic participation. It also affects the public health.

Quite simply, wealthier Americans tend to live healthier and longer lives. As the income gap has grown, so has the longevity gap. For example, consider the report recently released by the Brookings Institution that looked at income and differential mortality.

Between 1977 and 2007, Brookings economists Barry Bosworth and Kathleen Burke found that life expectancy increased an average of five years for men and one year for women.  But the gains in life expectancy accrued primarily to the rich. The richest 10% of Americans gained 5.9 and 3.1 years of life for men and women, respectively.  For men in the poorest 10%, the increase in life expectancy was less than two years.  The poorest women actually lost two years of life.

To really get a sense of how stark this divide is, however, consider the recent article by New York Times Reporter Annie Lowrey. She compared average life expectancy in Fairfax County, Virginia with that of McDowell County, West Virginia. A suburb of Washington, DC, Fairfax has one of the highest median incomes in the country:  $107,000. Men in Fairfax also have a mean life expectancy of 82 years. By contrast, the coal mining communities in McDowell have one of the lowest median incomes: $23,000. Men in that county only live to 64 on average.

There are a myriad of reasons why this longevity gap exists. The most obvious is access to health care. Wealthier individuals are more likely to have health insurance, a fact that the Obama Administration is trying to change through the Affordable Care Act.

But even if the Affordable Care Act succeeds in reducing the number of under- or uninsured Americans — which now seems likely, given that 8 million people signed up for one of the new health insurance exchanges — inequities in access will still exist.

For example, wealthier Americans will have far more choice in the types and numbers of doctors they can see.  Many clinicians are now refusing to accept any insurance plan, particularly publicly funded plans like Medicaid. Others are setting up concierge practices that guarantee same day appointments to those willing to pay. By contrast, poorer patients will have to wait for treatment, assuming they can find a doctor willing to see them.

The quality of care that the poor receive is also lower. Numerous studies have shown that lower-income patients are more likely to be misdiagnosed, prescribed the wrong medication, or suffer from complications of treatment. This is not because their doctors are incompetent or don’t care about their poorer patients. Rather, doctors that serve lower-income communities often do not have the time to adequately examine patients, take a full medical history, properly explain treatment options, or prescribe the newest drugs; they simply have too many patients to see and insurance reimbursement rates are too low to provide a full range of services.

Finally, wealthier individuals tend to live healthier lives overall. They are less likely to smoke, to drink to excess, and to be overweight. Part of this is due to differences in education, but part of it is due to time and resources. The investment banker who works in Manhattan can afford to buy fresh produce and other healthy meals at the local Whole Foods. He can also afford a gym membership, and he likely lives in a neighborhood that offers safe opportunities for exercising out-of-doors. By contrast, the single mother of four who lives in the Bronx must feed her family on a limited income, buying pre-packaged food at the corner market. She also probably lacks the time to exercise, assuming that the local playground isn’t overrun with drug dealers and gang members.

As we struggle with the issue of health care in America — expanding access to treatment while controlling costs — it is important to remember that the current health care crisis is not just about medical insurance. There are other problems in our society that will affect the outcome of the current debate. The Affordable Care Act will help address some of the current inequities in our health care system. Until we attack the fundamental issue of poverty and the income gap, however, we are probably just putting a small bandage on a large and gaping wound.

[This blog entry was originally presented as an oral commentary on Northeast Public Radio on April 24, 2014. It is also available on the WAMC website. Its contents are solely the responsibility of the author alone and do not represent the views of the Bioethics Program or Union Graduate College.]

Health Disparities: They’re Not Just for Patients Anymore

by Jacob Dahlke, Bioethics Program Alum (MSBioethics 2012)

Much is written – and justifiably so – about the disparities that exist in our healthcare system in the U.S. The CDC, for example, reports a few: non-Hispanic blacks die more frequently from stroke and coronary heart disease than whites; homicide deaths are 2.5 times higher for men than women, and over 6.5 times higher for non-Hispanic blacks than whites; and suicide rates are higher among non-Hispanic whites and American Indian/Alaska Natives than among other ethnic groups. Health disparity can be viewed as a sort of volatility risk within the healthcare system: as the difference in health among various groups of patients increases, so does the possibility (or likelihood) that people within the system will be treated unjustly or unfairly. This in turn likely deepens social disparities, increasing the likelihood that these groups will not be able to manage their health effectively. A vicious cycle, indeed.

A primary stakeholder in the health disparity discussion in the U.S. is the Centers for Medicare and Medicaid Services (CMS). This government agency manages healthcare for nearly a third of the entire U.S. population – about 100 million people. (It even covered me and my family for about four years.) CMS plays a deep role in American healthcare, so it is notable when an agency like that releases data in the name of transparency. This is just what it did, releasing Medicare payment records for physicians for 2012. It was a controversial move, opposed primarily by the American Medical Association (AMA). The AMA’s position was based on a concern that the data’s release would “mislead the public into making inappropriate and potentially harmful treatment decisions and will result in unwarranted bias against physicians that can destroy careers.” While I understand that view from the perspective of protecting the interests of its constituents – physicians – I think it comes off as condescending and paternalistic. Perhaps that can be discussed another time…

The data release shows some dramatic differences that are not unlike American society at large. The data includes payment information for 880,000 physicians who received Medicare payments from CMS in 2012, totaling $77 billion. To make a simple comparison about the disparity within this particular system, consider a ‘flat’ distribution, in which every one of those physicians received an equal share: each would receive $87,500. To contrast: the top recipient of Medicare payments in 2012 received $20.8 million. This comparison is far too simplistic, of course: it presumes that all physicians saw an equal number of patients with the same health conditions, and charged the same price for those services. None of these assumptions holds for every physician in the U.S. (I am waiting for Nate Silver to run some numbers on this – it could be another bestseller for him.)

This report – over 10 million lines of text – highlights a massive disparity in payments. $1.5 billion (almost 2% of the total payouts) was distributed to only 344 physicians (about 0.04% of the total number of physicians). About 1 out of 4 of these physicians practice in the state of Florida. Over half of the top physicians (193 of the 344) practice in just five states – FL, CA, TX, NJ, and NY – whose populations account for only a third of the total US population. (Those states account for less than 16% of the total U.S. Medicare population.)
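Since the comparison above is pure arithmetic, it can be reproduced directly from the article’s own figures. A quick sketch (the totals are hard-coded from the numbers reported here, not pulled from the CMS data itself):

```python
# Back-of-the-envelope check of the payment-disparity figures cited above.
# All inputs are the article's own reported numbers; this is only arithmetic.
total_payments = 77_000_000_000      # total 2012 Medicare payments to physicians ($)
num_physicians = 880_000             # physicians receiving payments
top_group_payments = 1_500_000_000   # payments to the top-paid group ($)
top_group_size = 344                 # physicians in that group

# A 'flat' distribution would give every physician the same share.
flat_share = total_payments / num_physicians
print(f"flat share per physician: ${flat_share:,.0f}")  # $87,500

# Concentration at the top: share of dollars vs. share of physicians.
dollar_share = top_group_payments / total_payments
physician_share = top_group_size / num_physicians
print(f"top group: {dollar_share:.1%} of dollars, {physician_share:.3%} of physicians")
```

Running the numbers this way confirms the scale of the skew: the top group collects roughly 2% of the dollars while making up well under a tenth of a percent of the physicians.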

Whether all of this is a fair characterization remains to be seen. It is obvious that a physician who simply sees more patients ought to receive more in payments than one who sees fewer. But the numbers appear so skewed – at this point, at least – that further scrutiny is surely warranted. If we as a nation are truly interested in maintaining or improving our social systems – and most of us are – then this improvement in transparency at CMS can lead to better things, and I hope that it continues.

As an alternative, though, we could just follow Vermont’s lead.

[This blog entry was originally posted in a slightly different form on Mr. Dahlke’s blog on April 9, 2014. Its contents are solely the responsibility of the author alone and do not represent the views of the Bioethics Program or Union Graduate College.]

The Concept of Brain Death and the Tragic Cases of Marlise Munoz and Jahi McMath

This guest post is part of The Bioethics Program’s Online Symposium on the Munoz and McMath cases. To see all symposium contributions, click here.

by Ryan Abbott, M.D., J.D., M.T.O.M.
Associate Professor of Law, Southwestern Law School, and Visiting Assistant Professor of Medicine, David Geffen School of Medicine at UCLA

Historically, death has been a very simple and intuitive thing to understand – it occurs when someone stops breathing and their heart stops. Visually, it is a dramatic change that anyone can comprehend.

However, we now live in an age where machines can keep people breathing, and their hearts beating, when they would otherwise die. These medical advances have been revolutionary, and they are vital to allowing living patients to recover after severe illness or injury. On the other hand, they can make it more difficult for people to accept and understand death, because it can make dead patients “appear” alive.

Brain death refers to the irreversible loss of all functions of the brain, including the brainstem. Someone who is brain dead is just as dead as someone who has stopped breathing and whose heart has stopped. Doctors confirm brain death through a neurological examination, and once it is diagnosed the patient is dead. That person will never regain any brain function and will never return to life or “wake up.”

That, of course, is a difficult concept to explain to people without medical training, who don’t understand how the brain and body work. To family members, a loved one with brain death on life support has some of the features they associate with being alive. For example, a video now circulating online that purports to show Jahi McMath responding to stimulation may simply demonstrate that some reflexes can persist after brain death, such as the Babinski reflex, which causes the big toe to move upward while the other toes fan out in response to the sole of the foot being firmly stroked. Grieving family members are, understandably, sometimes unable to accept a diagnosis of brain death.
