Should Medical Staff ‘Google’ Patients?

Note: The Bioethics Program blog will be moving to its new home on April 1, 2015. Be sure to change your bookmarks to


by Brandon Hamm, Bioethics Program Alum (MSBioethics 2012)

On several occasions, a new admission or psychiatric consultation has been accompanied by patient information that was "googled" by nursing staff or consulting practitioners. On some occasions, the "googled" information has admittedly been helpful for refining diagnosis and management. On other occasions, it has seemed unnecessary for patient care. HIPAA does not protect information that is publicly available on the internet, but is it ethical for medical staff to "google" their patients?

The first time that I personally ran into this issue was during a psychiatric consultation requested for suspected factitious disorder (a patient attempting to deceive by producing or feigning symptoms). The patient presented as a new patient (to our geography and institution) with shortness of breath and chest pain. After an extensive workup, the medical team was puzzled that the patient's symptoms did not match his objectively normal physiology. The clinical next steps under consideration were invasive. The medical team was not yet able to obtain previous records from outside hospitals, and the patient declined consent for collateral information. So the team "googled" the patient. We found a forum accusing the patient of inducing arrhythmias for medical attention and regularly committing social and financial frauds. Records later obtained from an outside hospital revealed a history of hospital-hopping and recurrent malingering/factitious behavior resulting in medically unnecessary procedures.

In this case, the "googled" information slowed the invasive, and therefore risky, trajectory of the patient's care. It also facilitated better management of the building conflict between the patient and the frustrated medical team. Here, "googling" produced beneficent, or at least non-maleficent, results. These would be the typical justifications for "googling" a patient, but it is much harder to claim that it respected patient autonomy. Perhaps more importantly, I worry about what impact "googling" has on trust in the patient-physician relationship.

Despite public attention from Haider Warraich's New York Times article "When Doctors 'Google' Their Patients", the ethics of "googling" patients has received scant attention in the bioethics literature. What consensus there is deems "googling" for non-clinical purposes unethical. When "googling" is used as a clinical tool, there remains some disagreement. Volpe et al (2013) argue that "googling" patients is bad practice because it encourages providers to withdraw from patient relationships, can damage trust, and invades patient privacy. George et al (2013) point out that searching is legal and is not considered a breach of privacy when employers screen applicants. Consequently, this group proposes that, at times, it is irresponsible not to "google" patients when traditional information sources (the patient, the patient's medical record, previous providers) are exhausted or unavailable. Various commenters (Krischner et al, 2011) point out, however, that information obtained on the internet is of variable accuracy, and that using it will break trust and rapport when disclosed to the patient.

“Googling” for patient information is somewhat different in psychiatry than it may be in other areas of medical practice. A patient’s social circumstances, criminal history, attunement with reality, and even provision of truthful information are clinically pertinent for psychiatrists to perform accurate diagnosis and management. Since this information (with variable accuracy) may be readily available on the internet, temptation to “google” patients may be strongest for psychiatrists. A patient’s posted suicidal thoughts or homicidal threats are clearly of clinical significance. And these may not be available in the normal course of care. For example, patients with acute paranoid psychosis are often very protective of even the most basic personal information. Moreover, concerns about compromising patient trust may be blunted in psychiatry—psychiatrists are accustomed to damaging rapport when admitting patients involuntarily who are a significant danger to self or others.

Along with others, I believe that practitioners should never "google" for clinically irrelevant patient information. While it may be acceptable to "google" new colleagues and new friends, the relationship with a patient should be understood differently.

"Googling" for clinically relevant information should only be undertaken after a clear articulation of the reasons for "googling" and the necessity of the information being sought. Clinical information should ideally be obtained from the patient, medical records, previous providers, and patient-permitted others. In cases where these resources are exhausted or unobtainable, and significant patient benefit or harm is at stake, practitioners should look to information obtained from the internet only as a last resort. Some guidelines for psychiatrists were proposed by Clinton et al (2010). Specifically, these guidelines encourage the psychiatrist to consider his or her clinical intention, the potential for trust impairment, the possibility of obtaining consent for the search, and the impact of revealing obtained information to the patient or in documentation.

What we don’t have here is a clear algorithm for determining when it’s okay to “google” a patient and when it’s not. And so I wonder what your experiences have been and when you think it’s appropriate.

[The contents of this post are solely the responsibility of the author alone and do not represent the views of the Bioethics Program or Union Graduate College.]


Caveat Scholasticus

by Sean Philpott-Jones, Director of the Center for Bioethics and Clinical Leadership

Economists talk a lot about scarcity. Scarcity occurs when we have fewer resources than are necessary to fill our basic needs and wants. Price is usually a good indicator of scarcity. Despite the recent short-term glut of oil, for instance, increasing demand and decreasing supplies of fossil fuels means that gasoline prices will inevitably rise in the coming years.

Ethicists like myself also talk about scarcity. Medical resources are often in short supply and must be rationed. The limited number of beds in the intensive care unit means that doctors must sometimes make difficult choices about which critically ill patients are admitted to the ICU and which are not. Vaccines may also be rationed. In the event of a serious flu epidemic, for example, the New York State Department of Health has a four-tiered vaccine allocation system, with critically needed staff such as doctors, nurses, police and firefighters given priority over grocery clerks, plumbers, mechanics, and stay-at-home dads. But one thing we never thought would be an increasingly scarce resource, at least in the medical setting, was privacy.

Everyone is increasingly concerned about privacy today, and rightfully so. In a progressively wired and interconnected age, there is little about a person that isn’t public knowledge. In fact, despite all our protestations, we as individuals are largely responsible for this loss of personal privacy.

We give up our personal privacy in a myriad of seemingly innocuous ways: posting status updates on Facebook and Twitter, writing blog articles, and uploading pictures to Instagram. Everything we say or do online leaves behind a trail of personal information that can be used by public agencies and private businesses to track us, watch us, and selectively market goods and services to us.

This is true even when it comes to our personal health. As mentioned before, much of this is our own doing. We comment about our various aches and pains online, use databases like WebMD to self-diagnose and self-treat minor illnesses and injuries, and purchase over-the-counter and prescription drugs using our CVS ExtraCare card. But one thing that we would never expect is that our conversations with our physicians and psychotherapists could also become public knowledge.

If anything, maintaining patient privacy and confidentiality is one of the key ethical obligations placed upon physicians. It is an obligation that has its roots in two millennia of Hippocratic practice, and it is the foundation of the doctor-patient relationship. Patients must feel that they can share all sorts of personal information with their physician, no matter how embarrassing or stigmatizing. This information is often necessary to ensure proper diagnosis, testing and treatment.

A sixteen-year-old girl who is experiencing pain when urinating, for example, may simply have a urinary tract infection. But she may also have a more serious condition like chlamydia, gonorrhea or some other sexually transmitted infection. If she is not willing to share the fact that she is sexually active, perhaps out of fear that her parents will find out, her doctor may inaccurately diagnose and treat her.

Maintaining patient privacy and confidentiality is so important that it has been put into practice and codified into law. Following a groundbreaking observational study of what doctors, nurses and medical students shared with each other in public elevators (spoiler alert: they shared way too much), many hospitals instituted strict policies about what can and cannot be said about patients in public settings. Anyone who has been to a hospital in recent years has undoubtedly seen the signs in the hallways and elevators reminding staff of this fact. Hospital staff can be reprimanded and even fired for breaching confidentiality, as happened at Cedars-Sinai Medical Center after six employees inappropriately accessed the medical records of reality television star Kim Kardashian.

State and federal laws restrict the types of information that can be shared about patients. One key federal law, the Health Insurance Portability and Accountability Act (HIPAA), places strict limits on who can access or share your medical records or your health insurance and billing information. Doctors, hospitals, and insurance companies bound by HIPAA regulations can face severe civil and criminal penalties for violating this law, including fines of $1.5 million and prison sentences of up to ten years.

Unfortunately, this privacy law is rife with loopholes. HIPAA only applies to so-called ‘covered entities,’ such as health providers and health insurance companies. It does not apply to others who may have private health information, such as life insurance companies, employers, workman’s compensation programs, law enforcement agencies, or schools. This is a significant problem, as highlighted by a recent case involving a student at the University of Oregon.

That student was allegedly raped by three University of Oregon basketball players. In a Title IX lawsuit filed against the school, she claims that the school deliberately delayed its investigation so that the men could play in an important NCAA tournament.

So what does this case have to do with medical privacy? The University is using the student’s own medical records to defend itself in court. Because the student sought clinical treatment and psychological counseling at the University health clinic, her medical record belongs to the school. A federal law known as the Family Educational Rights and Privacy Act (FERPA), ironically meant to protect the privacy of a student’s educational records, exempts campus medical records from HIPAA’s privacy rules.

Sadly, as morally repugnant as this is, the University is well within its legal rights to do this. Until laws like HIPAA and FERPA are amended to close these loopholes, we all should be more than a little wary. Students, for example, may wish to seek off-campus counseling or treatment in order to protect the privacy of their records, even if this means that they or their families may be forced to shoulder the cost. Meanwhile, the rest of us should be a little more diligent about the types of medical information we share with agencies and organizations not covered by HIPAA, and to pause for a moment before we complain about our neck aches and back pains on social media.

[This blog entry was originally presented as an oral commentary on Northeast Public Radio on March 12, 2015, and is available on the WAMC website. The contents of this post are solely the responsibility of the author alone and do not represent the views of the Bioethics Program or Union Graduate College.]

I Heard It Through the Grapevine: Ethical and Legal Considerations of HIV Disclosure

by Jacob Dahlke, Bioethics Program Alum (MSBioethics 2012)

Nebraska’s highest court ruled last week that an Omaha-area woman can pursue her lawsuit against a local clinic that she alleges disclosed her HIV-positive diagnosis. The case began in 2010, with the plaintiff, known only as C.E., visiting a diagnostic lab in Omaha as part of a health insurance application. After the results passed through another lab, C.E.’s lab results arrived at her local physician’s clinic, Prairie Fields in Fremont, 30 miles northwest of Omaha. C.E. was told at her consultation in the fall of 2010 of her positive screen for HIV. She had initially asked for her results from an employee and former high school classmate, Kristy Stout-Kreikemeyer, who appeared to see the results but deferred the actual disclosure to a staff physician assistant. C.E. was told the test was inconclusive, and she agreed to take another test. Barely more than 24 hours later, C.E. was contacted by an ex-boyfriend, Jonathan Karr, asking how she was, having heard from a mutual acquaintance, Jamie Goertz, that she had “full blown AIDS”. C.E. alleges that Kristy Stout-Kreikemeyer told Jamie Goertz about the diagnosis, and that Prairie Fields is also responsible for this violation of her privacy.

While the details of the case may seem to read like a sort of tabloid story, the ethical issues remain real. What are the public health implications of disclosing vs. keeping private STI diagnoses, including HIV? What are medical facilities’ obligations regarding such disclosures? Does the public have a right to know such private details about individuals? How does the size or intimacy of the community change the issue, or should that be irrelevant?

Let’s begin with some lingering unknown factors. First, C.E.’s actual HIV status remains unknown, and I would anticipate it will remain unknown to the general public, even through the course of her future trial. While her initial screen was positive, the clinic recommended that she have another, more definitive test performed in order to rule out a false-positive screen. According to the CDC, “(f)urther testing is always required to confirm a reactive (preliminary positive) screening test result.” While the court papers refer only to an initial ‘blood test’, it remains possible that the test produced a false-positive result, meaning that the test indicated the presence of HIV antibodies when there were none. This would have been why the PA at Prairie Fields wanted C.E. to return for a follow-up, more definitive test.

What does this have to do with privacy and public safety? First, C.E. was not diagnosed with HIV; she only screened positive for it. Not until C.E. underwent the second test, and tested positive there, would she be considered HIV-positive. This may matter for C.E. from a legal standpoint because her lawsuit may include defamation claims, alleging that the defendants publicly said she has a disease that she does not. From a public health perspective, the only reporting to be done is to the appropriate health departments. This was appropriately not yet done by the clinic, since a confirmed positive diagnosis was not present. The purpose of reporting to health departments is threefold: 1) it can aid patients who may need treatment by connecting them to local resources and by providing local context to their disease; 2) it helps to identify and support sex partners who may not yet be infected, partners who are at risk of infection, or partners who may be infected but do not yet know; and 3) it can help to track disease patterns and trends, thus ensuring that appropriate resources are being devoted. It is done confidentially to ensure that patients maintain control (autonomy) over their own healthcare decisions, including the circumstances (who, what, when, where, why, how) of telling others.

It is perhaps obvious to say that a case like this can lead to a loss of trust between patient and physician in a community. Confidentiality is a hallmark of quality healthcare. Patients are ill, concerned, and otherwise vulnerable; healthcare providers are there to have intimate and personal conversations with patients that are often not shared with even the most privileged of the patients’ inner circle of friends and family. In the event that C.E. had a negative confirmatory test for HIV, it should have been up to her to share her ‘close call’ with whomever she chose, including no one. Likewise, were she to test positive, the ability to at least control the context of telling her sexual partners ought to have been hers. But that did not happen, due to someone’s (alleged) violation of her privacy.

Instead, what transpired highlights why confidentiality in healthcare is so important. C.E. was left to defend herself against her community, which appears to be quite closely knit and connected over long periods of time. What should have been a conversation between C.E. and her physician was instead being talked about at a local bar, which led eventually to her ex-boyfriend (and father to one of C.E.’s children) asking her about it. Breach of confidentiality is obviously not a problem that exists only in rural or small communities, but in this case it certainly seems to have been exacerbated by the community’s size.

[This blog entry was originally posted on Mr. Dahlke’s blog on March 18, 2014. Its contents are solely the responsibility of the author alone and do not represent the views of the Bioethics Program or Union Graduate College.]

Of DNA and Databases

by Theresa Spranger, Bioethics Program Alumna (MSBioethics 2012)

You have the right to remain silent, but your DNA can and will be used against you…

On Monday, the United States Supreme Court decided to allow DNA to be taken at the time of arrest from those accused of “serious” crimes. The DNA sample is taken by a cheek swab and the information is then held in a database. The sample is run against others in the database, some of which are unknown samples that were collected at crime scenes. Occasionally, there is a match and the new DNA sample can help the police to solve one of their cold cases.

The case that went to the Supreme Court is that of Alonzo King. Mr. King was arrested on an assault charge and his DNA swabbed at the time of arrest. The DNA matched that of an unsolved rape case and King was charged with this rape. The problem arose when King pled guilty to a lesser crime than the assault. Under current Maryland law, the police would not have been allowed to take his DNA for the crime for which he pled guilty. King’s attorneys argued that because he was convicted of a lesser crime the DNA evidence should not be permissible in the rape case.

In a 5-4 decision that cut across ideological lines, King’s rape conviction was upheld. Kennedy, the court’s notorious swing vote, wrote the majority opinion, reasoning that collecting DNA upon arrest was akin to fingerprinting and not a violation of the person’s rights. He was joined by Justices Roberts, Thomas, Alito, and Breyer.

The dissenting justices were Ginsburg, Kagan, Sotomayor, and Scalia. Scalia wrote the dissent, arguing that the ruling was too vague, that the precedent the court was setting was dangerous, and that the collection of DNA has high potential for future misuse.

I agree with the dissent and believe the court made a mistake with this decision. It’s not that I would like Mr. King roaming the streets to rape again, but I think some of the justices neglected to look at the bigger picture. My main issue is with the saving of information: DNA information is not like having your fingerprints on file. Your DNA is a map of you, and we have no idea how this information could be used in the future. We learn more about DNA and genetic makeup every day, and the more we learn, the more cautious I become about sharing my genetic information.

What about the DNA sample itself? Is this retained along with the database of information? It would be one thing to have a database, but another entirely to have the physical sample. Scientists can do amazing things and all indications point to more incredible discoveries in the world of genetics. This should give us pause when discussing the creation of a central database for anyone.

A further issue with this ruling is its vagueness. The court states that DNA gathering is permissible in cases of “serious” crimes. What does this mean exactly? What must someone be accused of to lose their right to control the use of their DNA? Drunk driving, shoplifting, protesting? Or is it truly for violent criminals? Murderers, rapists, etc.?

Taking the sample upon arrest, however, flies in the face of our nation’s presumption of innocence. Keep in mind that not all of those arrested are criminals. Who knows, perhaps one day you will be in the “wrong place at the wrong time” and your genetic information will be on permanent file without your consent.

Making decisions based only on what we know about DNA today is never a good idea; genetics and the manipulation of genes is an ever-changing field, and we need to make decisions with the future in mind. I certainly want to give the police every advantage in catching dangerous criminals, but not at the risk of my or other innocent Americans’ personal privacy or freedom.

[This blog entry was originally posted in a slightly edited form on Ms. Spranger’s blog on June 6, 2013. Its contents are solely the responsibility of the author alone and do not represent the views of the Bioethics Program or Union Graduate College.]

Death, Taxes and Medical Records

by Theresa Spranger, Bioethics Program Alumna (MSBioethics 2012)

The Internal Revenue Service (IRS) is being charged with the illegal search and seizure of 60 million medical records from about 10 million Americans. It is suspected that these records contain personal medical information on people from all walks of life.

The claim charges 15 IRS officials with unlawful seizure of the medical records during an investigation at a California company. The IRS was investigating a former employee for a financial issue. The company is requesting $25,000 in damages be paid for each unauthorized record, but this case would also serve as a precedent case and provide future guidance for similar situations.

The unauthorized seizure of medical records is a major concern for several reasons:

  1. Our medical records contain confidential and sensitive information about us. The records contain psychological history, sexual history, drug and alcohol history, etc. Anything you tell your doctor ends up in your chart and I think it is safe to say that most, if not all of us, have at some point told our doctor something we would not like to end up on the network news, or in the hands of a government agent.
  2. The IRS has been given a vital role in the management of President Obama’s health care reform. In this position they will likely be frequently placed in situations like the one mentioned above. Because of their position, we will need to trust IRS officials to maintain the confidentiality of our health information. This means they will need to view only the records absolutely necessary, and extract only the information they directly need for their investigation. Many security measures are built into EMR (Electronic Medical Record) systems to keep your medical information safe. To access medical records at the hospital where I work, I have to enter two different password screens, and every record I view is recorded. Also, my name is listed right on the screen in the patient’s electronic file. This means that I can see the names of the last several people who have viewed each patient’s chart, and they will be able to see mine. This provides extra incentive to view only the records you have a workplace need to view, and allows other employees to see if you have been poking around records you have no reason to be viewing.
  3. When taken from the hospital system, information in the medical record is no longer as tightly protected. When moved to another computer, there is always the risk that the information will be misused and confidentiality compromised.
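The access-control and audit-trail safeguards described in point 2 can be sketched in a few lines of code. This is a toy illustration of the idea, not any real EMR system; all class and method names here are hypothetical:

```python
from collections import defaultdict, deque

class AuditedRecordStore:
    """Toy model of an EMR audit trail: every chart view is recorded
    under the viewer's name, and recent viewers are visible to staff."""

    def __init__(self, recent=5):
        self._records = {}                     # patient_id -> chart data
        self._access_log = defaultdict(deque)  # patient_id -> recent viewer names
        self._recent = recent

    def add_record(self, patient_id, chart):
        self._records[patient_id] = chart

    def view(self, patient_id, staff_name):
        # Viewing is never anonymous: the name is logged before the
        # chart is returned, keeping only the most recent viewers.
        log = self._access_log[patient_id]
        log.append(staff_name)
        if len(log) > self._recent:
            log.popleft()
        return self._records[patient_id]

    def recent_viewers(self, patient_id):
        # Any staff member opening the chart can see who looked before them.
        return list(self._access_log[patient_id])
```

The deterrent described above comes from `recent_viewers`: because the next person to open the chart sees your name, browsing records without a workplace need is self-incriminating.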

So, ask yourself: Would you trust each individual employee of the IRS with every aspect of your life and the most intimate details of your personal health history? No? Then, we need to be extra vigilant as healthcare reform rolls out over the next few years. I do recognize that I have a hearty dose of skepticism of the government, but like the old adage says, “plan for the worst and hope for the best.” I think we need to do a bit more of the planning when it comes to the new IRS responsibilities under the Health Care Reform Act.

[This blog entry was originally posted in a slightly edited form on Ms. Spranger’s blog on June 2, 2013. Its contents are solely the responsibility of the author alone and do not represent the views of the Bioethics Program or Union Graduate College.]

Lara Croft: Cancer Activist

by Sean Philpott, Acting Director of the Center for Bioethics and Clinical Leadership

In an Op-Ed piece published in Tuesday’s New York Times, actress Angelina Jolie revealed publicly that she had undergone a prophylactic double mastectomy — removal of both breasts — in order to reduce her risk of developing cancer.

Ms. Jolie had a reason to be concerned. Genetic tests showed that she carried a mutation in a gene known as BRCA1, a change in her DNA that greatly increased the likelihood that she would develop breast or ovarian cancer sometime during her life. Cancer-causing mutations in the BRCA1 gene (or a related gene known as BRCA2) are rare, but account for a majority of familial cases of breast and ovarian cancer seen in the US.

Ms. Jolie likely inherited this mutation from her mother, who died of cancer at 56. Based on her test results, doctors estimated her lifetime risk of developing cancer at approximately 87%, probably at an early age. By contrast, the average woman in the US has a lifetime risk of 12%, with diagnosis usually coming later in life.

The decision to remove both breasts could not have been an easy one, particularly for a starlet who is famous for playing buxom femme fatales in movies like Lara Croft: Tomb Raider, Mr. & Mrs. Smith, and Salt. Ms. Jolie admits as much in her Times article. A prophylactic mastectomy doesn’t completely eliminate her risk of breast cancer; it only reduces it by about 10-fold.

She is also at increased risk of developing ovarian cancer, but elected not to have her ovaries removed. A prophylactic oophorectomy, as that procedure is known, is an invasive procedure with long-lasting physiological effects, including early menopause, cardiovascular disease, osteoporosis, and loss of sexual function.

With recent advances in reconstructive surgery, there was no need for Ms. Jolie to go public. She wouldn’t have been the only Hollywood star to get breast implants, just one of the few with a medical reason for doing so. Barring release of her medical records, a serious breach of privacy, no one would have been the wiser.

So why speak out? According to the actress, she wrote about her experience so that other women could benefit. Specifically, so women with a familial history of cancer could get tested for mutations in the BRCA1 and BRCA2 genes and, if necessary, to “take action.”

Having a spokeswoman like Angelina Jolie increase public awareness of breast cancer is a laudable goal, but it is also one that worries me. Women who look to Angelina as a role model might rush to be tested for cancer-causing genes. However, the results of genetic testing have profound consequences — physically, psychologically and for future insurance coverage. In addition, the tests in question are very expensive. A single test costs approximately $3,000, and may not be covered by existing health insurance plans. Many women simply cannot afford to do what Angelina did.

These exorbitant testing costs are due to the fact that a Utah-based company called Myriad Genetics has patented both the BRCA1 and BRCA2 genes. Myriad currently holds a monopoly on testing for breast and ovarian cancer-causing mutations. The legality of this monopoly had been questioned, most notably in a US Supreme Court case challenging a private company’s right to patent human genes. But until the Court’s ruling in October, the company has every legal right to charge what it believes the market will bear.

Given this, only women with a clear familial history of breast or ovarian cancer should be tested. But figuring out who has such a history is not an easy task. As many as one in eight women in the US will develop breast cancer at some point in their lives, making it likely that most people will have a sister, mother, aunt or grandmother with a diagnosis. People can have as many as two, three or even four female relatives with cancer. But most of these cases will not be associated with mutations in BRCA genes. It takes a trained genetic counselor or skilled physician, using a detailed family tree, to know for sure whether or not a woman is a potential carrier of a mutant gene.

Moreover, for those unlucky few who do carry a mutant copy of BRCA1 or BRCA2, a prophylactic mastectomy or oophorectomy may not be the answer. Ms. Jolie made a carefully considered and informed decision, in consultation with a highly trained team of doctors, to undergo this radical procedure. But there are other less effective but less expensive and less invasive options, including tamoxifen or regular monitoring, that may be the better choice for many women (particularly those who lack the savvy and resources of Angelina). I’d hate to think that they rushed to have their breasts removed simply because their favorite starlet had done the same.

None of the concerns I voice here is meant to take away from what Angelina has done. Speaking publicly about her decision is a courageous thing to do. But the take-home message for women is far more nuanced than get tested and get treated.

[This blog entry was originally presented as an oral commentary on Northeast Public Radio on May 16, 2013. It is also available on the WAMC website. Its contents are solely the responsibility of the author alone and do not represent the views of the Bioethics Program or Union Graduate College.]

The Law, Ethics and Science of Re-identification (An Online Symposium)



by Michelle Meyer, Bioethics Program Faculty

Over the course of the last fifteen or so years, the belief that “de-identification” of personally identifiable information preserves the anonymity of the individuals involved has been repeatedly shown to be false by scholars and journalists. It would be difficult to overstate the importance, for privacy law and policy, of the early work of “re-identification scholars,” as I’ll call them. In the mid-1990s, the Massachusetts Group Insurance Commission (GIC) released data on individual hospital visits by state employees in order to aid important research. As Massachusetts Governor Bill Weld assured employees, their data had been “anonymized,” with all obvious identifiers, such as name, address, and Social Security number, removed. But Latanya Sweeney, then an MIT graduate student, wasn’t buying it. When, in 1996, Weld collapsed at a local event and was admitted to the hospital, she set out to show that she could re-identify his GIC entry. For twenty dollars, she purchased the full roll of Cambridge voter-registration records, and by linking the two data sets, each innocuous enough on its own, she was able to re-identify his GIC entry. As privacy law scholar Paul Ohm put it, “In a theatrical flourish, Dr. Sweeney sent the Governor’s health records (which included diagnoses and prescriptions) to his office.”
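The linkage attack Sweeney demonstrated can be sketched in a few lines: two datasets, each harmless alone, joined on shared quasi-identifiers (in her case ZIP code, birth date, and sex). All records below are fabricated for illustration:

```python
# A minimal sketch of a linkage re-identification attack. The health
# rows have names removed but retain quasi-identifiers; the voter roll
# is public and pairs names with the same quasi-identifiers.

def link(deidentified_health, voter_roll):
    """Re-identify 'anonymized' health rows by joining on quasi-identifiers."""
    key = lambda r: (r["zip"], r["birth_date"], r["sex"])
    voters_by_key = {}
    for v in voter_roll:
        voters_by_key.setdefault(key(v), []).append(v)
    matches = []
    for h in deidentified_health:
        candidates = voters_by_key.get(key(h), [])
        if len(candidates) == 1:  # a unique match pins down the identity
            matches.append({"name": candidates[0]["name"],
                            "diagnosis": h["diagnosis"]})
    return matches

health = [  # "anonymized": no names, but quasi-identifiers retained
    {"zip": "02138", "birth_date": "1945-07-31", "sex": "M", "diagnosis": "hypertension"},
    {"zip": "02139", "birth_date": "1960-01-15", "sex": "F", "diagnosis": "asthma"},
]
voters = [  # public record: name plus the same quasi-identifiers
    {"name": "J. Doe", "zip": "02138", "birth_date": "1945-07-31", "sex": "M"},
    {"name": "A. Smith", "zip": "02139", "birth_date": "1971-03-02", "sex": "F"},
]
print(link(health, voters))  # → [{'name': 'J. Doe', 'diagnosis': 'hypertension'}]
```

The point of the sketch is that neither dataset alone identifies anyone; it is the join on a combination of attributes unique to one person that does the damage. Sweeney later estimated that ZIP code, birth date, and sex alone uniquely identify a large share of the US population, which is why removing only "obvious" identifiers fails.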

Sweeney's demonstration led to important changes in privacy law, especially under HIPAA. But that demonstration was just the beginning. In 2006, the New York Times was able to re-identify one individual (and only one individual) in a publicly available research dataset of the three-month AOL search history of over 600,000 users. The Times demonstration led to a class-action lawsuit (which settled out of court), an FTC complaint, and soul-searching in Congress. That same year, Netflix began a three-year contest, offering a $1 million prize to whoever could most improve the algorithm by which the company predicts how much a particular user will enjoy a particular movie. To enable the contest, Netflix made publicly available a dataset of the movie ratings of 500,000 of its customers, whose names it replaced with numerical identifiers. In a 2008 paper, Arvind Narayanan, then a graduate student at UT-Austin, along with his advisor, showed that by linking the "anonymized" Netflix prize dataset to the Internet Movie Database (IMDb), in which viewers review movies, often under their own names, many Netflix users could be re-identified, revealing information suggestive of their political preferences and other potentially sensitive matters. (Remarkably, notwithstanding the re-identification demonstration, after awarding the prize in 2009 to a team from AT&T, Netflix announced plans in 2010 for a second contest, which it cancelled only after tussling with a class-action lawsuit (again, settled out of court) and the FTC.) Earlier this year, Yaniv Erlich and colleagues, using a novel technique involving surnames and the Y chromosome, re-identified five men who had participated in the 1000 Genomes Project, an international consortium to place the sequenced genomes of (as it turns out) 2,500 "unidentified" people in an open online database, and who had also participated in a study of Mormon families in Utah.

Most recently, Sweeney and colleagues re-identified participants in Harvard's Personal Genome Project (PGP), who are warned of this risk, using the same technique she used to re-identify Weld in 1997. As a scholar of research ethics and regulation, and also a PGP participant, I found this latest demonstration especially interesting. Although much has been said about the appropriate legal and policy responses to these demonstrations (my own thoughts are here), there has been very little discussion of the legal and ethical aspects of the demonstrations themselves.

As a modest step in filling that gap, I'm pleased to announce an online symposium, to take place the week of May 20th, that will address both the scientific and policy value of these demonstrations and the legal and ethical issues they raise. I'll cross-post my own contribution here, but the full symposium will be hosted over at Bill of Health. Participants fill diverse stakeholder roles (data holder; data provider, i.e., research participant; re-identification researcher; privacy scholar; research ethicist) and will, I expect, have a range of perspectives on these questions:

Misha Angrist

Madeleine Ball

Daniel Barth-Jones

Yaniv Erlich

Beau Gunderson

Stephen Wilson

Michelle Meyer

Arvind Narayanan

Paul Ohm

Latanya Sweeney

Jennifer Wagner

I hope readers will join us on May 20.