Saturday, November 20, 2010

Do you have the right to tell your own story?

Publishing has something in common with roller coasters: The rewards are strongly and positively correlated with the capacity to instill fright.

Together with Ana Iltis, Susan DuBois, and the journals division at Johns Hopkins University Press, I recently started a new journal, Narrative Inquiry in Bioethics: A Journal of Qualitative Research. While we will publish some traditional types of articles (such as case studies and qualitative research reports), our hallmark will be “narrative symposia.” These will consist of 8 to 12 personal stories on a common theme followed by commentary articles that analyze the stories. Issues in our first volume will explore experiences as a hospitalized psychiatric patient, as a physician with conflicts of interest, and as a nursing assistant providing care at the end of life. I cannot wait to read our first three issues. I believe the stories will be interesting and will contribute to a deeper understanding of important topics.

Nevertheless, the day before publishing our first call for papers, I woke up at 2 a.m. worried about terrible scenarios: What if a psychiatric patient has a bad experience of hospitalization and starts naming attending physicians? What if a nursing assistant names an administrator who ignored reports of elder abuse? What if a physician discloses that a specific corporation offered to pay kickbacks for referrals? Eeek! Later that day I put the brakes on our call for papers, and the editorial team began a process of discerning how we would address such possibilities. Of course, we already had a confidentiality policy in place, but the reality of soliciting a large number of stories on sensitive topics made me question just how good it was.

After quite a bit of digging through the literature and interrogating ethicists and lawyers, we found a hodgepodge of rules and rulings regarding the disclosure of information about “third parties” such as employers, colleagues, family members, or institutions when publishing a story. In some specific situations there are clear and strict rules: Educational records and health records are federally protected. Generally speaking, we cannot share such information about an identifiable individual without written permission. Other rules are clear in themselves but can be challenging to apply, such as libel laws. Still other domains of law have yet to be adequately clarified by the courts. A new sphere of case or tort law dealing with privacy appears to be emerging: Individuals can be held liable for damage done by publishing true statements about others that are embarrassing or harmful to reputation; but plaintiffs rarely win when the disclosure of information serves an important public good (i.e., the courts still recognize the general value of a free press—although rules for reporters vary from state to state and may differ from rules for academic publishers).

Shortly after beginning this process, I noticed that the upcoming conference of the American Society for Bioethics and Humanities included a session entitled, “Publishing other people’s stories: Editorial practice and policy on the ethics of patient-subject confidentiality and consent.” This sounded providential: Lady Fortuna’s wheel was spinning upward. Although we could not wait until after the conference to adopt a policy, I hoped the session might present an emerging consensus on the issue and confirm our practices. However, when I attended the session about 2 months later, I discovered quite the opposite. Discussion was lively, reflections were deep, and there were no uniform practices. Among the panelists and audience participants, we heard from the editors of at least 7 journals. One journal had no policy on the matter. Some journals insisted that all details be truthful—cases could not be fictionalized (e.g., by changing names or dates), while another routinely insisted on fictionalization. Some appeared to have a high tolerance for legal risk; another insisted on signed permission for all references to private information about third parties. One journal focused more on ethical duties to the author than to third parties, noting that most authors feel that they own their story—it’s a segment of their life and they have a right to tell it. This is an interesting idea. If a man was beaten as a child by an alcoholic father, does he have the right to tell the world that aspect of his story? Or does the father have a right to privacy that cannot be trumped by the son’s right to tell his own story? Do the courts have the right to settle such issues?

Overall, I think our journal has adopted a fairly sensible policy that grants authors flexibility in how they protect privacy even while insisting that they do so. Click here to read our policy. (Scroll to “Ethics—Protecting the privacy of third parties”). Comments are welcome!

Sunday, November 7, 2010

Surgery does not look difficult to me


I’ve butchered a deer, dissected a cat, and sewn buttons and hems. The first time I observed a cardiac bypass I honestly thought: “That does not look very difficult—I could do that.” (Bear in mind, my degrees are in psychology and philosophy.)

Of course, surgeons spend 5-7 years in residency training after completing medical school. They usually know which incisions provide the best access to targeted parts and which incisions heal best; how to avoid infections; where to find spare tissue for grafts; and when to tie things up and call it a day. Some develop the skill to tie knots they cannot see; some have incredible manual dexterity and can close an incision without leaving a scar. I know it’s best to leave surgery to surgeons.

I’m always astonished at people who decide they’re just going to do a survey or conduct a few interviews. When they do, I’m not surprised that their results are invariably worthless or, worse, harmful.

As a general rule, you should not design a survey if you cannot explain:
-       What is a construct? And how do we know if we are measuring what we hope to measure (and not something very different)?
-       What is a representative sample, and when does representativeness matter?
-       How to estimate the power of a study (see the sketch after this list).
-       What kinds of reliability are there? Which matter for your study?
-       How to set up a database, clean data, and analyze data.
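
To make the power item concrete, here is a minimal sketch of an a priori power calculation in Python. It assumes the statsmodels library is available; the effect size, alpha, and power targets are illustrative choices, not recommendations.

    # A priori power analysis for a two-group comparison (illustrative values).
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    # How many respondents per group are needed to detect a medium effect
    # (Cohen's d = 0.5) at alpha = .05 with 80% power?
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(round(n_per_group))  # about 64 respondents per group

Run the numbers before you recruit: if a realistic effect size demands more respondents than you can reach, the study needs redesigning, not wishful thinking.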

No, you should not even write the survey items if you cannot explain:
-       Why the way you phrase an item will determine which statistic must be used, or whether any statistic can be meaningfully used (see the sketch after this list)
-       What is a well-formed item or question? And why the word “and” should never appear in an item.
-       Why true/false items are famously bad.
-       What is content validity and how does it differ from construct validity?
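
To illustrate how phrasing constrains analysis, here is a minimal sketch in Python using scipy; the items and responses are fabricated purely for illustration.

    # How an item's format constrains the analysis (toy data throughout).
    from scipy import stats

    # A yes/no item yields counts, so we compare proportions (chi-square).
    counts = [[30, 20],   # group A: yes, no
              [18, 32]]   # group B: yes, no
    chi2, p_binary, dof, expected = stats.chi2_contingency(counts)

    # A 5-point Likert item yields ordinal data; a rank-based test is safer
    # than a t-test, which assumes interval-level measurement.
    group_a = [4, 5, 3, 4, 5, 4, 2, 5]
    group_b = [2, 3, 3, 1, 4, 2, 3, 2]
    u_stat, p_ordinal = stats.mannwhitneyu(group_a, group_b)

    # A double-barreled item ("Was the staff friendly and efficient?") cannot
    # be rescued by any statistic: one answer spans two different questions.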

A good methodologist knows the answer to all of these questions and has the skills to design your study and analyze your data.

To be clear, methodologists cannot design a good survey by themselves. Presumably, if you want to design a survey on a given topic (say, public attitudes toward biobanking), then you have some knowledge about the topic. This knowledge will be essential to writing items that have good content validity—and your average methodologist or statistician will lack this knowledge. This is why so few social science articles have just one author: A good survey usually requires the complementary expertise of multiple individuals.

However, collaboration alone is not sufficient to guarantee the quality of a survey. You need to collaborate with competent individuals, and the timing of the collaboration matters. Meet with a methodologist before you finalize your questionnaire items or interview guide. Even if you think item wording is purely a matter of content, the concrete form that survey items or interview questions take will influence the validity and reliability of your study.

Conclusion: Collaborate with a methodologist, and do it early in your project.

Thursday, October 21, 2010

The value of empirical bioethics research

In response to an article by Alexander Kon, I noted an ironic situation: Within his framework, the more valuable an empirical ethics study becomes, the less apparent it is that it constitutes “ethics research.” He claims that one of the highest forms of research in bioethics is the sort that improves patient care. By that standard, the development of a drug that cured AIDS would be a higher form of ethics research than assessing attitudes toward confidentiality in AIDS research or identifying disparities in access to AIDS clinical trials. Some would say drug development research is simply medical research—clinical science—not ethics research. I disagree. Kon’s position is ironic when one thinks about it; but so is much of life, and irony is no sign of incoherence or falsehood.* Let’s face it: a large part of ethics concerns maximizing benefits or promoting flourishing, and most of what ethicists do does not benefit humankind nearly as much as a cure for AIDS would. Moreover, if ethics pertains to right and wrong voluntary actions—what we choose to do and to become—then any kind of research can be viewed through the lens of ethics.

Nevertheless, there is little prospect that all research will be labeled “ethics research,” nor should it be. Just when a study deserves that label is open to debate. I’ve tried a few definitions, and they’ve failed to capture all and only those studies that seem appropriately labeled “ethics research.”

In the 1990s, when empirical research in bioethics first started growing popular, debates about its value revolved around the question of whether ethics should be determined by the opinion of the majority. It should not, of course. But this unfortunate debate arose from two sources. First, too much of the early empirical research in bioethics (my own included) consisted of polling or attitude surveys. Second, the real value of attitude research was misunderstood. It cannot determine what is right when we are dealing with matters of principle; but many policy issues involve competing principles and lack one clearly right answer. In such cases especially, attitude research can give individuals a voice, particularly vulnerable individuals who are often not heard. Additionally, attitude research—especially when methods are rich (e.g., focus groups or qualitative interviews)—may uncover valid ethical concerns and problems that policy makers have failed to recognize and include in their calculus. Finally, attitude research may give policy makers a sense of what might fly. All three of these benefits can be observed in a simple example. Survey and focus group research has consistently found that African-Americans are nearly 3 times as likely as Caucasians to fear that, if they sign their organ donor cards, physicians will not try to save their lives. From this research with a frequently ignored segment of society, we may glean that restoring trust is essential to any good and successful organ transplantation policy.

In any case, it would be a mistake to reduce research in the area of bioethics to polling or attitude research—even if we wish to focus on social science research. Here are a few examples of findings from robust studies with interesting designs.

-       It was long assumed that people with schizophrenia were incapable of making their own decisions about whether to enroll in a research study. Yet social science research over the past 2 decades has found that most persons with schizophrenia retain the capacity to make such decisions, and when their comprehension is less than optimal, simple educational interventions often suffice to provide adequate comprehension of information.
-       Many physicians have claimed that having a former college cheerleader (drug rep) buy lunch for their department (sponsor a continuing education program) could never change the way they prescribe medications to their patients. Yet studies of physician behavior have shown the opposite. How is this possible when many physicians earnestly doubt they can be bought for the price of a sandwich? Because many of the psychological processes involved operate subconsciously (e.g., the tendency toward reciprocity when given a gift).
-       Some claim that we should not address the shortage of primary care physicians in the US by expanding the number of advanced practice (AP) nurses. They argue that AP nurses will not serve patients as well given that they have less scientific and clinical training, and that most patients would prefer to see a physician. Yet, reviews of outcome studies have generally found that patients’ health outcomes are typically as good and their satisfaction is frequently better with AP nurses. While the quality of some of these studies has been questioned (perhaps legitimately), physicians have yet to produce studies that show the superiority of primary physician care over AP nurse care.

Who thinks our deliberations on these ethical and policy questions would be as sound without such data? Aristotle believed that good ethics had to be grounded in experience; social science research can collect the experience of many and share it in ways that enhance the practical wisdom of individuals.


* Elsewhere I’ve argued that life is both absurd and meaningful. That life is absurd seems obvious to me; that it is meaningful requires argumentation, maybe even faith. See “Absurdity, God, and the Sad Chimps We Are.” 

Thursday, October 14, 2010

What do Frenchmen and research methods have in common?

When I first met my wife, I bragged about my pure French lineage: All 4 of my grandparents came from Quebec. She responded, "My father says there are no pure Frenchmen." Either way you read it, she's right.

Similarly, there are no pure methods in the social sciences, only families of methods that ought to be adapted to a specific project. I'll illustrate this with the example of a Delphi survey I conducted with Jeff Dueker in 2007: Teaching and Assessing the Responsible Conduct of Research: A Delphi Panel Report.

Delphi surveys (or Delphi panels) are unlike most surveys: They do not aim to generate knowledge (e.g., about attitudes); rather, they aim to generate a consensus. We wanted to achieve a consensus on what should be taught and assessed in those responsible conduct of research (RCR) courses that trainees and other researchers are often required to take. The US Office of Research Integrity had earlier identified a few "core areas" for instruction, but there was considerable debate on what should be taught (e.g., are there universally agreed-upon standards for authorship across fields?), what should be assessed (e.g., should we assess commitment to values?), and above all, what the fundamental aims of RCR education should be (e.g., to foster compliance with regulations, ethical problem-solving skills, or knowledge of abstract ethical principles?).

Delphi surveys generally involve forming a group of "panelists" and administering several rounds of a questionnaire to the panelists individually. For example, our round 1 questionnaire asked open-ended questions such as "What should be the overarching goals of RCR instruction?" The round 2 questionnaire took all panelists' responses, reduced them to clear and non-redundant goal statements, and asked panelists to rate the importance of the goals using a 5-point Likert-type scale. Round 3 presented a shorter list of just those items that were endorsed as "important" or "very important" by two-thirds of the group during round 2. We repeated the process to generate a short list of 9 key objectives for RCR instruction that were endorsed by a group of experts.
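
For readers who want to see the mechanics, here is a minimal sketch in Python of the round-2-to-round-3 filtering step described above. The goal statements and ratings are fabricated for illustration; only the two-thirds threshold follows our actual procedure.

    # Round 2 -> round 3: keep items rated 4 ("important") or 5 ("very
    # important") by at least two-thirds of panelists (fabricated data).
    ratings = {
        "Foster ethical problem-solving skills": [5, 4, 4, 5, 3, 5, 4],
        "Memorize the text of federal regulations": [2, 3, 1, 4, 2, 2, 3],
    }

    for item, scores in ratings.items():
        endorsed = sum(s >= 4 for s in scores) / len(scores)
        mean = sum(scores) / len(scores)
        keep = endorsed >= 2 / 3
        print(f"{item}: {endorsed:.0%} endorsed, mean {mean:.1f}, keep={keep}")

Reporting the percent endorsing alongside the mean keeps the consensus criterion transparent; a middling mean can hide a deeply split panel.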

I was familiar with Delphi surveys others had conducted. But when I did a literature review on the method itself, I could not find clear and consistent guidelines. How many panelists do you need? What should qualify someone as a panelist? How many rounds will suffice? What constitutes support for an item (e.g., "agree" or "strongly agree")? What constitutes a consensus (e.g., two-thirds agree? three-quarters strongly agree?) What do you do with the results from panelists who participate in some but not all rounds? How much liberty can you take when condensing, organizing, and presenting the results of open-ended items? Should we report mean Likert values (e.g., item 9 had a mean score of 4.2) or rather the percent of experts endorsing an item with a value of 4 (important) or 5 (very important)?

What we found were articles that described in general terms how Delphi surveys work and how they have been applied in marketing studies or in establishing organizational goals. Even when we found studies that used a Delphi panel for curriculum development, their approach would not meet our needs. We decided to use a large panel of experts (n=43) clustered into 4 expertise groups. We decided we needed an online format to accommodate disparate locations and schedules. We decided to start with open-ended questions even though this would mean more work. (It would make clear that the panelists, not the project directors, generated the content of the recommendations.)

Rarely will you find a manual that provides clear guidance on how to apply a method to your study questions. All methods need to be adapted. Naturally, not all adaptations are kosher; at some point, a method may be changed so radically that it becomes invalid or at least loses its membership in a family of methods like the Delphi survey. In modifying a method, it is essential to have a good rationale for your adaptations, to describe your method accurately in publications, to explain the rationale for its unusual features, and to be aware of the limitations of whatever you decide and discuss them frankly.

Conclusion: You won't find a guidebook on how to design YOUR study. You will need to apply any method critically, adapting it to your research questions, budget, participant group, etc. If you cannot do this well, you should collaborate with a creative and knowledgeable methodologist.