Health Care Ethics
Health care ethics is the field of applied ethics concerned with the vast array of moral decision-making situations that arise in the practice of medicine, in addition to the procedures and policies that are designed to guide such practice. Of all of the aspects of the human body, and of a human life, that are essential to one’s well-being, none is more important than one’s health. Advancements in medical knowledge and in medical technologies bring with them new and important moral issues, often as a result of advances in reproductive and genetic knowledge as well as innovations in reproductive and genetic technologies. Other areas of moral concern include the clinical relationship between the health care professional and the patient; biomedical and behavioral human subject research; the harvesting and transplantation of human organs; euthanasia; abortion; and the allocation of health care services. Essential to the comprehension of the moral issues that arise in the provision of health care is an understanding of the most important ethical principles and methods of moral decision-making that are applicable to such issues and that serve to guide our moral decisions. To the degree that moral issues concerning health care can be clarified, and thereby better understood, the quality of health care, as both practiced and received, should be enhanced.
Table of Contents
- A Brief History of Health Care Ethics
- Methods of Moral Decision-Making
- Ethical Principles
- Ethical Issues
- The Health Care Professional-Patient Relationship
- The Question of a Right to Life
- Human Subject Research
- Reproductive and Genetic Technologies
- The Allocation of Health Care Resources
- Health Care Organization Ethics Committees
- References and Further Reading
While the term “medical care” designates the intention to identify and to understand disease states in order to be able to diagnose and treat patients who might suffer from them, the term “health care” has a broader application to include not only what is entailed by medical care but also considerations that, while not medical, nevertheless exercise a decided effect on the health status of people. Thus, not only are bacteria and viruses (which are in the purview of medicine) of concern in the practice of health care, so too are cultural, societal, economic, educational, and legislative factors to the extent to which they have an impact, positive or negative, on the health status of any of the members of one’s society. For this reason, health care workers include not only professional clinicians (for example, physicians, nurses, medical technicians, and many others) but also social workers, members of the clergy, medical facility volunteers, to name just a few, and, in an extended sense, even employers, educators, legislators, and others.
For a person to be considered healthy, in the strictest sense of the term, is for that person to exhibit a state of well-being free of any effects of disease, illness, or injury, whether concerning the person’s physiological, psychological, mental, or emotional existence. It is fair to say that no one could ever achieve this level of “complete health.” Consequently, the health status of any given person, at any given time, is best understood in terms of the degree to which that person’s health status can be said to approximate this ideal standard of health.
In the preamble to the Constitution of the World Health Organization, “health” is defined as: “…a state of complete physical, mental and social well-being[,] and not merely the absence of disease or infirmity.” This definition of “health” can also be said to embrace an ideal, but it does so by representing health as a positive, rather than as a negative, concept.
Additional distinctions concerning definitions of “health” include that between what is sometimes referred to as a natural, or biological, view of health (and of disease) as contrasted with a socially constructed view. The former view entails that health, for all natural organisms (including the biological status of human beings), is to be correlated with the degree to which the natural functions of the organism comport with its natural evolutionary design. On this interpretation, disease is to be correlated with any malfunctions, that is, any deviations of the organism’s natural functions from what would be expected given its natural evolutionary design. The adoption of this view of health by health care practitioners results in identifiable standards, or ranges, of “normalcy” concerning health care diagnostics, such as blood pressure, cholesterol levels, and so forth, the upshot of which is that any deviation from these norms is sufficient to pronounce the patient “unhealthy,” if not “diseased.” By contrast, the socially constructed view of health is determined by some social value(s) such that any deviation from the socially accepted norm, or average, for our species is considered to be a disease or a disability if the deviation is viewed as a disvalue, that is, as something to be avoided. Consider, for example, whether homosexuality is to be seen as a disease state, specifically, as a mental disorder, as the American Psychological Association officially held it to be throughout much of the 20th century, until it reversed its position in 1980. Based on the Association’s own explanations of each of these definitional decisions, it would appear that its former official position was value-based in a way that its later position corrected (Tong, 2012).
Similar distinctions concerning the concept of health, and its resultant definition, include the representation of health as “normative,” as contrasted with a “normal biological functioning” representation. Anita Silvers argues that organizations that set public health policy by their very nature incorporate (even if unconsciously) any of a number of social dimensions of health in their official definitions of “health.” Of course, doing so has practical effects that typically serve the interests of the organization in question. A definition of “health” that uses a limited standard, one that might be appropriate for some segments of the human population to which it is applied but that necessarily fails to reflect other segments of that same population, might render people in these latter segments “pathological,” literally, by definition, despite the fact that under a more objective definition of “health” they would be deemed members of the healthy population.
Moreover, some such organizations implement classification systems that allow for both biological and social considerations to measure health outcomes for the purpose of determining the effectiveness of health care programs when compared to each other. Such comparisons are then used to decide, for example, what type of disease prevention measure(s) to implement or which particular sub-populations get selected for curative measures. According to Silvers, whatever the consensus in any particular society is, concerning what the word “health” designates, determines the health care services to be provided as well as the specific beneficiaries of such services. This conflation of normative and biological factors of consideration in the conceptualization and the ultimate definition of “health” by these organizations that set public health policy leads one to believe that such a definition is exclusively biological, that is, objective, and thereby to be accepted without question (Silvers, 2012).
Michael Boylan surveys a good number and variety of what he calls recent popular paradigms concerning the concept of health, as follows: 1) functional approaches to health, including “objectivism,” as associated with an “uncompromised lifespan,” and the “functionalism/dysfunctionalism” debate; 2) the public health approach to health; and 3) subjectivist approaches to health, which do not restrict themselves to physiological health but focus more broadly on human “well-being.” After demonstrating respects in which each of these approaches to our understanding of health fails, he proposes a “self-fulfillment approach” to human health. Central to this approach, and as a first-order metaethical theory, is the “personal worldview imperative,” which requires each of us to develop a worldview that is both comprehensive and internally coherent but that is also good and one that we would strive to actualize in our daily lives. In other words, according to this imperative, such a worldview must 1) be comprehensive, 2) be internally coherent, 3) connect to a normative ethical theory, and 4) be, at a minimum, aspirational and acted upon. This personal worldview imperative is designed as an independent and objective means of assessment in order to avoid some of the inherent flaws of the well-being approach. In conjunction with what Boylan recommends as a “personal worldview of cooperation” (as a more holistic way of viewing the world), this personal worldview imperative would, arguably, constitute the most comprehensive and objective approach to our understanding of human health (Boylan, 2004 and Boylan, 2012).
Despite the fact that “health care” is a term that reflects the more recent phenomenon of the practice of health care as expanded beyond the practice of medical care, ethical concerns related to health care can be traced back to the beginnings of medical care. While this would take us back to primitive cultures at the time of the origin of human life as we know it, the first known evidence of ethical concerns in the practice of medicine in Western cultures is what has been handed down as the Corpus Hippocraticum, which is a compilation of writings by a number of authors, including a physician known as Hippocrates, over at least a few centuries, beginning in the 5th century, B.C.E., and which includes what has come to be known as the Oath of Hippocrates. According to these authors, medical care should be practiced in such a way as to diminish the severity of the suffering that illness and disease bring in their wake, and the physician should be acutely aware of the limitations concerning the practical art of medicine and refrain from any attempt to go beyond such limitations accordingly. The Oath of Hippocrates includes explicit prohibitions against both abortion and euthanasia but includes an equally explicit endorsement of an obligation of confidentiality concerning the personal information of the patient.
Additional codes of ethics concerning the practice of medicine have also come down to us: from the 1st century A.D., known as the Oath of Initiation, attributed to Caraka, an Indian physician; from (likely) the 6th century A.D., known as the Oath of Asaph, written by Asaph Judaeus, a Hebrew physician from Mesopotamia; from the 10th century A.D., known as Advice to a Physician, written by Haly Abbas (Ahwazi), a Persian physician; from the 12th century A.D., known as the “Prayer of Moses Maimonides,” Maimonides being a Jewish physician in Egypt; from the 17th century A.D., known as the Five Commandments and Ten Requirements, written by Chen Shih-kung, a Chinese physician; from the 18th century A.D., known as A Physician’s Ethical Duties, written by Mohamad Hosin Aghili, a Persian; and many more.
In 1803, Thomas Percival in England published his Medical Ethics: A Code of Institutes and Precepts, Adapted to the Professional Conduct of Physicians and Surgeons, which included professional duties owed by physicians in private or general practice to their patients. The founding of the American Medical Association in 1847 was the occasion for the immediate formulation of standards for an education in medicine and for a code of ethics for practicing physicians. This Code of 1847 included not only “duties of physicians to their patients” but also “obligations of patients to their physicians,” and not only “duties of the profession to the public” but also “obligations of the public to physicians.” From the 19th century to well into the 20th century, societies or associations of medical doctors formulated and published their own codes of ethics for the practice of medicine.
A good number of medical codes of ethics were formulated and adopted by national and international medical associations during the middle part of the 20th century. In an effort to modernize the Oath of Hippocrates for practical application, in 1948 the World Medical Association adopted the Declaration of Geneva, followed the very next year by its adoption of the International Code of Medical Ethics. The former included, in addition to an enumeration of a physician’s moral obligations to one’s patients, an explicit commitment to the humanitarian goals of medicine. Since then, virtually every professional occupation that is health care-oriented in the U. S. has established at least one association for its membership and a code of professional ethics. In addition to the American Medical Association, there is the American Nurses Association, the American Hospital Association, the National Association of Social Workers, and many others.
Methods of moral decision-making are concerned, in a variety of ways, not only with moral decision-making but also with the people who make such decisions. Some such methods focus on the actions that result from the choices that are made in moral decision-making situations in order to determine which of such actions are right, or morally correct, and which of such actions are wrong, or morally incorrect. Other methods of moral decision-making concentrate on the persons who commit actions in moral decision-making situations (that is, the agents) in order to determine those whose character is good, or morally praiseworthy, and those whose character is bad, or morally condemnable. The theorists of such methods deal with such questions as: Of all of the available options in a particular moral decision-making situation, which is the morally correct one to choose?; What are the particular virtues of character that, in conjunction, constitute a good person?; Are there certain human actions that, without exception, are always morally incorrect?; What is the meaning of the language used in specific instances of moral discourse, whether practical or theoretical?; What is meant by a specific moral concept?; and many others.
What follows is a look at some of the most influential methods of moral decision-making that have been offered by proponents of such methods and that have been applied to ethical issues in the field of health care.
While not the first of the Ancient Greeks to articulate in writing a theory of virtue ethics, Aristotle’s version of virtue ethics, as it has come down to us, has been one of the most influential versions, if not the most influential version of all. According to Aristotle, a person’s character is the determinative factor in discerning the extent to which that person is a good person. To the extent to which a person’s character is reflective of the moral virtues, to that same extent is that person a good person. Moral virtues include but would not be limited to courage, temperance, compassion, generosity, honesty, and justice. The person in whom these moral virtues are to be found as steadfast dispositions can be relied on to exhibit a good character and thereby to commit morally correct actions in moral decision-making situations. For example, a courageous soldier will neither run headlong into battle in the belief that “war is glory” nor run away from the battle out of fear of being injured or killed. The former soldier has chosen to be rash during the heat of battle while the latter soldier has chosen to be a coward. By contrast, the courageous soldier holds his position on the battlefield and chooses to fight when he is ordered to do so. The fundamental difference between the courageous soldier on the one hand and the rash and cowardly soldiers on the other is that, of the three, only the courageous soldier actually knows why he is on the battlefield and chooses to do his duty to defend his comrades, his country, and his family while recognizing, at the same time, the realistic possibility that he might be injured, or even killed, on the battlefield (Aristotle, 1985).
Virtue ethics is directly applicable to health care ethics in that, traditionally, health care professionals have been expected to exhibit at least some of the moral virtues, not the least of which are compassion and honesty. To the extent that the possession of such virtues is a part of one’s character, such a health care professional can be relied on to commit morally correct actions in moral decision-making situations involving the practice of health care.
The preeminent proponent of utilitarianism as an ethical theory in the 19th century was John Stuart Mill. As a normative ethical theorist, Mill articulated and defended a theory of morality that was designed to prescribe moral behavior for all of humankind. According to Mill’s utilitarian theory of morality, human actions, which are committed in moral decision-making situations, are determined to be morally correct to the extent to which they, on balance, promote more happiness (as much as possible) than unhappiness (as little as possible) for everyone who is affected by such actions. Conversely, human actions, which are committed in moral decision-making situations, are determined to be morally incorrect to the extent to which they, on balance, produce more unhappiness than happiness for those who are affected by such actions. Mill hastens to acknowledge that the agent in the moral decision-making situation must count oneself as no more, or less, important than anyone else in the utilitarian calculation of happiness and/or unhappiness.
However, unlike virtually all of his utilitarian predecessors, Mill offered a version of utilitarian ethics that was designed to accommodate many, if not most, of the same ethical concerns that Aristotle had expressed in his version of virtue ethics. In other words, even after the utilitarian calculation of the ratio of happiness to unhappiness in a particular moral decision-making situation identifies an option as morally correct, an additional calculation might be in order: one that determines the ratio of happiness to unhappiness in the event that such an option, in future like cases, would consistently be deemed the appropriate one. If this latter calculation would likely result in a ratio of unhappiness over happiness, then the option in the original case might be rejected, despite its having been recommended by the utilitarian calculation for the original moral decision-making situation. For example, in a moral decision-making situation in which an employed blue-collar worker witnesses a homeless person dropping a twenty-dollar bill on the sidewalk, the utilitarian calculation would recommend, as the morally correct option, returning the twenty-dollar bill to the homeless person rather than keeping it for oneself. However, consider the same moral decision-making situation except that the twenty-dollar bill is dropped not by a homeless person but by a universally known and easily recognizable multi-billionaire. Even if the utilitarian calculation were to determine that the blue-collar worker should keep the twenty-dollar bill, the additional calculation would involve the question of the likely negative effect of such an action, if repeated in a habitual way, on the agent’s own character over a period of time.
Another possible reason to reject an otherwise recommended option, based on the utilitarian calculation, would be if the same option were to be repeatedly chosen routinely by others in society, as influenced by the action in the original case in question. To the extent that the action in question, if repeated routinely by others in society, would result in unfavorable consequences for the society as a whole, that is, it would run counter to the maintenance of social utility, then the agent in the original moral decision-making situation in which this action was an option should choose to refrain from committing this action. For example, if a prominent citizen of a small town, upon learning that the local community bank was having financial problems due to an unusually bad economy, decided to withdraw all of the money that he had deposited in his accounts with this bank, the utilitarian calculation would, presumably, sanction such an action. However, precisely because this man is a well-known citizen of this small town, it can be predicted, reasonably, that word of his bank withdrawal would spread throughout the town and would likely cause many, if not most, of his fellow citizens to follow suit. The problem is that if the vast majority of the townspeople did follow suit, then the bank would fail, and everyone in this town would be worse off than before. In other words, this would serve to undermine social utility, and so, the original action would not be recommended by the utilitarian calculation.
As applicable to health care ethics, utilitarian considerations have become fairly standard for many health care professionals over the past several generations. It is not at all uncommon for decisions to be made, by health care professionals at all levels of health care, on the basis of what is in the best interest of a particular collectivity of patients. For example, officials at the U. S. Centers for Disease Control (CDC) learn of an outbreak of a serious, potentially fatal communicable disease. These officials decide to quarantine hundreds of people in the geographic area in which the outbreak occurred and to mandate that health care professionals across the country who diagnose patients with this same communicable disease must not only take similar measures but also must report the names and other personal information of the affected patients to the CDC. These decisions are, themselves, decisions of moral (if not also legal) decision-making, and these decisions raise additional moral issues. At any rate, the fundamental reason for taking such measures, under the specified circumstances, is the protection of the health of the citizens in those areas where the outbreaks occurred, but, ultimately, such measures are taken for the protection of the health of American citizens in general, that is, to promote social utility (Mill, 1861).
A deontological normative ethical theory is one according to which human actions are evaluated in accordance with principles of obligation, or duty. The most influential of such theories is that of Immanuel Kant, whose categorical imperative, as his fundamental principle of morality, was first formulated as, “Act only on that maxim whereby you can, at the same time, will that it should become a universal law.” In application to any particular moral decision-making situation, the agent is being asked to entertain the question of whether the action that one has chosen to commit is sufficiently morally acceptable to be sanctioned by a maxim, or general principle. In other words, the agent is asked to attempt to universalize the maxim of one’s chosen action such that all rational beings would be morally allowed to commit the same action in relevantly similar circumstances. If this attempt to universalize the maxim were to result in a contradiction, such a contradiction would dictate that the maxim in question cannot be universalized; and if the maxim cannot be universalized, then one ought not to commit the action. Kant asks his reader to consider the case of a man who stands in need of a loan of money but who also knows well that he will not be able to repay such a loan in the appropriate amount of time. The maxim of his action would be: Whenever I find myself in need of a loan of money but know that I am unable to repay it, I shall deceitfully promise to repay the loan in order to obtain the money. To attempt to universalize this maxim, this man would need to entertain a future course of events in which all rational beings would also routinely attempt to act on this same maxim whenever they might find themselves in relevantly similar circumstances. However, as a rational being, this man would come to realize that this maxim could not be universalized because to attempt to do so would result in a contradiction. 
For, if such an action were to become a routine practice, on the part of all rational beings in relevantly similar circumstances as those in this man’s case, then those who loan money (either as loan officers for financial institutions or as private financiers themselves) would almost immediately wise up to the fact that people are routinely attempting to borrow money on deceitful promises, that is, with no intention to repay such loans. Thus, the loaning of money would, at least temporarily, come to a halt. As Kant points out to his reader, because of the contradiction involved in attempting to universalize this maxim, neither the promise (deceitful as it is) itself nor the end to be achieved by the promise (that is, the loan of money) would be realizable. So, the fact that a contradiction results from the attempt to universalize the maxim reveals the impossibility of the maxim being able to be universalized, and because the maxim cannot be universalized, then the man ought not to commit the action.
Kant offered another formulation of the same categorical imperative: “Act in such a way that you treat humanity, whether in your own person or in the person of any other, never as a means only, but always as an end.” With this formulation, Kant is calling attention to his belief that all rational beings are capable of exhibiting a “good will,” which he claims is the only thing in the universe that has intrinsic value, that is, inherent value, and because a good will can only be found in rational beings, they have a singular type of dignity that must always be respected. In application to any specific moral decision-making situation, the agent is being asked to respect rational beings as valuable in, and for, themselves, or as ends in themselves, and, thereby, to commit to the principle to never treat a person (either oneself or any other) as merely a means to some other end. To apply this formulation of the categorical imperative to the same example as before is to realize that, once again, one ought not to make a deceitful promise. For, to make a deceitful promise to repay a loan of money in an effort to obtain such a loan is to treat the person to whom such a promise is made as a means only to the end of obtaining the money. To be faithful to this formulation of the categorical imperative is to never commit any action that treats any person as a means only to some other end (Kant, 1989).
Deontological theories, in general, and Kant’s categorical imperative (in either of these two formulations), in particular, can be applied to any number of issues in the practice of health care. For example, if a patient who had been prescribed an opioid for only a short period of time, post-surgery, were to contemplate whether to feign the continued experience of pain during the follow-up visit with the surgeon in an effort to obtain a new prescription for the same opioid in order to abet the opioid addiction of a friend, then the patient would be attempting to treat the surgeon as a means only to another end. Because any attempt to universalize the maxim of such an action would result in a contradiction, Kant’s categorical imperative would allow one to see that such an action ought not to be taken.
One approach to health care ethics was actually developed as a result of its originators’ belief that utilitarian and deontological ethical theories, especially, were inadequate to deal effectively with the issues that had arisen in medical ethics in particular. Tom Beauchamp and James Childress introduced their “four principle approach” to health care ethics, sometimes referred to as “principlism,” in the final quarter of the 20th century. Central to their approach are the following four ethical principles: 1) respect for autonomy, 2) nonmaleficence, 3) beneficence, and 4) justice. These four ethical principles, in conjunction with what are identified as moral rules and moral virtues, together with moral rights and emotions, provide a framework for what they call the “common morality.” This common morality is put forward as the array of moral norms that are acknowledged by all people who take seriously the importance of morality, regardless of cultural distinctions and throughout human history, and so are said to be universal. However, given the abstract nature of these ethical principles, it is necessary to instantiate them with sufficient content so that they can be practically applied to particular cases of moral decision-making. This is what is referred to as an application of the method of specification, which is designed to restrict the range and the scope of the ethical principle in question. In addition, each ethical principle, again in order to be practically applicable, needs to be subjected to another methodological procedure, namely that of balancing, according to which the principle, as a moral norm competing with others, must be deemed of sufficient weight or strength, as compared to its competitors, in order to be eligible for application to a particular case of moral decision-making (Beauchamp and Childress, 2009).
None of the four ethical principles has been designated as enjoying superiority over the others; in fact, it is explicitly acknowledged that any of the four principles can, and would, reasonably be expected to conflict with any other. Because of this, it has been pointed out that this method of moral decision-making is subject to the problem of having no means by which to adjudicate such conflicts. Moreover, to the extent that, in practice, the application of principlism can be reduced to a mere checklist of ethical considerations, it is not sufficiently nuanced to be, ultimately, effective (Gert and Clouser, 1990).
Another method of moral decision-making that explicitly rejects the use of any ethical theory or any set of ethical principles is known as “casuistry.” Although not a new method of moral decision-making, it was re-introduced by Albert Jonsen and Stephen Toulmin in the last quarter of the 20th century within the context of ethical issues in the field of health care. This method of moral decision-making is not unlike what is normally referred to in the Western system of jurisprudence as “case law,” which makes almost exclusive use of what are considered to be “precedent-setting cases” from the past in an effort to decide the present case. In other words, like the method of decision-making that is used by judges who must render decisions in the law, casuists insist that the best way in which to make decisions on specific cases as they arise in the field of health care, and which raise significant moral issues, is to use prior cases that have come to be viewed as paradigmatic, if not precedent setting, in order to serve as benchmarks for analogical reasoning concerning the new case in question. Suppose, for example, that a new case were to arise that raised the moral issue of how the health care professionals of a hospice organization should treat a woman who is five months into her pregnancy but who also has been diagnosed with stage four pancreatic cancer and has a life expectancy of between two and three months. The casuist would advise that the moral decisions concerning the treatment of this woman be made by seeking out as many prior cases as possible that exhibit relevantly similar salient characteristics and that raise as many of the same moral issues as possible.
To render moral assessments of how these previous cases were handled (some more morally acceptably than others; most instructive would be at least one case whose decisions stand out as either obviously morally correct or blatantly morally objectionable) is to establish guideposts for moral decision-making in the present case under consideration (Jonsen and Toulmin, 1988).
According to the proponents of casuistry, normative ethical theories and ethical principles can take moral decision-making only so far because, first, the abstract nature of such theories and principles is such that they fail to adequately accommodate the particular details of the cases to which they are applied, and second, there will always be some cases that serve to confound them, either by failure of the theory or principle to be practically applicable or by suggesting an action that is found to be morally unsatisfying in some way. However, casuistry, as a method of moral decision-making, seems to make use of various sorts of moral norms or rules, if only in a subconscious or nonconscious way. For to reason analogically from a paradigmatic or precedent-setting past case to a current case that exhibits even a good number of relevantly similar salient characteristics and a good number of the same moral issues is to base one’s judgment on some norm or rule that serves as the moral standard by which to draw out the points of agreement or disagreement between the past and the present cases. Furthermore, this moral norm or rule, itself, will almost certainly turn out to be reflective of popular societal or cultural bias, because the methodology consciously refrains from the use of normative ethical theories and ethical principles, both of which carry with them standards of objectivity (Beauchamp and Childress, 2009).
Not unlike the proponents of casuistry, the proponents of what has come to be called “feminist ethics” shun the use of ethical theories; moreover, in a distinctive departure from traditional methodological approaches to ethics in general, and to health care ethics in particular, they are skeptical of traditional ethical concepts, including the concept of autonomy. In an effort to focus more particularly on issues concerning gender equality, including the social and political oppression of women as well as the suppression of women’s voices on social, political, and ethical issues, the concept of autonomy, in an abstract sense, is thought to be less meaningful for women who are socially and politically oppressed, by virtue of their gender, than for men. For example, even though, theoretically and even legally, women, by a particular point in time during the first half of the 20th century, were eligible for admission to medical schools should they have chosen to exercise their autonomous right to apply for such admission, in practice and in fact, both the social conditioning of women and the gender bias of the men who administered medical schools, and who decided which applicants would satisfy the requirements for admission, ensured that medical schools would graduate, almost exclusively, men (with only single-digit exceptions in America). The point is that the concept of autonomy, in its theoretical sense, is too abstract to have had any practical application to women, in this case, whose eligibility for acceptance to medical schools was denied on the basis of gender. Rather, the social realities of the day-to-day existence of women, within their social, political, and cultural confinements, must be addressed in such a way that the specific circumstances of a particular woman’s relationships with other people, in all of their varieties of dependence, if not interdependence, are taken into account.
Thus, in addition to this concept of relational autonomy, concepts of responsibility and compassion, as well as those of freedom and equality, are essential to the majority of the proponents of feminist ethics (Holmes and Purdy, 1992; Sherwin, 1994).
While among those who consider themselves proponents of feminist ethics there exists a range of perspectives concerning not only some of the most important ethical issues within this school of thought but also its very nature, agreement can be found in the need to reflect on both the oppression and the suppression of women that have been inherent in nearly every culture throughout human history.
Yet another method of moral decision-making, one given birth by feminist ethics and sometimes thought of as a sub-field of it, but which in the early 21st century came to be seen as a methodology in its own right, is usually referred to as the ethics of care. Like the proponents of feminist ethics, the proponents of the ethics of care have concluded that any methodology of moral decision-making that is based on abstract theories or principles, rights or duties, or even objective decision-making turns out to be unsatisfying in terms of interacting with others in moral decision-making situations. Instead, the focus should be, again, not unlike that of the proponents of feminist ethics, on the specific circumstances of the personal relationships of individual people, with particular attention paid to compassion, sympathy/empathy, and a sincere concern for caring for others with whom one shares any intimate relationship. The upshot of this methodology is that “caring” is a necessary constituent of all moral decision-making but has been absent from the traditional methodologies of moral decision-making. For traditional normative ethical theories render objectivity an essential ingredient in moral decision-making but, in so doing, leave no place for the care (that is, the compassion, sympathy/empathy, and kindness) that is necessary for our inter-personal relationships to be morally successful (Held, 2006).
Nursing, as a profession, has been, traditionally, a profession of the nurturing of, as well as the caring for, the patient. Until the latter part of the 20th century, nursing was also, historically, a profession for women. It should come as no surprise, then, that an “ethics of care” approach to moral decision-making would be embraced by nurses, as well as by women in other health care professions, up to, and including, the profession of medical doctors. (In no way is this to suggest that this ethics of care would, either intentionally or in practice, preclude men from identifying with it also.) In the early 21st century, this approach to patient care in medical facilities, as well as in allied health care facilities, became almost mainstream in many societies across the globe, with accrediting agencies offering their respective “seals of approval” for those medical organizations that are successful in treating patients holistically. It should go without saying that many are the health care professionals who would choose to nurture, and to care for, their own patients in this same way, with or without the existence of any such accrediting agencies (Kuhse, 1997).
Ethical Principles
In addition to the application of a variety of methods of moral decision-making to the practice of health care, ethical principles are also applicable, though not procedurally in the same way as in the method of moral decision-making identified above as principlism. Regardless of the particular normative ethical theory or other method of moral decision-making, with its moral standard for action, that one might choose to apply to moral decision-making situations, and even in the absence of any such theory or standard being applied in the day-to-day practice of professionals in the field of health care, ethical principles serve to guide one’s actions in moral decision-making situations by identifying those important and relevant considerations that must be taken into account in order for one to be able to think about such situations in a serious way. In other words, ethical principles operate on a different level of moral decision-making than do normative ethical theories or other methods of moral decision-making; nonetheless, ethical principles, like normative ethical theories and these other methods, are prescriptive, that is, they offer recommendations for moral action. In theory, ethical principles can be used as one measure of how effective normative ethical theories are in their application to moral decision-making situations. For, any proposed normative ethical theory that is incapable of accommodating the requirements of the most fundamental ethical principles can be called into question on that very basis.
Patient autonomy, in the clinical context, is the moral right on the part of the patient to self-determination concerning one’s own health care. Conversely, whenever a health care professional restricts, or otherwise impedes, a patient’s freedom to determine what is done, by way of therapeutic measures, to oneself, and attempts to justify such an intrusion by reasons exclusively related to the well-being, or needs, of that patient, that health care professional can be construed to have acted paternalistically. In practice, autonomy, on the part of the patient, and paternalism, on the part of the health care professional, are mutually exclusive: to the extent that one of the two is present in the decision-making, and its attendant actions, within the clinical relationship of the patient and the health care professional, to that same extent is the other absent. In other words, for the health care professional to act paternalistically is for that same health care professional to have failed to respect the patient’s autonomy, and conversely, for the health care professional to respect the patient’s autonomy is for that same health care professional to have refrained from acting paternalistically.
For example, if a physician were to offer a patient only one recommendation as a remedy for a particular medical malady, when, in fact, the physician knows of more than one prospective remedy (even if the different prospective remedies would be expected to address the medical issue in question to varying degrees and/or might have distinct reputations for varying degrees of success), then the physician in question would be said to have acted paternalistically and, thereby, to have failed to respect that patient’s moral right to autonomous decision-making. In such a case, the physician might hasten to call attention to the fact that, in the typical clinical situation, the physician’s knowledge of the medical issue in question is both qualitatively and quantitatively superior to that of the patient. This fact, while not in dispute, fails to change the nature of this physician’s act of paternalism.
Some health care professionals continue to profess their own personal beliefs that patient autonomy is over-rated because, in their own clinical experience, patients continue to make poor decisions concerning what is in the best interest of their own health care. Certainly, this is a realistic concern, and it probably always will be. However, in some cases, the poor decisions, on the part of the patient who has exercised the right to self-determination concerning one’s own health care, can be explained, at least in part, by the fact that the health care professional in question has failed to engage in the necessary amount of “patient education” in an effort to ensure that better quality decisions can be made by the patient. In too many cases, this failure, on the part of the health care professional, is due to a mismatch between the language in which the patient education takes place and the patient’s ability to comprehend language at a certain level of sophistication. That is, not every adult patient has the ability to comprehend medical explanations even if such explanations are cast in the patient’s native tongue and even if the level of comprehension that is necessary for a proper understanding is that of, say, an average high school graduate. The point is that a genuine respect for the patient’s right to autonomous decision-making concerning one’s own health care demands that each and every health care professional make a sincere effort to ascertain the level of language comprehension of each and every patient, and to convey, in language that is understandable by the patient, all of the relevant medical information that is necessary in order for the patient to be able to make, in consultation with the health care professional, better quality health care decisions than might otherwise be the case.
The latter part of the 20th century saw the emergence, and the early 21st century the sustained progression, of the belief, on the part of many, if not most, health care professionals (including physicians), that the patient’s moral right to self-determination concerning their own health care is of fundamental importance to the success of the delivery of health care. This transition, from the practice of health care being extremely paternalistic, with virtually no recognition of the patient’s right to autonomy, to the practice of health care in the 21st century, and especially in Western cultures, being such that patient autonomy is respected by health care professionals in general, as being of prime importance in the clinical context, has been painstakingly incremental. However, a fundamental problem concerning this respect for the patient’s autonomy persists, and this is the problem of the inconsistency with which it is applied. Many health care professionals are the first to sing the praises of the need to respect the patient’s autonomous preferences in their own health care; however, they are all too willing to make exceptions in situations in which they, themselves, are fundamentally opposed to the autonomous decision made by a particular patient. Reasons for these so-called exceptional cases vary from cultural or religious differences between the health care professional, on the one hand, and the patient, on the other, to the patient in question being a close relative, or friend, of the health care professional (even in a clinical situation in which the health care professional has no part in the practice of health care for this close relative or friend). In either of these types of cases (and many like them), these so-called exceptional cases are not exceptional cases at all.
Rather, subjective considerations have taken the place of the more objective considerations on which the health care professional in question normally acts; that is, in every such case, the health care professional is imposing one’s own personal beliefs on the patient (albeit, usually, for the patient’s own good, that is, as an act of paternalism) and, thereby, is failing to actually respect the patient’s autonomy. In the final analysis, the health care professional’s respect for autonomous decision-making on the part of the patient, in order for it to be sincere and objective, does not allow for its adherence only when it is convenient for the health care professional, nor for its suspension when it is inconvenient, again, for the health care professional. On the contrary, for a health care professional to respect a patient’s autonomy is to respect that patient’s autonomous goals and preferences even if the health care professional does not agree with them. At its most fundamental level, a true respect for autonomous decision-making on the part of the patient demands that it be honored, objectively, even in the tough cases.
To act beneficently toward others is to behave in such a way as to “do good” on behalf of, or to benefit, someone other than oneself. To the extent to which health care professionals serve their patients by helping them to maintain or improve their health status, health care professionals can be said, to the same extent, to be acting beneficently toward the patients they serve. In theory, every action performed by a health care professional, in a professional relationship with a patient, can be expected to be guided by the ethical principle of beneficence. Moreover, the respect for patient autonomy and the practice of beneficent medical care can be considered to be mutually complementary. For, it is difficult to imagine a health care professional who is committed to the principle of beneficence, on behalf of one’s patients, without also respecting the right to autonomous decision-making on the part of those same patients.
However, despite the complementary nature of the ethical principle of autonomy and that of beneficence, it is not uncommon for these two ethical principles to conflict with each other. It is possible for a patient’s autonomous preference to appear to conflict with what is in that same patient’s own best interest(s). For example, a young adult patient who has only recently suffered a ruptured appendix (such that it is still early in the progression of pain) might refuse to undergo an appendectomy for the reasons that the patient has never undergone surgery before and claims to be deathly afraid of hospitals. To respect this patient’s autonomy is to allow the patient, inevitably, to die, which, reasonably, is not in the patient’s own best interest. On the other hand, to coerce this patient into agreeing to the appendectomy, and thereby to prevent the patient’s death, would be to fail to respect the patient’s autonomous preference. It is also possible for a patient’s autonomous preference to appear to conflict with the best interest(s) of someone else. For example, a patient who has only recently been diagnosed with a serious sexually transmitted infection (STI) might agree to treatment for this STI only on the condition that the health care professional in question promise to refrain from telling the patient’s spouse about the STI (as the patient’s attempt to invoke the privilege of confidentiality that is considered to be inherent in the health care professional-patient relationship). To respect this patient’s autonomy is to place at risk the health status of the patient’s spouse, at the very least, regardless of whether the patient is provided treatment for this STI.
Such cases of conflict between these two ethical principles would normally be adjudicated according to which right (that is, that of autonomy or that of beneficence) can reasonably, and objectively, be determined to supersede the other in importance. In the former example, the patient, after recovering from the life-saving appendectomy, might be appreciative of the fact that the principle of beneficence was allowed to prevail over the principle of autonomy. In the latter example, the right to know, on the part of the patient’s spouse, of one’s own potential health risks involved in the patient’s having contracted the serious STI in question would allow for the principle of beneficence (concerning another rather than the patient) to take precedence over the principle of autonomy. Of course, many are the occasions on which the principle of respect for autonomy might take precedence over the principle of beneficence. Take, for example, a patient who is similar to the one in the above-mentioned case of a ruptured appendix in that the patient is, once again, deathly afraid of hospitals, but who this time is elderly and has had only one surgery, although a major one. This time the surgery is recommended to remedy a leaky mitral valve in the patient’s heart. If, after a series of bouts of patient education, the cardiologist can reasonably determine that the patient is sufficiently aware of the ramifications of both options (that is, the likelihood that the mitral valve repair would be successful and the equal likelihood that refraining from undergoing this repair would, within a relatively short period of time, result in the patient’s death), then respect for this patient’s autonomous decision to refrain from undergoing this surgical procedure might reasonably be seen as superseding this patient’s right to beneficence, that is, to actually undergo this surgical procedure.
An ethical principle that is typically traced back to the Oath of Hippocrates is to “first, do no harm,” or to refrain from engaging in any acts of maleficence in the clinical context, that is, acts that would result in harm to the patient. Acts of maleficence can be intentional or unintentional, and a large percentage of the latter kind happen as a result of either negligence or ignorance on the part of the health care professional. An example of the former would be a surgeon who fails to exercise due diligence in scrubbing prior to surgery, the result of such negligence being that the surgical patient contracts an infection. An example of the latter would be a primary care physician who fails to scrutinize sufficiently the recent medication history of a patient prior to prescribing a new medication, the result of such ignorance being that the patient suffers a new health issue due to the adverse interaction of the newly prescribed medication with a previously prescribed one that is still being taken by the patient.
Because of the intimate relationship between the principle of nonmaleficence and that of beneficence, it is possible (at least in some cases) to construe the violation of either as a violation of the other. In other words, it might be possible to construe the failure to act in such a way as to benefit someone not only as a violation of the principle of beneficence but also as a violation of the principle of nonmaleficence. Conversely, it might be possible to construe the committing of an action that, reasonably, would be expected to actually cause harm to someone, not only as a violation of the principle of nonmaleficence but also as a violation of the principle of beneficence. To leave a surgical patient under general anesthesia longer than is medically necessary would be an example of the former, and to allow surgery to be performed on a patient by a surgeon who is under the influence of drugs or alcohol to the extent that the surgeon’s skills and judgment have been seriously impaired would be an example of the latter.
Raising the question of whether the principle of nonmaleficence has been violated would also include clinical situations in which it can be determined, objectively, that the potential risks of the recommended treatment option, be it a procedure or a medication, actually outweigh the expected benefits, all things considered. To avoid this possibility, a calculation of the ratio of potential risks to expected benefits (sometimes referred to as a risk-benefit analysis) in the case of both medical procedures and the prescribing of medications is always necessary. For a health care professional to fail to render such a calculation is, at least in theory, to violate the principle of nonmaleficence.
In the clinical context, the ethical principle of justice demands that the delivery of health care be provided in an equitable fashion. As such, justice is not applicable to particular decisions, or their attendant actions, in isolation; rather, the principle of justice is intended to provide the guidance that is necessary to ensure that, considered in conjunction with one another, one’s decisions, and their attendant actions, are consistent each with the others. Consequently, the hallmarks of the concept of justice are fairness and impartiality. In the context of health care, the question of justice is concerned with the degree to which patients are treated in a fair and impartial manner. Justice, as an ethical principle, demands that the actions taken by health care professionals, in their professional relationships with patients, be motivated by a consistent set of standards concerning the relevance of the variety of factors that are taken into consideration for such actions. For example, the recommendation, on the part of a health care professional, of two different primary treatment options for two different patients, each of whom has presented with exactly the same symptoms to approximately the same extent, and between whom there are no other known relevant differences except for one demographic distinction (say, age, gender, or race), would, when the two recommendations are taken together, appear to be unjust.
Of course, it is possible for a health care professional to be the subject of an unsubstantiated and erroneous charge of injustice concerning two, or more, clinical cases that might appear to be relevantly the same. Typically, the reason for such an accusation, should the accusation be inaccurate, is that the accuser is lacking the requisite knowledge of the cases in question in order to be able to determine that, although these two, or more, cases do, indeed, appear to be relevantly similar, in fact, they are not. For example, a physician assistant might prescribe two different antibiotics (one of which has been proven to be highly effective but the other of which has an inconsistent success rate, each for the same medical malady) to two different patients who have been diagnosed with the medical malady in question. Learning of these facts, someone might accuse the physician assistant of being unfair, that is, unjust, in the treatment of these two patients. However, what this accuser does not know is that the patient for whom the less effective antibiotic was prescribed is deathly allergic (that is, subject to anaphylactic shock) to the antibiotic with the higher success rate.
In the final analysis, the ethical principle of justice demands that cases that are relevantly similar be treated the same and that cases that are relevantly different be treated in appropriately distinct ways in recognition of such differences.
Ethical Issues
The practice of every profession reveals ethical issues that are endemic to the professional field in question. The practice of health care is no different. What follows is a look at some of the most pervasive ethical issues that are encountered in the practice of health care.
The Health Care Professional-Patient Relationship
Any ethical issues that arise within the clinical relationship between the health care professional and the patient are of the utmost importance if only because this relationship represents the front line of the provision of health care. The most important element of this relationship is trust on the part of each of its participants. This is why the issues of truth-telling, informed consent, and confidentiality are essential to the success of any relationship between a patient and a health care professional.
The most important value of telling the truth is that, under ordinary circumstances, the recipient of a claim, offered by someone else, has reasonable expectations that the claim is true, and for that reason, will, more often than not, adopt such a claim (it is to be hoped, only after subjecting it to sufficient scrutiny), incorporate it into one’s own belief system, and eventually act on it. To act on this received claim, which, subsequently, has become one’s own belief, is to engage in autonomous decision-making. However, should it turn out that such a belief is objectively inaccurate because the claim (from which this belief was derived) was not true, then the person who is acting on this belief will have had one’s own capacity for autonomous decision-making compromised. True, or genuine, autonomous decision-making is possible only if the beliefs on which such decisions are based are accurate; in other words, any decision that is based on an inaccurate belief (even if the belief is not recognized as such) cannot be a truly autonomous decision. Thus, every person can be said to be under a moral obligation to tell the truth, especially on topics the claims about which are important and relevant to the lives of their recipients. For, in such cases, the recipients of such claims, who choose to accept them, will, eventually, hold them as beliefs, and will act on them in order to pursue what they take to be interests of their own, and, perhaps, too, the interests of others.
To respect another person, as a person, is to respect that other person’s right to autonomous decision-making, especially when such decisions concern their own interests that bear, in important and relevant ways, on the quality of their own lives. For, the quality of one’s life is a pre-requisite for human happiness, and of the entire range of interests that one might identify as essential to one’s own happiness, good health is arguably the most fundamental. Not only can the moral right to autonomy be said to be the most important right of a patient, in a clinical setting, it also can be said to be the foundational right for all of the other rights that a patient can be said to have. In order for a patient to be able to protect one’s own interest in promoting, or regaining, one’s own health, that patient’s moral right to autonomy demands to be respected.
To the extent to which any health care professional, in a professional relationship with a patient, fails to be honest with a patient (concerning that patient’s diagnosis, the recommended treatment options, the identification of realistic potential risks and expected benefits associated with such treatment options, or the patient’s prognosis by virtue of the diagnosis in relationship to each of the recommended treatment options), that patient’s autonomy can be said to have been compromised. If this compromised autonomy were to result in the patient’s inability to protect one’s own interest in promoting, or regaining, one’s own health, then this failure to be honest with the patient would represent a moral failure on the part of the health care professional. For example, a physician who, when asked explicitly by a patient what the potential adverse side-effects of the medication that the physician is in the process of prescribing might be, and who responds in such a way as either to play down the number and severity of such adverse side-effects or to suggest that there are none, can reasonably be considered to have failed one’s patient by having been dishonest. Any attempt, on the part of the physician, to justify such deception as an act of beneficence toward the patient is doomed to failure because, by definition, such deception, resulting from such a motive, would constitute an act of paternalism, that is, an act that would disregard the patient’s right to autonomous decision-making.
Concerns about patient autonomy give rise to the concept of “informed consent.” For, if one believes that the patient, indeed, does have a moral right to self-determination concerning one’s own health care, then it would seem to follow that health care professionals, especially physicians, ought not to prescribe any therapeutic measure in the absence of the patient’s informed consent.
Informed consent is intended to be not only a moral but also a legal safeguard for the respect of the patient’s autonomy. Furthermore, informed consent is designed to promote the welfare of the patient (that is, to ensure the patient’s right to beneficence) and to avoid the causing of any harm to the patient (that is, to ensure the patient’s right to nonmaleficence). In the clinical context, informed consent is a reference to a patient’s agreement to, and approval of, any recommended treatment or procedure that is intended to be of therapeutic value to the patient but only on the condition that the patient has an adequate understanding of all of the most important and relevant information concerning the treatment or procedure in question.
Typically, the concept of “informed consent” arises in the context of a patient (or a patient advocate or patient surrogate) who asserts a right to informed consent; it is usually articulated as the patient’s “right to know” any, and all, relevant information in the therapeutic relationship (usually) with the physician. A patient enters a therapeutic relationship with a physician either in an effort to maintain one’s current status of optimal health (perhaps, with an annual visit for a physiological examination in conjunction with a series of laboratory, or other, diagnostic tests) or in an effort to regain the lost status of optimal health that the patient might have previously enjoyed. To fail to respect the patient’s right to informed consent, by refraining from providing any specific important and relevant information to the patient, is to fail to uphold either the principle of beneficence or the principle of nonmaleficence, if not both.
For example, a physician might choose to knowingly, and intentionally, refrain from informing a patient of the potential risks of a certain procedure that has been recommended, up to and including a realistic risk of death. Other examples would include specific anesthetics that have a risk, small though it might be, of causing the death of the patient. To genuinely respect the patient’s right to informed consent in cases like these would be for the physician to fully inform the patient of such risks and to inform the patient, too, of the most recent statistics on how probable such risks might be. This would provide the patient with the opportunity to make a more informed decision in consultation with the physician.
Consequently, for informed consent to be truly meaningful from the patient’s perspective, the physician has an obligation not only to provide any and all important and relevant information concerning recommended treatments and procedures but also to refrain from interfering, without justification, with the patient’s ultimate decision.
Julian Savulescu and Richard W. Momeyer argue, effectively, that not only does being insufficiently informed of relevant information restrict a patient’s autonomous decision-making, so too does the holding of irrational beliefs, which can result in irrational deliberation. To illustrate this point, they choose the case of a patient who is a Jehovah’s Witness and who, on grounds of religious belief, refuses a prospective life-saving blood transfusion. They argue that, rather than viewing such a case as one in which the health care professional ought simply to defer to the patient’s right to autonomous decision-making, out of respect for a patient whose value system differs from one’s own, the health care professional has a moral obligation not only to inform the patient, as best one can, of all of the important details that are relevant to the patient’s current health care situation, but also to spend the time that is necessary to help guide the patient through a process of rational deliberation concerning those details in an effort to reach the best possible treatment decision. To attempt to accomplish both of these tasks is to demonstrate respect for the patient’s right to autonomous decision-making in a way in which merely addressing the former task is not. Savulescu and Momeyer recognize, and advise against, the exercise of paternalism, if not coercion, both in the providing of important and relevant information and in the guiding of the patient through a process of rational deliberation; to compel the patient either to accept medically justified information or to engage in practical rational deliberation concerning such information would, as they say, be counter-productive in many respects (Savulescu and Momeyer, 1997; Savulescu, 1995).
In the case of any non-emergency medical procedure of any significance, there is a moral obligation to obtain the informed consent of the patient by written signature authorization of an informed consent document. In the case of any emergency medical procedure of any significance, there is a moral obligation to make every reasonable effort to obtain the informed consent of the patient in like manner. Failing that (for example, due to the mental incapacity, or incompetence, of the patient), every reasonable effort should be made to obtain the informed consent, in like manner, of either a patient surrogate (if the patient has a durable power of attorney for health care decisions) or a patient advocate (in the absence of such an advance directive). Only in one kind of case would it be morally justified to proceed with a significant medical procedure in the absence of any written signature authorization: a case in which the nature of the illness, or injury, of the patient requires urgent medical attention, in which it is not possible (again, due to the mental incapacity, or incompetence, of the patient) to obtain the written signature authorization of the patient, and in which there is insufficient time to secure the written signature authorization of either a patient surrogate or a patient advocate.
Adolescent patients represent a special case: while, in many cases, the cognitive ability of the adolescent patient is sufficient to comprehend most, if not all, of the important and relevant information concerning their own health care needs, as well as the recommended options for treatment, adolescents normally are not recognized under the law as competent medical decision-makers. To accommodate both of these facts, and in addition to obtaining the written signature authorization of a parent or guardian, every reasonable effort should be made to inform adolescent patients of all of the important and relevant information concerning their own health care needs and the recommended treatment options, including the approved option, in order to obtain their assent to it. An exception is the case of emancipated minors, that is, minors who are in the military, married, pregnant, already a parent, self-supporting, or who have been declared to be emancipated by a court; emancipated minors, in most legal jurisdictions, are granted the same legal standing as adults for health care decision-making.
There is a moral obligation to protect from dissemination any and all personal information, of any type, that has been obtained about the patient by any health care professional at any medical facility. The justification for the protection of this right is integral to the very provision of health care itself: it is essential that there exist a relationship of trust between the patient and any health care professional. There is a direct correlation between the trust that a patient places in a health care professional to keep in confidence any and all information of a personal nature that surfaces within the context of their clinical relationship and the extent to which that patient can be expected to be forthcoming with full and accurate information about oneself, which is necessary in order for the proper diagnosis and treatment of the patient to be possible at all. In fact, the absence of such trust, whether well-founded or not, in the mind of a person who is considering whether to enter a patient-health care professional relationship can be sufficient to keep that person from entering such a relationship at all.
Adding to the concern that a patient in any medical facility has, with respect to the extent to which personal information about oneself can reasonably be expected to be kept in confidence, is the number of employees of such a facility (especially a hospital) who have access to such information. Even limiting the number of such employees to those who need access to such information in order to properly perform their own medical duties, and even allowing for relevant distinctions between, for example, small community hospitals in rural areas and large metropolitan medical centers that serve as “teaching hospitals” for medical schools, there are literally dozens of people who have such legitimate access. For example, it is not atypical for the personal information on a surgical patient in a hospital to be accessed by attending physicians as well as physicians who are specialists and who serve as case consultants, nurses (for example, in the operating room, in the post-anesthesia care unit, in a step-down unit, on a medical-surgical floor, and perhaps, in other clinical areas), therapists (respiratory, physical, and other types), laboratory technicians (of a variety of kinds), dieticians, pharmacists, and others, including, but not limited to, patient chart reviewers (for example, for quality assurance), and health insurance auditors. Eventually, a point is reached at which the very concept of “confidentiality” either no longer applies or loses any meaning that it might have originally had. Moreover, the greater the number of people who have access to the personal information on a patient, the greater is the possibility that such information might be compromised in any of a number of ways.
In order for the respect of the patient’s moral right to the confidential maintenance of personal information in the clinical setting to have any real credibility, and in order to ensure that the patient receive the best possible quality of health care, there is a moral responsibility on the part of any and all health care professionals to exercise the utmost care in the handling of the personal information of the patient, such that the access to, and the use of, such information is strictly limited to what is necessary for the proper medical care of the patient. Furthermore, patients themselves have the right to request access to their own medical records in any medical facility (including medical offices as well as hospitals and long-term care facilities) and should be allowed (to the extent to which it is reasonably possible) a voice in who else has access to such information. To allow the patient this kind of input in one’s own medical care can foster, in any of a number of ways, the relationship of trust between the patient and the various health care professionals that is necessary for the proper medical care of the patient. (Confidentiality rights for patients in America received a comprehensive makeover with the implementation, in 2003, of the privacy provisions of the Health Insurance Portability and Accountability Act of 1996 (HIPAA).)
Despite the fact that the patient’s moral right to confidentiality, concerning personal information, is of the utmost importance, and despite the fact that the physician-patient relationship has traditionally enjoyed a privileged status, even in the law, there is at least one exception to this moral right: the oral or written expression of the intention, in a serious and credible way, on the part of the patient, to harm another. Such a communication imposes on the health care professional not only a moral, but also a legal, obligation to notify the proper authorities. In such a case, the right of another not to be harmed supersedes the otherwise obligatory moral right to confidentiality on the part of the patient. (The Supreme Court of the State of California decision in the Tarasoff v. Regents of the University of California case (1976) held that mental health professionals have a legal duty to protect identifiable persons who are seriously threatened by a patient, for example, by warning them.)
Another possible exception to the patient’s moral right to confidentiality is to be found within the context of the policies and programs of public health organizations. Given that the primary goals of such organizations are to foster and to protect the health of the members of entire populations, or societies, of people, the fundamental means by which to accomplish these goals are policies and programs the intent of which is either to prevent illness and injury or to provide health care services. In their efforts to prevent illness, public health policies sometimes come into conflict with a patient’s moral right to confidentiality. For example, a person’s right to know, for reasons of self-protection, that one’s spouse has contracted a sexually transmitted infection, by virtue of this spouse’s extra-marital relationship with one, or more, other sexual partners, might be given precedence (on moral, if not legal, grounds) over this spouse’s moral right to confidentiality, which normally would be protected within the physician-patient relationship (in this case, the same physician-patient relationship in which this sexually transmitted infection was discovered). Depending on the severity of the particular type of sexually transmitted infection, and the degree to which it is widespread in the population in question, the fact that this spouse has contracted this particular sexually transmitted infection might reasonably be not only a matter of individual concern but also, properly, a public health matter.
Of all of the ethical issues that can be encountered in the practice of health care, none has been more controversial than those of abortion, euthanasia, and physician-assisted suicide. Despite the debates that are waged, with an abundance of passion concerning the specific moral aspects of each of these ethical issues, a reasoned analysis of each of these ethical issues might be expected to provide new opportunities for a better appreciation of the complexities of each.
At least since the time of the Oath of Hippocrates, with its explicit prohibition against abortion, there have been admonishments against the practice of the aborting of a human fetus together with arguments on both sides of this issue. Abortion is a perennial moral issue in most societies that ebbs and flows in its importance as an issue that serves to inform, if not incite, social debate and social action. However, over the late 20th century and early 21st century in America, the stark differences voiced by people in the society at large, on each fundamental side of this issue, have stood in contrast to the reasoned debates waged by philosophers in their attempts to bring clarity to the relevant moral issues, to the concepts that are inherent in such issues, and to the language that is used to express such issues and concepts. Historically, some theologians and some legal theorists have made moral and legal distinctions, respectively, that are relevant to the practice of abortion based on the concept of “quickening,” that is, the point in time (usually 16 to 20 weeks after conception) during a pregnancy at which the expectant mother is first able to discern fetal movement in the womb, and on the concept of “viability,” that is, the stage of development of the fetus (usually taken to be 24 weeks into the pregnancy) after which the fetus is expected to be able to survive outside of the womb (despite the likelihood of under-developed body organs and physiological, if not also mental, disabilities).
The U. S. Supreme Court decision in the case of Roe v. Wade (1973) upheld a woman’s legal right to an abortion, grounded in a right to privacy under the “due process” clause of the Fourteenth Amendment of the U. S. Constitution, rendering illegal any outside attempts to the contrary (usually by state governments) during the initial trimester of the pregnancy, but allowing state governments to limit, although not prohibit, a woman’s decision to have an abortion during the second trimester of the pregnancy. From the end of the second trimester to the time of delivery, that is, after viability, state governments were granted the authority not only to limit but also to prohibit abortions.
Despite the fact that those who adopt what are usually referred to as conservative positions and those who adopt what are usually referred to as liberal positions on the issue of abortion sometimes take the same position on related moral issues, for example, that murder is morally unacceptable and that people have a moral right to their own lives, many disagree, fundamentally, on the question of whether the act of abortion is also an act of murder and on the question of whether a fetus has a right to life. Since the Roe v. Wade landmark decision, most of the theoretical ethical debates have attempted to address each of these issues by focusing on the concept of “personhood,” as central to this debate.
Mary Anne Warren, in an influential essay in which she responds to many of the significant arguments in the literature to that point in time, makes an important distinction between what it is to be a human being as compared to what it is to be a person. According to Warren, the classic argument against abortion trades on the fallacy of equivocation. The argument is as follows: since it is morally incorrect to kill innocent human beings, and since fetuses are innocent human beings, it follows that it is morally incorrect to kill fetuses. Warren points out that the proponent of this argument is equivocating on the term “human being.” For, in its occurrence in the initial premise, “human being” is intended to mean something like “a full-fledged member of the moral community,” that is, the moral sense of the term “human being,” but, in its occurrence in the second premise, “human being” is intended to mean something like “a member of the species, Homo sapiens,” that is, the genetic sense of the term “human being.” Because the term “human being” shifts its meaning from its occurrence in the initial premise to its occurrence in the second premise, the conclusion fails to follow from the premises; in other words, because the proponent of this argument is guilty of the fallacy of equivocation, the argument (which, in order to succeed, would need in place of “human being” a different term whose meaning is preserved in both of its occurrences) fails.
Warren argues that “moral humanity” and “genetic humanity” are not synonymous in meaning because the membership of these two classes is not the same. In other words, persons are viable candidates to be “full-fledged members of the moral community” in a way in which members of the species, Homo sapiens, merely as such, are not. Consequently, the moral community consists of all, but only, persons. She then entertains the question concerning what characteristics an entity must have in order to be considered a person and launches a search for what might constitute the criteria necessary for personhood. In the final analysis, she identifies five such criteria, which she offers as “most central to the concept of personhood,” as follows: “1) consciousness (of objects and events external and/or internal to the being), and in particular the capacity to feel pain; 2) reasoning (the developed capacity to solve new and relatively complex problems); 3) self-motivated activity (activity which is relatively independent of either genetic or direct external control); 4) the capacity to communicate, by whatever means, messages of an indefinite variety of types, that is, not just with an indefinite number of possible contents, but on indefinitely many possible topics; and 5) the presence of self-concepts, and self-awareness, either individual, or racial, or both” (Warren, 1973). Warren acknowledges that it should not be required of an entity that it must exhibit all five criteria in order to qualify as a person, nor should any particular one of these criteria be deemed necessary for personhood. However, she does identify the first two criteria, followed closely by the third, as the most important. Finally, she insists that any entity that fails to exhibit any of these five criteria is, definitely, not a person, and that a human fetus is just such an entity.
Yet another argument against the right of a woman to have an abortion stems from the claim that, even if it can be demonstrated that a fetus is not, strictly speaking, a person, a human fetus is, after all, “potentially” a person. That is, if a fetus is allowed to develop over the course of a normal pregnancy, its becoming a person grows more and more likely the closer that it gets to its time of delivery. The question is whether this potentiality for personhood should be considered to guarantee the fetus some rights akin to the rights of a person, for example, a right to life. Warren takes up this issue and concludes that, while the fact that the human fetus is a potential person might entail, on moral grounds, that women ought not wantonly to have abortions, in the final analysis, whenever the question comes down to the right to life of the fetus as opposed to the right of a woman to have an abortion, the right of the woman must always supersede the claimed right on behalf of the fetus, because the rights of actual persons always outweigh the rights of potential persons.
Don Marquis takes on the question of the morality of abortion in a way that is separate and apart from any considerations of whether a fetus can be determined to be a person, or even a potential person. Rather, Marquis’s argument is an attempt to avoid the logical pitfalls of each of these other types of arguments. According to Marquis, the one factor that allows us to consider the taking of a human life to be morally objectionable is that to do so is to take away that individual’s life experiences, activities, projects, and enjoyments, which, had that individual’s life not been taken away, would have constituted that individual’s future personal life, all of which would have been either intrinsically valuable or, at least, valuable as means to ends that were themselves intrinsically valuable to that individual. To take a human life is to deprive the individual not only of what one values at present but also of what one would have come to value over time had one been allowed to live on, that is, to deprive one of all of the value that one’s future continued life had promised, a future that now will not exist. It is, says Marquis, this loss that makes the taking of a human life morally incorrect. This argument against the taking of a human life would apply not only to adults but also to young children and babies who, arguably, also have a future of value, concerning life experiences, activities, projects, and enjoyments, to which to look forward. In the same way, a human fetus has a similar future which, if the fetus is aborted, would never be able to come to pass (Marquis, 1989).
An obvious criticism of this argument concerning the moral status of abortion is that Marquis’s argument suggests, at least, that the reason he identifies to support the claim that the taking of a human life (in the case of human adults, young children, babies, and even fetuses) is morally incorrect is, if not the only such reason, then at least far and away the most important one. However, this is to minimize the importance of other such reasons, the plausibility of which also seems likely, such as the varying degrees of emotional pain and grief suffered by the friends and loved ones of the victim and the denigrating effects on the perpetrator’s character, if only in terms of a desensitization to the value of human life itself. Finally, and notwithstanding the concept of personhood, Marquis’s argument at least suggests that the prospective future of a human fetus is, if not identical to, then on a fundamental par with, that of not only a baby or a young child but also a human adult. Surely, however, there are relevant differences, not the least of which would be the capacity, greater for a young child than for a baby and greater for an adult than for a young child, to envision, and to have anticipatory thoughts about, one’s own prospective future and the value that it might hold, a capacity that, in theory, a fetus simply does not have.
At least since the Roe v. Wade U. S. Supreme Court decision, the spectrum of positions on the issue of the moral status of abortion has been represented by an extreme conservative position, namely, that, without any exception, abortions of human fetuses ought never to be allowed; by an extreme liberal position, namely, that abortions of human fetuses ought always to be allowed, and for any reason whatsoever; and by more moderate positions, like, for example, that abortions of human fetuses ought not to be allowed, in general, but ought to be allowed in cases in which the following circumstances serve as the exceptions: in cases in which pregnancies have occurred as a result of the act of rape or the act of incest, or in cases in which the life of the expectant mother is seriously jeopardized by the pregnancy itself. It is likely that people, in societies throughout the world, will continue to stake out positions on this issue as influenced by their cultural and/or religious beliefs, by the beliefs of their ancestors and/or living relatives, by their own ignorance or knowledge on the subject, and for all other manner of reasons, but it is unlikely that the spectrum of positions on the issue of the moral status of abortion will change.
Euthanasia is an intervention in the standard medical course of treatment of a patient who is reasonably considered to be terminally, or irreversibly, ill or injured for the express purpose of causing the imminent death of that patient, normally for reasons of mercy.
Whenever a patient who is competent to make health care decisions for oneself, and who is under no coercion from anyone else, makes an explicit request (oral or written) to be euthanized, the case in question is one of “voluntary euthanasia.” The case in question is also one of “voluntary euthanasia” whenever a patient is not competent to make health care decisions for oneself but has, prior to becoming incompetent, properly executed an advance directive that either explicitly expresses (in the case of a living will) or explicitly authorizes a surrogate to express (in the case of a durable power of attorney for health care decisions) the request to be euthanized under certain specified conditions, and those conditions are present.
Whenever a patient is not competent to make health care decisions for oneself, no advance directive has been properly provided on the patient’s behalf, but a patient advocate (that is, a close relative whose decision-making authority is recognized in the law or, failing that, a more distant relative or friend) makes an explicit request (oral or written) that the patient in question be euthanized, the case in question is one of “non-voluntary euthanasia.”
Whenever someone other than the patient makes the decision that a patient who is competent to make health care decisions for oneself be euthanized, and does so without the consent of the patient (either because the patient was never consulted on the matter or because the patient was consulted but chose not to give consent), the case in question is one of “involuntary euthanasia.” While neither voluntary nor non-voluntary euthanasia is, by its very nature, morally indefensible, it is impossible to imagine a situation in which involuntary euthanasia could ever be morally justifiable.
When an instance of euthanasia takes the form of the committing of an action, it is usually referred to as “active euthanasia”; when such an instance takes the form of refraining from the committing of an action, it is usually referred to as “passive euthanasia.” The administering of a lethal injection would be an example of the former; the withholding of a regular course of medical treatment, in order for a fatal injury, illness, or disease to take its natural toll, would be an example of the latter. This distinction between active and passive euthanasia has, historically, been the focal point of most of the controversy concerning the practice of euthanasia.
Traditionally, all health care-related professional codes of ethics have found passive euthanasia to be morally allowable but active euthanasia to be tantamount to murder; the relevant laws in all of the legal jurisdictions in America follow suit. However, an argument can be made that terminally ill or injured patients ought to be allowed, both morally and legally, to decide when their own lives should end and whether the means should be active or passive euthanasia; the justification for such allowances would be a true respect for the right of such patients to self-determination concerning not only their own health care but also the duration of their own lives and the means by which their lives are to end, that is, a true respect for the autonomy of such patients. Indeed, an additional argument can be advanced in an effort to uphold the patient’s right to beneficent health care: in an effort to “do good” on behalf of, or to benefit, a terminally ill or injured patient, once again, one could argue that such patients should be allowed to decide their own fate and the means by which to achieve it, that is, by the method of either active or passive euthanasia.
James Rachels, in a famous article on this very question (Rachels, 1975), attempts to demonstrate that this controversy represents a distinction without a difference. That is, Rachels argues that there are, indeed, no relevant moral differences between active and passive euthanasia and that, in order to be consistent in one’s thinking, one has to acknowledge that active and passive euthanasia are either both morally allowable or both morally condemnable. William Nesbitt argues that Rachels fails to prove that the ordinary interpretation of responses to the two agents in Rachels’s famous comparative examples (Smith, who drowns his young cousin for an inheritance, and Jones, who, with the same motive, merely allows his cousin to drown) would be the same, which is the heart of the case that Rachels sets forth (Nesbitt, 1995; Callahan, 1989).
Related to the topic of active euthanasia is what has come to be known as “the doctrine of double effect.” This doctrine has a long and rich history in the moral theology of the Roman Catholic Church but has been applied to cases of terminally ill patients only relatively recently. In its application to patients with terminal diagnoses who receive palliative care, the doctrine of double effect is typically invoked in an effort to justify, on moral (if not legal) grounds, the commission of an action by a medical professional the intention of which is to relieve the patient’s usually excruciating physiological pain, in full cognizance of the likely, but unintended, consequence of causing the death of the patient. For example, a cancer patient, with a prognosis of only a matter of days to live, continues on a regimen of the sedative lorazepam and the opioid morphine. With increasing frequency, the patient has complained of worsening pain and has repeatedly requested ever-higher doses of the morphine drip. In response to each of these requests, the physician has complied, knowing full well that there will be a threshold beyond which the dosage of morphine will be sufficient (in conjunction with a myriad of other causal factors that are idiosyncratic to this patient) to kill the patient. This, then, comes to pass. If asked by a nurse on this case whether anyone was culpable for the patient’s death, the physician would, typically, reply that no one was, because, even with the final increase in the dosage of morphine, the intention was not to kill the patient; rather, the intention was to alleviate the patient’s pain.
The myriad of other causal factors that can jointly hasten such a patient’s death include (but are not limited to) the patient’s body weight, the status of the patient’s immune system, the effects of the progression of the cancer, the effects of other medications, and whether the patient is still receiving nutrition and hydration. The key factor in the doctrine of double effect is the intention on the part of the medical professional in question. As long as the action in question is deemed a good one, the intention was the beneficial effect (alleviating the patient’s pain) rather than the harmful effect (killing the patient), the beneficial effect stemmed from the action directly rather than as a result of the harmful effect, and the beneficial effect outweighed, in importance, the harmful effect, the action in question is determined to have been morally (if not also legally) allowable by the doctrine of double effect. However, the most fundamental criticism of the application of the doctrine of double effect to such cases is that there is no relevant moral distinction between the action in question and an instance of active euthanasia.
Palliative sedation is the monitored use of medications, including sedatives and opioids, among others, to provide relief from otherwise unmitigated and excruciating physiological (among other types of) pain or distress by inducing any of a number of degrees of unconsciousness. It can be similarly problematic, depending on whether and to what extent the pain or distress of the patient in question is managed appropriately. If managed well, palliative sedation need not be a causal factor in hastening the death of the patient; however, if it is not managed well, palliative sedation can, in theory, be such a causal factor.
If “suicide” were to be understood as one’s pursuit of a plan of action the effect of which is expected to be the intentional premature death of oneself, then “assisted suicide” can be understood to be the pursuit of such a plan the effect of which, in order to be successful, needs to be facilitated in some way, shape, or form by someone else. If that someone else were a physician, then it would constitute a case of “physician-assisted suicide.” Public attention was brought to bear on the issue of physician-assisted suicide in America by Dr. Jack Kevorkian who, throughout the final decade of the 20th century, as a retired pathologist, offered to help terminally ill patients to end their lives prematurely. Prior to his fifth, and final, prosecution, which was for second-degree murder, and for which he was convicted (having avoided this fate the first four times), he claimed to have assisted approximately 130 patients in ending their lives, something he had claimed, throughout his entire medical career, that patients ought to have a right (both morally and legally) to do. Despite the fact that all health care-related professional codes of ethics have consistently condemned, and still do condemn, physician-assisted suicide, currently, at least five of the fifty states in America have legalized physician-assisted suicide. Among those European nations that had legalized both active euthanasia and physician-assisted suicide by the early 21st century, the Netherlands has led the way (Kevorkian, 1991).
Theoretically, the most fundamental reason to conduct research involving human subjects is to add to our existing knowledge concerning the physiological and the psychological constitution of the human body and the human mind, respectively, in an effort to improve the quality of life of people as determined by the status of their bodily and mental health. Thus, the principle of beneficence should lie at the heart of all research that is conducted with human subjects. The history of such research is one of major achievements, typically incremental and over time, each of which has played a part in extending not only the duration of human life but also the quality of the day-to-day existence of members of the human race, virtually all over the planet. However, many moral issues have arisen from the mistreatment to which many such human subjects have been subjected, mistreatment that has occurred in any of a number of important ways, from physiological abuse to mental and emotional abuse to the abuse of human rights. The history of human subject research is replete with examples of such abuses. By the middle of the 20th century, enough people in sufficiently important roles in Western societies began to codify what they took to be some of the most basic moral rights that would need to be respected in order for human subject research to be recognized as morally acceptable.
Over many decades throughout the second half of the 20th century, a variety of codes of ethics were developed for the protection of the rights of people who serve as human research subjects. In virtually every case, the codes that were of the most importance were formulated in response to specific cases of human subject research during the course of which at least some of the participants had some of their fundamental rights abused. A few examples follow.
The Nuremberg Code (1949) was formulated in response to experiments that were performed on people who were members of demographic groups targeted for extinction by Hitler in Nazi Germany and that were conducted by medical doctors and biomedical researchers, some of whom had little to no expertise or experience in either the practice of medicine or the conducting of biomedical research. In the judgment of those who prosecuted two dozen of these experimenters in what came to be known as the “Doctors’ Trial,” held in Nuremberg after the more famous Nuremberg trials in which the Third Reich’s major suspected war criminals were prosecuted, the main charge for which the defendants were tried concerned the murderous and torturous human experiments that were conducted in many of the concentration camps and prisoner-of-war camps. Of the ten principles in the Code, the emphasis, in general, was on the need for biomedical researchers to obtain the voluntary informed consent of prospective human subjects prior to the commencement of any such experimentation. The second most important right of human subjects to be emphasized in the Code was the right to protect oneself by determining whether, and when, it is in one’s own interest to end one’s own participation in such an experiment, without fear of any penalty or punishment. Despite having no legal force, the Nuremberg Code has had profound effects on the ethics of human experimentation and has spawned a good number of other such codes since its formulation.
The Declaration of Helsinki (1964, and with multiple revised versions since) was adopted by the World Medical Association’s World Medical Assembly with the title, “Recommendations Guiding Medical Doctors in Biomedical Research Involving Human Subjects.” This code of ethics consists of a host of recommendations, the result of which is the establishment of the following moral principles: 1) a competence requirement for research investigators, 2) a requirement that the significance and importance of any expected positive outcomes of the research outweigh any anticipated risks to the human subjects, 3) a requirement of informed consent on the part of the human subjects, and 4) a requirement for the external review of all of the research protocols.
The National Research Act (1974) created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research and did so in direct response to the infamous “Tuskegee Syphilis Study” (1932-1972), a study of approximately 400 African-American male sharecroppers, each of whom suffered from syphilis, the stated purpose of which was to ascertain whether there were any significant differences between the progression of syphilis in African-American men and in Caucasian men. The participants in this study, begun during the throes of the Great Depression and in one of the economically poorest regions of America, were promised free food and free medical care for their participation. However, rather than being informed of the venereal disease from which they suffered, they were told only that they had “bad blood.” Most of these men were married and continued to have conjugal relations with their wives and to produce children (many of these wives and newborn babies were infected with syphilis). Worse, even after penicillin was discovered and approved as modern medicine’s first antibiotic (and found to be effective against a variety of bacterial infections in humans, including syphilis, by the late 1940s), not only were these men never informed about this “miracle cure,” but the health care professionals who were conducting this study, knowingly and intentionally, refrained from administering any penicillin to any of this study’s participants (Brandt, 1978).
The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research (1979) was generated by the above-mentioned commission and identified boundaries between the practice of routine medical care as compared to biomedical research protocols (again, as a direct result of the “Tuskegee Syphilis Study”), identified moral guidelines for the process by which research subjects are selected and for their informed consent, and emphasized the moral principle of respect for research subjects as persons as well as the ethical principles of beneficence and justice in the treatment of human subjects.
The Public Health Services Act (1985) established and mandated that every research facility in America that conducts either biomedical or behavioral research on human subjects have an Institutional Review Board (IRB) for the protection of the rights of human research subjects. This requirement for each such research institution (academic or otherwise) to have IRB approval for each and every biomedical or behavioral research study was a result of many instances of research protocols that, for a variety of reasons, were thought, at least in retrospect, to have violated the human rights of their human participants. For example, the “Stanford Prison Experiment” (1971) was a behavioral study, the purpose of which was to identify and analyze the psychological effects of the relationship between prison guards and prisoners on members of each group, but which took on a life of its own and resulted in a good number of human rights violations. As for biomedical research, the famous case of Henrietta Lacks and her HeLa cells allowed for scores of medical breakthroughs in the curing of diseases in the latter half of the twentieth century, earning large amounts of money for some people and institutions in the research process, while most of her descendants, including some of her own children, lived their entire lives without health insurance, some of them, at least temporarily, homeless. Only recently has attention been brought to her story, and to this situation, by her biographer (Skloot, 2010).
The composition of the membership of all Institutional Review Boards (IRBs) is mandated to be reflective of diversity with respect to gender, race, and culture or heritage as well as a diversity of social experiences and an appreciation for issues (relevant to the research involving human subjects) that reflect the standards and values of society, if not also of the local community. The fundamental goal of all IRBs is to determine the acceptability of all research proposals, involving human subjects, based on the extent to which such proposals adhere to all relevant federal, state, and local laws, the research institution’s own policies and regulations, and all relevant standards of professional conduct, as mandated by the federal government. Moreover, IRBs are obligated to ensure that all proper procedures are followed for the voluntary informed consent of all of the subjects of all research projects.
In addition to enforcing stringent standards in order to ensure that the consent of prospective human participants be truly informed, IRBs are mandated to enforce equally strict standards concerning the following: that potential risks as well as expected benefits of the research protocols are made clear to prospective participants; that information of a personal nature that is obtained on research participants is kept in strict confidence; and that any research participant who is, simultaneously, a patient (whether in a medical facility or not) under medical treatment, is made sufficiently aware of the differences between those practices that are a part of one’s medical treatment as compared to those practices that are a part of the research protocol. In other words, researchers, in such situations, are morally obligated to exercise what sometimes might constitute supererogatory measures in an effort to help the research participant to be aware of which procedures that one is subjected to are a part of one’s medical treatment and which procedures that one is subjected to are a part of the research study, which might or might not be expected to be of therapeutic value.
The moral issues that have arisen, over decades, concerning human subjects in both biomedical and behavioral research are many and varied. In biomedical research, such issues include the exclusion of the members of specific demographic groups from even being considered eligible to become participants in such research. For example, until the latter part of the 20th century in America, biomedical research on breast cancer was almost nonexistent. Not until women, in decent numbers, had entered the field of medicine and the field of biomedical research did research proposals into various aspects of breast cancer begin to compete for funding with research proposals into various aspects of prostate cancer. Furthermore, even biomedical research into, for example, the correlative, if not causal, factors involved in heart disease solicited only Caucasian males as prospective research participants. In response to what some viewed as unjust funding priorities and unfair funding criteria, Congress passed the National Institutes of Health (NIH) Revitalization Act of 1993, which mandates that women and members of minority groups be included in all research that is funded by the NIH unless there is a “clear and compelling” reason that their inclusion in such research is “inappropriate” with respect to the health of the prospective subjects themselves or the purpose(s) of the research. Examples of appropriate exclusionary practices would be biomedical research into testicular cancer, which would properly exclude women, just as biomedical research into sickle-cell anemia would properly exclude Caucasians.
One of the most popularly known moral issues concerning both biomedical and behavioral research is the use of placebos. The classic case of the use of placebos is the clinical drug trial, in which researchers are attempting to determine, first, the effectiveness of the experimental drug, and second, the extent to which potential adverse side-effects of the experimental drug are significant, if not fatal. Typically, the study includes two groups of participants: those to whom the experimental drug is administered and those to whom a placebo is administered (popularly known as a “sugar pill” due to the fact that it is designed to have no relevant effect, at all, on the research participant to whom it is administered). In order to ensure credibility concerning the use of a placebo, the participants in both groups are intentionally kept unaware of which group is receiving the experimental drug and which is receiving the placebo. To ensure even more credibility, the researchers orchestrate not only a blind study, as just described, but a double-blind study, in which not only are the participants kept unaware of which group is receiving the experimental drug and which the placebo, but the researchers who interact with the participants are also kept unaware of this information. The main reason for a blind study is to attempt to avoid any possibility of what we might refer to as suggestive bias on the part of the participant concerning the possible effectiveness of the experimental drug. The main reason for a double-blind study is to attempt to avoid any possibility of what we might call expectation bias on the part of the researchers themselves concerning either the effectiveness, or the lack thereof, of the experimental drug.
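The blinding procedure just described can be sketched in code. The following is a minimal illustration only, not an actual trial-management system; the participant identifiers, the opaque labels “A” and “B”, and the fixed seed are all hypothetical. The point is that the mapping from opaque label to true study arm is generated randomly and held apart from what participants and researchers see.

```python
import random

def assign_double_blind(participant_ids, seed=42):
    """Randomly assign participants to an experimental-drug arm or a
    placebo arm, returning two views of the same assignment:
      - blinded: participant -> opaque group label ('A' or 'B'), which
        is all that participants and researchers see during the trial;
      - key: opaque label -> true arm ('drug' or 'placebo'), sealed
        with a third party until the study is unblinded.
    """
    rng = random.Random(seed)

    # Randomly decide which opaque label corresponds to the real drug.
    labels = ["A", "B"]
    rng.shuffle(labels)
    key = {labels[0]: "drug", labels[1]: "placebo"}

    # Randomly split the participants into two equal-sized groups.
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    blinded = {pid: "A" for pid in ids[:half]}
    blinded.update({pid: "B" for pid in ids[half:]})
    return blinded, key

# Eight hypothetical participants; only the 'A'/'B' labels circulate.
blinded, key = assign_double_blind(range(8))
```

Because neither the participants nor the researchers administering the protocol hold `key`, both suggestive bias and expectation bias are addressed by the same mechanism: withholding the label-to-arm mapping.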
The use of placebos in biomedical or behavioral research does raise questions concerning the ethical principle of beneficence in addition to the moral right to be told the truth. First, in theory, the participants in many, if not most, clinical trials, including drug trials, have reasonable expectations of benefitting in any of a number of ways from their participation in such research. At least in cases in which such a participant is, simultaneously, a patient with a terminal illness who ends up in the placebo group, it would appear that the right to beneficent treatment is being thwarted. In such a situation, and by the nature of the case, such a participant would be, perhaps literally, betting one’s life on, in this case, the experimental drug. Second, to the extent to which participants in human subject research are being deceived, knowingly and intentionally, by the researchers, which is a necessary part of any research study involving the use of placebos, a case can be made that the moral right of the research participant to be told the truth has been violated (regardless of whether such participants are also, simultaneously, patients who are receiving medical treatment). Of course, the response to either of these criticisms of research protocols that make use of placebos is that the participants agree to the use of placebos and know, full well and in advance, that they have an equal chance of being members of the group who receive the placebo or members of the group who do not.
By the nature of the case, there are some groups of people in society who are especially susceptible to abuse, concerning their rights, whenever they are the subjects of human research. Such vulnerable populations are as follows: babies, including neonates (as well as human fetuses and the subjects of human in vitro fertilization, at least in theory); children; pregnant women; prison inmates; undergraduate and graduate students; the members of any demographic minority group; and anyone who is cognitively challenged, physiologically challenged, educationally disadvantaged, economically disadvantaged, significantly compromised in one’s health, terminally ill, injured, or disadvantaged in any other relevant way.
Of particular concern in the recruitment of human research subjects, especially in cases involving prospective participants who are known to be vulnerable in any important and relevant respect(s), is the issue of coercion, whether explicit or implicit. With the possible exception of the first, people in every category enumerated above as a vulnerable population would be susceptible, for a variety of reasons, to coercion by recruiters for human subject research. Whenever possible, biomedical and behavioral researchers should refrain from even attempting to recruit, as a prospective participant, anyone who is reasonably identifiable as a member of any vulnerable population. In the event that a biomedical or behavioral researcher needs to recruit any such vulnerable prospective participants (by virtue of the nature of the research itself), the researcher has a moral obligation to be aware of the likelihood that the prospective participants in question will feel coerced (either explicitly or implicitly, and whether they are aware of it or not) to “voluntarily” consent to participate in the research project in question. In such a situation, the researcher is morally obligated to engage in supererogatory efforts to attempt to minimize, as best one can, the effects of the coercion involved.
Once recruited, the most fundamental concern of the biomedical or behavioral researcher is the need to ensure, as best one can, that the participant (as a member of a vulnerable population) is as fully informed as possible, with respect to all relevant information concerning the proposed research project and the participant’s role in it, in an effort to approximate, once again, as best one can, truly informed consent on the part of the participant. The main reason for this concern is that any particular research participant, who is vulnerable in any important and relevant respect(s), might find it difficult, if not impossible, to comprehend any, much less all, of the relevant information concerning the proposed research project and one’s own role in it, for any of a number of reasons, for example, insufficient comprehension abilities, insufficient familiarity with the language spoken by the researchers, inadequate cognitive abilities, chronic pain of such intensity as to inhibit one’s cognitive processes in the case of a research participant who is also a patient with at least one acute health issue, and more.
Throughout the history of the practice of health care, the acquisition of knowledge and the innovation of medical technologies have brought with them new moral issues. Beginning in the last quarter of the 20th century and continuing into the 21st century, advancements in knowledge and technologies concerning human reproduction and human genetics have spawned whole new types of moral questions and moral issues, many of which involve even more complexities than the previous ones.
The last quarter of the 20th century brought with it major advances in biological knowledge and in biological technology that allowed, for the first time in human history, for the birth of human offspring to result from biological interventions in the procreative process. For those whose ability to procreate was biologically compromised, new scientific methods were developed to facilitate successful procreation. Such methods include artificial insemination (AI), in vitro fertilization (IVF), and surrogate motherhood (SM).
Artificial insemination is the process by which the sperm is inserted directly into the uterus during ovulation. In vitro fertilization is the process of uniting the sperm with the egg in a petri dish rather than allowing this process to take place in vivo, that is, inside the woman’s body. To increase the probability of success, multiple embryos are transferred to the uterus. As a result, multiple pregnancies are not uncommon. These multiple pregnancies increase the probability of premature births, which usually result in low birth weight, under-developed organs, and other health issues. As to the embryos that are not chosen for transfer, the normal practice is to freeze them for possible future use because the success rate for any given round of IVF is only approximately 1 in 3.
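The rationale for transferring multiple embryos can be made explicit with a simple probability calculation. The sketch below is illustrative only: the 15% per-embryo implantation probability is a hypothetical figure, not a clinical statistic, and treating the embryos as independent is a simplifying assumption.

```python
def prob_at_least_one_implants(p_single, n_embryos):
    """Probability that at least one of n transferred embryos implants,
    given a per-embryo implantation probability p_single and assuming
    (as a simplification) that the embryos implant independently."""
    return 1 - (1 - p_single) ** n_embryos

# With a hypothetical 15% per-embryo implantation probability:
p_one = prob_at_least_one_implants(0.15, 1)    # 0.15
p_three = prob_at_least_one_implants(0.15, 3)  # about 0.386
```

The same arithmetic also explains why multiple pregnancies become more likely as more embryos are transferred: every additional embryo raises the chance that more than one implants.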
Many opponents of IVF focus on the probability of the resultant health issues; in other words, to bring into the world, in a contrived way, children who stand a reasonable chance of suffering any of a number of health problems is unfair to such children (Cohen, 1996), if not also to the society into which they are born. Others disagree and argue that the gift of life would more than outweigh the usual health issues that might result from IVF (Robertson, 1983). Some commentators argue that reproductive technologies, such as AI and IVF, allow women the opportunity to realize their potential for autonomous decision-making when it comes to their own reproductive preferences (Robertson, 1994 and Warren, 1988). Another criticism is the likelihood that children so produced will be viewed as, somehow, inferior to children who are born as a result of the traditional process of procreation. There are also moral issues concerning frozen embryos. First, the longer an embryo is maintained in a frozen state, the more likely it is to become degraded to the extent that either it is no longer capable of being used for its intended purpose or it is no longer alive. Second, there are serious questions as to what the fate of these frozen embryos should be when, for example, because of the dissolution of the biological parents’ relationship or the death of one, or both, of them, such embryos are left in a state of limbo. Should they be used for scientific research, should they be offered to other people whose compromised procreative abilities dictate a need for such embryos to be brought to fruition through the process of IVF, or should such embryos merely be discarded?
Surrogate motherhood is the process by which one woman carries to term a fetus for someone else (typically a couple). The surrogate mother is impregnated by the method of either AI (traditional surrogacy, according to which the surrogate mother’s egg is fertilized) or IVF (gestational surrogacy, according to which an embryo is transferred to the uterus of the surrogate mother). Not only in the former case (in which the surrogate mother is also the genetic mother) but also in the latter case (in which the surrogate mother is not the genetic mother), one of the most important moral, if not also legal, issues has always been whether the surrogate mother has any proprietary rights to the newborn baby, regardless of whether a legal contract applies and regardless of whether any money changes hands.
Another fundamental moral issue occurs in cases in which there is a contractual relationship as a legal guarantee for a financial agreement. Such cases raise the moral issue of whether fetuses and newborn babies should be treated as commodities, and indeed, whether the womb of the surrogate mother should be rented out as a service for someone else, that is, also treated as a mere commodity (Anderson, 1990). However, not all commentators on this subject agree that surrogate motherhood must, of necessity, be reduced to the crass practice of baby selling or that women who serve as surrogate mothers are, necessarily, exploited. On the contrary, it can be argued that women who serve as surrogate mothers are willing to forgo any parental right that they might have to begin, much less to maintain, an inter-personal relationship with the babies they deliver. In the same way in which this forgoing of any parental right to engage in any type of inter-personal relationship with the baby appears not to be offensive in cases of surrogate motherhood engaged in for altruistic reasons, consistency would seem to demand that no such offense should enter into the situation just because an exchange of money is involved; in other words, the motive is not relevant to the moral assessment of the process of surrogate motherhood (Purdy, 1989).
Artificial insemination, in vitro fertilization, and surrogate motherhood have been defended on the ground that the right to reproductive freedom, including the right to exercise one’s autonomy concerning procreation, allows for any such means to bring children into the world.
Cloning is the asexual reproduction of an organism from another organism, its progenitor, such that the resulting organism is genetically identical to its progenitor. Cloning has always been a natural process of reproduction for many bacteria, plants, and even some insects, and it has been used as an intervention in the reproduction of plants for hundreds of years. However, since the successful cloning of a sheep named Dolly in 1996, major moral concerns have been voiced concerning the ability of scientists to clone, not only other animals, but also human beings. Despite some claims to the contrary, none of which has ever been verified, the cloning of human beings is not yet feasible.
The purpose of therapeutic cloning is to create an embryo, the stem cells of which are genetically identical to the donor cell and can be used in scientific research to better understand certain diseases and to derive treatments for them. The same moral issues concerning the use and ultimate fate of human embryos, as aforementioned, apply to these cloned human embryos.
The purpose of reproductive cloning is to create an embryo which, if brought to term, will develop into a living member of its species. In the successful attempts to clone a variety of animals to date, a consistent problem has been health issues related to significant defects in major organs, including the heart and the brain; in addition, the duration of the lives of these cloned animals has been, on average, only about half the normal life expectancy of their species. Moreover, each successful attempt to clone these animals has been preceded by literally dozens, if not hundreds, of unsuccessful attempts. These same problems would represent major moral concerns in any attempt to clone human beings. However, were any such attempt to be successful and were the resultant cloned human being to be of sufficiently good health to lead anything like a normal existence, new moral issues would arise. Would such cloned human beings be viewed as second-class members of the human race? Would they be deprived, either socially or legally, of some of the fundamental freedoms that are normally afforded people, for example, the right to exercise one’s own autonomy? Would cloned human beings have been robbed of the very uniqueness (in terms of their physiology, their personality characteristics, and their character traits) that every human being in the history of humankind has hitherto enjoyed? (Just because a cloned human being would be genetically identical to its progenitor does not mean, by virtue of its idiosyncratic experiences in utero and in life, that it would, of necessity, have exactly the same life as its progenitor) (National Academy of Sciences, 2002). This last point notwithstanding, would cloned human beings be denied rights to their own identity (Brock, 1998)?
Any scientific researcher who has aspirations to clone a human being would be well advised to read, carefully, Mary Shelley’s Frankenstein; or, The Modern Prometheus. Published in 1818, this work of science fiction leaves the reader with the not too subtle warning that one ought to keep one’s hubris in check; for to create anything, much less an artificial man, is almost certainly to fail to anticipate, whether through unwillingness or inability, many of the important untoward consequences of one’s actions and, equally problematic, to overestimate one’s ability to exercise control over one’s own creation.
Since the discovery in 1953 of the molecular structure of deoxyribonucleic acid (DNA), the molecule that contains the genetic instructions that are necessary for all living organisms to develop and to reproduce, and since the completion, some fifty years later, of the mapping of the human genome, popularly known as the Human Genome Project, that is, the identification of the complete and exact sequencing of the billions of elements that make up the DNA code of the human body, a vast amount of research has been conducted into genetic mutations as causes of many human disorders. This research has also allowed for the creation of literally thousands of genetic tests, the purpose of which is to detect, both in prospective parents and at the fetal stage of the development of human offspring, those genetic mutations that are responsible, in part or in whole, for many non-fatal and fatal conditions and diseases. Furthermore, this research has allowed for the editing of human genes, in an effort to proactively disable some genetic mutations, in adults, children, and newborns as well as in the fetal stage of development. The information derived from genetic testing, more often than not, is anything but definitive; in other words, the results of the vast majority of genetic tests are predictive of the probability that the disease or condition for which the testing was done will actually develop. Whether such probabilities are low, moderate, or high, many other factors, especially environmental ones, can also be contributing factors. Further, while many genetic tests are available for the detection of conditions and diseases for which there is, at present, a cure, many other genetic tests are able to be conducted for conditions and diseases for which there are no cures.
This fact raises the obvious question of whether specific individuals do or do not want to know that there is a probability, to whatever degree, that they will fall victim to a particular condition or disease for which there is no cure.
Each of the advances in genetic knowledge, genetic technologies, and biomedical capabilities concerning genetics brings in its train its own set of moral concerns. A genetic disorder such as amyotrophic lateral sclerosis (ALS, popularly known as Lou Gehrig’s Disease), a motor neuron disease that is always fatal, can be familial: one who carries the gene mutation for ALS has a 50% chance of passing the mutated gene on to each of their offspring, although one who inherits the mutated gene might or might not fall victim to the ravages of the disease. It is conceivable that an individual who has begun to exhibit some of the early symptoms of ALS might choose to be tested for any of the four gene mutations that are thought to be causal. If such testing reveals the presence of one or more such mutations, and if this individual has children, two moral issues become paramount: whether any such children should be informed immediately, and, if they are so informed, whether such children should choose to be tested themselves, if only because, depending on the outcome of the genetic testing of these children, the fate of any of their own children (already in existence or as future possibilities) would be a concern.
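The 50% transmission figure is simple Mendelian arithmetic: an affected parent passes one of two alleles to each child at random. A minimal sketch (the function name is an illustrative assumption, and real risk estimates would also factor in penetrance, which this deliberately ignores):

```python
# Autosomal dominant inheritance: a parent carrying one copy of a
# mutated gene passes it to each child, independently, with probability 0.5.
def p_inherit(parent_carries: bool) -> float:
    return 0.5 if parent_carries else 0.0

# Probability that a child of an affected individual carries the mutation:
p_child = p_inherit(True)        # 0.5

# A grandchild carries it only if the intervening child does:
p_grandchild = p_child * 0.5     # 0.25

print(p_child, p_grandchild)
```

This is why, in the scenario above, the testing decisions of the children matter for the grandchildren: a negative test in a child reduces the grandchildren's risk to zero, while an untested child leaves them at the 25% baseline.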
Another moral issue that continues to arise in the context of genetic testing occurs when an adult or a child is tested for one condition or disease and a mutated gene is discovered for another, potentially fatal, condition or disease. This situation can occur because much genetic testing, at present, is sufficiently broad in its application as to include a variety of different genes. So, it sometimes happens that genetic testing of a toddler, for example, for one or more genetic mutations (suspected due to the presence of specific relevant symptoms) reveals one or more other genetic mutations, for conditions, diseases, specific cancers, or, say, young adult-onset cardiomyopathy, about which neither the researcher nor the pediatrician was even concerned. In such a case, questions arise as to whether such health risks (again, not anticipated but discovered by the genetic tests) should be shared with the toddler’s parents, and if so, when: immediately, or when the toddler is older (and if the latter, at what age)? If it is not known whether the offending gene mutations are inherited or merely spontaneous (a common occurrence), does the timing of informing the toddler’s parents become a moral issue in the event that they might expect to bring additional children into the world? And what about the toddler: from the perspective of the pediatrician or the parents, at what age should the toddler be so informed (Wachbroit, 1996)?
The moral issues identified in each of these two hypothetical situations are reflective of the ethical issues that are most fundamental in health care, namely, cases of conflict between the ethical principle of respect for the patient’s right to autonomous decision-making, on the one hand, and acts of paternalism on the part of health care professionals, or the patient’s right to beneficence in one’s relationship with health care professionals, on the other.
In addition to therapeutic reasons for genetics research and its application to health care, there are non-therapeutic reasons for such research and its applications, for example, genetic enhancement, that is, the application of genetic knowledge and technologies to improve any of a number of physiological, mental, or emotional human characteristics. Some commentators argue that genetic enhancement, as compared to genetic therapy, is morally objectionable for a number of reasons, not the least of which is that, in a free-market economic system in which genetic enhancement is not provided by the state to each citizen who might choose it, those who could afford to pay for it would have a decided advantage over those who could not (Glannon, 2001). Other commentators disagree, arguing that any attempt to use gene therapy to cure any type of human dysfunction is in no way morally different from any attempt to use gene therapy to enhance human function in cases in which such enhancement serves to protect one’s health or life (Harris, 1993).
Julian Savulescu goes even further by arguing for what he calls “procreative beneficence”: anyone making use of genetic testing for non-disease human traits should select, from among the possible children available, the child who can be expected to have, based on all of the available genetic information, what he calls “the best life,” that is, “the life with the most well-being,” or at least a life as good as the lives that any of the other possible children could be expected to have. For, according to Savulescu, some non-disease-related genes influence the probability of one’s leading the best life; there is good reason to use the information at our disposal concerning such genes; and one should select those embryos or fetuses that, in light of the available genetic information (including information concerning non-disease genes), have the best opportunity of leading to the best life. He does make clear that, consistent with the moral requirement to select the child who can be expected to have the best life, those individuals making such selections may be subjected to persuasion but ought not to be subjected to any coercion (Savulescu, 2001).
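Structurally, Savulescu’s selection rule is a simple maximization over expected well-being. A deliberately crude sketch, in which the embryo labels and numerical scores are invented placeholders (nothing in Savulescu’s account supplies such numbers):

```python
# Hypothetical expected-well-being scores, imagined as derived from all
# available genetic information, disease and non-disease genes alike.
embryos = {"embryo_1": 0.72, "embryo_2": 0.85, "embryo_3": 0.85}

# "Procreative beneficence": select a candidate whose expected life is at
# least as good as that of every alternative (ties are permissible).
best = max(embryos, key=embryos.get)
assert all(embryos[best] >= v for v in embryos.values())
print(best)  # embryo_2 (tied with embryo_3; either satisfies the rule)
```

The sketch also makes the rule’s contested presupposition visible: it only works if well-being can be meaningfully ranked on a single scale, which is part of what critics such as Stoller dispute.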
Stoller contends that Savulescu fails to make his case because the examples he offers as ostensibly analogous to pre-implantation genetic diagnosis (PGD), a procedure used to screen IVF-created embryos for genetic disorders or diseases prior to their implantation, differ from it in morally relevant ways and consequently fail to justify his theory (Stoller, 2008).
Stem cell research, since its inception, has been the subject of much controversy. The pluripotent quality of embryonic stem cells, that is, their ability to differentiate into the cells that make up any of the human body’s parts, renders them superior to adult stem cells for use in genetic therapeutic research. Hence, many of the same reasons, mentioned above, that constitute moral issues whenever embryos are used for research purposes apply to the use of embryonic stem cells, despite the fact that such cells hold out much promise for minimizing the negative effects of, if not curing, many previously incurable conditions and diseases, for example, coronary disease, diabetes, Parkinson’s disease, Alzheimer’s disease, spinal cord injuries, and many others.
As genetic research progresses to the point at which gene therapy is able to make use not only of somatic-cell therapy (that is, the modification of genes in the cells of any of a number of human body parts for therapeutic reasons) but also of germ-line therapy (that is, the alteration of egg cells, sperm cells, and zygotes for therapeutic reasons), the health care applications are expected to increase exponentially in number. However, the most important moral concern raised by the prospect of eventually engaging in germ-line therapy is that this type of gene modification, by its very nature, will affect an unknown number of people in the future as they inherit these genetic changes. By contrast, somatic-cell therapy can affect only the person whose genes are so modified.
Health care resources have never been unlimited in any society, regardless of the type of health care system employed. At least for the foreseeable future, this fact is unlikely to change, and it is this fact that necessitates some form of what is normally referred to as the rationing of health care resources. Health care resources include not only the availability of in-patient hospital (and other medical facility) beds, emergency room beds, surgical units, specialized surgical units, specialized treatment centers, diagnostic technology, and more, but also personnel resources, that is, health care professionals of every description.
Whenever the demand for health care resources exceeds their availability, the financial costs of such resources rise; to the extent that, historically, demand has consistently outpaced availability, the financial costs of health care have also consistently risen. Combined with many other causal factors, this has made the rise in the financial costs of health care consistently exponential, in many countries, since the latter part of the 20th century. By the nature of the case, this occurs to a greater extent, and at a more rapid pace, in any country whose politicians and public policy makers decide to employ a health care system that does not provide universal coverage.
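An “exponential” rise in costs is just compound growth. The following toy calculation uses an invented 8% annual growth rate and an arbitrary baseline, purely to illustrate how quickly a modest-sounding rate compounds:

```python
# Project a cost that grows at a fixed annual rate for a number of years.
def project_cost(base: float, annual_rate: float, years: int) -> float:
    return base * (1 + annual_rate) ** years

# An invented $1,000 annual cost growing at 8% per year for 20 years
# more than quadruples.
cost = project_cost(1000.0, 0.08, 20)
print(round(cost, 2))  # 4660.96
```

The same arithmetic explains why small differences in annual growth rates between health care systems produce large differences in cost over decades.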
The procurement of human organs for transplantation in order to save the lives of those who otherwise would not survive represents what many consider to be a modern medical miracle, one that became possible only in the latter half of the 20th century. However, like all such advances in medical knowledge and in medical technologies, human organ transplantation raises some fundamental moral issues. Throughout the brief history of human organ transplantation, and for the foreseeable future, far more people have needed organ transplants in order to survive than there have been human organs available to transplant. Consequently, the available organs, at any point in time, must be rationed, which raises the question of which factors are relevant in deciding who receives transplanted organs and who does not.
Human organs that are necessary for human life, for example, hearts, lungs, or livers, must, in order to be transplanted into the bodies of people who will not survive without them, be harvested from the bodies of people who are only recently deceased. A single kidney or bone marrow, however, is usually harvested from the body of a donor who is alive and, presumably, well. In either case, in most countries, permission is legally, and arguably also morally, required in order for the harvesting to take place. Organ donor organizations exist to enlist as many citizens as possible, in countries in which organ harvesting has been legalized, as organ donors so that, once such donors are deceased, health care professionals are authorized to harvest any of a number of viable organs from the deceased donor’s body. As is the case for any invasive medical procedure, permission is necessary for one to donate one’s kidney or bone marrow as well.
One of the most important moral issues concerning the recipients of human organs is that of the criteria used to select them. It should come as no surprise that one of the major factors determining which prospective organ recipients are given priority on the waiting list is the age of the prospective recipient. With only rare exception, a young adult, as a prospective heart transplant recipient, will rank higher on the heart transplant waiting list than will an elderly adult, if the latter is deemed to be eligible at all. Additional criteria used to determine both eligibility and ranking for organ transplantation include: 1) the extent to which organ transplantation is urgently needed in order to save the prospective recipient’s life; 2) the likelihood that, and the extent to which, the candidate will benefit from the procedure, that is, its probability of success; but also 3) the candidate’s history of deleterious health-related habits (for example, whether the candidate for a lung transplant has ever smoked cigarettes or other tobacco products, or currently does so); 4) the candidate’s ability to pay (either outright or through private or federally funded health insurance) for the procedure; and 5) the candidate’s value to society by virtue of, for example, one’s occupation (say, a cancer biomedical researcher as compared to a high school custodian). If the first two criteria do not seem to raise any moral concerns, each of the latter three almost certainly does.
While each of the first two of these criteria could be reflective of egalitarian principles of justice, according to which each candidate, as a person, is viewed as having equal value, each of the latter three could be seen as beneficial to the best interests of society, that is, as promoting social utility. Egalitarian principles of justice do not necessarily promote what is in the best interests of society, any more than social utility considerations necessarily promote what is in the best interests of the individual. However, the application of either of the first two criteria is far less controversial than is the application of any one of the latter three. Reasonable people might disagree as to whether a candidate for a lung transplant who smoked a pack of cigarettes each day for twenty years is less deserving of such a transplant than another candidate who has never smoked. They might disagree as to whether a person who is otherwise a good candidate for an organ transplant should be rejected solely because this person cannot afford to pay for the procedure and has no access to health insurance. Finally, they might disagree as to whether a candidate for an organ transplant who happens to be a cancer biomedical researcher is any more deserving of such a transplant than is another medically qualified candidate who happens to be a high school custodian.
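Allocation criteria like these are sometimes formalized as weighted scores. The sketch below is purely hypothetical (the weights, field names, and numbers are invented, not drawn from any actual allocation policy) and scores only the two less controversial criteria, urgency and expected benefit:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    urgency: float   # 0..1: how urgently the transplant is needed
    benefit: float   # 0..1: estimated probability the procedure succeeds

# Invented weights, for illustration only.
WEIGHTS = {"urgency": 0.6, "benefit": 0.4}

def score(c: Candidate) -> float:
    return WEIGHTS["urgency"] * c.urgency + WEIGHTS["benefit"] * c.benefit

candidates = [Candidate("A", urgency=0.9, benefit=0.5),
              Candidate("B", urgency=0.4, benefit=0.9)]
ranked = sorted(candidates, key=score, reverse=True)
print([c.name for c in ranked])  # ['A', 'B']: 0.74 vs 0.60
```

Whether criteria such as past habits, ability to pay, or social value belong in such a formula at all is precisely the moral question the preceding paragraphs raise; the sketch only shows that, once weights are chosen, the ranking follows mechanically.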
Adding to the dissatisfaction that some people express concerning the rationing of human organs for transplantation, in America and in other countries, is the deference that is sometimes shown to people of social prominence. Publicly documented in America are cases in which, for example, a prominent former professional sports figure, who had cirrhosis of the liver due to decades of alcohol abuse, was offered a liver transplant despite being, at that time, far down on the waiting list, and in which a governor of an East Coast state was offered and received both a heart and a lung transplant despite being, at the time in question, far down on the waiting list due, at least in part, to his age and his health status. In fact, he died less than a year later.
Another moral issue that is endemic to the human organ transplant industry is the buying and selling of human organs for the purpose of transplantation. In some Central American and South American countries, as well as in some Mideast countries, there has been, for the past several decades, a thriving illegal market in human organs. More recently, this practice has spread to some European countries and even to America, where financially impoverished people find themselves in need of money for their own sustenance. Typically, such individuals are promised the equivalent of thousands of dollars for a kidney or for bone marrow but find themselves at the mercy of the organ dealer for payment after the fact. Worse, all too often, such medical procedures are performed in non-clinical environments, and sometimes by non-clinically trained harvesters.
Raising additional moral concerns is the practice of what is sometimes referred to as the “farming” of human organs, that is, conceiving and bringing to fruition a newborn (or, in some cases, harvesting human organs or tissue at the fetal stage), or maintaining on life support the body of someone who has been determined to be brain dead, in order to be able to harvest an organ or bone marrow for transplantation. In the former case, questions arise concerning the moral propriety of bringing a child into the world for the express purpose of harvesting some of its body parts; depending on which specific organs might be harvested, the death of this newborn might be inevitable. In the latter case, anyone, from an anencephalic newborn to a child or an adult of any age, might, as a result of either a non-traumatic or a traumatic event, be declared to be in a state of unresponsive wakefulness (popularly referred to as a “permanent vegetative state”). Such a patient’s state of consciousness, due to severe damage to the brain, is indicative of, at best, only partial awareness or arousal rather than actual awareness, and the diagnosis requires that the condition have lasted for three to six months, in the case of a non-traumatic cause, or at least twelve months, in the case of a traumatic cause. Such a patient might be maintained on life support for the express purpose of harvesting any of a variety of human organs. Any such case introduces the following moral questions: Is it ever morally allowable to keep the body of an otherwise brain dead person alive for the sole purpose of harvesting some of its organs? Even if the person is brain dead, does such a practice violate any of their moral rights or interests?
Even if the answers to these questions are in the negative, given that this individual might be deemed to have the same physiological, and thereby moral, status as one who has died, does proper respect for the bodies of the dead dictate that this practice is morally improper?
Both the retail sale of human organs and the farming of human organs continue to raise the moral issue of whether, and to what extent, human organs should be treated as commodities, to be bought and sold in the marketplace (legally or not) and grown for the express purpose of harvesting for transplantation. Twenty-first century stem cell research holds out the promise of eventually being able, incrementally and over time, to produce, in theory, any human body part from a single cell of one’s own body. To the extent that these prospects become realities, many of the moral issues that are raised by the procurement and the transplantation of human organs will become moot.
The question of who, in a given society, should be eligible to receive health care is one of the most important ethical issues concerning the provision of health care in the 21st century, because of the stark contrasts that exist in the distribution of health care when comparing America to other nations. America is alone among the thirty or more wealthiest nations on the planet in failing to provide universal health care. Universal health care, by the nature of the case, leaves private health care insurance providers out of its financing equation. By contrast, in America, these private health care insurance providers are the primary drivers of the health care system, determining: who is eligible for health care insurance coverage; which particular health care services they choose to finance, and for whom, including not only diagnostic procedures but also surgical and other invasive medical procedures; the lengths of stays in hospitals or other medical facilities, for both surgical and non-surgical patients; the cost of health insurance premiums as well as the financial deductibles and co-payments to be paid by their customers; the fees for services of physicians, surgeons, and other health care professionals, and the percentage of such fees that they will pay; the particular prescription medications they deem eligible for payment and how much their customers have to pay in co-payments; and many additional factors that affect both the health and the finances of those who maintain such insurance coverage.
In fact, there is a direct relationship, due to the effects of this type of health care system, between the health care and the finances of all members of society, both those with health insurance and those without. Many members of American society with health insurance, by virtue of their own personal financial situations, face the choice, usually on a regular basis, as to whether they can afford to pay the financial deductibles and/or the co-payments for their own health care, because their earned weekly wages, all too often, preclude them from making these payments in addition to paying for rent, food, and other necessities for their families and for themselves. Added to these issues is the fact that not all health insurance plans are the same concerning which services and procedures they cover and which they do not, the practical effect of which is that many families with working parents have no health insurance coverage for many important and significant health care services and procedures, or even prescription medications. Worse, a large percentage of wage earners, and some salaried employees, cannot reasonably afford to pay the costs of health insurance premiums, and so have no health insurance coverage at all. The practical effect is that, in addition to not being able to afford, out of pocket, the health care services or procedures that serve to maintain one’s reasonably good health, these individuals cannot afford to seek medical attention even when they experience symptoms of a dire nature.
All of these facts concerning the health care system in America as compared to the health care systems in virtually every other reasonably wealthy nation in the world raise the following questions of a moral nature. Does each and every citizen of any society have a moral right to health care? If so, does the government of any society have a moral obligation to provide each and every one of its citizens with health care? These questions, by their very nature, raise the issue of the extent to which the ethical principle of justice can be realized in any given society. At the societal level, the ethical principle of justice is applicable, fundamentally, to the ways in which goods and services as well as rights, liberties, opportunities for social and economic advancement, duties, responsibilities, and many other entities (both tangible and intangible) are distributed to citizens. The application of the ethical principle of justice to these questions concerning health care provides a benchmark for the determination of which types of health care systems are more, or less, just than others.
While any of the methods of moral decision-making, as delineated above, could be applied in fruitful ways to such questions, it might be more instructive to apply two public policy perspectives: libertarianism and egalitarianism. Those politicians and public policy makers who are responsible, over many decades, for the health care system in America, have, for the most part, done so based on libertarian principles of justice, while those politicians and public policy makers who are responsible, again, over many decades, for the health care systems in those countries with universal health care coverage, have, by and large, done so based on egalitarian principles of justice.
According to libertarian principles of justice, citizens might or might not have any kind of right to health care, but even if they do, it should not result in placing financial burdens on wealthier citizens to fund, in part or in whole, the health care of their less financially well-off counterparts. Rather, health care, like food, clothing, shelter, and all other goods and services available in society, should be distributed by the dictates of a free-market economic system. Those who are wealthier, and who are able to buy more expensive goods and services of superior quality, will also be able to afford not only health care services and procedures themselves, but also a superior quality of such health care commodities. Those who are less wealthy, and who are able to buy less expensive goods and services of comparatively inferior quality, will be able to afford health care services and procedures, but only of a comparatively inferior quality. Finally, those who are financially impoverished will not be able to afford health care services or procedures at all. Under the public policy dictates of this type of health care system, the ethical principle of the autonomy of citizens to make their own choices, as citizens in society, takes precedence over the ethical principle of beneficence.
According to egalitarian principles of justice, each citizen in society has an equal right to health care services and procedures because each citizen in society has equal value as a person. Because the status of one’s health is foundational for one to even be able to enjoy a reasonably good quality of life (and all that that entails), the government is obligated to provide each and every one of its citizens with access to health care services and procedures. Unlike most of the goods and services the distribution of which is dictated by a free-market economic system, health care is essential to the well-being of every citizen. Of course, the politicians and public policy makers, in accordance with this type of health care system, would have to adjudicate the question of whether all health care services and procedures would be available to all of the members of society, in equal measure, or the ways in which, and the degrees to which, such services and procedures would be made available to the members of society. Under the public policy dictates of this type of health care system, the ethical principle of beneficence supersedes, in importance, the ethical principle of the autonomy of its citizens to make their own choices.
In the final analysis, the ways in which, and the degrees to which, particular health care services and procedures are distributed among the citizens of a given society depend on the dictates of the principles of justice not only as they are applied to the society’s economic system but also as they are applied to the society’s governmental system.
The Joint Commission is the comprehensive accrediting agency for health care programs and organizations, of all types, throughout America, and has, for some time, mandated the inclusion of ethics committees as an accreditation requirement. The purpose of any health care organization ethics committee is to develop, to engage in an on-going process of the review of, and to ensure the proper application of the medical ethics policies of the health care organization in question. Such policies would normally include such significant issues in health care ethics as informed consent, confidentiality, euthanasia, assisted suicide, the withholding and withdrawing of medical treatment, the harvesting and transplantation of human organs, and many others depending on the specific type of health care organization. While there is a wide latitude concerning the membership composition of health care ethics committees, typically, the following professions are represented: physicians, nurses, social workers, senior administrators, risk managers, chaplains, and ethicists, in addition to lay people from the local community, among others.
Functions of a health care ethics committee include the following: to become informed about, and to maintain a credible level of awareness of, significant issues in health care ethics, generally, and their relationships to the needs of both the patients and the health care professionals who are associated with the health care facility in question; to educate, on an on-going basis, the health care professionals of the facility in question, in addition to the members of the ethics committee, on significant issues in health care ethics as well as the ethics committee’s policies concerning such issues; and to be responsible for the particular cases of the facility’s patients that warrant either a review by, or a consultation with, the ethics committee. The health care ethics committee is, usually, the final authority on ethics policy concerning medical issues, subject to approval by the facility’s Board of Trustees.
Health care ethics is a multi-faceted field of fundamental importance for the citizens of any society, because the provision of health care is essential to the well-being of each person, and the ways in which people are treated, concerning their health care, bear importantly on their health status. The many moral issues that arise out of the provision of health care—from those that are inherent in the relationship between the health care professional and the patient to those associated with abortion and euthanasia, from those to be encountered in biomedical or behavioral human subject research to those that have come about as a result of reproductive and genetic knowledge and technologies, and from those concerning the harvesting and transplantation of human organs to those that stem from public policy decisions as determinative of the allocation of health care services and procedures—are perennial issues. To attempt to clarify these moral issues by means of the philosophical analysis of the language and the concepts that underlie them is, at least in theory, to provide a framework in accordance with which to make better quality decisions concerning them.
- Anderson, E. S. (1990) “Is Women’s Labor a Commodity?” in Philosophy and Public Affairs, 19: Winter, pp. 71-92.
- Aristotle (1985) Nicomachean Ethics, trans. by Terence Irwin, Hackett Publishing Co.
- Beauchamp, T. L. and Childress, J. F. (2009) Principles of Biomedical Ethics, 6th ed., New York: Oxford University Press.
- Beauchamp, T. L., Walters, L., Kahn, J. P., and Mastroianni, A. C. (2014) Contemporary Issues in Bioethics, 8th ed., Boston: Cengage.
- Boylan, M. (2004) A Just Society, Lanham, Maryland: Rowman and Littlefield.
- Boylan, M. (2012) “Health as Self-Fulfillment,” in the Philosophy and Medicine Newsletter, 12:4. (Reprinted in Boylan, M. (2014) Medical Ethics, 2nd ed., Malden, Massachusetts: Wiley-Blackwell, pp. 44-57.)
- Boylan, M. (2014) Medical Ethics, 2nd ed., Malden, Massachusetts: Wiley-Blackwell.
- Brandt, A. M. (1978) “Racism and Research: The Case of the Tuskegee Syphilis Study,” in the Hastings Center Report, 8:6, pp. 21-29.
- Brennan, T. (2007) “Markets in Health Care: The Case of Renal Transplantation,” in the Journal of Law, Medicine & Ethics, 35:2, pp. 249-255.
- Brock, D. W. (1998) “Cloning Human Beings: An Assessment of the Ethical Issues Pro and Con,” in Clones and Clones: Facts and Fantasies About Human Cloning, edited by Nussbaum, M. C. and Sunstein, C. R., W. W. Norton & Co.
- Callahan, D. (1989) “Killing and Allowing to Die,” in the Hastings Center Report, 19 (Special Supplement), pp. 5-6.
- Chadwick, R. F. (1989) “The Market for Bodily Parts: Kant and Duties to Oneself,” in the Journal of Applied Philosophy, 6:2, pp. 129-140.
- Cohen, C. B. (1996) “‘Give Me Children or I Shall Die!’ New Reproductive Technologies and Harm to Children,” in the Hastings Center Report, 26:2, pp. 19-27.
- Gert, B. and Clouser, K. D. (1990) “A Critique of Principlism,” in The Journal of Medicine and Philosophy, 15:2, pp. 219-236.
- Glannon, W. (2001) “Genetic Enhancement,” in Genes and Future People: Philosophical Issues in Human Genetics, Glannon, W., Westview Press, pp. 94-101.
- Harris, J. (1993) “Is Gene Therapy a Form of Eugenics?” in Bioethics, 7:2/3, pp. 178-187.
- Held, V. (2006) The Ethics of Care, New York: Oxford University Press.
- Holmes, H. B. and Purdy, L. M. (1992) Feminist Perspectives in Medical Ethics, Bloomington: Indiana University Press.
- Jonsen, A. R. and Toulmin, S. (1988) The Abuse of Casuistry: A History of Moral Reasoning, Berkeley: University of California Press.
- Kant, I. (1989) Foundations of the Metaphysics of Morals, edited and translated by Lewis White Beck, Library of Liberal Arts: Pearson.
- Kevorkian, J. (1991) Prescription: Medicide, the Goodness of Planned Death, Prometheus Books.
- Kuhse, H. (1997) Caring: Nurses, Women and Ethics, Oxford: Blackwell.
- Kuhse, H., Schuklenk, U., and Singer, P. (2015) Bioethics: An Anthology, 3rd ed., Malden, Massachusetts: Wiley Blackwell.
- MacKay, D. and Danis, M. (2016) “Federalism and Responsibility for Health Care,” in Public Affairs Quarterly, 30:1, pp. 1-29.
- Marquis, D. (1989) “Why Abortion is Immoral,” in the Journal of Philosophy, LXXXVI:4, pp. 183-202.
- Mill, J. S. (1861) Utilitarianism, in Collected Works of John Stuart Mill. Edited by J. M. Robson, Vol. X, Toronto: University of Toronto Press, 1969.
- National Academy of Sciences (2002) Committee on Science, Engineering, and Public Policy, Scientific and Medical Aspects of Human Reproductive Cloning, Washington, D.C.: National Academy Press.
- Nesbitt, W. (1995) “Is Killing No Worse than Letting Die?” in the Journal of Applied Philosophy, 12:1, pp. 101-105.
- Noonan, J. T. (1968) “Deciding Who Is Human,” in the American Journal of Jurisprudence, 13:1, pp. 134-140.
- Noonan, J. T. (1970) “An Almost Absolute Value in History,” in The Morality of Abortion: Legal and Historical Perspectives, John T. Noonan, Cambridge: Harvard University Press, pp. 51-59.
- Purdy, L. M. (1989) “Surrogate Mothering: Exploitation or Empowerment?” in Bioethics, 3:1, pp. 18-34.
- Rachels, J. (1975) “Active and Passive Euthanasia,” in the New England Journal of Medicine 292, pp. 78-80.
- Ram-Tiktin, E. (2012) “The Right to Health Care as a Right to Basic Human Functional Capabilities,” in Ethical Theory and Moral Practice, 15:3, pp. 337-351.
- Robertson, J. (1994) “The Presumptive Primacy of Procreative Liberty,” in Children of Choice: Freedom and the New Reproductive Technologies, Princeton: Princeton University Press, pp. 22-42.
- Robertson, J. A. (1983) “Procreative Liberty and the Control of Conception, Pregnancy, and Childbirth,” in the University of Virginia Law Review, 69, pp. 405-464.
- Savulescu, J. (1995) “Rational Non-Interventional Paternalism: Why Doctors Ought to Make Judgments of What Is Best for Their Patients,” in the Journal of Medical Ethics, 21, pp. 327-331. (Reprinted in Medical Ethics, 2nd ed. (2014), ed. by Michael Boylan, Malden, Massachusetts: Wiley-Blackwell, pp. 83-90.)
- Savulescu, J. (2001) “Procreative Beneficence: Why We Should Select the Best Children,” in Bioethics, 15:5/6, pp. 413-426.
- Savulescu, J. and Momeyer, R. W. (1997) “Should Informed Consent Be Based on Rational Beliefs?” in the Journal of Medical Ethics, 23, pp. 282-288. (Reprinted in Medical Ethics, 2nd ed. (2014), ed. by Michael Boylan, Malden, Massachusetts: Wiley-Blackwell, pp. 104-115.)
- Shaw, D. (2009) “Euthanasia and Eudaimonia,” in the Journal of Medical Ethics, 35:9, pp. 530-533.
- Sherwin, S. (1992) No Longer Patient: Feminist Ethics and Health Care, Philadelphia: Temple University Press.
- Sherwin, S. (1994) “Women in Clinical Studies: A Feminist View,” in the Cambridge Quarterly of Healthcare Ethics, 3:4, pp. 533-539.
- Silvers, A. (2012) “Too Old for the Good of Health?” in the Philosophy and Medicine Newsletter, 12:4. (Reprinted in Boylan, M. (2014) Medical Ethics, 2nd ed., Malden, Massachusetts: Wiley-Blackwell, pp. 30-43.)
- Skloot, R. (2010) The Immortal Life of Henrietta Lacks, New York: Crown/Random House.
- Steinbock, B., London, A. J., and Arras, J. (2013) Ethical Issues in Modern Medicine: Contemporary Readings in Bioethics, 8th ed., Columbus, Ohio: McGraw-Hill.
- Stoller, S. (2008) “Why We Are Not Morally Responsible to Select the Best Children: A Response to Savulescu,” in Bioethics, 22:7, pp. 364-369.
- Thomson, J. J. (1971) “A Defense of Abortion,” in Philosophy and Public Affairs, 1:1, pp. 47-66.
- Tong, R. (1997) Feminist Approaches to Bioethics: Theoretical Reflections and Practical Applications, Boulder: Westview Press.
- Tong, R. (2002) “Love’s Labor in the Health Care System: Working Toward Gender Equity,” Hypatia, 17:3, pp. 200-213.
- Tong, R. (2012) “Ethics, Infertility, and Public Health: Balancing Public Good and Private Choice,” in the Newsletter on Philosophy and Medicine, 11:2, pp. 12-17. (Reprinted in Boylan, M. (2014) Medical Ethics, 2nd ed., Malden, Massachusetts: Wiley-Blackwell, pp. 13-30.)
- Wachbroit, R. (1996) “Disowning Knowledge: Issues in Genetic Testing,” in Report from the Institute for Philosophy and Public Policy, 16:3/4, pp. 14-18.
- Warren, M. A. (1973) “On the Moral and Legal Status of Abortion,” in The Monist, 57:1, pp. 43-61.
- Warren, M. A. (1988) “IVF and Women’s Interests: An Analysis of Feminist Concerns,” in Bioethics, 2:1, pp. 37-57.
- Warren, V. L. (1992) “Feminist Directions in Medical Ethics,” in the HEC Forum, 4:1, pp. 73-87.
- World Health Organization (1946) Preamble to the Constitution of the World Health Organization, adopted by the International Health Conference, New York, June 19-July 22, 1946, and signed on July 22, 1946.
Stephen C. Taylor
Delaware State University
U. S. A.