Lecture Transcript: The Ethics of Science
Hello and welcome to this session on the ethics of science.
I would like to explore this topic through two main aspects. First, how ethics relates to the process of producing science. Then, toward the end of the presentation, I will discuss how science affects society, and how this bears not only on the ethics of doing science but also on how science might shape the ethics and values of wider society.
Defining Ethics
Let us begin by defining what we are discussing. Establishing clear definitions helps ensure clarity of thought and communication.
Ethics is the branch of philosophy that examines the values of human conduct in terms of rightness or wrongness—the rightness or wrongness of particular actions, the goodness or badness of particular motives for actions, and whether the products of particular actions can be considered good or bad.
Philosophy offers several theories to help us answer questions about whether a particular action is right or wrong, or whether the result of a particular action is good or bad. These theories often provide different answers to questions of rightness and wrongness. Therefore, determining how the ethics of science works is not necessarily straightforward. There is no simple theory or calculus we can apply, because human values are contested. There is no society-wide consensus about our values, so different philosophical theories operate on different values or assess values differently.
I will discuss how three major ethical theories in philosophy influence our understanding of the ethics of science. I will focus primarily on the process of producing science, though toward the end I will address how science affects society and what responsibilities scientists might have in relation to that effect.
Goals of Science and Values
The values we will discuss today relate to the goals of science. We must consider what these goals are.
My suggestion is that science aims at the production of knowledge about the world and understanding of the world, seeking these things in order to benefit society. Of course, there is an element of seeking understanding simply for its own sake—understanding and knowledge are goods in themselves. However, most scientists believe that their work provides value to wider society and benefits society in some way.
A Case Study: The Ethical Dilemma
Let us begin with an imaginary case study. Imagine that you are a young scientist poised to publish a significant paper that will establish your reputation. This paper is groundbreaking, proposing a new hypothesis that will revolutionize your field. If the theory is demonstrated and accepted by the scientific community, you are almost certain to receive the Nobel Prize. It is serious—you are on the very cusp of a major breakthrough.
The paper is ready to go, coming out in print or online tomorrow. However, just before publication, you discover a potential error in your data. It is quite a subtle error, but if confirmed, it completely undermines all your work. Your breakthrough will disappear into the ether. It will be nonexistent. Your Nobel Prize will not be awarded, and you will be back to mundane work in the laboratory, trying to understand the data, reproduce your experiments, and find a new way of understanding your work.
You quickly realize that nobody else is going to spot the error; even if it is real, no one else will notice it. You could publish your brilliant new hypothesis and establish your reputation.
The question I would like you to consider is this: Knowing the risk of a potential error in your work, and knowing that stopping publication to check it out will cost you valuable months—during which someone else might publish ahead of you—will you stop the publication of your article? Will you try to stop it, admit to the potential error, and ask for more time? Or will you proceed with publication anyway and hope for the best?
I hope you would all say that you would stop the publication. You would explain that there is a potential issue with the data and that you want to verify it before proceeding. One would think that this is what most, if not all, scientists would do. After all, we expect scientists to be truthful and honest about their data and results.
However, surprisingly, not all scientists are that honest. History is replete with examples of scientists who have failed to do the right thing—who have covered up errors or deliberately and fraudulently published results they knew to be untrue.
Two Contrasting Examples
The examples on screen show two scientists who acted very differently toward their data.
Daniel Bolnick is an ecologist who published a paper in the journal Ecological Studies. Shortly after publication, while reviewing his data, he realized he had made a catastrophic error in his interpretation. He had used a statistical tool to analyze his data, which produced a particular trend that enabled him to interpret the data in a significant way—even though the raw data showed no clear trend or pattern.
Bolnick realized he had used the statistical tool incorrectly, which caused a misinterpretation. The tool gave him a trend in his data that was not actually there. As soon as he discovered this error, Bolnick wrote to the journal explaining what he had found and requesting that the article be retracted. This was a very courageous and honest action. You can read about his reactions and thoughts upon discovering his error by following the web link on the slide.
Jan Hendrik Schön presents a very different case. He was a physicist working at Bell Labs, rapidly earning a reputation in the field of semiconductors by using molecules that do not normally conduct electricity in innovative ways. His data and concepts were revolutionary, and his reputation grew to such an extent that others in the scientific community expected him to receive a Nobel Prize.
However, concerns emerged when nobody else could replicate his results or obtain the same experimental data. It turned out that all of Schön's data had been produced fraudulently: he simply fabricated it. He conducted experiments but manipulated the data, recycling old experimental results to make his new approach appear significant. The work was entirely fraudulent, with no truth in the concepts he was advancing.
When exposed, Schön lost his job. His university investigated and revoked his doctorate. It was the end of a potentially brilliant career that came crashing down around him.
Not all scientists are honest. Science is meant to be a cooperative enterprise—scientific papers often have dozens of authors, suggesting collaboration. However, it is also quite competitive, and the temptation to publish data first can be overwhelming. Sometimes, when researchers honestly believe they are demonstrating the truth, the temptation to publish even without completing all proper verification processes proves too great.
These are two cautionary tales in science. Perhaps they represent extreme cases, and most of us will not find ourselves in such situations. Let us consider how we might navigate the middle ground and how ethical theories can help us do so.
Three Ethical Theories
Philosophy offers numerous moral theories that explain and ground ethical values. I will concentrate on three theories in this presentation: Kantian ethics, utilitarian ethics, and virtue ethics. Each is associated with a particular philosopher in the history of philosophy.
Kantian Ethics
Kantian ethics is associated with Immanuel Kant. According to Kant, we have certain duties as human beings, and we must conform our actions to the moral law, which is the product of human reason. Our supreme duty is to conform our actions to this moral law.
This theory is more generally known as deontological ethics, from the Greek word “deon,” meaning duty. The moral law—the code of ethics—is a product of the human mind, a product of reason. It consists of principles that are argued for and that no reasonable person could reject. Because they are products of reason, Kant believed they are universally applicable.
Reason provides us with a set of rules to follow—a moral code, the moral law—and doing the right thing is a matter of following the rules correctly. For example, there is a rule that says we must always tell the truth or never lie, and we must always follow such rules.
Utilitarian Ethics
Utilitarian ethics, mostly associated with John Stuart Mill, an English philosopher of the 19th century, approaches things very differently. According to Mill, there are no fixed rules for ethics. Instead, ethics is a matter of careful calculation of right and wrong, good and bad, through cost-benefit analysis.
We must calculate the benefits and costs of our proposed actions. Any action with a positive balance is good, and any action with a negative balance is bad. Mill considered this in terms of overall happiness or well-being. If an action results in an increase in happiness or well-being in the world, then it is good. If it results in a decrease in happiness or well-being, the action is bad and should not be done.
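To make the flavour of this cost-benefit calculation concrete, here is a minimal illustrative sketch in Python. The action, the affected parties, and the numbers are entirely hypothetical; they are not part of Mill's theory, only a toy model of the kind of balance it asks us to compute.

```python
# A purely illustrative sketch of the utilitarian "calculus": sum the expected
# changes in well-being an action causes for everyone affected, and judge the
# action by the sign of the balance. The action and numbers are hypothetical.

def net_wellbeing(effects):
    """Sum positive and negative changes in well-being across all affected parties."""
    return sum(effects.values())

proposed_action = {
    "patients helped by the new treatment": +120.0,
    "trial participants exposed to side effects": -15.0,
    "public money diverted from other research": -30.0,
}

balance = net_wellbeing(proposed_action)
verdict = "good (permissible)" if balance > 0 else "bad (impermissible)"
print(f"Net change in well-being: {balance:+.1f} -> {verdict}")
```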
This sounds simple and straightforward—almost like a scientific approach to ethics—but it leads to some rather unpalatable conclusions. For example, the theory permits (and would even encourage) murdering one person or a few people if doing so produces a greater good in the world. Similarly, it holds that enslaving a small group of people and depriving them of their human rights would be morally good if doing so greatly benefits the rest of humanity.
Utilitarian ethics thus leads to some counterintuitive and unwanted conclusions. While there are subtle variations of the theory that attempt to counter these problems, the issues remain. Like Kantian ethics, utilitarian ethics does not fully account for how human beings actually behave. It is a model, a picture, but not a complete understanding.
Virtue Ethics
The third theory I would like us to consider is virtue ethics, associated with the ancient Greek philosopher Aristotle, and it takes a very different approach from the other two. The focus is not on the actions themselves, their consequences, or following rules. In fact, virtue ethics has virtually no rules at all.
Instead, virtue ethics focuses on the person—the agent carrying out the actions—rather than the actions themselves. Ethics is about developing the right sort of character that makes you a good human being. For Aristotle, there is such a thing as human nature—something that human beings ought to be—and we ought to display certain virtues.
Virtues are qualities we must develop. A virtuous person possesses the good qualities that a human being should have, enabling them to exercise sound judgment in any situation. There are no fixed rules as such: moral decisions are context-specific, but a virtuous person is able to integrate all the different sources of information and values in a particular context in a sound and sensible way, producing a virtuous, right, and good judgment.
These are the three ethical theories: ethics based on rules and the duty to conform (Kantian ethics), ethics based on cost-benefit analysis (utilitarian ethics), and virtue ethics, which focuses on developing good character.
Applying Ethical Theories to Science
How do these theories apply in science? Consider the question: Does the end justify the means?
Suppose you have an idea for an experiment that would produce beneficial results, but carrying it out will cause significant harm: to the person conducting the experiments, to experimental subjects (animals or humans), or through serious risks in the experiment itself. There might be a real danger that the experiment could go badly wrong, perhaps even blow up the university campus. Or consider the Large Hadron Collider when it was first switched on: some people expressed concern that it might generate a black hole and consume the planet within minutes. Despite such great risks, the potential benefit is enormous. Should you take the risk of doing harm?
Deontological ethics provides a very clear answer: No. There is a rule in deontological ethics, based on reason, that states it is never right to arrive at a good end by improper means. If you are using improper means—causing harm to arrive at an answer—then you ought not to proceed along that line of thought. Deontological ethics gives us a clear answer: You must not do this thing.
Utilitarian ethics, on the other hand, says: Yes, you could do this thing. You could perform an experiment that causes harm to experimental subjects if the benefits outweigh the costs. That is all that matters. Utilitarian ethics permits causing harm in the process of generating results that will benefit people. We might think this is extreme; at the very least, we would expect scientists to seek to minimize any harm their research might inflict.
These are two contrasting answers: Yes, you can cause harm (but should minimize it), or No, you must never cause harm in the process of generating scientific results.
The fact is that scientific progress has been made throughout history in ways that today would be considered highly unethical. Some obvious examples include the medical experiments conducted by Nazis in death camps and Nazi rocket research. This research benefited the Western world—the rocket research team from Germany, led by Wernher von Braun, was brought to America and given exemption from prosecution after World War II to start the American space and rocket programs. The Western world benefited from that work. Historically, data generated through experiments in Nazi death camps has also been used.
A less morally extreme example is Edward Jenner, known as the founder of vaccination. He discovered that inoculating someone with the cowpox virus, a live virus that causes a relatively mild disease, would protect them from smallpox, which is often fatal in humans. Jenner tested this hypothesis on a young boy, about eight years old, who likely had no say in the matter. Jenner deliberately infected him with cowpox and then, after the cowpox had run its course, exposed him to smallpox. The boy did not catch smallpox. An experiment like that would be highly unethical today, but it provided proof that vaccination works. To a very significant extent, we all still benefit from that unethical experiment today.
Perhaps we could justify this by saying that, at the very least, it is a way of bringing good out of something that ought not to have happened, a way of redeeming those events. The research ought not to have been done, but the fact is that it was, and it would arguably add to its wrongness if we did not use that improperly gained knowledge to prevent further suffering. I do not think this is a fully sound moral argument, but one could argue that using such knowledge for benefit does, in a way, redeem those events.
You are getting a picture of ethics not as an exact or precise science, but as a discipline based on reason and argument. Decisions are made on the basis of arguments that we find compelling.
The third theory, virtue ethics, is more subtle. It does not provide a general answer to how we should proceed with potentially harmful experiments. Instead, it suggests that a scientist who embodies a set of values and virtues that make them not just a good human being but a good scientist will be able to make the correct decision at the time in that particular context.
Six Values for Scientists
Let me suggest six values that a scientist ought to embody, live out, and allow to shape their professional life: honesty, objectivity, tolerance, doubt of certitude, unselfish engagement, and accountability.
Honesty: First, we need to be truthful about the data we generate and its quality. Anyone who conducts experiments knows that some data will fit our hypothesis, while other data will not. We call this outlying data—data that lies outside the bounds of fit to the hypothesis. It is tempting to think, “Oh well, that’s just experimental error” and conceal it or not report it. However, we need to be honest about our rogue results and outliers because they might tell us something important. We might be so close to our hypothesis that we explain away the data, but someone less invested in our hypothesis might spot significance in those outliers.
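To illustrate why quietly discarding rogue results is so dangerous, here is a small hypothetical sketch in Python. It uses NumPy and invented data, and does not reproduce any real study: starting from measurements that contain no trend at all, silently dropping the points that fail to fit the hoped-for hypothesis manufactures a trend that an honest fit to everything does not show.

```python
# Hypothetical illustration: data with no real trend, plus a "convenient" filter
# that drops points far from the trend we are hoping to find. Assumes NumPy.
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0.0, 10.0, 50)
y = rng.normal(loc=5.0, scale=1.0, size=50)         # pure noise: no real trend in y

hypothesised = 3.0 + 0.4 * x                         # the trend we are hoping to find
keep = np.abs(y - hypothesised) < 1.0                # quietly drop the "misbehaving" points

slope_all, _ = np.polyfit(x, y, 1)                   # honest fit to every measurement
slope_cherry, _ = np.polyfit(x[keep], y[keep], 1)    # fit after the quiet cull

print(f"Slope using every measurement:       {slope_all:+.3f}")     # typically close to zero
print(f"Slope after discarding the outliers: {slope_cherry:+.3f}")  # a clearly positive "trend"
```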
Honesty today extends beyond what we publish. We need to give other scientists access to the raw data as well as our published data. Others need access to everything—to see everything we have done and all the results we have achieved.
Objectivity: Secondly, we need to be objective. Objectivity is extremely difficult to achieve, but we must be alert to any biases we have, especially confirmation bias, where we read data in a way that reinforces our own ideas. We have a hypothesis, and some data fits it while other data does not. It is tempting to read that outlying data as experimental error and not give it the full attention it deserves. Being objective means being alert to the ways our ideas might be coloring or blinding the way we look at the data.
Tolerance: Thirdly, we need to develop the virtue of tolerance. An individual scientist is part of a community, and other scientists have valuable knowledge and experience to contribute to the enterprise of developing scientific knowledge—even if they have different views from us, even if they are working with a totally different hypothesis in our field that we think has virtually no merit, even if they are working with different interpretations of the data or different ground theories. They all have something valuable to contribute, and we need to be tolerant of these and other different views.
Tolerance is perhaps not quite the right word because it suggests merely putting up with something you do not like. I think we need to go further than that. Tolerance is really about being open to the insights of others with totally different perspectives from us. We do not have to embrace those perspectives or think they are right, but we must be open to the insights they might bring and allow those insights to influence and shape our own perceptions and understandings.
Doubt of Certitude: The fourth virtue is doubt of certitude, being alert to the possibility of error. This, I think, was Schön's problem. He was so convinced he was right that he ignored anything that did not fit his hypothesis, did not bother to do proper background checks, and simply recycled old data. Some errors are easier to spot than others, admittedly, but admitting error is always the right thing to do. This seems to me especially true in science, because admitting an error always favors progress in understanding; it always moves science forward.
Since the goal of science is to produce true knowledge about the world, admitting a mistake advances science. It is always the scientifically right thing to do.
Unselfish Engagement: The fifth value is unselfish engagement. Schön’s example is relevant here. There is a tension between personal ambition and the goals of science. We like to think of scientists as dispassionate people dedicated to the pursuit of truth wherever it comes from. However, we are human beings like everyone else, with our own ambitions, desires, and goals. All of us want to be the person who makes the great breakthrough or discovery. But we cannot always be that person. Often it is as much a matter of luck as skill. There is a temptation to publish sooner rather than later—to stake a claim before anyone else, before the data is fully verified. This poses a risk to unselfish engagement. The virtue of unselfish engagement gives you the strength of character to overcome that tension and increasingly put the goals of science first, above personal ambition.
Accountability: Finally, there is accountability. None of us are isolated individuals; none of us operate as lone agents. As scientists, we are part of a scientific community to which we are accountable. Anything we want to publish will be tested and scrutinized by other competent members of the community. We must be accountable to one another. Beyond peer review and beyond accountability to our methods and standards within the scientific community, science as a whole needs to be accountable to wider society. It needs to be able to explain and justify itself to wider society, and to answer questions that non-scientists have about scientific results. Usually, these questions concern the effects that some new discovery will have. Often, they are about the quality of the science and the new understandings that scientific discoveries bring, which will change the way we think about ourselves and our world. Scientists need to be accountable both to each other and to the wider human community.
These are the six values of a scientist that I would like to commend to you.
Science and Society
Let me now return to the second aspect of science and ethics that I mentioned: the relationship between science and society.
There is an old ethical saying: Just because you can do something does not mean that you should do it. Just because we can perform a particular experiment, or because we could apply modern genetic engineering techniques to human babies or to the human germline and produce genetically engineered babies, does not mean we should do so.
Scientists are part of society, not separate from it. We must be accountable for the use of public goods and the way we use resources. We must be accountable for the values we espouse when doing science. Conducting an experiment that produces a human being in a particular way or changes the genetic makeup of a human being is something that needs to be discussed more widely than just within the scientific community. The scientific community needs to have a view on what it is doing, but the wider ethical judgment is not for science alone—it belongs to wider society.
We need to be accountable for how we use public goods and public money. We need to be accountable for the effects of our work on others. We need to explain it to others. We have a responsibility as scientists to consider the effects of our work on society. This seems to be part of the communication of results and engagement with the wider community.
Sometimes scientists need to challenge authority. We might need to challenge senior figures in science or politicians. Climate scientists, for example, need to constantly challenge politicians and those in political leadership about how policies are affecting the health of the planet. We need to be able to challenge authority appropriately. More generally, we need to explain the consequences or potential consequences of research. We need to allow informed debate about the consequences of our research. We need to inform the public and decision-makers so that they have a clear understanding of what the science is, enabling society to debate the consequences of science from a better-informed perspective rather than from a position of ignorance and fear.
My suggestion is that those six values—those six virtues of a scientist—apply not just to the process of producing science but also to the way science engages with society.
Resources
That concludes my brief consideration of the ethics of science. Here are some links and resources: The World Economic Forum Code of Ethics for researchers, produced by a body of young scientists all under 40, actually has seven values—they add “Be a mentor” to the six that I have outlined in this presentation. You can read more about Daniel Bolnick and Jan Schön via the web links provided. There are resources on ethical conduct from the Royal Society website. New Scientist has many interesting articles on what it considers to be the big scientific and ethical issues of the day. The last two books are available in the library: the first is an introduction to moral philosophy, and the second—by Fleddermann—is an applied ethics book on Ethics in Science and Engineering.