This essay was co-authored with Steve Mills.
We worry about the ethics of Machine Intelligence (MI) and we fear our community is completely unprepared for the power we now wield. Let us tell you why.
To be clear, we’re big believers in the far-reaching good MI can do. Every week there are new advances that will dramatically improve the world. In the past month we have seen research that could improve the way we control prosthetic devices, detect pneumonia, understand long-term patient trajectories, and monitor ocean health. That’s in the last 30 days. By the time you read this, there will be even more examples. We really do believe MI will transform the world around us for the better, which is why we are actively involved in researching and deploying new MI capabilities and products.
There is, however, a darker side. MI also has the potential to be used for evil. One illustrative example is a recent study by Stanford University researchers who developed an algorithm to predict sexual orientation from facial images. When you consider recent news of the detention and torture of more than 100 gay men in the Russian republic of Chechnya, you quickly see the cause for concern. This software, paired with a few cameras positioned on busy street corners, would allow the targeting of gay people at industrial scale; hundreds quickly become thousands. That potential isn’t far-fetched: China is already using CCTV and facial recognition software to catch jaywalkers. The researchers pointed out that their findings “expose[d] a threat to the privacy and safety of gay men and women.” That warning, however, does little to prevent outside groups from implementing the technology for mass targeting and persecution.
Many technologies have the potential to be applied for nefarious purposes. This is not new. What is new about MI is the scale and magnitude of impact it can achieve. That reach is what will allow it to do so much good, but also so much bad. It is like no other technology that has come before, with the notable exception of atomic weapons, a comparison others have already drawn. We hesitate to make that comparison ourselves for fear of perpetuating a sensationalistic narrative that distracts from the conversation about ethics. That said, it’s the closest parallel we can think of in terms of scale (the potential to impact tens of millions of people) and magnitude (the potential to do physical harm).
None of this is why we worry so much about the ethics of MI. We worry because MI is unique in ways that leave our community completely unprepared to have this discussion.
Ethics is not [yet] a core commitment in the MI field. Compare this with medicine, where a commitment to ethics has existed for centuries in the form of the Hippocratic Oath. Members of the physics community now pledge their intent to do no harm with their science. In other fields, ethics is part of the very ethos. Not so with MI. The field is so young compared to other disciplines that we haven’t had time to mature and learn lessons from the past. We must look to these other fields and their hard-earned lessons to guide our own behavior.
Computer scientists and mathematicians have never before wielded this kind of power. The atomic bomb is one exception; cyber weapons may be another. Both of these, however, represent intentional applications of technology. While the public was unaware of the Manhattan Project, the scientists involved knew the goal and made an informed decision to take part. The Stanford study described earlier has clear nefarious applications; many other research efforts in MI may not. Researchers run the risk of unwittingly conducting studies with applications they never envisioned and do not condone. Furthermore, atomic weapons research could only be acted on by the small number of nation-states with access to the necessary materials and expertise. Contrast that with MI, where a reasonably talented coder who has taken a few open machine learning courses can easily implement, and effectively ‘weaponize’, published techniques. Within our field, we have never had to worry about this degree of power to do harm. We must reset our thinking and approach our work with a new degree of rigor, humility, and caution.
Ethical oversight bodies from other scientific fields seem ill-prepared for MI. Looking to existing ethical oversight bodies is a logical approach. We ourselves have suggested that MI is a “grand experiment on all of humanity” and should follow principles borrowed from human subject research. The fact that Stanford’s Institutional Review Board (IRB), a respected body within the research community, reviewed and approved research with questionable applications should give us all pause. Researchers have long raised questions about the broken IRB system. An IRB system designed to protect the interests of study participants may be unsuited to situations in which potential harm accrues not to the subjects but to society at large. Standards that have served other scientific fields for decades or even centuries may not be equipped for MI’s unique data and technology issues. These challenges are compounded by the general lack of MI expertise, or sometimes even technology expertise, among the members of these boards. We should continue to work with existing oversight bodies, but we must also take an active role in educating them and evolving their thinking about MI.
MI ethical concerns are often not obvious. This differs dramatically from other scientific fields where ethical dilemmas are self-evident. That’s not to say they are easy to navigate. A recent story about an unconscious emergency room patient with a “Do Not Resuscitate” tattoo is a perfect example. Medical staff had to decide whether they should administer life-saving treatment despite the presence of the tattoo. They were faced with a very complex, but very obvious, ethical dilemma. The same is rarely true in MI where unintended consequences may not be immediately apparent and issues like bias can be hidden in complex algorithms. We have a responsibility to ourselves and our peers to be on the lookout for ethical issues and raise concerns as soon as they emerge.
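To make that concrete, consider how bias can hide in a model that never even sees a protected attribute. Below is a minimal, hypothetical sketch in Python; the data, features, and thresholds are all invented for illustration. A lending model trained only on seemingly neutral features still produces sharply different approval rates across a group label it was never given, because that label leaks in through correlated features.

```python
# Hypothetical illustration: bias hiding in a model that never sees
# the protected attribute. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (group 0 vs. group 1) -- deliberately excluded from training.
group = rng.integers(0, 2, size=n)

# "Neutral" features that quietly correlate with group membership:
# a neighborhood indicator and an income shaped by historical inequity.
neighborhood = np.where(rng.random(n) < 0.8, group, 1 - group)
income = rng.normal(55 - 5 * group, 8)

# Historical approval decisions, driven by income (and thus, indirectly, by group).
approved = (income + rng.normal(0, 4, n) > 53).astype(int)

# Train on the "neutral" features only.
X = np.column_stack([neighborhood, income])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

# Approval rates still split along the attribute the model never saw.
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
```

Nothing in the training code mentions the group, yet the disparity is there; someone auditing only the feature list would never see it.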
MI technology is moving faster than our approach to ethics. Other scientific fields have had hundreds of years for their approach to ethics to evolve alongside the science. MI is still nascent, yet we are already moving technology from the ‘lab’ to full deployment. The speed of that transition has already led to notable ethical issues, including potential racial bias in criminal sentencing and discrimination in hiring. The ethics of MI needs to be studied as much as the core technology if we ever hope to catch up and avoid these issues in the future. We need to catalyze an ongoing conversation around ethics, much as we see in other fields like medicine, where there is active research and discussion within the community.
The issue that looms behind all of this, however, is the fact that we can’t ‘put the genie back in the bottle’ once it has been released. We can’t undo the Stanford research now that it’s been published. As a community, we will forever be accountable for the technology that we create.
In the age of MI, corporate and personal values take on entirely new importance. We have to decide what we stand for and use that as a measure to evaluate our decisions. We can’t wait for issues to present themselves. We must be proactive and think in hypotheticals to anticipate the situations we will inevitably face.
Be assured that every organization will face hard choices related to MI, choices that could hurt the bottom line or, worse, harm the well-being of people now or in the future. We will need to decide, for example, if and how we want to be involved in government efforts to vet immigrants or to create technology that could ultimately help hackers. If we fail to accept that these choices inevitably exist, we run the risk of compromising our values. We need to stand strong in our beliefs and live the values we espouse for ourselves, our organizations, and our field of study. Ethical compromise, like many things, is a slippery slope: compromising once almost always leads to compromising again.
We must also recognize that the values of others may not mirror our own. We should approach those situations without prejudice. Instead of reacting with anger or defensiveness, we should treat them as opportunities for meaningful dialogue around ethics and values. When others raise concerns about our own actions, we must approach those conversations with humility and civility. Only then can we move forward as a community.
Machines are neither moral nor immoral. We must work together to ensure they behave in a way that benefits, not harms, humanity. We don’t purport to have the answers to these complex issues. We simply request that you keep asking questions and take part in the discussion.
This essay has also been crossposted to Medium and the Booz Allen website.
We’re not the only ones discussing these issues. Check out this Medium post by the NSF-funded group Pervasive Data Ethics for Computational Research, Kate Crawford’s amazing NIPS keynote, Mustafa Suleyman’s recent essay in Wired UK, and Bryor Snefjella’s recent piece in BuzzFeed.