The world has endless moral quandaries, and AI is just the latest. Should we adopt it, and in what capacity? How can it contribute to safety and security?
Frank Chen asked a question five years ago, when self-driving car technology became feasible: “If self-driving cars are 51% safer, are we not morally obligated to adopt them?” This question has repeatedly sparked debate, and every person who hears it instantly thinks they know the answer one way or the other. The knife’s edge makes this question so interesting—51% is hardly a runaway statistic.
However, let’s do the math. The National Highway Traffic Safety Administration (NHTSA) estimated 42,795 traffic fatalities in 2022. One percent of that number, the hypothetical margin by which self-driving cars would be safer, is about 427 people. That number would fill around 1.5 Boeing 777 airplanes.
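As a quick back-of-the-envelope check on that arithmetic (the Boeing 777 seat count below is an assumed typical figure, not one taken from NHTSA or the article):

```python
# Rough sanity check of the figures above.
total_fatalities_2022 = 42_795   # NHTSA estimate for 2022, as cited in the text
hypothetical_margin = 0.01       # the 1% "knife's edge" difference

lives_saved = total_fatalities_2022 * hypothetical_margin
print(f"Lives hypothetically saved per year: {lives_saved:.0f}")  # ~428, the "about 427" above

# Assumed typical Boeing 777 capacity; real seating varies by configuration.
boeing_777_seats = 300
print(f"Equivalent planeloads: {lives_saved / boeing_777_seats:.1f}")  # ~1.4
```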
Are 427 lives enough to qualify as a moral obligation to adopt AI?
Though there is no simple answer, and the debate is not open and shut, the answer is yes. The opportunity to save lives, enhance the quality of life, and address long-standing inequality is significant and incontrovertible. That doesn’t mean the argument isn’t complicated, or that the technology is done developing.
Adopting AI or any tech too hastily can lead to many issues, making this debate hotly contested even as the technology improves. However, avoiding technology that could save lives may be just as ethically problematic as adopting it before it is fully developed.
The Choice at Hand
There is a lot of anxiety around adopting AI technology, and for good reason. We see concern in various industries where AI could replace people in jobs, and people will always be hesitant to accept removing the human element from any kind of work.
This is not the place to start digging into the eternally hazy question of what it means to be “human,” but that question is part of the debate. Adopting AI doesn’t mean we have to throw out human input; used properly, it just means accepting that there are specific tasks machines can do better, and not taking that personally.
The argument, then, boils down to deciding if AI’s efficiencies, safety and quality-of-life benefits can outweigh the disruption of life and industry as we know it.
Human beings as a species are notably hesitant to accept change. Our cognitive biases, shaped by our evolution, are habits our minds fall back on that were helpful when we were running from predators, but they do not always serve us well in the modern world.
Understanding what biases might be at play is essential to get a solid, holistic sense of the argument. While they don’t negate people’s fears and anxieties, knowing why we feel the way we feel can be helpful. Here are some that might influence our thinking when it comes to trusting AI:
- Anthropomorphism: People tend to see human traits and qualities in things that aren’t human, leading to unrealistic expectations of what AI can do. It can also mean that people attribute intentions, good or evil, to technology that doesn’t have the capacity for such reasoning.
- Status Quo Bias: Change is always hard; people prefer things to stay the same. Even with research proving the potential benefits, people will resist innovation if it disrupts what they have always known.
- Availability Bias: People estimate probability or risk based on how easily they can recall examples, like memorable events or news stories. With so much clickbait about AI on both sides of the argument, any opinion can get blown out of proportion and color how we receive new information about reliability or safety.
- Fear of Missing Out (FOMO): No one wants to be left behind, and as more people and organizations jump on the AI trend, reasonable risk assessment can go out the window as we adopt new tech just to keep up. Likewise, someone surrounded by people who mistrust AI technologies is less likely to form a contradictory opinion.
- Confirmation Bias: Information that matches what we already think or believe will stand out more than anything that challenges preexisting knowledge or beliefs. It is difficult to objectively assess risks and benefits when we weigh the numbers we agree with more heavily.
- Algorithm Aversion/Trust: Most people find numbers challenging to comprehend. This creates doubt in algorithms’ capabilities and reinforces the idea that human decision-making, contrary to evidence, is superior. It can also swing the other way, leading to blind trust in AI decisions that overlooks biases and errors in the AI itself.
- Loss Aversion: We feel a loss more sharply than a gain of comparable value. The loss of jobs and control that come with AI adoption creates fear and anxiety that may overshadow any potential benefits of convenience or safety.
These biases are compounded by the fact that the technology is not universally accessible. Though it is baked into most of our daily lives in some form or another, the true potential of AI is barred by a pretty significant gap in privilege. Without a push to ensure that the technology gets democratized, many of the benefits of AI remain hypothetical to the average person.
Without tangible proof of concept, these mindsets can abruptly halt innovation. That is not to say that human brains are foolish for being so hesitant to trust new technology, but it is important to be aware of why we feel certain ways so we can attempt to develop a more objective view.
Practical Example: Self-Driving Cars
Let’s counteract some of these biases by looking at Frank Chen’s concrete example. Self-driving cars and the technology they operate on create a microcosm of the larger discourse, one with digestible statistics to back it up.
Start with the numbers: studies show that human drivers have a higher rate of crashes with meaningful risk of injury than autonomous vehicles. Human drivers caused 0.24 injuries per million miles (IPMM) and 0.01 fatalities per million miles (FPMM), while self-driving cars caused 0.06 IPMM and 0 FPMM.
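To make those rates concrete, here is a minimal sketch that scales them to an assumed annual mileage; the 3.2 trillion vehicle-miles figure is my rough assumption for total US driving per year, not a number from the cited studies:

```python
# Scale the per-million-mile rates quoted above to an assumed annual mileage.
ANNUAL_MILES_US = 3.2e12   # assumed total US vehicle-miles traveled per year
MILLION = 1e6

rates = {
    "human":        {"ipmm": 0.24, "fpmm": 0.01},  # rates quoted in the text
    "self-driving": {"ipmm": 0.06, "fpmm": 0.0},
}

for driver, r in rates.items():
    injuries = r["ipmm"] * ANNUAL_MILES_US / MILLION
    fatalities = r["fpmm"] * ANNUAL_MILES_US / MILLION
    print(f"{driver:>12}: ~{injuries:,.0f} injuries, ~{fatalities:,.0f} fatalities per year")
```

At the human rates quoted above, that assumed mileage works out to roughly 32,000 fatalities a year, in the same ballpark as the NHTSA figure cited earlier; the point of the sketch is only to show how small-looking rate differences translate into large absolute numbers.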
Remember the number 427? These are not just statistics; they represent human lives that could be saved with AI technology. The moral argument for vehicles seems obvious.
So why stop there? Fields like medicine, public health, food safety, agriculture, cybersecurity, crime prevention, and military science can all benefit from AI technology’s increased efficiency and accuracy. Finding data security vulnerabilities and patching them before they become breaches, predicting crop failure before a harvest is lost, diagnosing diseases faster and more accurately before patients’ lives are upended: these are all areas where the numbers don’t lie, though they are more nuanced than a statistic like “fatalities per million miles.”
These examples are more dramatic than freeing up time in software development, but are they really so much more important? AI can measurably improve the daily quality of our lives by automating any number of mundane tasks, increasing accessibility, and enhancing our security. Our moral obligation to adopt AI is as much about contributing to general human well-being as it is about preventing unnecessary deaths in traffic accidents.
Crunching the Numbers
Even as individuals grapple with the moral quandaries surrounding AI, many corporations have made their decisions. For them, the ROI speaks for itself.
Looking at Amazon, we can see that its significant shift toward automation has produced tangible, measurable gains in efficiency. Against returns like that, the questions of morality can start to seem academic and nebulous.
“Academic and nebulous” doesn’t mean unimportant, however. The economy depends on people having jobs, and some of those jobs will inevitably be replaced by AI. Businesses have to consider the human cost of their decisions as well as the potential for growth. The economic cost of adopting AI is as much dependent on the changing landscape of the job market as it is on streamlining operations.
The shift will not be simple; organizations must keep employees’ welfare in mind as they adopt this technology.
Designing Around the Hesitation
We have established why AI is worth adopting; we have also shown why people might not want to. However, the technology is barreling ahead, and no matter what side of the issue you fall on, you’re likely to get swept up in it.
With the choice of whether to adopt likely taken out of our hands by the obvious benefits, the question turns to how. How best can we bridge the gap between the people who want to integrate AI and those who are hesitant? The solution can be found in emerging design philosophies that keep the ethical and moral implications in mind while tailoring technology to what people actually need. Some options that attempt to address this include:
- Human-Centric AI Design (HCAI)
- Explainable AI (XAI)
- Ethical AI Frameworks
- Adaptive/Responsive AI
- Participatory Design
- Augmented Intelligence
- Trustworthy AI
Questions Inevitably Remain
Though we have a moral imperative to adopt AI, the debate is not over. The benefits are clear and measurable, but the costs are also worth weighing. Biases in AI training data, the environmental impact of LLMs, and the inevitable changes to the job market are just a few of the considerations required to take advantage of AI ethically. However, we can be excited, and morally confident, about the opportunities available if we approach AI adoption with both optimism and caution.
Patience, consideration, strict frameworks and governance, and education are the keys to safely and responsibly utilizing AI’s enormous potential.