Navigating ethics in AI today to avoid regrets tomorrow

As artificial intelligence (AI) programs become more powerful and more common, organizations that use them are feeling pressure to implement ethical practices in the development of AI software. The question is whether ethical AI will become a real priority, or whether organizations will come to view these important practices as another barrier standing in the way of fast development and deployment.


A cautionary tale could be the EU General Data Protection Regulation (GDPR). Enacted with good intentions and hailed as a major step toward better, more consistent privacy protections, GDPR soon became something of an albatross for organizations trying to adhere to it. The GDPR and the privacy regulations that followed were often seen as just adding more work that kept teams from focusing on the projects that really mattered. Organizations that attempt to solve for each new regulation in a silo end up adding significant overhead and losing ground to competitors in agility and cost-effectiveness.

Could an emphasis on ethics in AI go the same route? Or will organizations recognize the risks, as well as their responsibilities, in putting powerful AI applications into use without addressing ethical concerns? Or is there another way to deal with yet another area of quality without the excessive burden?

AI bias is human bias

AI programs are undoubtedly smart, but they're still programs; they're only as smart as the thought, and the programming, put into them. Their ability to process information and draw conclusions on their own adds layers of complexity that aren't necessary in more traditional computing programs, where accounting for obvious factors is relatively simple.

When, for example, an insurance company is determining the cost of a yearly policy for a driver, it typically takes data like gender and ethnicity out of the equation before coming up with a quote. That's easy. But with AI, it gets complicated. You don't micromanage an AI model: you give it all the information, and the model decides what to do with it. The model starts out with no understanding of the impact of factors such as race, so if programmers haven't limited how the data can be used, you can wind up with racial data driving decisions, and that is how AI bias is created.
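
In a conventional pipeline, excluding protected attributes can be as simple as dropping those columns before the model ever sees them. Here is a minimal sketch in Python; the column names and data are entirely hypothetical, not any insurer's actual pipeline:

    # A minimal sketch: remove protected attributes before training.
    # Column names and values are illustrative only.
    import pandas as pd

    PROTECTED = ["gender", "ethnicity"]  # attributes excluded from pricing

    def prepare_training_data(df: pd.DataFrame) -> pd.DataFrame:
        """Drop protected attributes so the model never sees them."""
        return df.drop(columns=[c for c in PROTECTED if c in df.columns])

    quotes = pd.DataFrame({
        "age": [22, 45, 31],
        "annual_mileage": [12000, 8000, 15000],
        "gender": ["F", "M", "F"],
        "ethnicity": ["A", "B", "C"],
    })
    features = prepare_training_data(quotes)
    print(features.columns.tolist())  # ['age', 'annual_mileage']

Dropping the columns is the easy part. As the examples below show, bias can survive through correlated features even when the protected attributes themselves are gone.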

There are many examples of how bias creeps into AI programs, often because of incomplete data. One of the most infamous involved the Correctional Offender Management Profiling for Alternative Sanctions, known as COMPAS, an algorithm used in some U.S. state court systems to inform sentencing decisions. COMPAS used a regression model to predict whether someone convicted of a crime would become a repeat offender. Based on the data sets fed into the system, the model produced twice as many false positives for recidivism for Black offenders as it did for white offenders.
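
Disparities like this are measurable. One standard audit, sketched below with made-up labels and predictions (not the COMPAS data or methodology), is to compute the false positive rate separately for each group:

    # A minimal sketch of a disparity audit on synthetic data.
    import numpy as np

    def false_positive_rate(y_true, y_pred):
        """FPR = false positives / actual negatives."""
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        negatives = y_true == 0
        return (y_pred[negatives] == 1).mean()

    y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])  # 1 = reoffended
    y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 1])  # 1 = predicted high risk
    group  = np.array(["a", "b", "a", "a", "b", "b", "a", "b"])

    for g in np.unique(group):
        mask = group == g
        print(g, false_positive_rate(y_true[mask], y_pred[mask]))

If the rates diverge sharply between groups, the model is making a different kind of mistake for different people, even if its overall accuracy looks fine.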

In another example, a health care risk-prediction algorithm used on more than 200 million U.S. patients to determine which ones needed advanced care was found to favor white patients. Race wasn't a factor in the algorithm, but health care cost history was, and cost history tended to be lower for Black patients with the same conditions, so the algorithm systematically underestimated their need.
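
This is the proxy problem: a seemingly neutral feature can carry a protected attribute into the model. A rough first-pass audit, shown here with hypothetical data and an arbitrary threshold, is to check how strongly each candidate feature correlates with the protected attribute:

    # A minimal sketch of a proxy audit; data and threshold are illustrative.
    import pandas as pd

    df = pd.DataFrame({
        "cost_history":   [1200, 900, 3100, 800, 2900, 1000],
        "num_conditions": [3, 2, 3, 4, 2, 3],
        "race":           ["black", "black", "white", "black", "white", "black"],
    })

    protected = (df["race"] == "white").astype(int)
    for col in ["cost_history", "num_conditions"]:
        corr = df[col].corr(protected)
        flag = "  <- possible proxy" if abs(corr) > 0.5 else ""
        print(f"{col}: corr with protected attribute = {corr:.2f}{flag}")

A simple correlation check won't catch every proxy, but it is cheap to run and would have flagged cost history as suspect long before deployment.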

Compounding the problem is that AI programs aren't good at explaining how they reached a conclusion. Whether an AI program is determining the presence of cancer or simply recommending a restaurant, its "thought" processes are inscrutable. And that adds to the burden of building ethics in up front.
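
Inscrutability can be probed, though not eliminated. One common post-hoc technique, demonstrated here on a synthetic model rather than any real system, is permutation importance, which reveals which inputs a trained model actually leans on:

    # A minimal sketch of a post-hoc interpretability probe
    # on a synthetic classifier.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Shuffle each feature in turn and measure how much the score drops.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, score in enumerate(result.importances_mean):
        print(f"feature_{i}: importance = {score:.3f}")

If a feature you believed was excluded, or a likely proxy for one, ranks near the top, that is a signal worth investigating before the model ships.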

Ethics and privacy together

Continued improvements in AI have potentially far-reaching consequences. The Department of Defense, for one, has launched a slew of AI-based initiatives and centers of excellence focused on national security. Seventy-six percent of business enterprises are prioritizing AI and machine learning in their budgeting plans, according to a recent survey.

Alongside the ethical concerns of AI's role in decision-making is the inescapable issue of privacy. Should an AI scanning social media be able to contact authorities if it detects a pattern of suicide? Apple, as an example, is considering a plan to scan users' iPhone data for signs of child abuse. Considering the ethical and potential legal implications, it makes sense for organizations to fold privacy and ethics into the same security process rather than treating them separately.

As these and other programs move forward, new guidelines on ethics in AI are inevitable. This will create even more work for teams trying to get new products or capabilities into production, but it also raises issues that can’t be ignored.

Successful AI ethics policies will likely depend on how well they are integrated with existing programs. Organizations' experience with GDPR offers a good example: where it was once seen primarily as a burden, some organizations that have integrated it into their security processes have gained a lot more maturity by treating privacy and security as one bucket.

Consider future regrets

Ultimately, it comes down to programmers baking in guidelines and rules for how different types of data should be treated, and for making sure the AI isn't quietly segmenting people along protected lines. Integrating these guidelines into overall operations and software development will depend on an organization's leaders making ethics a priority.
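
One way to bake such rules in is to make the policy executable, so a pipeline fails loudly instead of quietly using a field it shouldn't. A minimal sketch, with a purely hypothetical allowlist:

    # A minimal sketch of an executable data policy: any training column
    # not explicitly cleared for use is rejected. Field names are hypothetical.
    ALLOWED_FIELDS = {"age", "annual_mileage", "vehicle_type"}

    def enforce_policy(columns):
        """Raise if any column falls outside the approved allowlist."""
        disallowed = set(columns) - ALLOWED_FIELDS
        if disallowed:
            raise ValueError(f"Blocked by data policy: {sorted(disallowed)}")
        return columns

    enforce_policy(["age", "annual_mileage"])   # passes
    # enforce_policy(["age", "ethnicity"])      # raises ValueError

An allowlist is deliberately stricter than a blocklist: a new, unreviewed field is blocked by default instead of slipping into training unnoticed.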

Enterprises should address ethics and security together, leveraging for ethics the same systems and tools they already use for security. Doing so supports effective management of the software development lifecycle. I would go so far as to say that ethics should be an essential part of the threat modeling process.

The question organizations should ask themselves is: Five years down the road, looking back at how you handled the question of ethics in AI, what could be your regrets?

Considering how the impact of other game-changing technologies (e.g., Facebook) was overlooked until legal issues arose, the biggest regret may well be not taking AI ethics seriously, and not acting proactively, until it becomes a pressing priority.

People tend to address the loudest problem of the moment; the squeaky wheel gets the attention. But that's not the most effective way to operate. The ethical implications of AI need to be confronted now, in tandem with security.
