How companies can avoid ethical pitfalls when building AI products

In all sectors, companies are expanding their use of artificial intelligence (AI) systems. AI isn’t just for the tech giants like Meta and Google anymore; logistics companies use AI to streamline their operations, advertisers use AI to target specific markets, and even your online bank uses AI to enhance its automated customer service experience. Addressing ethical risks and operational challenges related to AI is inevitable for these companies, but how should they prepare to face them?

Poorly executed AI products can invade individual privacy and, in extreme cases, even weaken our social and political systems. In the US, an algorithm used to predict the likelihood of future crime was found to be biased against Black Americans, reinforcing racially discriminatory practices in the criminal justice system.

To avoid dangerous ethical pitfalls, any company looking to launch its own AI products must integrate its data science teams with business leaders trained to think broadly about how those products interact with the larger company and its mission. Going forward, companies should approach AI ethics as a strategic business problem at the heart of a project — not as an afterthought.

When assessing the various ethical, logistical and legal challenges surrounding AI, it often helps to break down a product’s lifecycle into three phases: pre-deployment, initial launch, and post-deployment monitoring.

Pre-deployment

In the pre-deployment phase, the most crucial question to ask is: Do we need AI to solve this problem? Even in today’s big data world, a non-AI solution can be far more effective and cheaper in the long run.

If an AI solution is the best choice, pre-deployment is the time to think about data acquisition. AI is only as good as the data sets used to train it. How do we get our data? Is data obtained directly from customers or from a third party? How do we make sure it’s ethically sourced?

While it’s tempting to skip over these questions, the team should consider whether its data collection process allows for informed consent or violates reasonable expectations of user privacy. These decisions can make or break a company’s reputation: when the Ever app collected data without properly informing users, the FTC forced the company to delete both its algorithms and its data.

Informed consent and privacy are also intertwined with a company’s legal obligations. How should we respond when domestic law enforcement requests access to sensitive user data? What if it’s international law enforcement? Some companies, such as Apple and Meta, deliberately design their systems with encryption so that the company cannot access a user’s private data or messages. Other companies carefully design their data acquisition process so that they never have sensitive data in the first place.

In addition to informed consent, how do we ensure that the data obtained is sufficiently representative of the intended users? Data that underrepresents marginalized populations can yield AI systems that perpetuate systemic bias. For example, facial recognition technology has repeatedly been shown to be biased along race and gender lines, in large part because the data used to build it is not sufficiently diverse.
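To make this concrete, here is a minimal sketch of a pre-deployment representativeness check. The demographic column names, toy data, and 5% threshold are illustrative assumptions, not a standard.

```python
# A minimal sketch of a pre-deployment representativeness check.
# Assumes a pandas DataFrame with hypothetical demographic columns;
# column names, toy data, and the 5% threshold are illustrative only.
import pandas as pd

def representation_report(df: pd.DataFrame, column: str, min_share: float = 0.05) -> pd.DataFrame:
    """Report each group's share of the training data and flag groups
    that fall below an assumed minimum share."""
    shares = df[column].value_counts(normalize=True).rename("share").to_frame()
    shares["underrepresented"] = shares["share"] < min_share
    return shares

# Example usage with a toy dataset
training_df = pd.DataFrame({
    "race":   ["A", "A", "A", "B", "B", "C"],
    "gender": ["F", "M", "F", "M", "M", "F"],
})
print(representation_report(training_df, "race"))
print(representation_report(training_df, "gender"))
```

A report like this does not fix a skewed dataset, but it surfaces gaps early enough to collect more data before training begins.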

Initial launch

There are two critical tasks in the next stage of an AI product lifecycle. First, assess whether there is a gap between what the product is supposed to do and what it actually does. If actual performance doesn’t meet your expectations, find out why. Whether the cause is inadequate training data or a flaw in the implementation, this is the moment to identify and resolve issues. Second, assess how the AI system integrates with the larger company. These systems do not exist in a vacuum – implementing a new system can affect the internal workflow of current employees or shift the external demand for certain products or services. Understand how your product impacts your business as a whole and be prepared: if a serious problem is found, it may be necessary to roll back, downsize, or reconfigure the AI product.
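As a rough illustration of the first task, the sketch below compares live metrics against pre-launch targets for each user segment. The metric names, target values, and segments are hypothetical placeholders.

```python
# A minimal sketch of a launch-gap check: compare actual metrics on live
# traffic against the targets set before deployment, per user segment.
# Metric names, targets, and segment labels are assumed for illustration.
from typing import Dict

EXPECTED = {"accuracy": 0.90, "recall": 0.85}  # assumed pre-launch targets

def launch_gap(actual: Dict[str, float], expected: Dict[str, float] = EXPECTED) -> Dict[str, float]:
    """Return the shortfall (expected minus actual) for each tracked metric."""
    return {metric: expected[metric] - actual.get(metric, 0.0) for metric in expected}

# Example: metrics measured on live traffic for two segments
live_metrics = {
    "overall":   {"accuracy": 0.91, "recall": 0.83},
    "new_users": {"accuracy": 0.84, "recall": 0.78},
}
for segment, actual in live_metrics.items():
    shortfalls = {m: round(gap, 2) for m, gap in launch_gap(actual).items() if gap > 0}
    print(segment, "->", shortfalls or "meets targets")
```

Breaking the comparison out by segment matters: a product can hit its targets overall while quietly underperforming for a specific group of users.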

Post-deployment monitoring

Post-deployment monitoring is critical to product success, but often overlooked. In the final stage, there should be a dedicated team to track AI products after deployment. After all, no product – AI or otherwise – works perfectly forever without tune-ups. This team may periodically perform a bias audit, reassess the reliability of the data, or simply refresh “outdated” data. It can also make operational changes, such as acquiring more data to account for underrepresented groups or retraining the corresponding models.
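One possible shape for such a recurring bias audit is sketched below: it compares error rates across groups on recent production data and flags large disparities for review. The group labels, toy logs, and 10-point threshold are assumptions.

```python
# A minimal sketch of a recurring post-deployment bias audit: compare the
# model's error rate across groups and flag large disparities for review.
# Group labels, toy logs, and the 10-point gap threshold are illustrative.
import pandas as pd

def error_rate_by_group(df: pd.DataFrame, group_col: str,
                        label_col: str = "label", pred_col: str = "prediction") -> pd.Series:
    """Per-group error rate on recent production data."""
    errors = (df[label_col] != df[pred_col]).astype(float)
    return errors.groupby(df[group_col]).mean()

def needs_review(df: pd.DataFrame, group_col: str, max_gap: float = 0.10) -> bool:
    """Flag the model if the gap between the worst- and best-served groups
    exceeds an assumed 10-point error-rate threshold."""
    rates = error_rate_by_group(df, group_col)
    return bool(rates.max() - rates.min() > max_gap)

# Example with toy production logs
logs = pd.DataFrame({
    "group":      ["A", "A", "B", "B", "B", "C"],
    "label":      [1, 0, 1, 1, 0, 1],
    "prediction": [1, 0, 0, 0, 0, 1],
})
print(error_rate_by_group(logs, "group"))
print("needs review:", needs_review(logs, "group"))
```

Running a check like this on a schedule turns the audit from a one-off exercise into an operational habit.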

Above all, remember: data informs, but it doesn’t tell the whole story. Quantitative analysis and performance tracking of AI systems will not capture the emotional aspects of user experience. Therefore, after deployment, teams must also delve into more qualitative, people-oriented research. Beyond the team’s data scientists, seek out team members with diverse expertise to conduct effective qualitative research. Consider people with liberal arts and business backgrounds to help discover the “unknown unknowns” among users and ensure internal accountability.

Finally, consider the product’s end-of-life data. Should we delete old data or reuse it for alternative projects? If it is reused, should we inform users? While the abundance of cheap data warehousing tempts us to simply store all the old data and sidestep these questions, keeping sensitive data increases the company’s risk of a potential security breach or data leak. An additional consideration is whether users in some countries have a right to be forgotten.
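One simple way to operationalize a retention decision is sketched below; the 365-day window and column names are illustrative assumptions, not a recommended policy.

```python
# A minimal sketch of an end-of-life retention rule: drop records older than
# an assumed retention window instead of keeping sensitive data indefinitely.
# The 365-day window and column names are illustrative assumptions.
import pandas as pd

def apply_retention(df: pd.DataFrame, timestamp_col: str = "collected_at",
                    retention_days: int = 365) -> pd.DataFrame:
    """Keep only records collected within the retention window; older rows are dropped."""
    cutoff = pd.Timestamp.now() - pd.Timedelta(days=retention_days)
    return df[df[timestamp_col] >= cutoff]

# Example: two records, one past the assumed retention window
records = pd.DataFrame({
    "user_id": [1, 2],
    "collected_at": [pd.Timestamp.now() - pd.Timedelta(days=400),
                     pd.Timestamp.now() - pd.Timedelta(days=30)],
})
print(apply_retention(records))
```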

From a strategic business perspective, companies will need to staff their AI product teams with responsible business leaders who can assess the impact of the technology and avoid ethical pitfalls before, during and after a product’s launch. Regardless of the industry, these skilled team members will provide the foundation to help a company navigate the inevitable ethical and logistical challenges of AI.

Vishal Gupta is an associate professor of data science and operations at the University of Southern California Marshall School of Business.

