According to Adobe, review boards can help companies mitigate some of the risks of using artificial intelligence (AI). At a time when awareness of those risks is growing, so is the conversation about how to make AI less biased. In this article, we discuss how Adobe manages bias in its AI.
In developing computer systems, there is a simple relationship to be aware of: garbage in, garbage out. The quality of the data fed into a program determines the quality of its outputs. AI can therefore perpetuate harmful biases against minorities and other demographics, depending on the data it has been trained on. This is why ethics committees play a vital role in helping businesses such as Adobe reduce AI bias and uphold the values of the organization.
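The garbage-in, garbage-out relationship can be made concrete with a toy learner. The sketch below is hypothetical and not Adobe's system: a deliberately naive model that memorizes the most common label per group. Because the training data is skewed, the skew becomes the rule.

```python
from collections import Counter

def train_majority_rule(examples):
    """Learn the most common label per group from (group, label) pairs.

    A deliberately naive learner: whatever pattern dominates the
    training data, it reproduces -- garbage in, garbage out.
    """
    labels_by_group = {}
    for group, label in examples:
        labels_by_group.setdefault(group, []).append(label)
    return {g: Counter(ls).most_common(1)[0][0]
            for g, ls in labels_by_group.items()}

# Hypothetical training data: group B is over-represented among
# "fraud" labels because of skewed sampling, not actual behaviour.
biased_data = (
    [("A", "ok")] * 90 + [("A", "fraud")] * 10 +
    [("B", "ok")] * 10 + [("B", "fraud")] * 20
)

rule = train_majority_rule(biased_data)
print(rule)  # {'A': 'ok', 'B': 'fraud'} -- the sampling skew became the rule
```

Nothing about group B's real-world behaviour justified the "fraud" label; the model simply echoed the imbalance it was fed, which is exactly the failure mode a review board looks for.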
Adobe formed an AI ethics committee two years ago. Its role is to review features as they are developed and determine whether they are in any way biased before they are deployed. Adobe believes a range of opinions is essential to reaching the least biased conclusions, so its ethics committee is composed of experts of varied disciplines, genders, ethnicities and life experiences, who are trusted to help Adobe find and remove bias from its products and features. The experts are all Adobe employees and come from various departments, including government relations, legal and marketing. The breadth of the committee reflects the fact that bias is hard to detect unless a person belongs to the affected demographic or is especially sensitive to the issue. One person may miss a bias that another sees, so it is crucial to bring as many people under the tent as possible so that important voices are not left out.
One example of how bias can affect consumers: a feature meant to block unauthorized purchases of the company's software may end up learning to block purchases by particular demographics. This is not as improbable as it sounds. Adobe has revealed that it once reviewed a fraud-detection feature that had the potential to discriminate against certain demographics. This can happen because an AI can match surnames with geographies and conclude that, because a great deal of fraud originates from Brazil, for instance, purchases from Brazil must be blocked. The feature is still being tested in order to remove this bias.
Assessing the ethics of AI is an important part of Adobe's work. Part of this involves having developers fill in a form explaining the purpose of an AI feature and how it functions. If the assessment finds no potential for bias, the feature does not have to be presented to the ethics committee for review. This highlights the importance of such oversight in maintaining the right ethical standards within an organization.