Can an automated system used in approving or rejecting home-loan applications be guilty of racial bias?
Is it reasonable for police to use facial recognition systems to catch suspected criminals?
Can developers harvest user data to train improved speech recognition systems in video conferencing?
The science and engineering community has always found itself in a delicate position on questions of AI ethics and social responsibility. On one hand, it is reasonable to argue that the laws of physics or mathematical equations are, in themselves, value-neutral: they are silent on human moral issues. On the other hand, science does not, and engineering certainly cannot, operate in a sterile world separate from the application of scientific and technical knowledge to human activity, and all human activity has moral and ethical ramifications. So it makes sense that engineers bear social and ethical responsibilities. We cannot be responsible engineers without considering how our systems will support or undermine key human rights to privacy, safety, agency and equity. The emergence of machine learning and data-driven artificial intelligence has only heightened widespread concerns about the role of technology in society.
How can we keep AI ethical issues at the forefront and choose a responsible path for building machine learning systems?
We should start with some basic understanding of what machine-learning-based design involves, and why it can be a source of controversy. The core idea of machine learning is that a function's detailed behavior is not spelled out in software code but learned by generalizing from a range of examples of expected behavior. That training data may be explicitly prepared by pairing a range of inputs with target outputs (supervised learning), or the expected results may be implicit in the structure of the input data itself (unsupervised learning) or conveyed by a reward metric for successful sequences of outputs (reinforcement learning). Typically, the behavioral model learned in training is then deployed as a building-block function within an "inference" system, where the user's real inputs flow into the model, so the system can compute results that closely match the patterns of behavior learned in training.

What kinds of AI ethical issues in machine-learning-based functions deserve consideration when thinking about responsible AI? At some level, machine-learning-based software isn't significantly different from software developed by traditional procedural programming methods. We care about bias, privacy, fairness, transparency and data protection in all software, but machine-learning methods are less widely understood, require large amounts of training data, and sometimes resist easy explanation. These characteristics demand that we take a closer look at the role that AI ethics has to play in their design.
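To make the training-versus-inference split concrete, here is a minimal sketch in plain Python, assuming nothing beyond the standard library. It is purely illustrative, not how production systems are built: a toy 1-nearest-neighbor "model" is trained by memorizing labeled examples, then wrapped in an inference function that maps new inputs to outputs.

```python
# Toy illustration of the train-then-infer pattern described above.
# A 1-nearest-neighbor "model": training simply stores labeled examples;
# inference generalizes by returning the label of the closest example.

def train(examples):
    """Supervised training: examples is a list of (features, label) pairs."""
    return list(examples)  # here, the "learned model" is just the stored data

def infer(model, features):
    """Inference: predict a label for unseen input features."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(model, key=lambda example: distance(example[0], features))
    return label

# Training phase: curated input/output pairs (the training distribution).
model = train([((0.1, 0.2), "quiet"), ((0.9, 0.8), "loud")])

# Deployment phase: real user inputs flow into the trained model.
print(infer(model, (0.85, 0.75)))  # -> "loud"
```

The essential point stands out even in this toy: the system's behavior is determined entirely by the examples it was trained on, which is why the statistical makeup of that data matters so much in the discussion below.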
Key questions to ask when creating responsible AI design principles
Here are some core AI ethics questions of importance. Arguably these issues often overlap, but it doesn’t hurt to look at responsible design from several angles:
Bias: Does the function implement unfair, unintended or inappropriate bias in how it treats different individuals? Is the system designed and trained for the distribution of users on which it is actually being applied? Do the design, implementation and testing prevent bias against legally protected characteristics of individuals? (A testing sketch follows this list.)
Privacy: Does the training and operation of this function require an individual to disclose more personal information than necessary, and does it fully protect that private information from unauthorized or inappropriate release?
Transparency: Is the behavior of the function sufficiently well understood, tested and documented so that developers of systems integrating this function, users and other appropriate examiners can understand it? Is the behavior of the implemented function essentially deterministic, such that repeating the same inputs yields the same outputs?
Security: Does the training and implementation of the function protect any data captured or produced from inappropriate transfer, misuse or disclosure? This data may include personal information that is also subject to privacy concerns, and non-personal data that may be subject to ownership and contractually agreed permissible-use concerns.
Societal Impact: Beyond the specific concerns in bias, privacy, transparency and security, what is the direct and indirect impact on society if this technology is widely deployed? Does it help or hinder the exchange of ideas? Does it increase the likelihood of violence or abuse? Is it harmful to the environment? This category of AI ethics is intentionally open-ended; we cannot expect any design and deployment team to fully comprehend all the indirect effects of their work, especially as it is deployed over years and around the world. Nevertheless, the effort to anticipate negative long-term effects may encourage teams to design mitigations into their work, or to shift technology strategy toward alternatives with fewer apparent societal downsides.
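As one concrete way to act on the bias question above, a team might compare a model's outcomes across groups before release. The sketch below is a minimal, hedged example: approve and records are hypothetical stand-ins for a real model and a labeled evaluation set, and the check is a simple demographic-parity-style gap in approval rates. Real fairness auditing is far more involved, but even a check this simple can flag problems early.

```python
# Minimal bias check: compare approval rates across groups.
# `approve` and `records` below are hypothetical stand-ins for a real
# model under test and a labeled evaluation set.
from collections import defaultdict

def approval_rate_gap(records, approve):
    """Return (largest gap in approval rate between groups, per-group rates).

    records: list of (features, group) pairs from an evaluation set.
    approve: callable mapping features -> bool (the model under test).
    """
    approved = defaultdict(int)
    total = defaultdict(int)
    for features, group in records:
        total[group] += 1
        approved[group] += int(approve(features))
    rates = {g: approved[g] / total[g] for g in total}
    return max(rates.values()) - min(rates.values()), rates

# Example with a toy threshold model and synthetic records.
records = [({"score": 700}, "A"), ({"score": 640}, "A"),
           ({"score": 710}, "B"), ({"score": 600}, "B")]
gap, rates = approval_rate_gap(records, lambda f: f["score"] >= 650)
print(rates, gap)  # flag for human review if the gap exceeds a set threshold
```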
This list of AI ethics questions may initially seem too fuzzy and abstract to be actionable, but the industry has successfully deployed development guidelines, especially in system security and data privacy, that can serve as a useful template. Cisco's long leadership in system design for data security makes that framework the natural starting point for Webex's work in responsible machine-learning systems. Europe's General Data Protection Regulation (GDPR), which protects the "fundamental rights and freedoms of natural persons," also offers principles that can usefully be applied to machine-learning-based systems.
Other considerations that support ethical AI
As I have thought about conscious, responsible AI system design, I have found three concepts particularly useful to hold in mind when looking to understand the social and ethical responsibilities of engineers:
A process for responsible machine learning development. The potential applications of machine learning are so vast that we cannot hope to prescribe a universal development flow. The rate of innovation in model types, training systems, evaluation metrics and deployment targets is so great that any narrow recipe would be instantly obsolete. Instead, we expect to build ethical AI guidelines with explicit checkpoints at which both the developers themselves and others examine the work to verify that key questions have been considered and key design choices are documented. This may also entail specific tests that systems must pass before they can proceed to deployment; a rough sketch of such a checkpoint gate follows.
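The sketch below encodes one hypothetical form such a pre-deployment gate could take: a set of review questions that must be explicitly answered and attributed to a reviewer before a model moves forward. The question list and structure are illustrative assumptions, not a prescribed flow.

```python
# Hypothetical pre-deployment ethics checkpoint: every question must be
# explicitly answered and attributed to a reviewer before deployment.

CHECKPOINT_QUESTIONS = [
    "Is the training distribution documented and matched to target users?",
    "Have bias tests across protected characteristics been run and reviewed?",
    "Is personal data in the training set minimized and access-controlled?",
    "Are the model's behavior and limitations documented for integrators?",
]

def gate(answers):
    """answers: dict mapping question -> (passed: bool, reviewer: str)."""
    missing = [q for q in CHECKPOINT_QUESTIONS if q not in answers]
    failed = [q for q, (passed, _) in answers.items() if not passed]
    if missing or failed:
        raise RuntimeError(f"Blocked: missing={missing} failed={failed}")
    return "cleared for deployment"
```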
Consideration of consequentiality. Designers and implementers need a clear understanding of AI ethics and of the impact of the decisions their trained systems make. They must embrace the notion that their machine learning functions often make decisions with real impact on users or other downstream individuals. Sometimes the decisions are big and explicit: is a home loan application accepted or rejected? In other cases, the decisions are subtler but still have pervasive impact. For example, if a learned speech enhancement function in a video conferencing system lowered the volume of the average woman by 2% relative to her male colleagues, it could have the insidious cumulative effect of reducing women's impact and contribution in the workplace.
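To make that speech-enhancement example measurable, here is a minimal sketch of how a team might check for exactly this kind of disparity. The enhance function is a hypothetical stand-in for the learned model; the check compares average output levels (RMS) across speaker groups on matched test clips.

```python
# Hypothetical check for the disparity described above: does a speech
# enhancement function systematically attenuate one group of speakers?
import math
from collections import defaultdict

def rms(samples):
    """Root-mean-square level of an audio clip (list of float samples)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def level_by_group(clips, enhance):
    """clips: list of (samples, group); enhance: the model under test."""
    levels = defaultdict(list)
    for samples, group in clips:
        levels[group].append(rms(enhance(samples)))
    return {group: sum(v) / len(v) for group, v in levels.items()}

# With matched material per group (identity "enhancement" shown as a
# placeholder), a persistent gap (e.g., one group ~2% quieter on
# average) should trigger investigation before release.
clips = [([0.1] * 100, "F"), ([0.1] * 100, "M")]
print(level_by_group(clips, enhance=lambda samples: samples))
```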
Statistical variance in training data and expected use data. Machine learning must use a diverse range of input data to train the system to handle all expected conditions. The statistical design of the training set is the greatest determinant of the statistical behavior of the ultimate system. For a speech system, for example, the specified distribution might include target percentages for American English, British English, Australian English and Hispanic English speakers, and for speakers from South Asia, China, continental Europe and other regions. It might also include target percentages for high- and low-pitched voices, voices of speakers of different ages, speech in rooms with different reverberation levels, and different types and relative amplitudes of noise. The developers should have an explicit, documented understanding of the target user distribution, and should construct training and testing to match that target use. Moreover, the developers should consider what may be missing from the target user specification, for example coverage of legally protected characteristics (race, national origin, gender, religion and so forth). This is not an easy problem, because there are so many potential dimensions of relevant variation in use conditions. The developers should test for a wider range of characteristics than they initially train for, with the expectation that they may discover sensitivity to some variations, which will require adding new data or shifting distributions to achieve adequate performance across all target use conditions. A sketch of such a distribution check follows.
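As a concrete illustration of checking a training set against a documented target distribution, the sketch below compares per-category shares to their targets and flags gaps. The category names, shares and tolerance are assumptions made up for illustration.

```python
# Compare a training set's composition against the documented target
# distribution of use conditions; flag categories that miss the target.
from collections import Counter

def distribution_gaps(labels, target, tolerance=0.02):
    """labels: category label per training sample (e.g., accent/region).
    target: dict mapping category -> intended fraction of the data.
    Returns {category: (target share, actual share)} for every category
    whose actual share misses the target by more than the tolerance."""
    counts = Counter(labels)
    n = len(labels)
    gaps = {}
    for category, want in target.items():
        have = counts.get(category, 0) / n
        if abs(have - want) > tolerance:
            gaps[category] = (want, round(have, 3))
    return gaps

# Illustrative target for a speech system's accent coverage.
target = {"en-US": 0.4, "en-GB": 0.2, "en-AU": 0.1, "en-IN": 0.3}
labels = ["en-US"] * 50 + ["en-GB"] * 20 + ["en-IN"] * 30
print(distribution_gaps(labels, target))  # flags en-US (over) and en-AU (absent)
```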
Design for responsible AI is still in its infancy. We still have a lot to understand about the pitfalls in protecting against bias, privacy compromise, loss of transparency, data loss and serious societal harm. Nevertheless, conscious attention to AI ethics in discussion, specification, training, deployment and maintenance can start to make a big difference in how these powerful methods become trusted and reliable.
Sign up for a Webex free trial and experience better team collaboration and video conferencing technology today.
Chris is a Silicon Valley entrepreneur and technologist known for his groundbreaking work developing RISC microprocessors, domain-specific architectures and deep learning-based software.