I recently had the opportunity to weigh in on the topic of ethics in AI. It’s a fascinating discussion and I’d like to thank @byronacohido for the chance to be a part of the conversation – you can see his entire article here. Worth a read!
I thought I’d expand on a few thoughts Byron highlighted in his article – AI is increasingly in the news, and not always in a good way. Making progress towards more ethical AI use will go a long way to changing that. Let me give you a few questions to ponder – along with my thoughts on each topic.
When does regulation enter the picture, and what are the ingredients for good oversight?
- Governments define “protected classes” of people (race, gender, etc.) not only to foster equal access and justice but also to make sure everyone understands the rules. Regulation of AI systems should adopt a similar framework: consequential AI use cases (e.g. surveillance) should be subject to oversight.
- It’s all about consequences: is an AI decision, for example, going to send someone to jail? Or are the stakes lower? We need to design regulations that let the technology flourish while still holding its designers and deployers accountable for the damage they cause.
- An explainable model lets non-technical people inspect how the model works, and that opens the door for those with different backgrounds to give feedback. A hard-to-explain model makes it impossible to get input from a diverse set of stakeholders.
- “Explainability” is a close cousin to transparency and there can’t be oversight without transparency. That’s the most important element for ethical AI use.
- Government use cases are problematic simply because of the power governments wield – that power is hard to check, and the consequences of an AI model failure can be severe.
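To make the explainability point above concrete, here is a minimal sketch of what an “explainable” model can look like: a hand-weighted linear score whose per-feature contributions can be printed and challenged by a non-technical reviewer. The feature names, weights, and threshold are purely illustrative, not from any real system.

```python
# A minimal sketch of an explainable model: a linear score where every
# contribution is visible. All names and numbers are hypothetical.

WEIGHTS = {
    "years_experience": 0.5,
    "relevant_certifications": 1.0,
    "referral": 0.25,
}
THRESHOLD = 2.0

def score(applicant: dict) -> float:
    """Sum of weight * feature value over the features we track."""
    return sum(WEIGHTS[f] * applicant.get(f, 0) for f in WEIGHTS)

def explain(applicant: dict) -> list:
    """Per-feature contributions a non-technical reviewer can inspect."""
    return [
        f"{f}: {applicant.get(f, 0)} x {w} = {applicant.get(f, 0) * w:.2f}"
        for f, w in WEIGHTS.items()
    ]

applicant = {"years_experience": 3, "relevant_certifications": 1}
total = score(applicant)       # 3*0.5 + 1*1.0 + 0*0.25 = 2.5
decision = total >= THRESHOLD  # 2.5 >= 2.0, so the applicant passes
```

Because every line of `explain()` maps to a plain-language claim (“experience counts for this much”), stakeholders with different backgrounds can object to specific weights – exactly the kind of feedback a hard-to-explain model shuts out.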
What are some specific hazards unique to AI we should be thinking about?
- AI systems are designed to categorize things based on the variables you’ve built into the model. If you think about it, that goal is eerily similar to the problem of human stereotyping.
- Machine learning systems amplify whatever biases exist in the data used to create them. No data set is free of bias, and that makes mistakes unavoidable.
- No one assigns malicious intent to an algorithm that misclassifies, say, a picture of a leaf as coming from the wrong tree. Categorizing people incorrectly, on the other hand, is a minefield of potential bad outcomes.
- Many AI solutions are designed to put people into categories, and as the technology improves we’ll categorize people ever more accurately. But we’re still categorizing, and sometimes categorization itself leads to bad outcomes. Whether we should categorize at all is a question that moves us from the realm of science and engineering to society and policy.
- What’s important is a clear understanding of the cost and implications of making mistakes, and of the guardrails needed to cushion the impact when a mistake inevitably happens.
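The bias-amplification hazard above can be shown with a toy example (all numbers are hypothetical). Suppose historical decisions favored group “A” 70% of the time and group “B” only 30%. A naive model that learns each group’s base rate and predicts the majority outcome turns that 70/30 disparity into 100/0 – it doesn’t just reproduce the bias, it amplifies it:

```python
# Toy illustration of bias amplification (all data hypothetical).
from collections import defaultdict

history = (
    [("A", 1)] * 70 + [("A", 0)] * 30 +   # group A: 70% positive outcomes
    [("B", 1)] * 30 + [("B", 0)] * 70     # group B: 30% positive outcomes
)

# "Training": compute the positive-outcome rate per group.
counts = defaultdict(lambda: [0, 0])      # group -> [positives, total]
for group, outcome in history:
    counts[group][0] += outcome
    counts[group][1] += 1
rates = {g: pos / total for g, (pos, total) in counts.items()}

# "Prediction": output the majority outcome for each group.
def predict(group: str) -> int:
    return 1 if rates[group] >= 0.5 else 0

# The model now approves 100% of group A and 0% of group B.
model_rates = {g: predict(g) for g in rates}
```

The model maximizes accuracy on the skewed history, which is exactly why accuracy alone is a poor guardrail: the most accurate answer can also be the most discriminatory one.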
Is optimism about the future of AI justified? In what ways?
- When we improve an AI model, those improvements stick. If we continue to attack model bias we should see more accurate, less biased results.
- We know there are many areas in society where implicit bias (by humans) is still a problem. An AI-based job candidate screening tool, for example, could make hiring less prone to human bias.
- AI is not a magical black box. It’s just another tool – a powerful one to be sure – but removing the mystique will go a long way to creating more realistic expectations.
- Transparency around training data, modeling assumptions and design tradeoffs, coupled with an inclusive way to incorporate feedback, could create utilitarian systems without any hype.
- We’re seeing far broader sets of data being made available to develop AI models – both from private enterprises and the government. This is a very encouraging trend because diverse data translates into results that better reflect the community.
- My optimistic scenario is one of boring but useful tools.