What springs to mind when you hear or read about AI ethics? Timnit Gebru, data biases, automated inequality, discrimination at scale? But have you ever thought about what we actually mean by the term "ethical AI"? Which ideas about ethical AI are shared and agreed upon widely enough to count as such? And how do you develop actionable steps toward ethical AI practices at your company? Let's dive right into it.
There is no universally shared practice worldwide, but the need for an ethical AI framework is rising. Meanwhile, some principles appear more often than others in drafts of ethical AI policies, according to the study "The global landscape of AI ethics guidelines" by Anna Jobin et al. Per their findings, AI ethics guidelines converge mainly on five ethical principles: "transparency, justice and fairness, non-maleficence, responsibility and privacy." Skip to the next block if you are eager to learn the actionable steps for implementing ethical AI at your company. Here, we'd like to provide you with some foundational principles and useful links for further research.
Based on the OECD recommendations, you can think of transparency in AI as facilitating a general understanding of AI systems and making all stakeholders aware of possible outcomes of, and interactions with, those systems. It is part of a bigger story related to the black-box and white-box AI principles. If you are interested in the practices of transparency in AI, check out our blog Introduction to the White-Box AI: the Concept of Interpretability for more details.
As per the guidelines of the European Commission, this principle relates to "ensuring equal and just distribution of both benefits and costs, and ensuring that individuals and groups are free from unfair bias, discrimination and stigmatisation." To make this principle more tangible, take automated gender recognition in commercial solutions as an example: these systems still have a long way to go, as they tend to misclassify some groups of people far more often than others.
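One concrete way to probe this kind of unfairness is to compare error rates across groups. Below is a minimal, illustrative sketch of such a per-group audit; the records and group labels are made up for the example, not taken from any real system.

```python
# Per-group error-rate audit: the kind of check that surfaces the
# disparities observed in commercial gender recognition systems.

def error_rate_by_group(records):
    """records: list of (group, predicted, actual) tuples."""
    totals, errors = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted != actual:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Purely illustrative predictions for two demographic groups.
records = [
    ("group_a", "f", "f"), ("group_a", "m", "m"), ("group_a", "f", "f"),
    ("group_b", "m", "f"), ("group_b", "f", "f"), ("group_b", "m", "f"),
]

rates = error_rate_by_group(records)
disparity = max(rates.values()) - min(rates.values())
print(rates)      # per-group error rates
print(disparity)  # gap between best- and worst-served groups
```

A large gap between the best- and worst-served groups is exactly the kind of signal that should block a release until the disparity is investigated.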
"AI systems and the environments in which they operate must be safe and secure" and defended from malicious use, as the European Commission puts it. One of the interpretations proposed by Anna Jobin et al. implies the "avoidance of specific risks or potential harms — for example, intentional misuse via cyberwarfare and malicious hacking" — and suggests risk-management strategies.
Privacy is closely related to data governance, at least per the European Commission's guidelines, and ties into the principle of non-maleficence. AI actors should ensure that data collected by, or produced as an output of, an AI system does not harm its users. Moreover, the regulation covers "responsibility and accountability for AI systems and their outcomes, both before and after their development, deployment and use." For further research, also check out this great list on AI ethics curated by Eirini Malliaraki.
However, developing ethical AI practices is easier said than done. Shipping a bias-free AI-powered product can be tricky, as tech giants have demonstrated multiple times. So, how do you transform nebulous theory into actionable steps at your company? First, AI ethics should not exist in a vacuum but should be aligned with the values and principles of your company. You will ensure a sustainable and scalable ethical AI program by building ethical AI practices on top of your mission and values. Consider the following steps:
AI ethics is the responsibility of the whole company, but it starts at the C-level. Executives set the tone for how employees will adopt these guidelines. Usually, the data governance department deals with compliance and privacy issues, so it can also take on AI ethics challenges. Consider inviting other relevant experts and ethicists to strengthen your ethical AI team.
An internal document laying out your company's ethical standards in clear terms comes in handy in emergencies. With it, product owners, data collectors, and managers will know how to act when a crisis comes knocking. Do not forget to define precise KPIs and quality assurance practices, and keep your framework updated. Consider the specifics of your domain: in retail, for instance, it is crucial to measure the quality of recommendations and ensure they are free of biased associations with particular groups of society.
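Such a KPI can be as simple as a demographic-parity check on recommendation logs. The sketch below is one illustrative way to do this; the field names, groups, and alert threshold are assumptions for the example, not a prescribed standard.

```python
# Illustrative fairness KPI for a retail recommender: how evenly a
# promoted item is shown across user groups (demographic parity).

def recommendation_rate(logs, group):
    """Share of logged users in `group` who were shown the promoted item."""
    shown = [entry["promoted_shown"] for entry in logs if entry["group"] == group]
    return sum(shown) / len(shown) if shown else 0.0

def parity_gap(logs, groups):
    """Difference between the highest and lowest per-group exposure rates."""
    rates = [recommendation_rate(logs, g) for g in groups]
    return max(rates) - min(rates)

# Toy recommendation logs; 1 means the promoted item was shown.
logs = [
    {"group": "a", "promoted_shown": 1},
    {"group": "a", "promoted_shown": 1},
    {"group": "b", "promoted_shown": 0},
    {"group": "b", "promoted_shown": 1},
]

ALERT_THRESHOLD = 0.2  # illustrative KPI threshold, tune per domain
gap = parity_gap(logs, ["a", "b"])
print(gap, gap > ALERT_THRESHOLD)
```

Wiring a check like this into routine quality assurance turns the abstract principle of fairness into a number your team can monitor and act on.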
Someone has probably already faced this challenge and figured out how to deal with it. Take the healthcare industry as an example: patients cannot be treated until they give their consent. The same should apply to user data that is collected, used, and shared further. All privacy details should be communicated clearly so that users understand the possible outcomes.
Check out our blog HIPAA vs. GDPR: major acts regulating health data protection for more details.
Ensure that everyone knows about your code of conduct regarding ethical AI. For example, just a decade ago, companies paid little attention to cybersecurity. Now, the average organization faces 22 security breaches every year, as per the State of Cybersecurity Report 2020 by Accenture, which makes cybersecurity a primary concern in every organization. Ethical AI practices are heading the same way, so it is better to define the crucial ethical infrastructure at the outset. Start building a culture that nurtures the right attitude toward ethical AI. Keeping everyone involved informed and motivated to respect these principles is an investment in the reputation of your AI-driven product.
It is crucial to constantly monitor changes in the AI world, since it is impossible to foresee all outcomes. But developing an ethical AI framework, bearing the best practices in mind, fostering an AI-bias-aware culture at your company, and monitoring developments in ethical AI will make your future safer.
Ethical AI is not yet a globally agreed-upon, conventionalized practice, but researchers suggest the following principles: every organization working with AI should consider "transparency, justice and fairness, non-maleficence, responsibility and privacy." In practice, we propose using your existing organizational infrastructure, adopting the best methods available, fostering an AI-bias-aware culture, and constantly monitoring changes in this domain.