AI, Ethics & Law – What is Fair & Unfair?

If there is one thing the webcast on ‘AI, Ethics and Law: what is fair and unfair?’ clarifies, it is the fact that there is no general or clear-cut answer to this fairness question, even if AI developers, deployers, and all those involved would welcome more clarity. There are, however, numerous ethical and legal principles that can offer broader guidance in this respect. For instance, you can count on an AI system being ‘unfair’ if it unjustly infringes fundamental rights like privacy or non-discrimination, or if it causes physical or mental harm.

Yet these principles still need to be applied to the specific context in which an AI system is developed or used. In other words, it is essential to perform a contextual and continuous assessment of the design, development, and use of AI systems to align them with fundamental rights and values. Moreover, the measures to ensure this alignment must be appropriately tailored to the particular application.

The webcast presentations by Lode Lauwaert, Martin Canter, and me aim to shed some light on these issues and can be viewed through this link. In my presentation, for instance, I included an example of the application of the right to non-discrimination and how its contextual implementation can differ between the context of insurance rates and the context of healthcare. Note, however, that the need for a contextual application of rights and principles is not unique to AI systems. We can draw on the rich experience we have with other situations and technologies to conduct this assessment.

Below, I provide brief answers to the audience questions submitted during the webcast. Many of these questions touch upon intricate ethical and legal issues that cannot be captured in simple solutions or black and white reasoning. My answers are therefore only a starting point; hopefully, they encourage further reflection and discussion on this topic.

The fairness concept 

A first set of questions focused on the criteria that make the (non-)use of a particular AI system unfair. One participant asked: “Suppose an AI system can save more lives of old people than young people – should it be used or would that be unfair? Is it not unfair to let people die if the use of an AI system could avoid this?” Another participant raised: “As technology is not neutral, one could also claim that traffic light colors are unethical from the perspective of those who are color-blind. So isn’t it the case that an AI system only becomes “unfair” when a certain % of people are affected by the law / convention? And if so, what percentage is that?”

As extensively discussed during the seminar – especially by Lode Lauwaert and by Martin Canter – the concept of fairness can be defined in different ways, depending on the ethical theory you uphold. Moreover, different fairness concepts will be more or less appropriate in different contexts. That immediately shows that there is no generally applicable black and white answer. Some people only consider a measure fair when it advantages everyone, while others consider it sufficiently fair if a measure benefits most people, even if some will not be benefitted or may even be disadvantaged.

Should everyone gain?

The first approach may sound ideal but is, of course, not always possible. Some measures are, for instance, specifically meant to advantage a particular group. As a consequence, that group will benefit from certain resources which are no longer available for others. In such a case, a ‘fairness’ assessment looks very different depending on whether the specific group is disadvantaged or vulnerable, or already in a very privileged position.

However, solutions can often be found where everyone gains to some extent, even if not everyone gains equally. Consider the development of an AI system that helps identify patients with a specific type of cancer. Such a system will not advantage people without cancer (or with a different type of cancer), but it can be said that everyone, regardless of status, benefits from living in a society that improves cancer identification and lessens the pressure on the healthcare system as a whole.

Should most gain?

The second approach sounds more practical but is not always acceptable. A simple cost-benefit calculation might, for instance, overlook that those who are disadvantaged by a measure already start from a worse position, while those who are advantaged already find themselves in a more privileged position. Or it might overlook that those who are disadvantaged will be negatively impacted to a much greater extent than the advantaged group will be benefitted.

That is where fundamental rights and other essential safeguards come in: to ensure that those negatively impacted by a specific AI system or measure still retain a certain level of protection. Ensuring this protection is, however, not always easy in the context of AI, as it can involve opaque decision-making processes. Developers and deployers of AI systems hence need to acknowledge their responsibility to uphold this protection. They should map the impact that their AI system can have on different stakeholders and ensure that the rights of those involved are fully respected.

Finally, some have argued that not only the abuse and overuse of AI, but also the underuse of AI can lead to an unfair situation, for instance, where AI systems could help save lives or otherwise benefit human beings significantly. The question, however, remains: what are the broader effects of developing and using the system? AI systems are socio-technical systems. They do not exist independently but are part of a broader social, societal, and technical context. Therefore, before concluding that a situation is unfair because a potentially beneficial AI system is not used, the impact that the development and use of the system can have should be assessed, not just in terms of benefits but also in terms of costs and risks, not just for those directly involved but for society at large. This assessment always depends on the specific situation and domain, as the ethical and legal questions raised by AI are context-dependent. 

Piercing through the black-box

An observation made by a webcast participant is that you cannot always tell whether something went wrong in a black-box algorithm. We can document things as much as possible, yet we cannot always explain or assess an outcome or process as “wrong”, as this depends on the definition of “wrong”.

For so-called black-box algorithms – for instance, those based on deep learning techniques – the decision-making process of the algorithm is indeed non-transparent. Even the developers of the system do not precisely know how a certain decision came about. A lot of exciting research is currently ongoing in the field of “explainable AI”, with the aim of rendering such systems less opaque. There is, however, still a long way to go.
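By way of illustration, the sketch below (in Python, using scikit-learn on synthetic data) applies one well-known post-hoc explanation technique, permutation importance. It does not reveal how the model reasons internally, which is exactly the limitation described above, but it gives a rough indication of which input features the model relies on. The data and model are placeholders chosen purely for illustration.

```python
# A minimal sketch of one post-hoc explanation technique: permutation importance.
# It does not open the black box, but it estimates how strongly the model's
# predictions depend on each input feature. Requires scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real (and possibly sensitive) dataset.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```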

During the webcast, I mentioned that increased transparency through documentation could, in the meantime, already help to minimize certain ethical risks. It is true that documenting elements such as the type and selection of data, the reason for using a particular AI technique, the function that the algorithm has to optimize, the purpose of the system, and the testing methods used to assess performance will not render the system explainable.

However, it does give more insight into the processes around the system. This way, at least, developers of AI systems are forced to reflect on the particular choices they make. Moreover, it makes it possible to verify (ex post) whether, and which, steps were taken to mitigate potential risks (e.g., was the training dataset representative enough?) and to ensure compliance with existing legislation. Without documentation, it is difficult to evaluate AI systems and hold the organizations that use them accountable for potential harm. Finally, documentation also allows organizations that use AI responsibly to show their clients that they apply appropriate methods and take adequate measures to ensure legal, ethical, and robust AI.
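To make this a little more tangible, here is a minimal sketch, in Python, of what such a documentation record could look like. The field names and example values are hypothetical and chosen purely for illustration; established documentation practices, such as model cards and datasheets for datasets, are far more elaborate than this.

```python
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelDocumentation:
    """A simple record of the choices behind an AI system (illustrative only)."""
    purpose: str                      # what the system is meant to do, and for whom
    data_sources: list                # where the training data comes from
    data_selection_rationale: str     # why this data was chosen, known gaps or skews
    technique: str                    # e.g. gradient-boosted trees, deep neural network
    technique_rationale: str          # why this technique was chosen over alternatives
    optimization_objective: str       # the function the algorithm is trained to optimize
    evaluation_methods: list          # how performance and robustness were tested
    known_limitations: list = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)


doc = ModelDocumentation(
    purpose="Flag CT scans for priority review by a radiologist",
    data_sources=["hospital archive 2015-2020", "public benchmark set"],
    data_selection_rationale="Largest labelled sets available; under-represents patients under 30",
    technique="Convolutional neural network",
    technique_rationale="Best accuracy on image classification in internal tests",
    optimization_objective="Minimize cross-entropy loss on the 'suspicious finding' label",
    evaluation_methods=["Held-out test set", "Per-subgroup error rates"],
    known_limitations=["Not validated on scanners outside the training hospitals"],
)
print(doc.to_json())
```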

Human versus algorithmic decision-making

Another set of questions focused on whether more specific compliance standards are needed for AI-based systems than for human decisions more generally. AI merely automates a decision-making process, so why not impose the same level of scrutiny on any human decision? How, then, should we distinguish between concepts that are AI-specific and those that also apply in other contexts? And is it at all useful to have AI-specific legislation?

The above questions rightly indicate that many ethical questions arising in the context of AI – and automated decision-making more generally – also arise in other contexts. You do not need to use an AI system to unlawfully discriminate against people or to breach the right to privacy. In this regard, it is important to recall that AI systems do not exist independently but are developed and used by human beings. In other words: it is always about human decision-making. Concretely, in the context of AI systems, it is about human decisions to design and develop a system in one way or another. This includes the decision to use a particular technique or optimization function, the decision to deploy the system in a specific context, and the decision to use an AI system at all. Regulating AI is thus not about regulating “AI” as such, but about regulating human behavior related to AI systems.

A related question is whether the standards aiming to govern human behavior around AI systems should be technology-neutral or AI-specific. I’ve touched upon this question in this article (2.2), and many others have written about it too. There is no simple answer. The best approach might be a mixed one: confirming that legal rules on human action also apply to human action in the context of AI, while acknowledging that some features of AI systems require extra attention, such as the delegation of human control and authority to systems with limited transparency.

Human interference

One webcast participant asked how humans should interfere in algorithmic decision-making processes to correct past injustices. To what degree should they interfere, and when is such interference justified?

An example could help clarify this issue. Suppose an organization wants to hire someone for a job opening and wants to use an AI system to go through the mass of incoming CVs to save time. It could consider using a system trained on data about past employees who worked within the organization and excelled. On that basis, the system could then learn to identify the features that would be desirable in a new candidate.

Now let’s assume that, in the past, this organization almost exclusively employed men. This can be the case if, for instance, it operates in a historically male-dominated sector or if its previous management upheld discriminatory hiring policies. In this case, the dataset of past employees who excelled will, by definition, contain almost only men. Hence, the algorithm might learn to associate male-oriented terms with excellence. Without remedying this, the algorithm might consider the CV of a female applicant less valuable merely because it does not contain male-oriented terms.

It is evident that, in this case, intervention is needed to secure a more representative dataset or to use alternative hiring methods to ensure that female applicants are not unjustly disadvantaged. The precise intervention will, however, depend on the specific context and application of the AI system in question. Again, there is no one-size-fits-all solution to this problem.
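To make the kind of check described above a little more concrete, the sketch below (in Python, with made-up data) compares shortlisting rates across groups of applicants. The numbers, the grouping, and the 80% threshold are illustrative assumptions; a real audit would consider far more than a single metric and a single attribute.

```python
from collections import defaultdict

# Hypothetical screening outcomes: (applicant_group, shortlisted_by_the_system)
outcomes = [
    ("male", True), ("male", True), ("male", False), ("male", True),
    ("female", False), ("female", True), ("female", False), ("female", False),
]

def selection_rates(records):
    """Shortlisting rate per group: shortlisted / total applicants in that group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, shortlisted in records:
        totals[group] += 1
        selected[group] += int(shortlisted)
    return {group: selected[group] / totals[group] for group in totals}

rates = selection_rates(outcomes)
print(rates)  # {'male': 0.75, 'female': 0.25}

# One common (and much debated) rule of thumb: flag the system for review if the
# lowest selection rate falls below 80% of the highest (the "four-fifths rule").
lowest, highest = min(rates.values()), max(rates.values())
if highest > 0 and lowest / highest < 0.8:
    print("Disparity detected: review the training data and the model before use.")
```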

Responsibility, accountability, and liability

One participant asked whether the terms responsibility, accountability, and liability can be distinguished from each other from a legal and philosophical standpoint. As entire books have been written about these terms, I provide only a simplified answer to clarify how I used them during the webcast, tailored to the AI context.

Responsibility focuses on the duty of AI practitioners (i.e., developers and deployers) to ensure that AI systems operate in a manner that does not cause unjust harm and, more generally, that is legal, ethical, and robust. It expresses a duty of care that should be taken up by AI practitioners throughout the entire process.

Accountability becomes relevant once a specific process has taken place. From an external point of view, it denotes the possibility of holding AI practitioners to account if something goes wrong. From the AI practitioner’s viewpoint, it denotes taking ownership of decisions and actions, and of all related consequences. That also means remedying any harm and ensuring redress.

Liability is more closely related to the legal field. It can be understood as the state of being legally answerable for a particular harm. It hence corresponds to legal accountability and typically entails the obligation to compensate any unjustly suffered harm.

For a more extensive discussion of responsibility, accountability, and liability, I recommend the special edition of the Utrecht Law Review edited by Ivo Giesen & François Kristen.

Norms and values

In my webcast presentation, I discuss norms and values that should be taken into account in the context of AI. Someone asked how these two terms differ from one another. Similar to the question above, academic literature extensively describes norms and values, as well as their relationship with each other. Therefore, I only provide a simplified answer that hopefully helps to distinguish the two terms better.

Norms and values are often used interchangeably. However, values embody more abstract conceptions that individuals, groups or society at large deem valuable or desirable. On the other hand, norms typically denote more specific standards or rules for human behavior, which can be binding or non-binding, formal or informal. To give a simplified example: while equality can be considered a value, the fact that people in similar situations should be treated similarly can be regarded as a norm.

AI rules and clarity

One of the webcast participants stated that: “A significant barrier to implementing laws consists of the fact that the overwhelming majority of people do not know or understand the entire legal code. Even experts struggle with what something like GDPR actually entails. Similarly, very few people have a good philosophical background or choose to engage with philosophical thinking. As a result, there is a meta-issue: should we require AI guidelines to have a certain level of clarity and brevity to ensure implementation? After all, what are the ethics of creating laws and guidelines so complex they aren’t widely understood or implemented?”

As discussed above, many ethical questions that arise in the context of AI also arise in other contexts. In this sense, AI is not that unique. Like many technologies – and like human behavior more generally – it poses dilemmas around fairness and justice and requires us to reflect on what this means concretely. You do not need to be a specialized lawyer to behave legally, nor do you need to be a specialized ethicist to act ethically in society.

Certainly, legal rules should ideally be formulated straightforwardly so that they can be applied in a predictable manner and provide legal certainty. Guidelines meant to help organizations implement ethical or legal principles likewise benefit from clarity. At the same time, there is a tension between keeping legal rules sufficiently broad and flexible to remain relevant as a technology evolves, and making them as precise as possible and tailored to each situation. That is a balance to be carefully drawn and refined through feedback from those involved.

Do you have a burning question about AI, Ethics & Law? Contact us via the below Virtual DI Summit Contact Form
