AI & Ethics – Precautionary and Proactive Principles

Ethics, what’s in a word?

According to Wikipedia, ethics is a branch of philosophy that involves systematizing, defending, and recommending concepts of right and wrong behavior.

Speaking about AI and ethics, I consider it essential to determine how you can use AI to systematize, defend, and recommend right behaviors, that is, right actions or abstentions for citizens.

We were using tools to accomplish our goals long before we were fully human, starting with the first stones and the first fires. Only in relatively recent times did we become conscious that using new tools can also be dangerous. And in less than a century, we have created tools that could destroy us as a species: nuclear weapons.

We live in a world with a great diversity of beliefs and ethical principles. However, compared to the past, the world today is far more united. The differences between the ethical principles of a citizen of Mexico City, Tokyo, or Johannesburg today are almost nothing, at least when you compare them to the differences between the ethical principles of the Aztecs, the Japanese, or the San people six centuries ago. Most citizens broadly accept many shared moral concepts. For example, today almost nobody in the world would consider killing foreigners or disabled people admissible, let alone a duty.

AI & Precautionary Ethics

In general, a consensual ethical goal is not to harm people or make life worse. If you apply this to AI, your Artificial Intelligence must avoid enabling users to discriminate and to create inequalities.

But this is not enough. You also have to prevent AI from causing harm indirectly. When AI changes something for a person or a group, it might create inequalities. For instance, each time AI use expands, people can lose their jobs. When AI mimics social contact, it could destroy real social connections. When AI replaces specific human activities, it creates a danger of degradation, because some individuals will “unlearn” what AI is now doing for them.

In this conception, the prominent role of ethical guidance for AI is to permit its use only when it is certain to cause no harm. Since the choice of whether to use AI is made mostly by those who own the machines, ethical guidelines are there to restrain AI owners from using it unless the absence of harm is proven. That is a sort of general precautionary principle applied to the machines: primum non nocere (first, do no harm).

AI & Proactive Ethics

However, you could consider something slightly different as a general consensual goal. The general principle of ethics could be not “how to avoid harming people” but “how to make a good life possible.” That means the general ethical principle would be positive, not negative.

The duty to help people in need is a largely accepted moral principle. In some countries, such as France and Germany, it is even a crime to abstain from helping a person in danger, at least when there is no danger for the person helping. Helping those in need is perhaps one of the most profound intuitive responses of human beings. Somebody once said that even the worst criminal would stop a child from falling down a well.

In this sense, we could consider that accelerating AI to save people is not only something positive; it is also an ethical duty. For example, we know that each day about 15,000 people die of hunger and malnourishment, and that about 120,000 people die from diseases related to age. We also know that accelerating scientific and medical research could save millions of lives, and that AI can help with this. Furthermore, AI could help create better ways to fight malnourishment, for example by improving agriculture and the transportation of food.

You could be reluctant to promote AI for good because you are not totally sure that it saves lives. However, it is not ethically admissible to do nothing just because a positive result is only a possibility. Suppose you see an older woman fall into a river in winter. Saying “I am not trying to rescue her because she will probably die anyway from her wounds or hypothermia” is not a valid ethical answer.

Sure, there are differences between the ethical duty to rescue individuals and using AI to save lives. 

The risk for individuals helping each other is personal and immediate; for AI, it is collective and longer-term.

People saved by “altruistic AI” will benefit from it only years from now. They may never even know that better AI was helping them, for example, to avoid cancer. But this is not an ethical difference; it is only a practical one.

Many ethicists will also argue that “pushing” for AI can have unexpected collateral consequences. That is true, but it is also the case when you help individuals in need. The classic ethical question is: “Would you save Adolf Hitler as a child if you knew what would happen?”

Thinking about collateral consequences is essential. But those who make this argument to avoid action might forget that collateral consequences are not necessarily collateral damage. They also often forget that doing nothing has collateral consequences too. For example, if you decide not to use AI for faster medical and scientific progress, people could trust their governments and the scientific community less, because they see that AI is not focused on the common good.

We could write thousands of pages about the possible collateral consequences of developing AI faster for the common good. However, we live in such a complex world that it is probably impossible to be reasonably sure of the collateral consequences of big decisions. When the direct consequences are positive and the indirect consequences are unknown, the ethical choice is clear.

Besides, we should perhaps sometimes be afraid of artificial intelligence. But we should certainly also be afraid of stupidity, whether it is artificial or human. In general, it seems logical to consider that using more intelligence, including AI, for the common good is positive.

Finally, concerning the proactive point of view on AI for Good: we know that AI, especially a possible “Artificial General Intelligence,” might be very dangerous, including the risk of humanity’s destruction by malicious or non-benevolent AI. However, it is also risky to stop AI from progressing for the collective good just because you do not know everything about the consequences, especially since other, harmful forms of AI can develop elsewhere at the same time.

In my opinion, we could and should use AI to improve health and well-being, to develop proactive ethical AI tools, and to “teach” AI. The major ethical goal for AI is to enhance people by supporting them in becoming more resilient, happier, and healthier. In other words, AI might eliminate many potential existential risks and thereby save many lives.