Artificial Intelligence (AI) is an important part of our everyday lives, for better or for worse. Given its role in our society, it is critical that AI be developed, designed, and deployed ethically. AI ethics, also called Ethical AI or Responsible AI, can refer both to the process of developing AI and to the AI products themselves.
Companies, governments, associations, and communities have drafted codes of ethics for AI. These codes address issues such as accountability, trust, transparency, fairness, and agency. Philosopher Luciano Floridi has analyzed many of these codes and distilled from them an overarching framework of five key principles:
Principle #1. Beneficence: AI should be developed and applied to improve the well-being of our planet and its people.
Principle #2. Nonmaleficence: Because AI could end human life as we know it, a “do no harm” principle is critical. Just as important is avoiding harms to privacy, autonomy, employability, and other such interests.
Principle #3. Autonomy: Our ability to act freely and independently must be preserved and promoted, while the autonomy of machines must be restricted.
Principle #4. Justice: AI must be developed, designed, and deployed in ways that promote justice, fairness, equity, and related values.
Principle #5. Explicability: To uphold the other four principles, we need to know the “how” and “why” of AI systems and products. Intelligibility and accountability are key: understanding how a system works and why it produces its outcomes lets us hold the right parties responsible for AI’s beneficial and harmful impacts.
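Principles like justice and explicability can be given concrete, measurable counterparts in practice. As a purely illustrative sketch (not drawn from Floridi’s framework or any particular toolkit, and with all function and variable names invented for this example), the following Python snippet computes one simple fairness signal, the demographic-parity gap: the difference in positive-decision rates between groups of people affected by an automated decision.

```python
# Hypothetical illustration only: a minimal demographic-parity check.
# All names here are invented for this sketch, not taken from any library.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the fraction of positive (1) decisions for each group."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    # Toy data: model decisions (1 = approved) and a protected attribute.
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    attrs = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print("Selection rates:", selection_rates(preds, attrs))
    print("Demographic parity gap:", demographic_parity_gap(preds, attrs))
    # Group A is approved 75% of the time, group B only 25%: a gap of 0.50.
```

A number like this does not settle an ethical question on its own, but it makes it easier to see, explain, and account for how a system treats different groups, which is exactly what the explicability principle asks for.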
Because AI has the potential to improve our world, Ethical AI calls for more than simply preventing harm. Another key concept in the ethics of AI is AI for Social Good (“AI4SG”): the idea that those who develop and apply AI have a moral responsibility to use it to advance social welfare and promote the well-being of our planet.
So, while the ethics of AI are still evolving, one thing is clear: policymakers, business leaders, technology developers, academics, and communities must come together to mitigate harms and to ensure that AI supports a flourishing global society.