Ethical Artificial Intelligence: What It Is, Its Principles, and How to Apply Them

Artificial intelligence (AI) is a technology that enables machines and systems to perform tasks that would normally require human intelligence, such as reasoning, pattern recognition, and decision-making. AI has the potential to benefit many areas of society, including health, education, transportation, and security. However, it also brings challenges and risks, such as reproducing or amplifying biases, violating privacy, compromising security, and affecting employment. It is therefore essential that AI be developed and used in an ethical, responsible, and fair manner, respecting human rights and civil liberties.
But what does it mean for AI to be ethical? How can we ensure that AI is used for good rather than harm? Which principles and values should guide the development, deployment, and use of AI? And how should the challenges of putting ethical AI into practice be addressed? These are some of the questions we explore in this article, which presents the concept, principles, and applications of ethical artificial intelligence.
What is ethical artificial intelligence?
Ethical artificial intelligence is the field that studies how to ensure that AI is used responsibly, fairly, and transparently, in accordance with society's ethical principles and values. Ethical AI seeks to avoid or minimize the negative impacts of AI, such as discrimination, exclusion, manipulation, exploitation, and misinformation, and to promote or maximize its positive impacts, such as inclusion, participation, collaboration, innovation, and sustainability.
In an era where AI is increasingly present, understanding and implementing ethical principles is vital. It not only strengthens public trust in the technology but also helps ensure that its benefits are shared fairly, avoiding harm and abuse.
Ethical AI is not an isolated discipline but an interdisciplinary approach that draws on computer science, philosophy, sociology, psychology, law, and other fields. Nor is it the exclusive concern of AI developers or users: it is a responsibility shared by all actors in the AI chain, including researchers, educators, regulators, legislators, journalists, and activists.
What are the principles and values of ethical artificial intelligence?
There is no universal consensus on the principles and values of ethical AI, but there are various proposals and initiatives that try to define and operationalize these concepts. Some examples are:
- The Asilomar AI Principles, developed by a group of experts at the 2017 Beneficial AI conference, which contain 23 principles divided into three categories: research issues, ethics and values, and longer-term issues.
- The Ethics Guidelines for Trustworthy AI, published by the European Commission's High-Level Expert Group on AI in 2019, which set out seven key requirements for trustworthy AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability.
- The OECD AI Principles, adopted by member countries of the Organisation for Economic Co-operation and Development (OECD) in 2019, which contain five values-based principles for responsible AI: inclusive growth, sustainable development, and well-being; human-centred values and fairness; transparency and explainability; robustness, security, and safety; and accountability.
Despite the diversity of proposals, it is possible to identify some recurring principles and values that can serve as a reference for ethical artificial intelligence. Some of these are:
- Beneficence: AI should be used to promote human and social well-being, contributing to sustainable development and solving global problems.
- Non-maleficence: AI should avoid causing harm or suffering to human beings and the environment, preventing or mitigating the risks and negative impacts of AI.
- Autonomy: AI should respect the ability and freedom of human beings to make decisions about their own lives, ensuring informed consent and active participation of users and those affected by AI.
- Justice: AI should be used to promote equality of opportunity and rights, avoiding or correcting inequalities and discrimination that AI may generate or amplify (the first sketch after this list shows one way to measure such disparities).
- Explainability: AI should be transparent and understandable, allowing users and those affected by it to know how, why, and with what data it operates and makes decisions, so that its outputs can be verified, challenged, and corrected. Explainability is fundamental to trust: when users understand how a decision was made, they are more likely to trust the technology and its creators (the second sketch after this list shows one simple way to surface such explanations).
- Privacy: AI must protect the personal and sensitive data of users and those affected by AI, ensuring control, security, and confidentiality of such data, and respecting norms and rights regarding data protection.
- Security: AI should be robust and reliable, functioning appropriately and predictably, avoiding or minimizing failures, errors, and vulnerabilities that may compromise the integrity, availability, and quality of AI.
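To make the justice principle more concrete, the sketch below shows one common way to check a model's outcomes for group disparities: the demographic parity difference, i.e., the gap in positive-decision rates between two groups. This is a minimal illustration in plain Python; the decisions, group labels, and the 0.10 tolerance are hypothetical, and a real fairness audit would use several complementary metrics.

```python
# Minimal sketch: demographic parity difference between two groups.
# The data, group labels, and the 0.10 tolerance are illustrative assumptions.

def positive_rate(decisions):
    """Share of cases that received a positive decision (e.g., loan approved)."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions, groups, group_a, group_b):
    """Absolute gap in positive-decision rates between group_a and group_b."""
    rate_a = positive_rate([d for d, g in zip(decisions, groups) if g == group_a])
    rate_b = positive_rate([d for d, g in zip(decisions, groups) if g == group_b])
    return abs(rate_a - rate_b)

# Hypothetical model outputs: 1 = approved, 0 = denied.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups, "A", "B")
print(f"Demographic parity difference: {gap:.2f}")

# An organization might flag the model for review if the gap exceeds
# an agreed tolerance (here, an arbitrary 0.10).
if gap > 0.10:
    print("Disparity above tolerance: investigate data and decision policy.")
```

A gap of zero does not by itself guarantee fairness, and different fairness criteria (for example, equalized odds) can conflict with one another; the point is simply that the justice principle can be turned into measurable, auditable quantities.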

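Explainability can likewise be operationalized. The sketch below, assuming scikit-learn is available and using a tiny dataset and feature names invented for illustration, fits an inherently interpretable logistic regression model and reports each feature's coefficient, one simple way to tell an affected person which inputs pushed a decision up or down. It is not a complete explanation method, just an indication of how "why did the model decide this?" can be answered in code.

```python
# Minimal sketch: an interpretable model whose coefficients can be reported
# to users as a simple explanation of its decisions.
# The feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income_thousands", "debt_ratio", "years_employed"]

# Tiny synthetic dataset: rows are applicants, columns follow feature_names.
X = np.array([
    [50, 0.40, 2],
    [80, 0.20, 10],
    [30, 0.70, 1],
    [95, 0.10, 7],
    [42, 0.55, 3],
    [67, 0.30, 5],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = approved, 0 = denied

model = LogisticRegression(max_iter=1000).fit(X, y)

# Coefficients indicate how each feature shifts the decision (sign and size).
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>16}: {coef:+.4f}")
```

For more complex models, post-hoc techniques such as feature attribution or counterfactual examples play the same role: giving affected people a usable account of how a decision was reached.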
How to apply the principles and values of ethical artificial intelligence?
The principles and values of ethical artificial intelligence must be applied at every phase and level of the AI chain, from planning and design through development, testing, deployment, use, evaluation, and review, to eventual decommissioning. To do this, organizations should adopt measures and tools that help implement and verify AI's compliance with ethical principles and values. Some of these measures and tools are:
- Codes of conduct: Documents that establish the norms and good practices to be followed by professionals and organizations involved in the AI chain, in line with the ethical principles and values of AI. Examples include the Code of Professional Ethics of the Brazilian Computer Society (SBC) and the Partnership on AI's Responsible Practices for Synthetic Media framework.
- Impact assessments: Processes for identifying, analyzing, and mitigating the potential positive and negative impacts of AI on human rights, society, and the environment, both before and after deployment. Examples include the AI Impact Assessment and the Algorithmic Impact Assessment (a minimal illustrative record appears after this list).
- Audits and certifications: These mechanisms aim to verify and attest to AI’s compliance with ethical, legal, and technical standards and requirements, through independent testing, inspections, and validations. Examples include the AI Audit Framework and the AI Certification Scheme.
- Governance mechanisms: Structures and instruments for ensuring the participation, transparency, accountability, and responsibility of actors involved in the AI chain, through consultations, dialogue, monitoring, and sanctions. Examples include advisory bodies such as the European Commission's High-Level Expert Group on AI and Brazil's proposed National Council on Artificial Intelligence (anticipated in Bill No. 2338 of 2023, still under review as of January 2024).
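As a loose illustration of how such measures can be made routine rather than ad hoc, the sketch below defines a minimal, hypothetical impact-assessment record and a completeness check that could run before a system is approved for deployment. The field names and the sign-off rule are assumptions made for illustration, not a reference to any specific assessment framework.

```python
# Minimal sketch: a hypothetical pre-deployment impact-assessment record
# with a completeness check. Field names and rules are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    system_name: str
    intended_use: str
    affected_groups: list[str] = field(default_factory=list)
    identified_risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    reviewer: str = ""

    def missing_items(self) -> list[str]:
        """Return the items that still need attention before sign-off."""
        missing = []
        if not self.affected_groups:
            missing.append("affected_groups")
        if not self.identified_risks:
            missing.append("identified_risks")
        if len(self.mitigations) < len(self.identified_risks):
            missing.append("mitigations (one per identified risk)")
        if not self.reviewer:
            missing.append("reviewer")
        return missing

assessment = ImpactAssessment(
    system_name="loan-scoring-v2",
    intended_use="Rank consumer credit applications for human review",
    affected_groups=["credit applicants"],
    identified_risks=["disparate approval rates", "opaque denials"],
    mitigations=["quarterly fairness audit"],
)

print("Blocked for deployment, missing:", assessment.missing_items())
```

The value of such a record is less in the code than in the habit it enforces: risks and mitigations are written down, reviewed by a named person, and can later be audited.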
Conclusion
Artificial intelligence is a technology that can bring both benefits and challenges to humanity and the planet. It is therefore essential that AI be developed and used in an ethical, responsible, and fair manner, respecting society's principles and values.
In this article, we presented the concept, principles, and applications of ethical artificial intelligence, the field concerned with ensuring that AI is used for good rather than harm. We also surveyed some of the proposals and initiatives that attempt to define and operationalize the ethical principles and values of AI, such as beneficence, non-maleficence, autonomy, justice, explainability, privacy, and security.
Moreover, we discussed some of the measures and tools that can assist in implementing and verifying AI’s compliance with ethical principles and values, such as codes of conduct, impact assessments, audits and certifications, and governance mechanisms.
Thank you for your attention, and see you next time!