AI systems and applications are being unleashed across sectors without formalized accountability, impact assessment, or regulatory oversight of key ethical issues. As the authors of a 2024 Computer article point out, this makes voluntary standards and independent scrutiny all the more imperative.
The article, “Artificial Intelligence For the Benefit of Everyone,” reports on a 2023–2024 review of the IEEE Standards Association’s specific, measurable, achievable, relevant, and time-bound (SMART) criteria to evaluate the ethics of AI systems in four key areas: accountability, algorithmic bias, transparency, and privacy.
The review was motivated by the unprecedented growth and evolution of AI since the SMART criteria were developed in 2018–2021. Following is a brief overview of the review's findings in each of the four key SMART areas.
The accountability area is aimed at keeping humans in the loop and on the hook for AI systems and their decisions, actions, errors, and outcomes.
Over the past three years, accountability has been the subject of national and sector-specific regulatory actions, from the European Union’s Artificial Intelligence Act to the U.S. Blueprint for an AI Bill of Rights.
As the article’s authors note, the regulatory discourse and stakeholder positions on accountability provide a clearer picture of societal and legal expectations, including the purpose of AI use in society.

Ethical and unethical algorithmic biases play a crucial role in AI applications.
As the authors note, the distinctions between ethical and unethical biases “hinge on the bias’s purpose, context, and impact on stakeholders” and thus make human oversight essential.
The SMART criteria updates and notes reflect these issues.
Ethical transparency is essential to ensuring accountability and to identifying and addressing biases. This transparency entails a visible decision-making process for AI systems that is understandable to users and fosters trust.
The SMART criteria updates and notes reflect these issues.
The existing SMART privacy criteria focus on established legal concepts—including the right to confidentiality and to data privacy, protection, and security. They also emphasize the context and culture in which an AI system is used.
The SMART criteria review took a more holistic view of ethical privacy, recognizing its intrinsic link to an individual’s self-expression, personhood, ethics, values, and personal safety and security.
The SMART criteria updates and notes reflect these issues.
As the authors of this important AI ethics update note, if the polarization between AI accelerationists and decelerationists is any measure, the harms and benefits of AI systems are neither fully mapped nor likely to be shared equitably between developers and service providers on one side and, on the other, the societies across the globe consuming these rapidly growing and evolving AI products.
“Artificial Intelligence For the Benefit of Everyone” discusses this and other ethics-related issues in depth; it also provides details on each of the four key workstream areas.
To dig even deeper, join other AI experts, researchers, government officials, and enthusiasts at the international IEEE Conference on Artificial Intelligence (IEEE CAI) 5–7 May 2025 in Santa Clara, California.
In addition to showcasing the latest AI research and breakthroughs, IEEE CAI emphasizes applications and key subject areas, from sustainability and human-centered AI to issues and industry-specific applications in healthcare, transportation, and engineering and manufacturing.