Integrating Human Rights in the AI Lifecycle

In a recent research brief, the Institute for Ethics in Artificial Intelligence (IEAI) at the Technical University of Munich examined the challenges involved in implementing human rights responsibilities throughout the AI lifecycle. Our managing partner, Dr. Alexander Kriebitz, identified several major obstacles in this area.


Implementing human rights in the development and deployment of AI is a pressing concern for companies. Business and human rights frameworks oblige organizations to respect human rights in their operations. According to Principle 11 of the UN Guiding Principles on Business and Human Rights, companies should respect human rights, avoid infringing on the rights of others, and address any negative human rights impacts with which they are involved.


Similarly, the UN Global Compact, which many major enterprises have signed, requires a strong commitment to human rights. This commitment creates normative implications for companies' conduct.


However, implementing human rights in AI is a complex task, as the technology can create many adverse human rights impacts. Previous research has demonstrated that these impacts can manifest as biases and low-quality AI systems. Depending on the use case, these adverse consequences can violate codified human rights, such as the right to health, human autonomy, the right to work and freedom of occupation, and the right to non-discrimination.


As a result, the deployment of AI is highly relevant from a business and human rights perspective, as well as from a wider ESG (Environmental, Social, and Governance) and sustainability perspective. With this research in mind, companies must explore how to develop and deploy AI in a manner that respects human rights and lives up to their own commitments.


Find the research brief here.


If you would like to learn more about this topic, we look forward to hearing from you.
