
Artificial Intelligence and Human Rights: Towards the Summit of the Future
We are living through a Fourth Industrial Revolution characterized by the increasing automation of production processes driven by artificial intelligence (AI), digital technology and the Internet of Things.
AI is commonly seen as a tool capable of making completely objective decisions, free of bias and prejudice. As its use evolves and increasingly affects our daily lives, however, that assumption has proven false. Many generative AI models, for example, have been trained on data inevitably contaminated with the ideas of hate and discrimination that infect our societies: racist and misogynistic content reflecting the misperceptions, inaccuracies and outright lies that circulate in every society and foster hatred.
It is therefore essential to ensure that AI's development benefits everyone.
Placing human rights at the core of how we develop, use and regulate technology is essential to designing our response to these contemporary challenges, as UN Human Rights Chief Volker Türk warned during the fifth annual AI for Good Global Summit, the UN's premier platform for advancing AI in areas such as health, climate, gender, inclusive prosperity, sustainable infrastructure and other global development priorities.
Indeed, the human rights framework, as developed and applied over decades, embraces the idea that people must be protected from abuses committed by governments as well as by other individuals, private entities and companies, and thus provides a solid foundation for addressing the many issues raised by AI.
The Guiding Principles on Business and Human Rights, endorsed by the United Nations Human Rights Council in 2011, set the global standard of conduct for preventing and combating the negative impact that companies can have on human rights. Although the Principles affirm the responsibility of companies to respect human rights, this responsibility does not mean that states' obligations under international law apply directly to companies. It therefore falls to states to prevent, investigate, punish and redress corporate human rights abuses through appropriate policies, regulation and adjudication.
The international community has recently developed several initiatives to address these challenges. Among them are the B-Tech Project, which has produced recommendations, tools and guidance, prepared with the active participation of companies and other partners, on how to apply the Guiding Principles to prevent and address risks related to digital technologies; and the High-Level Advisory Body on Artificial Intelligence, recently created by the United Nations, which has issued preliminary recommendations on the governance of AI.
Likewise, last week the United Nations presented its Global Principles for Information Integrity, stressing in particular the need to review the models by which platforms and social networks spread disinformation, which poses an "existential risk" to humanity. The principles are built around five fundamental axes designed to meet the challenges posed by technological evolution and the use of artificial intelligence in the dissemination of information: societal trust and resilience; independent, free and pluralistic media; transparency and research; public empowerment; and healthy incentives. They are expected to be applied by platforms and the media, through the regulations of different governments, as well as within the United Nations itself.
In September, the Summit of the Future will convene to agree on a Global Digital Compact involving all stakeholders: governments, the UN system, the private sector (including technology companies), civil society, grassroots organizations, academic institutions and individuals, including young people.