Monday, 10 February 2025
Ahead of the AI Action Summit, which begins on February 10, Amnesty International’s Director of the technology and human rights programme, Damini Satija, said:
“With global leaders and tech executives gathering to attend the Artificial Intelligence (AI) Action Summit in Paris, the French government must not miss a crucial opportunity to make meaningful progress towards achieving human rights-respecting AI regulation globally. Governments at the summit must not be swayed by corporate interests at the expense of those experiencing the sharpest human rights impacts of AI systems today.
“While France undertook a significant task in hosting the summit, the participation of civil society and human rights activists in the main summit agenda is wholly inadequate. The allocation of resources necessary to ensure a collaborative dialogue with representatives from the global majority, impacted communities, and human rights activists has not been prioritized.
“The lack of support from the summit organizers for human rights advocates and community representatives in need of visas to enter France exemplifies a lack of true commitment to engaging in an equal dialogue with civil society, particularly from Global Majority countries.
“If states are serious about an open, multi-stakeholder and inclusive approach to the development, deployment and regulation of AI technologies, they must elevate and centre the voices and priorities of impacted communities.
“State actors must also not be swayed by the false ‘innovation vs regulation’ dichotomy parroted by tech companies and their executives to stifle human rights-centric regulatory efforts. Governments must not ignore the underlying systemic human rights issues heightened by the automation of our lives and the roll-out of AI technologies.
“We are now living in a world that feels increasingly terrifying. The omnipresence of predictive algorithms, coupled with a rising global backlash against civil liberties, risks giving carte blanche to tech companies to operate without rules or guidelines.
“While governments present AI roll-outs as ‘efficiency solutions’, they increasingly go hand in hand with austerity policies and the deployment of data-intensive AI technologies. These systems also amplify pre-existing discrimination in society, ultimately leading to exclusion, inequalities, and the entrenchment of corporate power.
“There is ample evidence, along with investigations by civil society and journalists, exposing the grave consequences of AI technologies operating unchecked. From lethal autonomous weapons systems to facial recognition used for mass surveillance, and risk-scoring algorithms used in migration management and in the public sector for welfare distribution, it has become abundantly clear that the deployment and use of such technologies are incompatible with our rights and disregard human dignity.
“We must also acknowledge that the harms perpetuated by AI technologies have far-reaching consequences beyond the technologies themselves. The exploitative supply chains that fuel them, relying on inhumane labour practices and causing serious environmental damage, have had a disproportionate impact on people, particularly in the Global Majority. Given the devastating and lasting effects of AI technologies, it is essential that their impact is tackled not just within state boundaries, but beyond them.
“All AI regulation must also be free of loopholes and exemptions that risk the violation of human rights. All public and private actors, including law enforcement, border management and national security bodies, must adhere to human rights standards throughout the whole lifecycle of AI technologies, including during the research, development and testing phases.
“More importantly, people and communities impacted by AI must be empowered to seek redress and remedy. As a prerequisite to effective remedy, impacted people should be guaranteed the right to information about, and an explanation of, AI-supported decision-making, including the use and functioning of AI in the system.”
Damini Satija will be attending the AI Action Summit in Paris throughout its duration, from 10 to 11 February. She will be available for interviews on a range of tech issues, including:
a) Artificial Intelligence and algorithmic accountability
b) Artificial Intelligence regulation
c) Big Tech and policy
d) Spyware and surveillance
e) Children and young people’s digital rights
Information for journalists:
Damini Satija is a technology, human rights and public policy expert. She is the Director of Amnesty Tech, the global human rights movement’s technology and human rights programme, which she originally joined to set up the Algorithmic Accountability Lab (an interdisciplinary unit investigating the impact of Artificial Intelligence technologies on human rights). Amnesty Tech works across a range of areas, most notably spyware and cyberattacks, surveillance, state use of AI and automation, Big Tech and social media accountability, and children and young people’s rights in digital environments. Prior to her time at Amnesty International, Damini worked in a number of tech policy roles. She was most recently Senior Policy Advisor at the Centre for Data Ethics & Innovation, the UK government’s independent expert body on data and AI policy, and the UK’s policy expert at the Council of Europe’s committee on Artificial Intelligence and Human Rights.
For more information or to arrange an interview please contact Amnesty International’s press office: press@amnesty.org