The UN human rights chief’s call to ban AI systems prone to misuse by States must be heeded
The UN High Commissioner for Human Rights has expressed concern at the “unprecedented level of surveillance across the globe by State and private actors”, which is “incompatible” with human rights
The controversy over Pegasus spyware, which allegedly affected thousands of people in 45 countries including India, has now attracted the attention of even the UN High Commissioner for Human Rights, Michelle Bachelet, who has called for a moratorium on the sale and deployment of Artificial Intelligence (AI) systems until adequate safeguards are put in place. She opined that AI applications that cannot be used in compliance with human rights laws should be banned.
Bachelet’s call came as her office published a report on “The right to privacy in the digital age” which analyses how AI affects people’s right to privacy and other rights, including the rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression. The document also includes an assessment of profiling, automated decision-making and other machine-learning technologies.
The Pegasus revelations were no surprise to many, she told a Council of Europe hearing on the implications of the controversy surrounding the widespread use of spyware commercialised by the NSO Group. She expressed her concern about the “unprecedented level of surveillance across the globe by State and private actors”, which was “incompatible” with human rights.
The situation is “dire”, said Tim Engelhardt, Human Rights Officer in the Rule of Law and Democracy Section, at the launch of the report. The situation has not improved over the years but has become worse, he said. “We don’t think we will have a solution in the coming year, but the first steps need to be taken now or many people in the world will pay a high price.”
The report categorically states that States and businesses have often rushed to incorporate AI applications without carrying out due diligence. It cites numerous cases of people being treated unjustly due to AI misuse, such as being denied social security benefits because of faulty AI tools or being arrested because of flawed facial recognition software.
The document details how AI systems rely on large data sets, with information about individuals collected, shared, merged and analysed in multiple and often opaque ways. Moreover, the data used to inform and guide AI systems can be faulty, discriminatory, out of date or irrelevant. Long-term storage of data also poses particular risks, as data could in the future be exploited in as yet unknown ways, it added.
The inferences, predictions and monitoring performed by AI tools, including seeking insights into patterns of human behaviour, also raise serious questions. The biased datasets relied on by AI systems can lead to discriminatory decisions, and these risks are most acute for already marginalized groups.
“The complexity of the data environment, algorithms and models underlying the development and operation of AI systems, as well as intentional secrecy of government and private actors are factors undermining meaningful ways for the public to understand the effects of AI systems on human rights and society,” the report says, adding “there also needs to be much greater transparency by companies and States in how they are developing and using AI.”
Biometric technologies are an increasingly go-to solution for States, international organizations and technology companies, and an area where, the report states, “more human rights guidance is urgently needed”. These technologies, which include facial recognition, are increasingly used to identify people in real time and from a distance, potentially allowing unlimited tracking of individuals.
The report reiterates calls for a moratorium on their use in public spaces, at least until authorities can demonstrate that there are no significant issues with accuracy or discriminatory impacts and that these AI systems comply with robust privacy and data protection standards.
“Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states. AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online,” Bachelet said at the launch of the report.
“Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face. … The risk of discrimination linked to AI-driven decisions – decisions that can change, define or damage human lives – is all too real. This is why there needs to be systematic assessment and monitoring of the effects of AI systems to identify and mitigate human rights risks,” she added.
“We cannot afford to continue playing catch-up regarding AI – allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact. The power of AI to serve people is undeniable, but so is AI’s ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us,” the UN High Commissioner for Human Rights stressed.
Urgent action is needed, she emphasized, as it can take time to assess and address the serious risks AI technology poses to human rights. “The higher the risk for human rights, the stricter the legal requirements for the use of AI technology should be. All applications that cannot be used in compliance with international human rights law should be banned. Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic effects if they are used without sufficient regard to how they affect people’s human rights,” she pointed out.
Views are personal