The world's best-known arbiter of artificial intelligence can't rule out that the new technology poses a danger to people in the Middle East.
"There have been reports and concerns raised about human rights abuses in Saudi Arabia that involve the use of digital technologies," ChatGPT, the artificial intelligence language model that's been making international headlines since it was introduced last November, replies when a DW journalist asks whether it might cause problems in the Middle East. But, the robotic assistant added, "it is important to note that these reports and allegations are not limited to artificial intelligence specifically but encompass a broader range of digital technologies and their potential misuse."
Recent high-profile cases involving Saudi Arabia include the country using digital technologies to spy on dissidents and their families overseas, as well as trying to infiltrate Twitter in order to identify government opponents using anonymous accounts.
This is why there are concerns about what the country's government might do with its increasingly rapid deployment of artificial intelligence, or AI, a technology whose implications are already regularly questioned by digital rights activists.
"The use of so-called AI and AI-based systems is increasing all over the world, and they open up novel ways of potentially infringing on people’s most basic rights by surveilling or manipulating them," Angela Mueller, the head of policy and advocacy at Berlin-based organization, Algorithm Watch, told DW. "There is definitely the danger that the use of AI-based systems will further exacerbate existing injustices, especially when such states [without human rights protections or rule of law] now boost AI development and use by billions of dollars," she pointed out.
The most recent market intelligence suggests that governments of wealthy oil-producing Gulf states like the United Arab Emirates, Saudi Arabia and Qatar are now spending as much as, if not more than, some individual European countries on advancing AI-related technologies at home.
A report on worldwide AI expenditure by the International Data Corporation (IDC) says the Middle East will spend $3 billion (€2.8 billion) on AI this year, rising to $6.4 billion by 2026. Investment will continue to ramp up, market researchers say, with the region's AI spending growing by almost 30% a year over the next three years. That's "the fastest growth rate worldwide over the coming years," they note.
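For readers who want to see how those figures fit together, here is a minimal back-of-the-envelope check, not taken from the IDC report itself, which assumes spending grows at a constant compound rate between 2023 and 2026:

```python
# Hypothetical sanity check of the IDC figures quoted above:
# $3 billion in 2023 rising to $6.4 billion in 2026 implies a
# compound annual growth rate (CAGR) of just under 30%.
start_spend = 3.0   # $ billion, 2023
end_spend = 6.4     # $ billion, 2026
years = 3           # 2023 -> 2026

cagr = (end_spend / start_spend) ** (1 / years) - 1
print(f"Implied annual growth rate: {cagr:.1%}")  # roughly 28.7%, i.e. "almost 30%"
```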
A much-hyped term, AI covers a wide range of digital technologies. It can mean anything from the speedy processing of large amounts of digital data for analysis, to what's known as "generative AI." The latter, which includes the attention-getting and much-discussed ChatGPT, is considered one of the most exciting developments in AI because it "generates" information and insights as it evolves.
"The more computing power, data and users it gets, the better it [generative AI] performs, sometimes in unexpected ways," Deutsche Bank research analysts explained in a briefing on the technology. "Its talents range from sifting through data and recognizing images and speech, to identifying sentiment in swathes of documents and generating text, images and code. Future iterations will soon do still more. Most importantly, it synthesizes these tools so they feed on each other."
Gulf states are spending so much on AI because it is an important part of their plans to diversify their national economies away from oil income.
The UAE was the first in the region to adopt a national AI strategy in 2017 and became the first country in the world to appoint a minister for artificial intelligence. Other countries, including Egypt, Jordan, Morocco, Qatar and Saudi Arabia, have since followed suit, most of them over the past three years.
Saudi Arabia is particularly notable because it intends to use all kinds of AI in its futuristic city-building project, Neom, and it has the wealth to invest in these technologies both via state funding and through its state-controlled sovereign wealth fund.
Perceptions of AI in the Gulf states also differ. A 2022 IPSOS survey of international attitudes toward AI asked people whether they thought using AI in consumer products and services offered more advantages than disadvantages. Just over three-quarters of Saudi Arabians were enthusiastic, agreeing that it offered more benefits, compared to only 37% of the more cautious German respondents.
Currently in the Gulf states, AI technologies are being used for the same kinds of things they are in other countries: for example, as chatbots on retailer websites, or to streamline state services for power and water, enhance digital financial services like web-based banking, analyze the performance of companies like the Emirates airline and provide insights from local health care data. In late May, the UAE released its own version of ChatGPT.
None of this is necessarily nefarious. But the same concerns that have been expressed about the use of AI elsewhere also apply here.
Digital rights activists are not seriously worried about a science-fiction-style scenario in which robots kill us all. They're more concerned about data security, surveillance, content filtering, the targeted dissemination of propaganda, and accuracy and bias in AI analysis, as well as the potential for "dual use" of certain AI-linked technologies.
For example, AI-powered facial recognition has potential for dual use, for both civilian and military purposes. On one hand, it's useful on Facebook to find your friends. On the other, it could be used to identify protesters at an anti-government demonstration.
As Geoffrey Hinton, the respected AI pioneer who made international headlines when he quit his job at Google recently, told The New York Times, "it is hard to see how you can prevent the bad actors from using it [AI] for bad things."
So what happens when AI ends up in the hands of autocratic governments, such as those in the big-spending Gulf states? The countries may have some trappings of democracy, but they are essentially led by royal families who tolerate little dissent and no political opposition.
"In countries where the authorities already target human rights defenders and journalists for peacefully exercising their rights, the implications [of AI] can be even more devastating," Iverna McGowan, director of the European office of the Centre for Democracy and Technology, or CDT, told DW.
In a 2022 summary of the laws that currently pertain to AI in the Middle East, researchers at the multinational law firm Covington & Burling pointed out that no legislation on AI exists in the region as yet. This is also true for many other jurisdictions, they added. The sector is largely unregulated.
Both the UAE and Saudi Arabia have published ethical guidelines for the use of AI. However, neither country's guidelines, which include a checklist of do's and don’ts for software developers, are legally binding. That's something they have in common with heavily criticized ethical guidelines on AI elsewhere.
"AI ethical principles are useless, failing to mitigate the racial, social, and environmental damages of AI technologies in any meaningful sense," Luke Munn, an Australian digital cultures researcher, argued last year in the journal AI and Ethics. Part of the reason for this is the lack of any laws backing up the ethical guidelines, he wrote. "The result is a gap between high-minded principles and technological practice."
CDT director McGowan agreed. "Voluntary measures in the context of such systemic repression will be nothing other than window dressing," she told DW.
"These systems open up novel ways of potentially infringing on people’s most basic rights by surveilling or manipulating them, by preventing their means to have a say and to defend themselves," Algorithm Watch's Mueller concluded. "The combination of opacity, sensitive areas and these potential impacts are especially problematic in contexts where there is no reliable protection of human rights and the rule of law."
Edited by: Timothy Jones