Deepfakes: India 6th most susceptible nation. Can our laws tackle the menace?
With national elections due next year, India is set to unveil regulations to control the spread of deepfakes on social media. But will legislation be enough?
Deepfakes are fast becoming a problem, increasingly used to spread misinformation online, as India grapples with the risks of a rapidly evolving AI technology.
The concerns come after a series of recent deepfake incidents involving top Indian film stars and personalities prompted the government to meet social media platforms, artificial intelligence companies and industry bodies, to come up with a "clear, actionable plan" to tackle the issue.
Deepfakes can 'create huge problems': Modi
Indian PM Narendra Modi said deepfakes were one of the biggest threats faced by the country, and warned people to be careful with new technology amid a rise in AI-generated videos and pictures.
"We have to be careful with new technology. If these are used carefully, they can be very useful. However, if these are misused, it can create huge problems. You must be aware of deepfake videos made with the help of generative AI," Modi said on Wednesday, 20 December.
The number of deepfake videos online has surged by 550 per cent, reaching a staggering 95,820, according to the 2023 State of Deepfakes report by Home Security Heroes, a US-based organisation.
The report identifies India as the sixth most susceptible country to this emerging threat.
How do deepfakes work?
Cybercriminals use facial mapping technology to build an accurate dataset of a person's facial features. AI is then used to swap that face onto another person's in a video. In addition, voice-matching technology is used to accurately clone the person's voice.
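To illustrate the facial-mapping step in concrete terms: real deepfake tools rely on deep neural networks, but at their core they must align one face's landmark geometry onto another's. The toy sketch below (an illustrative assumption, not any actual deepfake tool) shows just that alignment step, fitting a 2D affine transform between two sets of facial landmark points with a least-squares solve.

```python
import numpy as np

def fit_affine(src, dst):
    """Fit a 2D affine transform mapping src landmark points onto dst."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Augment with a column of ones so the fit includes translation.
    A = np.hstack([src, np.ones((len(src), 1))])
    # Solve A @ M ≈ dst in the least-squares sense.
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M  # shape (3, 2): rotation/scale/shear plus translation

def apply_affine(M, points):
    """Apply the fitted transform to a set of 2D points."""
    pts = np.asarray(points, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ M

# Toy "landmarks": three points on a source face, and where those
# points sit on the target face (here, scaled 2x and shifted by 5).
source = [[0, 0], [10, 0], [0, 10]]
target = [[5, 5], [25, 5], [5, 25]]

M = fit_affine(source, target)
# The transform maps the source landmarks onto the target positions.
print(apply_affine(M, source))
```

In a real system, hundreds of landmarks per frame feed a generative model that synthesises the swapped face; this sketch only conveys why an accurate facial map is the raw material the article describes.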
Apprehensive of AI-generated deepfakes and misinformation, the government last month issued an advisory to all social media platforms reminding them of the legal obligations that require them to promptly identify and take down misinformation.
Experts have pointed out that India lacks specific laws to address deepfakes and AI-related crimes, but provisions under existing legislation, including the IT Act, could offer both civil and criminal relief.
Others have pointed out that though deepfakes have challenged legal systems across the world, practical solutions are available.
Pranesh Prakash, a law and policy consultant, told DW that while much of the moral panic about deepfakes is disconnected from the actual harm posed by the technology, the problem should be approached by clearly identifying those harms and the gaps in existing law.
"The IT minister has said that regulations will be passed urgently, but it is unclear what precise problem he's seeking to solve or what legal provision he's proposing to use for the proposed action," said Prakash, who is also a co-founder of the Bangalore-based Centre for Internet and Society nonprofit.
"Clearly, engaging in fraud by using deepfakes is a problem, but we already have laws that cover fraud and impersonation for fraud. The government needs to clarify what lacunae exist in the law that they are seeking to address," he said.
"Multi-stakeholders must be involved to work toward eliminating this problem including tech companies, society and the government as there is a lacuna in the law," Anushka Jain, research associate at Digital Futures Lab, told DW.
Challenges posed by misinformation and deepfakes
Cyber law expert Pavan Duggal said with no dedicated law on AI, identifying the originator of deepfakes and the first transmitter of deepfakes is a big challenge.
"With most of these service providers in India not wanting to share information about deepfake originators because of the potential impact it may have upon them losing statutory exemption from legal liability, the time has come for India to take more effective action in terms of legal provisions on deepfakes," Duggal told DW.
"Further, trying to detect, investigate and prosecute deepfake crimes will require more effective tools and new mindset approaches on the part of law enforcement agencies, because technology is moving at a rapid pace and legal frameworks and political will also need to keep pace," he added.
Google, one of the largest tech companies in the world, has already said it will work with the Indian government to address the safety and security risks posed by deepfake and disinformation campaigns.
"There is no silver bullet to combat deep fakes and AI-generated misinformation. It requires a collaborative effort, one that involves open communication, rigorous risk assessment and proactive mitigation strategies," said Michaela Browning of Google Asia Pacific, ahead of the Global Partnership on Artificial Intelligence Summit in New Delhi.
Modi inaugurated the event last week to arrive at a consensus on a declaration document on the proper use of AI, the guardrails for the technology and how it can be democratised.
Jency Jacob, managing editor of BOOM, a leading fact-checking website which has been closely studying the issue, said deepfake videos are becoming a cause of worry and there are valid concerns, especially during an election season.
"Governments around the world are still working on a policy response but we are yet to see anything that sounds like a plan. The Indian government has also shared its concerns and it will be interesting to see how they use existing laws and new provisions to protect victims," Jacob told DW.