AI-powered tools, deepfakes pose misinformation challenge for Internet users

The battle against misinformation has become harder since developments in AI-powered tools have made detecting deepfakes on multiple social media platforms more difficult.


PTI

Artificial intelligence, deepfakes and social media are little understood by laypersons, and the combination of the three poses a mystifying hurdle for millions of Internet users caught in the everyday battle of trying to filter the real from the fake.

The battle against misinformation was always challenging and has become much more so since developments in AI-powered tools have made detecting deepfakes on multiple social media platforms more difficult. AI's unintended ability to create fake news faster than it can be stopped has worrying consequences.

"In India's fast-changing information ecosystem, deepfakes have emerged as a new frontier of disinformation, making it difficult for people to distinguish between false and truthful info," Syed Nazakat, founder and CEO of DataLEADS, a digital media group building information literacy and infodemic management initiatives, told PTI.

India is already fighting a flood of misinformation in different Indic languages. This will worsen with different AI bots and tools driving deepfakes over the Internet.

"The next generation of AI models, called Generative AI -- for example, Dall-e, ChatGPT, Meta's Make-A-Video etc -- do not need a source to transform. Instead, they can generate an image, text or video based on prompts. These are still in the early stages of development, but one can see the potential to cause harm as we would not have any original content to use as evidence," added Azahar Machwe, who worked as an enterprise architect for AI at British Telecom.


What is Deepfake?


Deepfakes are photos and videos that realistically replace one person's face with another. Many AI tools are available to Internet users on their smartphones, free or almost free.

In its simplest form, AI can be explained as using computers to do things that otherwise require human intelligence. A notable example is the ongoing competition between Microsoft's ChatGPT and Google's BARD.

While both AI tools automate the creation of human-level writing, the difference is that BARD uses Google's Language Model for Dialogue Applications (LaMDA) and can offer responses based on real-time and current research pulled from the internet. ChatGPT uses its Generative Pre-trained Transformer 3 (GPT-3) model, which is trained on data up to late 2021.

Recent Examples

Two synthetic videos and a digitally altered screenshot of a Hindi newspaper report shared last week on social media platforms, including Twitter and Facebook, highlighted the unintended consequences of AI tools in creating altered photos and doctored videos with misleading or false claims.

Synthetic video is any video generated with AI without cameras, actors, and other physical elements.

A video of Microsoft co-founder Bill Gates being cornered by a journalist in an interview was shared as real and later found to be edited. A digitally altered video of US President Joe Biden calling for a national draft (mandatory enrolment of individuals into the armed forces) to fight the war in Ukraine was shared as authentic. In another instance, an edited photo to make it look like a Hindi newspaper report was circulated widely to spread misinformation about migrant workers in Tamil Nadu.

All three instances, the two synthetic videos and the digitally altered screenshot of a Hindi newspaper report, were shared on social media platforms by thousands of Internet users who thought they were real.

The posts escalated into stories on social media and mainstream media outlets before being debunked.

PTI's Fact Check team looked into the three claims and debunked them as 'deepfakes' and 'digitally edited' using AI-powered tools readily available over the Internet.


AI and fake news

A few years back, the introduction of AI in journalism raised hopes of revolutionising the industry and the generation and distribution of news. It was also seen as an effective way to curb the spread of fake news and misinformation.


"A weakness of deepfakes has been that they require some original content to work with. For example, the Bill Gates video overlaid the original audio with the fake one. These videos are relatively easier to debunk if the original can be identified, but this takes time and the ability to search for the original content," Azahar told PTI.

He believes the deepfakes shared recently on social media are easy to track, but is also concerned that debunking such synthetic videos will become more challenging in the coming days.

"Transforming the original video can lead to defects (e.g. lighting/shadow mismatch) which AI-models can be trained to detect. These resultant videos are often of lower quality to hide these defects from algorithms (and humans)," he explained.

According to him, fake news circulates in many forms, and deepfakes these days are created with very basic AI-powered tools. Such videos are relatively easy to debunk.

"But there cannot be 100 per cent accuracy. Intel's version, for example, promises 96 per cent accuracy, which means 4 out of 100 will still get through," he added.
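The arithmetic behind that caveat matters at scale. A minimal sketch, using hypothetical video volumes (the 96 per cent figure is from the Intel example above; the volumes are invented for illustration):

```python
# Back-of-the-envelope illustration: even a 96 per cent accurate detector
# misses a meaningful number of deepfakes once volumes grow.
# The video volumes below are hypothetical, chosen only for illustration.

def expected_misses(num_videos: int, accuracy: float) -> int:
    """Expected number of deepfakes that slip past a detector."""
    return round(num_videos * (1 - accuracy))

for volume in (100, 10_000, 1_000_000):
    print(volume, "videos ->", expected_misses(volume, 0.96), "likely missed")
# 100 videos -> 4 likely missed
# 10000 videos -> 400 likely missed
# 1000000 videos -> 40000 likely missed
```

In other words, "4 out of 100" becomes 40,000 undetected deepfakes for every million videos screened.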

Road ahead

Most social media platforms claim to reduce the spread of misinformation at the source by building fake news detection algorithms based on language patterns and crowd-sourcing. The aim is to stop misinformation from spreading in the first place rather than detecting and removing it after the fact.
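To illustrate what screening "based on language patterns" can mean in its simplest form, here is a toy sketch. The cue phrases, scoring, and threshold are invented for illustration; real platform systems rely on trained models and human review, not hand-written keyword lists:

```python
# A toy sketch of language-pattern screening. The cue phrases and the
# threshold are invented for illustration only; production systems use
# trained classifiers and crowd-sourced signals, not keyword lists.

SENSATIONAL_CUES = ("shocking", "you won't believe", "share before deleted",
                    "100% proof", "media is hiding")

def suspicion_score(post: str) -> int:
    """Count how many sensational cue phrases appear in a post."""
    text = post.lower()
    return sum(cue in text for cue in SENSATIONAL_CUES)

def flag_for_review(post: str, threshold: int = 2) -> bool:
    """Flag a post for human fact-checking if it matches enough cues."""
    return suspicion_score(post) >= threshold

print(flag_for_review("Shocking! 100% proof the media is hiding this"))  # True
print(flag_for_review("Rural India added millions of internet users"))   # False
```

Even this crude scheme shows why such systems flag posts for review rather than delete them outright: language cues alone produce both false alarms and misses.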

While examples of deepfakes highlight the potential threats of AI in generating fake news, AI and machine learning have given journalism several task-facilitating tools, from automated content generation to voice-recognition transcription.

"AI continues to help journalists focus their energy on developing quality content as the technology ensures timely and quick content distribution. Human-in-the-loop will be required to check the consistency and veracity of the content shared in any format: text, image, video, audio, etc.," said Azahar.

Deepfakes should be clearly labelled as 'synthetically generated' in India, which had over 700 million smartphone users (aged two and above) in 2021. A recent Nielsen report says rural India had more than 425 million internet users, 44 per cent more than the 295 million internet users in urban India.

"Humans tend to join the 'echo chambers' of those who think alike. We need the inculcation of media literacy and a critical thinking curriculum in basic education to boost awareness and build a proactive approach that helps people protect themselves from misinformation.

"We need a multi-pronged, cross-sector approach across India to prepare people of all ages for today's and tomorrow's complex digital landscape to be vigilant of deepfakes and disinformation," Nazakat said.

For a large country such as India, the changing information landscape creates an even greater need for information literacy skills in all languages. He added that every educational institution should prioritise information literacy for the next decade.
