
Introduction

Recently, a video of actress Rashmika Mandanna went viral on social media. It showed the popular Indian actress entering an elevator in revealing clothes. The woman in the original clip was later found to be Zara Patel, a British-Indian social media influencer, onto whose body Rashmika's face had been morphed. Such videos are generally known as deepfakes, generated by employing Artificial Intelligence tools to create fake images, audio and videos. Artificial Intelligence (AI) has improved tremendously over the past year, and there are now platforms allowing almost anyone to create a persuasive fake by entering text into popular AI generators that produce images, video or audio. The aforementioned video went viral because it was a deepfake of a female celebrity, a popular film actress. It was not, however, the first case of its kind. Earlier, football fans in a stadium in Madrid were shown holding an enormous Palestinian flag. In another deepfake video, the Ukrainian President Volodymyr Zelenskyy was shown calling on his soldiers to lay down their weapons. In yet another, the Pope was shown wearing a Balenciaga puffer jacket. In the recent Israel-Gaza conflict, Internet platforms like X, Facebook and YouTube have been awash with AI-generated content, propagated by accounts from both sides, showing the destruction caused by the conflict.

In 2020, researchers uncovered a then-rudimentary but still dangerous underground service called DeepNude. It allowed people to create fake nude images by supplying ordinary photos of an individual, and a large number of its anonymous users did so to create non-consensual intimate images of women. A recent article in the Washington Post, based on an industry analysis, profiled the top 10 websites hosting AI-generated porn photos. There is no denying that deepfakes are worse than fake news: the article notes that "fake nudes have ballooned by more than 290 percent since 2018." It also cites a 2019 study by Sensity AI, a deepfake monitoring company, which found that 96 per cent of deepfake images are pornography and that 99 per cent of those photos target women. It is a fake reality, indeed. In the words of Rajeev Chandrasekhar, Union Minister of State for Electronics and Information Technology, "Deepfakes are latest and even more dangerous and damaging form of misinformation and needs to be dealt with by platforms." The Indian Prime Minister, Narendra Modi, has also expressed his concern at the growing use of deepfakes for nefarious purposes, at a party function in New Delhi. He said, "The manner in which deepfake is spreading in this age of AI poses a big threat. A big section of the populace does not have a parallel system for verification or authentication of the same. Since the struggle for Independence, whichever issue the media has espoused has enjoyed certain credibility and respect." He further observed, "Misinformation can also spread a barrage of dissatisfaction in society. Hence, you should educate people through your programmes about what is deepfake and inform them through examples of how big a crisis it can create and its impact." At present, a programme requires a large number of images and videos of the victim to make a good impersonation.
People in public life, like politicians and film stars, fit into this category and are the easiest targets for deepfakes: there are myriad images of them from all possible angles that a programme can morph onto another body.

One of the biggest uses of deepfakes is in pornography, and it is from there that the technology earned its modern name. In 2017, a user of the online forum Reddit named "deepfakes" started posting pornographic images of celebrities using the technology. In the following year, 2018, BuzzFeed, an American Internet media, news, and entertainment company based in New York with a focus on digital media, released a fake video of former U.S. President Barack Obama in which he appeared to warn against such impersonations. They managed it using technology that was freely available online. According to the BuzzFeed article, it took a professional 56 hours to make. Warning about the dangers lying ahead, the article said: "So the good news is it still requires a decent amount of skill, processing power, and time to create a really good 'deepfake'. The bad news is that [with] the lesson of computers and technology, this stuff will get easier, cheaper, and more ubiquitous faster than you would expect or be ready for." Prophetic words, indeed! Five years on, the deepfake has entered the mainstream consciousness of India. There is now even a word for its cruder cousin: cheap fakes. It is being accomplished with comparative ease. The phenomenon has been well explained in an article published in the New York Times: "Making realistic fake videos, often called deepfakes, once required elaborate software to put one person's face onto another's. But now, many of the tools to create them are available to everyday consumers on smartphone apps, and often for little to no money. The new altered videos (mostly, so far, the work of meme-makers and marketers) have gone viral on social media sites like TikTok and Twitter.
The content they produce, sometimes called cheap fakes by researchers, works by cloning celebrity voices, altering mouth movements to match alternative audio, and writing persuasive dialogue." Further, it came to notice in 2022 that Western countries suspected Russia could use deepfakes to justify its invasion of Ukraine. Likewise, in May 2023, a deepfake image of smoke near the White House in Washington, USA, unsettled the stock markets. The use of synthetic media, though not exactly called deepfake, by Hollywood studios to revive dead actors was one of the prominent issues behind the writers' strike, which ended recently.

Are Deepfakes Beneficial?

Experts generally believe that, in the form of deepfakes, the growth of AI faces a baby-and-bathwater problem. They point out that deepfakes have beneficial uses in communications for individuals, companies and governments, and remain a legitimate course of technological advancement. Sounding a note of caution at the same time, they opine that some degree of proliferation control is needed at this stage to protect personal and national security. It is also hoped that technology to detect deepfakes will gradually improve; big investment is needed in this regard. This may be wishful thinking, but the current scenario is frightening, indeed.

Deepfakes: Women and Children are the Worst Sufferers

Deepfakes mount an unwanted and clandestine assault on women's bodily autonomy and their right to privacy, adversely affecting their morale and the freedoms guaranteed to them by the Constitution of India. Women posting their videos and photos on their social media accounts cannot and should not be considered to have consented to any subsequent use or misuse, in the form of deepfakes in particular. Their privacy rights must be respected at all costs. According to research by IT for Change, one-third of the women surveyed in India reported that they had faced harassment, abuse, or unwanted behaviour online, and two-fifths knew women in their circles who had had similar experiences. Ninety per cent of the respondents who had faced harassment reported facing it on multiple occasions. Dalit women face casteist and sarcastic remarks even today. The data also points to a significant digital divide between men and women in India. For example, as per Instagram demographics, around 27.5 per cent of Instagram users in India are women, while around 72.5 per cent are men. This poor representation of women on social media sites makes them more vulnerable.

Morphing faces onto naked bodies is not new or uncommon. Technology has simply made the process faster, easier and cheaper, and the results nearly impossible to tell from the real thing. Adrija Bose, a journalist at BoomLive, says, "It's insane. All it takes is one photograph." It is high time women were made aware of the consequences of uploading a selfie, which may put them at risk of sexual abuse. More than 90 per cent of malicious deepfake videos are pornographic. According to an article published by BoomLive, there is a rash of sexualised videos on X (formerly Twitter) that "literally steal the faces and identities of scores of actresses."

Deepfakes and their Use During Elections

Elections are another grey area where the use of deepfakes can affect the whole electoral process, and the future of democracy as well. Deepfakes can play an important role in spreading disinformation of any kind about political opponents, compromising election integrity and deepening divisions in communities. The 2016 presidential election in the USA and the Brexit vote in the UK were effectively influenced by misinformation campaigns in which political leaders used social media to their advantage. Now the deepfake phenomenon, powered by AI, can prove even more lethal for the democratic polity of any country.

Global Concern

With the realization of the impending dangers of deepfake technology, the issue has been raised on international forums recently. Such concerns were tacitly acknowledged at the beginning of November 2023 at the first AI Safety Summit, held at Bletchley Park in the UK, where the Bletchley Declaration was signed by 28 countries including the US, UK, France, China, Japan and India. It noted the risks "stemming from the capability to manipulate content or generate deceptive content", and called for global action to face the potential dangers stemming from AI. However, there are still no universal laws to tackle this menace. Countries across the world have adopted different approaches to regulating AI: some favour strict oversight and regulation, while others opt for a softer approach. For example, at the beginning of November 2023, US President Joe Biden issued an executive order to establish "new standards for AI safety and security". Under this order, companies are required to share the results of their safety tests with the US government. It also involves setting standards for "extensive" testing to ensure that AI systems are safe before they are publicly released.

In this context, a significant role has to be played by big tech companies such as Alphabet, Meta, and OpenAI. They are taking measures such as watermarking AI-generated content to allow users to identify deepfakes, and have begun to insist on transparency from those who create non-malicious content with such tools.

The Indian Response

Taking cognizance of the Rashmika Mandanna fake video, the Cyber Law division of the Ministry of Electronics & IT, Government of India, issued two letters, dated November 6 and 7, as follow-ups to the advisory on deepfakes sent in February this year. They reminded the social media platforms of their obligations under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. One of the letters stated, "As per the laws in force, such content/information which violates the IT Act/Rules, should be removed or access disabled upon receipt of court orders or notification from the Appropriate Government or its authorized agency or based on a complaint made by the impersonated individual or person authorized by him in this behalf." This letter did not suggest any timeline for removing such content or cite any specific rules. However, it did cite Section 66D of the Information Technology Act, 2000, which provides that anyone who uses a communication device or a computer resource to cheat by "personating" can be punished with three years in jail and a fine extending to Rs. 1 lakh. The letters also warned the social media platforms that they risked losing their safe harbour in case of non-compliance, although only courts can determine whether an intermediary loses its safe harbour protection. In a further statement, the Ministry said, "It is a legal obligation for online platforms to prevent the spread of misinformation for any user under the Information Technology Rules, 2021. They are further mandated to remove such content within 36 hours upon receiving a report from either a user or government authority." This provision is operative only after a court or government order to that effect. When a user raises a grievance, it has to be resolved within 72 hours. The exception to this is laid down in rule 3(2)(b).
It says that on receiving a complaint from a user, or someone on his or her behalf, about content that has "any material which exposes the private area of such individual in full or partial nudity or shows or depicts such individual in any sexual act or conduct or is in the nature of impersonation in an electronic form, including artificially morphed images of such individual", the content must be removed within 24 hours. The February advisory mentioned three rules: rule 3(1)(b)(vi), rule 3(1)(c) and the aforementioned rule 3(2)(b). Under rule 3(1)(b)(vi), all intermediaries, and not just social media platforms, are required to ensure that no user impersonates another person on their platform. Under rule 3(1)(c), all intermediaries are required to inform their users at least once a year of their policies, rules and regulations. The advisory also asked social media platforms to "put in place appropriate technology and processes for identifying information that may violate the provisions of rules and regulations or user agreement." On November 18, 2023, the Union Minister of Electronics and Information Technology, Ashwini Vaishnaw, said the government would meet social media and internet intermediaries the following week to discuss ways to contain the spread of deepfake photos and videos.

Conclusion

The deepfake phenomenon is fast engulfing our society in dangerous proportions, and AI has made it more lethal. Even if an offending tweet or digital communication on a social media platform is taken down within 24 hours as per the rules, the damage to reputation is done within 10 minutes. That is why stringent, harm-related penalties should be implemented on a fast-track basis so that perpetrators are deterred. Merely issuing advisories will carry no impact if stringent legal action in the form of penalties is not taken from time to time. Specific provisions covering the ingredients of deepfakes should also be introduced in the Information Technology Act, 2000 without further delay. It must always be borne in mind that the deepfake phenomenon is causing the most harm to women. Every effort should be made to stem the rot. With the advancement of technology, this problem is only going to become more acute.

.    .    .

References:

  • Soumyarendra Barik, "Viral 'Rashmika Mandanna video' spotlights Big Tech's deepfake problem", The Indian Express, Nov 7, 2023.
  • Editorial, Hindustan Times, Nov 11, 2023.
  • Rishika Singh, "How deepfakes shrink online space for women", The Indian Express, Nov 19, 2023.
  • MadhavanKutty Pillai, "Damned by Deepfakes", OPEN, Nov 20, 2023.
  • Editorial, The Economic Times, Nov 8, 2023.
  • Editorial, The Economic Times, Nov 10, 2023.
  • Anupriya Dhonchak, "The Unequal Deep Fake Risk", The Indian Express, Nov 11, 2023.
  • Namita Bhandare, "Dealing with deep fakes: Regulation & education", Hindustan Times, Nov 11, 2023.
  • Aditi Agrawal, "IT ministry sends social media firms two letters on deep fake regulations", Hindustan Times, Nov 8, 2023.
  • Our Bureau, "Govt to Meet Cos to Discuss Ways to Check Spread of Deep Fakes", The Economic Times, Nov 19, 2023.
