Prime Minister Narendra Modi on Friday flagged the misuse of artificial intelligence for creating 'deepfakes', and said the media must educate people about this crisis in the making. Deepfake videos are synthetic media in which a person in an existing image or video is replaced with someone else's likeness.
Modi's statement came days after a 'deepfake' video of actress Rashmika Mandanna, suspected to have been made with the help of artificial intelligence, went viral on social media. The original video reportedly featured a British-Indian influencer, whose face had been replaced with Mandanna's.
Addressing journalists at the Bharatiya Janata Party's Diwali Milan programme at the party's headquarters in New Delhi, Modi also referred to his resolve to make India 'Viksit Bharat' (developed India), saying these are not merely words but a ground reality.
He also said that 'vocal for local' has found people's support, and that India's achievements during the Covid-19 pandemic created confidence among the people that the country is not going to stop now.
He added that Chhath Puja has become a 'rashtriya parva' (national festival), which he called a matter of great happiness.
The video of Mandanna had led to widespread calls for regulation of the technology. Rajeev Chandrasekhar, Union minister of state for electronics and information technology, had said on the social media platform X that deepfakes are the latest and a "more dangerous and damaging form of misinformation" that needs to be dealt with by social media platforms. He also cited the legal obligations of social media platforms and the IT rules pertaining to digital deception.
IT ministry's letters to social media platforms
The ministry's two letters, dated November 6 and 7, were issued by the cyber laws division of the ministry as follow-ups to an advisory on deepfakes sent in February. They reminded the social media platforms of their obligations under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
The letter dated November 6 did not give a timeline to remove such content or cite specific rules. It, however, cited Section 66D of the Information Technology Act, 2000, under which anyone who uses a communication device or a computer resource to cheat by "personating" can be punished with up to three years in jail and a fine of up to ₹1 lakh.
“In the case of this particular deepfake, section 66D may not be applicable since the element of cheating as understood under the Indian Penal Code, or acting dishonestly to gain an advantage is hard, if not impossible, to establish,” Priyadarshi Banerjee, partner at Delhi-based Banerjee & Grewal Advocates, said.
The letters also warned that the platforms could lose their safe harbour in case of non-compliance. To be sure, only courts can determine if an intermediary can lose its safe harbour protection. Deepfakes need to be removed within 36 hours of being reported, the ministry said in a separate statement on Tuesday. “It is a legal obligation for online platforms to prevent the spread of misinformation by any user under the Information Technology (IT) rules, 2021. They are further mandated to remove such content within 36 hours upon receiving a report from either a user or government authority,” the statement quoted junior electronics and IT minister Rajeev Chandrasekhar as saying.
To be sure, the law mandates that intermediaries remove violating content within 36 hours only after receiving a court or government order to that effect. When a user, or someone on behalf of the user, raises a grievance, the platform has up to 72 hours to resolve it.