DOI : 10.17577/IJERTCONV14IS010082- Open Access

- Authors : Mrs. Jayashree J, Ms Rakshitha B, Ms Ruchita Jadhav, Ms Sahana V Naik
- Paper ID : IJERTCONV14IS010082
- Volume & Issue : Volume 14, Issue 01, Techprints 9.0
- Published (First Online) : 01-03-2026
- ISSN (Online) : 2278-0181
- Publisher Name : IJERT
- License:
This work is licensed under a Creative Commons Attribution 4.0 International License
Deepfake Cyber Threats in India: An Emerging Challenge Without Legal and Technical Safeguards
Mrs. Jayashree J, Assistant Professor, Department of MCA, AJ Institute of Engineering and Technology, Mangalore, India
Ms. Rakshitha B, Department of MCA, AJ Institute of Engineering and Technology, Mangalore, India
Ms. Ruchita Jadhav, Department of MCA, AJ Institute of Engineering and Technology, Mangalore, India
Ms. Sahana V Naik, Department of MCA, AJ Institute of Engineering and Technology, Mangalore, India
Abstract: Deepfake technology is a form of artificial intelligence that uses generative adversarial networks (GANs) to create convincing images, videos, and audio recordings. Deepfakes transform existing source content so that one person is swapped with another. Although the technique was originally developed for entertainment and education, it has increasingly been misused to spread fake news and commit cybercrimes, especially in India. Notably, AI-generated videos and images of public figures such as Anil Kapoor circulated widely, and deepfake videos of political leaders during the 2024 Indian general elections were designed to spread misinformation. During the India-Pakistan conflict of 2025, cross-border cyberattacks also increased rapidly. Even though deepfakes are becoming a serious threat, India still lacks strong laws and a technical system to counter them. This paper examines the weaknesses of Indian laws and technical systems, compares them with the laws of other countries, and suggests how the government, technical experts, and legal authorities can work together to control and detect deepfake content. As part of the solution, we propose a model to address this problem more effectively.
Keywords: Deepfake, Artificial Intelligence, Generative Adversarial Network (GAN), Indian Laws, Cybercrime, Deepfake Detection
-
INTRODUCTION
The technology behind deepfakes is getting smarter, and new apps are making it easier to use. The better deepfakes become, the harder it is to tell what is real and what is fake. The term "deepfake" combines "deep", from deep learning, the branch of artificial intelligence used to impersonate people and make them appear to say or do things they never did, with "fake", reflecting the fabricated nature of the content.
In 2023, the well-known Indian actor Anil Kapoor went to the Delhi High Court to stop the unauthorized use of his name, likeness, voice, speech patterns, and visual identity, because fake AI-generated videos, audio, and altered media falsely portraying him were spreading widely. The problem did not end there. During the 2024 Indian general elections, many deepfake videos featuring politicians, including the Prime Minister, appeared on social media, aiming to mislead voters and sway public opinion. More recently, in 2025, as tensions between India and Pakistan grew, deepfake technology was also used in cross-border cyber warfare: deliberate AI-generated videos and images spread misinformation, stirred conflict, and damaged public trust. These events show how deepfake technology, which started as an experiment in entertainment, has turned into a serious risk to national security, democracy, and individual rights.
The origins of deepfake technology go back to 1997, when researchers launched the Video Rewrite program, a system that could analyse facial movements and match lip motions to new audio content. This early work set the stage for later developments in facial manipulation. A few years later, in 1999, the movie The Matrix featured advanced computer-generated imagery (CGI) and facial motion capture, showing what synthetic media could do in entertainment. A breakthrough came between 2014 and 2016, when Ian Goodfellow and his team introduced Generative Adversarial Networks (GANs). GANs changed the game for AI-based image and video generation: they use two competing neural networks, a generator and a discriminator, to create highly realistic synthetic content. In 2016, the Face2Face project showcased real-time facial reenactment with consumer-grade cameras, bringing deepfake-like manipulation within reach of the public.
However, the term "deepfake" had not yet been coined. It was not until 2017 that a Reddit user adopted the name while creating and sharing pornographic videos of celebrities using open-source face-swapping algorithms. The user also uploaded the code to GitHub, making this powerful manipulation technology easy to access. By 2018, misuse of deepfake technology started to increase: mobile apps like FaceApp and DeepFace put the technology in the hands of everyday users, and that same year a viral deepfake video of Barack Obama showed how synthetic media could spread misinformation.
India's strategy for addressing the rising threat of deepfakes has been largely reactive, relying on existing legal frameworks. There is currently no legislation dedicated to deepfake technology, but several laws indirectly address the misuse of AI-generated content, including the Information Technology Act, 2000 [14]; the Indian Penal Code, 1860 [15]; and provisions related to privacy, child protection, and intellectual property. The most recent addition is the Digital Personal Data Protection Act, 2023. Together, these laws form a patchwork system [16].
The objective of this paper is to critically analyse the technical and legal flaws in India's current strategy for preventing deepfake threats. It examines the shortcomings of the current laws and cybersecurity safeguards, especially the lack of regulation designed for deepfakes, and compares India's response with international standards such as the legal frameworks and detection technologies used by the US and EU. Based on this comparison, it proposes cooperative approaches to address the increasing abuse of deepfake technology in India, involving technological advancements, public awareness campaigns, and legal reform [13].
-
LITERATURE REVIEW
-
"Deepfakes and Indian Criminal Law" by N. R. Divyashree highlights the legal gaps in addressing deepfake threats in India. It stresses the lack of clear laws, platform accountability, and victim redressal mechanisms. The author recommends defining deepfakes legally, enhancing digital forensics, and promoting awareness. The paper warns of deepfakes' potential to disrupt democracy and calls for a balanced, proactive legal response.
-
"Exploring the Misuse of Deepfake Technology in India" by Dr. Aahana Chopra and Ms. Ananya Shukla examines the societal impact of deepfakes through surveys and media analysis. It identifies key areas of misuse, such as sexual exploitation, fraud, and misinformation, while revealing low public awareness and weak legal protections. The study stresses the need for legal reforms, awareness campaigns, and stakeholder collaboration. It also warns of an "infopocalypse", a future where trust in information collapses because of deepfakes.
-
This legal analysis highlights the urgent challenges deepfake technology poses in India. It explains how current laws like the IT Act and IPC only indirectly address deepfake misuse, such as in pornography, fraud, and political manipulation, without specific provisions. The paper urges the creation of a dedicated deepfake law, mandatory detection tools, intermediary accountability, AI literacy, and fast-track redressal systems. It warns that without legal reform, deepfakes will continue to exploit gaps in India's legal system and harm public trust [4].
-
"Mitigating Deepfake Threats to Privacy: Legal Frameworks and Technological Safeguards" examines how deepfakes threaten privacy and evaluates legal and technology-based solutions. It reviews laws in the EU, US, and India, noting gaps in enforcement and India's lack of specific legislation. Technological safeguards such as detection tools, blockchain, and watermarking are explored. A key insight is the idea of using AI itself to combat deepfakes through a "safety by design" approach, promoting self-regulation and innovation [5].
-
"Deepfake Video Detection: Challenges and Opportunities" surveys current methods for detecting deepfakes and outlines key obstacles in the field. It identifies three major challenge areas: lack of quality data, complex training needs, and unreliable detection models in real-world use. While deep learning methods show promise, they often fail outside controlled environments. The paper highlights the ongoing "arms race" between deepfake creation and detection, calling for more generalizable models, standardized benchmarks, and collaborative efforts to stay ahead of evolving threats.
-
"Deep Fake Technology and Identity Theft: An Emerging Challenge for Cyber Laws in India" [6] argues, by referencing international frameworks such as the Privacy Act of California, the Act on Artificial Intelligence 2024, and the EU's GDPR, that India must improve its cyber and information protection regulations. By implementing lessons learnt from these approaches, India can effectively address new digital hazards like identity theft and deepfakes. Beyond protecting consumers, a more robust legal system would increase confidence in the nation's technology developments and digital expansion [7].
-
"Analysing the Identification Approaches of Deep Fake Images and Videos Encapsulated in Fake Contents Available on Social Platforms" describes the growing impact of AI tools across fields like healthcare, education, research, and media. These tools can generate lifelike images and videos, making it hard to tell real from fake. Social media content, such as WhatsApp videos and Instagram reels, often uses AI, blurring reality. As a result, edited content may spread as real news. The paper highlights the need to identify such manipulations, especially when they are used as legal evidence.
-
PROPOSED METHODOLOGY
-
Working of Deepfake Model
Deepfakes are primarily created using neural networks, specifically autoencoders and Generative Adversarial Networks (GANs); both are the main tools used to generate deepfakes. The core idea is to swap one person's face for another so that facial expressions, movements, and the angle or direction of the face flow together naturally. The standard deepfake architecture uses:
a) A shared encoder: the part of the model responsible for compressing facial features from the input images (Face A and Face B) into a latent representation (a vector).
b) An intermediate layer (bottleneck): a fully connected layer that learns abstract features common to all faces, capturing meaningful patterns like facial structure and orientation. Leaky ReLU activation functions are used in these layers for better training results and generalization.
Predictive face masks and discriminator loss (as in GANs) can improve realism by adjusting the sharpness and boundaries of the synthetic face, but they are optional; these additions may make training more difficult and time-consuming. Finally, the system minimizes the loss (difference) between the generated and real images during training. The model is trained iteratively on datasets with multiple samples of faces A and B, adjusting weights to maximize facial realism and consistency.
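The shared-encoder/bottleneck design described above can be illustrated with a minimal NumPy sketch. The weights here are random and untrained, the dimensions are toy-sized, and all function names are our own; the point is only the structure: one encoder shared by both identities and a separate decoder per face, so encoding Face A and decoding with Face B's decoder performs the swap.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU keeps a small gradient for negative inputs
    return np.where(x > 0, x, alpha * x)

# Toy dimensions: a flattened 8x8 "face" compressed to a 16-dim latent code.
D_IN, D_LATENT = 64, 16

# One shared encoder, but a separate decoder per identity (Face A / Face B).
W_enc = rng.normal(0, 0.1, (D_IN, D_LATENT))
W_dec = {"A": rng.normal(0, 0.1, (D_LATENT, D_IN)),
         "B": rng.normal(0, 0.1, (D_LATENT, D_IN))}

def encode(face):
    # Bottleneck: compress facial features into a latent vector
    return leaky_relu(face @ W_enc)

def decode(latent, identity):
    return leaky_relu(latent @ W_dec[identity])

def mse_loss(pred, target):
    # Reconstruction loss minimized during training
    return float(np.mean((pred - target) ** 2))

# Face swap: encode Face A with the shared encoder,
# then reconstruct it with Face B's decoder.
face_a = rng.normal(size=D_IN)
swapped = decode(encode(face_a), "B")
print(swapped.shape)  # (64,)
```

During actual training, the loss would be computed between each face and its own reconstruction, and the weights updated iteratively, as the text describes.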
Fig. 3.1.1 Working model
Fig. 3.1.2 Flow chart of the working model
Improvements: Integration of GANs. To increase output realism, today's more sophisticated deepfake models frequently combine autoencoders with a GAN discriminator. The basic pipeline described above references GANs but does not employ a discriminator during training; while this is acceptable for simpler applications, GAN-based methods are more practical. Masking and face alignment: to better blend the face with the background and body, more recent pipelines make use of predictive masking, 3D landmarks, and precise face alignment. Superior architectures: for full-body movement or expression transfer, some newer techniques (such as StyleGAN2 and the First Order Motion Model) go beyond the shared encoder-decoder method. Arms race in deepfake detection: as deepfake generation improved, so did detection algorithms.
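The discriminator loss mentioned above can be sketched as follows. This is a schematic, not a working GAN: the "discriminator" is a random linear model and the images are random vectors; it only shows how the two adversarial objectives are scored with binary cross-entropy.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(prob, label):
    # Binary cross-entropy for a single probability
    eps = 1e-12
    return float(-(label * np.log(prob + eps) + (1 - label) * np.log(1 - prob + eps)))

rng = np.random.default_rng(1)
w = rng.normal(0, 0.1, 64)  # stand-in discriminator weights

def discriminator(img):
    # Probability that the image is real
    return sigmoid(img @ w)

real = rng.normal(size=64)  # stand-in for a real frame
fake = rng.normal(size=64)  # stand-in for a generated frame

# Discriminator objective: label real frames 1 and fake frames 0.
d_loss = bce(discriminator(real), 1) + bce(discriminator(fake), 0)
# Generator objective: fool the discriminator into labelling fakes as real.
g_loss = bce(discriminator(fake), 1)
```

In a real GAN, both losses are minimized alternately by gradient descent; the generator improves precisely because the discriminator punishes unrealistic output.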
In the digital age, identifying deepfakes has become increasingly difficult, particularly as the technology underlying them keeps developing. Relying solely on the human eye is no longer adequate, as artificial audio and video are becoming almost indistinguishable from authentic media. To identify these manipulated files accurately and consistently, researchers and forensic specialists are developing dedicated tools and techniques. Visual and motion analysis is one of the best methods for identifying deepfakes. Deepfakes are usually created frame by frame, which occasionally results in minute mistakes: unnatural blinking, uneven lighting, or misaligned facial features. For instance, a person's eyes might not blink naturally, or their mouth might move slightly out of time with the sound. These minor irregularities, such as facial flickering, jitter, or awkward frame transitions, are more noticeable when such videos are watched in slow motion. Another strategy is to monitor facial behaviour. Leading digital forensics specialist Dr. Hany Farid created a technique that examines the behaviour of actual human faces rather than relying solely on artificial intelligence. His team looks at eye movements, head movements, and changes in facial expression. By mapping these patterns onto both real and fake videos, they can identify abnormal behaviour that would not normally be present in a real recording. Since most deepfake generators still have trouble accurately simulating subtle human expressions, this kind of behavioural biometrics can be particularly useful. Farid's method concentrates on facets of human behaviour that machine learning models still struggle to replicate, in contrast to AI-based detection systems that depend on training data. This non-AI-based forensic technique makes it less susceptible to attacks or evasion by deepfake producers.
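One concrete way to quantify the unnatural blinking described above is the eye aspect ratio (EAR), a standard landmark-based measure from facial analysis; the landmark coordinates and any thresholds below are illustrative assumptions of ours, not values from this paper.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered corner, top, top, corner, bottom, bottom.
    High EAR = open eye, low EAR = closed eye; a video whose EAR never dips
    over many seconds suggests the unnatural blinking discussed in the text."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = 2.0 * np.linalg.norm(p1 - p4)
    return vertical / horizontal

# Toy landmark sets for an open and a nearly closed eye.
open_eye = np.array([[0, 0], [1, 1], [2, 1], [3, 0], [2, -1], [1, -1]], float)
closed_eye = np.array([[0, 0], [1, 0.1], [2, 0.1], [3, 0], [2, -0.1], [1, -0.1]], float)

print(eye_aspect_ratio(open_eye) > eye_aspect_ratio(closed_eye))  # True
```

In practice the landmarks come from a face-landmark detector applied per frame, and the EAR time series is scanned for the periodic dips that natural blinking produces.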
Having alternative detection techniques that do not rely exclusively on AI offers a stronger defence, because AI-generated fakes keep improving. The arms race between creators and detectors is a crucial element in the fight against deepfakes. Computer science professor Hao Li, a pioneer in deepfake generation, works with detection specialists by creating incredibly lifelike deepfakes; these videos are then analysed by experts like Farid to enhance detection tools. By pushing both fields forward and keeping one from lagging too far behind, this competitive relationship helps detection systems evolve alongside generation techniques. Tech firms are also stepping up. Companies such as Microsoft and Facebook have started initiatives to stop the spread of deepfakes. To assist researchers in developing more accurate detection models, they have released sizeable, labelled datasets of authentic and fraudulent videos. These datasets help AI systems become more adept at identifying inconsistencies and teach them how deepfakes are created. A future development could be automatic deepfake detection tools integrated directly into platforms like YouTube, Instagram, or TikTok, flagging potentially manipulated videos before they go viral. Finally, experts stress the importance of public awareness and critical thinking. People should be wary of videos that seem too shocking, too flawless, or too strange, because deepfakes are becoming increasingly realistic. Viewers should consider the video's source, check it against reliable news outlets, and assess its plausibility. Digital literacy becomes a crucial barrier against false information in a time when "seeing is no longer believing". In conclusion, identifying deepfakes necessitates a blend of technological instruments, human discernment, and interdisciplinary cooperation. Deepfakes are largely produced and detected by AI, but forensic techniques, platform accountability, and public awareness campaigns are just as crucial in stopping the spread of misleading digital content.
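The facial flickering and jitter cues described earlier can be turned into a simple numeric signal by measuring how much consecutive frames change. Real detectors are far more sophisticated (and operate on aligned face crops), so this is only a sketch of the idea, with made-up toy frames.

```python
import numpy as np

def flicker_score(frames):
    """Mean absolute difference between consecutive frames.
    Sudden spikes can flag the abrupt frame transitions the text describes."""
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))
    return diffs.reshape(len(frames) - 1, -1).mean(axis=1)

# Toy 4x4 "videos": one changing gradually, one with a single abrupt frame.
smooth = [np.full((4, 4), i, float) for i in range(5)]
jumpy = smooth[:2] + [np.full((4, 4), 50.0)] + smooth[2:]

print(flicker_score(smooth).max())      # 1.0
print(flicker_score(jumpy).max() > 10)  # True
```

A per-frame score like this would typically be computed only over the face region and compared against the statistics of genuine footage.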
Fig. 3.1.3 Deepfake
Fig. 3.1.4 Deepfake image
-
Deepfake Victim: A Real-Life Incident
-
Beginning on November 6, 2023, a fake video of actress Rashmika Mandanna circulated online. It was created with deepfake technology, which uses artificial intelligence to modify images and videos to make them appear falsely authentic: Rashmika's face was superimposed onto someone else's body, in violation of privacy laws. Fig. 1 shows a screenshot of the deepfake video, which purports to show the actress entering a lift. The fake video went viral on social media and attracted wide attention, gathering millions of views, at least 2.4 million of them on X, the platform formerly known as Twitter. After tracing the fake video's origin, Indian journalist Abhishek Kumar pushed for new "legal and regulatory" frameworks to combat counterfeit images online. The original video, which featured a woman named Zara Patel, was posted to Instagram on October 8. Whether Patel contributed to the production of the deepfake version is still unknown. Celebrities from a variety of fields have been the subject of phoney self-portrait videos, yet it remains unknown who made them or why [7].
Fig 1 Rashmika Mandanna video
-
Steve Beauchamp, an 82-year-old retiree, was simply looking for a way to give his family a little more financial stability. When he watched a video of Elon Musk fervently endorsing a fresh investment opportunity, it appeared genuine and enticing. He decided to give it a go with $248 as the starting sum. Over the following few weeks, persuaded by constant communication and his growing faith in the scheme, he ultimately invested almost $690,000, his entire retirement savings. Then everything disappeared. Beauchamp was unaware that the clip he had watched was a deepfake. To make a highly realistic fake, scammers had used artificial intelligence to distort an actual conversation with Elon Musk: they changed his voice and synchronized the phony audio with subtle mouth movements. To the typical viewer, it seemed entirely genuine. AI-generated videos of this type are becoming more prevalent online, particularly on social networking sites. They take only a few minutes to prepare and are inexpensive, frequently costing just a few dollars, yet they are very successful, particularly when disseminated via sponsored ads on platforms like Facebook. Thousands of people have been duped by phony representations of well-known individuals like Musk, losing millions of dollars. According to experts, including professionals at Deloitte, this kind of AI-powered fraud could cost consumers billions of dollars in the years to come [9].
Fig. 2 Elon Musk deepfake video
-
Global Legal Framework Analysis
a) USA
Federal Laws: At present, the United States has no federal legislation entirely dedicated to deepfakes, but discourse on filling this lacuna is gradually emerging. The No Artificial Intelligence Fake Replicas and Unauthorized Duplications (No AI FRAUD) Act raises the stakes: this proposed law would make it a criminal offense to develop an electronic likeness of another person, living or dead, without their consent, covering both appearance and voice in view of the increasing sophistication of AI-generated content. Other federal bill proposals include the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, which targets performers' voice and likeness, and the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, which seeks to protect individuals from non-consensual explicit images. For now, however, no single federal law regulates deepfakes in the United States.
State Laws: Several U.S. states have passed laws regulating deepfakes in response to specific problems such as election interference, non-consensual creation of adult material, and identity theft. California has enacted progressive statutes: Assembly Bill 730 outlaws deepfakes in political campaigns, and Assembly Bill 602 holds perpetrators accountable to victims of non-consensual pornography. Texas passed Senate Bill 751, which prohibits the making and dissemination of deepfake videos that seek to alter the electoral process; the state also prohibits the creation of sexually explicit content through fake impersonation of any person without their permission. Other states, including Florida, New York, Illinois, and Virginia, support similar regulations in specific forms. However, because their definitions and applications are not uniform, these laws provide a rather piecemeal set of protections, one state at a time.
b) UK
The UK's Online Safety Act of 2023 criminalizes the sharing of fake sexually explicit images where the act causes distress and the sender intended harm or was reckless as to it. This law marks a significant step in the fight against the harms of deepfakes. However, those targeted by other kinds of deepfake content must rely on existing laws governing defamation, harassment, or data privacy, which can be a tall order. This illustrates the need for additional regulation to deal with the newer, complex issues raised by deepfakes.
c) EU
The AI Act put forward by the European Union is a pioneering attempt to build a legal framework for AI systems, including deepfakes. Under Article 52(3), the Act introduces transparency provisions requiring the creators of deepfake content to disclose that it was synthetically generated. The Act does not prohibit deepfakes directly; the measure should be seen as a work in progress that attempts to weigh the potential benefits of deepfake technology against possible harm to people's rights and freedoms. The Act was negotiated in December 2023 and approved in 2024. When it comes into force, it will place the EU in the vanguard and set the course for AI regulation in the rest of the world.
d) China
China has not been passive in the face of deepfake technology and has rapidly established a set of strict laws to prevent its misuse. From the start of 2023, labelling of all AI-generated content became mandatory so that end-users are not misled, with sanctions for violations, indicating the government's intent to retain control over the content produced and shared on the Internet.
In 2019, the Chinese app ZAO gained widespread popularity by allowing users to replace actors' faces with their own in movie clips. Despite its viral success, the app soon attracted criticism over data privacy issues. This led the Cyberspace Administration of China (CAC) to step in, urging the developer, Momo Inc., to revise its privacy policies and strengthen data protection measures. The authorities highlighted the importance of complying with legal standards when collecting and using user information [8]. China's regulations also cover the practical issue of deepfakes used in fraud and disinformation, showing that China recognises the full range of potential harms of this technology.
-
e) Australia
Australia has only recently begun to develop concrete legislation addressing deepfakes, with the current emphasis on online safety and harm minimisation. Existing legal provisions concerning defamation, harassment, and misuse of data cover some deepfake cases, but there is compelling evidence that such laws do not adequately protect against infringements involving AI-generated content.
The Australian government's approach is to engage stakeholders in dialogue to establish a strong policy that fosters innovation while protecting consumers.
-
f) France
France has put anti-deepfake measures in place to prevent identity fraud and curb the spread of fake news. These laws penalize producers and distributors of manipulated content that causes harm. However, as in many other countries, the French regulations address specific concerns and do not yet consider the broader scope of deepfakes [10].
-
Cybercrimes: Indian Statistics
(a) Registered Cases of Various Categories under the IT Act: A total of 217 cases were registered under the IT Act, 2000 during 2007, compared with 311 cases during 2006, a decline of 30.2%. Of the 217 cases, the major categories were hacking (Sec. 66) (126), obscene publication/transmission in electronic form (Sec. 67) (68), and others (23). The major states reporting cases under the IT Act, 2000 were Maharashtra, followed by Karnataka (40), Andhra Pradesh (25), Kerala (22), Rajasthan (19), and Uttar Pradesh (14). These six states together accounted for 82.5% of the total cases (217) reported in the country under the IT Act, 2000 during 2007. A total of 154 persons were arrested for offences under the IT Act during 2007; of these, 98 persons (63.6%) belonged to the age group 18-30 years. State-wise analysis revealed that Maharashtra reported the highest number of persons arrested (47), followed by Andhra Pradesh (22), Kerala (20), Karnataka (17), and Uttar Pradesh (9). Among the arrested persons, 85 were educated up to graduate level and above, 49 were diploma holders, and 20 were educated below matric/secondary level.
Fig. 3.4.1 IT Act cases and arrests by state
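The percentages quoted above can be checked directly from the reported counts. A quick sanity check in Python (using only the figures already cited from the NCRB data):

```python
# Decline in registered IT Act cases from 2006 to 2007.
cases_2006, cases_2007 = 311, 217
decline = (cases_2006 - cases_2007) / cases_2006 * 100
print(round(decline, 1))  # 30.2

# Share of arrested persons in the 18-30 age group.
arrested, aged_18_30 = 154, 98
share = aged_18_30 / arrested * 100
print(round(share, 1))  # 63.6
```

Both values match the percentages reported in the text, confirming the arithmetic is internally consistent.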
While the IT Act, 2000, particularly Chapter XI, outlines punishable cyber offences, its ability to address today's complex cybercrimes remains limited. Many offences under the Act are bailable, reducing the law's effectiveness as a deterrent. The key offences listed include:
Tampering with computer source code or computer source documents; publishing, transmitting, or causing to be published any obscene information in electronic form; failure to protect data; failure to decrypt information, when necessary, in the interest of the sovereignty or integrity of India or to prevent incitement to a cognizable offence; creating or publishing digital signatures with intent to misrepresent or for fraudulent purposes; obtaining a licence as a Certifying Authority (CA) or a digital signature certificate by misrepresentation; and publishing digital signature certificates for fraudulent purposes.
Despite these provisions, enforcement faces challenges. Police and judicial officers often lack adequate training in cybercrime investigation and digital evidence handling; coordination between law enforcement and technical agencies is weak, and cyber forensic support is underdeveloped. Strengthening cyber law enforcement, training officials, and updating procedures across states are crucial to improving the current framework. The IT Act, 2000 has several limitations that affect its ability to address modern cyber issues. It does not resolve jurisdiction conflicts or regulate domain name disputes; it overlooks intellectual property protection online and misses many cybercrimes such as cyberstalking, cyberfraud, and cybersquatting. Important e-commerce concerns such as privacy, content regulation, and electronic payment rules are not covered. Additionally, it lacks clear implementation guidelines, and low awareness among officials hampers enforcement. These gaps highlight the need to update the law to better handle today's cyber challenges. The Indian Penal Code, 1860 (IPC) can also help in dealing with deepfake abuse. Sections 499 and 500 cover defamation, which can apply to fake videos that harm reputations. Section 354D addresses cyberstalking, which includes using deepfake technology to harass someone, especially women. Sections 503 and 507 address criminal intimidation and anonymous threats. Section 469 deals with forgery aimed at harming someone's reputation, which can include fake videos. Section 509 punishes words, gestures, or actions meant to insult a woman's modesty and is often used in such cases [11].
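For illustration, the provisions discussed in this paper can be collected into a small lookup table. The summaries below are simplified paraphrases for demonstration, not legal text, and the helper function is our own:

```python
# Simplified paraphrases of provisions discussed in this paper (not legal text).
PROVISIONS = {
    "IT Act 66C": "identity theft",
    "IT Act 66E": "violation of privacy",
    "IT Act 67":  "publishing obscene material in electronic form",
    "IPC 499/500": "defamation (can cover fake videos that harm reputations)",
    "IPC 503/507": "criminal intimidation and anonymous threats",
    "IPC 469": "forgery intended to harm reputation",
    "IPC 509": "insulting a woman's modesty",
}

def applicable_sections(keyword):
    # Find sections whose (paraphrased) description mentions the keyword.
    return [sec for sec, desc in PROVISIONS.items() if keyword in desc]

print(applicable_sections("reputation"))  # ['IPC 499/500', 'IPC 469']
```

A structured mapping like this underlines the paper's point: deepfake harms must currently be routed through general-purpose provisions rather than a dedicated statute.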
-
LESSONS FOR INDIA TO OVERCOME DEEPFAKE CYBERCRIMES
India currently lacks specific laws to tackle the growing threat of deepfakes, unlike countries such as the U.S. and U.K., which have introduced targeted legislation. In India,
deepfake-related crimes are handled under general provisions of the Information Technology Act, 2000, such as Section 66C (identity theft), Section 66E (violation of privacy), and Section 67 (obscene content), along with certain IPC sections related to indecent representation. However, these laws were not designed to address the unique challenges posed by deepfakes. Another concern is copyright infringement, as AI-generated deepfakes often use original human-created content without permission, leading to potential misuse of a person's likeness or intellectual property.
Establish a Specific Deepfake Law
India must develop a specific legal framework to deal with crimes related to deepfakes. The laws in place are either out of date or too general to handle the complexity of AI-generated media. Making or distributing deepfakes that use someone else's voice, images, or videos to harass, threaten, or defame them should be treated as a serious criminal offense. To act as a deterrent, punishments must include substantial jail time and heavy fines. Although such penalties may sound harsh, the goal is to make the consequences severe enough that people are deterred from considering such crimes in the first place.
Tech Platforms Must Label AI Content
Social media sites and tech firms should be required to put in place AI detection tools that identify and label content produced by AI. For example, a noticeable watermark or "AI-generated" tag should be present on any video, image, or audio file that has been altered or produced using AI tools. This would lessen the impact of deceptive media and make it easier for viewers to spot fake content. Labeling becomes particularly crucial when deepfakes are used to harm public figures or celebrities, as in the recent "Ghibli-style art" cases. Uploading one's own images and employing AI tools to create Ghibli-style art has been popular recently. Although it is entertaining and aesthetically pleasing, it raises significant privacy and deepfake concerns, particularly when the platform does not make it apparent how it stores or uses your photo. As noted on X by Proton, a platform that focuses on data security and privacy: "Aside from the dangers of data breaches, sharing private images with AI gives you no control over the way they are utilized, because the images are used to train the AI. They might be used, for example, to produce content that is harassing or defamatory." Individuals unwittingly divulge their facial information, which may be used maliciously or even to train artificial intelligence. People should therefore be mindful of this and use caution before posting their images on any social networking site.
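The labeling requirement described above can be illustrated with a minimal sketch. We assume here, purely for illustration, that a platform attaches a machine-readable sidecar manifest to each upload and binds it to the file's bytes with a hash; real provenance systems such as the C2PA standard embed cryptographically signed manifests inside the media file itself. The names `write_provenance` and `check_provenance` are hypothetical.

```python
import json
import hashlib
from pathlib import Path

def write_provenance(media_path: str, generator: str) -> Path:
    """Attach a machine-readable 'AI-generated' label to a media file.

    The label lives in a sidecar JSON manifest next to the file and is
    bound to the file's contents by a SHA-256 hash, so any later edit
    to the media invalidates the manifest.
    """
    data = Path(media_path).read_bytes()
    manifest = {
        "label": "AI-generated",
        "generator": generator,
        "sha256": hashlib.sha256(data).hexdigest(),
    }
    sidecar = Path(media_path).with_suffix(".provenance.json")
    sidecar.write_text(json.dumps(manifest, indent=2))
    return sidecar

def check_provenance(media_path: str) -> str:
    """Return the label if a manifest exists and still matches the file."""
    sidecar = Path(media_path).with_suffix(".provenance.json")
    if not sidecar.exists():
        return "unlabeled"
    manifest = json.loads(sidecar.read_text())
    digest = hashlib.sha256(Path(media_path).read_bytes()).hexdigest()
    return manifest["label"] if digest == manifest["sha256"] else "tampered"
```

A viewer-facing "AI-generated" tag could then be rendered whenever `check_provenance` returns the label, while a "tampered" result would signal that the file was modified after labeling.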
User Responsibility and Awareness
Users need to be taught how to spot and react to deepfake content. People should be urged to confirm the legitimacy of any dubious media before forwarding or sharing it. Through digital literacy campaigns, governments and tech companies can teach people how to spot manipulated content using reverse image searches, fact checks, and AI detection tools. Encouraging people to "think before you share" will help stop harmful content from spreading.
Fast-Track Action by Cybercrime Cells
Because of their workload and lack of expertise, cybercrime cells frequently take months to respond to complaints pertaining to deepfakes. To combat this, India should create specialized units within its cybercrime departments that are solely focused on AI-driven crimes, particularly deepfakes. These units ought to be equipped with sophisticated forensic tools and skilled staff so they can promptly identify the origin of deepfakes, remove the content, and bring offenders to justice.
To effectively respond to this issue
India needs to draft new legislation that defines terms like "deepfake" and "synthetic media" and lays out procedures for handling such crimes. This includes classifying deepfake offenses based on their severity and updating the IPC and IT Act to reflect modern digital threats. Takedown systems should be developed to quickly remove harmful content, and a dedicated enforcement agency must be empowered to act against such crimes. Additionally, law enforcement officers should be trained to recognize and respond to the dangers of deepfakes, ensuring a strong and informed response to this emerging challenge.
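A takedown system of the kind proposed here can be prototyped as a shared blocklist of content fingerprints: once an enforcement agency flags a harmful deepfake, any platform holding the same bytes can match and remove it without re-reviewing the content. The sketch below is a simplified illustration under that assumption; the class name `TakedownRegistry` is hypothetical, and production systems use perceptual hashes rather than SHA-256, since an exact-match digest breaks as soon as a file is re-encoded or resized.

```python
import hashlib

class TakedownRegistry:
    """Minimal shared blocklist of flagged deepfake content."""

    def __init__(self) -> None:
        self._flagged: set[str] = set()

    @staticmethod
    def fingerprint(content: bytes) -> str:
        # Exact-match fingerprint; real deployments would use a
        # perceptual hash that survives re-encoding and resizing.
        return hashlib.sha256(content).hexdigest()

    def flag(self, content: bytes) -> str:
        """Register content flagged by the enforcement agency."""
        digest = self.fingerprint(content)
        self._flagged.add(digest)
        return digest

    def should_remove(self, upload: bytes) -> bool:
        """Called by platforms on every new upload before publishing."""
        return self.fingerprint(upload) in self._flagged
```

The design choice matters: sharing only fingerprints (not the media itself) lets the registry be distributed to many platforms without redistributing the harmful content.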
Education of Students and Youth
Early awareness can be greatly increased by including chapters on deepfakes and AI-based manipulation of media in school curricula. The moral and legal ramifications of careless use of such technology should be explained to students, along with how to differentiate between authentic and fraudulent content. Schools and universities can also host cybersecurity seminars, workshops, and awareness campaigns to educate the next generation of responsible digital citizens [12].
CONCLUSION
Although deepfake technology was first developed for constructive purposes, it has now grown to be a significant cyberthreat in India, particularly when it is used improperly during political debates and international disputes. It is challenging to address the rising threats of deepfakes in India due to the absence of robust legislative frameworks and technological infrastructure. This study emphasises how urgently India must improve its cyber laws, create sophisticated detection systems, and take inspiration from international best practices. India can create a safer online environment and lessen the negative effects of deepfake content by promoting cooperation between the government, technology specialists, and law enforcement.
REFERENCES
- Deepfakes and Indian Criminal Law: Addressing the Gaps in Legal Protection. NR Divyashree, MK PM, RV Institute of Legal Studies.
- Exploring the Misuse of Deepfake Technology in India: Implications for Society. Dr Ahana Chopra, Ms Ananaya Shukla.
- Criminalizing Deepfake Technology in India: A Legal Analysis of Privacy and Regulatory Gaps. Karan Choudary and Mahak Rajpal, International Journal of Advanced Legal Research.
- Mitigating Deepfake Threats to Privacy: Legal Framework and Technological Safeguards. Manish Nadal.
- Deepfake Video Detection: Challenges and Opportunities. Achhardeep Kaur, Azadeh Noori Hoshyar.
- Deep Fake Technology and Identity Theft: An Emerging Challenge for Cyber Laws in India. Sourav Mandal, Research Scholar, CHRIST (Deemed to be University).
- Analysing the Identification Approaches of Deep Fake Images and Videos Encapsulated in Fake Contents Available on Social Platforms. Gaurav Agarwal, Akash Sanghi, Department of Computer Science and Engineering, Invertis University.
- China's first deepfake rules enter force. https://news.cgtn.com/news/2023-01-10/China-s-first-deepfake-rules-to-develop-AI-prevent-misuse-enter-force-1gtxetyThwQ/index.html
- Elon Musk deepfake goes viral after Donald Trump Twitter return. verifythis.com.
- Deepfakes and Indian Law: Bridging the Legal Gaps in the Era of Synthetic Media. https://lawfullegal.in/deepfakes-and-indian-law-bridging-the-legal-gaps-in-the-era-of-synthetic-media/
- https://youtu.be/q_IOxVBEstQ?si=tZzBKUhyEzKXe4y
- https://www.youtube.com/watch?v=fSm6ecT9PzE
- https://askpromotheus.ai/artificial-intelligence
- Information Technology Act, No. 21, Acts of Parliament, 2000 (India).
- Indian Penal Code, 1860, No. 45, Acts of Parliament, 1860 (India).
- CRIMINALIZING-DEEPFAKE-TECHNOLOGY-IN-INDIA-A-LEGAL-ANALYSIS-OF-PRIVACY-AND-REGULATORY-GAPS.pdf
