Tamil Star Shruthi Narayanan Responds After Deepfake Video Controversy
Chennai, India, April 10, 2025: Tamil television actress Shruthi Narayanan has finally spoken out and made her first public appearance after becoming the target of a viral fake video scandal. The 24-year-old actress, who rose to fame playing Rohini in the popular show Siragadikka Aasai, was falsely linked to a private video that recently spread across social media.
The Viral Video: What Happened?
The controversy erupted after a video purporting to show Shruthi went viral online. The clip sparked heated debate about the casting couch in the Tamil film industry and led to widespread speculation about her involvement. However, several sources have since clarified that the video was not real: it was reportedly created using AI to mimic her appearance, a dangerous and increasingly common type of forgery known as a deepfake.
Shruthi Breaks Her Silence
Amid growing backlash and online harassment, Shruthi took to Instagram to address the issue directly. In a heartfelt story, she condemned the spread of fake content and shared how deeply the incident had affected her.
“For those of you sharing this kind of content, it may be a joke or entertainment. But for me, it’s heartbreaking and painful,” she wrote.
Her emotional note quickly gained attention, sparking support from fans and fellow actors who urged people to stop spreading false information and to show more responsibility online.
First Public Appearance After the Scandal
Despite the controversy, Shruthi recently made a confident public appearance. Dressed elegantly and appearing calm, she seemed determined to move forward from the distressing episode. Her appearance has been praised by fans as a bold statement against cyber harassment and the misuse of digital technology.
Shruthi’s decision to step back into the public eye is seen as a powerful message — that she will not be silenced by online rumors or manipulated media.
A Growing Concern: Deepfakes in Entertainment
This incident is just one of many recent cases where AI-generated deepfakes have been used to target public figures, especially women in the entertainment industry. These digital forgeries pose a serious threat to privacy, reputation, and mental well-being.
Many are now calling for stricter cybercrime laws in India and globally to protect people from such digital abuse.
What Are Deepfakes and How Do They Work?
Deepfakes rely on a branch of AI called deep learning: a model is trained on many real photos and videos of a person until it can recreate their face, voice, and mannerisms and insert them into fabricated videos that look convincingly real. These can then be shared across the internet, often fooling viewers and causing real-world damage.
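To make that mechanism concrete, the toy sketch below (assuming PyTorch; every layer size and name is an illustrative assumption, not code from any real deepfake tool) shows the shared-encoder, per-person-decoder design that many classic face-swap models were built on: one encoder learns to compress any face into a compact code, and each person gets a decoder that learns to redraw that person's face from the code.

```python
# Conceptual toy only: the shared-encoder, per-person-decoder design behind
# many classic face-swap models. All layer sizes and names are illustrative
# assumptions; this is not code from any real deepfake tool.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Redraws a face crop from the latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 64, 16, 16)
        return self.net(x)

# One shared encoder, one decoder per person. Each decoder only ever learns to
# reconstruct its own person's faces, so the shared latent code is what carries
# pose and expression across identities.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

faces_a = torch.rand(8, 3, 64, 64)            # stand-in for real face crops of person A
reconstruction = decoder_a(encoder(faces_a))  # training objective: rebuild A with A's decoder
loss = nn.functional.mse_loss(reconstruction, faces_a)
print(reconstruction.shape, round(loss.item(), 4))
```

Because the encoder is shared, a code extracted from one person's face can be redrawn by another person's decoder, which is what makes the forgery look natural; detection research focuses on the subtle artifacts this process leaves behind.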
In the case of actress Shruthi Narayanan, a deepfake video falsely claimed to show her in a private situation. Even though it wasn’t her, the damage was already done. People believed it, shared it, and criticized her — all based on a fake clip.
While celebrities are often targeted, everyday people are also at risk. There have been cases of deepfakes being used in revenge porn, fake job interviews, political manipulation, and even scams.
This technology is advancing faster than the laws meant to control it.
What's Being Done?
Governments, tech companies, and online platforms are starting to take action:
- Laws are being reviewed and updated to include punishments for creating or sharing harmful deepfakes.
- Social media platforms like Instagram, X (formerly Twitter), and TikTok are improving their tools to detect and remove fake videos (a rough sketch of how such a detector scores a frame follows this list).
- Celebrities and activists are speaking out, demanding protection and accountability.
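As a rough illustration of the detection side mentioned above, the sketch below (again assuming PyTorch; the tiny architecture and the 0.5 review threshold are purely assumptions for the example, not any platform's actual system) scores a single video frame as real or synthetic.

```python
# Toy illustration of a frame-level deepfake detector. The architecture and the
# 0.5 review threshold are assumptions for this sketch, not any platform's system.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64x64 -> 32x32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),  # single logit: how likely the frame is synthetic
)

# In deployment this would be trained on large labelled sets of real and
# AI-generated faces; here we just score a random stand-in frame.
frame = torch.rand(1, 3, 64, 64)
prob_fake = torch.sigmoid(detector(frame)).item()
flag_for_review = prob_fake > 0.5
print(f"estimated probability synthetic: {prob_fake:.2f}, flag for review: {flag_for_review}")
```

Real systems are far larger, are trained on millions of labelled real and AI-generated faces, and combine frame-level scores with audio and temporal cues before anything is flagged or removed.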
However, many experts believe stronger laws and better public awareness are still needed — especially in countries like India, where deepfake use is rising but legal protections are still catching up.
Final Thoughts
Shruthi Narayanan’s case is just one of many that show how deepfakes are not just a tech trend — they’re a digital threat. As fake videos get harder to detect, it becomes even more important to verify what we see online, support those affected, and push for better laws and online safety tools.