Rashmika Mandanna Deepfake Video: AI News and Concerns

A disturbing deepfake video featuring popular Indian actress Rashmika Mandanna recently went viral online, underscoring the dangers of artificial intelligence manipulation. The digitally altered clip depicts Mandanna entering an elevator, but her face has been superimposed onto the body of another woman using deepfake technology.

The viral clip garnered over 2.4 million views before being taken down across social media platforms. It highlights the alarming threat deepfakes pose to individuals, society, and democratic institutions.

The Viral Deepfake Sparks Outrage

The viral deepfake video disturbed many, including Bollywood legend Amitabh Bachchan who shared it on Twitter. Bachchan called for new regulations to tackle the creation and spread of deepfake content that violates privacy and spreads misinformation.

Union Minister Rajeev Chandrasekhar also reminded social media platforms of their responsibility to curb such manipulative media according to Indian law.

Rashmika Mandanna herself denounced the “scary” deepfake, calling it a “great misuse of tech” that could harm many more if not addressed urgently. The woman originally in the video, influencer Zara Patel, also expressed her deep concern over the unauthorized use of her likeness.

The video’s rapid spread and ensuing backlash highlight deepfakes’ capacity to deceive viewers and potentially cause real harm to reputations and privacy.


What are Deepfakes and How are they Created?

Deepfakes use powerful artificial intelligence techniques to digitally swap a person’s face and voice onto source images or videos. The algorithms analyze facial expressions, poses, mouth shapes and vocal tones to create manipulated footage that looks strikingly real.

Deepfake software such as Zao and DeepFaceLab makes it possible for almost anyone to generate fake videos on a consumer PC. Large datasets of photos and videos are used to train deep learning models on a target’s facial and vocal characteristics.
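At the core of most face-swap tools is a shared encoder paired with one decoder per identity: frames of either person are compressed into a common latent “face representation”, and swapping means decoding person A’s latent code with person B’s decoder. The sketch below illustrates only that data flow with random, untrained weights and hypothetical dimensions; it does not produce real imagery.

```python
import numpy as np

# Toy illustration of the shared-encoder / per-identity-decoder idea behind
# face-swap deepfakes. All weights are random stand-ins (no training), and
# FACE_DIM / LATENT_DIM are assumed sizes for demonstration only.

rng = np.random.default_rng(0)
FACE_DIM, LATENT_DIM = 64 * 64, 128  # flattened 64x64 frame, 128-dim latent

encoder = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01
decoder_a = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01  # renders person A
decoder_b = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01  # renders person B

def encode(frame: np.ndarray) -> np.ndarray:
    """Map a flattened frame into the shared latent face representation."""
    return np.tanh(frame @ encoder)

def swap_face(frame_of_a: np.ndarray) -> np.ndarray:
    """Encode a frame of person A, then decode it with person B's decoder."""
    return encode(frame_of_a) @ decoder_b

frame = rng.standard_normal(FACE_DIM)  # stand-in for one video frame of A
fake = swap_face(frame)                # same shape as the input frame
```

In a real system the encoder and both decoders are deep convolutional networks trained jointly on thousands of frames of each person, which is what makes the swapped output convincing.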

The danger is that deepfakes make spreading misinformation and maliciously impersonating people incredibly easy. Their potential for abuse will only grow as the technology becomes more accessible.

Deepfake Dangers to Individuals and Society

While deepfakes are often created for entertainment purposes, their capacity to distort truth and violate consent makes them a serious societal threat. Here are some of the dangers posed by deepfakes:

Spreading Misinformation

Convincing deepfakes can rapidly spread false news, hoaxes and propaganda on social media or messaging apps. Inserting famous faces and voices into events lends credibility that manipulates opinions and beliefs.

This erodes trust in institutions and can undermine journalism. During elections, deepfakes could falsely depict politicians making inflammatory remarks, skewing discourse.

Identity Theft and Loss of Privacy

Inserting someone’s likeness without consent into embarrassing or explicit deepfake scenarios violates privacy and harms reputations. Personal information exposed in leaked deepfakes may also enable identity theft.

Women are disproportionately targeted: the vast majority of deepfake pornography found online features women without their consent. Actress Scarlett Johansson has described this as a “virtually lawless (online) world.”

National Security Threats

Deepfakes that depict government officials declaring wars or ordering strikes could set off global unrest and chaos. They also aid espionage and disinformation campaigns seeking to destabilize democracies and gain geopolitical influence.

In early 2019, soldiers in Gabon launched an unsuccessful coup shortly after a stilted New Year’s address by President Ali Bongo fueled rumors that the video was a deepfake concealing his ill health.

Facilitating Political Deception

Even when deepfake technology isn’t used, its mere existence lets perpetrators dismiss legitimate videos as fake.

This “liar’s dividend” allows covering up crimes and wrongdoing by cynically claiming real footage must be a deepfake. Such plausible deniability severely undermines efforts to hold power to account.

As deepfakes grow more advanced and ubiquitous, they threaten to sever fact from fiction, allowing the powerful to reshape reality to their advantage.

Challenges in Detecting Deepfakes

While lawmakers, tech companies and startups are racing to devise deepfake countermeasures, detection remains an uphill challenge.

Sophisticated deepfakes are improving faster than defensive detection tools. Generative AI systems that can synthesize realistic images and speech from scratch are also on the horizon, skirting many current deepfake detectors.

Progress has been made by analyzing pixel-level distortions and identifying unnatural head and facial movements atypical of humans. But most detection methods still fall short against near-flawless fakes.
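One simple intuition behind pixel-level analysis is that a spliced-in face region often carries temporal noise, a frame-to-frame “flicker”, that the untouched parts of the video lack. The toy sketch below scores that flicker on synthetic frames; the data and the approach are illustrative assumptions, nothing like a production detector.

```python
import numpy as np

# Crude sketch of one pixel-level detection idea: measure frame-to-frame
# residuals as a proxy for the temporal flicker that face splicing can
# introduce. Synthetic 32x32 frames stand in for a real video region.

def flicker_score(frames: np.ndarray) -> float:
    """Mean absolute difference between consecutive frames."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return float(diffs.mean())

rng = np.random.default_rng(1)
base = rng.random((32, 32))                     # one static background frame
clean = np.stack([base] * 10)                   # untouched region: no flicker
tampered = clean + rng.normal(0, 0.2, clean.shape)  # "pasted" region with noise

score_clean = flicker_score(clean)        # 0.0 -- identical frames
score_tampered = flicker_score(tampered)  # clearly above zero
```

Real detectors learn far subtler cues (blending-boundary artifacts, physiologically implausible blinking or head motion), but the principle of hunting for statistics that natural video does not exhibit is the same.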

Until better technical and policy safeguards emerge, the onus lies on individuals and journalists to independently verify the authenticity of suspicious viral media. Critical thinking is crucial.
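One concrete verification step is possible whenever a source publishes a cryptographic checksum of the original file: recompute it locally and compare. A mismatch proves the copy you received was altered, though a match alone does not prove the content is genuine. The byte strings below are hypothetical placeholders for real video files.

```python
import hashlib

# Minimal integrity check: compare SHA-256 digests of an original clip and
# a reshared copy. The byte strings are stand-ins for actual video bytes.

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"frame-bytes-of-the-published-clip"   # hypothetical original
received = b"frame-bytes-of-the-viral-reupload"   # hypothetical viral copy

tampered = sha256_of(original) != sha256_of(received)  # True if bytes differ
```

Emerging provenance standards aim to make this kind of check automatic by attaching signed metadata to media at capture time.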

Conclusion

The recent viral deepfake video of Rashmika Mandanna offers a sobering look at just how convincing and potentially dangerous AI manipulations have become. As the technology proliferates across the internet, adequate protections and vigilant citizens are needed to safeguard privacy and truth.

Policymakers must enact reforms to curb deepfake abuses that spread misinformation and violate consent. We must thoughtfully navigate artificial intelligence’s capacity to reshape reality, guarding against those who would irresponsibly distort it for harmful ends.

Until solutions arise, we must instill societal resilience against deepfake deceptions. Being judicious in what we share online and questioning the veracity of viral media protects us all against these uniquely modern hazards.
