By Cam Sivesind
Fri | Dec 8, 2023 | 9:10 AM PST

A recent report revealed a disturbing new tactic in Russia's ongoing disinformation campaign targeting Ukraine. A group linked to the GRU, the Russian military intelligence service, is using images of celebrities such as Taylor Swift, Beyoncé, Kim Kardashian, Justin Bieber, and Oprah Winfrey to spread anti-Ukrainian propaganda.

These manipulated images feature fabricated quotes from celebrities, critical of Ukraine and supportive of Russia's actions. The operation, dubbed Doppelganger, aims to exploit the trust and influence of these celebrities to spread disinformation and manipulate public opinion.

Examples of the fabricated quotes

  • "Now, how long will this take? The Ukrainians behave like charlatans and we continue to pay," reads a quote attributed to Taylor Swift.
  • "Supporting Ukrainians is unacceptable. Their actions destroy lives and societies," reads a quote attributed to Oprah Winfrey.

The use of celebrities in this disinformation campaign is particularly concerning as it can reach a wider audience and appear more credible than traditional propaganda methods. This tactic is not new, but it highlights the increasing sophistication of Russia's disinformation efforts.

Col. Cedric Leighton, CNN military analyst, USAF (Ret.), and Chairman, Cedric Leighton Associates, said to SecureWorld News, "Welcome to the Brave New World of disinformation, particularly Doppelganger, which I have spoken about before."

"Celebrities are really going to have to take control of their messaging and enlist the help of specially trained IT and cybersecurity professionals to combat inauthentic content that is posted in their name on social media platforms," Col. Leighton continued. "The potential reputational damage to celebrities—or any individual or organization—who fall victim to such fakery is incalculable. Laws and court procedures must be modernized both within nations and across national boundaries to combat these malicious efforts."

Teresa Rothaar, Governance, Risk, and Compliance Analyst at Keeper Security, had this to say:

"It's difficult for anyone to stay up-to-date on all of the ways cybercriminals are worming into our lives. However, it's no surprise that nefarious actors are using social media to sway public opinion. We tend to believe what we see, which is why aesthetics often trump awareness of inaccurate representations.

Identifying disinformation on social media can be challenging; however, there are several strategies and practices that users can adopt to become more discerning consumers of information. First and foremost, ensure the source is reputable and credible. It's critical to verify any information with sources outside of social media. Always remember that images and videos can be manipulated. Use reverse image searches or check the original source of multimedia content. And finally, be wary of sensational or overly emotional language, as it may indicate an attempt to manipulate emotions rather than provide objective information."
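Rothaar's advice to check the original source of images can be partially automated with perceptual hashing, the family of techniques behind many reverse image search and near-duplicate detection tools. The sketch below is illustrative only, not any platform's actual method: it implements a bare-bones average hash (aHash) over an 8x8 grayscale grid, and the sample grids are made-up assumptions standing in for real decoded images.

```python
# Minimal average-hash (aHash) sketch for spotting altered copies of an image.
# Assumes the image has already been decoded and downscaled to an 8x8
# grayscale grid (list of lists of 0-255 values); a real pipeline would use
# an imaging library for that step.

def average_hash(grid):
    """Return a 64-bit perceptual hash: each bit is 1 if the corresponding
    pixel is brighter than the grid's mean brightness."""
    pixels = [p for row in grid for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests a near-duplicate,
    lightly edited image rather than an unrelated one."""
    return bin(h1 ^ h2).count("1")

# Hypothetical example: an "original" gradient image and a copy with one
# pixel altered, simulating a small edit such as pasted-in text.
original = [[10 * (r + c) for c in range(8)] for r in range(8)]
tampered = [row[:] for row in original]
tampered[0][0] = 255

d = hamming_distance(average_hash(original), average_hash(tampered))
print(d)  # prints 1: the tiny edit flips a single bit of the 64-bit hash
```

A small Hamming distance between two hashes is a signal that one image is a lightly modified copy of the other, which is exactly the pattern to look for when a "celebrity quote" image circulates with text overlaid on a known photo.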

Col. Leighton added: "Platforms like Meta's Facebook, Alphabet's Google, and X (formerly Twitter) must ramp up their policing of fake accounts like those used in the Doppelganger operation. If they don't, national and international legal systems must hold these platforms accountable for the spread of this type of disinformation. The legal issues involved include not only reputational damage but also national security implications of the highest order."

Impacts of the campaign

  • Erosion of trust in information: The use of fabricated quotes undermines trust in credible sources of information and makes it difficult for people to distinguish truth from fiction.
  • Manipulation of public opinion: The campaign aims to sway public opinion in favor of Russia and against Ukraine. This can have a negative impact on international support for Ukraine and its war effort.
  • Damage to celebrities' reputations: The use of celebrities' images without their consent can damage their reputations and lead to negative public perception.

"Through its Doppelganger operation, the GRU has tapped into prevalent human psychological and mental weaknesses," Col. Leighton said. "Humans are hard-wired to believe what they see and hear with their own eyes and ears. When what they see and hear is fake, it's very difficult for people to recalibrate themselves, and that makes it possible for Doppelganger-style manipulations to gain traction."

Jordan Fischer, Partner, Constangy, Brooks, Smith & Prophete, had this to say:

"With the intense evolution of AI that is readily accessible by the public, it is not a surprise that state actors, and threat actors, are going to start using this technology for nefarious purposes. Sadly, I think this is only the start," she said. "AI really changes the game with propaganda because of how realistic it can make any of these materials. And, while artists who are being misappropriated can try to fight this, the misuse often originates from countries where there is no recourse to combat it."

What can be done to combat this disinformation campaign?

  • Fact-checking and awareness campaigns: It is essential to fact-check the information circulating online and raise awareness about the dangers of disinformation.
  • Social media platforms need to take action: Social media platforms have a responsibility to remove fake content and accounts that engage in disinformation campaigns.
  • Critical thinking and media literacy: Individuals need to develop critical thinking skills and media literacy to evaluate the information they encounter online.

Richard Staynings, Teaching Professor at the University of Denver, and Chief Security Strategist at Cylera, had this to say:

"Unfortunately, this is a growing and very concerning problem. At the root of this problem are two failures: 

  1. The failure of school education systems in Western countries, especially in the United States, where critical thinking is no longer taught. As a result, Europeans and Americans grow up without the skills to consider and evaluate the origin of the information being presented to them and to discount low-reputation sources or obvious fake news sites.
  2. A lack of policing of the content propagated over the internet, and in particular on social media. Facebook, TikTok, X-Twitter, Truth Social, and others are notorious for being sewage pipes of fake information. This comes in two forms: user posts and paid advertising.

Given the number of automated social media bots, fraudsters, and fake accounts purporting to be someone they are not, or an entirely made up persona using a stolen or AI-generated image, this is becoming an endemic problem that the social media companies seem unwilling or incapable of addressing."

Staynings continued:

"Of course, a verification system that works in all countries would be problematic to set up and verify, but a U.S. government mandate to do so could not be overlooked if penalties were attached. The drawback of this is that individuals could no longer post anonymously about government corruption, human rights abuses, etc. from dictatorial or oppressive regimes like Hong Kong, Russia, or elsewhere. The most totalitarian dictatorships like China and North Korea don't even allow their subjugated citizens access to global social media platforms, unless they are agents of the state pushing propaganda and fake news.

The workaround may be for the U.S. government to mandate the protection of identities so that local governments cannot obtain the true identity of users, while that information is available to the platforms only to prevent fraud. When fraudulent activity is spotted or users attempt to post fake information, the platforms could warn, then suspend or disable the accounts of those who continue to do so. This happens at present, but when a malicious user's account is deleted, they simply set up another, often from the same IP address.

User verification will not be an easy thing to push through in any Western country, and you can be sure that the Russian, Chinese, and Iranian troll farms and their amplification bots will campaign vigorously against such controls. Government needs to take the lead here if it is to solve this problem. Many platforms are already implementing voluntary user verification programs, so this may be a gradual process over several years to fully implement and eventually mandate."

"A couple of years ago, Vladimir Putin told Russian high school students that the country that controls the evolving discipline of AI will control the world," Col. Leighton said. "Doppelganger is simply the manifestation of Putin's ambitions on a global scale. Putin is using the GRU to help him realize his goal of dominating the AI sphere. But, he's not just dominating AI; he wants to dominate what people think and feel all around the world. That's what makes operations like Doppelganger so profoundly dangerous."

The other more pressing area of social media concern is paid advertising, Staynings said.

"This WIRED article discusses advertisements from Kremlin operatives being placed on Facebook, but recently X has seen a heap of Neo-Nazi and anti-Semitic advertisements placed on its pages, so many, in fact, that numerous corporate advertisers have pulled out of the platform," Staynings said. "While some of these may be deliberately instigated by Kremlin or Chinese state groups to polarize the audience of these platforms, others from extreme right-wing or extreme left-wing groups are pushing their political messages using fake or misleading information."

"The problem of misleading paid advertisements is easy to fix. Make the platforms liable for the advertising they accept onto their platforms. That will force them to do a better job of verifying advertising content before accepting it and taking payment," Staynings continued. "I would imagine that VIPs and stars will happily sue these platforms for multiple millions for illegally using their image in a paid commercial, or for suggesting that they support or have said unsubstantiated things. The government should also regulate and fine these platforms for propagating un-policed content. But the amounts need to be significant to make a difference and to force the type of verification systems that are needed; otherwise, the billionaire owners of these platforms will just pay out any fine from petty cash as they do currently."
