Not that I want to weigh in on the silly madness surrounding Taylor Swift (the complaints that she gets too much air time on NFL game coverage and is a distraction, that she's empowering Democratic voters to register and vote, that she makes too much money, the gushing over how nice she is and how much money she gives to charity and to her concert tour road crew), but there is certainly a cybersecurity angle here that can't be denied.
Enter the age of the deepfake, where artificial intelligence becomes a weapon of misinformation, and celebrities like Taylor Swift are just pixels away from a digital doppelganger nightmare.
Threat actors leverage powerful deepfake algorithms, trained on hours of video and audio footage, to seamlessly superimpose a celebrity's likeness onto another person's body, or even to generate entirely synthetic speech. These hyper-realistic creations can then be used for harassment, impersonation, fraud, and disinformation.
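For readers curious how defenders spot crude composites, one classic image-forensics trick is error level analysis (ELA): re-save a JPEG at a known quality and difference it against the original, since regions pasted in after the photo's last save often recompress differently. The sketch below is a minimal illustration using the Pillow library, not a production detector; the file paths are placeholders, and ELA is only a heuristic that polished generative fakes can evade.

```python
import io

from PIL import Image, ImageChops  # Pillow: pip install Pillow


def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Return an amplified difference image highlighting recompression artifacts.

    Regions edited after an image's last save often show a different
    error level than the rest of the picture when it is re-saved as JPEG.
    """
    original = Image.open(path).convert("RGB")

    # Re-save the image at a known JPEG quality, entirely in memory.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Pixel-wise difference between the original and the re-saved copy.
    diff = ImageChops.difference(original, resaved)

    # Amplify the (usually faint) differences so they are visible to the eye.
    extrema = diff.getextrema()  # [(min, max), ...] per channel
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    scale = 255.0 / max_diff
    return diff.point(lambda px: px * scale)


if __name__ == "__main__":
    # "suspect.jpg" is a placeholder path for illustration only.
    error_level_analysis("suspect.jpg").save("suspect_ela.png")
```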
The Cyber Helpline, a movement by the cybersecurity community to step in and fill the gap in support for victims of cybercrime, digital fraud, and online harm, had this to say about the recent sexually explicit deepfake images targeting Swift:
"We are saddened to hear about what has happened to Taylor Swift over the last few days. No one should have to suffer the consequences of technology being used to objectify and harm them this way.
That's why we have launched the Global Online Harms Alliance, a network of organizations that work together to mitigate this type of harm globally. Ultimately, these crimes do not have borders, and our approach to resolving it needs to reflect that."
[RELATED: What Is The Cyber Helpline?]
The consequences of deepfake misuse are far-reaching, from reputational damage and emotional harm to financial fraud and the erosion of trust in what we see and hear online.
The fight against weaponized deepfakes is multi-pronged, combining detection technology, platform moderation policies, media literacy, and legislation.
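On the detection-technology prong, most platform-scale approaches boil down to a trained classifier scoring images or video frames as likely real or likely synthetic. Here is a minimal sketch using the Hugging Face transformers pipeline; the model name is a hypothetical placeholder (any image classifier fine-tuned on real-versus-synthetic face data would slot in), and a real deployment would combine such scores with provenance signals rather than trust a single model.

```python
# A hypothetical sketch of ML-based deepfake screening.
# Requires: pip install transformers torch pillow
from transformers import pipeline

# "example-org/deepfake-detector" is a placeholder model name, not a real
# published checkpoint; substitute any image classifier trained to
# distinguish real from AI-generated faces.
detector = pipeline("image-classification", model="example-org/deepfake-detector")


def screen_image(path: str, threshold: float = 0.8) -> bool:
    """Return True if the classifier flags the image as likely synthetic."""
    # The pipeline returns a list like [{"label": ..., "score": ...}, ...].
    for prediction in detector(path):
        if prediction["label"].lower() in {"fake", "synthetic", "ai-generated"}:
            return prediction["score"] >= threshold
    return False


if __name__ == "__main__":
    # "suspect.jpg" is a placeholder path for illustration only.
    print("Likely deepfake" if screen_image("suspect.jpg") else "No flag raised")
```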
The Deepfakes Accountability Act, currently proposed in the U.S. Congress, aims to tackle the growing threat of nonconsensual, sexually explicit deepfakes. In short, it would require AI-generated content to carry clear disclosures and digital watermarks, and it would give victims legal recourse against the creators of malicious deepfakes. The bill is still under development and faces debate over its scope and its potential implications for freedom of expression.
The case of Taylor Swift, targeted by malicious deepfakes, serves as a stark reminder of the vulnerability of our digital identities. It's a call to action for a collective effort—tech companies, policymakers, and the public—to work together and ensure that AI, instead of being a tool of deception, becomes a force for safeguarding truth and trust in the digital age.
Another pop culture figure, Colin Cowherd, host of the sports talk show The Herd on Fox Sports, weighed in yesterday on the uproar over Swift supposedly getting too much attention from the broadcast networks. Essentially, Cowherd said everyone needs to cool their jets: the numbers show Swift gets an average of 25 seconds of air time during a 3.5-hour NFL game broadcast, and the men complaining (probably from their moms' basements) should focus on themselves, not others.
In a recent interview with NBC, Microsoft CEO Satya Nadella had this to say about deepfakes, particularly the sexually explicit AI-generated posts targeting Swift:
"First of all, absolutely this is alarming and terrible. And so, therefore, yes, we have to act, and quite frankly, all of us in the tech platform, irrespective of what your standing on any particular issue is—I think we all benefit when the online world is a safe world."
Nadella added, "So I don't think anyone would want an online world that is completely not safe for both content creators and content consumers. So therefore, I think it behooves us to move fast on this."