We've been reporting on this topic for a while.
Machine learning and artificial intelligence have created the ability to fake real life.
One of the best-known examples is the fake Obama video that looked and sounded convincingly like President Obama but was entirely synthetic.
Now a bipartisan group in Congress has written U.S. Director of National Intelligence Dan Coats with a list of questions that can be summed up like this: What are intelligence agencies doing about this threat?
Here is an excerpt from the letter:
"Forged videos, audio or images could be used to target individuals for blackmail or other nefarious purposes. Of greater concern for national security, they could also be used by foreign or domestic actors to spread misinformation."
Governments and certainly sophisticated crime groups must be salivating over this technology.
There is no disputing the technology will be powerful, particularly until we learn how to spot and debunk these deepfakes in video, audio, and images.
A recent artificial intelligence study at Oxford University puts it like this:
"There is no obvious reason why the outputs of these systems could not become indistinguishable from genuine recordings, in the absence of specially designed authentication measures. Such systems would, in turn, open up new methods of spreading disinformation and impersonating others."
Here is the letter from Congress to U.S. Intelligence agencies on machine learning and deepfakes.
Congress is asking for a response by December 14, 2018.
By the way, this topic has come up at SecureWorld 2018 cybersecurity conferences, and it fits squarely with this year's theme: Fast Forward: Predicting and Preparing for Our Cyber Future.
Let's hope we are prepared for this one.