Microsoft Declares War On Deepfakes As US Election Looms

Microsoft is targeting deepfaked content used to manipulate public beliefs and opinions, the company said in a statement today. This content is created with artificial intelligence technologies that are increasingly accessible to the average person and capable of generating media realistic enough to be hard to spot with the naked eye.


Microsoft previously launched its Defending Democracy program, which is designed to counter disinformation spread across social media to influence the outcome of elections. One particularly troublesome form of disinformation is the 'deepfake,' which can range from a still image generated or modified by AI to a video that seemingly shows a politician or other individual saying or doing things that never happened.

Though the nature of artificial intelligence makes a long-term solution to this problem difficult, Microsoft says it is focusing for now on the next couple of months in an effort to help protect the integrity of the US presidential election. That's where the newly announced Microsoft Video Authenticator detection tool comes in.


As its name suggests, Video Authenticator can analyze media, both images and video, and produce a 'confidence score' indicating the likelihood that the content has been artificially manipulated. Users are presented with a real-time percentage for each frame of the content, according to Microsoft, which says the technology works by spotting subtle fading or greyscale elements and the blending boundary where a deepfake merges with the original image.
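To make the per-frame idea concrete, here's a minimal sketch of what frame-by-frame scoring looks like in code. It is only an illustration: the edge-energy heuristic below is a hypothetical stand-in for Microsoft's trained model, and the input file name is a placeholder.

```python
# Illustrative per-frame scoring loop. This is NOT Microsoft's Video
# Authenticator: the Laplacian edge-energy heuristic stands in for the
# trained model that actually hunts for blending artifacts.

import cv2           # pip install opencv-python
import numpy as np

def score_frame(frame: np.ndarray) -> float:
    """Return a toy manipulation score in [0, 1] for a single frame."""
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # High-frequency edge energy, loosely echoing the 'blending boundary'
    # cue the article mentions; a real detector uses a learned model here.
    energy = float(np.abs(cv2.Laplacian(grey, cv2.CV_64F)).mean())
    return min(energy / 100.0, 1.0)   # crude normalization to a 0-1 range

def score_video(path: str) -> None:
    cap = cv2.VideoCapture(path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        print(f"frame {frame_idx}: manipulation score {score_frame(frame):.0%}")
        frame_idx += 1
    cap.release()

score_video("input.mp4")   # hypothetical input file
```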

The tool can detect elements that a human viewer may miss with the naked eye, making it a useful technology against election interference. Microsoft says it trained the tool on the public FaceForensics++ dataset and then tested it against the DeepFake Detection Challenge Dataset.
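Training on one public dataset and testing on a different one is a standard way to check that a detector generalizes beyond the footage it learned from. The sketch below shows that evaluation pattern with scikit-learn; the random feature vectors are placeholders for the real face-crop features one would extract from either dataset.

```python
# The train-on-one-dataset, test-on-another pattern, sketched with
# scikit-learn. The random features below are placeholders for features
# extracted from FaceForensics++ (training) and the DeepFake Detection
# Challenge Dataset (testing).

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder per-frame features and real/fake labels.
X_train, y_train = rng.normal(size=(1000, 32)), rng.integers(0, 2, 1000)
X_test, y_test = rng.normal(size=(400, 32)), rng.integers(0, 2, 400)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]   # per-frame confidence scores

# With random placeholder features this hovers near 0.5 (chance);
# cross-dataset AUC is the number that shows whether a detector generalizes.
print("cross-dataset AUC:", roc_auc_score(y_test, scores))
```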

As for the long-term reality of such technologies, Microsoft explains:

We expect that methods for generating synthetic media will continue to grow in sophistication. As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media. There are few tools today to help assure readers that the media they're seeing online came from a trusted source and that it wasn't altered.
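One common building block for the kind of authenticity certification Microsoft describes is to hash a media file and publish a digital signature over the hash, so any later alteration is detectable. The sketch below shows that generic pattern; it is not a description of Microsoft's own technology, and the file name is a placeholder.

```python
# Generic hash-and-sign provenance sketch; this is NOT Microsoft's announced
# technology, just one common building block for certifying that a file
# hasn't been altered since a trusted source published it.

import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sha256_file(path: str) -> bytes:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

# The publisher signs the media's hash with a private key...
key = Ed25519PrivateKey.generate()
digest = sha256_file("report_video.mp4")   # hypothetical media file
signature = key.sign(digest)

# ...and readers verify with the matching public key. Any alteration to the
# file changes the hash, and verify() raises InvalidSignature.
key.public_key().verify(signature, digest)
print("media matches the publisher's signature")
```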
