With fewer than 100 days before the U.S. presidential election, Microsoft announced it has developed a new way to combat disinformation on the internet, including a new system of detecting deepfakes — synthetic audio or video that mimics a real recording.
Microsoft said Tuesday it is launching the “Microsoft Video Authenticator,” which it says can analyze photos and videos and provide a confidence score indicating whether the media has been manipulated. The authenticator will either alert people when an image is likely fake or assure them when it’s authentic, Microsoft said.
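Microsoft has not published the tool’s internals, but the reported behavior — scoring media and returning a confidence value that something was manipulated — can be illustrated with a minimal sketch. Everything below (the per-frame scoring, the function and class names, the 0.5 threshold) is a hypothetical stand-in, not Microsoft’s actual model or API.

```python
# Illustrative sketch only: the scoring function here is a placeholder, not the
# Video Authenticator. It shows the general shape of a detector that assigns a
# manipulation-confidence score to each piece of media and rolls that up into a verdict.

from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class FrameScore:
    frame_index: int
    manipulation_confidence: float  # 0.0 = likely authentic, 1.0 = likely manipulated


def score_frame(frame_bytes: bytes) -> float:
    """Stand-in for a trained detector (hypothetical).

    A real system would run a learned model over visual artifacts left by
    synthesis; here we simply return a placeholder value.
    """
    return 0.0  # placeholder score


def authenticate_video(frames: Iterable[bytes], threshold: float = 0.5) -> dict:
    """Score each frame and summarize whether the video looks manipulated."""
    scores: List[FrameScore] = [
        FrameScore(i, score_frame(frame)) for i, frame in enumerate(frames)
    ]
    peak = max((s.manipulation_confidence for s in scores), default=0.0)
    return {
        "frame_scores": scores,
        "overall_confidence": peak,
        "verdict": "likely manipulated" if peak >= threshold else "likely authentic",
    }


if __name__ == "__main__":
    sample_frames = [b"frame-0", b"frame-1", b"frame-2"]  # stand-in frame data
    print(authenticate_video(sample_frames))
```

The key design point this sketch mirrors is that the output is a confidence score rather than a binary label, which lets the tool “alert” on high scores and “assure” on low ones while leaving borderline cases to human judgment.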
“The fact that they [the deepfakes] are generated by A.I. that can continue to learn makes it inevitable that they will beat conventional detection technology,” the company said in a statement. “However, in the short run, such as the upcoming U.S. election, advanced detection technologies can be a useful tool to help discerning users identify deepfakes.”
Microsoft said the new software was built in partnership with the Defending Democracy Program, which fights disinformation, protects voting, and secures campaigns.
Tech and privacy advocates have been sounding the alarm about the rise of deepfakes and their political implications for several years, as deepfakes have become noticeably harder to detect. Some companies have even started developing deepfake services, ostensibly for entertainment purposes.
In February, Twitter announced it would ban media that was “synthetic or fake,” and Facebook made a similar move in January.