As this fall’s presidential election grew closer, experts warned that videos generated by artificial intelligence could threaten our democracy. Experts told Congress that so-called deepfake clips could hurt a presidential candidate by falsely showing them doing or saying something they did not. As a precautionary measure, Facebook and Twitter added new rules banning malicious deepfakes and promised swift action should any appear.

The election’s now over, and while plenty of disinformation circulated online, the democracy-destroying deepfakes never showed up. Fact-checkers unmasked videos that had been deceptively edited with conventional tools, but not any AI-generated propaganda. Facebook and Twitter don’t appear to have needed to activate their new deepfake moderation rules.

Whither the anti-democratic deepfakes of 2020? In an election full of potent ways to spread disinformation—like simply sharing the president’s tweets—mastering AI algorithms may have been the least attractive option.

But that doesn’t mean we can ignore deepfakes, or future warnings about their dangers. The rapid pace of AI research will make deepfake videos more convincing and easier to create. And deepfake-detection technology is less advanced than deepfake-generation technology. When WIRED tested out a prototype detector powered by algorithms from Microsoft and leading academic labs, it judged deepfakes of Donald Trump and North Korean leader Kim Jong-un to be real—and some real videos to be fake.

Deepfakes may have ghosted the 2020 election, but the threat will still haunt future ones.

Tom Simonite | Senior Writer, WIRED