A new publication provides a primer on deepfakes and forecasts their potential future role in online disinformation campaigns. It concludes that while the threat from deepfakes is real, the risks are narrower than frequently portrayed. It also argues that deepfakes may serve as a distraction, drawing attention away from the deeper issues that must be resolved to confront online disinformation and misinformation.
Based on this analysis, the publication offers four proposals that would make a significant difference in combating the threat posed by deepfakes:
- Bridge Media Forensics and Strategic Communications
- Accelerate Detection Democratization
- Invest in Research on the Cognitive Dimension of Deepfakes
- Invest in Next Generation Detection Techniques
Even in the midst of these efforts, it is important that deepfakes not become a distraction. Faked images and video are just one tool among many in the hands of malicious actors. Overinvestment in countering this cutting-edge tool may simply push media manipulation campaigns toward alternative tools that are equally effective at eroding trust in the overall information environment. Ultimately, resilience against online disinformation will depend not only on the ability to harness technology, but on the ability to harness social and psychological forces as well.
The publication's author is Tim Hwang, Director of the Harvard-MIT Ethics and Governance of AI Initiative, a philanthropic project working to ensure that machine learning and autonomous technologies are researched, developed, and deployed in the public interest.