The ability to morph, alter, and recomposite images produces some spooky and rather disturbing results. Image by Edward Webb (CC BY-SA 2.0)
How concerned should we be about deepfakes? Is this valuable technology or something of great concern for trust in public life? To understand the complexity, Digital Journal spoke with AI expert David Britton, Experian’s VP of Strategy, Global ID & Fraud, about the benefits and risks of deepfake technology. Britton explains how bad actors can deceive or manipulate consumers and businesses – and what both can do to mitigate the dangers.
Digital Journal: What exactly are “deepfakes”, and why are they concerning?
David Britton: “Deepfakes” – a portmanteau of “deep learning” and “fake” – are artificially created images, video, and audio that are designed to emulate real human characteristics. Oftentimes they’re used to replace the likeness of one (real) person with that of another, creating either artificial speech or imagery. Would-be fraudsters can then leverage such materials for their purposes, putting businesses, governments, and/or individuals at risk.
DJ: What are some of the problematic ways deepfakes can be utilized?
Britton: In the case of businesses and governments, deepfakes can be used to exploit points of vulnerability that put organizations at risk – for example, by spreading misinformation or by running sophisticated social engineering schemes through remote channels. In the case of consumers, the risks include being duped into scams soliciting funds or personal information – often via voice cloning used to bypass biometric systems to which nefarious actors wouldn’t otherwise have access.
DJ: What can individuals do to mitigate these types of risks?
Britton: The most important thing businesses and individuals can do is maintain constant vigilance. Fraudsters are relentless and always at work, quick to jump on any loophole or weak spot. For consumers, this means staying on top of personal, potentially sensitive information. It also pays to be attentive to suspicious voice messages or calls – oftentimes voice deepfakes sound somewhat familiar but feel slightly off – especially if the message in question solicits personal information or cash.
It’s also important to apply some scrutiny to video content within social media from leaders or trusted personalities; does the message sound overly alarmist or out of character for that individual? If so, trust your instincts and verify those statements via alternate sources, before sharing and propagating potentially harmful and fake content.
DJ: Are there specific steps that businesses can take to thwart deepfakes?
Britton: In the case of businesses, a layered strategy is the best defense. The best way to prevent the propagation of deepfakes is to make it impossible for fraudsters or attackers to gain access to the platform used to distribute that content.
When allowing users to open or access accounts, it’s important to layer in identification checks such as identity verification, device ID, and behavioral analytics – all powerful tools. Businesses can also fight fire with fire, leveraging the same technology at the fraudster’s disposal – especially machine learning and advanced analytics – to counter such attacks, as Experian already does in its fight against fraud.
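To illustrate what a layered approach can look like in practice, here is a minimal sketch of combining several independent signals into one risk decision. This is purely illustrative: the function names, weights, and threshold are hypothetical assumptions, not Experian’s actual system or any vendor’s real API.

```python
# Hypothetical sketch of a layered account-access check. Each signal
# (identity verification, device ID, behavioral analytics) contributes
# independently to a risk score, so an attacker must defeat several
# layers at once. All names, weights, and thresholds are illustrative.

def layered_risk_score(identity_verified: bool,
                       device_known: bool,
                       behavior_anomaly: float) -> float:
    """Return a 0..1 risk score built from three independent signals."""
    score = 0.0
    if not identity_verified:
        score += 0.4          # failed document / identity check
    if not device_known:
        score += 0.3          # unrecognized device fingerprint
    # behavioral anomaly (e.g. typing or navigation patterns), clamped to 0..1
    score += 0.3 * min(max(behavior_anomaly, 0.0), 1.0)
    return score

def allow_access(score: float, threshold: float = 0.5) -> bool:
    """Low-risk sessions proceed; high-risk ones are stepped up or blocked."""
    return score < threshold

# A verified identity on a known device with normal behavior passes:
print(allow_access(layered_risk_score(True, True, 0.1)))   # True
# An unknown device with highly anomalous behavior is flagged:
print(allow_access(layered_risk_score(True, False, 0.9)))  # False
```

The point of the layering is that no single check is decisive on its own: a cloned voice or spoofed document only moves one component of the score, while the other layers still have to be defeated.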
DJ: What does the future look like in the realm of deepfakes?
Britton: Technology is constantly evolving, and so the nature of deepfakes will continue to shift in the years ahead. But while those nuances may change, the best practices to offset their impact should not. By maintaining vigilance and staying aware of emerging threats, both businesses and consumers can stay one step ahead of the risks and continually strengthen their own defenses.