Digital truth in the age of the fake

February 2020  |  FEATURE  |  RISK MANAGEMENT

Financier Worldwide Magazine, February 2020 issue


Deepfakes – fake videos or audio recordings that purport to be the real thing – are generally not too difficult to spot. Tell-tale signs of digital doctoring, such as distorted backgrounds, face discolouration and badly synchronised sound, tend to give the game away, even to the technologically uninitiated.

As the technologies inevitably improve, however, fakers will have the means to do more than put celebrities’ (usually female) faces on porn stars’ bodies and make politicians say or do amusing things. Increasingly, the misuse of deepfakes is moving beyond mere titillation toward the far more malign.

Indeed, digital fakery has the scope to manipulate public sentiment toward business and government, with profound implications for national security. The US presidential election later this year is considered a particular target for digital forgeries. Distinguishing fact from fake is therefore key, as the consequences of such deception are potentially severe.

“Artificial intelligence (AI)-generated synthetic media tools provide an increasingly accessible means for bad actors to enhance social engineering and cyber attacks on businesses,” says Henry Ajder, head of communications and research analysis at Deeptrace. “Numerous threats emerge from the ability to generate realistic synthetic media that impersonates an individual’s voice or physical appearance in a photo or video.

“These tools can be used in a variety of ways, including committing fraud by synthetically impersonating a chief executive or a client over the phone, or on a platform such as Skype, to extract information or money,” he continues. “Synthetic media-enabled reputation attacks could also be used to cause a drop in a company’s stock price by falsely depicting a company’s executives or branding in a damaging scenario or context.”

Detection

The commodification of digital manipulation tools means it is only a matter of time before they are widespread and accessible to the general public. To stem the tide of misinformation, companies need to boost their capacity to detect manipulated media.

“We hope companies will rise to this challenge and develop detection and protection technologies, with services to monitor or remove deepfakes,” says Brenda Leong, senior counsel and director of artificial intelligence and ethics at the Future of Privacy Forum. “In the same way that companies have to be aware of phishing and other social hacking threats, along with systemic cyber security for their data systems, they will have to understand the variety and formats of artificial or edited audio and video files.”

In the view of Patrick Hillmann, executive vice president, crisis & risk management at Edelman, the detection methods currently available have proved incapable of keeping up with the pace of deepfakes. “Deepfake output quality has improved rapidly because the machine learning models the technology is built on are evolving rapidly as well,” he says. “The prevailing opinion on what must be done to combat this trend is ‘what AI has broken, AI will have to fix’. Given that most organisations fall flat on their faces when they try to use memes on social media, we would suggest steering clear of deepfakes, even with the best intentions.”

Undetectable?

Going forward, while a deepfake may currently be easy for the trained eye to spot, increasingly sophisticated technologies may lead to a scenario where digital chicanery is undetectable, even for a digital expert.

“We are already at the point where the naked eye – or ear – cannot always be trusted to identify manipulated files,” suggests Ms Leong. “Not all are based on deepfake tech. Even traditional editing techniques can provide very sophisticated versions of video or audio files that are more than sufficient to fool the casual, or even a fairly aware, observer. And as with much of the emerging technology in the world today, ethical questions abound about the appropriate use cases and value of such systems.”

For Mr Hillmann, given the weaponising of social media over the past decade, the likelihood is that deepfakes will become the next generation of misinformation tools that nation states and criminal organisations wield online. “We have already seen how susceptible more educated populations are to this type of sophistry,” he says. “Imagine the chaos that could be created in less developed regions around the world with this technology. However, as deepfake technology becomes more pervasive, audiences will eventually adapt and learn to separate what is real from fake, much like we do with computer-generated imagery in modern-day movies.”

In the age of the fake, every organisation, particularly those operating in sensitive sectors and industries, needs to strive for digital truth and get real about the threat posed by deepfake technologies.

© Financier Worldwide


BY Fraser Tennant
