Deepfake: British firm Arup falls prey to $25M scam. How can you protect yourself?


British multinational design and engineering company Arup, renowned for iconic buildings like the Sydney Opera House, confirmed it was targeted by a deepfake scam.

This sophisticated fraud resulted in one of its Hong Kong employees transferring $25 million to scammers.

Arup notified Hong Kong police in January about the incident, confirming that fake voices and images were used.

The scam involved a finance worker who was tricked into attending a video call with people he believed were the chief financial officer and other staff members, all of whom were deepfake recreations.

Despite initial suspicions of a phishing email, the realistic appearance and voices of his supposed colleagues led the employee to proceed with the transactions, totaling 200 million Hong Kong dollars ($25.6 million) across 15 transfers.

The incident underscores the increasing sophistication of deepfake technology.

In a statement, the company said: “Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes.”

The number and sophistication of such attacks have been rising sharply, posing significant challenges for companies worldwide.

Authorities globally are growing concerned about the malicious uses of deepfake technology.

In an internal memo, Arup’s East Asia regional chairman, Michael Kwok, emphasized the increasing frequency and sophistication of these attacks, urging employees to stay informed and alert to spot different scamming techniques.

Despite the significant financial loss, Arup assured that its financial stability and business operations were unaffected, and none of its internal systems were compromised. The company continues to work with authorities, and the investigation is ongoing.

This high-profile incident highlights the urgent need for businesses to enhance their cybersecurity measures to combat the growing threat of deepfake technology and other sophisticated scams.

A deepfake is content generated using deep learning techniques that appears real but is fabricated. Artificial intelligence (AI) used to create deepfakes typically employs generative models, such as Generative Adversarial Networks (GANs) or auto-encoders.
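To illustrate the generative-model idea in miniature, the sketch below trains a tiny linear autoencoder in plain NumPy: it learns to compress toy 8-dimensional samples through a 2-unit bottleneck and reconstruct them from that compressed code. The dataset, sizes, and hyperparameters are all illustrative assumptions, not any particular deepfake system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 200 samples of 8-dimensional data that actually lie
# on a 2-dimensional subspace, so a 2-unit bottleneck can reconstruct them.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 8))
data = latent @ mixing

# Linear autoencoder: encoder W_e (8 -> 2), decoder W_d (2 -> 8).
W_e = rng.normal(scale=0.1, size=(8, 2))
W_d = rng.normal(scale=0.1, size=(2, 8))
lr = 0.01

for _ in range(1000):
    code = data @ W_e            # encode: compress to 2 dimensions
    recon = code @ W_d           # decode: reconstruct 8 dimensions
    err = recon - data           # reconstruction error
    # Gradient descent on the mean squared reconstruction error.
    grad_Wd = code.T @ err / len(data)
    grad_We = data.T @ (err @ W_d.T) / len(data)
    W_d -= lr * grad_Wd
    W_e -= lr * grad_We

mse = float(np.mean((data @ W_e @ W_d - data) ** 2))
print(f"final reconstruction MSE: {mse:.4f}")
```

Real deepfake models replace these linear maps with deep convolutional networks and far larger datasets, but the principle is the same: learn a compact representation good enough to reproduce the training data convincingly.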

Deepfakes can be videos, audio recordings, or images depicting individuals or groups doing or saying things they never did.

To produce convincing content, AI must train on large datasets to recognize and replicate natural patterns.

Deepfake technology, while innovative, opens up dangerous opportunities for illegal use, including identity theft, evidence forging, disinformation, slander, and biometric security bypass.

Fraudsters often leverage the depicted person’s authority or personal connection to their targets.

Deepfake tools can produce video, audio, or image content, delivered either as recorded media or in real-time streams. These formats can be encountered in various scenarios, from social media posts to phone calls and video conferences.

Face swapping: This application replaces the facial features of a target person with fake features, often of another person.

Techniques like facial landmark detection and manipulation make the blending seamless and hard to spot, especially when the viewer is caught unaware.
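To illustrate the alignment step that underlies face swapping, here is a minimal Procrustes-style sketch in NumPy: given two sets of corresponding face landmarks, it estimates the scale, rotation, and translation mapping one onto the other. The five-point landmarks are hypothetical; real pipelines use dense landmark detectors and blend pixels on top of this alignment.

```python
import numpy as np

def align_landmarks(src, dst):
    """Estimate a similarity transform (scale, rotation, translation)
    mapping source landmarks onto target landmarks (Procrustes analysis).
    src, dst: (N, 2) arrays of corresponding landmark coordinates."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    # Optimal rotation via SVD of the cross-covariance matrix.
    U, S, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:     # correct for an accidental reflection
        Vt[-1] *= -1
        S[-1] *= -1
        R = (U @ Vt).T
    scale = S.sum() / (src_c ** 2).sum()
    t = dst_mean - scale * src_mean @ R.T
    return scale, R, t

# Hypothetical 5-point landmarks (eyes, nose tip, mouth corners).
src = np.array([[30., 30.], [70., 30.], [50., 50.], [35., 70.], [65., 70.]])
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = 1.5 * src @ R_true.T + np.array([10., -5.])  # rotated, scaled, shifted

scale, R, t = align_landmarks(src, dst)
aligned = scale * src @ R.T + t
print(np.max(np.abs(aligned - dst)))  # near zero: transform recovered
```

Once the source face is warped into the target's pose like this, the remaining work is seamlessly blending textures and lighting, which is where detection becomes hard.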

Voice cloning: This technique replicates an individual’s voice. High-quality audio data from recordings of the target person speaking in various contexts is needed to train a voice cloning model.

Real-time video deepfakes generate manipulated video content instantly during live streams and video calls.

Voice cloning and face swapping are frequently used to create a convincing fake environment. Deepfake generation software can integrate with streaming platforms and video conferencing tools in several ways:

A separate application captures, processes, and sends the manipulated video feed to the conferencing software.

Direct integration into video conferencing software as an optional feature or plugin.

Using a virtual camera to intercept the video feed from the physical camera and output the manipulated feed.
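The virtual-camera pattern in the last item can be sketched abstractly: a wrapper sits between the physical camera and the conferencing app, transforming each frame in transit. Everything below is a simplified stand-in (real virtual cameras are OS-level drivers, and the per-frame transform would be the deepfake model itself).

```python
import numpy as np

class VirtualCamera:
    """Sketch of the virtual-camera pattern: intercepts frames from a
    physical source and applies a transform before the conferencing
    software ever sees them. Both endpoints here are stand-ins."""
    def __init__(self, physical_source, transform):
        self.physical_source = physical_source   # iterator of raw frames
        self.transform = transform               # per-frame manipulation

    def read(self):
        frame = next(self.physical_source)       # intercept the raw feed
        return self.transform(frame)             # emit the manipulated feed

def fake_physical_camera(n_frames, shape=(4, 4)):
    """Stand-in for a webcam: yields blank uint8 frames."""
    for _ in range(n_frames):
        yield np.zeros(shape, dtype=np.uint8)

# A stand-in "manipulation": simple brightening. A deepfake tool would
# run face synthesis at exactly this point in the pipeline instead.
cam = VirtualCamera(fake_physical_camera(3), transform=lambda f: f + 100)
frames = [cam.read() for _ in range(3)]
print(frames[0].max())  # 100
```

The design point is that the conferencing app only ever calls `read()`, so it cannot tell a manipulated feed from a genuine one, which is precisely what makes this integration route attractive to fraudsters.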

As deepfake technology advances, it is crucial to protect yourself and your organization from fraud. Here are some ways to safeguard against deepfakes:

Watch out for red flags: Look for unrealistic facial expressions or movements, inconsistencies in lighting and shadows, unnatural head or body movements, and mismatched audio and video quality.

Be proactive if suspicious: Engage in casual conversation to catch a faker off guard. Ask the person to share their screen or confirm their identity by providing exclusive information or sending a confirmation message through a different channel.

Set up a passphrase: Establish a password or passphrase for sensitive topics with colleagues and family members. This method is effective in voice, video, and text communication.
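The passphrase idea can be hardened so the secret is never spoken aloud: instead of saying the phrase (which a fraudster could then reuse), the parties exchange a fresh challenge and an HMAC response derived from it. Below is a sketch using Python's standard hmac and secrets modules; the passphrase and function names are illustrative assumptions.

```python
import hmac
import hashlib
import secrets

# Assumption: a passphrase agreed in person beforehand, never sent online.
SHARED_PASSPHRASE = b"example passphrase agreed in person"

def make_challenge():
    # The verifier sends a fresh random nonce over the suspect channel.
    return secrets.token_hex(16)

def respond(challenge, passphrase=SHARED_PASSPHRASE):
    # The genuine colleague answers with an HMAC of the challenge;
    # the passphrase itself is never transmitted or spoken aloud.
    return hmac.new(passphrase, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge, response, passphrase=SHARED_PASSPHRASE):
    expected = hmac.new(passphrase, challenge.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
print(verify(challenge, respond(challenge)))                  # True
print(verify(challenge, respond(challenge, b"wrong phrase"))) # False
```

Because each challenge is random and used once, a deepfake caller who records a previous exchange cannot replay the answer, unlike a spoken passphrase.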

Deepfake technology presents significant risks that require vigilance and proactive measures to mitigate.

By understanding the types of deepfakes and implementing strategies to identify and counteract them, individuals and organizations can better protect themselves from potential fraud.

As generative AI continues to develop, staying informed and prepared is crucial in safeguarding against the growing threat of deepfakes.
