How Crooks Will Use Deepfakes to Scam Your Biz
Reading Time: 3 minutes

Crooks will use deepfakes to scam your business, as more and more of the materials and tools needed to make deepfake videos, from source code and publicly available images to account authentication bypass services, become easy to obtain. Anyone can lay their hands on these tools on the public internet and in underground forums.

Cybercriminals are using deepfakes to make it appear as though public figures, security specialists, or Elon Musk are endorsing or recommending specific products. Deepfake technology itself is an application of artificial intelligence.

Targeted deepfake attacks are significantly altering the security landscape. Hackers can use them to alter images and video, changing the perception of reality. Trend Micro researchers Vladimir Kropotov, Fyodor Yarochkin, Craig Gibson, and Stephen Hilt have published a report warning of the disruptions targeted deepfakes could cause in business and politics.

Criminals are creating fake personas to scam existing employees or to impersonate executives on video calls.

The FBI has reported an increase in complaints relating to web-based software that creates deepfake videos.

Once hired for a job under a fake identity, deepfake actors can use the network access granted to them to explore IT assets, steal corporate data, deliver ransomware, or worse.

Last month, a Binance PR executive said that crooks had been using deepfake technology to scam crypto projects by creating a "hologram" of him on Zoom calls.

Patrick Hillmann, chief communications officer at the crypto hyper-mart, claims that a sophisticated hacking team used archived video footage to create a deepfake of him. The deepfake was convincing enough to fool members of the crypto community.

The Binance deepfake is notable, but a bigger concern is how the technique can be used to get around banking verification. A Trend Micro researcher says that since 2021 there have been plenty of deepfake use cases discussed in underground forums, where users share them as a way to defeat banking and digital finance identity checks.

Criminals may already have the identity documents needed to open a new account in a victim's name, but they may also need a video of that person in order to use them, and a deepfake can supply it.

Deepfake production tools are now popular on the internet, published openly on GitHub and accessible to the public through services such as the Telegram bot RoundDFbot.

How can you protect yourself against deepfakes?

Trend Micro expects deepfakes to feature in more attack methods and scams. Scammers have found that they can impersonate executives to request money transfers in financial scams, a tactic that has proven very successful even without fake video; deepfakes stand to make it more convincing.

AI technology is being used to create fake videos that closely mimic reality, and criminals can use them to impersonate others and gain access to sensitive information.

Financial institutions such as banks use video verification to prevent hackers from accessing or manipulating banking information.

The researchers note that deepfakes can also be used to attack organizations by inserting false evidence into videos. Fake videos created for extortion campaigns can pressure organizations into paying a ransom to avoid being publicly exposed.

Trend Micro also puts Amazon's Alexa "on the target list of deepfake criminals." Alexa, or any system that authenticates users by recognizing a human voice, is susceptible to attack by a machine-generated imitation of that voice.

Organizations should take measures to protect themselves from this kind of attack. One is multi-factor authentication, which helps keep sensitive accounts safe. Multi-factor authentication requires two or more independent factors to log in to an account, and it should be standard for any organization that handles sensitive or critical data.
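As a rough illustration of what a second factor can look like in practice, the sketch below uses the open-source pyotp library to verify a time-based one-time password (TOTP) alongside a normal login. The function names and flow here are illustrative assumptions, not part of any product or report mentioned in this article.

```python
# Minimal sketch: verifying a time-based one-time password (TOTP)
# as a second authentication factor. Requires: pip install pyotp
import pyotp


def enroll_user() -> str:
    """Generate a per-user TOTP secret, stored server-side and shared
    once with the user's authenticator app (e.g. via QR code)."""
    return pyotp.random_base32()


def verify_second_factor(secret: str, submitted_code: str) -> bool:
    """Return True only if the submitted 6-digit code matches the
    current TOTP window for this user's secret."""
    totp = pyotp.TOTP(secret)
    # valid_window=1 tolerates a small clock drift between devices.
    return totp.verify(submitted_code, valid_window=1)


if __name__ == "__main__":
    secret = enroll_user()
    current_code = pyotp.TOTP(secret).now()  # what the user's app would display
    print(verify_second_factor(secret, current_code))  # True
    print(verify_second_factor(secret, "000000"))      # almost certainly False
```

The point of the second factor is that a deepfake video or cloned voice on its own is not enough: the attacker would also need the one-time code generated on the legitimate user's device.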

The best way to keep access requests legitimate is to make sure your team verifies some form of identification. Three things should be taken into account when granting access: something the user has, something the user knows, and something the user is.

Researchers advise that companies whose profiles are often impersonated should prioritize safeguarding their information with biometrics.

Related Articles:
OpenAI Allows Users to Edit Faces with DALL-E 2
Markpainting – Detecting Deepfake Picture Editing
Deepfake Amazon Workers Create Confusion on Twitter