Deceived by a voice deepfake: thieves steal $243,000

The CEO of a UK company thought he was speaking with his superior on the phone: such is the power of artificial intelligence and speech-synthesis software, which can be very dangerous in the wrong hands.

The Wall Street Journal reports the first officially documented case of "vishing", or voice phishing: nothing less than a fraud perpetrated through a telephone call made with an artificial voice. In other words, a voice deepfake.

The case cost the victim, a UK company operating in the energy sector, as much as $243,000. The fraud was committed in March, when still-unidentified criminals used commercially available speech-synthesis software to impersonate the managing director of the German parent company of the British firm. Police are still investigating to find the culprits, and the names of the companies victimized by the operation have not been revealed.

The CEO of the British company was reached by telephone by what appeared, to all intents and purposes, to be his superior: the German accent and vocal patterns sounded familiar enough not to arouse any suspicion. The "known voice" asked him to urgently transfer funds, within the hour, to a Hungarian supplier, with the assurance that the transaction would be reimbursed immediately.

Convinced he was dealing with his boss, the CEO of the English company carried out the order. Not only was the transfer (obviously) never reimbursed, but the criminals made a further voice call urgently requesting another transfer. This time the British CEO refused. The funds already sent to Hungary were then broken up and diverted to various other jurisdictions.

Last July, the Israeli National Cyber Directorate issued a warning about a "new type of cyber attack" that uses artificial-intelligence technologies to impersonate high-level business executives and issue orders (monetary transactions or other harmful actions) to employees.

The fact that a crime of this precise nature has already claimed its first victim should be cause for concern, and an important wake-up call: it is unlikely that such an event will remain an isolated case.

Indeed, the opposite is far more probable: if social-engineering attacks supported by these technologies prove successful, cases like this can only increase in frequency.

It is reasonable to assume that speech-synthesis and voice-imitation technologies will become even more accurate, prompting criminals to use them to impersonate a specific person over the telephone and so obtain confidential information to exploit in subsequent moves.

Last year Pindrop, a security company specializing in software to combat voice fraud, saw 350% growth in this type of operation, with one voice call in 638 appearing to have been created by synthesis software.

For now, the best advice is to always verify voice instructions through another communication channel, or in any case to implement a double-checking system that ensures the person on the other end of the line really is who we believe them to be.
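As a rough illustration of what such a double-check might look like in practice, here is a minimal sketch in Python. All names here (PaymentRequest, KNOWN_NUMBERS, the confirmation functions) are hypothetical, invented for this example; a real implementation would hook into a company's actual payment and telephony systems.

```python
import secrets
from dataclasses import dataclass

# Hypothetical directory of phone numbers known to belong to executives,
# maintained independently of any incoming call (e.g. from internal records).
KNOWN_NUMBERS = {
    "ceo.parent-company": "+49-000-000-0000",  # placeholder number
}

@dataclass
class PaymentRequest:
    requester: str       # identity claimed on the incoming call
    amount_usd: float
    beneficiary: str
    confirmed: bool = False

def request_out_of_band_confirmation(req: PaymentRequest) -> str:
    """Issue a one-time code to be verified on a call WE initiate,
    to a number WE already know - never the number that called us."""
    number = KNOWN_NUMBERS.get(req.requester)
    if number is None:
        raise ValueError(f"No independently verified number for {req.requester}")
    code = secrets.token_hex(3)  # short one-time challenge
    print(f"Call {number} yourself and confirm code {code} before paying.")
    return code

def confirm(req: PaymentRequest, issued_code: str, spoken_code: str) -> None:
    """Mark the request as confirmed only if the code read back
    on the callback matches the one we issued."""
    if spoken_code == issued_code:
        req.confirmed = True

def execute_transfer(req: PaymentRequest) -> None:
    if not req.confirmed:
        raise PermissionError("Refusing transfer: no out-of-band confirmation.")
    print(f"Transferring ${req.amount_usd:,.2f} to {req.beneficiary}.")

# Example: a caller claiming to be the parent-company CEO demands a transfer.
req = PaymentRequest("ceo.parent-company", 243_000, "Hungarian supplier")
code = request_out_of_band_confirmation(req)
confirm(req, code, spoken_code=code)  # code verified on the callback
execute_transfer(req)
```

The crucial detail is that the confirmation call is initiated by the recipient of the order, to a number obtained independently of the suspicious call itself, so even a perfect voice clone cannot complete the loop.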
