According to Cointelegraph, AI-powered deepfake scams are becoming increasingly prevalent, with security firms warning that this attack method could extend beyond videos and audio recordings. On September 4, software firm Gen Digital reported that malicious actors using AI-powered deepfakes to defraud crypto holders ramped up their operations in the second quarter of 2024. The company revealed that a scammer group called 'CryptoCore' had already stolen over $5 million in crypto using AI deepfakes.
While the amount may seem low compared to other attacks in the crypto space, security professionals believe that AI deepfake attacks will expand further, threatening the safety of digital assets. Web3 security firm CertiK expects AI-powered deepfake scams to become more sophisticated. A CertiK spokesperson explained that the attack vector could be used to trick wallets that rely on facial recognition, handing hackers access to victims' funds. The spokesperson emphasized the importance of evaluating the robustness of facial recognition solutions against AI-driven threats and of raising awareness within the crypto community about how these attacks work.
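To see why deepfakes threaten face-based wallet login, consider a minimal sketch of a challenge-response liveness check; the function and field names here are hypothetical illustrations, not any vendor's actual API. A replayed deepfake video can satisfy a static face match, but it cannot perform an action chosen at random at login time:

```python
import secrets

# Toy model of a challenge-response liveness check for a face-unlock wallet.
# In a real product the similarity score and observed action would come from
# the vendor's vision pipeline; here they are plain dict fields for clarity.

CHALLENGES = ("blink twice", "turn head left", "read these digits aloud")

def issue_challenge() -> str:
    """Pick an unpredictable action the live user must perform."""
    return secrets.choice(CHALLENGES)

def verify_login(capture: dict, challenge: str,
                 similarity_threshold: float = 0.9) -> bool:
    """Accept only if the face matches AND the requested action was performed."""
    return (capture["face_similarity"] >= similarity_threshold
            and capture["observed_action"] == challenge)

challenge = issue_challenge()

# A pre-recorded deepfake matches the enrolled face but performs a fixed
# action, so it fails unless the attacker happens to guess the challenge.
deepfake = {"face_similarity": 0.97, "observed_action": "blink twice"}
live_user = {"face_similarity": 0.95, "observed_action": challenge}

print(verify_login(deepfake, challenge))   # False unless the guess matched
print(verify_login(live_user, challenge))  # True
```

A face match alone is one static signal; binding the login to a randomized challenge is one common way such systems are hardened against replayed or synthesized footage.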
Luis Corrons, a security evangelist at cybersecurity company Norton, believes that AI-powered attacks will continue to target crypto holders because of the significant financial rewards and lower risks for hackers. Corrons noted that cryptocurrency transactions are often high in value and can be conducted anonymously, making them an attractive target for cybercriminals. He also pointed out that the lack of regulation in the crypto space means fewer legal consequences for cybercriminals and more opportunities to attack.
Security professionals believe there are ways for users to protect themselves from AI-powered deepfake attacks. According to CertiK, education is a good place to start. A CertiK engineer stressed the importance of understanding the threats as well as the tools and services available to combat them. Being wary of unsolicited requests and enabling multifactor authentication for sensitive accounts can add an extra layer of protection against such scams. Corrons also suggested watching for 'red flags' such as unnatural eye movements, facial expressions, body movements, a lack of emotion, facial morphing, image stitching, awkward body shapes, misalignments, and inconsistencies in the audio to determine whether one is looking at an AI deepfake.
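As a concrete illustration of the multifactor authentication step, the sketch below computes a time-based one-time password (TOTP, the scheme behind most authenticator apps) per RFC 6238 using only Python's standard library. The base32 secret is a placeholder for illustration, not a real credential:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period           # 30-second time step
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()   # HMAC-SHA1 per the RFC
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret; a real one is generated during MFA enrollment and
# shared between the authenticator app and the service.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds and is derived from a secret the attacker never sees, a stolen password or a convincing deepfake alone is not enough to access an account protected this way; a verifying service typically also accepts codes from adjacent time steps to tolerate clock drift.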