Experts from Kaspersky warn that deepfake and voice-cloning content, also known as deepvoice, is already being used to bypass security controls, manipulate online information, and carry out other attacks. On the illegal deepfake market, the cost of content can range from $300 to $20,000 per minute, depending on its quality and complexity. Check out more details below.

While AI benefits certain fields, such as brands and creators producing avatars of real people to record ads in multiple languages, the use of deepfakes for malicious purposes harms other sectors, such as banking. Identity theft using the technology has played a major role in financial fraud, with tactics and tools capable of circumventing the biometric verification mechanisms already used by various institutions. This allows attackers to gain unauthorized access to bank accounts and sensitive information.

Scammers looking to buy this type of content seek out other criminals who can produce realistic, high-quality, and even real-time material, since fluidly simulating movements, facial expressions, or voices is a challenge: algorithms must capture and reproduce very specific details. For this reason, a dedicated deepfake market already exists on the darknet, where vendors offer services to create fake content and even sell tutorials with tips on how to select source material or swap faces and voices to generate a convincing fake.

To create this type of scam, a generative network can be trained on real photos of a person, producing numerous convincing images that are then used to assemble a video. The same machine learning techniques can also be used to synthesize artificial voices, enabling the creation of fake audio content. Unfortunately, people whose voice or video samples are available online, such as celebrities and public figures, are more vulnerable to these impersonation attacks. This evolution is expected to continue, as deepfakes have already been listed among the most worrying uses of artificial intelligence.

Isabel Manjarrez, Security Researcher at Kaspersky's Global Research and Analysis Team, explains that demand for deepfakes in this market is so high that it exceeds the existing supply. Therefore, an even more significant increase in incidents involving high-quality fake content is likely to occur soon. This poses a real risk to the cybersecurity landscape since, according to Kaspersky data, two thirds of Brazilians (66%) do not know what a deepfake is.

“The exposure of sensitive personal data online, such as facial images or audio, poses a significant challenge to protecting this information from unauthorized access and malicious use to create fake content or impersonate other people. This can cause material or monetary damage, as well as psychological and reputational impact for victims. Therefore, it is essential to establish clear guidelines and standards for the creation and use of deepfakes, ensuring transparency and accountability in their implementation. There is no need to fear AI – it is a tool with great potential, but it is up to humans to make ethical use of it,” concluded Isabel Manjarrez.

Kaspersky shares some characteristics that can help identify a deepfake:

  • The source of the content or information is suspicious. Be suspicious of emails, text or voice messages, phone calls, videos, or other media you see or receive, especially if they convey strange or illogical information. Check with official sources of information.
  • Facial and body movements are unusual. Be suspicious if facial or body expressions are unnatural, such as awkward blinking or no blinking at all. Check to see if the words match the lip movements and if the facial expressions are appropriate for the context of the video.
  • The conditions and lighting in a video are inconsistent. Be suspicious if you notice anomalies in the background of the video. Assess whether the lighting of the face matches the environment; inconsistencies in lighting may indicate manipulation.
  • Fake audios may have voice distortions. Pay attention to the sound quality. Be suspicious if you hear an unnatural monotone in the voice, if it is unintelligible, or if there are strange noises in the background.
  • To stay protected, information is key. Be aware of the existence of deepfakes and educate yourself about digital manipulation technologies. The more aware you are, the better you will be able to detect potential fraud.
  • Use security solutions that protect against all types of threats, known and unknown, when browsing the Internet, both on computers and mobile devices.

Source: https://www.hardware.com.br/noticias/kaspersky-revela-crescimento-de-deepfakes-e-clonagem-de-voz-em-ciberataques-com-ia.html


