
5 ways cybercriminals are using AI to scam

Cybercriminals are also using AI to expand and improve their tactics and strategies

Artificial Intelligence (AI) has revolutionized countless fields and become an indispensable tool in many disciplines. Unfortunately, cybercriminals also use it in various ways to carry out attacks and other malicious actions.

ESET, a company specialized in threat detection, warns in a press release about five ways in which cybercriminals use AI to steal sensitive information and obtain financial gain.

“In February 2023, at ESET we were already warning about the malicious use that cybercrime could make of a tool with a scope as broad as Artificial Intelligence. Less than a year and a half after that warning, we can already see how malicious actors are applying AI and using it to expand and improve their attack tactics and strategies,” comments Camilo Gutiérrez Amaya, Head of the Research Laboratory at ESET Latin America.

1- Perfect phishing: According to AI security researchers at ETH Zurich, the meteoric rise of ChatGPT was accompanied by very dangerous collateral damage: an enormous increase in phishing emails. Generative AI has become the best ally for devising ways to trick people into revealing sensitive information, whether to obtain financial gain or to use it for other malicious actions.

But AI is not only used to produce increasingly convincing content; it is also used to translate messages and improve communication between different cybercriminal groups around the world. Services like GoMail Pro, which integrates ChatGPT, allow cyberattackers to polish the content of the messages they then send to their victims. While OpenAI tries to limit the use of its products for illegal activities, such abuse is very difficult to detect and control. Even so, at the beginning of this year the company announced the closure of five accounts associated with cybercrime.

2- Optimize doxing: Doxing, also known as doxxing, is the practice of publishing personal information about third parties with the intent to intimidate, extort, or otherwise harm them. This practice became established in 2020 as part of ransomware attacks, in which cybercriminals, in addition to encrypting all the files, steal information from their victims to put additional pressure on organizations, threatening to publish all that private information if the ransom is not paid.

Today AI, which is trained on the vast amount of data on the Internet, including personal data, can deduce where a person might be located. The rule is simple: the more information there is about a user on the Internet, the more vulnerable they are to this type of practice. In fact, security researcher Mislav Balunović and his team discovered that, thanks to GPT-4, it is possible to infer sensitive information about a person, such as their ethnic origin or location, simply from their conversations with a chatbot. ESET therefore recommends always thinking twice about what kind of information and content is shared online.

3- Make audio and deepfake scams more realistic: It is nothing new that generative AI has evolved to the point where it is genuinely difficult to tell a real image from one created by these tools. A paradigmatic example is the scam in which an employee transferred 24 million dollars after a video meeting with someone posing as the company's chief financial officer, who asked him to transfer that large sum of money.

The other side of the same coin is audio scams. It takes just a few seconds of a person's voice - from a video uploaded to Instagram or TikTok, for example - to produce something dangerously convincing.

4- Bypass identity checks: In the same vein as deepfakes, cybercriminals have found a way to use Artificial Intelligence to bypass controls that verify a person's identity through an image. How do they do it? Using an identification document, whether forged or stolen, they superimpose its photo over the face of the real person - much like a filter on Instagram - and thus deceive the system. Although this practice is still at a basic stage, it is a methodology that could continue to improve and spread.

5- Offer jailbreaking as a service: The companies behind artificial intelligence tools implement several layers of protection so that their models do not return information to users that could be dangerous. Cybercriminals are now choosing to hire jailbreaking services rather than build their own AI models, which would require far more time and money. By circumventing the limitations of the original tools, attackers can manipulate the targeted AI system into generating content for malicious emails or even writing code for ransomware and other types of malware.
