AI at the service of cybercrime groups

Artificial intelligence (AI) has ushered in a new era of innovation, with a transformative impact being felt across several industries at an unprecedented pace.


However, the rise of AI has also reshaped the threat landscape: cybercriminals are harnessing AI to develop more sophisticated, hyper-targeted attacks, which poses a major challenge for cybersecurity service companies.


Organizations continue to integrate AI-powered technologies into their cybersecurity operations to anticipate and adapt to the ever-changing threat landscape and to strengthen their security posture against these new challenges.


How can cybercriminals exploit ChatGPT?


ChatGPT, a powerful AI language model developed by OpenAI, has numerous applications across many domains, but it also carries risks that cybercriminals can exploit.


One of the main ways cybercriminals can exploit ChatGPT is through social engineering attacks: they take advantage of the model's natural language processing capabilities to craft highly convincing phishing emails and messages. Phishing already accounts for almost 90% of cyberattacks worldwide, and with these technologies in play, that percentage could rise in the coming months.


Cybercriminals can also use ChatGPT to generate input designed to exploit vulnerabilities in security systems or bypass content filters, for example by creating obfuscated malicious code or generating text that evades content moderation systems and challenges such as CAPTCHAs.

Additionally, tools like ChatGPT can be used to create exploits: fragments of malicious code (malware) that cyber actors can deploy in a specific attack. While the development of these technologies is largely aimed at benefiting organizations, the same capabilities can be turned to harmful ends.


At Cyberpeace, we believe that controlling or mitigating the potential risks associated with ChatGPT requires a proactive approach to security. This includes staying informed about the latest trends and developments in AI and cybersecurity, implementing robust security measures to protect sensitive data, and promoting awareness of the potential risks of emerging AI-powered technologies.




Written by:

Alberto Ávalos

Director of Incident Response and Threat Intelligence of Cyberpeace

