Cybercriminals start using ChatGPT
Cybercriminals have begun exploiting the artificial intelligence tool ChatGPT for malicious activities, such as recreating malware strains, building encryption tools, and setting up dark web marketplaces. The Check Point Research team warns that while the misuses observed so far are basic, more sophisticated threats are expected as cybercriminals refine their use of AI-based tools.
Check Point Research (CPR), the threat intelligence arm of cybersecurity company Check Point, found that cybercriminals have been using the artificial intelligence-based tool ChatGPT for malicious purposes. The team described three examples of such misuse of ChatGPT:
- Recreating malware strains and techniques described in research publications and write-ups about common malware.
- Creating encryption tools: one forum thread showcased a script that combines different signing, encryption, and decryption functions (a benign sketch of this kind of script follows the list).
- Creating dark web marketplaces.
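To picture what "combining different signing, encryption, and decryption functions" typically means, here is a minimal, benign sketch, assuming Python and the open-source `cryptography` package. It illustrates ordinary cryptographic building blocks and is not code from the CPR report.

```python
# Benign illustration (not from the CPR report): chaining symmetric
# encryption/decryption with asymmetric signing and verification.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Symmetric encryption and decryption of a message
sym_key = Fernet.generate_key()
fernet = Fernet(sym_key)
ciphertext = fernet.encrypt(b"example payload")
plaintext = fernet.decrypt(ciphertext)

# Asymmetric signing and verification of the decrypted message
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss_padding = padding.PSS(
    mgf=padding.MGF1(hashes.SHA256()),
    salt_length=padding.PSS.MAX_LENGTH,
)
signature = private_key.sign(plaintext, pss_padding, hashes.SHA256())
private_key.public_key().verify(signature, plaintext, pss_padding, hashes.SHA256())

print("encryption, decryption, signing, and verification all succeeded")
```

On its own, such code is entirely legitimate; CPR's point is that AI tools lower the effort needed to assemble it.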
As CPR notes, although the examples given in the report are relatively basic, ‘it is only a matter of time until more sophisticated actors enhance the way they use AI-based tools for bad’.