Cybercriminals exploit Facebook ads for fake AI tools and malware
The scheme targets Facebook users in Europe, primarily males aged 25-55; stolen data is sold on the dark web or used for financial fraud.
Cybersecurity researchers from Bitdefender have uncovered a disturbing trend where cybercriminals exploit Facebook’s advertising platform to promote counterfeit versions of popular generative AI tools, including OpenAI’s Sora, DALL-E, ChatGPT 5, and Midjourney. These fraudulent Facebook ads are designed to trick unsuspecting users into downloading malware-infected software, leading to the theft of sensitive personal information.
The hackers hijack legitimate Facebook pages of well-known AI tools like Midjourney to impersonate these services, making false claims about exclusive access to new features. The malicious ads direct users to join related Facebook communities, where they are prompted to download supposed ‘desktop versions’ of the AI tools. These downloads are in fact malicious Windows executables carrying information-stealing malware such as Rilide, Vidar, IceRAT, and Nova Stealer, which can harvest stored credentials, cryptocurrency wallet data, and credit card details for illicit use.
The cybercrime scheme goes beyond fake ads and hijacked pages; the attackers set up multiple websites to evade detection and use file-sharing platforms like GoFile to distribute malware through fake Midjourney landing pages. Bitdefender’s analysis found that the campaigns particularly targeted European Facebook users, with one prominent fake Midjourney page amassing 1.2 million followers before being shut down on 8 March 2024. The scams reached users in Sweden, Romania, Belgium, Germany, and other European countries, with ads aimed primarily at males aged 25-55.
Bitdefender’s report also highlighted the Malware-as-a-Service (MaaS) model underpinning these campaigns, in which malware developers rent out ready-made tools, enabling even low-skilled criminals to conduct sophisticated attacks. These include data theft, online account compromise, ransom demands after encrypting data, and other fraudulent activities.
The case mirrors previous incidents, such as Google’s lawsuit against scammers in 2023 for using fake ads to spread malware. In that case, scammers posed as official Google channels to entice users into downloading purported AI products, highlighting a broader trend of exploiting trusted platforms for illicit gains.