Thairath Online

Cybercriminals Complain About AI Slop Flooding Underground Forums, Damaging Hacker Community and Reputation

Tech · 08 May 2026 12:13 GMT+7



The arrival of AI has affected not only ordinary users but also hackers and cybercriminal groups, who now face a flood of AI slop: low-quality content and misleading advertisements posted by users who lack genuine skills.

Wired magazine published an article titled "Cybercriminals Are Complaining About AI Slop Flooding Their Forums," citing a study by researchers from the University of Edinburgh, together with the universities of Cambridge and Strathclyde. The researchers analyzed nearly 100,000 AI-related conversations on cybercrime forums since 2022, the year ChatGPT launched, and found that the initial excitement about generative AI has since given way to growing resistance.

Many forum members have begun criticizing posts of basic, AI-generated cybersecurity explanations, which not only lower the quality of discussion but also erode the human interaction that is central to these communities.

Ben Collier, a security researcher at the University of Edinburgh, noted that these forums originally revolved around specialized skills, but newer users have taken to posting AI-written technical hacking explanations, leaving experienced hackers feeling that their genuine expertise is being devalued. Some users also worry that AI-driven Google search results will divert traffic away from the forums, reducing activity on them.

The Wired report also states that AI has not significantly lowered the skill barriers for high-level attacks. Most AI influence remains concentrated in automated tasks such as SEO fraud, social media bot creation, and generating deceptive messages in romance scams.

Meanwhile, high-level hackers remain cautious about using AI-generated code in their projects, concerned that vulnerabilities and coding errors could leave traces that expose them.

Although some criminal groups have proposed AI-powered cyber black markets to speed up the buying and selling of stolen data, the idea faces strong opposition from many members, who see it as ineffective and a threat to the trust underpinning the existing human-verified trading systems.