
Threat actors are leveraging newly released artificial intelligence (AI) attack security tools to leverage recently disclosed security flaws.
Hexstrike AI is pitched as an AI-driven security platform for automating reconnaissance and vulnerability discovery, with the aim of acquiring licensed red teaming operations, bug bounty hunting and flag (CTF) challenges, according to its website.
For each information shared in the GitHub repository, the open source platform integrates with over 150 security tools to promote network reconnaissance, web application security testing, reverse engineering and cloud security. It also supports a number of specialized AI agents tweaked for vulnerability intelligence, development, attack chain discovery, and error handling.

However, reports from Checkpoint show that threat actors have got the tools to gain a hostile advantage and are trying to weaponize tools to take advantage of the recently disclosed security vulnerabilities.
“This marks a pivotal moment. It is claimed that tools designed to enhance defenses will be rapidly reused in engines for exploitation and crystallized previous concepts into widely available platforms that drive real-world attacks,” the cybersecurity company said.
Discussions at the Darknet Cyber Crime forum show that threat actors claimed they successfully exploited three security flaws that Citrix disclosed using Hexstrike AI last week.
Checkpoint said the malicious use of such tools has a major impact on cybersecurity, not only reducing the window between public and mass exploitation, but also helping to concurrently automate exploitation efforts.

Furthermore, it reduces human efforts and allows them to automatically retry failed attempts of exploitation until they succeed. The cybersecurity company said it would increase “overall exploitation yields.”
“The immediate priorities are clear: we will strengthen our patches and hardening systems,” he added. “Hexstrike AI represents a broader paradigm shift, increasingly being used to weaponize AI orchestration at a rapid and large scale.”

The disclosure comes from two researchers at Alias Robotics and Oracle Corporation in a newly published study that AI-powered cybersecurity agents like Pentestgpt increase the risk of rapid injection and effectively transform security tools into cyberweapons through hidden instructions.
“Hunters become hunters, security tools become attack vectors, and what started as penetration testing ends with attackers gaining shell access to the tester’s infrastructure.”
“Current LLM-based security agents are fundamentally insecure for deployment in hostile environments without comprehensive defense measures.”
Source link