ConnectWise
;

3/13/2026 | 8 Minute Read

The dark side: How threat actors are using AI

Contents

    Automate cybersecurity remediation

    See how the ConnectWise Platform helps you stay ahead of risk without slowing down.

    Artificial intelligence (AI) is rewriting how work gets done across just about every industry, and its potential impact within cybercrime is no exception. Threat actors don’t have to reinvent the wheel. They’re now able to potentially use AI to scale familiar attacks and make them harder to detect, and at unprecedented levels of automation. Today’s AI-enhanced threat landscape, especially within the realms of identity impersonation, is characterized by scale, persuasion, and speed. Gartner predicts that by 2027, 17% of total cyberattacks/data leaks will involve generative AI.

    Let’s explore how AI could be weaponized across the attack chain today, what that looks like in the real world, and the practical steps that can reduce risks.  

    Prompt injection: From thesis to….?

    According to Learn Prompting, prompt injection can be defined as “a way to change AI behavior by appending malicious instructions to the prompt as user input, causing the model to follow the injected commands instead of the original instructions.”

    A real-life example of prompt injection

    Security researchers have demonstrated that prompt injection attacks can cause LLM‑powered tools to search connected resources for secrets such as API keys and exfiltrate them to attacker‑controlled endpoints.

    How to protect against prompt injection attacks

    Large language models (LLMs) can’t tell the difference between instructions from developers, users, and bad actors. LLMs don’t have any built-in mechanism for security, so to protect against prompt injection, you must build controls outside the LLM itself. This includes limiting the LLM’s access to sensitive data, not training it on sensitive data, and, if using retrieval-augmented generation (RAG), operating the LLM with the same permissions as the user accessing the data. 

    Add data loss safeguards such as secret detection, output filtering, logging, and alerting to catch attempted retrieval of API keys and other credentials before that information leaves your environment.

    Automated reconnaissance and targeting

    Threat actors are using AI to speed up reconnaissance and turn publicly available information into ready-to-use attack plans. Instead of manually combing through websites, social profiles, breach data, and technical breadcrumbs, they can use AI tools to summarize large volumes of open-source intelligence (OSINT) into actionable targeting notes. These notes might identify the right people to impersonate, the tools an organization relies on, and the most likely entry points for phishing, vishing, or vendor compromise attempts.

    How to defend against AI-enabled reconnaissance and targeting

    Start by reducing what attackers can learn from public sources. Review job postings and public web content to ensure they do not disclose unnecessary details about security controls, internal tools, or processes. Audit public code repositories for exposed secrets and enforce scanning for credentials, tokens, and sensitive configuration files before code is published. Strengthen vendor and internal impersonation defenses by using allowlisted contact methods, verified support channels, and callback procedures for high-risk requests. Finally, continually train teams to treat highly specific messages as a risk signal rather than proof of legitimacy. A message that knows your tools and your org chart can be a sign that the attacker did their homework.

    Phishing

    Threat actors are using AI to automate the creation of convincing phishing emails. According to The State of Phishing report by SlashNext, “email phishing has increased significantly since the launch of ChatGPT, signaling a new era of cybercrime fueled by generative AI.”

    While attackers could use ChatGPT, Bard, Midjourney, or other AI models to help them create malicious content, they would generally need to use prompt injection to get around the safeguards that exist. This gave rise to AI tools made specifically for cybercrime, such as WormGPT and FraudGPT.

    WormGPT: An article on DarkReading.com calls WormGPT “the Dark Web imitation of ChatGPT that quickly generates convincing phishing emails, malware, and malicious recommendations for hackers.”

    FraudGPT: According to an article on DarkReading.com, “FraudGPT—which in ads is touted as a ‘bot without limitations, rules, [and] boundaries’—is sold by a threat actor who claims to be a verified vendor on various underground Dark Web marketplaces, including Empire, WHM, Torrez, World, AlphaBay, and Versus.”

    A real-life example of AI used for phishing attacks 

    According to an article on BleepingComputer.com, OpenAI reported that SweetSpecter, a Chinese adversary known for targeting Asian governments, “targeted them directly, sending spear phishing emails with malicious ZIP attachments masked as support requests to the personal email addresses of OpenAI employees. If opened, the attachments triggered an infection chain, leading to SugarGh0st RAT being dropped on the victim’s system. Upon further investigation, OpenAI found that SweetSpecter was using a cluster of ChatGPT accounts that performed scripting and vulnerability analysis research with the help of the LLM tool.”

    How to protect against phishing emails written by AI 

    While phishing attacks may become more sophisticated with fewer spelling errors and better images, the underlying goal remains the same: to trick you into clicking on malicious links, downloading malicious attachments, or even getting you to call a number. These messages are still going to contain other hallmarks of phishing, such as urgency and scare tactics. 

    Remember, defending against phishing requires a combination of awareness, skepticism, and proactive measures. User education and awareness programs are crucial to ensure individuals can recognize and report phishing attempts, even when they are highly sophisticated.

    Deepfakes

    Deepfakes refer to manipulated audio, images, or videos that appear authentic but are actually fabricated. As the technology improves (and becomes cheaper to use), we expect deepfakes to contribute more to other crimes, such as extortion, harassment, blackmail, document fraud, and identity-based scams. The ability to pretend to be someone else, not just through email but also by phone and video, makes it more difficult to discern what is real from fake in the digital realm.

    How it works

    Deepfakes are often created using generative adversarial networks (GANs). GANs use two machine learning models, image generator and discriminator, which are trained on the same set of images. These two models work together to refine an image until it is indistinguishable from the real reference image. This gives adversaries the ability to modify or generate audio and video in several ways, from swapping one person’s face and voice with another to impersonating an individual or even generating an entirely new person using whatever voice or face they want. What’s more, the hardware required to run models like this can be bought off the shelf, negating the need for massive cloud computing.

    Deepfakes become especially dangerous when paired with familiar social engineering tactics:

    • Vishing (voice phishing): Attackers use phone calls, voicemails, or voice notes to pressure a target into taking an action (sharing credentials, approving MFA prompts, wiring funds, changing bank details, or resetting passwords). With AI, vishing can be amplified using voice cloning (audio deepfakes) to impersonate a trusted executive, coworker, or vendor and make the request feel urgent and legitimate.
    • Swatting: Swatting is a hoax call or report intended to trigger an emergency response. With AI, criminals can craft more convincing, detailed false narratives (and in some cases, use synthetic voice) to increase believability and speed up escalation, turning a digital impersonation into a physical-world safety risk. The FBI has issued public guidance on swatting and recommended safety steps.

    Related content: AI swatting: How synthetic threats are crossing from cyber to physical security

    Real-life examples of deepfakes

    A North Korean hacker successfully posed as a fake IT worker using deepfake technology, deceiving a security firm, KnowBe4, into hiring them. This case serves as a stark reminder of how advanced and convincing deepfake technology has become, making it imperative for organizations to implement stringent verification processes to prevent such breaches.

    WIRED reported in 2025 on a group linked to a wave of swatting incidents targeting US universities, using hoax “active shooter” threats to trigger armed emergency responses and campus lockdowns. It serves as an example of how swatting can be scaled into repeatable “threat-as-a-service” and how AI can amplify it by generating convincing scripts and details that make hoax reports sound more credible.

    How to defend against deepfakes 

    Tools such as Deepfake Detector and Deepware Scanner can help flag suspicious media; however, as technology improves, it’s getting harder to distinguish between what’s real and what’s not. The strongest defense is shifting from visual inspection to verification and control. Treat unexpected requests involving money, credentials, MFA resets, or urgent operational changes as high risk, even if the person looks or sounds right.

    Malware development

    AI-powered malware development allows threat actors to automate various stages of the attack lifecycle, including reconnaissance, evasion, and exploitation. Machine learning algorithms enable attackers to analyze vast amounts of data, identify vulnerabilities, and develop tailored malware. 

    In 2023, Hyas Labs developed a proof of concept called EyeSpy, which they describe as “AI-powered malware that chooses its targets and attack strategy backed by reasoning, then adapts and modifies its code in-memory to align with its changing attack objectives. Its evasive nature evolves on its own.” 

    Malware like this is being developed by researchers in a move to understand the potential tactics, techniques, and procedures (TTPs) that attackers and their malware may use in the future, given these advancements. 

    A real-life example of malware developed by AI

    We can easily identify AI-written malware because of its simplicity and the fact that it’s well-commented and uses function names that describe what they do. Typically, malicious code is written in such a way that it obscures what it does and makes the code hard to read.

    However, the risk lies in lowering the bar of entry for threat actors to get into the game. According to Inside the Mind of a Hack, “74% of hackers agree that AI has made hacking more accessible, opening the door for newcomers to join the fold.”

    How ConnectWise can help

    By integrating best-in-class EDR solutions with ConnectWise Managed EDR™, MSPs can offer SOC-led, enterprise-grade cybersecurity to their clients. ConnectWise Managed EDR also uses AI-driven technology to continuously monitor endpoint data, quickly identifying and responding to potential threats. This advanced capability ensures that emerging threats are detected and managed before they can cause significant damage.

    Conclusion

    As AI continues to evolve, threat actors will undoubtedly find new ways to exploit its capabilities. It is imperative for organizations and individuals alike to remain vigilant, regularly update their security protocols, and invest in robust AI-powered defenses. By staying informed, proactive, and adaptive, we can effectively mitigate the risks posed by the dark side of AI and safeguard our digital lives.

    Related Articles