With AI at the center of media and industry focus, cybersecurity teams are increasingly putting pressure on themselves to prepare for AI-fueled cyber attacks.

According to Ivanti’s 2025 State of Cybersecurity research, half of IT security professionals ranked “yet unknown weaknesses” as a high or critical threat – the same as or higher than compromised credentials, supply chain risks, DDoS attacks and other real-world threats.

These “unknown” concerns remain more hype than substance for the moment. In fact, the Picus 2025 Red Report found no notable uptick in the use of AI-driven malware techniques in 2024. The report goes on to state that “AI enhances productivity but doesn't yet redefine malware.”

In other words, adversaries are leveraging AI in their attacks – automating phishing content, debugging malicious code, accelerating reconnaissance – but they’re not creating fundamentally new attack classes. Traditional attack techniques still dominate the cyber landscape, yet many teams remain fixated on speculative threats.

The big takeaway? Stop worrying about hypothetical AI-powered attacks looming beyond the horizon and focus on defending against the threats putting your organization most at risk today.

Real threats vs. AI hype: misaligned risk prioritization

Ever since ChatGPT started capturing headlines in November 2022, the world has been saturated with anticipation of the potential impact of AI. Analysts are projecting AI will inject $15.7 trillion into the global economy by 2030, and the tech is also increasingly shaping daily workflows across industries.

So, it's no surprise that AI is dominating boardroom talk. Since CISOs and security leaders are tasked with preparing for worst-case scenarios, they're understandably alert to what AI could do in the wrong hands. The issue is that much of this attention goes to novel, AI-generated threats, while far less goes to the main way attackers are already using AI: as a tool to amplify familiar, existing threats.

AI-generated synthetic content (such as manipulated media and deepfakes) and AI-based spoofing attacks (such as using gen AI to mimic someone's voice and tone) were the top-ranked predicted AI-related threats for 2025, with 53% of security professionals rating each as a "high / critical" threat.

Yet in practice, these AI-generated threat types are still far less common than traditional phishing techniques and ransomware attacks, where AI is already accelerating attackers' efforts at an unprecedented rate. The result is a risk prioritization model that skews toward the theoretical rather than the real.

Critical gaps in threat preparedness persist

Ivanti’s report also paints a troubling picture: real-world threats are outpacing organizational preparedness across multiple critical categories.

The problem isn't a lack of awareness; it's that security hygiene is inconsistent. The familiar gaps persist: weak credential management, patch delays, untested incident response plans and API / third-party blind spots. In short, net-new AI risks aren't the issue – the issue is that AI is amplifying existing threats.

Consider the top five areas in which our research found defenders falling behind – not because security teams lack awareness, but because the fundamentals are inconsistently enforced and AI is helping adversaries exploit them more efficiently:

1. Ransomware attacks

Ransomware remains a top threat, with 58% of security professionals ranking it as high / critical, yet only 29% say their teams are prepared to defend against ransomware attacks. Compounding the gap, many organizations still lack tested backup and recovery protocols, haven't segmented their networks effectively and run outdated incident response plans.

As gen AI ramps up ransomware threats, security teams may face even quicker code iteration, more automated vulnerability chaining and adaptive payload construction built to circumvent defenses. Attackers are also using AI to create virtual simulations for testing ahead of deployment – and adjusting their strategies accordingly.

2. API-related exposures

The increasing use of API-supported software has also increased the risk of API exposures. API-related vulnerabilities were the second-highest-ranked threat type among the security professionals Ivanti surveyed, with 52% rating them a "high / critical" threat. Yet just 31% of security teams said they felt "very prepared" to defend against API attacks – a 21-point preparedness gap.

Threat actors can now automate the discovery of endpoints through traffic analysis, reverse-engineer poorly documented APIs and generate fuzzing inputs to identify logic flaws. Once inside, they exploit permission creep, unvalidated input or weak authentication mechanisms.

On the defensive side, AI can also be a useful tool for analyzing large volumes of API traffic in real time to identify patterns that may indicate an attack.
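As a rough illustration of what that can look like, the sketch below scores aggregated per-client API traffic features against a learned baseline using scikit-learn's IsolationForest. The feature set, thresholds and sample values are illustrative assumptions, not a reference implementation of any particular product's detection logic.

```python
# Minimal sketch: flag anomalous API clients from aggregated request features.
# Assumes features per client (request rate, error ratio, distinct endpoints,
# mean payload size) have already been extracted from gateway logs.
import numpy as np
from sklearn.ensemble import IsolationForest

# Example baseline: one row per API client, columns are
# requests_per_min, error_ratio, distinct_endpoints, mean_payload_kb.
baseline = np.array([
    [12, 0.01,  5, 2.1],
    [ 9, 0.02,  4, 1.8],
    [15, 0.00,  6, 2.4],
    [11, 0.03,  5, 2.0],
])

model = IsolationForest(contamination=0.05, random_state=0).fit(baseline)

# Score new traffic windows; -1 marks behavior unlike the baseline,
# e.g. a client suddenly enumerating hundreds of endpoints (possible recon).
new_windows = np.array([
    [13, 0.02,   5, 2.2],   # looks like normal usage
    [90, 0.40, 250, 0.3],   # high error rate plus endpoint enumeration
])
for features, label in zip(new_windows, model.predict(new_windows)):
    print(features, "anomalous" if label == -1 else "normal")
```

In practice, the value comes from feeding such a model continuously from API gateway telemetry so reviews focus on the handful of clients whose behavior actually changed.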

3. Software vulnerabilities

Overall software vulnerabilities had a worrying 19-point gap between “high / critical” threat level and preparedness. Security teams still struggle with critical issues around departmental silos, inaccessible data and tech debt that make managing their expanding attack surface challenging. Ivanti’s report found that 45% of security teams said they lacked data to confidently identify specific vulnerabilities.

Furthermore, more than half of organizations surveyed (51%) admitted to using software that has reached end-of-life and is therefore no longer regularly updated and patched to maintain security compliance.

Security and IT teams are struggling to maintain patch hygiene, especially across legacy systems and shadow IT environments. This isn’t only a tooling problem. Patching cycles and security response times are slowed by security and IT having misaligned priorities – and AI is widening the gap between those who patch quickly and those who wait.

4. Compromised credentials

Like software vulnerabilities, compromised credentials carry a 19-point preparedness gap. Stolen credentials are an easy way in for attackers – even more so when paired with AI's ability to scale credential stuffing attacks, simulate human-like login behavior and adapt to multi-step authentication flows.

Gen AI is also used to analyze leaked datasets and identify reused passwords or credential patterns across platforms. What was once a manual grind is becoming a fully automated infiltration process. And yet MFA coverage remains patchy, and identity governance is often an afterthought; AI is simply exposing how poorly identity security is implemented.

5. Phishing

Phishing has long been a mainstay method for attackers to breach an organization's defenses. Yet even now, only 37% of cybersecurity teams say they're well prepared to defend against phishing threats.

This lag has never been more concerning: phishing methods are evolving with AI, and attackers now have the ability to create convincing deepfake content and personalize attacks using scraped public data.

Organizations clearly have a lot of work to do to bridge the gaps in their defenses, because these prominent threats are not going away – they are continuing to evolve. There is positive news behind the AI threat hype, however: AI and automation can also serve as powerful tools to bolster cybersecurity teams' own efforts.

AI cybersecurity tools: foe or friend?

Exploring the other side of the AI conversation, cybersecurity leaders have begun to view AI as an asset in bolstering defenses. Ivanti’s 2024 report “Gen AI and Cybersecurity: Risk and Reward” found that 90% of security professionals believe that gen AI benefits security teams as much or more than threat actors.

Organizations today use AI tools to automate threat detection and triage, accelerate log analysis, improve anomaly detection and simulate different attack methods. Gen AI doesn't need to be only a looming threat; it can be another tool that helps security teams identify weaknesses in their defenses more effectively and proactively address vulnerabilities.

Recommendations to refocus on cybersecurity fundamentals

AI threats are of course real threats, and cybersecurity teams shouldn’t disregard the growing use of AI in cyber attacks. However, with cybersecurity and IT teams dealing with a lack of talent / skills and battling burnout, they need to prioritize security resources on the most prominent and impactful types of threats.

Rather than spend time, budget and personnel crafting a security strategy around speculative AI threats, defenders should be doubling down on security best practices and fundamentals such as:

  • Identifying attack surface gaps
  • Remediating existing exposures
  • Ensuring rigorous credential protection and access controls

Your vulnerability and threat management strategy needs to focus primarily on proactively identifying and managing the most pressing threats to your organization, rather than on trying to guard against speculative future threats – especially ones that remain unknown.

Security teams can’t know the threat level of abstract unidentified threats and thus can’t act on them. It’s more beneficial to gain a comprehensive, real-time view of your attack surface and use that to understand your organization’s current risk posture.

Real cybersecurity leadership isn’t about chasing hypotheticals. Rather, it’s about systematically reducing exposure through full visibility, quick validation and risk-based prioritization.

Here's what today's leading security teams are doing, and what you should do to stay ahead:

1. Continuously monitor your complete attack surface

Teams can't protect what they can't see. Maintain a real-time, continuously updated view of every exposure point so you have visibility into the entire landscape. With limited resources, you cannot prepare for every unknown potential threat; instead, teams need a framework for assessing their attack surface and classifying known and unknown vulnerabilities.
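A minimal sketch of what such a classification pass might look like follows. The asset fields, categories and thresholds here are hypothetical, chosen only to show the idea of separating well-covered assets from stale or entirely unknown ones; they don't represent a specific framework.

```python
# Minimal sketch: tag each discovered asset by whether it is inventoried,
# owned and recently scanned. Field names and thresholds are illustrative.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Asset:
    hostname: str
    owner: str | None          # None = no accountable team (shadow IT candidate)
    internet_facing: bool
    last_scanned: date | None  # None = never assessed

def classify(asset: Asset, max_scan_age: timedelta = timedelta(days=30)) -> str:
    if asset.owner is None or asset.last_scanned is None:
        return "unknown"       # visibility gap: investigate before it's exploited
    if date.today() - asset.last_scanned > max_scan_age:
        return "stale"         # known asset, but assessment coverage has lapsed
    return "known"

inventory = [
    Asset("vpn-gw-01", "netops", True, date.today() - timedelta(days=3)),
    Asset("legacy-erp", "finance-it", False, date.today() - timedelta(days=120)),
    Asset("dev-api-test", None, True, None),
]

for a in inventory:
    print(f"{a.hostname:12} {classify(a)}")
```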

2. Prioritize threat response based on overall impact and risk to the business

Go beyond scanning for existing gaps – run simulations, use red teams and adopt behavior-based analytics to validate which exposures are actually exploitable. Leverage frameworks like MITRE ATT&CK, threat intelligence and your own findings to identify exposures, and prioritize remediation based on an established risk framework that weighs critical factors such as exploitability and overall impact on your organization's wider business objectives.
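To make the idea concrete, here is a toy example of ranking a remediation backlog by a score that combines exploitability and business impact. The field names, weights and sample exposures are assumptions for illustration, not a standard scoring formula.

```python
# Minimal sketch of risk-based prioritization: rank exposures by a score that
# combines validated exploitability with business impact.
from dataclasses import dataclass

@dataclass
class Exposure:
    name: str
    exploitability: float    # 0-1, e.g. informed by validation testing or threat intel
    business_impact: float   # 0-1, criticality of the affected asset or process
    exposed_externally: bool

def risk_score(e: Exposure) -> float:
    # Exposures reachable from the internet get a modest boost (illustrative weight).
    exposure_factor = 1.2 if e.exposed_externally else 1.0
    return e.exploitability * e.business_impact * exposure_factor

backlog = [
    Exposure("Unpatched VPN appliance CVE", 0.9, 0.8, True),
    Exposure("Internal test server default creds", 0.7, 0.3, False),
    Exposure("Legacy ERP on end-of-life OS", 0.5, 0.9, False),
]

# Highest-risk items surface first, so limited remediation effort goes where it matters.
for e in sorted(backlog, key=risk_score, reverse=True):
    print(f"{risk_score(e):.3f}  {e.name}")
```

The specific weights matter less than the discipline: every exposure gets scored against the same exploitability and impact criteria, and the backlog is worked in that order.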

3. Leverage AI to accelerate defenses

AI is a threat vector, but it's also a force multiplier for cybersecurity teams. Use it to its fullest potential: surfacing anomalies in real time, automating log analysis, generating attack simulations and reducing manual toil. The hype only pays off when you apply AI in ways that speed up and tighten your response.
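As one small example of automating log analysis, the sketch below surfaces log lines whose message "template" rarely appears in a historical baseline, so analysts review the unusual entries first. It's a toy stand-in under simplified assumptions, not a production triage pipeline.

```python
# Minimal sketch: surface unusual log lines by comparing against a frequency
# baseline of message templates (numbers and hex IDs masked out).
import re
from collections import Counter

def template(line: str) -> str:
    # Mask volatile fields so similar messages collapse into one template.
    return re.sub(r"\b(?:0x[0-9a-fA-F]+|\d+)\b", "<*>", line)

def rare_lines(baseline_logs, new_logs, min_seen=2):
    seen = Counter(template(l) for l in baseline_logs)
    # Anything whose template was rarely or never seen before gets surfaced for review.
    return [l for l in new_logs if seen[template(l)] < min_seen]

baseline = [
    "auth ok for user 1042 from 10.0.0.5",
    "auth ok for user 2211 from 10.0.0.9",
    "auth ok for user 1042 from 10.0.0.5",
]
incoming = [
    "auth ok for user 3301 from 10.0.0.7",
    "auth failed for user 1042 from 203.0.113.44 (5 attempts)",
]
print(rare_lines(baseline, incoming))  # only the failed-login line is flagged
```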

Real cybersecurity leadership is forged in present-day focus and attention to near-future developments. It means making surgical decisions under pressure, identifying actual risk and driving measurable reduction.

Forget the future unknowns – focus on what’s known

Research reveals that many security teams are overestimating the predicted risks of AI-powered threats while remaining underprepared to defend against the actual threats targeting them today. Cybersecurity teams need to realign their cyber defense strategies to reflect reality or risk unnecessary damage.

The truth is, AI may redefine the scale and speed of cyber attacks, but even with new AI capabilities, today’s attackers are still using the same old tricks. The winning cybersecurity strategies won’t put all of their time and resources into building AI-resistant walls – they’ll be the ones proactively readying defenses to fend off the threats right in front of them.

Check out Ivanti’s 2025 Cybersecurity Report to benchmark your readiness and close the preparedness gap.