Industry executives and experts share their predictions for 2025. Read them in this 17th annual VMblog.com series exclusive. By Mark Wojtasiak, Vice President of
Research and Strategy at Vectra AI
The cybersecurity landscape in 2025 will be
defined by both the promise and the challenges of artificial intelligence (AI).
Attackers will increasingly use AI to enhance the efficiency and effectiveness
of their operations, with a divide emerging between those who master AI to
create sophisticated, adaptive attacks and those employing it more
superficially. On the defensive side, while AI will remain a critical tool in
combating these threats, its success will hinge on intentional, comprehensive
integration into people, processes, and technology.
I spoke with my colleagues at Vectra AI to
gather their insights on what to expect in 2025 and how businesses can best
prepare for next year's threats. Here's what they had to say, as well as my
personal thoughts:
Not All AI-based Attacks Will be Created Equal
In 2025, attackers will continue to leverage
AI to streamline attacks, lowering their own operational costs and increasing
their net efficacy. In most cases this will increase attacker sophistication;
however, we'll start to see a clear distinction emerge between groups that
masterfully apply AI and those that adopt it more simplistically. The attackers who
skillfully leverage AI will be able to cover more ground more quickly, better
tailor their attacks, predict defensive measures, and exploit weaknesses in ways
that are highly adaptive and precise.
Defensive AI will play a critical role in
combating these attacks but will require intentionality in how, where, and when
it is operationalized to be truly effective. The teams that excel will be those
that understand how to apply AI beyond surface-level automation, integrating it
into the full range of people, process, and technology. Having done so, they
will find they stop attackers sooner and faster, with more precision and
broader coverage than their peers. -Tim Wade, Deputy CTO
Autonomous AI Will Gain Momentum as AI Copilots Lose Steam
In 2025, the initial excitement surrounding
security copilots will begin to diminish as organizations weigh their costs
against the actual value delivered. With this, we'll see a shift in the
narrative toward more autonomous AI systems. Unlike AI copilots, these
autonomous solutions are designed to operate independently, requiring minimal
human intervention. Starting next year, marketing efforts will increasingly
highlight these autonomous AI models as the next frontier in cybersecurity,
touting their ability to detect, respond to, and even mitigate threats in
real time - all without human input. -Oliver
Tavakoli, CTO
Threat Actors Will Focus on AI Productivity Gains, But Malicious Agentic AI is Unlikely to be Seen in the Wild
In the near term, we will see attackers focus
on refining and optimizing their use of AI. This means using GenAI to
research targets and carry out spear phishing attacks at scale. Furthermore,
attackers, like everyone else, will increasingly use GenAI as a means of saving
time on their own tedious and repetitive actions. Rote tasks, from coding to
answering straightforward security questions, will be offloaded to LLMs
whenever possible.
But the really interesting developments will start
happening in the background, as threat actors begin experimenting with using
LLMs to deploy their own malicious AI agents capable of end-to-end
autonomous attacks. While threat actors are already in this experimental phase,
testing how far agents can carry an attack without requiring human
intervention, we are still a few years away from seeing such agents
reliably deployed and trusted to carry out actual attacks. While such a
capability would be hugely profitable in terms of the time and cost of attacking at
scale, autonomous agents of this sort are still too error-prone to trust on
their own. Nevertheless, we expect threat actors will eventually create GenAI
agents for various aspects of an attack - from research and reconnaissance,
to flagging and collecting sensitive data, to autonomously exfiltrating that data
without the need for human guidance. Once this happens, with no
malicious human on the other end to give themselves away, the industry will need
to transform how it spots the signs of an attack. -Sohrob
Kazerounian, Distinguished AI Researcher
Disillusionment Around AI's Promise in Cybersecurity Will Push Vendors to Focus on Demonstrating Value
In the coming year, we'll see the initial
excitement that surrounded AI's potential in cybersecurity start to give way
to a growing sense of disillusionment among security leaders. While AI
adoption is on the rise - 89% plan to use more AI tools in the coming year
- the optimism within the industry remains cautious. Many practitioners
worry that adding more AI tools could create more work, and as a result, vendors
will need to focus on demonstrating value and proving ROI. Vendors will no
longer be able to rely on generic promises of "AI-driven security" to make
sales. Instead, they will need to demonstrate tangible outcomes, such as
reduced time to detect threats, improved signal accuracy, or measurable
reductions in time spent chasing alerts and managing tools. Additionally,
vendors must be able to show how AI can both proactively and reactively bolster
an organization's resilience to cyberattacks - enabling security teams to better
anticipate, mitigate, and recover from attacks while proving their competence to
stakeholders. -Mark Wojtasiak, Vice
President of Research and Strategy
As we move into 2025, the role of AI in
cybersecurity will be both transformative and challenging, demanding a
recalibration of expectations and strategies across the industry. While
attackers continue to refine their AI-driven approaches, defenders must match
their sophistication with intentional and integrated AI applications that go
beyond surface-level automation. Success in this new era will rely on the
ability to prove ROI while balancing innovation with practicality, ensuring AI
serves as an enabler of resilience rather than just a buzzword.