Beyond the hype: The business reality of AI for cybersecurity

By Sally Adam | Tue, 28 Jan 2025

AI is firmly embedded in cybersecurity. Attend any cybersecurity conference, event, or trade show and AI is invariably the single biggest capability focus. Cybersecurity providers from across the spectrum make a point of highlighting that their products and services include AI. Ultimately, the cybersecurity industry is sending a clear message that AI is an integral part of any effective cyber defense.

With this level of AI universality, it’s easy to assume that AI is always the answer, and that it always delivers better cybersecurity outcomes. The reality, of course, is not so clear cut.

This report explores the use of AI in cybersecurity, with particular focus on generative AI. It provides insights into AI adoption, desired benefits, and levels of risk awareness based on findings from a vendor-agnostic survey of 400 IT and cybersecurity leaders working in small and mid-sized organizations (50-3,000 employees). It also reveals a major blind spot when it comes to the use of AI in cyber defenses.

The survey findings offer a real-world benchmark for organizations reviewing their own cyber defense strategies. They also provide a timely reminder of the risks associated with AI to help organizations take advantage of AI safely and securely to enhance their cybersecurity posture.

AI terminology

AI is a broad term covering a range of capabilities that can support and accelerate cybersecurity in many ways. Two common AI approaches used in cybersecurity are deep learning models and generative AI.

  • Deep learning (DL) models APPLY learnings to perform tasks. For example, appropriately trained DL models can identify whether a file is malicious or benign in a fraction of a second without ever having seen that file before.
  • Generative AI (GenAI) models assimilate inputs and use them to CREATE (generate) new content. For example, to accelerate security operations, GenAI can create a natural language summary of threat activity to date and recommend next steps for the analyst to take (see the sketch after this list).
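
To make the GenAI pattern concrete, here is a minimal sketch of using an LLM to summarize threat activity for an analyst. It assumes the openai Python package (v1+) and an OPENAI_API_KEY environment variable; the detections and domain are invented for illustration, and any LLM backend could stand in.

```python
# A minimal sketch of the GenAI pattern described above. The openai
# package is used as an example backend (an assumption; any LLM would do).
from openai import OpenAI

# Illustrative, invented detections; real input would come from your
# security telemetry.
detections = [
    "2025-01-27 14:02 powershell.exe spawned by winword.exe on HR-LAPTOP-12",
    "2025-01-27 14:03 outbound connection to rare domain upd4te-check.ru",
    "2025-01-27 14:05 credential dump attempt blocked on HR-LAPTOP-12",
]

client = OpenAI()  # expects OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Summarize this threat activity for a SOC analyst and "
                   "recommend next steps:\n" + "\n".join(detections),
    }],
)
print(response.choices[0].message.content)
```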

AI is not “one size fits all” and models vary greatly in size.

  • Massive models, such as Microsoft Copilot and Google Gemini, are large language models (LLMs) trained on very extensive data sets, enabling them to perform a wide range of tasks.
  • Small models are typically designed and trained on a very specific data set to perform a single task, such as detecting malicious URLs or executables (a minimal sketch follows this list).
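
As an illustration of how small and task-specific such a model can be, the following runnable sketch trains a character n-gram classifier to score URLs. The four training URLs and labels are toy data; a production model would be trained on millions of labeled samples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data for illustration only.
urls = [
    "http://paypa1-login.example-verify.ru/update",   # malicious
    "http://secure-bank.account-reset.cn/confirm",    # malicious
    "https://www.wikipedia.org/wiki/Main_Page",       # benign
    "https://github.com/sophos",                      # benign
]
labels = [1, 1, 0, 0]  # 1 = malicious, 0 = benign

# Character 3-5-grams capture suspicious substrings like "paypa1".
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(urls, labels)

score = model.predict_proba(["http://account-verify.paypa1.ru/login"])[0, 1]
print(f"Probability malicious: {score:.2f}")
```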


AI adoption for cybersecurity

The survey reveals that AI is already widely embedded in the cybersecurity infrastructure of most organizations, with 98% saying they use it in some capacity:

Does your organization currently use AI technologies as part of your cyber defenses? (n=400)

AI adoption is likely to become near universal within a short time frame, with AI capabilities now on the requirements list of 99% (with rounding) of organizations when selecting a cybersecurity platform:

How important are AI capabilities when selecting a cybersecurity platform? (n=400)

With this level of adoption and future usage, understanding the risks and associated mitigations for AI in cybersecurity is a priority for organizations of all sizes and business focus.

GenAI expectations

The saturation of GenAI messaging across both cybersecurity and people’s broader business and personal lives has resulted in high expectations for how this technology can enhance cybersecurity outcomes. The survey revealed the benefits organizations most want GenAI capabilities in cybersecurity tools to deliver, as shown below.

Top desired benefit from GenAI in cybersecurity tools
What benefits, if any, do you want generative AI capabilities in cybersecurity tools to deliver? Responses ranked first. (n=400)

The broad spread of responses reveals that there is no single, standout desired benefit from GenAI in cybersecurity. At the same time, the most common desired gains relate to improved cyber protection or business performance (both financial and operational). The data also suggests that the inclusion of GenAI capabilities in cybersecurity solutions delivers peace of mind and confidence that an organization is keeping up with the latest protection capabilities.

The positioning of reduced employee burnout at the bottom of the ranking suggests that organizations are less aware of or less concerned about the potential for GenAI to support users. With cybersecurity staff in short supply, reducing attrition is an important area for focus and one where AI can help.

Desired GenAI benefits change with organization size

The #1 desired benefit from GenAI in cybersecurity tools varies as organizations increase in size, likely reflecting their differing challenges.

What benefits, if any, do you want generative AI capabilities in cybersecurity tools to deliver? Responses ranked first. (n=400)

Although reducing employee burnout ranked lowest overall, it was the top desired gain for small businesses with 50-99 employees. This may be because employee absence disproportionately affects smaller organizations, which are less likely to have other staff who can step in and cover.

Conversely, highlighting their need for tight financial rigor, organizations with 100-249 employees prioritize improved return on cybersecurity spend. Larger organizations with 1,000-3,000 employees most value improved protection from cyberthreats.

AI risk awareness

While AI brings many advantages, like all technological capabilities, it also introduces a number of risks. The survey revealed varying levels of awareness of these potential pitfalls.

Defense risk: Poor quality and poorly implemented AI

With improved protection from cyber threats jointly at the top of the list of desired benefits from GenAI, it’s clear that reducing cybersecurity risk is a strong factor behind the adoption of AI-powered defense solutions.

However, poor quality and poorly implemented AI models can inadvertently introduce considerable cybersecurity risk of their own, and the adage “garbage in, garbage out” is particularly relevant to AI. Building effective AI models for cybersecurity requires extensive understanding of both threats and AI.

Organizations are largely alert to the risk of poorly developed and deployed AI in cybersecurity solutions. The vast majority (89%) of IT/cybersecurity professionals surveyed say they are concerned about the potential for flaws in cybersecurity tools’ generative AI capabilities to harm their organization, with 43% saying they are extremely concerned and 46% somewhat concerned.

Percentage concerned about GenAI in security products causing harm
Focusing on the use of AI in cybersecurity solutions, to what extent are you concerned about the potential for flaws in the Generative AI capabilities in cybersecurity tools to harm your organization? (n=400)

It is therefore unsurprising that 99% (with rounding) of organizations say that when evaluating the GenAI capabilities in cybersecurity solutions, they assess the caliber of the cybersecurity processes and controls used in the development of the GenAI: 73% say they fully assess these processes and controls, and 27% say they partially assess them.

Percentage that assess the caliber of GenAI in tools
When evaluating the Generative AI capabilities in cybersecurity solutions, does your organization assess the caliber of the cybersecurity processes and controls used in the development of the Generative AI? (n=390)

While the high percentage that report conducting a full assessment may initially appear encouraging, in reality it suggests that many organizations have a major blind spot in this area.

Assessing the processes and controls used to develop GenAI capabilities requires transparency from the vendor and a reasonable degree of AI knowledge on the part of the assessor. Unfortunately, both are in short supply. Solution providers rarely make their full GenAI development and rollout processes easily available, and IT teams often have limited insight into AI development best practices. For many organizations, this finding suggests that they “don’t know what they don’t know”.

Financial risk: Poor return on investment

As previously seen, improved return on cybersecurity spend (ROI) also tops the list of benefits organizations are looking to achieve through GenAI.

High-caliber GenAI capabilities in cybersecurity solutions are expensive to develop and maintain. IT and cybersecurity leaders across businesses of all sizes are alert to the consequences of this development expenditure, with 80% saying that they think GenAI will significantly increase the cost of their cybersecurity products.

Despite these expectations of price increases, most organizations see GenAI as a path to lowering their overall cybersecurity expenditure, with 87% of respondents saying they are confident that the costs of GenAI in cybersecurity tools will be fully offset by the savings it delivers.

Diving deeper, we see that confidence in achieving a positive return on investment increases with annual revenue: the largest organizations ($500M+) are 48% more likely than the smallest (less than $10M) to agree or strongly agree that the costs of GenAI in cybersecurity tools will be fully offset by the savings it delivers.

Percentage thinking savings will offset GenAI costs, split by revenue
Thinking about the cost of Generative AI capabilities, to what extent do you agree or disagree with the following statements within your organization: The costs of Generative AI in cybersecurity tools will be fully offset by the savings it delivers. Strongly agree, Agree. (n=400)

At the same time, organizations recognize that quantifying these costs is a challenge. GenAI expenses are typically built into the overall price of cybersecurity products and services, making it hard to identify how much organizations are spending on GenAI for cybersecurity. Reflecting this lack of visibility, 75% agree that these costs are hard to measure (39% strongly agree, 36% somewhat agree).

Broadly speaking, challenges in quantifying the costs also increase with revenue: organizations with $500M+ annual revenue are 40% more likely to find the costs difficult to quantify than those with less than $10M in revenue. This variation is likely due in part to the propensity for larger organizations to have more complex and extensive IT and cybersecurity infrastructures.

Percentage challenged to measure costs of GenAI, split by revenue
Thinking about the cost of Generative AI capabilities, to what extent do you agree or disagree with the following statements within your organization: The costs of the Generative AI capabilities available in cybersecurity products are hard to measure. Strongly agree, Agree. (n=400)

Without effective reporting, organizations risk not seeing the desired return on their investments in AI for cybersecurity or, worse, directing investments into AI that could have been more effectively spent elsewhere.

Operational risk: Over-reliance on AI

The pervasive nature of AI makes it easy to default too readily to AI, assume it is always correct, and take for granted that AI can do certain tasks better than people. Fortunately, most organizations are aware of and concerned about the cybersecurity consequences of over-reliance on AI:

  • 84% are concerned about resulting pressure to reduce cybersecurity professional headcount (42% extremely concerned, 41% somewhat concerned)
  • 87% are concerned about a resulting lack of cybersecurity accountability (37% extremely concerned, 50% somewhat concerned)

These concerns are broadly felt, with consistently high percentages reported by respondents across all size segments and industry sectors.

Recommendations

While AI brings risks, a thoughtful approach enables organizations to navigate them and take advantage of AI safely and securely to enhance their cyber defenses and overall business outcomes.

The following recommendations provide a starting point to help organizations mitigate the risks explored in this report.

Ask vendors how they develop their AI capabilities

  • Training data. What is the quality, quantity, and source of data on which the models are trained? Better inputs lead to better outputs.
  • Development team. Find out about the people behind the models. What level of AI expertise do they have? How well do they know threats, adversary behaviors, and security operations?
  • Product engineering and rollout process. What steps does the vendor go through when developing and deploying AI capabilities in their solutions? What checks and controls are in place?

Apply business rigor to AI investment decisions

  • Set goals. Be clear, specific, and granular about the outcomes you want AI to deliver.
  • Quantify benefits. Understand how much of a difference AI investments will make.
  • Prioritize investments. AI can help in many ways; some will have a greater impact than others. Identify the important metrics for your organization – financial savings, staff attrition impact, exposure reduction, etc. – and compare how the different options rank (see the worked example after this list).
  • Measure impact. Be sure to see how actual performance relates to initial expectations. Use the insights to make any adjustments that are needed.
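
As a simple illustration of the quantify-prioritize-measure steps above, the sketch below compares AI investment options by annual ROI. All figures and option names are hypothetical placeholders to be replaced with your own estimates.

```python
# All figures are hypothetical placeholders; substitute your own estimates.
options = {
    "GenAI case summaries": {"annual_cost": 40_000, "annual_benefit": 65_000},
    "AI email triage":      {"annual_cost": 25_000, "annual_benefit": 30_000},
}

# Rank options by net annual benefit, largest first.
for name, o in sorted(
    options.items(),
    key=lambda kv: kv[1]["annual_benefit"] - kv[1]["annual_cost"],
    reverse=True,
):
    net = o["annual_benefit"] - o["annual_cost"]
    roi = net / o["annual_cost"]
    print(f"{name}: net ${net:,}/year, ROI {roi:.0%}")
```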

View AI through a human-first lens

  • Maintain perspective. AI is just one item in the cyber defense toolkit. Use it, but make clear that cybersecurity accountability is ultimately a human responsibility.
  • Don’t replace, accelerate. Focus on how AI can support your staff by taking care of many low-level, repetitive security operations tasks and providing guided insights.

About the survey

Sophos commissioned independent research specialist Vanson Bourne to survey 400 IT security decision makers in organizations with between 50 and 3,000 employees during November 2024. All respondents worked in the private or charity/not-for-profit sector and currently use endpoint security solutions from 19 separate vendors and 14 MDR providers.

Sophos’ AI-powered cyber defenses

Sophos has been pushing the boundaries of AI-driven cybersecurity for nearly a decade. AI technologies and human cybersecurity expertise work together to stop the broadest range of threats, wherever they run. AI capabilities are embedded across Sophos products and services and delivered through the largest AI-native platform in the industry. To learn more about Sophos’ AI-powered cyber defenses visit www.sophos.com/ai
