By Brad Jones, Chief Information Security Officer, Snowflake
Heading into 2025, the cybersecurity landscape is set to grow more complex, with new challenges emerging as quickly as the technologies driving them. Brad Jones, CISO at Snowflake, shares his insights on how AI will shape security in the coming year.
Generative AI takes center stage as businesses’ personal security experts.
While there is a lot of talk about the potential security risks introduced by generative AI, and for good reason, there are real and beneficial applications already happening today that people neglect to mention. As AI tools become more versatile and more accurate, security assistants will become a significant part of the SOC, easing the perennial staffing shortage. One key benefit will be incident summarization: rather than an alert that forces analysts to comb through all the logs to connect the dots, they'll get a high-level summary that makes sense to a human and is actionable.
Of course, we must keep in mind that these opportunities exist within a very tight context and scope. We must ensure that these AI tools are trained on an organization's policies, standards, and certifications. Done appropriately, they can be highly effective at helping security teams with routine tasks. If organizations haven't taken note of this already, they'll hear it from their security teams soon enough as those teams look to alleviate workloads for understaffed departments.
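To make this concrete, here is a minimal sketch of what such an assistant could look like: raw alert lines go in, and a short, policy-aware incident summary comes out. It assumes an OpenAI-style chat completions API; the model name, the alert format, and the policy excerpt are illustrative placeholders, not any particular product's schema.

```python
# Minimal sketch of an SOC summarization assistant. The model name, alert
# lines, and policy excerpt are illustrative placeholders (assumptions).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY_EXCERPT = (
    "Per internal standard SEC-12 (hypothetical), any authentication anomaly "
    "involving a privileged account must be triaged within 30 minutes."
)

raw_alerts = [
    "2025-01-07T02:14:09Z auth failure user=svc-backup src=203.0.113.7",
    "2025-01-07T02:14:41Z auth failure user=svc-backup src=203.0.113.7",
    "2025-01-07T02:15:02Z auth success user=svc-backup src=203.0.113.7",
    "2025-01-07T02:16:30Z role grant user=svc-backup role=SYSADMIN",
]

def summarize_incident(alerts: list[str]) -> str:
    """Ask the model for a short, actionable summary grounded in policy."""
    prompt = (
        "You are a SOC assistant. Summarize the following alerts as one "
        "incident: what happened, why it matters, and the next action. "
        f"Relevant policy: {POLICY_EXCERPT}\n\nAlerts:\n" + "\n".join(alerts)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(summarize_incident(raw_alerts))
```

The point of grounding the prompt in the organization's own policy text is exactly the scoping described above: the assistant's output stays within the standards the security team already enforces.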
AI models themselves are the next focus of AI-centered attacks.
Last year, there was a lot of talk about cybersecurity attacks at the container layer, the less-secured developer playground. Now, attackers are moving up a layer to the machine learning infrastructure. I predict that we'll start seeing patterns like attackers injecting themselves into different parts of the pipeline so that AI models provide incorrect answers or, even worse, reveal the information and data on which they were trained. There are real concerns in cybersecurity around threat actors poisoning large language models with vulnerabilities that can later be exploited.
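One basic defense against this kind of pipeline tampering is artifact integrity checking: refuse to load any model file whose digest doesn't match the one recorded when it was vetted. Here is a minimal sketch under that assumption; the file paths and the pinned digest value are hypothetical.

```python
# Minimal sketch: verify a model artifact against a pinned SHA-256 digest
# before loading it, so a swapped or poisoned file fails closed.
# Paths and digest values below are hypothetical placeholders.
import hashlib
from pathlib import Path

# Digests recorded when each artifact was vetted (placeholder value).
PINNED_DIGESTS = {
    "models/classifier-v3.bin": "<sha256-of-vetted-artifact>",
}

def sha256_of(path: Path) -> str:
    """Stream the file in 1 MiB chunks and return its hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_model(path_str: str) -> bytes:
    """Load a model artifact only if its digest matches the pinned value."""
    path = Path(path_str)
    expected = PINNED_DIGESTS.get(path_str)
    if expected is None:
        raise RuntimeError(f"No pinned digest for {path_str}; refusing to load.")
    if sha256_of(path) != expected:
        raise RuntimeError(f"Digest mismatch for {path_str}; possible tampering.")
    return path.read_bytes()  # safe to hand off to the ML framework
```

This doesn't stop poisoning that happens upstream, in the training data itself, but it does close off the simpler attack of swapping an artifact somewhere in the pipeline.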
Although AI will bring new attack vectors and defensive techniques, the cybersecurity field will rise to the occasion, as it always does. Organizations must establish a rigorous, formal approach to how advanced AI is operationalized. The tech may be new, but the basic concerns of data loss, reputational risk, and legal liability are well understood, and the risks will be addressed.
Concerns about data exposure through AI are overblown.
People putting proprietary data into large language models to answer questions or help compose an email pose no greater risk than someone using Google or filling out a support form. From a data loss perspective, harnessing AI isn't necessarily a new and differentiated threat. At the end of the day, it's a risk created by human users taking data not meant for public consumption and putting it into public tools.
This doesn’t mean that organizations shouldn’t be concerned. It’s increasingly a shadow IT issue, and organizations will need to ratchet up monitoring for unapproved use of generative AI technology to protect against leakage.
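In practice, that monitoring can start as simply as scanning web proxy logs for traffic to known generative AI endpoints that haven't been sanctioned. Here is a minimal sketch; the log format (timestamp, user, URL), the domain list, and the approved-tool set are illustrative assumptions, not a complete inventory.

```python
# Minimal sketch of shadow-AI monitoring: flag proxy-log entries that hit
# known generative AI domains outside the approved set. The log format and
# the domain lists below are illustrative assumptions.
from urllib.parse import urlparse

# Small illustrative sample; a real program would use a maintained inventory.
GENAI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
}
APPROVED = {"claude.ai"}  # hypothetical: the one sanctioned tool

def flag_unapproved(log_lines):
    """Yield (user, domain) pairs for unapproved generative AI traffic."""
    for line in log_lines:
        timestamp, user, url = line.split(maxsplit=2)
        domain = urlparse(url).hostname or ""
        if domain in GENAI_DOMAINS and domain not in APPROVED:
            yield user, domain

sample = [
    "2025-01-07T09:12:44Z alice https://chat.openai.com/backend/conversation",
    "2025-01-07T09:13:01Z bob https://claude.ai/api/messages",
]
for user, domain in flag_unapproved(sample):
    print(f"ALERT: {user} used unapproved generative AI service {domain}")
```

A denylist like this will always lag the market, so it works best as a starting signal that routes users toward approved tools rather than as a hard block.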