Artificial intelligence (AI) is having a moment… a really long moment. It’s been evolving for decades, but now it’s everywhere all at once. AI-powered digital assistants like Siri and Alexa, as well as generative AI tools like ChatGPT, Gemini and Copilot, have put AI at everyone’s fingertips, including cybercriminals.
For non-digital natives, it’s 1991 all over again. That’s the year that the internet morphed from being a geeky military/academic domain to the World Wide Web. Then as now, companies are scrambling to learn how best to harness this powerful technology to improve every aspect of their operations.
In cybersecurity, the race is on to outsmart bad actors who are already using new forms of AI to find vulnerabilities more quickly and launch more effective attacks. The challenge – and opportunity – for chief information security officers (CISOs) and security operations center (SOC) analysts is to figure out how to use AI and machine learning (ML), a subset of AI, to automate and improve cyber defense processes.
To secure operational technology (OT) and Internet of Things (IoT) networks, the challenge is even greater and the stakes even higher. Here is a brief introduction to AI, its applications for cybersecurity and leading use cases for critical infrastructure and other industrial organizations.
Artificial Intelligence and Its Evolution
AI is a field of science and technology focused on the ability of machines to perform tasks that are typically associated with human intelligence, including learning, problem solving and decision making. Talk to a data scientist and they’ll tell you rudimentary AI has been around since the 1950s. More sophisticated AI became reality in the 1980s thanks to exponential leaps in computing power, and it received another boost over the last two decades thanks to cloud computing. These innovations enabled rapid advances in analytics and the ability to solve big-data problems.
According to the Defense Advanced Research Projects Agency (DARPA), since the 1950s there have been three historical waves of AI:
- Handcrafted knowledge (1950s – 1980s): Rules-based systems capable of implementing simple logical rules for well-defined problems but incapable of learning or dealing with uncertainty. Examples: Global positioning systems that can plan optimal routes, chess-playing computers.
- Statistical learning (1980s – 2010s): ML and neural networks, including deep learning models, that can learn and adapt to different situations when properly trained on large datasets. Examples: Facial and speech recognition programs, aerial drones.
- Contextual adaptation (2010s – present): Generative AI that can adapt to complex, real-world context, using large language models (LLMs) trained on vast datasets to create nuanced content, including images. Examples: ChatGPT, DALL-E.
The first wave of AI was largely academic. The second wave, statistical learning, was the game changer, and it is still widely used across all industries. Specifically, ML employs three groups of algorithms, trained on large datasets, to learn patterns and make predictions:
- Regression algorithms that help predict future events from past data
- Classification algorithms that help split data into known categories
- Clustering algorithms that discover new patterns in data without knowing categories
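The three families above can be illustrated with a minimal, pure-Python sketch on invented, security-flavored toy data (the data, thresholds and centroids here are all hypothetical, not taken from any real product or dataset):

```python
# 1. Regression: predict future events from past data.
#    Fit y = a*x + b by least squares to a trend of daily alert counts.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    return a, my - a * mx

days = [1, 2, 3, 4, 5]
alerts = [10, 12, 15, 16, 19]           # alerts per day, trending upward
slope, intercept = fit_line(days, alerts)
forecast_day6 = slope * 6 + intercept   # regression: extrapolate the trend

# 2. Classification: split data into known categories.
#    Nearest-centroid classifier for "benign" vs "malicious" packet sizes,
#    with centroids assumed to have been learned from labeled examples.
centroids = {"benign": 500.0, "malicious": 1400.0}

def classify(packet_size):
    return min(centroids, key=lambda label: abs(packet_size - centroids[label]))

# 3. Clustering: discover patterns without known categories.
#    One assignment step of 1-D k-means over unlabeled login hours.
logins = [8, 9, 9, 10, 23, 23, 2, 3]
k1, k2 = 9.0, 23.0                      # initial cluster centers
cluster1 = [t for t in logins if abs(t - k1) <= abs(t - k2)]
cluster2 = [t for t in logins if abs(t - k1) > abs(t - k2)]
```

Real deployments use far richer models, but the division of labor is the same: regression extrapolates, classification labels, clustering groups.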
Deep learning and LLMs are two examples of how these lower-level algorithms can be combined into higher-level analysis. For example, LLMs, a form of generative AI, learn the underlying statistical patterns in a dataset and then use a recurrent neural network (or, more recently, a transformer) to generate synthetic content. As impressive as these second-wave capabilities are at making sense of oceans of data, attention has turned to the third wave and how to harness its power.
AI’s Impact on Cybersecurity
As sci-fi creators have long foretold, superhuman intelligence can be used for good or evil. In the cyber realm, AI is being leveraged by both attackers and defenders to do what they’ve been doing, only better. Here are three ways AI is being deployed.
AI-Assisted Cyberattacks
Bad actors have quickly seized on AI and ML to make their attacks faster, more accurate and less detectable. They use it to find and exploit vulnerabilities more easily than ever, as well as generate malware, craft phishing emails and create deepfakes.
Threats That Target AI Systems
As organizations increasingly adopt AI to power already-automated processes, the vulnerability of AI systems themselves is an emerging concern. Devious tactics such as data poisoning, LLM prompt injection and ML model evasion pose a formidable challenge as threat actors learn to abuse technology designed to enhance efficiency and innovation.
To use an OT/IoT example, consider what would happen if an AI-driven predictive maintenance system were manipulated in a data poisoning attack (AI-assisted or not). Adversaries might change sensor readings or introduce deceptive maintenance logs into the data. By feeding the system false information, attackers could mislead the AI model into making inaccurate predictions about equipment health and maintenance needs, which may lead to breakdowns, increased downtime and potential safety risks.
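A toy sketch makes the mechanism concrete. Here a naive "predictive maintenance" model flags a bearing when its vibration reading exceeds 150% of the historical mean; all readings, units and thresholds are invented for illustration:

```python
def build_threshold(history):
    """Learn a simple alert threshold from historical vibration data."""
    mean = sum(history) / len(history)
    return mean * 1.5          # alert when vibration > 150% of baseline

def needs_maintenance(reading, threshold):
    return reading > threshold

clean_history = [1.0, 1.1, 0.9, 1.0, 1.0]   # normal vibration (mm/s)
degraded_reading = 1.8                       # a genuinely failing bearing

# With clean training data, the model catches the degradation...
assert needs_maintenance(degraded_reading, build_threshold(clean_history))

# ...but an attacker who injects inflated readings into the training
# data raises the learned baseline, and the same failure goes unflagged.
poisoned_history = clean_history + [2.5, 2.6, 2.4]
assert not needs_maintenance(degraded_reading, build_threshold(poisoned_history))
```

The model itself is unchanged in both runs; poisoning the training data alone is enough to silence the alert.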
A great resource for understanding how attackers exploit the vulnerabilities of AI systems is MITRE ATLAS™ (Adversarial Threat Landscape for AI Systems). This complementary framework to MITRE ATT&CK® focuses on real-world tactics and techniques that adversaries use to target AI systems. In November 2023 it was updated to address vulnerabilities in systems that incorporate generative AI and LLMs.
The vulnerability of AI systems raises another déjà vu challenge. OT has long been labeled “insecure by design,” an intractable flaw that has proven hard to overcome. Can the same be said about AI? Are we building security into AI systems now, or setting ourselves up for a similar retrofitting nightmare?
AI-Assisted Cyber Defense
On the defenders’ side, the need to analyze and correlate vast amounts of data from dozens of sources presents a prime use case for AI and ML. Indeed, by now most cybersecurity vendors have incorporated AI into their products to various degrees. Today, you can assume that ML and behavioral analytics are at work throughout your cybersecurity stack to improve the speed and accuracy of every process, including:
- Big data analysis and correlation
- Threat detection (anomalies, attacks, malware)
- Vulnerability identification and prioritization
- Risk scoring and prioritization
- Incident response automation
When evaluating cybersecurity vendors and their AI capabilities, it’s important to understand exactly what you’re getting. The question isn’t whether they’re incorporating AI/ML, but how. Questions to ask include:
- What algorithms are in use and where? (They may consider that information proprietary but should at least be willing to share whether and how they employ regression, classification, clustering and generative algorithms.)
- How will the AI help my team with day-to-day security tasks (monitoring, correlation, detection, analysis)?
- Where do emerging AI capabilities fit into your roadmap?
AI for OT Cybersecurity
AI-assisted OT cybersecurity requires even greater capabilities because, as we know, there’s more to protect and the stakes of an attack are often higher. You have control systems and physical processes, with configurable process variables, all potentially exploitable. Organizations that manage critical infrastructure and other industrial environments must enlist AI to tackle every stage of the cybersecurity lifecycle — identify, protect, detect, respond and recover — but with extra functionality to secure OT/IoT networks. Prime use cases include:
- Using ML to learn the behavior of process variables collected from network traffic and highlighting anomalies from the baseline time series
- Predicting and acting on bandwidth anomalies relative to each sensor’s baseline network activity
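The first use case above can be sketched in a few lines. This is an illustrative baseline-and-deviation approach (a simple z-score test on invented process-variable readings), not any vendor’s actual algorithm:

```python
import statistics

def learn_baseline(samples):
    """Learn a process variable's normal behavior from observed traffic."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, z_threshold=3.0):
    """Flag values more than z_threshold standard deviations from baseline."""
    return abs(value - mean) > z_threshold * stdev

# Hypothetical tank-pressure readings (bar) captured during a learning period.
pressure_history = [4.9, 5.0, 5.1, 5.0, 4.8, 5.2, 5.0, 5.1]
mean, stdev = learn_baseline(pressure_history)

is_anomalous(5.1, mean, stdev)    # within baseline -> False
is_anomalous(7.5, mean, stdev)    # far outside baseline -> True
```

Production systems layer far more sophisticated time-series models on top, but the core idea is the same: learn “normal” passively, then alert on deviation.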
At Nozomi Networks, using AI to tackle the toughest OT/IoT security challenges is in our DNA. Co-founders Andrea Carcano and Moreno Carullo, today chief product officer and chief technical officer, respectively, are internationally recognized experts not only in AI but in industrial network security and systems engineering. We introduced the industry’s first AI-powered visibility and cybersecurity solution for industrial control systems (ICS) in 2013, and we’ve been building AI into our platform ever since. Two shining examples of this innovation are Nozomi Guardian™ and Nozomi Vantage IQ™.
AI-Powered Anomaly and Threat Detection
Introduced in 2013, Nozomi Guardian uses AI algorithms to rapidly analyze the huge volumes of network communication and process variable data that are extremely difficult to evaluate any other way. This entails using adaptive learning to establish a network’s “normal” behavior and flag traffic patterns beyond set thresholds. The AI-driven analysis is then used to model each ICS in the environment and develop process and security profiles specific to it.
Once baselines are established, high-speed behavioral analytics are used to continuously monitor them. The result is rapid detection of anomalies, including zero-day attacks and critical process variable irregularities, before they can cause significant damage.
AI-Based Query and Analysis
In 2023 we introduced Nozomi Vantage IQ, the first AI-based analysis engine built specifically for OT environments. It uses AI to replicate learned experiences of seasoned security analysts and automate tedious tasks such as reviewing, correlating and prioritizing mountains of network, asset and alert data. For example, it uses deep neural networks to identify network activity patterns and predict and alert on abnormal bandwidth from any sensor’s baseline.
Co-Existing in an AI World
AI and ML have clear benefits for cybersecurity teams, helping them do astoundingly more with fewer resources. That’s a good thing, considering the cyber talent shortage, especially in specialized fields like OT and IoT. But adversaries are also reaping the benefits. As AI science and technology continue to evolve, so will cyberattack methods. It’s hard to say who’s winning the battle on any given day, but organizations need to know their adversaries’ capabilities and try to exceed them if they want to prevail in an AI-powered world.