Context Before Controls: Rethinking Utility Cybersecurity In The AI Era

By John Apostolo, HEXstream full-stack data analyst

Utilities no longer sit at the edges of the cyber-threat landscape—they’re right in the middle of it. In the past few years, incident reporting has shown steep, sometimes 80% year-over-year increases in ransomware activity against energy and utility providers.

Phishing remains one of the main ways those intrusions begin, especially when attackers are abusing remote access, supplier portals, or user endpoints to gain an initial foothold. A significant share of these campaigns trace back to a mix of financially motivated groups and state-aligned actors that see utilities as high-leverage targets: one compromise can ripple across an entire region, not just a single company. 

At the same time, AI has supercharged social engineering. Phishing campaigns written or assisted by large language models can now be generated at scale, tailored to specific roles or organizations, and tuned for tone and timing. Experiments with human subjects have found that these AI-crafted messages can drive click-through and credential-harvesting rates several times higher than generic phishing and often rival hand-crafted lures from experienced attackers.

Deepfake-enabled fraud has surged as well: the number of detected deepfake incidents has multiplied since 2022, and high-quality voice or video spoofs routinely slip past human judgment in controlled studies. In practical terms, more organizations are now seeing convincing “voices” or “faces” asking for money, data or access. 

In this reality, many security leaders in the utility sector have adopted an “assume compromise” mindset. Rather than betting on keeping every attacker out, they plan on the basis that motivated adversaries can and eventually will find a way in. The focus shifts toward rapid detection, containment, recovery and keeping critical operations resilient—even when perimeter controls fail. 

So where do you start? You start with context. Before thinking about tools or AI, you need a clear picture of what actually matters in your environment: 

  • Which applications and services keep the business and grid operations running?
  • Which accounts, identities, and credentials exist, and which ones are truly privileged?
  • Where do critical communications happen: email, collaboration tools, customer and vendor portals, control consoles, and external channels?
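You don't need heavy tooling to start answering these questions. As a rough sketch of the account question above, the snippet below reads an exported account list and flags anything that belongs to an administrative group. The file name, column layout, and group names are assumptions for illustration, not a reference to any particular directory product.

```python
import csv

# Groups we treat as privileged for this illustration; adjust to your environment.
PRIVILEGED_GROUPS = {"Domain Admins", "OT Operators", "Firewall Admins"}

def flag_privileged_accounts(path: str) -> list[dict]:
    """Read an account export (columns assumed: account, groups, last_login)
    and return the rows that belong to at least one privileged group."""
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            groups = {g.strip() for g in row.get("groups", "").split(";") if g.strip()}
            if groups & PRIVILEGED_GROUPS:
                flagged.append({
                    "account": row["account"],
                    "privileged_groups": sorted(groups & PRIVILEGED_GROUPS),
                    "last_login": row.get("last_login", "unknown"),
                })
    return flagged

if __name__ == "__main__":
    for entry in flag_privileged_accounts("accounts_export.csv"):
        print(entry)
```

Even a crude list like this makes the next questions concrete: which of these accounts still need that access, and which should be scaled back.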

In parallel, you need an honest view of everyday behavior—your own and your team’s: 

  • How links and attachments are handled in practice.
  • How often passwords or patterns are reused across systems. 
  • Whether unusual MFA prompts are challenged when they feel “off.” 
  • How unexpected requests for money, data or access are verified, if at all.

Once that ground truth is in place, AI becomes useful as a coach rather than a black box. A well-tuned model can help you: 

  • Spot weak authentication flows and suggest stronger approval paths. 
  • Highlight systems lagging on patches and recommend realistic upgrade priorities. 
  • Propose encryption, segmentation and backup strategies that fit the way you actually work. 
  • Walk you through why a particular email, portal message, or login attempt has the hallmarks of phishing or impersonation. 

Instead of replacing your judgment, AI sharpens it—turning ad-hoc instincts (“this feels wrong”) into patterns you can explain, teach and repeat. 
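To make the last point in that list concrete, here is a minimal sketch of asking a chat model to explain, in plain language, what hallmarks of phishing a suspicious message shows. It assumes an OpenAI-compatible API with a key in the environment; the model name and prompt wording are illustrative, and the output is advice for a human reviewer, not an automated verdict.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "You are helping a utility security analyst. Explain which hallmarks of "
    "phishing or impersonation, if any, the following message shows, and what "
    "the analyst should verify before acting on it.\n\n{message}"
)

def explain_suspicious_message(message: str) -> str:
    """Return a plain-language explanation of why a message may be phishing."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": PROMPT.format(message=message)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample = (
        "From: 'CEO' <ceo@exarnple-utility.com>\n"
        "Subject: Urgent wire transfer before 3pm\n"
        "Please process the attached payment details immediately."
    )
    print(explain_suspicious_message(sample))
```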

From there, the key question is how much autonomy you are comfortable giving your security tooling. You can think of it as four levels: 

Level 1 – Manual tools 
Humans plan and execute every step. Analysts choose what to scan, which logs to pull, what commands to run, and how to respond. Tools can be powerful, but they do exactly what an operator types and nothing more. 

Level 2 – LLM-assisted operations 
The model helps you think, but you still act. It drafts playbooks, suggests queries and commands, reviews configurations for obvious issues, summarizes alerts, and explains what it finds in plain language. Every change to a live system, however, is still made by a human. 
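The defining trait at this level is that the model's output stays advisory. A minimal sketch of that boundary, again assuming an OpenAI-compatible client: the model drafts a log query from a plain-English question, and the script only prints the suggestion for the analyst to review and run by hand.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_log_query(question: str) -> str:
    """Ask the model to draft a log search query; the result is only printed,
    never executed, so the human stays in the loop."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{
            "role": "user",
            "content": ("Draft a log search query that answers the question below, "
                        "with a one-line explanation of what it does:\n" + question),
        }],
    )
    return response.choices[0].message.content

# The analyst reviews the suggestion and decides whether and where to run it.
print(draft_log_query("Which service accounts logged in from new IP ranges this week?"))
```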

Level 3 – Semi-automated agents 
Agents can both plan and carry out routine, clearly bounded actions: running scheduled scans, enriching alerts with threat intel, pulling data from EDR/XDR, and filtering out obvious noise. They may auto-close low-risk tickets under well-defined rules, but humans retain final authority over anything that could affect operations, safety or customers. 
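One way to keep those boundaries explicit is to encode the auto-close rules as plain, reviewable logic rather than burying them inside the agent. A minimal sketch, with made-up alert fields and thresholds chosen purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    severity: str           # e.g. "low", "medium", "high"
    asset_criticality: str  # e.g. "it-workstation", "ot-controller"
    known_benign: bool      # matched an allow-listed pattern during enrichment

def may_auto_close(alert: Alert) -> bool:
    """Return True only for alerts the agent is allowed to close on its own.
    Anything touching operational assets or above low severity goes to a human."""
    return (
        alert.severity == "low"
        and alert.asset_criticality == "it-workstation"
        and alert.known_benign
    )

# A low-severity, known-benign alert on an office workstation can be closed
# automatically; everything else stays in the analyst queue.
print(may_auto_close(Alert("low", "it-workstation", True)))  # True
print(may_auto_close(Alert("low", "ot-controller", True)))   # False
```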

Level 4 – Cybersecurity AI (CAI) agents 
At this stage, agents can run end-to-end workflows in well-scoped domains: planning and executing tests, orchestrating scanning and validation, correlating findings, and proposing mitigations. Where explicitly allowed, they can also enact certain changes—like tightening a firewall rule or isolating a non-critical host—with humans supervising, approving high-impact actions, and able to overrule or roll back decisions. 
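The crucial property at this level is the approval gate: the agent acts on its own only for pre-approved actions against non-critical targets, and everything else waits for a human. A minimal sketch of that gate, with hypothetical helpers (request_approval, isolate_host) standing in for whatever your ticketing and EDR integrations actually expose:

```python
# Actions the agent may take on its own, but only against non-critical targets.
AUTONOMOUS_OK = {"isolate-host", "tighten-firewall-rule"}

def request_approval(action: str, target: str) -> bool:
    """Placeholder: in practice this would open a ticket or page an on-call
    analyst and block until they approve or reject the action."""
    answer = input(f"Approve {action} on {target}? [y/N] ")
    return answer.strip().lower() == "y"

def isolate_host(hostname: str) -> None:
    """Placeholder for a real EDR isolation call."""
    print(f"(pretend) {hostname} isolated from the network")

def act(action: str, target: str, target_is_critical: bool) -> None:
    """Carry out an agent-proposed action, enforcing human approval where required."""
    needs_approval = target_is_critical or action not in AUTONOMOUS_OK
    if needs_approval and not request_approval(action, target):
        print(f"{action} on {target} rejected; logging and stopping.")
        return
    if action == "isolate-host":
        isolate_host(target)

# A non-critical workstation can be isolated directly; a critical host always
# requires explicit sign-off first.
act("isolate-host", "ws-0042", target_is_critical=False)
act("isolate-host", "scada-gw-01", target_is_critical=True)
```

Whatever form the gate takes in your tooling, the point is the same: the agent proposes, the rules decide what it may do alone, and a person signs off on anything that could touch operations.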

You don’t have to jump to Level 4 overnight—and in a critical infrastructure environment, you shouldn’t. The practical path is to move up one step at a time: first use AI to understand your context better, then to assist decisions, and only then to automate the safest and most repetitive parts of the work under tight guardrails. 

Across all four levels, the constant should be the same: your context, your critical assets, and your human judgment stay at the center. The technology—whether it’s a simple script, an LLM assistant, or a CAI agent—is there to extend your reach, not to replace it. 

