Hello All. Most OT deception projects fall into one of two categories: too simple to fool anyone, or too static to provide meaningful insight.

So, I decided to build something different. Today, I’m announcing AdaptiveGrid — an open-source, adaptive OT honeypot designed to simulate real industrial environments and capture high-fidelity attacker behavior.

Why AdaptiveGrid?

Traditional OT honeypots tend to rely on:

  • Static banners
  • Basic protocol responses
  • Limited interaction depth

The problem? Modern attackers—and even basic tooling—can quickly identify these as fake.

AdaptiveGrid is designed not just to look like an OT environment, but to behave like one.

What Makes It “Adaptive”?

At the core of AdaptiveGrid is the idea that deception should evolve during the interaction.

Instead of a fixed response model, the platform:

  • Tracks attacker behavior over time
  • Groups activity into per-attacker cases
  • Scores actions based on intent (e.g., scanning vs. exploitation)
  • Adjusts logging, alerting, and response depth dynamically

This means a simple port scan is treated very differently from:

  • Repeated authentication attempts
  • Protocol-specific manipulation
  • Attempts to access engineering interfaces or project files
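To make that distinction concrete, here is a minimal sketch of intent-based scoring in Python. The weights, thresholds, and response-depth names are illustrative assumptions, not AdaptiveGrid's actual model:

```python
# Hypothetical intent weights per observed action (illustrative values only)
INTENT_WEIGHTS = {
    "port_scan": 1,
    "auth_attempt": 3,
    "protocol_manipulation": 7,
    "engineering_access": 10,
}

def score_case(actions):
    """Sum intent weights for a case and pick a response depth."""
    score = sum(INTENT_WEIGHTS.get(a, 0) for a in actions)
    if score < 5:
        depth = "passive-log"       # a lone scan: log quietly
    elif score < 15:
        depth = "full-session"      # repeated auth attempts: capture everything
    else:
        depth = "alert-and-engage"  # exploitation behavior: alert and deepen deception
    return score, depth

# A scan followed by two authentication attempts escalates the response depth
score, depth = score_case(["port_scan", "auth_attempt", "auth_attempt"])
```

The key idea is that the same event stream drives both the score and the depth of deception, so logging and alerting tighten as attacker intent sharpens.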

Key Features

AdaptiveGrid is designed for both real-world detection and research/demo environments:

High-Interaction OT Emulation

  • EtherNet/IP (CIP), Modbus, OPC UA simulation
  • Emulated controller identities (e.g., PLC-style responses)
  • Realistic engineering workstation behavior
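As a flavor of what protocol-level emulation involves, here is a sketch of a Modbus/TCP responder for function 0x03 (Read Holding Registers). The register values are made up; a real honeypot would serve process-realistic, slowly drifting values:

```python
import struct

# Fake holding registers; values are illustrative (e.g. RPM, temp, run flag)
REGISTERS = [0] * 100
REGISTERS[0:3] = [1200, 55, 1]

def handle_modbus_request(request: bytes) -> bytes:
    """Parse an MBAP-framed read request and build the response frame."""
    tid, pid, length, unit, func = struct.unpack(">HHHBB", request[:8])
    if func != 0x03:
        # Exception response: illegal function (0x80 | func, code 0x01)
        pdu = struct.pack(">BB", func | 0x80, 0x01)
    else:
        addr, count = struct.unpack(">HH", request[8:12])
        values = REGISTERS[addr:addr + count]
        pdu = struct.pack(">BB", func, count * 2)
        pdu += b"".join(struct.pack(">H", v) for v in values)
    mbap = struct.pack(">HHHB", tid, pid, len(pdu) + 1, unit)
    return mbap + pdu

# Read 2 registers starting at address 0 (transaction id 1, unit 1)
req = struct.pack(">HHHBB", 1, 0, 6, 1, 0x03) + struct.pack(">HH", 0, 2)
resp = handle_modbus_request(req)
```

Answering with well-formed frames and plausible process values is what separates a believable emulation from a static banner that scanners fingerprint instantly.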

Enterprise + OT Hybrid Lures

  • Fake engineering portals (HTTP/HTTPS)
  • SMB shares for project file access attempts
  • Authentication traps (including credential capture)

Advanced Telemetry & Logging

  • Structured timeline.jsonl with one JSON record per event
  • Full session logging and artifact capture
  • Credential hashing and storage for analysis
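The telemetry format can be sketched as follows. Field names and the salting scheme are assumptions for illustration; the point is that each event becomes one JSON line and captured credentials are stored only as salted hashes, never as plaintext:

```python
import hashlib
import io
import json
import time

SALT = b"adaptivegrid-demo"  # in practice: a random per-deployment salt

def hash_credential(password: str) -> str:
    """Store only a salted hash of any captured credential."""
    return hashlib.sha256(SALT + password.encode()).hexdigest()

def append_event(fp, case_id, event_type, **fields):
    """Append one structured event as a JSON line to timeline.jsonl."""
    event = {"ts": time.time(), "case": case_id, "type": event_type, **fields}
    fp.write(json.dumps(event) + "\n")
    return event

buf = io.StringIO()  # stands in for an open timeline.jsonl file
evt = append_event(buf, "case-001", "auth_attempt",
                   username="admin",
                   password_hash=hash_credential("Winter2024!"))
```

JSON Lines keeps the file append-only and trivially parseable by downstream tooling, which matters when a session generates thousands of events.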

ATT&CK-Aligned Detection

  • Automatic mapping to MITRE ATT&CK techniques
  • Clear linkage between behavior and adversary tactics
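Conceptually, the mapping can be as simple as a behavior-to-technique lookup. The subset below uses real MITRE ATT&CK for ICS technique IDs, but the behavior labels and the scope of the mapping are illustrative assumptions:

```python
# Illustrative subset mapping observed behaviors to ATT&CK for ICS techniques
ATTACK_MAP = {
    "port_scan": ("T0846", "Remote System Discovery"),
    "default_credential_login": ("T0812", "Default Credentials"),
    "project_file_access": ("T0873", "Project File Infection"),
    "modbus_write": ("T0836", "Modify Parameter"),
}

def map_behaviors(behaviors):
    """Return the unique ATT&CK techniques implied by observed behaviors."""
    return sorted({ATTACK_MAP[b] for b in behaviors if b in ATTACK_MAP})

techniques = map_behaviors(["port_scan", "modbus_write", "port_scan"])
```

Deduplicating per case (rather than emitting one mapping per event) keeps the resulting tactic picture readable for analysts.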

Burst & Behavior Analytics

  • Detects rapid scanning and brute-force activity
  • Identifies escalation patterns in attacker behavior
  • AI-assisted compilation and analysis of behavioral data

Per-Attacker Case Management

  • One case per attacker (not per event)
  • Full timeline of actions from recon → interaction → exploitation
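The per-attacker model boils down to grouping events by source rather than alerting on each one. A minimal sketch, keying cases on source IP (field names and phases are illustrative):

```python
from collections import defaultdict

cases = defaultdict(list)  # src_ip -> ordered list of events in that attacker's case

def record(src_ip, phase, detail):
    """Append an event to the single case for this attacker."""
    cases[src_ip].append({"phase": phase, "detail": detail})
    return len(cases[src_ip])  # event count within this attacker's case

record("203.0.113.7", "recon", "TCP SYN scan of 502/tcp, 44818/tcp")
record("203.0.113.7", "interaction", "Modbus read holding registers")
record("203.0.113.7", "exploitation", "write attempt to engineering share")
record("198.51.100.9", "recon", "single port probe")
```

Two attackers produce two cases, not four alerts, and each case preserves the recon-to-exploitation ordering that analysts actually need.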

AdaptiveGrid is designed to complement platforms like Claroty, Nozomi Networks, and Microsoft Defender for IoT—not replace them.

AdaptiveGrid is being released as an open-source project, with a focus on:

  • Transparency
  • Community-driven improvements
  • Realistic OT simulation for defenders, researchers, and educators

Future enhancements will include:

  • Expanded protocol support
  • Deeper controller emulation
  • Integration with SIEM/SOAR platforms
  • Optional AI-assisted alert summarization


Hello All. As a follow-up to my recent Mythos post, I have been thinking about cyber-hygiene improvements organizations can make in their OT environments. In environments where patching may no longer be an option, tools like Sysmon shift the strategy from prevention to visibility and early detection. You’re essentially compensating for unpatchable risk by increasing your ability to observe attacker behavior in real time. In many cases, these Windows machines cannot run modern EDR tools because of their age, or cannot support them in their current OT environment. On legacy Windows systems (including XP/7-era HMIs), Sysmon can be one of the few viable ways to regain some defensive ground.

Monitoring process creation is one of the highest-value use cases. Most successful attacks on legacy systems still require some form of execution—whether that’s cmd.exe, PowerShell (where present), dropped binaries, or living-off-the-land techniques. By baselining what “normal” looks like on an HMI and alerting on deviations (new parent-child relationships, unusual command-line arguments, unsigned binaries), you can catch activity that would otherwise go completely unnoticed.
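The baselining idea can be sketched in a few lines. Here, Sysmon Event ID 1 (process creation) records are represented as dicts using Sysmon's ParentImage/Image field names; the baseline pairs and events themselves are made up for illustration:

```python
# Parent->child process pairs observed during a clean baseline period (made up)
BASELINE = {
    ("C:\\Windows\\explorer.exe", "C:\\HMI\\hmi_runtime.exe"),
    ("C:\\HMI\\hmi_runtime.exe", "C:\\Windows\\System32\\notepad.exe"),
}

def is_deviation(event: dict) -> bool:
    """True if this process-creation event's parent->child pair is new."""
    pair = (event["ParentImage"], event["Image"])
    return pair not in BASELINE

# An HMI runtime spawning a shell was never seen in the baseline
suspicious = is_deviation({
    "ParentImage": "C:\\HMI\\hmi_runtime.exe",
    "Image": "C:\\Windows\\System32\\cmd.exe",
})
```

Because HMIs are so static, a set of known-good pairs stabilizes within days, and anything outside it is worth a human look.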

DNS and outbound connectivity is even more critical in OT. Many of these systems are not supposed to communicate externally at all. So when you see DNS queries to Internet domains, failed or successful connections to external IPs, or even unusual internal name resolution patterns, that’s often a high-fidelity signal. In a world where AI-driven attacks may rapidly establish command-and-control, those early outbound indicators can be the only warning you get.

Feeding this telemetry into a SIEM is where the real value compounds. A SIEM lets you correlate across layers: endpoint behavior, network activity, and even OT protocol anomalies if you’re integrating tools like Nozomi or Claroty. This is where you move beyond simple alerting into detection engineering, building use cases such as:

  • Process execution + outbound connection within a short window
  • New binary execution on a system with no change tickets
  • DNS queries to rare or previously unseen domains
  • Lateral movement patterns combined with OT protocol access
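The first use case above can be sketched as a simple time-window join. The event shapes are simplified stand-ins for SIEM-normalized logs, and the 60-second window is an assumed tuning value:

```python
WINDOW_SECONDS = 60  # assumed correlation window; tune per environment

def correlate(proc_events, net_events, window=WINDOW_SECONDS):
    """Return (image, dest) pairs where a process start on a host is
    followed by an outbound connection from the same host within `window`."""
    hits = []
    for p in proc_events:
        for n in net_events:
            if n["host"] == p["host"] and 0 <= n["ts"] - p["ts"] <= window:
                hits.append((p["image"], n["dest"]))
    return hits

procs = [{"host": "HMI-01", "ts": 1000, "image": "C:\\Temp\\update.exe"}]
nets = [
    {"host": "HMI-01", "ts": 1030, "dest": "198.51.100.20:443"},  # within window
    {"host": "HMI-02", "ts": 1035, "dest": "10.0.0.5:445"},       # different host
]
alerts = correlate(procs, nets)
```

In production this join runs in the SIEM's query language rather than Python, but the logic is identical: new execution plus outbound traffic from a box that should never talk out is a high-fidelity alert.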

That said, there are a few realities to be aware of.

First, Sysmon on legacy systems requires careful tuning. These environments are typically very static, which is actually an advantage—you can get to a high-confidence baseline quickly—but noisy configs will overwhelm both the system and your analysts. Minimal, targeted logging (process create, network connections, DNS if available, and maybe file creation in sensitive directories) tends to work best.

Second, performance and stability matter more than anything in OT. Even native Windows tools such as Sysmon need to be validated in a lab or staging environment before deployment. The last thing you want is a monitoring control impacting an HMI tied to production or safety systems.

Third, this approach doesn’t reduce vulnerability exposure—it simply makes exploitation more visible. So it needs to sit alongside:

  • Strong network segmentation
  • Strict allow-listing where possible
  • Passive OT monitoring for protocol-level anomalies

The bigger picture is that this aligns perfectly with the shift we’re seeing in the wake of AI-driven vulnerability discovery. If we assume that new flaws will continue to be found in unpatchable systems, then behavioral monitoring becomes the control of last resort.

In that sense, deploying Sysmon isn’t just a technical improvement—it’s a strategic acknowledgment that you may not be able to stop the exploit, but you can still catch the attacker.

Deploying Sysmon is straightforward. It is part of the Microsoft Sysinternals toolkit and can be downloaded directly from Microsoft’s official site:

https://learn.microsoft.com/sysinternals/downloads/sysmon

Once downloaded, it can be installed with a configuration file that defines what events to capture. In OT environments, it’s important to keep this configuration lightweight and targeted—focusing on process creation, network connections, and DNS activity to avoid performance impacts on sensitive systems.
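As a starting point, a lightweight configuration along these lines covers just those three event types. The rule structure follows Sysmon's documented XML config format, though the schema version and the log-everything excludes are assumptions you should tune and lab-test before deploying:

```python
# Write a minimal, OT-focused Sysmon config: process creation, network
# connections, and DNS queries only. Empty onmatch="exclude" rules mean
# "log everything" for that event type, which suits static OT hosts.
MINIMAL_CONFIG = """<Sysmon schemaversion="4.90">
  <EventFiltering>
    <RuleGroup name="" groupRelation="or">
      <ProcessCreate onmatch="exclude"/>
      <NetworkConnect onmatch="exclude"/>
      <DnsQuery onmatch="exclude"/>
    </RuleGroup>
  </EventFiltering>
</Sysmon>
"""

# Install on the target (elevated prompt):
#   sysmon64.exe -accepteula -i sysmon-ot.xml

with open("sysmon-ot.xml", "w") as f:
    f.write(MINIMAL_CONFIG)
```

Starting with everything logged for only three event types, then adding excludes for known-noisy HMI processes, is usually safer than starting broad and trimming down.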

There are, however, important considerations. Legacy OT systems are often fragile, so any agent deployment must be tested carefully to ensure it does not impact stability. Additionally, Sysmon does not reduce the underlying risk—it simply makes attacker activity more visible. This means it should be combined with strong segmentation, strict access controls, and passive network monitoring.

Ultimately, this approach reflects a broader shift in cybersecurity. As AI-driven tools begin to uncover new vulnerabilities in long-abandoned platforms, organizations can no longer rely on patching as their primary defense. Instead, they must assume these systems are permanently exposed and focus on detecting what happens next.

In that reality, Sysmon is not just a tool—it’s a way to regain visibility in environments where control is otherwise limited.


Hello All. The emergence of Claude Mythos is forcing a rethink across the cybersecurity landscape. Unlike earlier tools that assisted analysts, this new class of AI is capable of autonomously identifying and chaining together previously unknown vulnerabilities. What makes this particularly significant is not just speed, but depth—these systems can analyze legacy codebases and uncover flaws that may have existed, unnoticed, for decades.

This raises an uncomfortable question for OT environments that still rely heavily on legacy infrastructure. Consider a typical HMI running Windows XP: long past end-of-life, unpatched, and often deeply embedded into operations. For years, the assumption has been that most meaningful vulnerabilities were discovered before vendor support ended, and that residual risk could be managed through isolation and compensating controls. That assumption no longer holds.

The reality is that vendors like Microsoft never “found everything.” Vulnerability discovery has always been constrained by human effort, available tooling, and prioritization. AI changes that equation entirely. Systems like Claude Mythos can now revisit old platforms with fresh analytical capability, identifying flaws that were previously invisible—not because they were impossible to find, but because no one had the means to find them efficiently.

The real challenge emerges when new vulnerabilities are discovered in systems that are no longer supported. There are no patches, no vendor fixes, and often no practical way to upgrade without significant operational disruption. In effect, organizations are left running infrastructure where newly discovered weaknesses may persist indefinitely, potentially exploited without ever being publicly disclosed.

For OT environments, the impact is amplified. These systems are designed for stability and uptime, not rapid change. They often rely on insecure-by-design protocols, lack modern endpoint protections, and cannot be easily segmented or monitored using traditional IT approaches. When AI accelerates both discovery and exploitation, the window between vulnerability identification and active use shrinks dramatically—sometimes to near zero.

This shifts the risk model entirely. Security teams can no longer rely solely on known vulnerabilities or published CVEs. Instead, they must assume that unknown weaknesses exist and may already be discoverable by adversaries using similar AI capabilities. The focus moves from patching to containment, from prevention to detection, and from trust in legacy stability to acceptance of continuous exposure.

Ultimately, Claude Mythos represents more than a technological advancement—it exposes a long-standing blind spot in how organizations think about legacy risk. Systems like Windows XP were never “fully secured”; they were simply no longer being examined. Now, with AI re-opening that examination at scale, OT leaders must confront a new reality: the greatest risks may be the ones that have been sitting quietly in their environments all along.
