
Hello All. The emergence of Claude Mythos is forcing a rethink across the cybersecurity landscape. Unlike earlier tools that assisted analysts, this new class of AI is capable of autonomously identifying and chaining together previously unknown vulnerabilities. What makes this particularly significant is not just speed, but depth—these systems can analyze legacy codebases and uncover flaws that may have existed, unnoticed, for decades.

This raises an uncomfortable question for OT environments that still rely heavily on legacy infrastructure. Consider a typical HMI running Windows XP: long past end-of-life, unpatched, and often deeply embedded into operations. For years, the assumption has been that most meaningful vulnerabilities were discovered before vendor support ended, and that residual risk could be managed through isolation and compensating controls. That assumption no longer holds.

The reality is that vendors like Microsoft never “found everything.” Vulnerability discovery has always been constrained by human effort, available tooling, and prioritization. AI changes that equation entirely. Systems like Claude Mythos can now revisit old platforms with fresh analytical capability, identifying flaws that were previously invisible—not because they were impossible to find, but because no one had the means to find them efficiently.

The real challenge emerges when new vulnerabilities are discovered in systems that are no longer supported. There are no patches, no vendor fixes, and often no practical way to upgrade without significant operational disruption. In effect, organizations are left running infrastructure where newly discovered weaknesses may persist indefinitely, potentially exploited without ever being publicly disclosed.

For OT environments, the impact is amplified. These systems are designed for stability and uptime, not rapid change. They often rely on insecure-by-design protocols, lack modern endpoint protections, and cannot be easily segmented or monitored using traditional IT approaches. When AI accelerates both discovery and exploitation, the window between vulnerability identification and active use shrinks dramatically—sometimes to near zero.

This shifts the risk model entirely. Security teams can no longer rely solely on known vulnerabilities or published CVEs. Instead, they must assume that unknown weaknesses exist and may already be discoverable by adversaries using similar AI capabilities. The focus moves from patching to containment, from prevention to detection, and from trust in legacy stability to acceptance of continuous exposure.
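A containment-and-detection posture for an unpatchable legacy host can be as simple as alerting on any network peer outside a known-good set. The following is a minimal sketch of that idea; all hostnames, addresses, and flow records are hypothetical, and a real deployment would draw flows from a network sensor rather than a hard-coded list.

```python
# Minimal sketch: allowlist-based detection for a legacy OT host that can
# no longer be patched. Addresses below are illustrative assumptions.

# Known-good peers for the legacy HMI (e.g., a historian and a PLC gateway).
ALLOWED_PEERS = {"10.0.5.20", "10.0.5.21"}

def check_flows(flows):
    """Return an alert for every connection to a peer outside the allowlist."""
    alerts = []
    for src, dst in flows:
        if dst not in ALLOWED_PEERS:
            alerts.append(f"unexpected connection {src} -> {dst}")
    return alerts

# Example: two expected flows, plus one to an external address that should
# trigger an alert under a detection-first posture.
observed = [
    ("10.0.5.10", "10.0.5.20"),
    ("10.0.5.10", "10.0.5.21"),
    ("10.0.5.10", "203.0.113.7"),
]
alerts = check_flows(observed)
```

The point is not the mechanism, which any NDR or firewall product handles better, but the assumption behind it: once unknown weaknesses are presumed present, behavior the host was never expected to exhibit becomes the signal.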

Ultimately, Claude Mythos represents more than a technological advancement—it exposes a long-standing blind spot in how organizations think about legacy risk. Systems like Windows XP were never “fully secured”; they were simply no longer being examined. Now, with AI re-opening that examination at scale, OT leaders must confront a new reality: the greatest risks may be the ones that have been sitting quietly in their environments all along.

Read more

I recently had the privilege of joining an amazing group of cybersecurity professionals on a panel discussion organized by Mike Holcomb, Dylan Williams, Kate Johnson, Cooper Wilson, Tom Morgan, Tahmeed Khan, George A., Ahmed Al Saleh and of course Ezz who was the moderator.

Read more

The U.S. government has unveiled new security guidelines aimed at bolstering critical infrastructure against artificial intelligence (AI)-related threats.

“These guidelines are informed by the whole-of-government effort to assess AI risks across all sixteen critical infrastructure sectors, and address threats both to and from, and involving AI systems,” the Department of Homeland Security (DHS) said.

In addition, the agency said it’s working to facilitate safe, responsible, and trustworthy use of the technology in a manner that does not infringe on individuals’ privacy, civil rights, and civil liberties.

The new guidance concerns the use of AI to augment and scale attacks on critical infrastructure, adversarial manipulation of AI systems, and shortcomings in such tools that could result in unintended consequences, underscoring the need for transparency and secure-by-design practices to evaluate and mitigate AI risks.

Specifically, the guidance spans four functions – govern, map, measure, and manage – applied throughout the AI lifecycle:

  • Establish an organizational culture of AI risk management
  • Understand your individual AI use context and risk profile
  • Develop systems to assess, analyze, and track AI risks
  • Prioritize and act upon AI risks to safety and security
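One way to make the four functions concrete is a simple risk register that maps each field to a function. The sketch below is an illustrative assumption, not a DHS-specified schema: the field names, scoring scale, and example entries are all hypothetical.

```python
# Hedged sketch: a minimal AI risk register organized around the four
# functions named in the guidance (govern, map, measure, manage).
# Schema and entries are illustrative, not an official format.

from dataclasses import dataclass

@dataclass
class AIRisk:
    system: str        # the AI system or use case        (map)
    description: str   # what could go wrong              (map)
    likelihood: int    # 1 (rare) .. 5 (frequent)         (measure)
    impact: int        # 1 (minor) .. 5 (severe)          (measure)
    owner: str         # accountable role                 (govern)
    mitigation: str = "none assigned"  # planned action   (manage)

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact scoring.
        return self.likelihood * self.impact

def prioritize(register):
    """Order risks so the highest-scoring ones are acted on first (manage)."""
    return sorted(register, key=lambda r: r.score, reverse=True)

register = [
    AIRisk("anomaly-detection model", "adversarial evasion", 3, 4,
           "OT security lead"),
    AIRisk("maintenance chatbot", "sensitive data leakage", 4, 2,
           "plant IT manager"),
]
ranked = prioritize(register)
```

Even a register this small forces the sector-specific and context-specific thinking the agency calls for: each entry names an owner, a measurable exposure, and a next action.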

“Critical infrastructure owners and operators should account for their own sector-specific and context-specific use of AI when assessing AI risks and selecting appropriate mitigations,” the agency said.

“Critical infrastructure owners and operators should understand where these dependencies on AI vendors exist and work to share and delineate mitigation responsibilities accordingly.”

The development arrives weeks after the Five Eyes (FVEY) intelligence alliance comprising Australia, Canada, New Zealand, the U.K., and the U.S. released a cybersecurity information sheet noting the careful setup and configuration required for deploying AI systems.

“The rapid adoption, deployment, and use of AI capabilities can make them highly valuable targets for malicious cyber actors,” the governments said.

“Actors, who have historically used data theft of sensitive information and intellectual property to advance their interests, may seek to co-opt deployed AI systems and apply them to malicious ends.”

The recommended best practices include securing the deployment environment, reviewing the source of AI models and supply chain security, ensuring a robust deployment environment architecture, hardening deployment environment configurations, validating the AI system to ensure its integrity, protecting model weights, enforcing strict access controls, conducting external audits, and implementing robust logging.
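Two of those recommendations, validating the AI system's integrity and protecting model weights, can be sketched with a pinned cryptographic digest that weights must match before they are loaded. The file name and placeholder contents below are hypothetical; in practice the digest would be pinned at release time and stored separately from the weights.

```python
# Hedged sketch: verifying model-weight integrity against a pinned SHA-256
# digest, one concrete instance of the "validate the AI system" and
# "protect model weights" recommendations. Names are illustrative.

import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large weight files fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_weights(path: Path, pinned_digest: str) -> bool:
    """Refuse to load weights whose digest differs from the pinned value."""
    return sha256_of(path) == pinned_digest

# Example: write a stand-in weights file and verify it against its digest.
weights = Path("model.weights")
weights.write_bytes(b"\x00" * 1024)  # placeholder for real weight data
pinned = sha256_of(weights)          # in practice, pinned at release time
ok = verify_weights(weights, pinned)
```

A check like this catches tampered or swapped weight files, though it does not by itself address adversarial manipulation of a model's behavior, which the broader guidance covers through audits, access controls, and logging.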

The briefing by the NSA can be found here – https://www.nsa.gov/Press-Room/Press-Releases-Statements/Press-Release-View/Article/3741371/nsa-publishes-guidance-for-strengthening-ai-system-security/

The full report can be found here – https://media.defense.gov/2024/Apr/15/2003439257/-1/-1/0/CSI-DEPLOYING-AI-SYSTEMS-SECURELY.PDF

Read more