Infosec News

No one wants to see a blue screen on a Friday morning!

Not sure if folks are following the news, but a major bug has been identified in CrowdStrike Falcon, stemming from a bad content update, that is causing blue screens (BSODs) on Windows hosts. Here is a statement from George Kurtz, CEO:

“CrowdStrike is actively working with customers impacted by a defect found in a single content update for Windows hosts. Mac and Linux hosts are not impacted. This is not a security incident or cyberattack. The issue has been identified, isolated and a fix has been deployed. We refer customers to the support portal for the latest updates and will continue to provide complete and continuous updates on our website. We further recommend organizations ensure they’re communicating with CrowdStrike representatives through official channels. Our team is fully mobilized to ensure the security and stability of CrowdStrike customers.”

There is a manual workaround, which is scriptable (a rough sketch follows the steps below):

  • Boot Windows into Safe Mode or the Windows Recovery Environment
  • Navigate to the C:\Windows\System32\drivers\CrowdStrike directory
  • Locate the file matching ‘C-00000291*.sys’, and delete it.
  • Boot the system normally.
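
As a rough illustration only, here is a minimal Python sketch of the delete step. It assumes the machine is already booted into Safe Mode or WinRE and that a Python interpreter is available there; in practice most teams scripted this with cmd or PowerShell instead:

    import glob
    import os

    # Channel-file directory named in the workaround steps above.
    DRIVER_DIR = r"C:\Windows\System32\drivers\CrowdStrike"

    # Find and delete the offending channel file(s).
    for path in glob.glob(os.path.join(DRIVER_DIR, "C-00000291*.sys")):
        print(f"Deleting {path}")
        os.remove(path)

    # Reboot normally afterwards, e.g. with: shutdown /r /t 0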

Just when we thought this was bad enough… Microsoft had an Azure outage. The Azure Central US region was down for hours, but the impact was global: airlines and other major customers all over the world were affected, and some in India reverted to checking in passengers manually using Excel spreadsheets. M365 services were also affected.

“We experienced a Storage incident in Central US which had downstream impact to a number of Azure services. This is currently mitigated; however, we are still in the process of validating recovery to a small percentage of those downstream services. This was communicated to affected customers via the Service Health dashboard in the Azure portal. We are also aware of an issue impacting Virtual Machines running Windows, running the CrowdStrike Falcon agent, which may encounter a bug check (BSOD) and get stuck in a restarting state. While this is an external dependency, we are currently investigating potential options for Azure customers to mitigate and will be providing updates via the status page here: https://azure.status.microsoft/en-gb/status/ as well as our Azure portal, where possible.”

Good luck, everyone!

Read more

The U.S. government has unveiled new security guidelines aimed at bolstering critical infrastructure against artificial intelligence (AI)-related threats.

“These guidelines are informed by the whole-of-government effort to assess AI risks across all sixteen critical infrastructure sectors, and address threats both to and from, and involving AI systems,” the Department of Homeland Security (DHS) said.

In addition, the agency said it’s working to facilitate safe, responsible, and trustworthy use of the technology in a manner that does not infringe on individuals’ privacy, civil rights, or civil liberties.

The new guidance concerns the use of AI to augment and scale attacks on critical infrastructure, adversarial manipulation of AI systems, and shortcomings in such tools that could result in unintended consequences, underscoring the need for transparency and secure-by-design practices to evaluate and mitigate AI risks.

Specifically, this spans four functions applied across the AI lifecycle, namely govern, map, measure, and manage:

  • Govern: Establish an organizational culture of AI risk management
  • Map: Understand your individual AI use context and risk profile
  • Measure: Develop systems to assess, analyze, and track AI risks
  • Manage: Prioritize and act upon AI risks to safety and security

“Critical infrastructure owners and operators should account for their own sector-specific and context-specific use of AI when assessing AI risks and selecting appropriate mitigations,” the agency said.

“Critical infrastructure owners and operators should understand where these dependencies on AI vendors exist and work to share and delineate mitigation responsibilities accordingly.”

The development arrives weeks after the Five Eyes (FVEY) intelligence alliance comprising Australia, Canada, New Zealand, the U.K., and the U.S. released a cybersecurity information sheet noting the careful setup and configuration required for deploying AI systems.

“The rapid adoption, deployment, and use of AI capabilities can make them highly valuable targets for malicious cyber actors,” the governments said.

“Actors, who have historically used data theft of sensitive information and intellectual property to advance their interests, may seek to co-opt deployed AI systems and apply them to malicious ends.”

The recommended best practices include:

  • Take steps to secure the deployment environment
  • Review the source of AI models and supply chain security
  • Ensure a robust deployment environment architecture
  • Harden deployment environment configurations
  • Validate the AI system to ensure its integrity
  • Protect model weights
  • Enforce strict access controls
  • Conduct external audits
  • Implement robust logging
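
To make one of these concrete, here is a minimal Python sketch of the integrity-validation recommendation: checking a model artifact’s SHA-256 digest against a pinned, vendor-published value before loading it. The file path and expected digest below are hypothetical placeholders, not values from the report:

    import hashlib
    from pathlib import Path

    MODEL_PATH = Path("models/example-model.bin")  # hypothetical artifact path
    EXPECTED_SHA256 = "<vendor-published digest>"  # hypothetical pinned value

    def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
        # Stream the file so large model weights need not fit in memory.
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    digest = sha256_of(MODEL_PATH)
    if digest != EXPECTED_SHA256:
        raise RuntimeError(f"Model integrity check failed: got {digest}")
    print("Model digest verified; proceed to load.")

The same pattern extends naturally to configuration files, tokenizers, and any other artifacts pulled from an external supply chain.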

The briefing by the NSA can be found here – https://www.nsa.gov/Press-Room/Press-Releases-Statements/Press-Release-View/Article/3741371/nsa-publishes-guidance-for-strengthening-ai-system-security/

The full report can be found here – https://media.defense.gov/2024/Apr/15/2003439257/-1/-1/0/CSI-DEPLOYING-AI-SYSTEMS-SECURELY.PDF

Read more

Reuters reported last week that OpenAI staff researchers wrote a letter to the board warning that an internal project named Q* could represent a breakthrough in creating AI that could surpass human intelligence in a range of fields. The letter was sent ahead of CEO Sam Altman’s firing.

The model, called Q* (pronounced “Q-Star”), was able to solve basic maths problems it had not seen before, according to the tech news site The Information, which added that the pace of development behind the system had alarmed some safety researchers. The ability to solve unfamiliar maths problems, where there is only one right answer, would be viewed as a significant development in AI, since it implies stronger reasoning capability than language generation alone.

Neither OpenAI nor its largest backer, Microsoft, has publicly confirmed the existence of Q*, much less the possibility that it is a dangerous breakthrough in AI technology. OpenAI didn’t respond to requests for comment.

These sorts of claims aren’t new, either. A Google engineer claimed in 2022 that an unreleased AI system had become sentient. The claim caused a brief flurry of excitement before the engineer was fired and the company denied the claim.

The only detail the report gave about Q*’s capabilities was that it could solve certain mathematical problems at the level of grade-school students, which has led to skepticism about how significant an advance Q* could really be. Elon Musk suggested his own Grok chatbot could outdo Q* by solving both maths problems and fundamental philosophical questions.

Should we be worried??

Read more