Cybersecurity Saturday

From the cybersecurity policy front,

  • Cyberscoop reports,
    • “The House Homeland Security Committee is digging into Anthropic’s AI model Mythos in a series of briefings and hearings, as questions proliferate on whether and how the federal government will make use of the technology touted for its ability to autonomously uncover cyber vulnerabilities.
    • “Wednesday [May 13] brought a closed-door briefing for the House Homeland Security Committee from Anthropic. The chairman of the panel’s cybersecurity subcommittee said he is planning to hold a hearing on the topic. And committee Democrats are requesting a classified briefing with Anthropic.
    • “A committee aide who attended the briefing said it included a live demonstration of Mythos, “allowing members to see firsthand how advanced AI can identify and reason through software vulnerabilities. What we saw reinforced the urgency of ensuring that federal agencies, including our civilian cyber defenders, can responsibly access and deploy the most advanced U.S. models to find and patch vulnerabilities before foreign adversaries or criminal actors exploit them.” * * *
    • “There’s a divide on which federal agencies are using Mythos thus far. For example: CISA reportedly isn’t, but the National Security Agency is.” 
  • GovCon Wire adds,
    • Anthropic’s Project Glasswing and Claude Mythos announcement may have sparked concerns across the cybersecurity community, but Pentagon technology leaders say the emergence of Mythos-style AI models could ultimately strengthen U.S. cyber defense capabilities rather than weaken them.
    • Katherine Sutton, DOW [Department of War] assistant secretary for cyber policy, emphasized that the focus should not solely remain on the offensive risks associated with advanced cyber AI, according to Breaking Defense. 
    • “I hear a lot of people talking about challenges and threats when they talk about Mythos,” Sutton said. “[But] there’s huge opportunity in these models. One of the foundational things that they’re going to enable is the development of secure code.”
  • Cyberscoop points out,
    • “Two of the most advanced artificial intelligence models — Anthropic’s Claude Mythos Preview and OpenAI’s GPT-5.5 — have significantly surpassed the already-accelerating pace at which AI systems are completing autonomous cybersecurity tasks, according to separate findings published Wednesday by the United Kingdom’s AI Security Institute (AISI) and Palo Alto Networks.
    • “The AISI, which conducts pre-deployment evaluations of frontier AI models on behalf of the British government, said both Claude Mythos Preview and GPT-5.5 have substantially exceeded the doubling trend the institute had been tracking since late 2024. Whether the results represent an isolated capability jump or the start of a new, faster trajectory remains unclear.”
  • Cybersecurity Dive relates,
    • “In February, a coalition that includes corporate titans JPMorgan Chase, Mastercard, AT&T and Berkshire Hathaway Energy launched the Alliance for Critical Infrastructure (ACI), vowing to take the lead in helping infrastructure sectors work more closely together to understand and mitigate the shared cybersecurity risks they face. Reading between the lines, the message was clear: The critical infrastructure community, increasingly alarmed at the Trump administration’s retreat from decades-long partnerships, is trying to fill the growing void of coordination and leadership.” * * *
    • “Government budget cuts and personnel losses have made it much harder for agencies to support and advise infrastructure operators, and the White House has encouraged states to take over historically federal responsibilities for protecting local utilities. Amid those changes, infrastructure firms like the ones that founded the ACI say the private sector must step up.
    • “Ben Flatgard, the ACI’s chairman, noted that the private sector manages the vast majority of U.S. infrastructure. “We can’t outsource that responsibility or the risk management practices that come along with it,” he said in an interview with Cybersecurity Dive. “We need to own the solution for that as well.”
    • “Many experts say that while the government must retain a leadership role in protecting critical infrastructure, it’s a good sign that private companies want to assume more of the burden.”
  • Per a Cybersecurity and Infrastructure Security Agency (CISA) news release,
    • “CISA and the Group of Seven (G7) international partners—Germany, Canada, France, Italy, Japan, the United Kingdom, and the European Union—have released joint guidance, Software Bill of Materials for AI – Minimum Elements, to help public and private sector stakeholders improve transparency in their artificial intelligence (AI) systems and supply chains.
    • “A software bill of materials (SBOM) acts as an “ingredients list” for software that better positions organizations to understand their supply chains and make risk-informed decisions about how to protect their critical systems. The guidance builds on CISA’s previous work with federal and international partners to establish a shared vision for a software bill of materials and provides recommendations on minimum elements that should be included in an SBOM for AI. Because AI systems are software systems, these recommendations should be considered in addition to the general minimum elements for an SBOM.
    • “While not exhaustive or mandatory, the supplemental minimal elements outlined in this guidance reflect the consensus of G7 experts and will expand over time to keep pace with the rapid advancement of AI technology.” 
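The guidance’s layered structure — general SBOM minimum elements plus AI-specific supplemental data — can be illustrated with a small sketch. The general fields below follow the long-established NTIA minimum elements (supplier, component name, version, unique identifiers, dependency relationships, SBOM author, timestamp); the AI-specific fields are hypothetical placeholders for the kind of supplemental data the G7 document describes, not its actual element names.

```python
# Illustrative sketch of a minimal SBOM record for an AI system component.
# General fields follow the NTIA "minimum elements"; the "ai" sub-object is a
# hypothetical example of supplemental AI data, not the guidance's actual schema.
import json
from datetime import datetime, timezone

sbom = {
    "sbom_author": "example-security-team",          # who produced this SBOM
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "components": [
        {
            "supplier": "Example AI Vendor",          # general minimum element
            "name": "example-sentiment-model",        # general minimum element
            "version": "2.1.0",                       # general minimum element
            "unique_id": "pkg:generic/example-sentiment-model@2.1.0",
            "depends_on": ["pkg:pypi/torch@2.3.0"],   # dependency relationship
            # Hypothetical AI-specific supplemental fields:
            "ai": {
                "model_architecture": "transformer",
                "training_data_summary": "public web text (snapshot 2025-06)",
            },
        }
    ],
}

print(json.dumps(sbom, indent=2))
```

In practice such a record would be expressed in an established SBOM format (e.g., SPDX or CycloneDX) rather than ad-hoc JSON; the point here is only the split between general and AI-specific elements.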

From the cybersecurity breaches and vulnerabilities front,

  • Cybersecurity Dive lets us know,
    • “Seven out of every 10 organizations suffered at least one identity-related breach over the past year, according to a report released Tuesday [May 12] by Sophos. Organizations, on average, reported three separate identity-related incidents during that time.
    • “Two-thirds of ransomware victims said the cyberattack stemmed from an identity-related incident, said Sophos. The report is based on a survey of 5,000 IT and cybersecurity leaders across 17 countries.
    • “The mean recovery cost was $1.64 million, read the report, and the median cost was $750,000. Seven of every 10 respondents reported recovery costs of more than $250,000.”
  • Bleeping Computer adds,
    • “Initial access broker KongTuke has moved to Microsoft Teams for social engineering attacks, taking as little as five minutes to gain persistent access to corporate networks.
    • “The threat actor tricks users into pasting a PowerShell command that ultimately delivers the ModeloRAT, which has been previously seen in ClickFix attacks.
    • “Initial access brokers (IAB) like KongTuke typically sell company network access to ransomware operators, who use it to deploy file-theft and data-encrypting malware.
    • “Cybercriminals have increasingly adopted Microsoft Teams in attacks, reaching out to company employees and pretending to be IT and help-desk staff.”
  • CISA added two known exploited vulnerabilities (KEVs) to its catalog this week.
  • Security Week reports,
    • “For the first time, Google has identified a zero-day exploit believed to have been developed using artificial intelligence.
    • “The company published a new report on Monday [May 11], summarizing its observations on the use of AI in the cyber threat landscape, drawing on data collected recently by Gemini, Google Threat Intelligence Group (GTIG), and Mandiant.
    • One of the most notable findings is that a prominent cybercrime group leveraged AI to develop a zero-day exploit designed to bypass two-factor authentication (2FA) on an open source web-based system administration tool. The exploit was implemented in a Python script.
    • The hacker group and the targeted tool have not been named, but Google said it worked with the impacted vendor to prevent mass exploitation, which appeared to be the threat actor’s plan.
    • “Although we do not believe Gemini was used, based on the structure and content of these exploits, we have high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability,” Google explained.
  • and
    • “Linux distributions are informing users about a new kernel vulnerability that can be exploited by a local attacker to escalate privileges to root.
    • “Dubbed Fragnesia and officially tracked as CVE-2026-46300, the issue resides in the kernel’s XFRM ESP-in-TCP subsystem, allowing an unprivileged attacker to gain root permissions by overwriting sensitive system files. 
    • “A majority of Linux distributions are affected, and they have started releasing patches.
    • “A proof-of-concept (PoC) exploit is available, but there is no evidence that Fragnesia has been exploited in the wild.
    • “Similar to Dirty Frag, Fragnesia exploits a vulnerability in the XFRM ESP-in-TCP subsystem to achieve a memory write primitive in the kernel,” Microsoft’s threat intelligence team said.” 
  • The Wall Street Journal relates,
    • “Security researchers say they have discovered a new way of circumventing Apple’s state-of-the-art security technology, using techniques they discovered while testing an early version of Anthropic’s Mythos AI software in April.
    • “The researchers with Calif, a Palo Alto-based security research company, say the software they wrote links together two bugs and a handful of techniques to corrupt the Mac’s memory and then gain access to parts of the device that should be inaccessible.
    • “It is what’s known as a privilege escalation exploit, and if it were chained together with other attacks it could be used by a hacker to seize control of the computer.
    • “The technique is noteworthy because Apple has put so much effort into locking down MacOS, said Michał Zalewski, a security researcher who formerly worked at Google and who reviewed the Calif research but wasn’t involved in the testing. 
    • “Apple, which is deploying and testing frontier AI models to test and patch vulnerabilities, is reviewing the Calif report to validate its findings. “Security is our top priority, and we take reports of potential vulnerabilities very seriously,” a company spokeswoman said.”

From the ransomware front,

  • Cyberscoop reports,
    • “Instructure, the company behind Canvas, said it reached an agreement with the cybercriminals who threatened to leak a trove of sensitive data they claim was stolen during a prolonged cyberattack on the widely used education tech platform.
    • “Pressure was mounting on the company as widespread outages left schools, students and teachers temporarily unable to access critical data late last week when the company took Canvas offline after the attackers defaced the platform’s login page. By Friday, the company said Canvas — a central hub for K-12 and university coursework, exams, grades and communication — was back online and fully operational. 
    • “ShinyHunters, a decentralized crew of prolific cybercriminals that researchers affiliate with The Com, claimed responsibility for the attack on its data leak site and was attempting to extort the company for an unknown ransom amount. 
    • “Instructure didn’t outright say it paid a ransom, but insisted the agreement provided all necessary assurances. “The data was returned to us. We received digital confirmation of data destruction (shred logs),” the company said in an update Monday [May 11]. * * *
    • “The House Homeland Security Committee on Monday published a letter to [Instructure CEO Steve] Daly seeking a briefing with him or a senior leader at Instructure by May 21. 
  • and
    • “Foxconn, one of the world’s largest manufacturers of electronics sold by major tech vendors, is recovering from a cyberattack that disrupted some of the company’s factories in North America.
    • “Nitrogen, a ransomware group that’s known for targeting organizations in the manufacturing, construction and technology sectors, claimed responsibility for the attack on its data leak site and said it stole 8 terabytes of data spanning more than 11 million files.
    • “The threat group posted screenshots of some of the allegedly stolen data and claimed it compromised “confidential instructions, projects and drawings from Intel, Apple, Google, Dell, Nvidia and many other projects.” 
    • “Foxconn is famously known as the primary assembler of Apple iPhones. Apple and the other companies allegedly impacted by the attack did not respond to a request for comment.” * * *
    • “Nitrogen was first observed in 2023, using ALPHV, one of the most prevalent ransomware variants at that time, Cynthia Kaiser, senior vice president at Halcyon’s Ransomware Research Center, told CyberScoop. The group started using stolen code from Conti, another formerly prolific ransomware variant, in 2024 to build its own custom attack tools to hit Windows and VMware server environments, she added.”
  • Cybersecurity Dive relates,
    • “West Pharmaceutical Services on Wednesday [May 13] said it has contained a ransomware attack it suffered earlier this month and is restarting critical systems, including manufacturing, receiving and shipping, at certain locations, according to an update on its website.
    • “The Exton, Pa.-based company, one of the world’s leading makers of drug-delivery devices and solutions, confirmed that data was stolen and encrypted in the attack, in a Monday filing with the Securities and Exchange Commission.” * * *
    • “Palo Alto Networks Unit 42 handled incident response to the attack, according to an assurance letter shared by the pharmaceutical services company. The letter confirms that the ransomware attack was contained and any malicious binaries and unauthorized persistence mechanisms were neutralized.”
  • The HIPAA Journal adds,
    • Ransomware groups have claimed responsibility for attacks on Advanced Family Surgery Center in Tennessee, Orem Eye Clinic in Utah, and Belmont Aesthetic & Reconstructive Plastic Surgery in Virginia/Washington D.C.
  • Dark Reading notes,
    • “A new threat campaign is using RubyGems as a dead drop to store exfiltrated data, but the attacker’s long-term plans are less clear. 
    • “Software development security vendor Socket published research concerning a campaign dubbed “GemStuffer,” where an attacker abused the RubyGems package registry “as a data transport mechanism rather than a conventional malware distribution channel,” according to a blog post. RubyGems is a package manager for the Ruby programming language, and acts as a way for developers to distribute Ruby programs or libraries, which are referred to as “gems.”
  • Check Point Research posted its first quarter 2026 ransomware report.
    • Key Findings
      • Consolidation after peak fragmentation: The top 10 ransomware groups accounted for 71% of all Q1 2026 victims, a sharp reversal from the fragmentation seen in Q3 2025. The ransomware ecosystem is once again consolidating around fewer, more dominant operators.
      • Volume stabilization at historically high levels: There were 2,122 victims posted on data leak sites (DLS), making this period the second-highest Q1 on record. The long-term growth trend is stabilizing.
      • Qilin’s sustained dominance: Qilin maintained its position as the most prominent ransomware operation for the third consecutive quarter, posting 338 victims.
      • The Gentlemen’s breakout: The Gentlemen is the breakout story of Q1 2026, reaching third place on the global ransomware list and increasing its victim count from 40 in Q4 2025 to 166 in Q1 2026.
      • LockBit 5.0 comeback confirmed: LockBit posted 163 victims in Q1 2026, climbing to fourth place.
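The scale of the consolidation is easy to back out from the report’s own figures — a quick arithmetic sketch using only the numbers quoted above:

```python
# If the top 10 groups accounted for 71% of 2,122 posted victims, roughly how
# many victims is that, and what share do the three named groups represent?
total_victims = 2122
top10_share = 0.71

top10_victims = round(total_victims * top10_share)
named = {"Qilin": 338, "The Gentlemen": 166, "LockBit 5.0": 163}
named_total = sum(named.values())

print(f"Top-10 victims: ~{top10_victims}")                 # ~1507
print(f"Three named groups alone: {named_total} victims "
      f"({named_total / total_victims:.0%} of all Q1 victims)")
```

So the three groups named in the findings account for roughly 31% of all Q1 2026 victims on their own, which is consistent with the report’s consolidation narrative.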
  • Dark Reading adds,
    • “Tables Turn on ‘The Gentlemen’ RaaS Gang With Data Leak
    • “An OPSEC failure provides a window into what helped the ransomware group rise: a generous affiliate model, opportunistic TTPs, and an effective organizational structure.”
  • CSO discusses the economics of Ransomware 3.0.
    • “The uncomfortable truth your board needs to hear is this: The question is no longer whether your organisation will face a sophisticated threat actor. For any organisation of meaningful size, operating in a connected supply chain, with digital customer relationships, the question is how well-prepared you are when it happens. The economics of ransomware as a criminal enterprise have never been stronger. Attack-as-a-service platforms have lowered the barrier to entry. Ransom payment data is analysed and used to calibrate future demands. These groups study your financial filings.
    • “Investing in incident response capability — in people, process and technology — is not a cost centre decision. It’s the only bet that pays off in both the prevention scenario and the response scenario. Insurance pays out after the damage is done. A mature response architecture reduces the damage itself.
    • “The organisations that navigated the Cl0p MOVEit campaign of 2023 with the least disruption weren’t the ones with the biggest insurance policies. They were the ones who had mapped their data flows, limited unnecessary MOVEit exposure and had a response team that could move within hours rather than days.”

From the cybersecurity defenses front,

  • Cybersecurity Dive reports,
    • “OpenAI on Monday [May 11] launched a new cybersecurity initiative called Daybreak, which uses its large language models, Codex’s agentic capabilities and security partners to root out risk and call defense into action. The rollout is OpenAI’s answer to Anthropic’s Mythos model, which debuted to limited preview last month and has highlighted weak security spots in software across various industries.
    • “Like with Anthropic’s Project Glasswing, which sought tech vendors to support Mythos, OpenAI will work with industry and government partners to deploy cyber-capable models that are meant to build autonomous cyber defense capabilities into software from the start. Cloudflare, Cisco, CrowdStrike, Oracle and Zscaler are among a group of companies already using the technology, OpenAI said. Unlike Mythos, Daybreak is publicly available, and companies can request an assessment of their security risks.
    • “As AI providers compete for their share of the enterprise market with cybersecurity tools, tech leaders should experiment with all of their options, said Jeff Pollard, VP, principal analyst at Forrester, in an email to CIO Dive. “Take someone with responsibility for innovation in tech and cybersecurity and have them play with these capabilities to see what they offer,” he said.”
  • and
    • “Organizations are allocating more money for security against physical threats, but the money is coming with more board oversight, and confusion remains over who has the lead role in physical security and how to blend physical security with cybersecurity, an EY survey finds.
    • “Almost 80% of organizations say they increased the allocation for physical security over their last budget cycle, in some cases by as much as 50%, according to the EY Forensic & Integrity Pulse, based on responses from 250 executives and board members to a March survey.  
    • “Leaders are beginning to recognize gaps in crisis management and physical security preparedness as threats and risk evolve,” EY says in the report, released May 5.”
  • Dark Reading adds,
    • “AI Drives Cybersecurity Investments, Widening ‘Valley of Death’
    • “In a role reversal, investment dollars in security startups exceeded the value of mergers and acquisitions in 1Q26 by more than $1 billion, a rare occurrence.”
  • Security Week notes,
    • “Mythos Proves Potent in Vulnerability Discovery, Less Convincing Elsewhere
    • “Independent benchmarking finds Mythos highly effective for source code audits, reverse engineering, and native-code analysis, though its exploit validation and reasoning capabilities remain inconsistent.”
  • TechTarget explains how to implement zero trust for AI.
  • CSO informs us,
    • “Penetration tests of AI-based systems are revealing a greater percentage of high-risk flaws than those discovered in legacy systems.
    • “Security consultancy Cobalt’s annual State of Pentesting Report reveals that 32% of all AI and large language model (LLM) findings are rated as high risk — nearly 2.5 times the rate (13%) of severe flaws found in enterprise security tests more generally.”
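The “nearly 2.5 times” multiple can be checked directly from the two rates quoted in the report — a trivial sketch:

```python
# Sanity check on the Cobalt figures: 32% of AI/LLM pentest findings are
# high risk, versus 13% in enterprise security tests generally.
ai_high_risk = 0.32
general_high_risk = 0.13

ratio = ai_high_risk / general_high_risk
print(f"AI/LLM high-risk rate is ~{ratio:.1f}x the general rate")  # ~2.5x
```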
  • Here is a link to Dark Reading’s CISO Corner.
