Thursday, May 14, 2026

Windows Zero-Days Expose BitLocker Bypasses And CTFMON Privilege Escalation

An anonymous cybersecurity researcher who disclosed three Microsoft Defender vulnerabilities has returned with two more zero-days involving a BitLocker bypass and a privilege escalation impacting Windows Collaborative Translation Framework (CTFMON).

The security defects have been codenamed YellowKey and GreenPlasma, respectively, by the researcher, who goes by the online aliases Chaotic Eclipse and Nightmare-Eclipse.

The researcher described YellowKey as "one of the most insane discoveries I ever found," likening the BitLocker bypass to functioning as a backdoor, as the bug is present only in the Windows Recovery Environment (WinRE), a built-in framework designed to troubleshoot and repair common unbootable operating system issues.

YellowKey affects Windows 11 and Windows Server 2022/2025. At a high level, it involves copying specially crafted "FsTx" files onto a USB drive or the EFI partition, plugging the USB drive into a target Windows computer with BitLocker protections turned on, rebooting into WinRE, and triggering a shell by holding down the CTRL key.

"I think it will take a while even for MSRC to find the real root cause of the issue. I just never managed to understand why this vulnerability is sooo well hidden," the researcher explained. "Second thing is, no, TPM+PIN does not help, the issue is still exploitable regardless."

Security researcher Will Dormann, in a post shared on Mastodon, said, "I was able to reproduce [YellowKey] with a USB drive attached," adding, "it looks like Transactional NTFS bits on a USB Drive are able to delete the winpeshl.ini file on ANOTHER DRIVE (X:). And we get a cmd.exe prompt, with BitLocker unlocked instead of the expected Windows Recovery environment."

"While the TPM-only BitLocker bypass is indeed interesting, I think the buried lede here is that a \System Volume Information\FsTx directory on one volume has the ability to modify the contents of another volume when it is replayed," Dormann pointed out. "To me, this in and of itself sounds like a vulnerability."

The second vulnerability flagged by Chaotic Eclipse is a privilege escalation flaw that could be exploited to obtain a shell with SYSTEM permissions. It arises from what has been described as arbitrary section creation in Windows CTFMON.

The released proof-of-concept (PoC) is incomplete and lacks the necessary code to obtain a full SYSTEM shell. In its current form, the exploit can allow an unprivileged user to create arbitrary memory section objects within directory objects writable by SYSTEM, potentially enabling manipulation of privileged services or drivers that implicitly trust those paths, since a standard user does not normally have write access to those locations.

The development comes nearly a month after the researcher published three Defender zero-days dubbed BlueHammer, RedSun, and UnDefend after allegedly expressing dissatisfaction with Microsoft's handling of the vulnerability disclosure process. The shortcomings have since come under active exploitation in the wild.

While BlueHammer was officially assigned the identifier CVE-2026-33825 and patched by Microsoft last month, Chaotic Eclipse said the tech giant appears to have "silently" addressed RedSun without issuing any advisory.

"I hope you at least attempt to resolve the situation responsibly, I'm not sure what type of reaction you expected from me when you threw more gas on the fire after BlueHammer," the researcher said. "The fire will go as long as you want, unless you extinguish it or until there nothing left to burn."

Chaotic Eclipse also promised a "big surprise" for Microsoft, coinciding with the next Patch Tuesday release in June 2026.

When reached for comment, a Microsoft spokesperson had previously told The Hacker News that it "has a customer commitment to investigate reported security issues and update impacted devices to protect customers as soon as possible," and that it supports coordinated vulnerability disclosure, which the company said "helps ensure issues are carefully investigated and addressed before public disclosure."

BitLocker Downgrade Attack Uncovered

The development comes as French cybersecurity company Intrinsec detailed an attack chain against BitLocker that leverages a boot manager downgrade by exploiting CVE-2025-48804 (CVSS score: 6.8) to bypass the encryption protection on fully patched Windows 11 systems in under five minutes.

"The principle is as follows: the boot manager loads the System Deployment Image (SDI) file and the WIM referenced by it, and verifies the integrity of the legitimate WIM," Intrinsec said.

"However, when a second WIM is added to the SDI with a modified blob table, the boot manager checks the first (legitimate) WIM while simultaneously booting from the second (controlled by the attacker). This second WIM contains a WinRE image infected with 'cmd.exe,' which executes with the decrypted BitLocker volume."

While fixes released by Microsoft in July 2025 plugged this security defect, security researcher Cassius Garat said the problem lies in the fact that Secure Boot only verifies a binary's signing certificate, not its version. As a result, a vulnerable version of "bootmgfw.efi" that does not contain the patch and is signed with the trusted PCA 2011 certificate can be used to get around BitLocker safeguards.

It's worth noting that Microsoft plans to retire the old PCA 2011 certificates next month. "And as long as it is not revoked, even an old, vulnerable boot manager can be loaded without triggering an alert," Intrinsec noted. To pull off the attack, a bad actor needs to have physical access to the target machine.

To counter the risk, organizations should enable a BitLocker PIN at startup for preboot authentication, migrate the boot manager to the CA 2023 certificate, and revoke the old PCA 2011 certificate.
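
As a practical illustration of the first recommendation only (not drawn from Intrinsec's research), the sketch below wraps Windows' built-in manage-bde utility from Python to check whether the OS volume already has a TPM+PIN protector and to add one if it does not. It assumes an elevated prompt, and the "TPM And PIN" label it looks for is an assumption that may vary by Windows build.

```python
import subprocess

def bitlocker_protectors(volume: str = "C:") -> str:
    """Return the current BitLocker key protectors for a volume via manage-bde."""
    result = subprocess.run(
        ["manage-bde", "-protectors", "-get", volume],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def ensure_tpm_and_pin(volume: str = "C:") -> None:
    """Add a TPM+PIN protector if one is not already present.

    manage-bde prompts interactively for the new PIN; run from an elevated prompt.
    The "TPM And PIN" label is an assumption and may differ across Windows builds.
    """
    current = bitlocker_protectors(volume)
    if "TPM And PIN" not in current:
        subprocess.run(
            ["manage-bde", "-protectors", "-add", volume, "-TPMAndPIN"],
            check=True,
        )

if __name__ == "__main__":
    print(bitlocker_protectors())
```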



from The Hacker News https://ift.tt/isrWxnC
via IFTTT

Wednesday, May 13, 2026

Azerbaijani Energy Firm Hit by Repeated Microsoft Exchange Exploitation

A threat actor with affiliations to China has been linked to a "multi-wave intrusion" targeting an unnamed Azerbaijani oil and gas company between late December 2025 and late February 2026, marking an expansion of its targeting.

The activity has been attributed by Bitdefender with moderate-to-high confidence to a hacking group known as FamousSparrow (aka UAT-9244), which shares some level of tactical overlap with clusters tracked under the monikers Earth Estries and Salt Typhoon.

The attack paves the way for the deployment of two distinct backdoors across three separate waves: Deed RAT (aka Snappybee), a successor of ShadowPad that's used by multiple China-nexus espionage groups, and TernDoor, which was recently documented in attacks that have targeted telecommunications infrastructure in South America since 2024.

What's notable about the campaign is that it repeatedly leveraged the same vulnerable Microsoft Exchange Server entry point despite several remediation attempts, swapping backdoors each time: Deed RAT on December 25, 2025, TernDoor in late January/early February 2026, and a modified Deed RAT in late February 2026. The attackers are assessed to have exploited the ProxyNotShell chain to obtain initial access.

"This targeting extends the known FamousSparrow victimology into a region where Azerbaijan's role in European energy security has materially increased following the 2024 expiration of Russia's Ukraine gas transit agreement and 2026 Strait of Hormuz disruptions," the Romanian cybersecurity company said in a report shared with The Hacker News.

"The intrusion illustrates that actors will exploit and re-exploit the same access path until the original vulnerability is patched, compromised credentials are rotated, and the attacker's ability to return is fully disrupted."

The initial access is said to have been followed by attempts to deploy web shells to establish a persistent foothold, and ultimately deploy Deed RAT using an evolved DLL side-loading technique that leverages the legitimate LogMeIn Hamachi binary to load and launch a rogue DLL that's responsible for executing the main payload.

"Unlike standard DLL side-loading that relies on simple file replacement, this method overrides two specific exported functions within the malicious library," Bitdefender explained. "This creates a two-stage trigger that gates the Deed RAT loader's execution through the host application's natural control flow, further evolving the defense evasion capabilities of traditional DLL side-loading."

The attackers have also been observed conducting lateral movement to broaden their access within the compromised network and establishing a redundant foothold to ensure resilience in the event that the activity is detected and removed.

The second wave, on the other hand, took place nearly a month after the initial intrusion, with the adversary unsuccessfully attempting to employ DLL side-loading to drop TernDoor by means of Mofu Loader, a shellcode loader previously attributed to GroundPeony.

The Azerbaijani firm was targeted a third time towards the end of February 2026, when the threat actors once again attempted to deploy a modified version of Deed RAT, indicating active efforts to refine and evolve their malware arsenal. This artifact uses "sentinelonepro[.]com" for command-and-control (C2).

"This intrusion should not be viewed as an isolated compromise, but as a sustained and adaptive operation conducted by an actor that repeatedly sought to regain and extend access within the victim environment," Bitdefender said. "Across multiple waves of activity, the same access path was revisited, new payloads were introduced, and additional footholds were established, underscoring a high degree of persistence and operational discipline."



from The Hacker News https://ift.tt/LiCWNxr
via IFTTT

What is Citrix DaaS Flex, and why desktop modernization needs a different operating model

Enterprise desktop modernization is no longer only about where desktops run. It is about how the environment is operated, how users are sized, how policies are enforced, and how much infrastructure work IT still owns after the move.

Many organizations are trying to move more workloads to cloud, improve resiliency, control cost, and reduce the infrastructure they manage directly. But the common answers often assume a cleaner environment than the one IT actually runs.

Cloud PC models can simplify planning, but they can also lead teams to overbuy. Some users need more. Many need less. Customer-managed cloud infrastructure gives IT more options, but it can leave the same teams responsible for sizing, patching, scaling, troubleshooting, and explaining the cost. Big-bang migrations add risk. And when experience issues show up across regions, devices, networks, or high-demand users, confidence in the plan starts to fade.

That is the core problem: enterprise desktop environments are not uniform. Different personas, workloads, applications, security requirements, and continuity needs rarely fit neatly into one standard desktop model.

A defensible DaaS strategy must prove three things. The experience must hold up in production, not just in a pilot. The operating model must reduce infrastructure work, not move it somewhere else. And security and resilience must remain clear as users, workloads, locations, and business requirements change.

Citrix DaaS Flex is built around those requirements.

Citrix DaaS Flex is a fully managed, persona-based Desktop-as-a-Service model hosted on Citrix-managed cloud infrastructure. Citrix operates the infrastructure layers while customers retain control over identity, applications, images, policies, and data.

That changes the operating model behind desktop modernization. Instead of asking teams to standardize every user, manage the cloud infrastructure themselves, or move everything at once, Citrix DaaS Flex gives organizations a managed, persona-based way to modernize in phases.

Operational management: reduce the infrastructure work

Most teams are not looking for a new place to run the same infrastructure work. They want to stop spending engineering cycles on platform layers that are not optimized for business outcomes.

With Citrix DaaS Flex, Citrix operates the platform layers that consume time and carry availability risk, including the control plane, access services, managed compute, patching, scaling, monitoring, and lifecycle operations.

Customers keep control of identity, networks, images, applications, policies, and data. That boundary lets IT focus on migration planning, image strategy, application readiness, policy design, support readiness, and user adoption.

User experience: match resources to the work

User experience problems often show up after the pilot, when the desktop must handle real networks, real peripherals, real applications, and users who do not all work the same way.

Citrix DaaS Flex lets organizations move by persona instead of treating every user as the same desktop requirement. Task workers, knowledge workers, power workers, contractors, and specialized users can be matched to the resources their work requires.

That helps teams avoid overbuilding for some users and under-serving others. Our market-leading Citrix HDX technology helps protect the experience after go-live, especially when users are working over high-latency or low-bandwidth connections, contending with packet loss, running unified communications workloads, or using a wide variety of endpoint devices.

Security and resilience: keep the control boundary clear

Security teams rarely object to cloud desktops in theory. They object when it is not clear who owns what, what can be enforced, and what can be audited once users are working.

Citrix DaaS Flex supports hybrid and multi-cloud architectures so organizations can reduce concentration risk, support disaster recovery planning, and align deployments to regulatory and data residency requirements.

Customers retain control over identity, policies, applications, and data while Citrix operates the platform layers underneath. That gives security and risk teams clearer boundaries as requirements change.

Modernization without the cutover risk

The goal is not simply to move desktops to cloud. The goal is to modernize desktop delivery without creating a new infrastructure, operations, security, or resilience problem for IT to absorb.

Citrix DaaS Flex gives organizations a practical path forward: start with the personas or use cases that are ready, validate experience and cost, keep control over the decisions that matter, and expand without forcing a single high-stakes migration event.

That gives IT fewer infrastructure layers to operate, EUC teams a more practical way to phase migration, security teams clearer control boundaries, and the organization a desktop model that can adapt as user and workload requirements change.

For a deeper technical view, read the companion Tech Zone article on how Citrix DaaS Flex extends existing Citrix environments and how teams can plan the first workloads to move.

To learn more, visit our Citrix DaaS Flex and Citrix Platform Flex documentation.

To see how Citrix compares across key DaaS use cases, download the complimentary Gartner® Critical Capabilities for Desktop as a Service report.



from Citrix Blogs https://ift.tt/IBl5aEL
via IFTTT

Microsoft Patches 138 Vulnerabilities, Including DNS and Netlogon RCE Flaws

Microsoft on Tuesday released patches for 138 security vulnerabilities spanning its product portfolio, although none of them have been listed as publicly known or under active attack.

Of the 138 flaws, 30 are rated Critical, 104 are rated Important, three are rated Moderate, and one is rated Low in severity. As many as 61 vulnerabilities are classified as privilege escalation bugs, followed by 32 remote code execution, 15 information disclosure, 14 spoofing, eight denial-of-service, six security feature bypass, and two tampering flaws.

The update list also includes a vulnerability that was patched by AMD (CVE-2025-54518, CVSS score: 7.3) this month. It relates to a case of improper isolation of shared resources within the CPU operation cache on Zen 2-based products that could allow an attacker to corrupt instructions executed at a different privilege level, potentially resulting in privilege escalation.

The patches also come on top of 127 security flaws that Google has addressed in Chromium, which forms the basis for Microsoft's Edge browser.

One of the most severe vulnerabilities patched by Redmond is CVE-2026-41096 (CVSS score: 9.8), a heap-based buffer overflow flaw impacting Windows DNS that could allow an unauthorized attacker to execute code over a network.

"An attacker could exploit this vulnerability by sending a specially crafted DNS response to a vulnerable Windows system, causing the DNS Client to incorrectly process the response and corrupt memory," Microsoft said. "In certain configurations, this could allow the attacker to run code remotely on the affected system without authentication."

Also fixed by Microsoft are several Critical- and Important-rated flaws -

  • CVE-2026-33109 (CVSS score: 9.9) - An improper access control in Azure Managed Instance for Apache Cassandra that allows an authorized attacker to execute code over a network. (Requires no customer action)
  • CVE-2026-42898 (CVSS score: 9.9) - A code injection vulnerability in Microsoft Dynamics 365 (on-premises) that allows an authorized attacker to execute code over a network.
  • CVE-2026-42823 (CVSS score: 9.9) - An improper access control in Azure Logic Apps that allows an authorized attacker to elevate privileges over a network.
  • CVE-2026-41089 (CVSS score: 9.8) - A stack-based buffer overflow in Windows Netlogon that allows an unauthorized attacker to execute code over a network without needing to sign in or have prior access by sending a specially crafted network request to a Windows server that is acting as a domain controller.
  • CVE-2026-33823 (CVSS score: 9.6) - An improper authorization in Microsoft Teams that allows an authorized attacker to disclose information over a network. (Requires no customer action)
  • CVE-2026-35428 (CVSS score: 9.6) - A command injection vulnerability in Azure Cloud Shell that allows an unauthorized attacker to perform spoofing over a network. (Requires no customer action)
  • CVE-2026-40379 (CVSS score: 9.3) - An exposure of sensitive information to an unauthorized actor in Azure Entra ID that allows an unauthorized attacker to perform spoofing over a network. (Requires no customer action)
  • CVE-2026-40402 (CVSS score: 9.3) - A use-after-free in Windows Hyper-V that allows an unauthorized attacker to gain SYSTEM privileges and access the Hyper-V host environment.
  • CVE-2026-41103 (CVSS score: 9.1) - An incorrect implementation of authentication algorithm in Microsoft SSO Plugin for Jira & Confluence that allows an unauthorized attacker to gain unauthorized access to Jira or Confluence as a valid user and perform actions with the same permissions as the compromised account.
  • CVE-2026-33117 (CVSS score: 9.1) - An improper authentication in Azure SDK that allows an unauthorized attacker to bypass a security feature over a network.
  • CVE-2026-42833 (CVSS score: 9.1) - An execution with unnecessary privileges in Microsoft Dynamics 365 (on-premises) that allows an authorized attacker to execute code over a network and gain the ability to interact with other tenants' applications and content.
  • CVE-2026-33844 (CVSS score: 9.0) - An improper input validation in Azure Managed Instance for Apache Cassandra that allows an authorized attacker to execute code over a network. (Requires no customer action)

"This critical elevation of privilege vulnerability allows an unauthorized attacker to impersonate an existing user by presenting forged credentials, thus bypassing Entra ID," Adam Barnett, lead software engineer at Rapid7, said about CVE-2026-41103.

Jack Bicer, director of vulnerability research at Action1, described CVE-2026-42898 as a critical flaw that allows an authenticated attacker with low privileges to run arbitrary code over the network by manipulating process session data within Dynamics CRM.

"With no user interaction required, and the potential to impact systems beyond the vulnerable component's original security scope, this vulnerability poses serious enterprise risk: an attacker with only basic access could turn a business application server into a remote execution platform," Bicer said.

"Compromise of Dynamics 365 infrastructure can expose customer records, operational workflows, financial information, and integrated business systems. Since CRM environments often connect with identity services, databases, and enterprise applications, successful exploitation could lead to broader organizational compromise and operational disruption."

Organizations are also advised to update Windows Secure Boot certificates to their 2023 counterparts ahead of next month, when the 2011-issued certificates are set to expire. Microsoft first announced the change in November 2025.

"The most critical non-CVE update involves the mandatory rollout of updated Secure Boot certificates," Rain Baker, senior incident response specialist at Nightwing, said. "Devices failing to receive these updates before the June 26 deadline face 'catastrophic boot-level security failures' or degraded security states. Ensure your entire fleet successfully rotates to the new trust anchors before the June 26, 2026, deadline."

According to Satnam Narang, senior staff research engineer at Tenable, Microsoft has already patched over 500 CVEs five months into the year. This large volume of fixes reflects a broader industry trend in which vulnerability discovery has reached new highs, with a growing share of flaws flagged via artificial intelligence (AI)-powered approaches.

Microsoft, in a report published Tuesday, said AI-assisted vulnerability discovery is expected to increase the scale of Patch Tuesday releases in the coming months, adding that 16 of the flaws fixed this month across the Windows networking and authentication stack were identified through its new multi-model AI-driven vulnerability discovery system, codenamed MDASH (short for multi-model agentic scanning harness).

"In this month's release, a greater share of the issues addressed were discovered by Microsoft, compared to prior months," Tom Gallagher, vice president of engineering at Microsoft Security Response Center, said. "Many of these were surfaced through AI investments and investigations across our engineering and research teams, including the use of Microsoft's new multi-model AI-driven scanning harness."

Microsoft also emphasized that the scale and speed of vulnerability discovery brought about by AI can raise operational demands and require a consistent, disciplined approach to risk management to ensure that issues are mitigated quickly and fixed in a timely fashion.

"Stay current on supported operating systems, products, and patches, and revisit the speed and consistency of your patching cadence," Gallagher said. "Triage by exposure and impact, not raw count."

Other recommendations outlined by Microsoft include reducing unnecessary internet exposure, improving configuration hygiene, removing legacy authentication, enabling multi-factor authentication (MFA), enforcing strong access controls, segmenting environments to contain incidents, and investing in detection and response.

"The work of finding and fixing vulnerabilities continues to get faster, broader, and more rigorous across the industry," the tech giant said. "What we encourage in turn is a thoughtful look at whether the practices that worked well for the patching landscape of a few years ago are still well matched to where the landscape is heading."

"The fundamentals have not changed. The pace at which they need to be applied is changing, and the organizations that adjust with it will be the ones best positioned for what comes next."



from The Hacker News https://ift.tt/GTMKUWm
via IFTTT

Breaking things to keep them safe with Philippe Laulheret

In the latest Humans of Talos, Amy sits down with Senior Vulnerability Researcher Philippe Laulheret to demystify the world of ethical hacking. Philippe shares his unique journey from French engineering school to the front lines of cybersecurity, explaining how his lifelong love for solving puzzles helps him uncover critical security flaws before they can be exploited.

From his memorable experiment using a green onion to bypass a biometric fingerprint reader to his perspective on the reality of cybersecurity versus what we see in the movies, Philippe provides a fascinating look at the work that keeps our digital world safe.

Amy Ciminnisi: So, can you talk to me a little bit about what you do in vulnerability research?

Philippe Laulheret: I work in vulnerability research. Basically, my job is to find vulnerabilities in software, hardware, or things physically. It’s an interesting position because I usually get to choose which target I want to look at, which confuses people usually, because usually it’s a consulting role, or someone asks you to do that. But for us, we find vulnerabilities in things that we think are important. And then this way, people in different teams can write detection rules, and our customers are protected.

AC: I love that you get to kind of pick a niche and explore. How did you get into this?

PL: My deepest interest was more in reverse engineering, which is understanding how things work, software in particular. Throughout my whole life, I was really curious and really wanted to understand stuff. I guess research is an extension of that where you need to understand how the system works, and then it’s a puzzle where you need to find a way to break it. In my teenage years, I was really interested in that. I started playing Capture The Flag, which are challenges where people design exercises where there is a bug to find and exploit. It was really fun. I was doing that to stay sharp with my skills, and eventually, I was able to transition from regular development work to actual research. All those years of playing CTF really helped, even if it wasn't professional.

AC: Did you go to school initially for development work? What kind of career path led you here?

PL: Originally, as you can hear, I have a French accent. In France, we have engineering schools, which are fancy grad schools. The process is first you study very hard in math and physics, and then you go to grad school. At that time, I was convinced I wanted to do security, and I joined an electrical and computer engineering school. Somehow, in that school, I discovered an interest for different aspects of software development. I was getting interested in computer vision and other things. When I moved to the U.S. for development work instead of security work, I worked in a design studio for four years, which was really fun. I was making interactive installations. But as I said, I was playing CTF on the side to keep security pretty high in my head. Eventually, I moved to New York and joined a cybersecurity startup, and finally, I moved back to the Pacific Northwest, where I’m currently living, and I was finally able to do vulnerability research the way I wanted to.


Want to see more? Watch the full interview, and don’t forget to subscribe to our YouTube channel for future episodes of Humans of Talos.



from Cisco Talos Blog https://ift.tt/bhFVpsl
via IFTTT

NIST Narrows the NVD: What Container Security Programs Should Reassess

On April 15, NIST announced a prioritized enrichment model for the National Vulnerability Database. Most CVEs will still be published, but fewer will receive the CVSS scores, CPE mappings, and CWE classifications that container scanners and compliance programs have historically relied on.

The change formalizes a drift that has been visible to anyone pulling NVD feeds for the past two years. What shifted on April 15 is the expectation: NIST has now said plainly that it does not intend to return to full-coverage enrichment. For programs that built scanning, prioritization, and SLA workflows around the assumption that NVD sits as the authoritative secondary layer on top of CVE, that assumption is worth a structured review.

What changed

Three categories of CVEs will continue to receive full enrichment:

  • CVEs in CISA’s Known Exploited Vulnerabilities catalog, targeted within one business day
  • CVEs affecting software used within the federal government
  • CVEs affecting “critical software” as defined by Executive Order 14028

Everything else moves to a new “Not Scheduled” status. Organizations can request enrichment by emailing nvd@nist.gov, though no service-level timeline applies. NIST has also stopped duplicating CVSS scores when the submitting CNA provides one, and all unenriched CVEs published before March 1, 2026 have been moved into “Not Scheduled.”

The NIST volumes behind the decision

NIST cited a 263% increase in CVE submissions between 2020 and 2025, with Q1 2026 running roughly a third higher than the same period a year earlier. The rise tracks with a broader expansion in CVE numbering: more CNAs, more open source projects running their own disclosure processes, and more tooling surfacing issues that would not have reached CVE a few years ago.

Year | Published CVEs | Source
2023 | ~29,000 | CVE.org
2024 | ~40,000 | CVE.org
2025 | ~48,000 | NIST
2026 (forecast) | ~59,500 (median) | FIRST

AI is a compounding factor on both sides of this curve. In January, curl founder Daniel Stenberg shut down the project’s HackerOne bug bounty after six and a half years, citing “death by a thousand slops”: AI-generated reports that read like real research but described vulnerabilities that didn’t exist. Node.js, Django, and others have tightened intake under similar pressure. On the signal side, Anthropic’s April announcement of Claude Mythos Preview described a model that autonomously discovered thousands of zero-day vulnerabilities across every major operating system and web browser, including a 17-year-old unauthenticated RCE in FreeBSD’s NFS server. Earlier Anthropic research documented Claude Opus 4.6 finding and validating more than 500 high-severity vulnerabilities in production open source.

More noise and more real signal are heading toward the same pipeline. NIST enriched roughly 42,000 CVEs in 2025, its highest annual total, and still fell further behind incoming volume.

How it lands in compliance

The operational question is what programs have to document when NVD scoring is not available, and how consistently that documentation holds up across assessments.

Framework | NVD reference | Likely effect
FedRAMP | NVD CVSSv3 as original risk rating, with CVSSv2 and native scanner score as documented fallbacks | More variance in how remediation SLAs are applied across CSPs
PCI-DSS 4.0 | Req. 11.3.2 external scans reference CVSS; ASV guidance points to NVD | More ambiguity on pass/fail determinations for unscored findings
NIST SP 800-53 (RA-5) | Lists NVD as an example source; permissive language | Lower direct impact, though auditors commonly expect CVSS-based severity evidence
DORA / SOC 2 | No direct reference | Principles-based; audit expectations around severity rationale still apply
None of these frameworks break on their own. Mature vulnerability management programs generally have language in their SSPs and risk registers covering fallback scoring and exception handling. Programs that do not will likely need it before their next audit cycle.

The gap that is relevant to the container ecosystem

Two NVD inputs matter most for container scanning:

  • CPE applicability statements map a CVE to specific software packages. When CPE strings are missing, a scanner that matches primarily on CPE cannot determine which packages in an image are affected. The CVE exists in the database but is operationally invisible to the scan.
  • CVSS scores drive prioritization and SLA routing. Without a score, a CVE may surface as UNKNOWN severity or fall outside remediation workflows entirely.

Container images create a compounding effect here. Each image inherits packages from a base layer, application dependencies, and often a long transitive dependency chain. When any of those packages carries a CVE that NVD has not enriched, the gap propagates through every downstream image built on top of it. Scanners that draw on multiple advisory sources, and that match on package identifiers other than CPE, are less exposed.
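
One way to keep unscored CVEs from silently dropping out of remediation workflows is to document an explicit fallback order for the severity basis. The sketch below is illustrative only, with hypothetical field names and a hypothetical CVE ID; it is not any particular scanner's logic.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Finding:
    cve_id: str
    nvd_cvss: Optional[float]  # NVD-enriched score, if any
    cna_cvss: Optional[float]  # score supplied by the issuing CNA, if any

def severity_basis(finding: Finding) -> Tuple[str, Optional[float]]:
    """Pick a documented severity basis instead of letting unscored CVEs vanish from SLAs."""
    if finding.nvd_cvss is not None:
        return "nvd", finding.nvd_cvss
    if finding.cna_cvss is not None:
        return "cna", finding.cna_cvss
    # No numeric score available: keep the finding visible and route it to manual triage.
    return "manual-triage", None

print(severity_basis(Finding("CVE-2026-0001", None, 8.1)))  # hypothetical CVE ID -> ('cna', 8.1)
```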

Questions worth putting to image vendors

  • What advisory sources does your tooling use beyond NVD?
  • When a CVE has no NVD CVSS score, what does the tool display, and does it trigger remediation workflows?
  • How do you define “patched,” and is that definition in your written CVE policy?
  • Are your remediation SLAs measured from CVE disclosure date or NVD enrichment date?
  • Can a third-party scanner reproduce your clean-scan result against public advisory data?

Where Docker sits

Docker Hardened Images are designed so that vulnerability management in container workloads does not depend primarily on NVD enrichment. Each image ships with signed attestations for build provenance, SBOMs in both CycloneDX and SPDX formats, OpenVEX exploitability statements, and scan results. SBOMs are generated from the SLSA Build Level 3 pipeline rather than inferred from external databases, so package inventory is accurate regardless of NVD’s enrichment state. Hardened System Packages allow package-level patching independent of upstream distribution timelines, which means remediation is not gated on a distribution maintainer’s release cadence or on an NVD analyst’s queue. When a CVE is not exploitable in a specific image context, that assessment is published as a signed VEX document that third-party scanners including Trivy, Grype, and Wiz consume natively.

Docker Scout, the scanning layer that reads these attestations, aggregates 22 advisory sources including NVD, CISA KEV, EPSS, GitHub Advisory Database, and 13 Linux distribution security trackers. Scout matches on Package URLs (PURLs) rather than NVD’s CPE scheme, which allows package identification to continue when CPE strings are unavailable. NVD remains a valuable input to this architecture, one of several rather than the spine.
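
For intuition on why identifier choice matters, here is a deliberately simplified sketch of matching SBOM packages to advisories by exact Package URL, with no CPE involved. It is not Docker Scout's implementation; real matchers also handle qualifiers, version ranges, and ecosystem-specific normalization.

```python
def parse_purl(purl: str):
    """Very simplified PURL parsing for exact matching: pkg:type/namespace/name@version.

    Real-world matchers also handle qualifiers, subpaths, epochs, and version ranges.
    """
    body = purl.removeprefix("pkg:")
    coords, _, version = body.partition("@")
    return coords.lower(), version

def affected(sbom_purls, advisory_purls):
    """Return SBOM packages whose PURL exactly matches an advisory PURL."""
    advisories = {parse_purl(p) for p in advisory_purls}
    return [p for p in sbom_purls if parse_purl(p) in advisories]

print(affected(
    ["pkg:deb/debian/openssl@3.0.11-1", "pkg:npm/lodash@4.17.21"],
    ["pkg:npm/lodash@4.17.21"],
))
```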

What to reassess

Audit open findings against the March 1, 2026 cutoff. Any CVE published before that date that has not received NVD enrichment has already been moved to “Not Scheduled.” Programs carrying open findings tied to those CVEs may have severity scores and CPE mappings in their trackers that no longer reflect an active NVD record. Verify that the scoring basis for those findings is documented and defensible independent of NVD.
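
A lightweight way to run that audit is to query the NVD 2.0 API for each open finding and record whether CVSS metrics and CPE configurations are present. The sketch below assumes the commonly documented response structure of the public NVD API; verify the field names and current rate limits before wiring it into a pipeline.

```python
import json
import time
import urllib.request

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0?cveId={cve}"

def enrichment_status(cve_id: str) -> dict:
    """Report whether a CVE record carries CVSS metrics and CPE configurations.

    Field names follow the NVD 2.0 API as generally documented; confirm against
    the current API reference before relying on this in an audit pipeline.
    """
    with urllib.request.urlopen(NVD_API.format(cve=cve_id)) as resp:
        data = json.load(resp)
    records = data.get("vulnerabilities", [])
    if not records:
        return {"cve": cve_id, "found": False}
    cve = records[0]["cve"]
    return {
        "cve": cve_id,
        "found": True,
        "has_cvss": bool(cve.get("metrics")),
        "has_cpe": bool(cve.get("configurations")),
    }

if __name__ == "__main__":
    for cve in ["CVE-2021-44228"]:   # replace with the CVE IDs from your open findings
        print(enrichment_status(cve))
        time.sleep(6)                # conservative pause; check current unauthenticated rate limits
```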

For programs running DHI, the NVD policy change does not require an operational response. For programs evaluating container security vendors more broadly, the question worth elevating in the next procurement cycle is whether NVD is one source of vulnerability intelligence in their stack, or the primary one.

The NVD will continue to play a role. That role is narrowing, and the signals suggest it will keep narrowing. Programs that use the April announcement as a prompt to audit their data sources now will have a cleaner answer the next time a regulator, an auditor, or a board asks where their vulnerability data actually comes from.




from Docker https://ift.tt/c5AvywW
via IFTTT

GemStuffer Abuses 150+ RubyGems to Exfiltrate Scraped U.K. Council Portal Data

Cybersecurity researchers are calling attention to a new campaign dubbed GemStuffer that has targeted the RubyGems repository with more than 150 gems that use the registry as a data exfiltration channel rather than for malware distribution.

"The packages do not appear designed for mass developer compromise," Socket said. "Many have little or no download activity, and the payloads are repetitive, noisy, and unusually self-contained."

"Instead, the scripts fetch pages from U.K. local government democratic services portals, package the collected responses into valid .gem archives, and publish those gems back to RubyGems using hardcoded API keys."

The development comes as RubyGems temporarily disabled new account registration following what has been described as a major malicious attack. While it's not clear if the two sets of activities are related, the application security company said GemStuffer fits the "same abuse pattern," which involves using newly created packages with junk names to host the scraped data.

At a high level, the campaign abuses RubyGems as a place to stage the scraped council content. It does this by fetching hard-coded U.K. council portal URLs, packaging the HTTP responses into valid .gem archives, and publishing those archives to RubyGems using embedded registry credentials.

In some cases, the payload embedded within the gem creates a temporary RubyGems credential environment under "/tmp," overrides the HOME environment variable, builds a gem locally, and pushes it to RubyGems using the gem command-line interface (CLI), as opposed to depending on pre-existing RubyGems credentials on the target machine.

Other variants of the malicious gems have been found to eschew the CLI component in favor of uploading the archive directly to the RubyGems API via an HTTP POST request. Once the new gems have been published, all an attacker has to do is run a "gem fetch" command with the gem name and version to access the scraped data.

The novel scraping campaign has been found to target public-facing ModernGov portals used by Lambeth, Wandsworth, and Southwark, with the aim of collecting committee meeting calendars, agenda item listings, linked PDF documents, officer contact information, and RSS feed content. It's not clear what exactly the end goals are, as the information appears to be publicly accessible anyway.

Socket has assessed that the systematic bulk collection and archival of this data raises the possibility that the attacker may be leveraging the "council portal access as a pivot to demonstrate capability against government infrastructure."

"It may be registry spam, a proof-of-concept worm, an automated scraper misusing RubyGems as a storage layer, or a deliberate test of package registry abuse," Socket said. "But the mechanics are intentional: repeated gem generation, version increments, hardcoded RubyGems credentials, direct registry pushes, and scraped data embedded inside package archives."



from The Hacker News https://ift.tt/KWgxVN2
via IFTTT

Tuesday, May 12, 2026

Docker AI Governance: Unlock Agent Autonomy, Safely

Introducing Docker AI Governance: centralized control over how agents execute, what they can reach on the network, which credentials they can use, and which MCP tools they can call, so every developer in your company can run AI agents safely, wherever they work.

Your laptop is the new prod

Agents are the biggest productivity unlock the modern workplace has seen in a generation, and engineering is where the shift is most obvious. Developers aren’t using agents to autocomplete a function anymore. They’re using them to read whole codebases, refactor across services, and ship entire products, end to end. Vibe coding is real, it’s shipping to main, and it’s happening on laptops everywhere today.

The same shift is moving through every other function. A new class of agents called Claws is already in production, sending emails, managing calendars, booking travel, pulling CRM data, reconciling reports, and querying production systems. Marketing, finance, sales, and support are adopting them as fast as engineering is, because the productivity gains are too large to ignore and the companies that move first will out-execute the ones that don’t. Org-wide rollouts that used to take quarters are landing in weeks.

What’s more interesting than the speed of adoption is where all of this actually runs. Agents and Claws live outside the systems enterprises spent two decades hardening. They don’t sit behind your CI/CD pipeline, they don’t live inside your VPC, and they don’t follow your IAM model. They run on the developer’s machine, with the developer’s credentials, reaching into private repos, production APIs, customer records, and the open internet, often in the same session. The laptop just became the most powerful node in your enterprise, and it also became the most exposed. Laptop and agent environments are the new prod, and they need to be governed like prod.

What governance actually has to solve

The instinct in most enterprises is to reach for the tools that already exist, but none of them see what an agent is doing. CI/CD doesn’t see it because the agent isn’t a pipeline. The VPC doesn’t see it because the laptop is outside the perimeter. IAM doesn’t see it because the agent is acting as the developer. The result is that CISOs can’t tell what an agent touched, what it ran, or where the data went, and they also can’t tell the business to slow down. This is the bind every security leader is in right now.

Strip the problem to first principles and an agent has two paths to do significant harm. It either executes code itself, touching files and opening network connections, or it calls a tool through an MCP server to act on an external system. Govern both paths and you’ve governed the agent. Miss either one and you haven’t.

That’s the test for any AI governance solution worth taking seriously, and it has two parts. The controls have to live at the runtime layer where the agent actually executes, not as advisory rules layered on top that a clever prompt can route around. And they have to work consistently wherever the agent ends up running, because agents don’t stay on the laptop. They migrate to CI runners, to staging clusters, to production. A policy that only holds in one of those places is a gap waiting to be found.

Why Docker

Docker is the only company that meets both parts of that test, and the reason is structural.

Docker built the sandbox that contains the first path. Every agent session runs inside a microVM-based isolated environment where filesystem and network access are controlled by a hard boundary, which means enforcement happens at the level of the process, not as a suggestion the agent can ignore. Docker built the MCP Gateway that contains the second path. Every tool call routes through a single chokepoint where it can be authenticated, authorized, and logged before it reaches the external system. These primitive-level controls, Docker Sandboxes and the Docker MCP Gateway, make enforcement strict instead of advisory. We own the substrate the agent is running on, so the policy isn't a wrapper around someone else's runtime, it's the runtime.

The second part is what makes this durable. The same sandbox primitive runs on the developer’s laptop, inside Kubernetes, and across cloud environments, with the same policy model and the same enforcement guarantees. When an agent moves from a developer’s machine to a CI runner to a production cluster, the policy moves with it, because the runtime underneath is the same in all three places. No other vendor can say that, because no other vendor is the runtime. Endpoint security tools don’t extend to clusters. Cluster security tools don’t reach the laptop. Cloud security tools don’t run on either. Docker covers all three because Docker is what’s actually executing the agent in all three.

Docker AI Governance is the control plane that sits on top of that runtime. It turns the sandbox and the MCP Gateway into centralized policy, defined once in the admin console, enforced at every node the agent touches, and auditable from end to end.

How Docker AI Governance works

From a single admin console, security teams define and enforce policy across four control surfaces: network, filesystem, credentials, and MCP tools. One policy layer that doesn’t need a per-machine setup and that consistently works across thousands of developers.

Sandbox policy for network and filesystem. Admins define allow and deny rules for domains, IPs, and CIDRs, alongside mount rules for filesystem paths with read-only or read-write scope. Every agent session runs inside an isolated sandbox where only approved endpoints are reachable and only approved directories are mountable, with enforcement happening at the proxy and mount level rather than as an advisory layer the agent can ignore.

Credential governance. Agents are dangerous in proportion to what they can authenticate as, so Docker AI Governance controls which credentials, tokens, and secrets an agent session can see, scopes them to the duration of that session, and blocks exfiltration to unapproved destinations. Developers stop pasting tokens into prompts, and security stops wondering where those tokens ended up.

MCP tool governance. Admins control which MCP servers and tools are available through organization-wide managed policies, with unapproved servers blocked by default. Every MCP call flows through the same policy engine as network, filesystem, and credential requests, so there’s no separate surface to configure and no bypass path.

Role-based policy assignment. Different teams need different levels of access, and security research will reasonably require broader MCP usage than finance. Create policy groups, assign users through your IdP, and layer team-specific rules on top of organization-wide guardrails that can’t be overridden. It scales to thousands of developers through existing SAML and SCIM integrations with no per-user setup.
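
To make the shape of such a policy concrete, here is a purely hypothetical sketch of an organization-wide policy with a team-specific overlay, expressed as Python data for readability. The field names and structure are invented for illustration and do not reflect Docker AI Governance's actual configuration schema or admin console format.

```python
# Hypothetical policy sketch: field names and values are illustrative only and do not
# reflect Docker AI Governance's actual configuration schema.
ORG_POLICY = {
    "network": {
        "allow": ["github.com", "pypi.org", "10.0.0.0/8"],
        "deny": ["*"],                       # default-deny everything not explicitly allowed
    },
    "filesystem": {
        "mounts": [
            {"path": "~/src", "mode": "rw"},
            {"path": "~/docs", "mode": "ro"},
        ],
    },
    "credentials": {
        "allowed_secrets": ["GITHUB_TOKEN"],
        "session_scoped": True,              # secrets expire with the agent session
    },
    "mcp": {
        "approved_servers": ["github", "postgres"],
        "block_unapproved": True,
    },
}

# Team overlay layered on top of the org-wide guardrails (which cannot be overridden).
SECURITY_RESEARCH_OVERLAY = {
    "mcp": {"approved_servers": ["github", "postgres", "shodan"]},
}
```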

Audit and visibility. Every policy evaluation generates a structured event with user identity, timestamp, session context, and the rule that triggered the decision, and logs export cleanly to your existing SIEM and compliance systems. This is the evidence CISOs need to approve AI usage at scale rather than tolerate it under the table.

Automatic policy propagation. When a developer authenticates, their machine pulls the latest policy, and updates reach every device automatically. Admins define policy once and Docker enforces it everywhere.

What this unlocks

CISOs get the governance layer they've been missing and the confidence to approve agent usage at scale rather than block it. Platform teams get an easy way to set up governance: define a policy once and have it enforced everywhere with full auditability. This removes the operational burden of scaling AI adoption across the company. Developers get what agents promised in the first place: real speed and autonomy, with governance that stays out of the way. We built Docker AI Governance with these principles in mind: agents should be autonomous and governance should be invisible.

Available today

Docker AI Governance is available now. If you’re a security leader trying to close the AI governance gap, or a platform team ready to roll out agents without compromising control, it was built for you.

Contact sales to learn more.



from Docker https://ift.tt/vO5UJet
via IFTTT

RubyGems Suspends New Signups After Hundreds of Malicious Packages Are Uploaded

RubyGems, the standard package manager for the Ruby programming language, has temporarily paused account sign-ups following what has been described as a "major malicious attack."

"We're dealing with a major malicious attack on Ruby Gems right now," Maciej Mensfeld, senior product manager for software supply chain security at Mend.io, said in a post on X. "Signups are paused for the time being. Hundreds of packages involved – mostly targeting us, but some carrying exploits."

Visitors to RubyGems' sign up page are now greeted with the message: "New account registration has been temporarily disabled."

Mend.io, which secures RubyGems, said it intends to release more details once the incident is contained. It's currently not known who is behind the attack.

The development comes as software supply chain attacks targeting open-source ecosystems have been on the rise, with threat actors like TeamPCP compromising widely used packages to distribute credential-stealing malware capable of harvesting sensitive data and allowing the attackers to expand their reach.

In a report published Monday, Google said the credentials stolen from affected environments have been monetized through partnerships with ransomware and data theft extortion groups.

(This is a developing story. Please check back for more details.)



from The Hacker News https://ift.tt/VXhlYZb
via IFTTT

Defending consumer web properties against modern DDoS attacks

If you own, create, or maintain online services and web portals, you’re probably aware of the dramatic upswing in DDoS attacks on your domains. AI has democratized tooling not just for us but for threat actors as well. DDoS in this era has extended from simple bandwidth saturation to sophisticated, application-layer abuse. Defending against this activity now requires system-level design, beyond just the typical network-level filtering. As botnets continue to expand their footprint and evade identification, it is important for us to take a step back, assess the situation, and take a defense-in-depth approach to increase our resilience against this class of disruption.

DDoS activity across Bing and other online services at Microsoft has seen a large uptick in the past five to six years. As reported in the Microsoft Digital Defense Report 2025, Microsoft now processes more than 100 trillion security signals, blocks approximately 4.5 million new malware attempts, analyzes 38 million identity risk detections, and screens 5 billion emails for malicious content each day. This helps illustrate both the breadth of modern attack surfaces and the automation cyberattackers can now wield at industrial scale. When we narrow in specifically on DDoS, an even clearer trend emerges: beginning in mid-March of 2024, Microsoft observed a rise in network DDoS attacks that eventually reached approximately 4,500 cyberattacks per day by June 2024. And this persistent volume was paired with a shift toward more stealthy application-layer techniques.

In my role as Vice President, Intelligent Conversation and Communications Cloud Platform at Microsoft, I focus on helping the Microsoft AI and Bing teams build systems that are safe, resilient, and worthy of user trust, even under the sustained pressure we’re receiving from today’s cyberattackers. Whether you are responsible for a single public website or a large portfolio of consumer-facing applications, defending against modern DDoS attacks means more than just absorbing traffic. It means building defense-in-depth robust enough that, even if some attack traffic gets through, your service stays usable for the people who rely on it.

The nature of modern DDoS attacks

Early DDoS attacks were largely about volume. Cyberattackers would flood a target with traffic in an attempt to saturate network capacity and force an outage. While volumetric attacks still happen, most large services now have baseline protections that make this approach less effective on its own.

Modern DDoS attacks are more nuanced. They are often multi-vector, with a single campaign potentially including network-layer floods and application-layer abuse at the same time. Along with the exponential increase in scale, these cyberattacks are also becoming more tailored, built to stress specific applications and user flows. Application-layer attacks are gaining popularity because they are harder to distinguish from legitimate usage.

We also see threat actors utilizing a broader range of devices in botnets, including consumer Internet of Things (IoT) devices and misconfigured cloud workloads. In some cases, cyberattackers abuse legitimate cloud infrastructure to generate traffic that blends in with normal usage patterns. Edge systems, such as content delivery networks (CDNs) and front-door routing services, are increasingly targeted because they sit at the boundary between users and applications.

When attack traffic looks like normal user traffic, typical network-level blocklists aren’t very effective. You need sophisticated fingerprinting (starting with JA4), layered controls, and good operational visibility. This evolution is part of what makes defending against DDoS more than a networking problem. It is now a system design problem, an operational monitoring problem, and ultimately a trust problem.

A defense-in-depth framework

Even if you block 95% of malicious traffic, the remaining 5% can still be enough to take you down if it hits the right bottleneck. That’s why defense-in-depth matters.

A strong defensive posture starts with making abnormal traffic easier to spot and harder to exploit. Techniques like rate limiting, geo-fencing, and basic anomaly detection remain foundational. They are most effective when tuned to your specific traffic patterns. Cloud-native DDoS protection services play an important role here by absorbing large-scale attacks and surfacing telemetry that helps teams understand what is happening in real time. If you run on Azure, there are built-in options that can help when used as part of a broader design. Azure DDoS Protection is designed to mitigate network-layer cyberattacks and is intended to be used alongside application design best practices. At the edge, services like Azure Web Application Firewall (WAF) on Azure Front Door can provide centralized request inspection, managed rule sets, geo-filtering, and bot-related controls to reduce malicious traffic before it reaches your origins.
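
To ground the rate-limiting point, here is a minimal per-client token-bucket sketch. The rate and burst values are placeholders that would need to be tuned to your own traffic baselines, and production enforcement would normally happen at the edge (for example, in a WAF or API gateway policy) rather than in application code.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-client token bucket; rate and burst are placeholders to tune to your own baselines."""

    def __init__(self, rate_per_sec: float = 5.0, burst: float = 20.0):
        self.rate = rate_per_sec
        self.burst = burst
        self.tokens = defaultdict(lambda: burst)    # new clients start with a full bucket
        self.last = defaultdict(time.monotonic)     # last time each client was seen

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last[client_id]
        self.last[client_id] = now
        # Refill tokens based on elapsed time, capped at the burst size.
        self.tokens[client_id] = min(self.burst, self.tokens[client_id] + elapsed * self.rate)
        if self.tokens[client_id] >= 1.0:
            self.tokens[client_id] -= 1.0
            return True
        return False

limiter = TokenBucket()
print(limiter.allow("203.0.113.7"))  # True until the client exhausts its budget
```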

Microsoft publishes a range of Secure Future Initiative (SFI) guidance and engineering blogs that describe patterns we use internally to harden consumer services at scale. If you're looking to assess how robust your site's current DDoS resilience posture is, here's a simple tabular framework to work from:

Level 1: Exposed (Direct Origin/No CDN)
  • Attributes and characteristics: Architecture: Monolithic; origin IP exposed through DNS A-records. Detection: Manual log analysis post-incident; reactive alerts on server CPU spikes. Mitigation: Null-routing by ISP (taking the site offline to save the network); manual firewall rules. Key signal: Immediate 503 errors during minor surges.
  • Readiness posture (availability and latency): Fragile/Volatile. Availability: Single point of failure; zero resilience to volumetric or L7 attacks. Latency: Highly variable; degrades linearly with traffic load. Recovery: Hours to days (manual intervention required).
  • Risk profile (CISO perspective): Critical/Existential. Residual risk: High; the organization accepts that any motivated attacker can cause total outage. Financial impact: Direct revenue loss proportional to downtime. Reputation: Severe damage; loss of customer trust.

Level 2: Basic Protection (Commodity CDN/Volumetric Shield)
  • Attributes and characteristics: Architecture: Static assets cached at edge; origin cloaked. Detection: Threshold-based volumetric alerts (for example, more than 1 Gbps). Mitigation: "Always-on" scrubbing for L3/L4 floods; basic geo-blocking. Key signal: Survival of SYN floods, but failure under HTTP floods.
  • Readiness posture (availability and latency): Defensive/Static. Availability: Resilient to network floods; vulnerable to application exhaustion. Latency: Improved for static content; poor for dynamic attacks. Recovery: Minutes (automated scrubbing activation).
  • Risk profile (CISO perspective): High/Managed. Residual risk: Moderate-high; application logic remains a soft target. Blind spot: Sophisticated bots bypass volumetric triggers. Compliance: Meets basic continuity requirements but fails resilience stress tests.

Level 3: Advanced Edge (Intelligent Filtering/WAF)
  • Attributes and characteristics: Architecture: Edge compute; dynamic web application firewall (WAF); API gateway enforcement. Detection: Signature-based (JA3/JA4 fingerprinting); User-Agent analysis. Mitigation: Rate limiting by fingerprint/behavior; CAPTCHA challenges. Key signal: High block rate of "bad" traffic with low false positives.
  • Readiness posture (availability and latency): Proactive/Robust. Availability: High availability for most attack vectors, including low-and-slow. Latency: Consistent; edge mitigation prevents origin saturation. Recovery: Seconds (automated policy enforcement).
  • Risk profile (CISO perspective): Medium/Controlled. Residual risk: Medium; shifts to "sophisticated bot" risk (bots mimicking humans). Focus: Quality of service (QoS) and reducing false positives. Investments: Shift from hardware to threat intelligence feeds.

Level 4: Resilient Architecture (Graceful Degradation/Bulkheading)
  • Attributes and characteristics: Architecture: Circuit breakers; load-shedding logic; defense-in-depth. Detection: Service-level health checks; dependency failure monitoring; outlier detection; trust scores. Mitigation: Challenges/CAPTCHAs; automated feature toggling for service degradation (for example, disable "Reviews" to save "Checkout"). Key signal: "Limited impact to availability" during massive events.
  • Readiness posture (availability and latency): Resilient/Adaptive. Availability: Core functions remain online; non-critical features degrade. Latency: Controlled degradation; critical paths prioritized. Recovery: Real-time (system self-stabilization).
  • Risk profile (CISO perspective): Low/Tolerable. Residual risk: Low; the business accepts degraded functionality to preserve revenue. Narrative: "We operated through the attack with minimal user impact." Risk appetite: Aligned with business continuity tiers.

Level 5: Autonomous Defense (AI-Powered/Predictive)
  • Attributes and characteristics: Architecture: Serverless edge logic; multi-CDN failover; chaos engineering. Detection: AI and machine learning predictive modeling; zero-day pattern recognition. Mitigation: Autonomous policy generation; preemptive scaling. Key signal: Attack neutralized before human operator awareness.
  • Readiness posture (availability and latency): Antifragile/Optimized. Availability: Near 100% through multi-redundancy and predictive scaling. Latency: Optimized dynamically based on threat level. Recovery: Instantaneous/preemptive.
  • Risk profile (CISO perspective): Minimal/Strategic. Residual risk: Very low; focus shifts to supply chain and novel vectors. Posture: Continuous improvement through red teaming and chaos experiments. Leadership: The chief information security officer (CISO) drives industry intelligence sharing.

Planning for graceful degradation

One of the most common misconceptions about DDoS defense is that success means “no reduction in services.” In reality, even a partially successful attack can degrade performance enough to frustrate users or erode trust, without triggering a full outage. Graceful degradation is about maintaining core functionality even when systems are under stress. It means being deliberate about which user flows must remain available and which can be temporarily limited without causing disproportionate harm.

For example, our systems prioritize core scenarios over secondary features during extremely large cyberattacks. In practice, this can mean temporarily delaying nonessential personalization or shedding load from less critical features to preserve overall responsiveness. These decisions are made in advance and tested, not improvised during an incident. Here’s an example of how we might do that:

  • Prioritizing core user flows: We would focus on keeping core scenarios responsive. That might mean protecting one or two core scenarios while de-emphasizing secondary experiences.
  • Reducing expensive work first: Some parts of an experience are computationally heavier. Under attack pressure, those are candidates for temporary reduction, so the overall service stays usable.
  • Tiered experience under load: In extreme conditions, you can provide a better experience for users with higher trust signals while still offering an acceptable experience to everyone else. This is not about punishing lower trust users. It is about making sure your system can still serve legitimate demand when resources are constrained.
  • Clear user messaging: If you need to disable or simplify a feature temporarily, communicate it in a way that is honest and calm. You do not need to explain your internal architecture. You do need to be predictable.

Designing for resilience means assuming that individual components will fail or be stressed at some point. Systems that are built with that expectation tend to recover faster and maintain user trust more effectively than systems that aim for perfect uptime at all costs.

Get started improving your DDoS defense

If I could leave you with a single practical concept, it would be this: treat DDoS as a normal operating condition for internet-facing services. Build defense in depth. Assume some cyberattack traffic will get through. Design your service so it can degrade gracefully while protecting the user experiences that matter most.

Consumer trust is fragile and hard-earned. Developers and operators who think beyond raw availability, and who design for transparency, prioritization, and resilience, are better positioned to handle the realities of today’s cyberthreat landscape. Modern defensive strategies combine proactive controls, thoughtful architecture, and a clear understanding of what matters most to users.

For those interested in going deeper, I encourage you to explore the Secure Future Initiative resources and the other Office of the CISO blogs provided by my peers at Microsoft. Both of these resources frequently share practical patterns for building and operating resilient services at scale.

Microsoft Deputy CISOs

To hear more from Microsoft Deputy CISOs, check out the OCISO blog series.

To stay on top of important security industry updates, explore resources specifically designed for CISOs, and learn best practices for improving your organization’s security posture, join the Microsoft CISO Digest distribution list.


To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.  

The post Defending consumer web properties against modern DDoS attacks appeared first on Microsoft Security Blog.



from Microsoft Security Blog https://ift.tt/8mV3AS6
via IFTTT

IBM Vault 2.0 adds UI enhancements and improved reporting visibility

Many educational resources exist outside of IBM Vault to help users get the most value out of their secrets management experience. From developer docs to videos authored in-house by HashiCorp, an IBM company, to user and community-generated content, users have access to plenty of material outside of the product. But becoming a Vault expert requires piecing together disparate knowledge that isn’t contextualized within the product. With the recent release of IBM Vault 2.0, we’ve taken the opportunity to holistically reassess Vault with a clear goal: make Vault easier to use, so that customers can onboard the product without needing a PhD in Vault.

Vault 2.0.1 includes improved reporting and visibility into consumption, helping organizations better understand how Vault is used across secrets management, key lifecycle management, identity brokering, and data protection. These enhancements provide greater transparency into usage patterns, enabling teams to improve operational visibility as well as support forecasting, planning, and governance initiatives.

A focus on Vault onboarding and adoption 

There are two key pillars we’re focusing on when planning the UI enhancements: 

  • How do we help customers easily understand and discover new features? How do we match our features easily and intuitively to customer problems rather than expecting customers to be experts in Vault documentation?

  • How do we enable customers to adopt features quickly, easily, and with best-practices implementations that strengthen their ability to deliver their own roadmaps?

UI enhancements available in IBM Vault 2.0 

Following are the key features and improvements designed to help customers onboard to Vault:  

  • Visual policy generator - provides a pre-filled, contextual UI form that generates policy snippets. The policy snippets can be copied for use in the Terraform Vault Provider (recommended) or saved to the Vault cluster.  

  • Onboarding wizard - starts customers off with simple questions about how they would use a feature and then generates an editable code snippet that supports their usage of the feature.  

  • Introductory pages for new and existing features - details the feature’s value add and other helpful information with a recommended quick-start action.  

  • Navigation bar revamp - groups features by customer problems to center the customer experience and contextualize the features that can best help them as they use Vault.  

Visual policy generator assists with code customization 

New Vault users have no permissions by default. Assigning permissions to use any feature or interact with any resource requires writing the custom code for a policy. This can create an operational burden for administrators and a barrier for feature adoption.  

In Vault 2.0, a contextual visual policy generator helps customers produce pre-filled, best-practice policies from forms. The generator emits editable code, giving customers a supported starting point that remains customizable, so they can change the inputs as needed.

A screenshot of the IBM Vault UI showing the visual policy generator for ACL policy

A pre-filled, contextual UI form is available for the customer to complete. They then receive properly formatted, best-practice code for a Vault ACL policy, which they can save to their local Vault cluster or copy into their Terraform Vault Provider configuration (or another script).
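To make that concrete, here is a hedged sketch, not actual generator output, of what such a snippet could look like when applied through the Terraform Vault Provider. The policy name and secrets paths are hypothetical.

```hcl
# Hypothetical policy name and secrets paths; the real snippet comes from the
# generator and should still be reviewed before use.
resource "vault_policy" "app_readonly" {
  name = "app-readonly"

  policy = <<-EOT
    # Read-only access to one application's KV v2 secrets.
    path "secret/data/app/*" {
      capabilities = ["read"]
    }

    # Listing KV v2 secrets goes through the metadata path.
    path "secret/metadata/app/*" {
      capabilities = ["list"]
    }
  EOT
}
```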

The visual policy generator automatically detects and populates as much of the necessary information as possible to help Vault users get started faster. However, users should still review the code to confirm it meets their specifications and policies before using. 

Onboarding wizard simplifies turning on features 

Beginning with namespaces, Vault 2.0 includes an onboarding wizard pattern that guides users through the options available to them. This removes some of the choice overload that can overwhelm a user as they make the selection for the features that best meet their requirements.

A screenshot of the IBM Vault UI showing the onboarding wizard for Namespaces

Following the selection, the wizard generates a Terraform code snippet, a CLI command, or an option to apply the choices directly in the UI. The onboarding wizard is being released with support for Namespaces in 2.0.0, with plans to support additional features in the future. Customers can always reach out to their account team to request support for particular features, as product and experience feedback are valuable inputs in our planning.
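As a rough illustration (not actual wizard output), a namespace snippet of this kind in the Terraform Vault Provider might look like the following; the namespace path is hypothetical.

```hcl
# Hypothetical namespace path.
# CLI equivalent: vault namespace create team-payments
resource "vault_namespace" "team_payments" {
  path = "team-payments"
}
```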

Introductory pages provide at-a-glance information about unused features  

Introductory feature pages are being added to provide important details such as the business case, use case analysis, links to documentation, and easy-to-follow diagrams so users can understand the value of Vault features without having to seek out external documentation.

A screenshot of IBM Vault UI showing the introductory page to Namespaces

Guided starts are embedded in the intro pages for easy setup of new features. An estimated setup time helps users preview their time investment.

This improvement is being rolled out in phases, with more introductory pages planned for other features.  

Navigation bar revamp centers the customer’s ‘problems to be solved’ 

Features are most valuable when they’re understood and used, but this can be difficult if they aren’t readily related to the customer’s challenges. We’ve revamped the navigation bar to group features by customer problems.   

The goal here is to align Vault with the workflows customers want to complete. Additionally, we have renamed ‘control groups’ to ‘access workflows’, which is more market-standard and intuitive.

Reporting enhancements support expanded Vault usage 

Secrets management isn’t the only use case for Vault. As customers have expanded to other use cases, we want to encourage them to continue experimenting for wider adoption of Vault. To that end, we’ve introduced reporting enhancements for visibility into licensing and consumption for customers who are licensed as such and have upgraded to Vault 2.x.x or later. Usage is now measurable across the following units of measure for these Vault use cases: 

  • Secrets management: number of managed secrets 

  • Key lifecycle management: number of managed keys 

  • Identity brokering: number of credential units issued 

  • Data protection: number of data protection API calls 

These units of measure are directly aligned to security outcomes and support Vault users in providing visibility to security and compliance stakeholders – whether that’s with budget reconciliation or pre-planning with more granularity in their forecasting.   

With multiple use cases and the capability to measure across different units, these measurement and reporting enhancements allow Vault to deliver greater customer value, with better cost-to-value alignment and pricing that encourages flexibility and growth.  

Learn more 

IBM Vault 2.0 became generally available on April 14, 2026. You can read about the release on our blog and access the Vault release notes in our developer docs. Stay tuned as more IBM Vault and other HashiCorp news is published.  



from HashiCorp Blog https://ift.tt/LzbUv2e
via IFTTT

Terraform adds cost visibility, project-level notifications, and more

In the past few months, the HashiCorp Terraform engineering team has continuously improved features to help organizations eliminate infrastructure blind spots and strengthen governance and security across the entire infrastructure lifecycle. The latest HCP Terraform and Terraform Enterprise improvements include:

  • Billable resource analytics (GA)

  • Project-level remote state sharing (GA)

  • Module testing for dynamic credentials (GA)

  • Project-level notification (GA)

  • Registry tagging (Beta)

Billable resource analytics

Feature gap: Organizations using resources under management (RUM)-based billing face a significant visibility challenge when trying to understand where infrastructure costs are coming from. Until now, HCP Terraform customers could only view their total billable managed resources at the organization level. Deeper insights into where resources were being consumed weren't available in an easily accessible way. This made it difficult to estimate costs, predict future consumption, and determine which elements in an organization are consuming what percentage of used resources.

What's new: We are introducing the general availability of billable resource analytics for HCP Terraform. This new capability transforms how organizations manage infrastructure costs by providing users with comprehensive visibility into resource consumption across their entire organization. By breaking down the current totals of billable managed resources by project and workspace, decision makers gain the insights needed to reduce unnecessary spending and optimize their infrastructure investments. Available as a self-service view on the existing usage page, this capability eliminates delays in accessing critical cost data and empowers organizations to take immediate action on cost optimization opportunities.

Benefits:

  • Cost visibility and predictability: Organizations gain the visibility needed to proactively manage infrastructure spending rather than reacting to unexpected bills. By identifying high-consumption projects and workspaces, organization owners can work with engineering teams to right-size resources, eliminate waste, and stay within budget constraints.

  • Data-driven decision making: Leaders can make informed decisions about infrastructure investments based on actual consumption patterns rather than guesswork. The detailed breakdowns reveal which projects consume the highest resources and where consolidation opportunities exist. This can enable strategic resource allocation that aligns infrastructure spending with business priorities.

To see what billable resource analytics has to offer, any user on a paid HCP Terraform plan can access the new view on the existing usage page for their organization.

A screenshot of HCP Terraform's billable resource table

Project-level remote state sharing

Feature gap: Until now, platform teams managing large-scale infrastructure on HCP Terraform and Terraform Enterprise faced a difficult trade-off when sharing data between workspaces using the terraform_remote_state data source. They were limited to two primary options: sharing state with every workspace in the entire organization, which increased the security risk, or manually managing a list of workspaces, which was slow and error-prone.

For large enterprises, organization-wide sharing is often too broad, exposing sensitive configurations to unrelated teams and violating the principle of least privilege. Conversely, manually maintaining access lists for hundreds or thousands of workspaces creates a massive operational burden. This gap forced many customers into an unsustainable "multi-organization" model — creating numerous separate organizations just to maintain security boundaries — which is difficult to manage and impacts system performance.

What's new: We are introducing the GA of a new remote state sharing option: "Share with all workspaces in this project" for HCP Terraform and Terraform Enterprise 1.1.0 and later. This feature allows you to use projects as a true isolation boundary so that users can have an effective way to control their state sharing, increasing security and developer velocity.

When this setting is enabled in a workspace’s general settings, its remote state becomes automatically accessible to any other workspace within the same project. This access is dynamic: If a new workspace is added to the project, it immediately gains access to the shared state. If a workspace is moved to a different project, its access is automatically revoked, and its own shared state is re-scoped to its new project environment.

Benefits:

  • Enhanced security through isolation: Organizations can now enforce strict data boundaries at the project level, ensuring that sensitive infrastructure outputs are only visible to the teams and services that actually need them.

  • Operational efficiency at scale: By automating access within a project, platform teams no longer need to manually update workspace relationships. This "set and forget" approach significantly reduces the management overhead associated with large-scale state sharing.

  • Simplified governance: This feature unblocks the transition from complex multi-organization architectures to a more streamlined project-based model. This consolidation leads to better performance, easier reporting, and a more cohesive management experience for administrators.

We continue to recommend using the tfe_outputs data source in the HCP Terraform/Enterprise Provider to access remote state outputs in HCP Terraform or Terraform Enterprise. The tfe_outputs data source is more secure because it does not require full access to workspace state to fetch outputs.
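For illustration, here is a minimal sketch of that pattern; the organization, workspace, and output names are hypothetical.

```hcl
# Hypothetical organization, workspace, and output names.
data "tfe_outputs" "network" {
  organization = "example-org"
  workspace    = "network-prod"
}

# Outputs are exposed under `values` and are marked sensitive as a whole;
# unwrap deliberately only when you know the individual value is safe to show.
output "app_subnet_id" {
  value = nonsensitive(data.tfe_outputs.network.values.app_subnet_id)
}
```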

To learn more, check out our Workspaces settings documentation.

Project-level notification

Feature gap: Until now, platform teams managing large-scale infrastructure on HCP Terraform faced a significant operational hurdle when configuring alerts and observability. They were required to configure notification settings, such as Slack webhooks, PagerDuty triggers, or email alerts manually on a workspace-by-workspace basis.

For enterprises scaling self-service infrastructure, maintaining these individual configurations across hundreds or thousands of workspaces created a massive operational burden. This gap often led to "silent failures," situations where critical errors in newly provisioned environments went completely unnoticed by operations teams because the workspace creator forgot to configure the necessary alerts. Platform teams were forced to rely on brittle, custom-built audit scripts just to verify that their infrastructure was being monitored.
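For context, this is roughly what that per-workspace toil looked like using the tfe provider's existing workspace-level resource; the workspace, organization, and webhook variable names are hypothetical. The project-level notifications described below remove the need to repeat this block for every workspace.

```hcl
variable "slack_webhook_url" {
  type      = string
  sensitive = true
}

# Hypothetical workspace; previously, every workspace needed its own copy
# of the notification configuration below.
resource "tfe_workspace" "app" {
  name         = "app-prod"
  organization = "example-org"
}

resource "tfe_notification_configuration" "ops_slack" {
  name             = "ops-slack"
  enabled          = true
  destination_type = "slack"
  url              = var.slack_webhook_url
  triggers         = ["run:errored", "run:needs_attention"]
  workspace_id     = tfe_workspace.app.id # repeated per workspace
}
```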

What's new: We are excited to announce the general availability (GA) of project-level notifications for HCP Terraform and Terraform Enterprise. This feature allows you to use projects as a centralized control plane to define your observability standards, ensuring that users have an effective, automated way to monitor their infrastructure at scale.

When a notification destination and trigger are configured in a project's settings, those alerts automatically cascade to every workspace within that project. This inheritance is dynamic: If a new "no-code" workspace is provisioned inside the project, it immediately inherits the project's alert settings. If a workspace is moved to a different project, its inherited notifications are automatically updated to match its new environment.

Benefits:

  • Enhanced reliability through "monitoring-by-default": Organizations can now enforce a strict observability baseline at the project level. This acts as an automated safety net, ensuring that no infrastructure is deployed without the proper alerts in place.

  • Operational efficiency at scale: By automating notification inheritance within a project, platform teams no longer need to manually configure workspaces or maintain external audit scripts. This "set and forget" approach significantly reduces the toil previously associated with large-scale incident management.

  • Simplified governance: This feature unblocks the ability to standardize incident routing based on environment or business unit. For example, you can guarantee that every workspace in the "production" project routes directly to the SRE PagerDuty service, leading to faster mean time to resolution (MTTR) and a more cohesive management experience for administrators.

To learn more about standardizing your observability workflows, you can check out the notifications settings in the HCP Terraform project notifications documentation.

Module testing for dynamic credentials

Feature gap: Until now, there was a significant security disconnect between deploying infrastructure and testing it. While HCP Terraform users could use Dynamic Credentials for standard plan-and-apply operations, the native Terraform test framework often required a fallback to static credentials.

To run integration tests that interacted with real cloud resources, module authors were frequently forced to manually seed "test-only" AWS keys or Azure service principal secrets into their module testing environments. This created a secondary tier of "shadow secrets" that were often less rotated and less monitored than production credentials, creating a weak link in the secure supply chain and increasing the friction for developers who wanted to write robust, automated tests.

What’s new: We are extending dynamic credentials support to the Terraform module testing suite. This update allows the native testing workflow to leverage the same OIDC-based trust relationships used in standard runs.

Whether you are testing a module’s behavior in AWS, Azure, or Google Cloud, or verifying its interaction with HCP Vault, the testing environment can now automatically exchange an OIDC token for temporary, short-lived credentials. This ensures that your testing lifecycle is just as secure and "secret-less" as your production deployment lifecycle.

Benefits:

  • Unified security across the lifecycle: You no longer need to manage two different authentication methods, one for deployments and one for testing. OIDC now covers the entire journey from terraform test to terraform apply.

  • Reduced developer friction: Developers can now write and run integration tests in HCP Terraform without the manual hurdle of requesting and configuring static "test" keys. Tests "just work" as long as the OIDC trust is established.

  • Ephemeral testing environments: Because credentials are generated on the fly for the duration of the test and expire immediately after, the risk of "orphaned" test credentials lingering in your environment is completely eliminated.

  • Consistent governance: Platform teams can now enforce a single, identity-based policy across the organization, ensuring that even experimental or test-phase infrastructure follows the principle of least privilege.

Next steps: To start using OIDC in your tests, ensure your workspace is configured for dynamic credentials and update your module's test files to leverage the provider's default authentication. You can find more details on integrating these features in the HCP Terraform testing documentation.
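As a minimal, hedged sketch of what that can look like for an AWS-backed module: the environment variables referenced in the comments are the workspace settings commonly used for AWS dynamic credentials, while the resource and assertion below are hypothetical placeholders for your module's own contents.

```hcl
# tests/smoke.tftest.hcl -- hypothetical module contents and assertion.
# With TFC_AWS_PROVIDER_AUTH=true and TFC_AWS_RUN_ROLE_ARN set on the
# workspace, the test run exchanges an OIDC token for short-lived AWS
# credentials, so the provider block needs no static keys.

provider "aws" {
  region = "us-east-1"
}

run "plan_smoke_test" {
  command = plan

  assert {
    condition     = aws_s3_bucket.this.force_destroy == false
    error_message = "Buckets created by this module should not be force-destroyable."
  }
}
```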

Registry tagging

Feature gap: Today, you can define tags as key-value pairs on your projects to organize them and track resource consumption, and a workspace automatically inherits its project’s tags. But until now, you didn’t have the ability to tag your registry artifacts (modules and providers), which slowed discovery and made it harder to associate artifacts with projects and workspaces for usage guidance. There was also no easy way to indicate the status of module and provider versions, such as “non-prod” or “prod,” which made choosing the appropriate version for downstream consumers challenging.

What's new: Today, we are excited to announce the public beta of registry tags, allowing the platform team to tag artifacts with project information and usage guidance. Project tags on registry artifacts (now available for all HCP Terraform users) and the new registry environment tags (available for HCP Terraform Standard and Premium) increase the accuracy and speed with which downstream consumers can choose artifacts, without confusion or guesswork.

For example, a new module version with a “non-prod” tag indicates it should be used in non-production environments. After successful testing, teams that develop modules can change the tag to “prod” (or simply add the “prod” tag) to indicate that the version is approved for production environments. By doing so, the downstream consumers are clear about the appropriate version for their environment to use, and teams that develop modules can speed up module testing and promotion.

Benefits:

  • Enhanced security: Platform teams can easily distinguish approved artifacts and artifact versions based on the tags, decreasing security risks caused by using the wrong artifacts or versions. Promoting artifact versions from one environment designation to the next via tag assignment allows proper testing with deployments before a new version is deemed appropriate for production.

  • Operational efficiency at scale: Registry tags enable faster artifact discovery for use in the proper projects. Users can filter the registry on their project’s or environment’s tags to find the right options quickly.

  • Simplified governance: By tagging artifacts with your preexisting project tags, you extend the same classification categories you use today to your modules and providers, encouraging users to choose the artifacts that share tags with their projects.

To learn more, you can check out our Managing registry tags documentation.

Get started with HCP Terraform and Terraform Enterprise

You can try many of these new features now. If you are new to Terraform, sign up for an HCP account to get started today, and also check out our tutorials. HCP Terraform includes a $500 credit that allows users to quickly get started using features from any plan, including HCP Terraform Premium. Contact our sales team if you’re interested in trying our self-managed offering: Terraform Enterprise.



from HashiCorp Blog https://ift.tt/6q5OlED
via IFTTT