Friday, April 17, 2026

Advancing secret sync with workload identity federation

With the release of Vault Enterprise 2.0, we are continuing to modernize how organizations secure and distribute secrets across hybrid and multi-cloud environments. As part of this release, Vault secret sync now supports workload identity federation for cloud service provider destinations, eliminating the need for long-lived static cloud credentials. 

Modern cloud environments are built on short-lived identity, dynamic infrastructure, and policy-driven access. Yet many secret distribution mechanisms still rely on long-lived static credentials to connect systems together. 

Vault secret sync was designed to reduce secret sprawl by keeping secrets synchronized from Vault into cloud native secret stores such as AWS Secrets Manager, Azure Key Vault, and Google Secret Manager. With workload identity federation support in Vault Enterprise 2.0, secret sync becomes fully cloud native and replaces static credentials with short-lived federated identity tokens. 

This integration significantly reduces risk, simplifies operations, and aligns secret sync with modern identity-first security models. 

The challenge with long-lived root credentials 

Secret sync enables customers to securely distribute secrets from Vault to cloud provider secret stores, helping standardize secret management and reduce fragmentation across platforms. 

Until now, configuring cloud provider destinations required static credentials such as: 

  • AWS IAM access keys 

  • Azure service principal secrets 

  • GCP service account keys 
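
Before this release, for example, an AWS Secrets Manager destination was typically registered with a static key pair along the lines of the sketch below (the sys/sync/destinations path reflects the secret sync API; the credential values are placeholders):

# Hedged sketch of the pre-federation pattern: a sync destination configured
# with long-lived static AWS credentials (placeholder values only).
vault write sys/sync/destinations/aws-sm/my-awssm-destination \
    access_key_id="AKIAEXAMPLE" \
    secret_access_key="example-secret-key" \
    region="us-east-1"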

While functional, these credentials introduce both security and operational risk: 

  • Long-lived credentials increase the blast radius if leaked 

  • Manual rotation is required 

  • Expiration can cause silent sync failures 

  • Credentials tend to sprawl across systems and teams 

This is particularly concerning because cloud provider credentials often grant access to critical infrastructure. A leaked or expired cloud credential does not just break synchronization. It can expose sensitive infrastructure resources. 

For security conscious organizations, this model increasingly conflicts with internal policies that mandate short-lived identity and federated authentication. 

Workload identity federation as the industry standard 

Workload identity federation has become the modern standard for machine-to-machine authentication because it significantly reduces the risks associated with long-lived credentials. 

Traditional integrations often rely on static credentials such as API keys, service account keys, or service principal secrets. These credentials must be stored, distributed, and periodically rotated. If leaked or misconfigured, they can provide persistent access to critical infrastructure resources. 

Workload identity federation addresses this risk by replacing long-lived credentials with short-lived, identity-based access. 

Instead of storing credentials, systems: 

  • Present a trusted identity token, typically a signed JWT 

  • Exchange it with a cloud provider 

  • Receive a short-lived and scoped access token 

Each cloud provider implements this model slightly differently: 

  • AWS uses IAM roles with web identity 

  • Azure uses federated credentials 

  • GCP uses workload identity pools 

Despite these differences, the underlying model is consistent. No static secrets are stored. Access is granted through a short-lived token exchange based on an established trust relationship. 
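
For AWS, for instance, the exchange maps to the STS AssumeRoleWithWebIdentity operation. The sketch below shows the shape of that call using the AWS CLI; the role ARN, session name, and token file are placeholders, and Vault performs the equivalent exchange internally rather than shelling out to a CLI:

# Hedged sketch: trading a signed identity token (JWT) for short-lived,
# scoped AWS credentials. Role ARN, session name, and token file are
# placeholders for illustration.
aws sts assume-role-with-web-identity \
    --role-arn arn:aws:iam::123456789012:role/vault-secret-sync \
    --role-session-name vault-sync \
    --web-identity-token file://vault-identity-token.jwt \
    --duration-seconds 3600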

This approach: 

  • Minimizes credential exposure 

  • Eliminates manual rotation 

  • Reduces the blast radius of credential compromise 

  • Aligns with zero trust principles 

  • Reduces operational overhead 

  • Provides auditable and policy-driven access 

Vault already supports workload identity federation for securing modern workloads. Now secret sync extends this identity-first model to cloud secret distribution. 

Extending secret sync to non-human identities and agentic AI 

This shift is especially important as organizations adopt non-human identities (NHIs) and agentic workflows powered by automation and AI. These systems operate at high velocity, often creating and consuming secrets dynamically across environments, which makes long-lived credentials both impractical and risky. By leveraging workload identity federation in secret sync, NHIs and autonomous agents can securely access cloud-native secret stores using short-lived, identity-based tokens instead of embedded credentials. This enables a more scalable and secure model for machine-to-machine access, where identity, policy, and context govern access in real time. As agentic systems become more prevalent, this approach ensures that secret distribution keeps pace while reducing credential sprawl, enforcing least privilege, and strengthening the overall security posture without slowing down innovation. 

What is new in Vault secret sync 

With workload identity federation support for cloud provider destinations, Vault can now: 

  • Generate or use a trusted identity token 

  • Exchange that token with AWS, Azure, or GCP 

  • Obtain a short-lived cloud access token 

  • Use that token to synchronize secrets 

  • Automatically refresh tokens as needed 

What is eliminated: 

  • Long-lived IAM access keys 

  • Service principal passwords 

  • Service account key files 

  • Manual credential rotation processes 

What is gained: 

  • Short-lived, automatically refreshed credentials 

  • Reduced credential sprawl 

  • Lower blast radius 

  • Cloud-native authentication 

  • Stronger alignment with enterprise security policies 

Secret sync not only reduces secret sprawl. It now distributes secrets without introducing new credential risk. 
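
As a rough sketch of what this looks like in practice, the example below registers an AWS Secrets Manager destination with no access key pair at all. The sys/sync/destinations path matches the secret sync API, but the federation-specific fields shown here (role_arn, identity_token_audience, identity_token_ttl) are assumptions for illustration; consult the release documentation for the exact parameter names:

# Hedged sketch of a workload-identity-federated sync destination.
# No static access keys are supplied; the federation fields below are
# illustrative assumptions, not confirmed parameter names.
vault write sys/sync/destinations/aws-sm/my-awssm-destination \
    region="us-east-1" \
    role_arn="arn:aws:iam::123456789012:role/vault-secret-sync" \
    identity_token_audience="vault-secret-sync" \
    identity_token_ttl="1h"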

Simplifying security with secret sync 

For Vault administrators, security requirements are only one part of the equation. Operational efficiency and reliability are equally important when managing secrets across multiple cloud platforms. 

Many organizations now enforce strict security policies that require: 

  • No new static cloud credentials 

  • No long-lived IAM access keys 

  • Mandatory use of federated identity 

  • Strong auditability and centralized identity governance 

Previously, enabling secret sync required introducing static credentials into an otherwise modern security posture. These credentials had to be stored, rotated, and monitored, creating additional operational overhead and potential risk. 

With workload identity federation support, Vault admins can now enable secret sync without relying on static cloud credentials. This approach reduces the need to manage credential lifecycles while aligning with organizational security standards. 

Vault admins can now: 

  • Enable secret sync without violating security policy 

  • Remove legacy static credentials from their environment 

  • Reduce credential management overhead 

  • Improve operational efficiency and reliability 

  • Strengthen compliance and auditability 

By combining stronger security with simpler credential management, secret sync now aligns with zero trust and identity-first cloud security architectures while making operations easier for platform teams. 

Stronger security with simpler operations 

Workload identity federation improves both the security and operational reliability of secret synchronization. 

Static credentials introduce risk because they are long-lived and must be stored, rotated, and monitored. If leaked, they can be reused until revoked and often provide broad access to cloud infrastructure. 

With workload identity federation, Vault exchanges trusted identity tokens for short-lived cloud access tokens. These tokens are automatically refreshed and tightly scoped, which reduces the impact of credential exposure and minimizes the attack surface. 

This model also improves operational reliability. Static credentials can expire unexpectedly and cause synchronization failures that require manual intervention. Federated identity removes this dependency by relying on short-lived tokens that follow the cloud provider’s native authentication model. 

As a result, secret sync becomes both more secure and more resilient, while reducing the operational burden of managing cloud credentials. 

A more secure cloud-native future 

Cloud providers have made it clear that federated identity is the future of authentication. 

Integrating workload identity federation into Vault secret sync eliminates one of the remaining static credential dependencies in cloud secret distribution workflows. The result is: 

  • More secure 

  • More compliant 

  • More reliable 

  • More cloud native 

For platform and engineering teams, this removes the need for policy exceptions and strengthens the overall security posture of secret synchronization workflows. 

 Getting started 

As organizations continue to adopt cloud-native architectures, the shift away from static credentials is no longer optional, but foundational to reducing risk and operating at scale. By bringing workload identity federation to secret sync, Vault Enterprise 2.0 eliminates one of the last sources of long-lived credentials in cloud secret distribution, helping teams strengthen security while simplifying operations. The result is a more resilient, compliant, and truly cloud-native approach to managing secrets across environments.  

Ready to eliminate static credentials and modernize your secret distribution workflows? Upgrade to Vault Enterprise 2.0 and enable workload identity federation for secret sync today. 

 



from HashiCorp Blog https://ift.tt/dqbRfLZ
via IFTTT

Containing a domain compromise: How predictive shielding shut down lateral movement

In identity-based attack campaigns, any initial access activity can turn an already serious intrusion into a critical incident once it allows a threat actor to obtain domain-administration rights. At that point, the attacker effectively controls the Active Directory domain: they can change group memberships and Access Control Lists (ACLs), mint Kerberos tickets, replicate directory secrets, and push policy through mechanisms like Group Policy Objects (GPOs), among others.

What makes domain compromise especially challenging is how quickly it can happen: in many real-world cases, domain-level credentials are compromised immediately following the very first access, and once these credentials are exposed, they’re often abused immediately, well before defenders can fully scope what happened. Apart from this speed gap, responding to this type of compromise can also prove difficult. For one, incident responders can’t simply “turn off” domain controllers, service accounts, or identity infrastructure and core services without risking business continuity. In addition, because compromised credential artifacts can spread fast and be replayed to expand access, restoring the identity infrastructure back to a trusted state usually means taking steps (for example, krbtgt rotation, GPO cleanup, and ACL validation) that take additional time and effort in an already high-pressure situation.

These challenges highlight the need for a more proactive approach to disrupting and containing credential-based attacks as they happen. Microsoft Defender’s predictive shielding capability in automatic attack disruption helps address this need. It predicts where attacks will pivot next and applies just-in-time hardening actions to block credential abuse—including abuse targeting high-privilege accounts like domain admins—and lateral movement at near-real-time speed, shifting the advantage to defenders.

Previously, we discussed how predictive shielding was able to disrupt a human-operated ransomware incident. In this blog post, we take a look at a real-world Active Directory domain compromise that illustrates the critical inflection point when a threat actor achieves domain-level control. We walk through the technical details of the incident to highlight attacker tradecraft, the operational challenges defenders face after domain compromise, and the value of the proactive, exposure-based containment that predictive shielding provides.

Predictive shielding overview

Predictive shielding is a capability in Microsoft Defender’s automatic attack disruption that helps stop the spread of identity-based attacks, before an attacker fully operationalizes stolen credentials. Instead of waiting for an account to be observed doing something malicious, predictive shielding focuses on moments when credentials are likely exposed: when Defender sees high-confidence signals of credential theft activity on a device, it can proactively restrict the accounts that might have been exposed there.

Essentially, predictive shielding works as follows:

  • Defender detects post-breach activity strongly associated with credential exposure on a device.
  • It evaluates which high-privilege identities were likely exposed in that context.
  • It applies containment to those identities to reduce the attacker’s ability to pivot, limiting lateral movement paths and high-impact identity operations while the incident is being investigated and remediated. The intent is to close the “speed gap” where attackers can reuse newly exposed credentials faster than responders can scope, reset, and clean up.

This capability is available as an out-of-the-box enhancement for Microsoft Defender for Endpoint P2 customers who meet the Microsoft Defender prerequisites.

The following section revisits a real-world domain compromise that showcases how attack disruption and predictive shielding changed the outcome by acting on exposure, rather than just observed abuse. Interestingly, this case happened just as we were rolling out predictive shielding, so you can see the changes in both attacker tradecraft and the detection and response actions before and after the capability was deployed.

Attack chain overview

In June 2025, a public sector organization was targeted by a threat actor. This threat actor progressed methodically: initial exploitation, local escalation, directory reconnaissance, credential access, and expansion into Microsoft Exchange and identity infrastructure.  

Figure 1. Attack diagram of the domain compromise.

Initial entry: Pre-domain compromise

The campaign began at the edge: a file-upload flaw in an internet-facing Internet Information Services (IIS) server was abused to plant and launch a web shell. The attacker then simultaneously performed various reconnaissance activities using the compromised account through the web shell and escalated their privileges to NT AUTHORITY\SYSTEM by abusing a Potato-class token impersonation primitive (for example, BadPotato).

The discovery commands observed in the attack included domain account and group enumeration, for example through net group queries.

Using the compromised IIS service account, the attacker attempted to reset the passwords of high-impact identities, a common technique used to gain control over accounts without performing credential dumping. The attacker also deployed Mimikatz to dump logon secrets (for example, MSV, LSASS, and SAM), harvesting credentials that are exposed on the device.

Had predictive shielding been released at this point, automated restrictions on exposed accounts could have stopped the intrusion before it expanded beyond the single-host foothold. However, at the time of the incident, this capability had not yet been deployed to customers.

Key takeaway: At this stage of an attack, it’s important to keep the containment host‑scoped. Defenders should prioritize blocking credential theft and stopping escalation before it reaches the identity infrastructure.

First pivot: Directory credential materialization and Exchange delegation

Within 24 hours, the attacker abused privileged accounts and remotely created a scheduled task on a domain controller. The task initiated NTDS snapshot activity and packaged the output using makecab.exe, enabling offline access to directory credential material that’s suitable for abusing credentials at scale.

Because the abused account’s first malicious action had already exposed the full set of Active Directory credentials, stopping its path toward total domain compromise was no longer feasible.

The threat actor then planted a Godzilla web shell on Exchange Server, used a privileged context to enumerate accounts with ApplicationImpersonation role assignments, and granted full access to a delegated principal across mailboxes using Add‑MailboxPermission. This access allowed the threat actor to read and manipulate all mailbox contents.

The attacker also used Impacket’s atexec.py to enumerate the role assignments remotely. Its use triggered the attack disruption capability in Defender, revoking the sessions of an admin account and blocking it from further use.

Following the abused account’s disruption, the attacker attempted several additional actions, such as resetting the disrupted account’s and other accounts’ passwords. They also attempted to dump credentials of a Veeam backup device.

Key takeaway: This pivot is a turning point. Once directory credentials and privileged delegation are in play, the scope and impact of an incident expand fast. Defenders should prioritize protecting domain controllers, privileged identities, and authentication paths.

Scale and speed: Tool return, spraying, and lateral movement

Weeks later, the threat actor returned with Impacket tooling (for example, secretsdump and PsExec), which resulted in repeated disruptions by Defender against the accounts they abused. These disruptions forced the attacker to pivot to other compromised accounts and exhaust their resources.

Following Defender’s disruptions, the threat actor then launched a broad password spray from the initially compromised IIS server, unlocking access to at least 14 servers through password reuse. They also attempted remote credential dumping against a couple of domain controllers and an additional IIS server using multiple domain and service principals.

Key takeaway: Even though automatic attack disruption acted right away, the attacker already possessed multiple credentials due to the previous large-scale credential dumping. This scenario showcases the race to detect and disrupt credential abuse and is the reason we’re introducing predictive shielding to preemptively disrupt exposed accounts at risk.

Predictive shielding breaks the chain: Exposure-centric containment

In the second phase of the attack, we activated predictive shielding. When exposure signals surfaced (for example, credential dumping attempts and replay from compromised hosts), automated containment blocked new sign-in attempts and interactive pivots not only for the abused accounts, but also for context-linked identities that were active on the same compromised surfaces.

Attack disruption contained high-privileged principals to prevent these accounts from being abused. Crucially, when a high-tier Enterprise or Schema Admin credential was exposed, predictive shielding contained it pre-abuse, preventing what would normally become a catastrophic escalation.

Second pivot: Alternative paths to new credentials

With high-value identities pre-contained, the threat actor pivoted to exploiting Apache Tomcat servers. They compromised three Tomcat servers, dropped the Godzilla web shell, and launched the PowerShell-based Invoke-Mimikatz command to harvest additional credentials. At one point, the attacker operated under a Schema Admin context.

They then used Impacket WmiExec to access Microsoft Entra Connect servers and attempt to extract Entra Connect synchronization credentials. The account used for this pivot was later contained, limiting further lateral movement.

Last attempts and shutdown

In the final phase of the attack, the threat actor attempted a full LSASS dump on a file sharing server using comsvcs.dll MiniDump under a domain user account, followed by additional NTDS activity.

Attack disruption in Defender repeatedly severed sessions and blocked new sign-ins made by the threat actor. On July 28, 2025, the attack campaign lost momentum and stopped.

How predictive shielding changed the outcome

Before compromising a domain, attackers are mostly constrained by the hosts they control. However, even a small set of exposed credentials could remove their constraints and give them broad access through privileged authentication and delegated pathways. The blast radius spreads fast, time pressure spikes, and containment decisions become riskier because identity infrastructure and high-privilege accounts are production dependencies.

The incident we revisited earlier almost followed this same pattern. It unfolded while predictive shielding was still being launched, so the automated predictive containment capability only became active midway through the attack campaign. During the attack’s first stages, the threat actor had room to scale—they returned with new tooling, launched a broad password spray attack, and expanded access across multiple servers. They also attempted remote credential dumping against domain controllers and servers.

When predictive shielding went live, the pace changed: instead of reacting to each newly abused account, Defender could act preemptively and turn credential theft attempts into blocked pivots. Defender was able to block new sign-ins and interactive pivots, not just for the single abused account, but also for context-linked identities that were active on the same compromised surfaces.

With high-value identities pre-contained, the adversary shifted tradecraft and chased other credential sources, but each of their subsequent attempts triggered targeted containment that limited their lateral reach until they lost momentum and stopped. How this incident concluded is the operational “tell” that containment is working: once privileged pivots get blocked, threat actors often hunt for alternate credential sources, and defenses must keep following the moving blast radius.

As predictive shielding matures, it will continue to expand its prediction logic and context-linked identities.

MITRE ATT&CK® techniques observed

The following table maps observed behaviors to ATT&CK®.

Tactics shown are per technique definition.

Tactic(s) | Technique ID | Technique name | Observed details
Initial Access | T1190 | Exploit Public-Facing Application | Exploited a file-upload vulnerability in an IIS server to drop a web shell.
Persistence | T1505.003 | Server Software Component: Web Shell | Deployed web shells for persistent access.
Execution | T1059.001 | Command and Scripting Interpreter: PowerShell | Used PowerShell for Exchange role queries, mailbox permission changes, and Invoke-Mimikatz.
Privilege Escalation | T1068 | Exploitation for Privilege Escalation | Used BadPotato to escalate to SYSTEM on an IIS server.
Credential Access | T1003.001 | OS Credential Dumping: LSASS Memory | Dumped LSASS using Mimikatz and comsvcs.dll MiniDump.
Credential Access | T1003.003 | OS Credential Dumping: NTDS | Performed NTDS-related activity using ntdsutil snapshot/IFM workflows on a domain controller.
Execution; Persistence; Privilege Escalation | T1053.005 | Scheduled Task/Job: Scheduled Task | Created remote scheduled tasks to execute under SYSTEM on a domain controller.
Discovery | T1087.002 | Account Discovery: Domain Account | Enumerated domain groups and accounts using net group and AD Explorer.
Lateral Movement | T1021.002 | Remote Services: SMB/Windows Admin Shares | Used admin shares/SMB-backed tooling (for example, PsExec) for lateral movement.
Lateral Movement | T1021.003 | Remote Services: Windows Remote Management | Used WmiExec against Microsoft Entra Connect servers.
Credential Access | T1110.003 | Brute Force: Password Spraying | Performed password spraying leading to access across at least 14 servers.
Collection | T1114.002 | Email Collection: Remote Email Collection | Expanded mailbox access broadly through impersonation or permission changes.
Command and Control | T1071.001 | Application Layer Protocol: Web Protocols | Web shells communicated over HTTP/S.
Defense Evasion | T1070.004 | Indicator Removal on Host: File Deletion | Used cleanup scripts (for example, del.bat) to remove dump artifacts.
Persistence; Privilege Escalation | T1098 | Account Manipulation | Manipulated permissions and roles to expand access and sustain control.
Credential Access | T1078 | Valid Accounts | Reused compromised service and domain accounts for access and lateral movement.

Learn more

For more information about automatic attack disruption and predictive shielding, see the related documentation on Microsoft Learn.

The post Containing a domain compromise: How predictive shielding shut down lateral movement appeared first on Microsoft Security Blog.



from Microsoft Security Blog https://ift.tt/WeF9yUi
via IFTTT

Three Microsoft Defender Zero-Days Actively Exploited; Two Still Unpatched

Huntress is warning that threat actors are exploiting three recently disclosed security flaws in Microsoft Defender to gain elevated privileges in compromised systems.

The activity involves the exploitation of three vulnerabilities that are codenamed BlueHammer, RedSun, and UnDefend, all of which were released as zero-days by a researcher known as Chaotic Eclipse (aka Nightmare-Eclipse) in response to Microsoft's handling of the vulnerability disclosure process.

While both BlueHammer and RedSun are local privilege escalation (LPE) flaws impacting Microsoft Defender, UnDefend can be used to trigger a denial-of-service (DoS) condition and effectively block definition updates.

Microsoft moved to address BlueHammer as part of its Patch Tuesday updates released earlier this month. The vulnerability is being tracked under the CVE identifier CVE-2026-33825. However, the other flaws do not have a fix as of writing.

In a series of posts shared on X, Huntress said it observed all three flaws being exploited in the wild, with BlueHammer being weaponized since April 10, 2026, followed by the use of RedSun and UnDefend proof-of-concept (PoC) exploits on April 16.

"These invocations followed after typical enumeration commands: whoami /priv, cmdkey /list, net group, and others that indicate hands-on-keyboard threat actor activity," it added.

The cybersecurity vendor said it has taken steps to isolate the affected organization to prevent further post-exploitation. The Hacker News has reached out to Microsoft for comment, and we will update the story if we hear back.



from The Hacker News https://ift.tt/M2jRTt7
via IFTTT

The Good, the Bad and the Ugly in Cybersecurity – Week 16

The Good | U.S. Authorities Seize W3LL Phishing Ring & Jail DPRK IT Worker Scheme Facilitators

The FBI has dismantled the “W3LL” phishing platform, seized its infrastructure, and arrested its alleged developer in its first joint crackdown on a phishing kit developer together with Indonesian authorities. Sold for $500 per kit, W3LL enabled criminals to clone login portals, steal credentials, bypass MFA using adversary-in-the-middle techniques, and launch business email compromise attacks.

The W3LL Store interface (Source: Group-IB)

Through the W3LL Store marketplace, more than 25,000 compromised accounts were sold, fueling over $20 million in attempted fraud. Even after the storefront shut down in 2023, the operation continued through encrypted channels under new branding. The platform, which gave cybercriminals an end-to-end phishing service, was ultimately used against over 17,000 victims worldwide. Investigators say the takedown disrupted a major criminal ecosystem that helped more than 500 threat actors steal access, hijack accounts, and commit financial fraud.

From the DoJ, two U.S. nationals have been sentenced for helping North Korean IT workers pose as American residents and secure remote jobs at more than 100 U.S. companies, including Fortune 500 firms. Court documents note that between 2021 and 2024, the scheme generated over $5 million for the DPRK and caused about $3 million in losses to victim companies. The defendants used stolen identities from over 80 U.S. citizens, created fake companies and financial accounts, and hosted company-issued laptops in U.S. homes so North Korean workers could secretly access corporate networks.

U.S. officials said the operation endangered national security by placing DPRK operatives inside American businesses. Kejia Wang was sentenced to nine years in prison, while Zhenxing Wang received over seven years. Authorities say the broader network remains active, with additional suspects still at large, as North Korea continues using fraudulent remote workers to fund government operations and evade sanctions.

The Bad | New “AgingFly” Malware Breaches Ukrainian Governments & Hospitals

Ukraine’s CERT-UA has uncovered a new malware campaign using a toolset called “AgingFly” to target local governments, hospitals, and possibly Ukrainian defense personnel.

The attack, tracked as UAC-0247, begins with phishing emails disguised as humanitarian aid offers that lure victims into downloading malicious shortcut files. These files trigger a chain of scripts and loaders that ultimately deploy AgingFly, a C# malware strain that gives attackers remote control of infected systems.

Example of chain of damage (Source: CERT-UA)

Once installed, AgingFly can execute commands, steal files, capture screenshots, log keystrokes, and deploy additional payloads. It also uses PowerShell scripts to update configurations and retrieve command and control (C2) server details through Telegram, helping the malware remain flexible and persistent.

One notable feature is that it downloads pre-built command handlers as source code from the server and compiles them directly on the infected machine, reducing its static footprint and helping it evade signature-based detection tools.

Investigators found that the attackers use open-source tools such as ChromElevator to steal saved passwords and cookies from Chromium-based browsers, and ZAPiDESK to decrypt WhatsApp data. Additional tools like RustScan, Ligolo-ng, and Chisel support reconnaissance, tunneling, and lateral movement across compromised networks. CERT-UA says the campaign has impacted at least a dozen organizations and may also have targeted members of Ukraine’s defense forces.

To reduce exposure, the agency recommends blocking the execution of LNK, HTA, and JavaScript files, along with restricting trusted Windows utilities such as PowerShell and mshta.exe that are abused in the attack chain.

The Ugly | Attackers Exploit Nginx Auth Bypass Vulnerability to Hijack Servers

A critical vulnerability in Nginx UI, tracked as CVE-2026-33032, is being actively exploited in the wild to achieve full server takeover without authentication.

The flaw stems from an exposed /mcp_message endpoint in systems using Model Context Protocol (MCP) support, which fails to enforce proper authentication controls. As a result, remote attackers can invoke privileged MCP functions, including modifying configuration files, restarting services, and forcing automatic reloads to effectively gain complete control over affected Nginx servers.

The attacker-controlled page by nginx (Source: Pluto Security)

Security researchers have reported that exploitation requires only network access. Attackers initiate a session via Server-Sent Events, open an MCP connection, retrieve a session ID, and then use it to send unauthenticated requests to the vulnerable endpoint.

This grants access to all available MCP tools, enabling destructive actions like injecting malicious server blocks, exfiltrating configuration data, and triggering service restarts.

The vulnerability was patched in version 2.3.4 shortly after the disclosure, but a more secure release, 2.3.6, is now recommended. Despite the fix, active exploitation in the wild has been confirmed with proof-of-concept code publicly available.

Nginx UI is widely used, with over 11,000 GitHub stars and hundreds of thousands of Docker pulls, and scans suggest roughly 2,600 exposed instances remain vulnerable globally. Attackers can establish MCP sessions, reuse session IDs, and chain requests to escalate privileges, enabling stealthy persistence, configuration tampering, and full administrative control over exposed systems.

Organizations are urged to update immediately, as attackers can fully compromise systems through a single unauthenticated request, bypassing traditional security controls and gaining persistent control over web infrastructure.



from SentinelOne https://ift.tt/VQRdHjq
via IFTTT

Google Blocks 8.3B Policy-Violating Ads in 2025, Launches Android 17 Privacy Overhaul

Google this week announced a new set of Play policy updates to strengthen user privacy and protect businesses against fraud, even as it revealed it blocked or removed over 8.3 billion ads globally and suspended 24.9 million accounts in 2025.

The new policy updates relate to contact and location permissions in Android, allowing third-party apps to access the contact lists and a user's location in a more privacy-friendly manner. This includes a new Contact Picker, which offers a standardized, secure, and searchable interface for contact selection.

"This feature allows users to grant apps access only to the specific contacts they choose, aligning with Android's commitment to data transparency and minimized permission footprints," Google said.

Previously, apps requiring access to a specific user's contacts relied on READ_CONTACTS, an overly broad permission that granted apps the ability to access all contacts and their associated information. With the latest change introduced in Android 17, apps can specify which fields from a contact they need, such as phone numbers or email addresses, as opposed to reading the entire record.

The updated policy will require all applicable apps to use the picker (or the Android Sharesheet) as the main way to access users' contacts, with READ_CONTACTS now reserved only for apps that can't function without it. Developers are advised to remove the READ_CONTACTS permission from the app manifest entirely if the app targets Android 17 (currently in beta) or later.

"If your app requires full, ongoing access to a user's contact list to function, you must justify this need by submitting a Play Developer Declaration in the Play Console," Google noted.

The second policy change revolves around a streamlined location button that Google has introduced in Android 17 that enables apps to request one-time access to a user's precise location. In doing so, it allows the user to make a better choice about how much information they want to share and for what duration. What's more, a persistent indicator will appear to alert a user every time a non-system app accesses their location.

To comply with this update, developers are being urged to review their apps' location usage to ensure that they are requesting the minimum amount of location data necessary for them to function.

"If your app targets Android 17 and above and uses precise location for discrete, temporary actions, implement the location button by adding the onlyForLocationButton flag in your manifest," the tech giant said. "If your app requires persistent, precise location to function, you will need to submit a Play Developer Declaration in Play Console to show why the new button or coarse location isn't sufficient for your app's core features."

The declaration form is expected to be available before October 2026, with pre-review checks in the Play Console going live starting October 27 to identify potential contact or location permission policy issues.

Google is also implementing a secure way for businesses to transfer ownership of their apps through a native account transfer feature built into Play Console so as to stay protected against fraud. The company is recommending that app developers handle account ownership changes through this feature starting May 27, 2026.

"That means that unofficial transfers (like sharing login credentials or buying and selling accounts on third-party marketplaces), which leave your business vulnerable, are not permitted," it said.

Google Takes Aim at Malvertising

The changes to the Android ecosystem come as Google said it's harnessing the capabilities of Gemini, its artificial intelligence (AI) model, to detect and block malicious ads on its platform. More than 99% of policy-violating ads were caught by its systems in 2025 before they were shown to users, it noted.

"Unlike earlier keyword-based systems, our latest models better understand intent, helping us spot malicious content and preemptively block it, even when it's designed to evade detection," Keerat Sharma, vice president and general manager of Ads Privacy and Safety at Google, said in a post shared with The Hacker News.

Taken together, the company removed or blocked 602 million ads and 4 million accounts that were associated with scams or scam-related activity last year. More than 4.8 billion ads were restricted, and over 480 million web pages were actioned for attempting to serve sexually explicit content, weapons promotion, online gambling, alcohol, tobacco, and malware.

In contrast, Google suspended over 39.2 million advertiser accounts in 2024, and stopped 5.1 billion bad ads, restricted 9.1 billion ads, and blocked or restricted ads on 1.3 billion pages.

"Bad actors are using generative AI to create deceptive ads at scale, and Gemini helps us detect and block them in real time," Google said. "By the end of last year, the majority of Responsive Search Ads created in Google Ads were reviewed instantly, and harmful content was blocked at submission -- a capability we plan to bring to more ad formats this year."



from The Hacker News https://ift.tt/5CveR8V
via IFTTT

ShapeBlue Announced as Diamond Sponsor of the CloudStack Collaboration Conference 2026 in Edinburgh

We are pleased to share that ShapeBlue will once again be a Diamond Sponsor of the CloudStack Collaboration Conference 2026, taking place from 18 to 20 November 2026 in Edinburgh, Scotland.

As the premier annual CloudStack event, #CloudStackCollab brings together users, developers and service providers from across the global Apache CloudStack community. It is the key moment in the year to connect, share knowledge, and explore how CloudStack continues to evolve as a powerful, production-ready open-source cloud platform.

ShapeBlue is proud to support this event as a Diamond Sponsor, reinforcing our long-standing commitment to the CloudStack ecosystem. Having worked with CloudStack for over a decade, contributing to its development and helping organisations deploy and scale production clouds worldwide, this event is always a highlight in our calendar.

 

What to expect in Edinburgh

CloudStack Collaboration Conference 2026 will feature:

  • Technical deep dives from core committers and contributors
  • Real-world case studies from operators running CloudStack in production
  • Roadmap discussions and community updates
  • Networking opportunities with the global CloudStack community

Whether you are already running CloudStack or exploring its capabilities, the conference offers valuable insights and direct access to the people building and operating the platform.

 

Call for Papers now open

The Call for Papers is also live. If you have a story, use case, or technical insight to share with the community, we strongly encourage you to submit a session! Presenting at the CloudStack Collaboration Conference 2026 offers a unique opportunity to showcase your work on a global stage, in front of industry leaders and key contributors from across the Apache CloudStack ecosystem.

 

Join us there

As a Diamond Sponsor of the premier annual CloudStack event, ShapeBlue looks forward to connecting with the community in Edinburgh.

We hope to see you there!

The post ShapeBlue Announced as Diamond Sponsor of the CloudStack Collaboration Conference 2026 in Edinburgh appeared first on ShapeBlue.



from CloudStack Consultancy & CloudStack... https://ift.tt/0UYImj4
via IFTTT

Thursday, April 16, 2026

Networking at the Edge of Space: How Aerostar International Powers Connectivity at 123,000 Feet

What happens when your network leaves Earth?

Not metaphorically but literally.

At altitudes exceeding 100,000 feet, where temperatures plunge to -80°C and physical access is impossible for months at a time, traditional networking equipment simply fails. But for Aerostar, operating in these extreme conditions isn’t theoretical; it’s mission-critical.


Customer Overview: Aerospace innovators reaching the edge of space

Aerostar, a business unit of TCOM, is a South Dakota-based pioneer in high-altitude balloon and unmanned aerial system (UAS) operations. Aerostar’s platforms support NASA instrumentation testing, government communications relay, disaster response, and commercial applications like environmental monitoring. Their operations require robust, reliable network connectivity in extreme environmental conditions, often at altitudes exceeding 70,000 ft and temperatures below -80°C.

To support these unique missions, Aerostar selected the Netgate® 2100 router with pfSense Plus® for its small footprint, low power consumption, and proven resilience in extreme temperatures, enabling secure, reliable networking for stratospheric payloads.

“The combination of durability, low power, and open-source flexibility made the Netgate 2100 the perfect solution for our high-altitude missions. It’s not just a router, it’s the backbone of our stratospheric networking operations.”
Aaron Wyant, Aerostar International LLC

Challenge: Networking where no network has gone before

Aerostar previously relied on Ubiquiti Edge Routers, but end-of-life hardware and growing demands for port density, lower power consumption, and environmental durability necessitated a new solution.

Aerostar’s high-altitude operations present unprecedented networking challenges:

  • Extreme temperatures

    Flights routinely encounter temperatures ranging from -40°C to -80°C, well beyond the capabilities of standard commercial routers.

  • Limited payload space

    Routers must be compact and lightweight to fit within specialized balloon frames or UAS platforms.

  • High reliability

    Networking hardware must remain operational for months with no maintenance, powering payloads via solar panels and batteries.

  • Complex connectivity

    The systems require multiple Ethernet ports, VPN capabilities, and support for a variety of network protocols to handle telemetry, command-and-control (C2), and real-time data streaming.

“Each payload is fully self-contained, powered by solar energy during the day and battery reserves at night, operating for months in the stratosphere with no external power or physical access.”

Aaron Wyant, Aerostar International LLC

Solution: Local LAN, global reach, no matter how high you fly

Aerostar deployed the Netgate 2100s, leveraging pfSense software, for high-altitude and remote networking.

Beneath each balloon sits a payload frame carrying critical instruments. On 90% of balloon flights, Aerostar deploys a router just as you would in any standard network. Recently, they’ve paired a Starlink Mini with the router, providing a WAN interface while all payload devices connect via a local LAN.

The 2100s were paired with industrial-rated switches as needed, creating a modular, mission-ready networking solution suitable for one-time-use balloon flights or longer-duration unmanned aerial missions.

Key features included:

  • Extreme temperature performance

The 2100 operated successfully in vacuum chamber testing down to -80°C, well beyond typical commercial hardware specifications.

  • Low power consumption

    At 4 watts, the 2100 reduced power requirements compared to previous hardware, a critical factor for solar- and battery-powered payloads.

  • Compact Design

    Its small size and lightweight design allowed it to be integrated into balloon payload frames without affecting flight dynamics.

  • Flexible connectivity

    Four LAN ports enabled local networking for payload sensors and instruments, while WAN and VPN capabilities ensured secure communications.

  • Open-source flexibility
    pfSense Plus enabled customization to meet specific telemetry and routing requirements unique to each flight mission.

Results & Benefits: Reliable connectivity at the edge of the stratosphere

With the Netgate 2100, Aerostar achieved:

  • Reliable networking in extreme environments

Routers maintained operation at stratospheric altitudes with temperatures as low as -80°C for almost a one-year mission.

  • Reduced payload power and weight

    Lower power consumption and a compact form factor optimized balloon flight efficiency.

  • Scalable deployment

    Modular design enables quick integration across diverse UAS and balloon platforms.

  • Enhanced mission capability

    Secure, real-time telemetry and command-and-control for NASA instrumentation, government communications, disaster response, and commercial monitoring.

  • Future readiness

    Support for the upcoming TAA certification ensures broader deployment potential across government and commercial missions.

Final Thought

Most networking solutions are designed for controlled environments.

This one wasn’t.

It was built to operate where:

  • There is no infrastructure
  • There is no maintenance
  • And failure isn’t an option

And that’s exactly why it works.

Want to see it in action?

Watch the full mission video and explore how networking is evolving beyond Earth.

 

About Aerostar

Netgate is proud to support a partner committed to making the world more connected and secure.

Aerostar represents the evolution of innovation in action, building on decades of lighter-than-air expertise to become a leader in high-altitude platforms and advanced manufacturing. Today, Aerostar leverages cutting-edge technology, such as the Netgate 2100 in stratospheric systems, to connect, protect, and support critical missions worldwide, from communications relay to life-saving applications, as it continues to push the boundaries of what’s possible at 70,000-plus feet.

Aerostar has taken lighter-than-air technologies to all new heights by leveraging the most brilliant minds, materials, and machinery for over 70 years to connect, protect, and save lives. Their platforms support NASA, government, and commercial applications worldwide, delivering innovative solutions for data collection, communications, and disaster response.

Learn more about Aerostar

About Netgate

Netgate develops pfSense software-based routers that deliver enterprise-grade networking, security, and flexibility across commercial, government, and remote applications. pfSense Plus software, the world’s leading firewall, router, and VPN solution, provides secure network edge and cloud networking solutions for millions of deployments worldwide.

About the Netgate 2100 with pfSense Plus

The Netgate 2100 is a compact yet powerful security gateway appliance that pairs purpose-built hardware with the pfSense Plus software platform, delivering enterprise-grade firewall, routing, and VPN capabilities in a small desktop form factor. It features a dual-core ARM Cortex‑A53 CPU, 4 GB DDR4 RAM, and flexible Ethernet connectivity (WAN RJ45/SFP combo plus four LAN ports).

The Netgate 2100 proves its versatility by powering high-altitude balloon payloads and connecting Starlink Mini WAN interfaces to onboard LAN networks, while withstanding extreme temperatures and altitudes. pfSense Plus offers advanced firewall filtering, VPNs, multi-WAN load balancing, IDS/IPS, and detailed traffic management, all managed via an intuitive web interface. With passive cooling, low power draw, and expansion options for additional storage, the Netgate 2100 is a small device capable of delivering reliable, scalable, and secure networking, even at the edge of the stratosphere.

Learn more about the Netgate 2100 with pfSense Plus



from Blog https://ift.tt/ONrV012
via IFTTT

Why MicroVMs: The Architecture Behind Docker Sandboxes

Last week, we launched Docker Sandboxes with a bold goal: to deliver the strongest agent isolation in the market.

This post unpacks that claim, how microVMs enable it, and some of the architectural choices we made in this approach.

The Problem With Every Other Approach

Every sandboxing model asks you to give something up. We looked at the top four approaches.

Full VMs offer strong isolation, but general-purpose VMs weren’t designed for ephemeral, session-heavy agent workflows. Some VMs built for specific workloads can spin up more effectively on modern hardware, but the general-purpose VM experience (slow cold starts, heavy resource overhead) pushes developers toward skipping isolation entirely.

Containers are fast and are the way modern applications are built. But for an autonomous agent that needs to build and run its own Docker containers, which coding agents routinely do, you hit Docker-in-Docker, which requires elevated privileges that undermine the isolation you set up in the first place. Agents need a real Docker environment to do development work, and containers alone don’t give you that cleanly.

WASM / V8 isolates are fast to spin up, but the isolation model is fundamentally different. You’re running isolates, not operating systems. Even providers of isolate-based sandboxes have acknowledged that hardening V8 is difficult, and that security bugs in the V8 engine surface more frequently than in mature hypervisors. Beyond the security model, there’s a practical gap: your agent can’t install system packages or run arbitrary shell commands. For a coding agent that needs a real development environment, WASM isn’t one.

Not using any sandboxing is fast, obviously. It’s also a liability. One rm -rf, one leaked .env, one rogue network call, and the blast radius is your entire machine.

Why MicroVMs

Docker Sandboxes run each agent session inside a dedicated microVM with a private Docker daemon isolated by the VM boundary, and no path back to the host.

That one sentence contains three architectural decisions worth unpacking.


Dedicated microVM. Each sandbox gets its own kernel. It’s hardware-boundary isolation, the same kind you get from a full VM. A compromised or runaway agent can’t reach the host, other sandboxes, or anything outside its environment. If it tries to escape, it hits a wall.

Private, VM-isolated Docker daemon. This is the key differentiator for coding agents. AI is going to result in more container workloads, not fewer. Containers are how applications are developed, and agents need a Docker environment to do that development. Docker Sandboxes give each agent its own Docker daemon running inside a microVM, fully isolated by the VM boundary. Your agent gets full docker build, docker run, and docker compose support with no socket mounting, no host-level privileges, none of the security compromises other approaches require. This means we treat agents as we would a human developer, giving them a true developer environment so they can actually complete tasks across the SDLC.

No path back to the host. File access, network policies, and secrets are defined before the agent runs, not enforced by the agent itself. This is an important distinction. An LLM deciding its own security boundaries is not a security model. The bounding box has to come from infrastructure, not from a system prompt.

Why We Built a New VMM

Choosing microVMs was the easy part. Running them where developers actually work was the hard part.

We looked hard at existing options, but none of them were designed for what we needed. Firecracker, the most well-known microVM runtime, was designed for cloud infrastructure, specifically Linux/KVM environments like AWS Lambda. It has no native support for macOS or Windows, full stop. That’s fine for server-side workloads, but coding agents don’t run in the cloud. They run on developer laptops, across macOS, Windows, and Linux. 

We could have shimmed an existing VMM into working across platforms, creating translation layers on macOS and workarounds on Windows, but bolting cross-platform support onto a Linux-first VMM means fighting abstractions that were never designed for it. That’s how you end up with fragile, layered workarounds that break the “it just works” promise and create the friction that makes developers skip sandboxing altogether.

So we built a new VMM, purpose-built for where coding agents actually run.

It runs natively on all three platforms using each OS’s native hypervisor: Apple’s Hypervisor.framework, Windows Hypervisor Platform, and Linux KVM. A single codebase for three platforms and zero translation layers.

This matters because it means agents get kernel-level isolation optimized for each specific OS. Cold starts are fast because there’s no abstraction tax. A developer on a MacBook gets the same isolation guarantees and startup performance as a developer on a Linux workstation or a Windows machine.

Building a VMM from scratch is not a small undertaking. But the alternative, asking developers to accept slower starts, degraded compatibility, or platform-specific caveats, is exactly the kind of asterisk that makes people run agents on the host instead. Our approach removes that asterisk at the hypervisor level.

Fast Cold Starts

We rebuilt the virtualization layer from scratch, optimizing for fast spin up and fast tear downs. Cold starts are fast. This matters for one reason: if the sandbox is slow, developers skip it. Every friction point between “start agent” and “agent is running” is a reason to run on the host instead. With near-instant starts, there is no performance reason to run outside it.

What This Means In Practice

Here’s the concrete version of what this architecture gives you:

Full development environment. Agents can clone repos, install dependencies, run test suites, build Docker images, spin up multi-container services, and open pull requests, all inside the sandbox. Nothing is stubbed out or simulated. Agents are treated as developers and given what they need to complete tasks end to end. 
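
As an illustration, the commands below sketch the kind of session this enables; the repository URL and service names are placeholders, and the point is simply that these are ordinary git and Docker commands running against the sandbox's private daemon rather than the host:

# Hedged sketch of an agent's workflow inside a sandbox (placeholder repo
# and service names); every Docker command talks to the sandbox's own daemon.
git clone https://github.com/example/service.git && cd service
docker build -t service:dev .      # build an image against the private daemon
docker compose up -d               # spin up dependent services
docker compose exec app pytest     # run the test suite inside a container
docker compose down                # tear everything down before the sandbox exits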

Scoped access, not all-or-nothing. You define the boundary: exactly which files and directories the agent can see, which network endpoints it can reach, and which secrets it receives. Credentials are injected at runtime and outside the MicroVM boundary, never baked into the environment.

Disposable by design. If an agent goes off track, delete the sandbox and start fresh in seconds. There is no state to clean up and nothing to roll back on your host.

Works with every major agent. Claude Code, Codex, OpenCode, GitHub Copilot, Gemini CLI, Kiro, Docker Agent, and next-generation autonomous systems like OpenClaw and NanoClaw. Same isolation, same speed, one sandbox model across all of them.

For Teams

Individual developers can install and run Docker Sandboxes today, standalone, no Docker Desktop license required. 

For teams that want centralized filesystem and network policies that can be enforced across an organization and scale sandboxed execution, get in touch to learn about enterprise deployment.

The Tradeoff That Isn’t

The pitch for sandboxing has always come with an asterisk: yes, it’s safer, but you’ll pay for it in speed, compatibility, or workflow friction.

MicroVMs eliminate that asterisk. You get VM-grade isolation with cold starts fast enough that there’s no reason to skip it, and full Docker support inside the sandbox. There is no tradeoff.

Your agents should be running autonomously. They just shouldn’t be running without any guardrails.

Use Sandboxes in Seconds

Install Sandboxes with a single command.

macOS
brew install docker/tap/sbx   

Windows
winget install Docker.sbx  

Read the docs to learn more.



from Docker https://ift.tt/U8YylMB
via IFTTT

Agentic AI changes the shape of trust

Most security models were built around a simple idea: people log in, systems respond, and access is reviewed over time. That idea held up through the shift to cloud and automation. It still mostly works for services and pipelines. 

Agentic AI changes that balance, and the gap it creates isn't theoretical. It's operational, compounding, and increasingly easy to miss until something breaks. 

How access gets away from you 

Here's a scenario that's already playing out across enterprise environments. A user delegates a task to an AI cowork agent — say, pulling data from a CRM, cross-referencing a financial system, and generating a report. The cowork agent assumes the user's identity, completes the task, and everything looks fine. 

Except the access path it opened doesn't close cleanly. The role it assumed stays warm. The credential it used gets cached. Three months later, during an incident review, someone asks: which agent did that? Under whose authority? What else did it touch? Nobody can answer with confidence, because at the time, the access looked like a user doing their job. 

That's the shape of the problem: not a dramatic breach, but a quiet accumulation of access that nobody explicitly approved and nobody thought to revoke.

The identity model we built doesn't fit the world we're entering 

Traditional identity and access management was built around people. Even when we expanded it to applications and services, the underlying assumptions stayed largely intact: identities were provisioned deliberately, permissions were reviewed periodically, and access decisions were made ahead of time.

Agents don't fit that shape. They request access dynamically. They call new tools. They assume roles. In some cases, they generate credentials to complete a task. Over time, those interactions can create access paths no one explicitly approved, reviewed, or even anticipated, leading to privilege that grows not in one obvious jump, but in small steps that are easy to miss individually and hard to see in aggregate. 

This isn't just another form of complexity. It's a fundamentally new kind of identity control gap, and it has two distinct flavors. 

The first is delegated access where agents are acting on behalf of a human, inheriting that user's identity to carry out a task. Copilots and coding assistants work this way. The agent does things the user could do, but the user isn't watching every step. The second is autonomous access where agents are operating with their own identity, authenticating independently, and taking action outside the scope of any individual user's authority. Infrastructure agents and workflow orchestrators work this way. Both models are legitimate. Both create real governance challenges. And in most environments today, the controls for each are being built separately, inconsistently, or not at all. 

A new attack surface 

When agents inherit a user's identity, their actions are indistinguishable from a human's. When something goes wrong, there's no clean way to separate intent from execution or what a person approved versus what an agent decided on its own. When agents operate autonomously, they carry their own credentials, which means every agent is a potential target, a potential source of sprawl, and a potential audit gap. 

Either way, machine identities already outnumber human ones in most enterprises. Agents accelerate that imbalance. Every workflow needs credentials. Every tool call needs access. When teams are under pressure to move fast, those credentials tend to stick around longer than they should — long-lived, overly permissive, and sometimes shared — because managing uniqueness at scale feels like overhead no one has time for right now. 

That's how secrets end up in code, roles accumulate privileges, and access quietly spreads.

Why Day 1 isn't the hard part 

Most organizations don't fail at securing the initial deployment. They define roles, plug in existing IAM, and move forward. The failure happens later. 

Secrets rotate late, or not at all. Certificates expire unexpectedly. IAM roles quietly accumulate permissions. Remediation happens once, then drifts. And agents make this worse in a specific way. An autonomous system that can modify infrastructure and trigger workflows doesn't just inherit existing gaps, it can reintroduce vulnerabilities that were previously fixed, undoing security work between review cycles without anyone noticing. 

When incidents happen, the lack of clear attribution becomes the real problem. Teams struggle to answer basic questions like who authorized this action, which agent executed it, what credentials were used, what changed, and when? For regulated industries, that uncertainty isn't just inconvenient; it can stop an audit in its tracks. 

Static controls in a moving system 

Most security controls still assume access is something you grant in advance. Once authenticated, a system trusts that identity until something changes. 

Agents don't respect that boundary. A delegated agent might legitimately access a system in one moment and, two steps later in its reasoning chain, try to reach something its user never intended to authorize. An autonomous agent might operate across three cloud environments in the span of a single workflow. Context shifts constantly. What was appropriate five minutes ago may not be appropriate now. 

This is where existing tools run into their limits. IAM platforms focus on who you are. PAM was built around how humans access systems. Secrets management focuses on storing credentials. None of these were designed for an identity that changes context at machine speed, across environments, with no natural pauses. 

Trust can't be a one-time decision in these environments. Identity needs to be verified continuously, not just at login. Access needs to be scoped to the task at hand, not the lifecycle of the agent. Credentials should expire naturally, tied to specific context and purpose, without relying on cleanup processes that run quarterly. And authorization decisions have to happen at the point of action, not when something is provisioned. 

Extending zero trust to non-human identities 

For teams already working toward zero trust, agentic AI exposes the next gap to close, and it's the gap where existing controls are most likely to fail. 

The principles still apply: least privilege, continuous verification, strong identity at the center. What changes is the surface and the speed. Zero trust as most organizations have implemented it was designed for humans authenticating to systems. It assumed a person would log in, establish a session, and do work within that session. Agents don't work in sessions. They work in actions, thousands of them, across environments, triggered by other agents, chained into workflows that no human is watching in real time. 

Extending zero trust to agents means every agent has its own verifiable identity, not a shared key or borrowed role. It means access is temporary by default, and when a task ends, permissions should too. It means credentials are short-lived and issued just-in-time, not stored and rotated on a schedule. And it means actions are observable not just as events, but as attributable decisions: which identity authorized this, under what scope, on whose behalf. 
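
As a concrete, deliberately hedged illustration, the sketch below shows how per-agent identity with temporary access can be expressed today with Vault's existing AppRole auth method from the CLI; the role name, policy name, and TTLs are hypothetical, not a prescribed configuration.

# Hedged sketch: standard Vault CLI; role and policy names are hypothetical
vault auth enable approle

# Give each agent its own role, bound to a narrowly scoped policy,
# with tokens that expire on their own
vault write auth/approle/role/report-agent \
    token_policies="report-agent" \
    token_ttl=10m \
    token_max_ttl=30m \
    secret_id_ttl=5m \
    secret_id_num_uses=1

# The agent authenticates with its role ID plus a single-use secret ID
# and receives a short-lived token attributable to that specific agent
vault write auth/approle/login role_id="<role-id>" secret_id="<secret-id>"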

That's not a theoretical posture. It's a concrete set of controls that already exist for human-centric workflows: dynamic secrets, certificate-based identity, policy-enforced access, and comprehensive audit logging. The engineering challenge is extending them to cover agents at the scale and speed they operate.
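
Dynamic secrets and audit logging, for instance, are existing Vault capabilities. The following is a minimal sketch of issuing short-lived database credentials and enabling an audit device; the connection details, role name, and TTLs are purely hypothetical values.

# Hedged sketch: standard Vault CLI; connection details, role name, and TTLs are hypothetical
vault secrets enable database

vault write database/config/app-postgres \
    plugin_name=postgresql-database-plugin \
    connection_url="postgresql://{{username}}:{{password}}@db.internal:5432/app" \
    allowed_roles="agent-readonly" \
    username="vault-admin" password="example-initial-password"

# Each credential issued from this role expires after 15 minutes
vault write database/roles/agent-readonly \
    db_name=app-postgres \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';" \
    default_ttl=15m max_ttl=1h

# Every read returns a unique, short-lived credential tied to a lease
vault read database/creds/agent-readonly

# A file audit device records every request and response for attribution
vault audit enable file file_path=/var/log/vault_audit.log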

Moving forward without losing control 

Agentic AI isn't experimental anymore. Teams are adopting it because it works, and the pressure to move fast is real. 

The challenge is that speed creates the conditions for the scenario described at the start of this piece: access that accumulates quietly, through behavior rather than design, until the audit question comes and nobody has a clean answer. That's not a failure of tooling so much as a failure of assumptions. Security models that were built for a world where access was provisioned deliberately and reviewed periodically are now applied to systems that provision dynamically and never stop. 

The organizations that will handle this well aren't the ones that slow down adoption. They're the ones that connect identity, access, and execution into a coherent picture where every agent has a clear identity, every action is attributable, and the controls are enforced at the moment work happens, not the moment it's reviewed. 

That's how autonomy becomes something you can actually rely on. Not because you're watching everything, but because the system itself knows what it should and shouldn't do, and leaves a clear record either way. 

To learn more, check out our use case page or watch our explainer video.  




Vault Enterprise 2.0 modernizes identity security at scale

Vault Enterprise 2.0 is now generally available, delivering new capabilities to help organizations secure, scale, and simplify secrets management across modern infrastructure. This release strengthens identity-based access, improves credential lifecycle automation, and enables high-performance encryption for emerging workloads, while continuing to enhance usability and integrations across the ecosystem. 

Key features in Vault Enterprise 2.0 include: 

  • Secret distribution with workload identity federation to eliminate reliance on long-lived static credentials and improve security across hybrid and multi-cloud environments 

  • Expanded credential rotation capabilities for Linux to reduce operational risk and enforce short-lived access 

  • Envelope encryption for streaming and large-scale workloads to enable high-performance data protection without sacrificing centralized control 

  • Enhanced integrations with Terraform, Kubernetes, and public certificate authorities to streamline infrastructure and application workflows 

  • Improved user experience with a redesigned UI and guided onboarding to accelerate time to value and simplify Vault adoption 

Adoption of a new versioning pattern and support model 

HashiCorp Vault is transitioning to a new release and support model aligned with IBM versioning and lifecycle practices, which is why the product is moving directly from version 1.21 to 2.0.0. The jump does not signal the kind of architectural change that a new major version would normally represent. Instead, it marks a move away from HashiCorp’s previous long-term support approach and toward the IBM Support Cycle-2 policy, which is designed to provide clearer lifecycle expectations. Under this model, each major (“V”) milestone release will receive at least two years of standard support, with extended support options available to ensure continuity for mission-critical workloads. Extended support includes an initial third year with critical bug fixes, usage support, and select security updates, followed by ongoing support (years four through six) for usage guidance and known-issue assistance. This approach delivers a more predictable and durable support framework while aligning Vault with the broader IBM product lifecycle strategy. For more detail on IBM versioning and support patterns, see IBM Software product versioning explained and IBM Software Support Lifecycle Policies.

Vault Enterprise leads in securing human and non-human identities 

Identity management with Vault continues to evolve with new capabilities that support centralized policy management, reduce risks from long-lived secrets with improved rotation, and enforce traceability for increased auditability and transparency.  

Smarter rotation and simplified role management  

Local account password rotation for RHEL, Ubuntu, and additional Linux distributions is now generally available. With this capability, engineers and platform teams that use Vault can define secret management policies that reduce credential complexity, set rotation and lease periods, and apply other criteria that limit the blast radius and impact of a breach at the policy level.

Systems administrators now have central control of user account credentials on local Linux systems. Previously, they would have a gap in control, as local root users might use a common password shared across systems. Now, with password management in Vault, access to these systems can be controlled and audited, and overall risk is limited by unique time-bound passwords for each system.  

In addition, systems administrators who need to manage thousands of machines across various data centers can rely on automation to update local account passwords without manually logging in to each system they manage. Automating this critical security task improves overall posture by reducing the risk of manual errors and adding auditability for compliance reporting, which is key to securing a growing number of machines as adoption accelerates.

Vault operators will now benefit from seamless Vault onboarding that will not require maintenance windows. Each account will now be able to rotate its own credentials, and Vault operators will have fine-grained control over automatic rotation of LDAP account passwords. This reduces the burden of managing privileged accounts and decreases the blast radius of credential exposure for static roles.  
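
As a rough illustration of what policy-driven rotation looks like in practice, the sketch below uses static roles in the existing LDAP secrets engine from the Vault CLI; the server address, bind account, and rotation period are hypothetical. The new local-account rotation for Linux follows the same policy-driven pattern, and its exact interface is documented in the Vault 2.0 release notes.

# Hedged sketch: existing LDAP secrets engine; server, DN, and rotation period are hypothetical
vault secrets enable ldap

vault write ldap/config \
    binddn="cn=vault-admin,dc=example,dc=com" \
    bindpass="initial-password" \
    url="ldaps://ldap.example.com"

# Vault takes over this account's password and rotates it every 24 hours
vault write ldap/static-role/app-svc \
    dn="cn=app-svc,ou=service-accounts,dc=example,dc=com" \
    username="app-svc" \
    rotation_period="24h"

# Consumers read the current, automatically rotated credential from Vault
vault read ldap/static-cred/app-svc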

Secure streaming workloads on the edge with in-place encryption 

Vault Enterprise 2.0 also introduces enhanced support for encrypting large artifacts and streaming workloads by enabling envelope encryption with the Transit secrets engine. Rather than sending full payloads to Vault, applications can now encrypt data locally using ephemeral key encryption keys (KEKs), while Vault continues to manage and protect those keys through centralized policy and access controls. This approach preserves Vault as the root of trust while significantly improving performance, scalability, and efficiency for high-throughput and large-scale data processing use cases.
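
The general envelope pattern can be sketched with the long-standing Transit data-key workflow below; the key name is hypothetical, and the exact endpoints and options for the new large-artifact and streaming support may differ from this simplified example.

# Hedged sketch: the classic Transit data-key workflow; key name is hypothetical
vault secrets enable transit

# The named key never leaves Vault and acts as the wrapping key
vault write -f transit/keys/stream-key

# Vault returns a plaintext data key plus the same key wrapped as ciphertext;
# the application encrypts data locally with the plaintext key, discards it,
# and stores only the wrapped copy alongside the data
vault write transit/datakey/plaintext/stream-key

# Later, the wrapped data key is sent back to Vault to recover the plaintext
vault write transit/decrypt/stream-key ciphertext="vault:v1:<wrapped-data-key>"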

These capabilities are already being applied in real-world scenarios, such as with ariso.ai, where Vault serves as the centralized key management layer while encryption occurs at the edge across distributed AI pipelines. This allows organizations to scale encryption alongside data-intensive workloads without introducing bottlenecks, while still enforcing strong governance and security policies. As part of this release, envelope encryption positions Vault to better support modern AI and streaming architectures by combining centralized control with distributed execution. 

Scale secret distribution with identity-first access and secret sync 

Organizations managing secrets across hybrid and multi-cloud environments often rely on long-lived static credentials, such as IAM access keys, service principals, or service account keys, to enable integrations like secret synchronization. While functional, this model creates significant security and operational challenges: increased blast radius if credentials are leaked, manual rotation overhead, risk of silent failures due to expiration, and widespread credential sprawl across systems and teams. These issues are increasingly at odds with modern security mandates that prioritize short-lived, identity-based access and zero trust principles.

 Vault Enterprise 2.0 addresses these challenges by introducing workload identity federation to secret sync, replacing static credentials with short-lived, dynamically exchanged tokens based on trusted identity. This approach eliminates the need to store or rotate credentials, reduces risk exposure, and aligns secret distribution with cloud-native authentication models across AWS, Azure, and GCP. The result is stronger security, improved reliability, and simplified operations, enabling organizations to securely scale secret management, support non-human and agentic workloads, and maintain compliance without adding operational burden. 
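
A hypothetical configuration sketch of the idea (not the exact shipped API) is shown below. The destination path is the existing secret sync endpoint for AWS Secrets Manager, while the role ARN and identity-token audience parameters are assumptions modeled on how workload identity federation is configured elsewhere in Vault; consult the Vault 2.0 documentation for the authoritative parameter names.

# Hypothetical sketch: federation parameter names are assumptions, not the confirmed API.
# Vault's identity token provider acts as the OIDC issuer that AWS is configured to trust.
vault write identity/oidc/config issuer="https://vault.example.com:8200/v1/identity/oidc"

# Create an AWS Secrets Manager sync destination that assumes an IAM role via a
# Vault-issued identity token instead of storing static access keys
vault write sys/sync/destinations/aws-sm/prod-us-east \
    region="us-east-1" \
    role_arn="arn:aws:iam::123456789012:role/vault-secret-sync" \
    identity_token_audience="vault-secret-sync"

# Associate a KV v2 secret with the destination so changes stay synchronized
vault write sys/sync/destinations/aws-sm/prod-us-east/associations/set \
    mount="secret" secret_name="app/api-key"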

Secure workload identity with the SPIFFE secrets engine 

A new SPIFFE secrets engine is generally available with Vault Enterprise 2.0. Organizations whose workloads rely on SPIFFE can now use tokens issued directly by Vault. With this release, JWT SVID identity tokens can now be requested after successful authentication with Vault. Reinforcing short-lived JWT SVIDs with automatically rotated identities reduces risks associated with long-lived tokens and missed rotations due to manual processes, and decreases blast radius in the event of a token leak.  

As Vault continues to set the pace in non-human identity management, capabilities that support fine-grained workload access control enhance organizations’ capacity to secure ephemeral workloads. The SPIFFE secrets engine simplifies operations across heterogeneous environments and continues to strengthen identity guarantees for non-human workloads. Secure, short-lived, and verifiable workload identities make it practical to scale the application of zero trust principles, especially in cloud-native environments, and lighter-weight, more portable workload identities integrate more smoothly into these modern systems.

Vault continues to reinforce optimized security operations 

Unified, automated approach to public and private certificates 

Customers can now request and manage public PKI certificates through Vault, which will track and manage the request to a public CA. This capability provides increased support for teams that need to deliver services secured with publicly trusted certificates while continuing to move at the speed of development. Platform teams can now take advantage of an integrated workflow within Vault to manage both privately and publicly issued certificates for increased operational efficiency.   

Reduced operations costs with SCIM integrations 

Currently in public beta, SCIM server support lets users connect Vault with any SCIM-compliant identity provider. SCIM clients such as SailPoint, Okta, and others now integrate more cleanly for improved group and user lifecycle management. SCIM integration gives Vault operators more flexibility by reducing the manual work of syncing users, groups, and group memberships to Vault, and deprovisioning via policy rather than manual process mitigates the risk of persistent user credentials. Teams working toward more consistent, centralized governance can rely on this capability, available as a beta feature in Vault 2.0.

Expanding support for Terraform ephemeral resources  

Bridging secure lifecycle management with infrastructure lifecycle management, improvements to the Terraform Vault provider enhance Vault infrastructure as code and secure secret consumption. With these improvements, managing Vault (e.g. auth methods, secrets engines, and policies) via Terraform further ensures consistency, repeatability, and auditability of secrets management for infrastructure and the applications that depend on it. Teams gain additional efficiencies with Vault-backed secret retrieval during provisioning, without hardcoding credentials and with automated credential rotation.

Enhancing the Vault UI for discoverability and usability 

Vault Enterprise 2.0 introduces an enhanced UI with a guided onboarding experience that helps teams configure foundational features quickly and correctly. New and returning users are directed toward recommended Vault usage faster, with a curated startup path that accelerates time to value.  

The onboarding wizard is now generally available and is designed to evolve beyond initial setup, with additional wizards planned to make Vault guidance an ongoing experience rather than a one-time task.  

Contextual and embedded enhancements have also been introduced to support better feature discoverability. Guidance that previously lived only in Vault’s developer documentation is now delivered in-product, so users don’t need to leave Vault to learn how to use Vault.

Adoption can be accelerated when teams get the right help. The visual policy generator is also generally available and helps teams create secure policies without writing JSON or HCL from scratch. This reduces the learning curve for new users and administrators and improves efficiencies with consistent and recommended policy patterns across teams that use Vault.  

Vault Enterprise 2.0 upgrade details 

Vault Enterprise 2.0 delivers meaningful advancements in identity lifecycle automation, workload interoperability, usability, onboarding, and operational transparency. These improvements lower barriers to adoption while strengthening Vault’s core mission: secure, reliable, consistent secrets and identity management at enterprise scale.  

You can explore the full list of updates, including those that are available in Community Edition, by reviewing the Vault 2.0 changelog.  

As with previous releases, we recommend testing new releases in staging or isolated environments before deploying them to production. If you encounter any issues, please report them via the Vault GitHub issue tracker or start a discussion in the Vault community forum. If you believe you have discovered a security vulnerability, please report it responsibly by emailing security@hashicorp.com, and avoid using public channels for security issues. For details, refer to our security policy and PGP key.


To learn more about HCP Vault or Vault Enterprise, visit the Vault product page.  


