Wednesday, May 13, 2026

GemStuffer Abuses 150+ RubyGems to Exfiltrate Scraped U.K. Council Portal Data

Cybersecurity researchers are calling attention to a new campaign dubbed GemStuffer that has targeted the RubyGems repository with more than 150 gems that use the registry as a data exfiltration channel rather than for malware distribution.

"The packages do not appear designed for mass developer compromise," Socket said. "Many have little or no download activity, and the payloads are repetitive, noisy, and unusually self-contained."

"Instead, the scripts fetch pages from U.K. local government democratic services portals, package the collected responses into valid .gem archives, and publish those gems back to RubyGems using hardcoded API keys."

The development comes as RubyGems temporarily disabled new account registration following what has been described as a major malicious attack. While it's not clear if the two sets of activities are related, the application security company said GemStuffer fits the "same abuse pattern," which involves using newly created packages with junk names to host the scraped data.

At a high level, the campaign abuses RubyGems as a place to stage the scraped council content. It does this by fetching hard-coded U.K. council portal URLs, packaging the HTTP responses into valid .gem archives, and publishing those archives to RubyGems using embedded registry credentials.

In some cases, the payload embedded within the gem creates a temporary RubyGems credential environment under "/tmp," overrides the HOME environment variable, builds a gem locally, and pushes it to RubyGems using the gem command-line interface (CLI), as opposed to depending on pre-existing RubyGems credentials on the target machine.

Other variants of the malicious gems have been found to eschew the CLI component in favor of uploading the archive directly to the RubyGems API via an HTTP POST request. Once the new gems have been published, all an attacker has to do is run a "gem fetch" command with the gem name and version to access the scraped data.
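Because a .gem file is just a tar archive wrapping metadata.gz and data.tar.gz, the same retrieval path is open to analysts: a published package can be pulled down and unpacked to see exactly what was stuffed inside it. A minimal sketch of that inspection step (Python, with a hypothetical gem name and version) might look like this:

```python
import io
import tarfile
import urllib.request

# Hypothetical package name and version, purely for illustration.
NAME, VERSION = "junk-gem-example", "0.0.1"
url = f"https://rubygems.org/gems/{NAME}-{VERSION}.gem"

# A .gem file is an uncompressed tar archive containing metadata.gz,
# data.tar.gz, and checksums.yaml.gz.
raw = urllib.request.urlopen(url).read()
with tarfile.open(fileobj=io.BytesIO(raw)) as gem:
    data_tar = gem.extractfile("data.tar.gz").read()

# List what was packed into the gem -- in the campaign described above,
# this is where the scraped portal responses end up.
with tarfile.open(fileobj=io.BytesIO(data_tar), mode="r:gz") as payload:
    for member in payload.getmembers():
        print(member.name, member.size)
```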

The novel scraping campaign has been found to target public-facing ModernGov portals used by Lambeth, Wandsworth, and Southwark, with the aim of collecting committee meeting calendars, agenda item listings, linked PDF documents, officer contact information, and RSS feed content. It's not clear what exactly the end goals are, as the information appears to be publicly accessible anyway.

Socket has assessed that the systematic bulk collection and archival of this data raises the possibility that the attacker may be leveraging the "council portal access as a pivot to demonstrate capability against government infrastructure."

"It may be registry spam, a proof-of-concept worm, an automated scraper misusing RubyGems as a storage layer, or a deliberate test of package registry abuse," Socket said. "But the mechanics are intentional: repeated gem generation, version increments, hardcoded RubyGems credentials, direct registry pushes, and scraped data embedded inside package archives."



from The Hacker News https://ift.tt/KWgxVN2
via IFTTT

Tuesday, May 12, 2026

Docker AI Governance: Unlock Agent Autonomy, Safely

Introducing Docker AI Governance: centralized control over how agents execute, what they can reach on the network, which credentials they can use, and which MCP tools they can call, so every developer in your company can run AI agents safely, wherever they work.

Your laptop is the new prod

Agents are the biggest productivity unlock the modern workplace has seen in a generation, and engineering is where the shift is most obvious. Developers aren’t using agents to autocomplete a function anymore. They’re using them to read whole codebases, refactor across services, and ship entire products, end to end. Vibe coding is real, it’s shipping to main, and it’s happening on laptops everywhere today.

The same shift is moving through every other function. A new class of agents called Claws is already in production, sending emails, managing calendars, booking travel, pulling CRM data, reconciling reports, and querying production systems. Marketing, finance, sales, and support are adopting them as fast as engineering is, because the productivity gains are too large to ignore and the companies that move first will out-execute the ones that don’t. Org-wide rollouts that used to take quarters are landing in weeks.

What’s more interesting than the speed of adoption is where all of this actually runs. Agents and Claws live outside the systems enterprises spent two decades hardening. They don’t sit behind your CI/CD pipeline, they don’t live inside your VPC, and they don’t follow your IAM model. They run on the developer’s machine, with the developer’s credentials, reaching into private repos, production APIs, customer records, and the open internet, often in the same session. The laptop just became the most powerful node in your enterprise, and it also became the most exposed. Laptop and agent environments are the new prod, and they need to be governed like prod.

What governance actually has to solve

The instinct in most enterprises is to reach for the tools that already exist, but none of them see what an agent is doing. CI/CD doesn’t see it because the agent isn’t a pipeline. The VPC doesn’t see it because the laptop is outside the perimeter. IAM doesn’t see it because the agent is acting as the developer. The result is that CISOs can’t tell what an agent touched, what it ran, or where the data went, and they also can’t tell the business to slow down. This is the bind every security leader is in right now.

Strip the problem to first principles and an agent has two paths to do significant harm. It either executes code itself, touching files and opening network connections, or it calls a tool through an MCP server to act on an external system. Govern both paths and you’ve governed the agent. Miss either one and you haven’t.

That’s the test for any AI governance solution worth taking seriously, and it has two parts. The controls have to live at the runtime layer where the agent actually executes, not as advisory rules layered on top that a clever prompt can route around. And they have to work consistently wherever the agent ends up running, because agents don’t stay on the laptop. They migrate to CI runners, to staging clusters, to production. A policy that only holds in one of those places is a gap waiting to be found.

Why Docker

Docker is the only company that meets both parts of that test, and the reason is structural.

Docker built the sandbox that contains the first path. Every agent session runs inside a microVM-based isolated environment where filesystem and network access are controlled by a hard boundary, which means enforcement happens at the level of the process, not as a suggestion the agent can ignore. Docker built the MCP Gateway that contains the second path. Every tool call routes through a single chokepoint where it can be authenticated, authorized, and logged before it reaches the external system. These primitives, Docker Sandboxes and the Docker MCP Gateway, make enforcement strict instead of advisory. We own the substrate the agent is running on, so the policy isn’t a wrapper around someone else’s runtime, it’s the runtime.

The second part is what makes this durable. The same sandbox primitive runs on the developer’s laptop, inside Kubernetes, and across cloud environments, with the same policy model and the same enforcement guarantees. When an agent moves from a developer’s machine to a CI runner to a production cluster, the policy moves with it, because the runtime underneath is the same in all three places. No other vendor can say that, because no other vendor is the runtime. Endpoint security tools don’t extend to clusters. Cluster security tools don’t reach the laptop. Cloud security tools don’t run on either. Docker covers all three because Docker is what’s actually executing the agent in all three.

Docker AI Governance is the control plane that sits on top of that runtime. It turns the sandbox and the MCP Gateway into centralized policy, defined once in the admin console, enforced at every node the agent touches, and auditable from end to end.

How Docker AI Governance works

From a single admin console, security teams define and enforce policy across four control surfaces: network, filesystem, credentials, and MCP tools. It’s one policy layer that needs no per-machine setup and works consistently across thousands of developers.

Sandbox policy for network and filesystem. Admins define allow and deny rules for domains, IPs, and CIDRs, alongside mount rules for filesystem paths with read-only or read-write scope. Every agent session runs inside an isolated sandbox where only approved endpoints are reachable and only approved directories are mountable, with enforcement happening at the proxy and mount level rather than as an advisory layer the agent can ignore.
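To make the enforcement model concrete, the toy sketch below shows how allow/deny evaluation for outbound requests can work at a proxy chokepoint. It is an illustration only, using made-up rules, and is not Docker AI Governance’s actual policy schema or enforcement code:

```python
import ipaddress
from urllib.parse import urlparse

# Illustrative only: a toy model of proxy-level egress filtering, not
# Docker AI Governance's actual policy format or implementation.
ALLOW_DOMAINS = {"github.com", "pypi.org"}
ALLOW_CIDRS = [ipaddress.ip_network("10.0.0.0/8")]
DENY_DOMAINS = {"pastebin.com"}

def egress_allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    if host in DENY_DOMAINS:           # deny rules win over allow rules
        return False
    if host in ALLOW_DOMAINS:
        return True
    try:                               # IP literals are checked against CIDR allow-lists
        addr = ipaddress.ip_address(host)
        return any(addr in net for net in ALLOW_CIDRS)
    except ValueError:
        return False                   # anything not explicitly allowed is blocked

print(egress_allowed("https://github.com/org/repo"))   # True
print(egress_allowed("https://exfil.example.net/up"))   # False
```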

Credential governance. Agents are dangerous in proportion to what they can authenticate as, so Docker AI Governance controls which credentials, tokens, and secrets an agent session can see, scopes them to the duration of that session, and blocks exfiltration to unapproved destinations. Developers stop pasting tokens into prompts, and security stops wondering where those tokens ended up.

MCP tool governance. Admins control which MCP servers and tools are available through organization-wide managed policies, with unapproved servers blocked by default. Every MCP call flows through the same policy engine as network, filesystem, and credential requests, so there’s no separate surface to configure and no bypass path.

Role-based policy assignment. Different teams need different levels of access, and security research will reasonably require broader MCP usage than finance. Create policy groups, assign users through your IdP, and layer team-specific rules on top of organization-wide guardrails that can’t be overridden. It scales to thousands of developers through existing SAML and SCIM integrations with no per-user setup.

Audit and visibility. Every policy evaluation generates a structured event with user identity, timestamp, session context, and the rule that triggered the decision, and logs export cleanly to your existing SIEM and compliance systems. This is the evidence CISOs need to approve AI usage at scale rather than tolerate it under the table.
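As a rough illustration of what such a record could contain, the snippet below emits a made-up policy-evaluation event as JSON; the field names are invented for this sketch and are not Docker’s actual audit schema:

```python
import json
from datetime import datetime, timezone

# Illustrative only: a hypothetical structure for a policy-evaluation event.
event = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "user": "dev@example.com",            # identity from the IdP
    "session_id": "sandbox-7f3a",          # hypothetical session context
    "surface": "network",                  # network | filesystem | credentials | mcp
    "request": {"host": "internal.example.net", "port": 443},
    "decision": "deny",
    "rule": "block-unapproved-egress",     # the rule that triggered the decision
}

print(json.dumps(event, indent=2))         # export path: SIEM / compliance pipeline
```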

Automatic policy propagation. When a developer authenticates, their machine pulls the latest policy, and updates reach every device automatically. Admins define policy once and Docker enforces it everywhere.

What this unlocks

CISOs get the governance layer they’ve been missing and the confidence to approve agent usage at scale rather than block it. Platform teams get an easy way to set up governance: define a policy once and have it enforced everywhere, with full auditability. This removes the operational burden of scaling AI adoption across the company. Developers get what agents promised in the first place: real speed and autonomy, with governance that stays out of the way. We built Docker AI Governance with these principles in mind: agents should be autonomous and governance should be invisible.

Available today

Docker AI Governance is available now. If you’re a security leader trying to close the AI governance gap, or a platform team ready to roll out agents without compromising control, it was built for you.

Contact sales to learn more.



from Docker https://ift.tt/vO5UJet
via IFTTT

RubyGems Suspends New Signups After Hundreds of Malicious Packages Are Uploaded

RubyGems, the standard package manager for the Ruby programming language, has temporarily paused account sign ups following what has been described as a "major malicious attack."

"We're dealing with a major malicious attack on Ruby Gems right now," Maciej Mensfeld, senior product manager for software supply chain security at Mend.io, said in a post on X. "Signups are paused for the time being. Hundreds of packages involved – mostly targeting us, but some carrying exploits."

Visitors to RubyGems' sign up page are now greeted with the message: "New account registration has been temporarily disabled."

Mend.io, which secures RubyGems, said it intends to release more details once the incident is contained. It's currently not known who is behind the attack.

The development comes as software supply chain attacks targeting open-source ecosystems have been on the rise, with threat actors like TeamPCP compromising widely used packages to distribute credential-stealing malware capable of harvesting sensitive data and allowing the attackers to expand their reach.

In a report published Monday, Google said the credentials stolen from affected environments have been monetized through partnerships with ransomware and data theft extortion groups.

(This is a developing story. Please check back for more details.)



from The Hacker News https://ift.tt/VXhlYZb
via IFTTT

Defending consumer web properties against modern DDoS attacks

If you own, create, or maintain online services and web portals, you’re probably aware of the dramatic upswing in DDoS attacks on your domains. AI has democratized tooling not just for us but for threat actors as well. DDoS in this era has extended from simple bandwidth saturation to sophisticated, application-layer abuse. Defending against this activity now requires system-level design, beyond just the typical network-level filtering. As botnets continue to expand their footprint and evade identification, it is important for us to take a step back, assess the situation, and take a defense-in-depth approach to increase our resilience against this class of disruption.

DDoS activity across Bing and other online services at Microsoft has seen a large uptick in the past five to six years. As reported in the Microsoft Digital Defense Report 2025, Microsoft now processes more than 100 trillion security signals, blocks approximately 4.5 million new malware attempts, analyzes 38 million identity risk detections, and screens 5 billion emails for malicious content each day. This helps illustrate both the breadth of modern attack surfaces and the automation cyberattackers can now wield at industrial scale. When we narrow in specifically on DDoS, an even clearer trend emerges: beginning in mid-March of 2024, Microsoft observed a rise in network DDoS attacks that eventually reached approximately 4,500 cyberattacks per day by June 2024. And this persistent volume was paired with a shift toward more stealthy application-layer techniques.

In my role as Vice President, Intelligent Conversation and Communications Cloud Platform at Microsoft, I focus on helping the Microsoft AI and Bing teams build systems that are safe, resilient, and worthy of user trust, even under the sustained pressure we’re receiving from today’s cyberattackers. Whether you are responsible for a single public website or a large portfolio of consumer-facing applications, defending against modern DDoS attacks means more than just absorbing traffic. It means building defense-in-depth robust enough that, even if some attack traffic gets through, your service stays usable for the people who rely on it.

The nature of modern DDoS attacks

Early DDoS attacks were largely about volume. Cyberattackers would flood a target with traffic in an attempt to saturate network capacity and force an outage. While volumetric attacks still happen, most large services now have baseline protections that make this approach less effective on its own.

Modern DDoS attacks are more nuanced. They are often multi-vector, with a single campaign potentially including network-layer floods and application-layer abuse at the same time. Along with the exponential increase in the scale of these cyberattacks, they are also getting more tailored to stress specific applications and user flows. Application-layer attacks are gaining popularity because they are harder to distinguish from legitimate usage.

We also see threat actors utilizing a broader range of devices in botnets, including consumer Internet of Things (IoT) devices and misconfigured cloud workloads. In some cases, cyberattackers abuse legitimate cloud infrastructure to generate traffic that blends in with normal usage patterns. Edge systems, such as content delivery networks (CDNs) and front-door routing services, are increasingly targeted because they sit at the boundary between users and applications.

When attack traffic looks like normal user traffic, typical network-level blocklists aren’t very effective. You need sophisticated fingerprinting (starting with JA4), layered controls, and good operational visibility. This evolution is part of what makes defending against DDoS more than a networking problem. It is now a system design problem, an operational monitoring problem, and ultimately a trust problem.

A defense-in-depth framework

Even if you block 95% of malicious traffic, the remaining 5% can still be enough to take you down if it hits the right bottleneck. That’s why defense-in-depth matters.

A strong defensive posture starts with making abnormal traffic easier to spot and harder to exploit. Techniques like rate limiting, geo-fencing, and basic anomaly detection remain foundational. They are most effective when tuned to your specific traffic patterns. Cloud-native DDoS protection services play an important role here by absorbing large-scale attacks and surfacing telemetry that helps teams understand what is happening in real time. If you run on Azure, there are built-in options that can help when used as part of a broader design. Azure DDoS Protection is designed to mitigate network-layer cyberattacks and is intended to be used alongside application design best practices. At the edge, services like Azure Web Application Firewall (WAF) on Azure Front Door can provide centralized request inspection, managed rule sets, geo-filtering, and bot-related controls to reduce malicious traffic before it reaches your origins.
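As a concrete illustration of the rate-limiting building block mentioned above, here is a minimal token-bucket sketch. Real deployments would key buckets on richer signals (such as JA4 fingerprints) and enforce them at the edge rather than in application code, but the mechanics are the same:

```python
import time
from collections import defaultdict

# Minimal token-bucket sketch of per-client rate limiting.
RATE = 5.0      # tokens added per second
BURST = 20.0    # maximum bucket size

_buckets: dict[str, tuple[float, float]] = defaultdict(lambda: (BURST, time.monotonic()))

def allow_request(client_key: str) -> bool:
    tokens, last = _buckets[client_key]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)   # refill since last request
    if tokens < 1.0:
        _buckets[client_key] = (tokens, now)
        return False                                     # over the limit: throttle or challenge
    _buckets[client_key] = (tokens - 1.0, now)
    return True
```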

Microsoft publishes a range of Secure Future Initiative (SFI) guidance and engineering blogs that describe patterns we use internally to harden consumer services at scale, and if you’re looking to assess how robust your site’s current DDoS resilience posture is, here’s a simple five-level maturity framework to work from:

Each level below describes the state’s attributes and characteristics, its readiness posture (availability and latency), and its risk profile from a CISO perspective.

Level 1: Exposed (Direct Origin/No CDN)

Attributes and characteristics:
  • Architecture: Monolithic; origin IP exposed through DNS A-records.
  • Detection: Manual log analysis post-incident; reactive alerts on server CPU spikes.
  • Mitigation: Null-routing by the ISP (taking the site offline to save the network); manual firewall rules.
  • Key signal: Immediate 503 errors during minor surges.

Readiness posture: Fragile/Volatile
  • Availability: Single point of failure; zero resilience to volumetric or L7 attacks.
  • Latency: Highly variable; degrades linearly with traffic load.
  • Recovery: Hours to days (manual intervention required).

Risk profile: Critical/Existential
  • Residual risk: High. The organization accepts that any motivated attacker can cause a total outage.
  • Financial impact: Direct revenue loss proportional to downtime.
  • Reputation: Severe damage; loss of customer trust.

Level 2: Basic Protection (Commodity CDN/Volumetric Shield)

Attributes and characteristics:
  • Architecture: Static assets cached at the edge; origin cloaked.
  • Detection: Threshold-based volumetric alerts (for example, more than 1 Gbps).
  • Mitigation: “Always-on” scrubbing for L3/L4 floods; basic geo-blocking.
  • Key signal: Survival of SYN floods, but failure under HTTP floods.

Readiness posture: Defensive/Static
  • Availability: Resilient to network floods; vulnerable to application exhaustion.
  • Latency: Improved for static content; poor under dynamic attacks.
  • Recovery: Minutes (automated scrubbing activation).

Risk profile: High/Managed
  • Residual risk: Moderate-high. Application logic remains a soft target.
  • Blind spot: Sophisticated bots bypass volumetric triggers.
  • Compliance: Meets basic continuity requirements but fails resilience stress tests.

Level 3: Advanced Edge (Intelligent Filtering/WAF)

Attributes and characteristics:
  • Architecture: Edge compute; dynamic web application firewall (WAF); API gateway enforcement.
  • Detection: Signature-based (JA3/JA4 fingerprinting); User-Agent analysis.
  • Mitigation: Rate limiting by fingerprint/behavior; CAPTCHA challenges.
  • Key signal: High block rate of “bad” traffic with low false positives.

Readiness posture: Proactive/Robust
  • Availability: High availability for most attack vectors, including low-and-slow.
  • Latency: Consistent; edge mitigation prevents origin saturation.
  • Recovery: Seconds (automated policy enforcement).

Risk profile: Medium/Controlled
  • Residual risk: Medium. Shift to “sophisticated bot” risk (bots mimicking humans).
  • Focus: Quality of service (QoS) and reducing false positives.
  • Investments: Shift from hardware to threat intelligence feeds.

Level 4: Resilient Architecture (Graceful Degradation/Bulkheading)

Attributes and characteristics:
  • Architecture: Circuit breakers; load-shedding logic; defense-in-depth.
  • Detection: Service-level health checks; dependency failure monitoring; outlier detection; trust scores.
  • Mitigation: Challenges/CAPTCHAs; automated feature toggling for service degradation (for example, disable “Reviews” to save “Checkout”).
  • Key signal: “Limited impact to availability” during massive events.

Readiness posture: Resilient/Adaptive
  • Availability: Core functions remain online; non-critical features degrade.
  • Latency: Controlled degradation; critical paths prioritized.
  • Recovery: Real-time (system self-stabilization).

Risk profile: Low/Tolerable
  • Residual risk: Low. The business accepts degraded functionality to preserve revenue.
  • Narrative: “We operated through the attack with minimal user impact.”
  • Risk appetite: Aligned with business continuity tiers.

Level 5: Autonomous Defense (AI-Powered/Predictive)

Attributes and characteristics:
  • Architecture: Serverless edge logic; multi-CDN failover; chaos engineering.
  • Detection: AI and machine learning predictive modeling; zero-day pattern recognition.
  • Mitigation: Autonomous policy generation; preemptive scaling.
  • Key signal: Attack neutralized before human operators are aware of it.

Readiness posture: Antifragile/Optimized
  • Availability: Near 100% through multi-redundancy and predictive scaling.
  • Latency: Optimized dynamically based on threat level.
  • Recovery: Instantaneous/preemptive.

Risk profile: Minimal/Strategic
  • Residual risk: Very low. Focus shifts to supply chain and novel vectors.
  • Posture: Continuous improvement through red teaming and chaos experiments.
  • Leadership: The chief information security officer (CISO) drives industry intelligence sharing.

Planning for graceful degradation

One of the most common misconceptions about DDoS defense is that success means “no reduction in services.” In reality, even a partially successful attack can degrade performance enough to frustrate users or erode trust, without triggering a full outage. Graceful degradation is about maintaining core functionality even when systems are under stress. It means being deliberate about which user flows must remain available and which can be temporarily limited without causing disproportionate harm.

For example, our systems prioritize core scenarios over secondary features during extremely large cyberattacks. In practice, this can mean temporarily delaying nonessential personalization or shedding load from less critical features to preserve overall responsiveness. These decisions are made in advance and tested, not improvised during an incident. Here's an example of how we might do that, with a simplified code sketch after the list:

  • Prioritizing core user flows: We would focus on keeping core scenarios responsive. That might mean protecting one or two core scenarios while de-emphasizing secondary experiences.
  • Reducing expensive work first: Some parts of an experience are computationally heavier. Under attack pressure, those are candidates for temporary reduction, so the overall service stays usable.
  • Tiered experience under load: In extreme conditions, you can provide a better experience for users with higher trust signals while still offering an acceptable experience to everyone else. This is not about punishing lower trust users. It is about making sure your system can still serve legitimate demand when resources are constrained.
  • Clear user messaging: If you need to disable or simplify a feature temporarily, communicate it in a way that is honest and calm. You do not need to explain your internal architecture. You do need to be predictable.
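The sketch below illustrates that kind of pre-planned degradation in miniature. The feature names, thresholds, and trust handling are invented for illustration; they are not how Bing's systems are implemented.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    critical: bool        # core flows are never shed
    shed_at_load: float   # fraction of capacity at which this feature is switched off

# Illustrative catalog: names and thresholds are made up for this sketch.
FEATURES = [
    Feature("checkout", critical=True, shed_at_load=1.0),
    Feature("search", critical=True, shed_at_load=1.0),
    Feature("personalization", critical=False, shed_at_load=0.7),
    Feature("reviews", critical=False, shed_at_load=0.8),
]

def enabled_features(current_load: float, trusted_client: bool) -> list[str]:
    """Decide which features to serve given measured load and a client trust signal."""
    active = []
    for f in FEATURES:
        if f.critical:
            active.append(f.name)                # core flows always stay on
        elif trusted_client or current_load < f.shed_at_load:
            active.append(f.name)                # higher-trust clients keep extras longer
    return active

print(enabled_features(current_load=0.85, trusted_client=False))  # ['checkout', 'search']
print(enabled_features(current_load=0.85, trusted_client=True))   # all four features
```

The important property is that the ranking exists before the incident: under load, the system consults a list it already trusts instead of improvising judgment calls mid-attack.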

Designing for resilience means assuming that individual components will fail or be stressed at some point. Systems that are built with that expectation tend to recover faster and maintain user trust more effectively than systems that aim for perfect uptime at all costs.

Get started improving your DDoS defense

If I could leave you with a single practical concept, it would be this: treat DDoS as a normal operating condition for internet-facing services. Build defense in depth. Assume some cyberattack traffic will get through. Design your service so it can degrade gracefully while protecting the user experiences that matter most.

Consumer trust is fragile and hard-earned. Developers and operators who think beyond raw availability, and who design for transparency, prioritization, and resilience, are better positioned to handle the realities of today’s cyberthreat landscape. Modern defensive strategies combine proactive controls, thoughtful architecture, and a clear understanding of what matters most to users.

For those interested in going deeper, I encourage you to explore the Secure Future Initiative resources and the other Office of the CISO blogs provided by my peers at Microsoft. Both of these resources frequently share practical patterns for building and operating resilient services at scale.

Microsoft Deputy CISOs

To hear more from Microsoft Deputy CISOs, check out the OCISO blog series.

To stay on top of important security industry updates, explore resources specifically designed for CISOs, and learn best practices for improving your organization’s security posture, join the Microsoft CISO Digest distribution list.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.  

The post Defending consumer web properties against modern DDoS attacks appeared first on Microsoft Security Blog.



from Microsoft Security Blog https://ift.tt/8mV3AS6
via IFTTT

IBM Vault 2.0 adds UI enhancements and improved reporting visibility

Many educational resources exist outside of IBM Vault to help users get the most value out of their secrets management experience: developer docs, videos authored in-house by HashiCorp, an IBM company, and user- and community-generated content. Becoming a Vault expert, however, requires pulling together disparate knowledge that isn’t contextualized within the product. With the recent release of IBM Vault 2.0, we’ve taken the opportunity to holistically reassess Vault with a clear goal: make Vault easier to use, so that customers can onboard the product without needing a PhD in Vault.

Vault 2.0.1 includes improved reporting and visibility into consumption, helping organizations better understand how Vault is used across secrets management, key lifecycle management, identity brokering, and data protection. These enhancements provide greater transparency into usage patterns, enabling teams to improve operational visibility as well as support forecasting, planning, and governance initiatives.

A focus on Vault onboarding and adoption 

There are two key pillars we’re focusing on when planning the UI enhancements: 

  • How do we help customers easily understand and discover new features? How do we match our features easily and intuitively to customer problems rather than expecting customers to be experts in Vault documentation?

  • How do we enable customers to quickly, easily, with best practices implementation, adopt features that strengthen their ability to deliver their own roadmaps?     

UI enhancements available in IBM Vault 2.0 

Following are the key features and improvements designed to help customers onboard to Vault:  

  • Visual policy generator - provides a pre-filled, contextual UI form that generates policy snippets. The policy snippets can be copied for use in the Terraform Vault Provider (recommended) or saved to the Vault cluster.  

  • Onboarding wizard - starts customers off with simple questions about how they would use a feature and then generates an editable code snippet that supports their usage of the feature.  

  • Introductory pages for new and existing features - details the feature’s value add and other helpful information with a recommended quick-start action.  

  • Navigation bar revamp - groups features by customer problems to center the customer experience and contextualize the features that can best help them as they use Vault.  

Visual policy generator assists with code customization 

New Vault users have no permissions by default. Assigning permissions to use any feature or interact with any resource requires writing the custom code for a policy. This can create an operational burden for administrators and a barrier for feature adoption.  

In Vault 2.0, a contextual visual policy generator helps customers create pre-filled, best-practices policies from forms. The generator produces editable code, giving customers a supported starting point that they can customize by changing the inputs as needed.

A screenshot of the IBM Vault UI showing the visual policy generator for ACL policy

A pre-filled contextual UI form is available to the customer to complete. Then, they will receive the properly formatted, best practices code for a Vault ACL policy, which they can save to their local Vault cluster or copy into their Terraform Vault Provider (or other script). 

The visual policy generator automatically detects and populates as much of the necessary information as possible to help Vault users get started faster. However, users should still review the code to confirm it meets their specifications and policies before using. 

Onboarding wizard simplifies turning on features 

Beginning with namespaces, Vault 2.0 includes an onboarding wizard pattern that guides users through the options available to them. This removes some of the choice overload that can overwhelm a user as they make the selection for the features that best meet their requirements.

A screenshot of the IBM Vault UI showing the onboarding wizard for Namespaces

Following the selection, the wizard generates a Terraform code snippet, a CLI command, or an option to apply the choices directly in the UI. The onboarding wizard is being released with support for Namespaces in 2.0.0, with plans to support additional features in the future. Customers can always reach out to their account team to request support for particular features, as product and experience feedback are valuable inputs to our planning.

Introductory pages provide at-a-glance information about unused features  

Introductory feature pages are being added to provide important details such as the business case, use case analysis, links to documentation, and easy-to-follow diagrams, so users can understand the value of Vault features without having to seek out external documentation.

A screenshot of IBM Vault UI showing the introductory page to Namespaces

Guided starts are embedded in the intro pages for easy setup of new features, and an estimated set-up time helps users preview their time investment.

This improvement is being rolled out in phases, with more introductory pages planned for other features.  

Navigation bar revamp centers the customer’s ‘problems to be solved’ 

Features are most valuable when they’re understood and used, but this can be difficult if they aren’t readily related to the customer’s challenges. We’ve revamped the navigation bar to group features by customer problems.   

The goal here is to align Vault with the workflows customers want to complete. Additionally, we have renamed ‘control groups’ to ‘access workflows’, which is more market standard and intuitive. 

Reporting enhancements support expanded Vault usage 

Secrets management isn’t the only use case for Vault. As customers have expanded to other use cases, we want to encourage them to continue experimenting for wider adoption of Vault. To that end, we’ve introduced reporting enhancements for visibility into licensing and consumption for customers who are licensed as such and have upgraded to Vault 2.x and later. Usage is now measurable across the following units of measure for these Vault use cases:

  • Secrets management: number of managed secrets 

  • Key lifecycle management: number of managed keys 

  • Identity brokering: number of credential units issued 

  • Data protection: number of data protection API calls 

These units of measure are directly aligned to security outcomes and support Vault users in providing visibility to security and compliance stakeholders – whether that’s with budget reconciliation or pre-planning with more granularity in their forecasting.   

With multiple use cases and the capability to measure across different units, these measurement and reporting enhancements allow Vault to deliver greater customer value with better cost-to-value alignment with pricing that encourages flexibility and growth.  

Learn more 

IBM Vault 2.0 became generally available on April 14, 2026. You can read about the release on our blog and access the Vault release notes in our developer docs. Stay tuned as more IBM Vault and other HashiCorp news is published.  



from HashiCorp Blog https://ift.tt/LzbUv2e
via IFTTT

Terraform adds cost visibility, project-level notifications, and more

In the past few months, the HashiCorp Terraform engineering team has continuously improved features to help organizations eliminate infrastructure blind spots and strengthen governance and security across the entire infrastructure lifecycle. The latest HCP Terraform and Terraform Enterprise improvements include:

  • Billable resource analytics (GA)

  • Project-level remote state sharing (GA)

  • Module testing for dynamic credentials (GA)

  • Project-level notifications (GA)

  • Registry tagging (Beta)

Billable resource analytics

Feature gap: Organizations using resources under management (RUM)-based billing face a significant visibility challenge when trying to understand where infrastructure costs are coming from. Until now, HCP Terraform customers could only view their total billable managed resources at the organization level. Deeper insights into where resources were being consumed weren't available in an easily accessible way. This made it difficult to estimate costs, predict future consumption, and determine which elements in an organization are consuming what percentage of used resources.

What's new: We are introducing the general availability of billable resource analytics for HCP Terraform. This new capability transforms how organizations manage infrastructure costs by providing users with comprehensive visibility into resource consumption across their entire organization. By breaking down the current totals of billable managed resources by project and workspace, decision makers gain the insights needed to reduce unnecessary spending and optimize their infrastructure investments. Available as a self-service view on the existing usage page, this capability eliminates delays in accessing critical cost data and empowers organizations to take immediate action on cost optimization opportunities.

Benefits:

  • Cost visibility and predictability: Organizations gain the visibility needed to proactively manage infrastructure spending rather than reacting to unexpected bills. By identifying high-consumption projects and workspaces, organization owners can work with engineering teams to right-size resources, eliminate waste, and stay within budget constraints.

  • Data-driven decision making: Leaders can make informed decisions about infrastructure investments based on actual consumption patterns rather than guesswork. The detailed breakdowns reveal which projects consume the highest resources and where consolidation opportunities exist. This can enable strategic resource allocation that aligns infrastructure spending with business priorities.

To see what billable resource analytics has to offer, any user on a paid HCP Terraform plan can access the new view on the existing usage page for their organization.

A screenshot of the billable resources table in HCP Terraform

Project-level remote state sharing

Feature gap: Until now, platform teams managing large-scale infrastructure on HCP Terraform and Terraform Enterprise faced a difficult trade-off when sharing data between workspaces using the terraform_remote_state data source. They were limited to two primary options: sharing state with every workspace in the entire organization, which increased the security risk, or manually managing a list of workspaces, which was slow and error-prone.

For large enterprises, organization-wide sharing is often too broad, exposing sensitive configurations to unrelated teams and violating the principle of least privilege. Conversely, manually maintaining access lists for hundreds or thousands of workspaces creates a massive operational burden. This gap forced many customers into an unsustainable "multi-organization" model — creating numerous separate organizations just to maintain security boundaries — which is difficult to manage and impacts system performance.

What's new: We are introducing the GA of a new remote state sharing option: "Share with all workspaces in this project" for HCP Terraform and Terraform Enterprise 1.1.0 and later. This feature allows you to use projects as a true isolation boundary so that users can have an effective way to control their state sharing, increasing security and developer velocity.

When this setting is enabled in a workspace’s general settings, its remote state becomes automatically accessible to any other workspace within the same project. This access is dynamic: If a new workspace is added to the project, it immediately gains access to the shared state. If a workspace is moved to a different project, its access is automatically revoked, and its own shared state is re-scoped to its new project environment.

Benefits:

  • Enhanced security through isolation: Organizations can now enforce strict data boundaries at the project level, ensuring that sensitive infrastructure outputs are only visible to the teams and services that actually need them.

  • Operational efficiency at scale: By automating access within a project, platform teams no longer need to manually update workspace relationships. This "set and forget" approach significantly reduces the management overhead associated with large-scale state sharing.

  • Simplified governance: This feature unblocks the transition from complex multi-organization architectures to a more streamlined project-based model. This consolidation leads to better performance, easier reporting, and a more cohesive management experience for administrators.

We continue to recommend using the tfe_outputs data source in the HCP Terraform/Enterprise Provider to access remote state outputs in HCP Terraform or Terraform Enterprise. The tfe_outputs data source is more secure because it does not require full access to workspace state to fetch outputs.

To learn more, check out our Workspaces settings documentation.

Project-level notifications

Feature gap: Until now, platform teams managing large-scale infrastructure on HCP Terraform faced a significant operational hurdle when configuring alerts and observability. They were required to configure notification settings, such as Slack webhooks, PagerDuty triggers, or email alerts manually on a workspace-by-workspace basis.

For enterprises scaling self-service infrastructure, maintaining these individual configurations across hundreds or thousands of workspaces created a massive operational burden. This gap often led to "silent failures," situations where critical errors in newly provisioned environments went completely unnoticed by operations teams because the workspace creator forgot to configure the necessary alerts. Platform teams were forced to rely on brittle, custom-built audit scripts just to verify that their infrastructure was being monitored.

What's new: We are excited to announce the general availability (GA) of project-level notifications for HCP Terraform and Terraform Enterprise. This feature allows you to use projects as a centralized control plane to define your observability standards, ensuring that users have an effective, automated way to monitor their infrastructure at scale.

When a notification destination and trigger are configured in a project's settings, those alerts automatically cascade to every workspace within that project. This inheritance is dynamic: If a new "no-code" workspace is provisioned inside the project, it immediately inherits the project's alert settings. If a workspace is moved to a different project, its inherited notifications are automatically updated to match its new environment.

Benefits:

  • Enhanced reliability through "monitoring-by-default": Organizations can now enforce a strict observability baseline at the project level. This acts as an automated safety net, ensuring that no infrastructure is deployed without the proper alerts in place.

  • Operational efficiency at scale: By automating notification inheritance within a project, platform teams no longer need to manually configure workspaces or maintain external audit scripts. This "set and forget" approach significantly reduces the toil previously associated with large-scale incident management.

  • Simplified governance: This feature unblocks the ability to standardize incident routing based on environment or business unit. For example, you can guarantee that every workspace in the "production" project routes directly to the SRE PagerDuty service, leading to faster mean time to resolution (MTTR) and a more cohesive management experience for administrators.

To learn more about standardizing your observability workflows, you can check out the notifications settings in the HCP Terraform project notifications documentation.

Module testing for dynamic credentials

Feature gap: Until now, there was a significant security disconnect between deploying infrastructure and testing it. While HCP Terraform users could use Dynamic Credentials for standard plan-and-apply operations, the native Terraform test framework often required a fallback to static credentials.

To run integration tests that interacted with real cloud resources, module authors were frequently forced to manually seed "test-only" AWS keys or Azure service principal secrets into their module testing environments. This created a secondary tier of "shadow secrets" that were often less rotated and less monitored than production credentials, creating a weak link in the secure supply chain and increasing the friction for developers who wanted to write robust, automated tests.

What’s new: We are extending dynamic credentials support to the Terraform module testing suite. This update allows the native testing workflow to leverage the same OIDC-based trust relationships used in standard runs.

Whether you are testing a module’s behavior in AWS, Azure, or Google Cloud, or verifying its interaction with HCP Vault, the testing environment can now automatically exchange an OIDC token for temporary, short-lived credentials. This ensures that your testing lifecycle is just as secure and "secret-less" as your production deployment lifecycle.
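For cloud targets such as AWS, that exchange is standard OIDC federation: the run (or test) presents its workload identity token and receives short-lived credentials in return. The sketch below shows the equivalent call on the AWS side; the role ARN and token path are placeholders, and this is not HCP Terraform's internal implementation:

```python
import boto3

# Sketch of the OIDC-to-short-lived-credentials exchange that dynamic
# credentials automate. The role ARN and token path below are placeholders.
sts = boto3.client("sts")

with open("/path/to/workload_identity_token") as f:    # OIDC token issued for the run/test
    web_identity_token = f.read().strip()

resp = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/terraform-test-role",
    RoleSessionName="terraform-test",
    WebIdentityToken=web_identity_token,
    DurationSeconds=900,                                 # credentials expire shortly after the test
)

creds = resp["Credentials"]                              # AccessKeyId / SecretAccessKey / SessionToken
print(creds["Expiration"])
```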

Benefits:

  • Unified security across the lifecycle: You no longer need to manage two different authentication methods, one for deployments and one for testing. OIDC now covers the entire journey from terraform test to terraform apply.

  • Reduced developer friction: Developers can now write and run integration tests in HCP Terraform without the manual hurdle of requesting and configuring static "test" keys. Tests "just work" as long as the OIDC trust is established.

  • Ephemeral testing environments: Because credentials are generated on the fly for the duration of the test and expire immediately after, the risk of "orphaned" test credentials lingering in your environment is eliminated.

  • Consistent governance: Platform teams can now enforce a single, identity-based policy across the organization, ensuring that even experimental or test-phase infrastructure follows the principle of least privilege.

Next steps: To start using OIDC in your tests, ensure your workspace is configured for dynamic credentials and update your module's test files to leverage the provider's default authentication. You can find more details on integrating these features in the HCP Terraform testing documentation.

Registry tagging

Feature gap: Today, you can define tags as key-value pairs on your projects to organize them and track resource consumption, and a workspace automatically inherits its project’s tags. But until now, you couldn’t tag your registry artifacts (modules and providers), which slowed discovery and made it harder to associate artifacts with projects and workspaces for usage guidance. There was also no easy way to indicate the status of module and provider versions, such as “non-prod” or “prod.” This made choosing the appropriate version for downstream consumers challenging.

What's new: Today, we are excited to announce the public beta of registry tags, allowing platform teams to tag artifacts with project information and usage guidance. Project tags on registry artifacts, now available to all HCP Terraform users, and the new registry environment tags, available for HCP Terraform Standard and Premium, help downstream consumers choose artifacts quickly and accurately, without confusion or guesswork.

For example, a new module version with a “non-prod” tag indicates it should be used in non-production environments. After successful testing, teams that develop modules can change the tag to “prod” (or simply add the “prod” tag) to indicate that the version is approved for production environments. By doing so, the downstream consumers are clear about the appropriate version for their environment to use, and teams that develop modules can speed up module testing and promotion.

Benefits:

  • Enhanced security: Platform teams can distinguish the approved artifacts and artifact versions easily based on the tags, decreasing security risks caused by using the wrong artifacts and artifact versions. Promoting artifact versions with tag assignment from one environment designation to the next allows proper testing with deployments before a new version is deemed appropriate for production.

  • Operational efficiency at scale: Registry tags enable faster artifact discovery for use in the proper projects. Users can filter the registry on their project’s or environment’s tags to find the right options quickly.

  • Simplified governance: By tagging artifacts with your preexisting project tags, you extend the same classification categories you use today to your modules and providers, encouraging users to choose the artifacts that share tags with their projects.

To learn more, you can check out our Managing registry tags documentation.

Get started with HCP Terraform and Terraform Enterprise

You can try many of these new features now. If you are new to Terraform, sign up for an HCP account to get started today, and also check out our tutorials. HCP Terraform includes a $500 credit that allows users to quickly get started using features from any plan, including HCP Terraform Premium. Contact our sales team if you’re interested in trying our self-managed offering: Terraform Enterprise.



from HashiCorp Blog https://ift.tt/6q5OlED
via IFTTT

New TrickMo Variant Uses TON C2 and SOCKS5 to Create Android Network Pivots

Cybersecurity researchers have flagged a new version of the TrickMo Android banking trojan that uses The Open Network (TON) for command-and-control (C2).

The new variant, observed by ThreatFabric between January and February 2026, has been observed actively targeting banking and cryptocurrency wallet users in France, Italy, and Austria.

"TrickMo relies on a runtime-loaded APK  (dex.module), used also by the previous variant, but updated with new features adding new network-oriented functionality, including reconnaissance, SSH tunnelling, and SOCKS5 proxying capabilities that allow infected devices to function as programmable network pivots and traffic-exit nodes," the Dutch mobile security company said in a report shared with The Hacker News.

TrickMo is the name assigned to device takeover (DTO) malware that has been active in the wild since late 2019. It was first flagged by CERT-Bund and IBM X-Force, which described its ability to abuse Android's accessibility services to hijack one-time passwords (OTPs).

It's also equipped with a wide range of features to phish for credentials, log keystrokes, record the screen, facilitate live screen streaming, and intercept SMS messages, essentially granting the operator complete remote control of the device.

The latest versions, labeled TrickMo C, are distributed via phishing websites and dropper apps, the latter of which serve as a conduit for a dynamically loaded APK ("dex.module") that's retrieved at runtime from attacker-controlled infrastructure. A notable shift in the architecture entails the use of the TON decentralized blockchain for stealthy C2 communications.

"TrickMo carries an embedded native TON proxy that the host APK starts on a loopback port at process start," ThreatFabric said. "The bot's HTTP client is wired through that proxy, so every outbound command-and-control request is addressed to an .adnl hostname and resolved through the TON overlay."

Dropper apps containing the malware masquerade as adult versions of TikTok, whereas the actual malware impersonates Google Play Services -

  • com.app16330.core20461 or com.app15318.core1173 (Dropper)
  • uncle.collop416.wifekin78 or nibong.lida531.butler836 (TrickMo)

While previous iterations of "dex.module" implemented the accessibility-driven remote control functionality through a socket.io-based channel, the new version utilizes a network-operative subsystem that turns the malware into more of a managed-foothold tool than a traditional banking trojan.

The subsystem supports commands like curl, dnslookup, ping, telnet, and traceroute, giving the attacker a "remote shell-equivalent for network reconnaissance from the victim's network position, including any internal corporate or home network the device is currently associated with," per ThreatFabric.

Another important feature is a SOCKS5 proxy that turns the compromised device into a network exit node that routes malicious traffic, while defeating IP-based fraud-detection signatures on banking, e-commerce and cryptocurrency exchange services.

Furthermore, TrickMo includes two dormant features that bundle the Pine hooking framework and declare extensive NFC-related permissions, but neither is actually implemented. This likely indicates the core developers are looking to expand the trojan's capabilities in the future.

"Instead of relying on conventional DNS and public internet infrastructure, the malware communicates through .adnl endpoints routed via an embedded local TON proxy, reducing the effectiveness of traditional takedown and network-blocking efforts while making the traffic blend with legitimate TON activity," ThreatFabric said.

"This latest variant also expands the operational role of infected devices through SSH tunnelling and authenticated SOCKS5 proxying, effectively turning compromised phones into programmable network pivots and traffic-exit nodes whose connections originate from the victim’s own network environment."



from The Hacker News https://ift.tt/zo0Y2ip
via IFTTT

Veeam High Availability Cluster: failover and automation – Part 2

After creating a Veeam High Availability Cluster, the next step is to verify how the environment behaves when the primary node becomes unavailable. In this part of the series, we walk through a simulated primary node failure and show how to perform a manual failover to the secondary node.

The article then explains how to use Veeam ONE to monitor the HA cluster state, detect primary node communication issues, and configure action handling for a faster recovery workflow. This helps reduce downtime for the Veeam Backup & Replication server and gives administrators better visibility into HA cluster events.

Primary node failure

To test whether the Veeam High Availability Cluster works as expected, you need to take the Primary Node offline by simulating a failure.

From the vSphere Client, power off the Primary Node of the Veeam High Availability Cluster: right-click the Primary Node and select Power > Power off.

Verify that the connection is lost in the Veeam Backup & Replication Console.

Manual failover of the Veeam High Availability Cluster

Once the Primary Node has failed, close and re-open the Veeam Console. The failover process takes about 10 minutes to complete.

Enter the IP address of the HA Cluster and click Connect. Note that you cannot initiate a failover using the cluster DNS name.

A new certificate thumbprint is detected. Click Yes to trust the server.

The system detects that the Primary Node has failed. Click Connect to connect to the Secondary Node.

Enter the credentials to log in to the Secondary Node and click Sign in.

Click Failover.

The system attempts to connect to the cluster through the Secondary Node.

The Veeam Console now opens, showing a warning about the missing Secondary Node in the HA Cluster: the previously configured Secondary Node has assumed the Primary Node role after the failover.

Automate failover operation with Veeam ONE

Although manual failover is a simple task to perform, an automated failover process can be a better and more efficient option for maintaining protection of mission-critical VMs.

To automate the failover operation for a Veeam High Availability Cluster, Veeam ONE must be installed and configured in your infrastructure.

Enable monitoring in the Veeam appliance

To configure automated failover, you need to register the two Veeam 13 Appliances in Veeam ONE. To allow correct registration with Veeam ONE, you need to enable Data Collection from the Veeam Host Management Console.

Using your preferred browser, open the address https://<Veeam_Primary_Node>:10443. Enter the veeamadmin credentials and click Sign in.

Go to Backup Infrastructure and click Submit Request in the Data Collection section.

Click OK.

The request is placed in the Waiting for approval status. Click veeamadmin > Sign out to log off the current user.

Enter the veeamso (security officer) credentials and click Sign in.

Select the pending request and click Approve.

Once the request has been approved, click veeamso > Sign out.

Log in again with the veeamadmin account and verify that the request is displayed as Request Approved.

Before proceeding with the configuration of Veeam ONE, repeat the same procedure for the Secondary Node.

Register Veeam Nodes in Veeam ONE

Access the Veeam ONE server and log in to the Veeam ONE Web Client.

In the Overview tab, click Data Source > Add server.

Select Veeam Backup & Replication.

Enter the DNS name or IP address of the Primary Node, then click Next.

Click Add credentials to specify the user for authenticating against the Primary Node.

Select Standard account.

Specify the veeamadmin credentials and click Finish.

Make sure veeamadmin is selected in the Credentials drop-down menu and click Next.

Click Trust and Continue to accept the certificate.

Click Finish to register the Primary Node.

The system detects the node members of the cluster and starts installing the required agents.

After a few seconds, the agents are installed successfully.

Configure automated failover

Now log in to the Veeam ONE Client and click Connect to proceed with the configuration of the automated Veeam High Availability Cluster failover.

Go to the Alert Management area and select Veeam Backup & Replication. In the Filter box, type ha cluster to filter the HA cluster alarms.

Right-click Veeam Backup & Replication HA cluster primary node state and select Edit.

Navigate to the Action section and select Automatic in the Execution type drop-down menu. Click Save to save the configuration.

Test automated failover

From the vSphere Client, power off the Veeam Primary Node (veeam-v13sa in the example).

The Veeam Console loses its connection with the Primary Node.

In the Veeam ONE Client, go to the Veeam Backup & Replication section and click the grayed-out Primary Node. In the Alarms tab of the right pane, the failure of the node is reported.

The system requires about 10 minutes to complete the automated failover operation. When the failover is complete, the Secondary Node assumes the role of Primary Node, restoring the Veeam Backup Server functionality.

Leveraging Veeam ONE’s capability to trigger an automated failover when the Primary Node fails allows you to maintain maximum efficiency of the Veeam Backup Server, limiting the service outage to just a few minutes.



from StarWind Blog https://ift.tt/2NQGqeS
via IFTTT