Thursday, February 19, 2026

What will knowledge work be in 18 months? Look at what AI is doing to coding right now.

There’s a lot of buzz about something big happening in software engineering thanks to the latest batch of AI models. However, most knowledge workers think this is just a “coding thing” which doesn’t apply to them. They’re wrong.

Dan Shapiro, Glowforge CEO and Wharton Research Fellow, recently published a five-level framework that maps levels of AI assistance for coding, from simple searches all the way to a “dark factory,” where AI is essentially just a black box that turns specs into software.

I want to walk through those five levels, because I think this pattern also applies to knowledge work, and we knowledge workers are not far behind coders in this regard.

Shapiro’s five levels of AI use in coding

(I’ve condensed but mostly used his words here)

  • Level 0: AI is spicy autocomplete. You’re doing manual coding and not a character hits the disk without your approval. You might use AI as a super search engine, or occasionally accept a suggestion, but the code is unmistakably yours.
  • Level 1: AI is a coding intern. You offload discrete tasks to AI. “Write a unit test.” “Add a docstring.” You’re seeing speedups, but you’re still moving at the rate you type.
  • Level 2: AI is a junior developer. You’re a “pair programmer” with AI and now have a junior buddy to hand off all your boring stuff to. You code in a flow state and are more productive than you’ve ever been. Shapiro says 90% of “AI-native” developers are here, and the danger is that from Level 2 onward, coders feel like they’ve maxed out and they’re done. But they’re not.
  • Level 3: AI is a developer. You’re not the developer anymore. (That’s your AI’s job.) You’re the manager. You’re the human in the loop. Your coding agent is always running in multiple tabs, and you spend your days reviewing code and changes. For many people, this feels like things got worse. Almost everyone tops out here.
  • Level 4: AI is an engineering team. Now you’re not even a developer manager, you’re a product manager. You write specs, argue with the AI about specs, craft skills, plan schedules, then leave for 12 hours and check if the tests pass. (Shapiro says he’s here.)
  • Level 5: AI is a dark software factory. You’re the engineering manager who sets the goals of the system in plain English. The AI defines implementation, writes code, tests, fixes bugs, and ships. It’s not really a software process anymore. It’s a black box that turns specs into software.

AI + coding future thinking also applies to AI + knowledge work

Nate B. Jones covers the AI-and-software-engineering beat better than almost anyone. His YouTube videos are “required watching” for me. I realized recently that everything he says about how AI is impacting software engineering also applies to AI impacting knowledge workers. For example, here are some quotes of his from recent videos about coding that apply verbatim to knowledge workers using AI:

“The bottleneck has shifted. You are now the manager of however many agents you can keep track of productively. Your productive capacity is limited now only by your attention span and your ability to scope tasks well.”

“These are supervision problems, not capability problems. And the solution isn’t to do the work yourself. It’s to get better at your management skills.”

If we do some simple Mad Libs style find-and-replace, Nate’s also a pretty good “future of work” strategist! Just swap out:

  • code → deliverables / work product / output
  • engineer → knowledge worker
  • technical leader → business leader
  • implementation → producing deliverables
  • system → outcome
  • tests → success criteria
  • codebase → work stream
  • syntax → formatting

Let’s try that on some more quotes from his recent videos:

“It’s less time writing [code → deliverables]. It’s much more time defining what you want. It’s much more time evaluating whether you got there.”

“Most [engineers → knowledge workers] have spent years developing their intuitions around [implementation → producing deliverables] and those are now not super useful. The new skill is describing the [system → outcome] precisely enough that AI can build it, and then writing [tests → success criteria] that capture what you actually need, and reviewing AI-generated [code → output] for subtle conceptual errors rather than simple [syntax → formatting] mistakes.”

“If you’re not thinking through what you want done, the speed can lead you to very quickly build a giant pile of [code → work product] that’s not very useful. That is a superpower that everyone has been handed for better or worse and we are about to see who is actually able to think well.”

“We need to think as [technical → business] leaders about where [engineers → knowledge workers] should stand in relation to the [code → AI-generated output] based on the risk profile of that [codebase → work stream] itself.”

That last one illustrates the power of this perfectly. The concept applies to software engineering, but I never would have thought about it in the context of knowledge work. Yet it 100% applies there as well. Which strategic deliverables require human review, and which can you trust to the Dark Knowledge Factory?

The five levels of AI use in knowledge work

Now let’s take Shapiro’s five levels of AI use in coding and translate them to knowledge work. (Some of these loosely map to my own 7-stage roadmap for human-AI collaboration in the workplace from six months ago, though Shapiro’s levels address the relationship between humans and AI, whereas I focused on the mechanics of the collaboration.)

Putting Shapiro’s coding levels through our Mad Libs code-to-knowledge work translator:

  • Level 0: AI is a spicy search engine. You’re doing the knowledge work and not a word hits the page without your approval. You might use AI as a super search engine, or occasionally accept a suggested sentence, but the deliverable is unmistakably yours. This is most enterprise knowledge workers today.
  • Level 1: AI is a research intern. You offload discrete tasks to AI. “Summarize this document.” “Draft a response to this email.” You’re seeing speedups, but you’re still moving at the rate you type. You’re still the one producing the deliverable. This is most people’s experience with Office Copilot.
  • Level 2: AI is a junior analyst. You’re “pair working” with AI and now have a junior buddy to hand off all your boring stuff to. You’re in a flow state and more productive than you’ve ever been. Workers at this level use persistent AI collaboration spaces, like Google NotebookLM, Claude Projects, or Copilot Notebooks. Like their coding counterparts, knowledge workers at Level 2 and every level after it feel like they’ve maxed out and they’re done. But they haven’t.
  • Level 3: AI is an analyst. You’re not the one producing work anymore. (That’s your AI’s job.) You’re the manager. You’re the human in the loop. Your AI is always running and you spend your days reviewing and editing everything it generates. Strategy decks, market analyses, competitive intelligence, communications. Your life is tracked changes. For some workers, this feels like things got worse. Almost everyone tops out here. This is where workers using a personal AI knowledge system / “second brain” are. This is where I am.
  • Level 4: AI is a strategy team. Now you’re not even a manager, you’re a director. You don’t write deliverables or even review them line by line. You write specs for deliverables. You define what a good competitive analysis looks like, what the acceptance criteria are, and what scenarios it needs to handle. You craft the prompts, system instructions, and the evaluation rubrics. Then you walk away and check if the output passes your scenarios.
  • Level 5: AI is a dark knowledge factory. You are the executive who sets the goals of the organization in plain English. The AI defines the approach, produces deliverables, evaluates quality, iterates, and ships. It’s not really a work process anymore. It’s a black box that turns business intent into business outcomes. A handful of people run what used to be an entire analyst function. The verification framework is the intellectual property, not the reports themselves.

But how do you know the AI’s work is any good?

I feel like I can follow the analogy through Level 3, but Levels 4 and 5 seem weird to me, and it’s hard to see exactly how they would apply to knowledge work. (Heh, it’s funny: I’m personally at Level 3, and as Shapiro wrote, everyone past Level 2 thinks whatever level they’re at is the top.)

The hardest question at Levels 4 and 5 is the same whether you’re writing code or strategy memos: how do you verify the output without a human reviewing every piece?

In code, the answer turned out to be end-to-end behavioral tests stored separately from the codebase (so the AI can’t cheat). For knowledge work, I think it maps to something like:

  • You define what “good” looks like (for a strategy recommendation, a presentation, etc.) and you deliberately keep those separate from the AI so it can’t game the criteria. These need to be real and deep things, like “Does this account for the competitor’s likely response? Does this identify second-order effects? Would the CFO approve this?”
  • Then once the main AI generates the content, a different AI uses your verification docs and is prompted to act as a skeptical board member, a hostile competitor, or a regulatory lawyer, and tries to find flaws. This way, the verification loop isn’t human; it’s AI verifying AI against criteria that humans defined.
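
To make the shape of that loop concrete, here is a minimal Python sketch. Everything in it is hypothetical: `generate_deliverable()` and `verify()` stand in for calls to two separate models, and the criteria list lives outside the producer model’s context so it can’t game them.

```python
# Hypothetical sketch of the "AI verifies AI" loop described above.
# generate_deliverable() and verify() are stand-ins for calls to two
# *separate* models; a real verify() would parse the verifier's judgment.

VERIFICATION_CRITERIA = [
    "Does this account for the competitor's likely response?",
    "Does this identify second-order effects?",
    "Would the CFO approve this?",
]

def generate_deliverable(spec: str) -> str:
    """Placeholder for the 'producer' model call."""
    return f"Draft strategy for: {spec}"

def verify(deliverable: str, criterion: str, persona: str) -> bool:
    """Placeholder for the 'verifier' model call, prompted to attack the
    draft as a skeptical persona. This stub approves everything."""
    return True

def verification_loop(spec: str, max_rounds: int = 3) -> tuple[str, list[str]]:
    """Generate a draft, have the verifier attack it against the criteria,
    and regenerate until no criterion fails (or rounds run out)."""
    draft = generate_deliverable(spec)
    failures: list[str] = []
    for _ in range(max_rounds):
        failures = [c for c in VERIFICATION_CRITERIA
                    if not verify(draft, c, persona="skeptical board member")]
        if not failures:
            break
        # Feed the unresolved criticisms back into the next draft.
        draft = generate_deliverable(f"{spec} Also address: {'; '.join(failures)}")
    return draft, failures
```

The important design choice is that the criteria and the verifier sit outside the producing model, mirroring how coding teams keep end-to-end tests outside the codebase.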

(There will be a lot of interesting work done here in the next year!)

AI’s true impact to knowledge work is just beginning

As I wrote in the opening, most talk about AI’s impact today is focused on software engineering. But coding was the beachhead, not the destination. Software was first because code has built-in verification layers, specific syntax, and billions of pages on the internet about how to write good code.

Knowledge work is next, but the timeline will be more compressed. (We have better AI now and lots of lessons from the software world.) If frontier coding teams are at Level 4-5 today while frontier knowledge workers are only at Level 1-2, a pretty good way to know what knowledge work looks like in 18 months is to look at what coders are doing right now.

As we progress towards this future, remember that the bottleneck keeps moving. At Level 1 it’s, “how fast can you produce work?” At Level 4 it’s, “how precisely can you specify what should exist?” By Level 5 it’s, “how rigorously can you verify that it’s good?” Level 5 of knowledge work will introduce a governance problem that nobody has a playbook for yet. Who owns the specs? Who defines the verifications? Who’s making sure the Dark Knowledge Factory isn’t producing hallucinated strategy recommendations that look right but fall apart under scrutiny?

Most enterprises don’t have the governance infrastructure for any of this, and it’s coming whether they’re ready or not.


Read more & connect

Join the conversation and discuss this post on LinkedIn. You can find all my posts on my author page (or via RSS).



from Citrix Blogs https://ift.tt/yGRPxQp
via IFTTT

ThreatsDay Bulletin: OpenSSL RCE, Foxit 0-Days, Copilot Leak, AI Password Flaws & 20+ Stories

  1. Privacy model hardening

    Google announced the first beta version of Android 17, with two privacy and security enhancements: the deprecation of Cleartext Traffic Attribute and support for HPKE Hybrid Cryptography to enable secure communication using a combination of public key and symmetric encryption (AEAD). "If your app targets (Android 17) or higher and relies on usesCleartextTraffic='true' without a corresponding Network Security Configuration, it will default to disallowing cleartext traffic," Google said. "You are encouraged to migrate to Network Security Configuration files for granular control."

  2. RaaS expands cross-platform reach

    A new analysis of the LockBit 5.0 ransomware has revealed that the Windows version packs in various defense evasion and anti-analysis techniques, including packing, DLL unhooking, process hollowing, patching Event Tracing for Windows (ETW) functions, and log clearing. "What's notable among the multiple systems support is its proclaimed capability to 'work on all versions of Proxmox,'" Acronis said. "Proxmox is an open-source virtualization platform and is being adopted by enterprises as an alternative to commercial hypervisors, which makes it another prime target of ransomware attacks." The latest version also introduces dedicated builds tailored for enterprise environments, highlighting the continued evolution of ransomware-as-a-service (RaaS) operations.

  3. Mac users lured via nested obfuscation

    Cybersecurity researchers have detailed a new evolution of the ClickFix social engineering tactic targeting macOS users. "Dubbed Matryoshka due to its nested obfuscation layers, this variant uses a fake installation/fix flow to trick victims into executing a malicious Terminal command," Intego said. "While the ClickFix tactic is not new, this campaign introduces stronger evasion techniques — including an in-memory, compressed wrapper and API-gated network communications — designed to hinder static analysis and automated sandboxes." The campaign primarily targets users attempting to visit software review sites, leveraging typosquatting in the URL name to redirect them to fake sites and activate the infection chain.

  4. Loader pipeline drives rapid domain takeover

    Another new ClickFix campaign detected in February 2026 has been observed delivering a malware-as-a-service (MaaS) loader known as Matanbuchus 3.0. Huntress, which dissected the attack chain, said the ultimate objective of the intrusion was to deploy ransomware or exfiltrate data based on the fact that the threat actor rapidly progressed from initial access to lateral movement to domain controllers via PsExec, rogue account creation, and Microsoft Defender exclusion staging. The attack also led to the deployment of a custom implant dubbed AstarionRAT that supports 24 commands to facilitate credential theft, SOCKS5 proxy, port scanning, reflective code loading, and shell execution. According to data from the cybersecurity company, ClickFix fueled 53% of all malware loader activity in 2025.

  5. Typosquat chain targets macOS credentials

    In yet another ClickFix campaign, threat actors are relying on the "reliable trick" to host malicious instructions on fake websites disguised as Homebrew ("homabrews[.]org") to trick users into pasting them on the Terminal app under the pretext of installing the macOS package manager. In the attack chain documented by Hunt.io, the commands in the typosquatted Homebrew domain are used to deliver a credential-harvesting loader and a second-stage macOS infostealer dubbed Cuckoo Stealer. "The injected installer looped on password prompts using 'dscl . -authonly,' ensuring the attacker obtained working credentials before deploying the second stage," Hunt.io said. "Cuckoo Stealer is a full-featured macOS infostealer and RAT: It establishes LaunchAgent persistence, removes quarantine attributes, and maintains encrypted HTTPS command-and-control communications. It collects browser credentials, session tokens, macOS Keychain data, Apple Notes, messaging sessions, VPN and FTP configurations, and over 20 cryptocurrency wallet applications." The use of "dscl . -authonly" has been previously observed in attacks deploying Atomic Stealer.

  6. Phobos affiliate detained in Europe

    Authorities from Poland's Central Bureau for Combating Cybercrime (CBZC) have detained a 47-year-old man over suspected ties to the Phobos ransomware group. He faces a potential prison sentence of up to five years. The CBZC said the "47-year-old used encrypted messaging to contact the Phobos criminal group, known for conducting ransomware attacks," adding the suspect's devices contained logins, passwords, credit card numbers, and server IP addresses that could have been used to launch "various attacks, including ransomware." The arrest is part of Europol's Operation Aether, which targets the 8Base ransomware group, believed to be linked to Phobos. It has been almost exactly a year since international law enforcement dismantled the 8Base crew. More than 1,000 organizations around the world have been targeted in Phobos ransomware attacks, and the cybercriminals are believed to have obtained over $16 million in ransom payments.

  7. Industrial ransomware surge accelerates

    There has been a sharp rise in the number of ransomware groups targeting industrial organizations as cybercriminals continue to exploit vulnerabilities in operational technology (OT) and industrial control systems (ICS), Dragos warned. A total of 119 ransomware groups targeting industrial organizations were tracked during 2025, a 49% increase from the 80 tracked in 2024. 2025 saw 3,300 industrial organizations around the world hit by ransomware, compared with 1,693 in 2024. The most targeted sector was manufacturing, followed by transportation. In addition, a hacking group tracked as Pyroxene has been observed conducting "supply chain-leveraged attacks targeting defense, critical infrastructure, and industrial sectors, with operations expanding from the Middle East into North America and Western Europe." It often leverages initial access provided by PARISITE to enable movement from IT into OT networks. Pyroxene overlaps with activity attributed to Imperial Kitten (aka APT35), a threat actor affiliated with the cyber arm of the Islamic Revolutionary Guard Corps (IRGC).

  8. Copilot bypassed DLP safeguards

    Microsoft confirmed a bug (CW1226324) that let Microsoft 365 Copilot summarize confidential emails from Sent Items and Drafts folders since January 21, 2026, without users' permission, bypassing data loss prevention (DLP) policies put in place to safeguard sensitive data. A fix was deployed by the company on February 3, 2026. However, the company did not disclose how many users or organizations were affected. "Users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat," Microsoft said. "The Microsoft 365 Copilot 'work tab' Chat is summarizing email messages even though these email messages have a sensitivity label applied, and a DLP policy is configured. A code issue is allowing items in the sent items and draft folders to be picked up by Copilot even though confidential labels are set in place."

  9. Jira trials weaponized for spam

    Threat actors are abusing the trust and reputation associated with Atlassian Jira Cloud and its connected email system to run automated spam campaigns and bypass traditional email security. To accomplish this, the operators created Atlassian Cloud trial accounts using randomized naming conventions, allowing them to generate disposable Jira Cloud instances at scale. "Emails were tailored to target specific language groups, targeting English, French, German, Italian, Portuguese, and Russian speakers — including highly skilled Russian professionals living abroad," Trend Micro said. "These campaigns not only distributed generic spam, but also specifically targeted sectors such as government and corporate entities." The attacks, active from late December 2025 through late January 2026, primarily targeted organizations using Atlassian Jira. The goal was to get recipients to open the emails and click on malicious links, which would initiate a redirect chain powered by the Keitaro Traffic Distribution System (TDS) and then finally lead them to pages peddling investment scams and online casino landing sites, suggesting that financial gain was likely the main objective.

  10. GitLab SSRF now federally mandated patch

    The U.S. Cybersecurity and Infrastructure Security Agency (CISA), on February 18, 2026, added CVE-2021-22175 to its Known Exploited Vulnerabilities (KEV) catalog, requiring Federal Civilian Executive Branch (FCEB) agencies to apply the patch by March 11, 2026. "GitLab contains a server-side request forgery (SSRF) vulnerability when requests to the internal network for webhooks are enabled," CISA said. In March 2025, GreyNoise revealed that a cluster of about 400 IP addresses was actively exploiting multiple SSRF vulnerabilities, including CVE-2021-22175, to target susceptible instances in the U.S., Germany, Singapore, India, Lithuania, and Japan.

  11. Telegram bots fuel Fortune 500 phishing

    An elusive, financially motivated threat actor dubbed GS7 has been targeting Fortune 500 companies in a new phishing campaign that leverages trusted company branding with lookalike websites aimed at harvesting credentials via Telegram bots. The campaign, codenamed Operation DoppelBrand, targets top financial institutions, including Wells Fargo, USAA, Navy Federal Credit Union, Fidelity Investments, and Citibank, as well as technology, healthcare, and telecommunications firms worldwide. Victims are lured through phishing emails and redirected to counterfeit pages where credentials are harvested and transmitted to Telegram bots controlled by the attacker. According to SOCRadar, the group itself, however, has a history stretching back to 2022. The threat actor is said to have registered more than 150 malicious domains in recent months using registrars such as NameCheap and OwnRegistrar, and routing traffic through Cloudflare to evade detection. GS7's end goals include not only harvesting credentials, but also downloading remote management and monitoring (RMM) tools like LogMeIn Resolve on victim systems to enable remote access or the deployment of malware. This has raised the possibility that the group may even act as an initial access broker (IAB), selling the access to ransomware groups or other affiliates.

  12. Remcos shifts to live C2 surveillance

    Phishing emails disguised as invoices, job offers, or government notices are being used to distribute a new variant of Remcos RAT to facilitate comprehensive surveillance and control over infected systems. "The latest Remcos variant has been observed exhibiting a significant change in behaviour compared to previous versions," Point Wild said. "Instead of stealing and storing data locally on the infected system, this variant establishes direct online command-and-control (C2) communication, enabling real-time access and control. In particular, it leverages the webcam to capture live video streams, allowing attackers to monitor targets remotely. This shift from local data exfiltration to live, online surveillance represents an evolution in Remcos’ capabilities, increasing the risk of immediate espionage and persistent monitoring."

  13. China-made vehicles restricted on bases

    Poland's Ministry of Defence has banned Chinese cars, and other motor vehicles equipped with technology to record position, images, or sound, from entering protected military facilities due to national security concerns and to "limit the risk of access to sensitive data." The ban also extends to connecting work phones to infotainment systems in motor vehicles produced in China. The ban isn't permanent: the Defence Ministry has called for the development of a vetting process to allow carmakers to undergo a security assessment that, if passed, can allow their vehicles to enter protected facilities. "Modern vehicles equipped with advanced communication systems and sensors can collect and transmit data, so their presence in protected zones requires appropriate safety regulations," the Polish Army said. "The measures introduced are preventive and comply with the practices of NATO countries and other allies to ensure the highest standards of defense infrastructure protection. They are part of a wider process of adapting security procedures to the changing technological environment and current requirements for the protection of critical infrastructure."

  14. DKIM replay fuels invoice scams

    Bad actors are abusing legitimate invoices and dispute notifications from trusted vendors, such as PayPal, Apple, DocuSign, and Dropbox Sign (formerly HelloSign), to bypass email security controls. "These platforms often allow users to enter a 'seller name' or add a custom note when creating an invoice or notification," Casey-owned INKY said. "Attackers abuse this functionality by inserting scam instructions and a phone number into those user-controlled fields. They then send the resulting invoice or dispute notice to an email address they control, ensuring the malicious content is embedded in a legitimate, vendor-generated message." Because these emails originate from a legitimate company, they bypass checks like Domain-based Message Authentication, Reporting and Conformance (DMARC). As soon as the legitimate email is received, the attacker proceeds to forward it to the intended targets, allowing the "authentic looking" message to land in the victims' inboxes. The attack is known as a DKIM replay attack.

  15. RMM abuse surges 277%

    A new report from Huntress has revealed that the abuse of Remote Monitoring and Management (RMM) software surged 277% year-over-year, accounting for 24% of all observed incidents. Threat actors have begun to increasingly favor these tools because they are ubiquitous in enterprise environments, and the trusted nature of the RMM software allows malicious activity to blend in with legitimate usage, making detection harder for defenders. They also offer increased stealth, persistence, and operational efficiency. "As cybercriminals built entire playbooks around these legitimate, trusted tools to drop malware, steal credentials, and execute commands, the use of traditional hacking tools plummeted by 53%, while remote access trojans and malicious scripts dropped by 20% and 11.7%, respectively," the company said.

  16. Texas targets China-linked tech firms

    Texas Attorney General Ken Paxton has sued TP-Link for "deceptively marketing its networking devices and allowing the Chinese Communist Party ('CCP') to access American consumers' devices in their homes." Paxton's lawsuit alleges that TP-Link's products have been used by Chinese hacking groups to launch cyber attacks against the U.S. and that the company is subject to Chinese data laws, which it said require firms operating in the country to support its intelligence services by "divulging Americans' data." In a second lawsuit, Paxton also accused Anzu Robotics of misleading Texas consumers about the "origin, data practices, and security risks of its drones." Paxton's office described the company's products as a "21st century Trojan horse linked to the CCP."

  17. MetaMask backdoor expands DPRK campaign

    The North Korea-linked campaign known as Contagious Interview is designed to target IT professionals working in cryptocurrency, Web3, and artificial intelligence sectors to steal sensitive data and financial information using malware such as BeaverTail and InvisibleFerret. However, recent iterations of the campaign have expanded their data theft capabilities by tampering with the MetaMask wallet extension (if it's installed) through a lightweight JavaScript backdoor that shares the same functionality as InvisibleFerret, according to security researcher Seongsu Park. "Through the backdoor, attackers instruct the infected system to download and install a fake version of the popular MetaMask cryptocurrency wallet extension, complete with a dynamically generated configuration file that makes it appear legitimate," Park said. "Once installed, the compromised MetaMask extension silently captures the victim's wallet unlock password and transmits it to the attackers’ command-and-control server, giving them complete access to cryptocurrency funds."

  18. Booking.com kits hit hotels, guests

    Bridewell has warned of a resurgence in malicious activity targeting the hotel and retail sector. "The primary motivation driving this incident is financial fraud, targeting two victims: hotel businesses and hotel customers, in sequential order," security researcher Joshua Penny said. "The threat actor(s) utilize impersonation of the Booking.com platform through two distinct phishing kits dedicated to harvesting credentials and banking information from each victim, respectively." It's worth noting that the activity shares overlap with a prior activity wave disclosed by Sekoia in November 2025, although the use of a dedicated phishing kit is a new approach by either the same or new operators.

  19. EPMM exploits enable persistent access

    The recently disclosed security flaws in Ivanti Endpoint Manager Mobile (EPMM) have been exploited by bad actors to establish a reverse shell, deliver JSP web shells, conduct reconnaissance, and download malware, including Nezha, cryptocurrency miners, and backdoors for remote access. The two critical vulnerabilities, CVE-2026-1281 and CVE-2026-1340, allow unauthenticated attackers to remotely execute arbitrary code on target servers, granting them full control over mobile device management (MDM) infrastructure without requiring user interaction or credentials. According to Palo Alto Networks Unit 42, the campaign has affected state and local government, healthcare, manufacturing, professional and legal services, and high technology sectors in the U.S., Germany, Australia, and Canada. "Threat actors are accelerating operations, moving from initial reconnaissance to deploying dormant backdoors designed to maintain long-term access even after organizations apply patches," the cybersecurity company said. In a related development, Germany's Federal Office for Information Security (BSI) has reported evidence of exploitation since the summer of 2025 and has urged organizations to audit their systems for indicators of compromise (IoCs) as far back as July 2025.

  20. AI passwords lack true randomness

    New research by Irregular has found that passwords generated directly by a large language model (LLM) may appear strong but are fundamentally insecure, as "LLMs are designed to predict tokens – the opposite of securely and uniformly sampling random characters." The artificial intelligence (AI) security company said it detected LLM-generated passwords in the real world as part of code development tasks instead of leaning on traditional secure password generation methods. "People and coding agents should not rely on LLMs to generate passwords," the company said. "LLMs are optimized to produce predictable, plausible outputs, which is incompatible with secure password generation. AI coding agents should be directed to use secure password generation methods instead of relying on LLM-output passwords. Developers using AI coding assistants should review generated code for hardcoded credentials and ensure agents use cryptographically secure methods or established password managers."
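
As a minimal illustration of the alternative the researchers recommend, Python's standard-library `secrets` module does the uniform CSPRNG sampling that an LLM, by design, cannot (the function name and character set below are just an example):

```python
import secrets
import string

def secure_password(length: int = 20) -> str:
    """Sample each character uniformly from a CSPRNG (the `secrets`
    module), rather than asking an LLM to predict plausible tokens."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

Established password managers accomplish the same thing; the point is that the randomness must come from a cryptographic source, not from next-token prediction.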

  21. PDF engine flaws enable account takeover

    Cybersecurity researchers have discovered more than a dozen vulnerabilities (CVE-2025-70401, CVE-2025-70402, and CVE-2025-66500) in popular PDF platforms from Foxit and Apryse, potentially allowing attackers to exploit them for account takeover, session hijacking, data exfiltration, and arbitrary JavaScript execution. "Rather than isolated bugs, the issues cluster around recurring architectural failures in how PDF platforms handle untrusted input across layers," Novee Security researchers Lidor Ben Shitrit, Elad Meged, and Avishai Fradlis said. "Several vulnerabilities were exploitable with a single request and affected trusted domains commonly embedded inside enterprise applications." The issues have been addressed by both Apryse and Foxit through product updates.

  22. Training labs expose cloud backdoors

    A "widespread" security issue has been discovered where security vendors inadvertently expose deliberately vulnerable training applications, such as OWASP Juice Shop, DVWA, bWAPP, and Hackazon, to the public internet. This can open organizations to severe security risks when they are executed from a privileged cloud account. "Primarily deployed for internal testing, product demonstrations, and security training, these applications were frequently left accessible in their default or misconfigured states," Pentera Labs said. "These critical flaws not only allowed attackers full control over the compromised compute engine but also provided pathways for lateral movement into sensitive internal systems. Violations of the principle of least privilege and inadequate sandboxing measures further facilitated privilege escalation, endangering critical infrastructure and sensitive organizational data." Further analysis has determined that threat actors are exploiting this blind spot to plant web shells, cryptocurrency miners, and persistence mechanisms on compromised systems.

  23. Evasion loader refines C2 stealth

    The malware loader known as Oyster (aka Broomstick or CleanUpLoader) has continued to evolve into early 2026, fine-tuning its C2 infrastructure and obfuscation methods, per findings from Sekoia. The malware is distributed mainly through fake websites that distribute installers for legitimate software like Microsoft Teams, with the core payload often deployed as a DLL for persistent execution. "The initial stage leverages excessive legitimate API call hammering and simple anti-debugging traps to thwart static analysis," the company said. "The core payload is delivered in a highly obfuscated manner. The final stage implements a robust C2 communication protocol that features a dual-layer server infrastructure and highly-customized data encoding."

  24. Stealer taunts researchers in code

    Noodlophile is the name given to an information-stealing malware that has been distributed via fake AI tools promoted on Facebook. Assessed to be the work of a threat actor based in Vietnam, it was first documented by Morphisec in May 2025. Since then, there have been other reports detailing various campaigns, such as UNC6229 and PXA Stealer, orchestrated by Vietnamese cybercriminals. Morphisec's latest analysis of Noodlophile has revealed that the threat actor "padded the malware with millions of repeats of a colorful Vietnamese phrase translating to 'f*** you, Morphisec,'" suggesting that the operators were not thrilled about getting exposed. "Not just to vent frustration over disrupted campaigns, but also to bloat the file and crash AI-based analysis tools that are based on the Python disassemble library – dis.dis(obj)," security researcher Michael Gorelik said.

  25. Crypto library RCE risk patched

    The OpenSSL project has patched a stack buffer overflow flaw that can lead to remote code execution attacks under certain conditions. The vulnerability, tracked as CVE-2025-15467, resides in how the library processes Cryptographic Message Syntax data. Threat actors can use CMS packets with maliciously crafted AEAD parameters to crash OpenSSL and run malicious code. CVE-2025-15467 is one of 12 issues that were disclosed by AISLE late last month. Another high-severity vulnerability is CVE-2025-11187, which could trigger a stack-based buffer overflow due to a missing validation.

  26. Machine accounts expand delegation risk

    New research from Silverfort has challenged a "common assumption" about Kerberos delegation -- which allows a service to request resources or perform actions on behalf of a user -- by showing that it applies not just to human users, but also to machine accounts. In other words, delegation can be exercised on behalf of highly privileged machine identities such as domain controllers. "That means a service trusted for delegation can act not just on behalf of other users, but also on behalf of machine accounts, the most critical non-human identities (NHIs) in any domain," Silverfort researcher Dor Segal said. "The risk is obvious. If an adversary can leverage delegation, it can act on behalf of sensitive machine accounts, which in many environments hold privileges equivalent to Domain Administrator." To counter the risk, it's advised to run Set-ADAccountControl -Identity "HOST01$" -AccountNotDelegated $true for each sensitive machine account.



from The Hacker News https://ift.tt/MVv5Ftb
via IFTTT

How Medplum Secured Their Healthcare Platform with Docker Hardened Images (DHI)

Special thanks to Cody Ebberson and the Medplum team for their open-source contribution and for sharing their migration experience with the community. A real-world example of migrating a HIPAA-compliant EHR platform to DHI with minimal code changes.

Healthcare software runs on trust. When patient data is at stake, security isn’t just a feature but a fundamental requirement. For healthcare platform providers, proving that trust to enterprise customers is an ongoing challenge that requires continuous investment in security posture, compliance certifications, and vulnerability management.

That’s why we’re excited to share how Medplum, an open-source healthcare platform serving over 20 million patients, recently migrated to Docker Hardened Images (DHI). This migration demonstrates exactly what we designed DHI to deliver: enterprise-grade security with minimal friction. Medplum’s team made the switch with just 54 lines of changes across 5 files—a near net-zero code change that dramatically improved their security posture.

Medplum is a headless EHR—the platform handles patient data, clinical workflows, and compliance so developers can focus on building healthcare apps. Built by and for healthcare developers, the platform provides:

  • HIPAA and SOC2 compliance out of the box
  • FHIR R4 API for healthcare data interoperability
  • Self-hosted or managed deployment options
  • Support for 20+ million patients across hundreds of practices

With over 500,000 pulls on Docker Hub for their medplum-server image, Medplum has become a trusted foundation for healthcare developers worldwide. As an open-source project licensed under Apache 2.0, their entire codebase—including Docker configurations—is publicly available on GitHub. This transparency made their DHI migration a perfect case study for the community.

Diagram of Medplum as headless EHR

Caption: Medplum is a headless EHR — the platform handles patient data, clinical workflows, and compliance so developers can focus on building healthcare apps.

Medplum is developer-first. It’s not a plug-and-play low-code tool—it’s designed for engineering teams that want a strong FHIR-based foundation with full control over the codebase.

The Challenge: Vulnerability Noise and Security Toil

Healthcare software development comes with unique challenges. Integration with existing EHR systems, compliance with regulations like HIPAA, and the need for robust security all add complexity and cost to development cycles.

“The Medplum team found themselves facing a challenge common to many high-growth platforms: ‘Vulnerability Noise.’ Even with lean base images, standard distributions often include non-essential packages that trigger security flags during enterprise audits. For a company helping others achieve HIPAA compliance, every ‘Low’ or ‘Medium’ CVE (Common Vulnerability and Exposure) requires investigation and documentation, creating significant ‘security toil’ for their engineering team.”

Reshma Khilnani

CEO, Medplum

Medplum addresses this by providing a compliant foundation. But even with that foundation, their team found themselves facing another challenge common to high-growth platforms: “Vulnerability Noise.”

Healthcare is one of the most security-conscious industries. Medplum’s enterprise customers—including Series C and D funded digital health companies—don’t just ask about security; they actively verify it. These customers routinely scan Medplum’s Docker images as part of their security due diligence.

Even with lean base images, standard distributions often include non-essential packages that trigger security flags during enterprise audits. For a company helping others achieve HIPAA compliance, every “Low” or “Medium” CVE requires investigation and documentation. This creates significant “security toil” for their engineering team.

The First Attempt: Distroless

This wasn’t Medplum’s first attempt at solving the problem. Back in November 2024, the team investigated Google’s distroless images as a potential solution.

The motivations were similar to what DHI would later deliver:

  • Less surface area in production images, and therefore less CVE noise
  • Smaller images for faster deployments
  • Simpler build process without manual hardening scripts

The idea was sound. Distroless images strip away everything except the application runtime—no shell, no package manager, minimal attack surface. On paper, it was exactly what Medplum needed.

But the results were mixed. Image sizes actually increased. Build times went up. There were concerns about multi-architecture support for native dependencies. The PR was closed without merging.

The core problem remained: many CVEs in standard images simply aren’t actionable. Often there isn’t a fix available, so all you can do is document and explain why it doesn’t apply to your use case. And often the vulnerability is in a corner of the image you’re not even using—like Perl, which comes preinstalled on Debian but serves no purpose in a Node.js application.

Fully removing these unused components is the only real answer. The team knew they needed hardened images. They just hadn’t found the right solution yet.

The Solution: Docker Hardened Images

When Docker made Hardened Images freely available under Apache 2.0, Medplum’s team saw an opportunity to simplify their security posture while maintaining compatibility with their existing workflows.

By switching to Docker Hardened Images, Medplum was able to offload the repetitive work of OS-level hardening—like configuring non-root users and stripping out unnecessary binaries—to Docker. This allowed them to provide their users with a “Secure-by-Default” image that meets enterprise requirements without adding complexity to their open-source codebase.

This shift is particularly significant for an open-source project. Rather than maintaining custom hardening scripts that contributors need to understand and maintain, Medplum can now rely on Docker’s expertise and continuous maintenance. The security posture improves automatically with each DHI update, without requiring changes to Medplum’s Dockerfiles.

“By switching to Docker Hardened Images, Medplum was able to offload the repetitive work of OS-level hardening—like configuring non-root users and stripping out unnecessary binaries—to Docker. This allowed them to provide their users with a ‘Secure-by-Default’ image that meets enterprise requirements without adding complexity to their open-source codebase.”

Cody Ebberson

CTO, Medplum

The Migration: Real Code Changes

The migration was remarkably clean. Previously, Medplum’s Dockerfile required manual steps to ensure security best practices. By moving to DHI, they could simplify their configuration significantly.

Let’s look at what actually changed. Here’s the complete server Dockerfile after the migration:

# Medplum production Dockerfile
# Uses Docker "Hardened Images":
# https://hub.docker.com/hardened-images/catalog/dhi/node/guides

# Supported architectures: linux/amd64, linux/arm64

# Stage 1: Build the application and install production dependencies
FROM dhi.io/node:24-dev AS build-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
ADD ./medplum-server-metadata.tar.gz ./
RUN npm ci --omit=dev && \
  rm package-lock.json

# Stage 2: Create the runtime image
FROM dhi.io/node:24 AS runtime-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
COPY --from=build-stage /usr/src/medplum/ ./
ADD ./medplum-server-runtime.tar.gz ./

EXPOSE 5000 8103

ENTRYPOINT [ "node", "--require", "./packages/server/dist/otel/instrumentation.js", "packages/server/dist/index.js" ]

Notice what’s not there:

  • No groupadd or useradd commands — DHI runs as non-root by default
  • No chown commands — permissions are already correct
  • No USER directive — the default user is already non-privileged

Before vs. After: Server Dockerfile

Before (node:24-slim):

FROM node:24-slim
ENV NODE_ENV=production
WORKDIR /usr/src/medplum

ADD ./medplum-server.tar.gz ./

# Install dependencies, create non-root user, and set permissions
RUN npm ci && \
  rm package-lock.json && \
  groupadd -r medplum && \
  useradd -r -g medplum medplum && \
  chown -R medplum:medplum /usr/src/medplum

EXPOSE 5000 8103

# Switch to the non-root user
USER medplum

ENTRYPOINT [ "node", "--require", "./packages/server/dist/otel/instrumentation.js", "packages/server/dist/index.js" ]

After (dhi.io/node:24):

FROM dhi.io/node:24-dev AS build-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
ADD ./medplum-server-metadata.tar.gz ./
RUN npm ci --omit=dev && rm package-lock.json

FROM dhi.io/node:24 AS runtime-stage
ENV NODE_ENV=production
WORKDIR /usr/src/medplum
COPY --from=build-stage /usr/src/medplum/ ./
ADD ./medplum-server-runtime.tar.gz ./

EXPOSE 5000 8103

ENTRYPOINT [ "node", "--require", "./packages/server/dist/otel/instrumentation.js", "packages/server/dist/index.js" ]

The migration also introduced a cleaner multi-stage build pattern, separating metadata (package.json files) from runtime artifacts.

Before vs. After: App Dockerfile (Nginx)

The web app migration was even more dramatic:

Before (nginx-unprivileged:alpine):

FROM nginxinc/nginx-unprivileged:alpine

# Start as root for permissions
USER root

COPY <<EOF /etc/nginx/conf.d/default.conf
# ... nginx config ...
EOF

ADD ./medplum-app.tar.gz /usr/share/nginx/html
COPY ./docker-entrypoint.sh /docker-entrypoint.sh

# Manual permission setup
RUN chown -R 101:101 /usr/share/nginx/html && \
    chown 101:101 /docker-entrypoint.sh && \
    chmod +x /docker-entrypoint.sh

EXPOSE 3000

# Switch back to non-root
USER 101

ENTRYPOINT ["/docker-entrypoint.sh"]

After (dhi.io/nginx:1):

FROM dhi.io/nginx:1

COPY <<EOF /etc/nginx/nginx.conf
# ... nginx config ...
EOF

ADD ./medplum-app.tar.gz /usr/share/nginx/html
COPY ./docker-entrypoint.sh /docker-entrypoint.sh

EXPOSE 3000

ENTRYPOINT ["/docker-entrypoint.sh"]

Results: Improved Security Posture

After merging the changes, Medplum’s team shared their improved security scan results. The migration to DHI resulted in:

  • Dramatically reduced CVE count – DHI’s minimal base means fewer packages to patch
  • Non-root by default – No manual user configuration required
  • No shell access in production – Reduced attack surface for container escape attempts
  • Continuous patching – All DHI images are rebuilt when upstream security updates are available

For organizations that require stronger guarantees, Docker Hardened Images Enterprise adds SLA-backed remediation timelines, image customizations, and FIPS/STIG variants.

Most importantly, all of this was achieved with zero functional changes to the application. The same tests passed, the same workflows worked, and the same deployment process applied.

CI/CD Integration

Medplum also updated their GitHub Actions workflow to authenticate with the DHI registry:

- name: Login to Docker Hub
  uses: docker/login-action@v2.2.0
  with:
    username: $
    password: $

- name: Login to Docker Hub Hardened Images
  uses: docker/login-action@v2.2.0
  with:
    registry: dhi.io
    username: $
    password: $

This allows their CI/CD pipeline to pull hardened base images during builds. The same Docker Hub credentials work for both standard and hardened image registries.

The Multi-Stage Pattern for DHI

One pattern worth highlighting from Medplum’s migration is the use of multi-stage builds with DHI variants:

  1. Build stage: Use dhi.io/node:24-dev which includes npm/yarn for installing dependencies
  2. Runtime stage: Use dhi.io/node:24 which is minimal and doesn’t include package managers

This pattern ensures that build tools never make it into the production image, further reducing the attack surface. It’s a best practice for any containerized Node.js application, and DHI makes it straightforward by providing purpose-built variants for each stage.

Medplum’s Production Architecture

Medplum’s hosted offering runs on AWS using containerized workloads. Their medplum/medplum-server image—built on DHI base images—now deploys to production.

Medplum production architecture

Here’s how the build-to-deploy flow works:

  1. Build time: GitHub Actions pulls dhi.io/node:24-dev and dhi.io/node:24 as base images
  2. Push: The resulting hardened image is pushed to medplum/medplum-server on Docker Hub
  3. Deploy: AWS Fargate pulls medplum/medplum-server:latest and runs the hardened container

The deployed containers inherit all DHI security properties—non-root execution, minimal attack surface, no shell—because they’re built on DHI base images. This demonstrates that DHI works seamlessly with production-grade infrastructure including:

  • AWS Fargate/ECS for container orchestration
  • Elastic Load Balancing for high availability
  • Aurora PostgreSQL for managed database
  • ElastiCache for Redis caching
  • CloudFront for CDN and static assets

No infrastructure changes were required. The same deployment pipeline, the same Fargate configuration—just a more secure base image.

Why This Matters for Healthcare

For healthcare organizations evaluating container security, Medplum’s migration offers several lessons:

1. Eliminating “Vulnerability Noise”

The biggest win from DHI isn’t just security—it’s reducing the operational burden of security. Fewer packages means fewer CVEs to investigate, document, and explain to customers. For teams without dedicated security staff, this reclaimed time is invaluable.

2. Compliance-Friendly Defaults

HIPAA requires covered entities to implement technical safeguards including access controls and audit controls. DHI’s non-root default and minimal attack surface align with these requirements out of the box. For companies pursuing SOC 2 Type 2 certification—which Medplum implemented from Day 1—or HITRUST certification, DHI provides a stronger foundation for the technical controls auditors evaluate.

3. Reduced Audit Surface

When security teams audit container configurations, DHI provides a cleaner story. Instead of explaining custom hardening scripts or why certain CVEs don’t apply, teams can point to Docker’s documented hardening methodology, SLSA Level 3 provenance, and independent security validation by SRLabs. This is particularly valuable during enterprise sales cycles where customers scan vendor images as part of due diligence.

4. Practicing What You Preach

For platforms like Medplum that help customers achieve compliance, using hardened images isn’t just good security—it’s good business. When you’re helping healthcare organizations meet regulatory requirements, your own infrastructure needs to set the example.

5. Faster Security Response

With DHI Enterprise, critical CVEs are patched within 7 days. For healthcare organizations where security incidents can have regulatory implications, this SLA provides meaningful risk reduction—and a concrete commitment to share with customers.

Conclusion

Medplum’s migration to Docker Hardened Images demonstrates that improving container security doesn’t have to be painful. With minimal code changes—54 additions and 52 deletions—they achieved:

  • Secure-by-Default images that meet enterprise requirements
  • Automatic non-root execution
  • Dramatically reduced CVE surface
  • Simplified Dockerfiles with no manual hardening scripts
  • Less “security toil” for their engineering team
  • A stronger compliance story for enterprise customers

By offloading OS-level hardening to Docker, Medplum can focus on what they do best—building healthcare infrastructure—while their security posture improves automatically with each DHI update.

For a platform with 500,000+ Docker Hub pulls serving healthcare organizations worldwide, this migration shows that DHI is ready for production workloads at scale. More importantly, it shows that security improvements can actually reduce operational burden rather than add to it.

For platforms helping others achieve compliance, practicing what you preach matters. With Docker Hardened Images, that just got a lot easier.

Ready to harden your containers? Explore the Docker Hardened Images documentation or browse the free DHI catalog to find hardened versions of your favorite base images.


from Docker https://ift.tt/FrB17Ve
via IFTTT

From Exposure to Exploitation: How AI Collapses Your Response Window

We’ve all seen this before: a developer deploys a new cloud workload and grants overly broad permissions just to keep the sprint moving. An engineer generates a "temporary" API key for testing and forgets to revoke it. In the past, these were minor operational risks, debts you’d eventually pay down during a slower cycle.

In 2026, “Eventually” is Now

But today, within minutes, AI-powered adversarial systems can find that over-permissioned workload, map its identity relationships, and calculate a viable route to your critical assets. Before your security team has even finished their morning coffee, AI agents have simulated thousands of attack sequences and moved toward execution.

AI compresses reconnaissance, simulation, and prioritization into a single automated sequence. The exposure you created this morning can be modeled, validated, and positioned inside a viable attack path before your team has lunch.

The Collapse of the Exploitation Window

Historically, the exploitation window favored the defender. A vulnerability was disclosed, teams assessed their exposure, and remediation followed a predictable patch cycle. AI has shattered that timeline.

In 2025, over 32% of vulnerabilities were exploited on or before the day the CVE was issued. The infrastructure powering this is massive, with AI-powered scan activity reaching 36,000 scans per second.

But it’s not just about speed; it’s about context. Only 0.47% of identified security issues are actually exploitable. While your team burns cycles reviewing the other 99.5% that is noise, AI stays laser-focused on the roughly 0.5% that matters, isolating the small fraction of exposures that can be chained into a viable route to your critical assets.
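To make that ratio concrete, a quick back-of-the-envelope calculation (the scan volume is a hypothetical figure, not from the article):

```python
findings = 10_000                       # hypothetical number of scanner findings
exploitable = round(findings * 0.0047)  # 0.47% actually exploitable, per the stat above
print(f"{exploitable} of {findings} findings warrant attention")
```

At that rate, only a few dozen findings out of ten thousand are worth a defender's time, which is exactly the subset an AI-driven attacker isolates first.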

To understand the threat, we must look at it through two distinct lenses: how AI accelerates attacks on your infrastructure, and how your AI infrastructure itself introduces a new attack surface.

Scenario #1: AI as an Accelerator

AI attackers aren't necessarily using "new" exploits. They are exploiting the exact same CVEs and misconfigurations they always have, but they are doing it with machine speed and scale.

Automated vulnerability chaining

Attackers no longer need a "Critical" vulnerability to breach you. They use AI to chain together "Low" and "Medium" issues: a stale credential here, a misconfigured S3 bucket there. AI agents can ingest identity graphs and telemetry to find these convergence points in seconds, doing work that used to take human analysts weeks.

Identity sprawl as a weapon

Machine identities now outnumber human employees 82 to 1. This creates a massive web of keys, tokens, and service accounts. AI-driven tools excel at "identity hopping", mapping token exchange paths from a low-security dev container to an automated backup script, and finally to a high-value production database.

Social Engineering at scale

Phishing has surged 1,265% because AI allows attackers to mirror your company’s internal tone and operational "vibe" perfectly. These aren't generic spam emails; they are context-aware messages that bypass the usual "red flags" employees are trained to spot.

Scenario #2: AI as the New Attack Surface

While AI accelerates attacks on legacy systems, your own AI adoption is creating entirely new vulnerabilities. Attackers aren't just using AI; they are targeting it.

The Model Context Protocol and Excessive Agency

When you connect internal agents to your data, you introduce the risk that an agent will be targeted and turned into a "confused deputy." Attackers can use prompt injection to trick your public-facing support agents into querying internal databases they should never access. Sensitive data surfaces and is exfiltrated by the very systems you trusted to protect it, all while looking like authorized traffic.

Poisoning the Well

The results of these attacks extend far beyond the moment of exploitation. By feeding false data into an agent's long-term memory (Vector Store), attackers create a dormant payload. The AI agent absorbs this poisoned information and later serves it to users. Your EDR tools see only normal activity, but the AI is now acting as an insider threat.

Supply Chain Hallucinations

Finally, attackers can poison your supply chain before they ever touch your systems. They use LLMs to predict the "hallucinated" package names that AI coding assistants will suggest to developers. By registering these malicious packages first (slopsquatting), they ensure developers inject backdoors directly into your CI/CD pipeline.
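One pragmatic defense against slopsquatting is to gate dependency installation behind a vetted allowlist, so a name an assistant hallucinated fails closed instead of being fetched. A minimal Python sketch (the `APPROVED` set and package names are illustrative, not a real policy):

```python
# Flag requirements entries that are not on a vetted allowlist, so a
# hallucinated ("slopsquatted") package name fails closed before install.
APPROVED = {"requests", "flask", "sqlalchemy"}  # illustrative allowlist

def unapproved(requirements: list[str]) -> list[str]:
    # Only '==' pins are handled here; a real gate would parse all specifiers.
    names = [r.split("==")[0].strip().lower() for r in requirements if r.strip()]
    return [n for n in names if n not in APPROVED]

print(unapproved(["requests==2.32.0", "flask==3.0.3", "requestz==0.0.1"]))
# -> ['requestz']: the look-alike name is the one to investigate
```

In practice this check would run in CI before `pip install`, turning a hallucinated name into a build failure rather than a backdoor.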

Reclaiming the Response Window

Traditional defense cannot match AI speed because it measures success by the wrong metrics. Teams count alerts and patches, treating volume as progress, while adversaries exploit the gaps that accumulate from all this noise.

An effective strategy for staying ahead of attackers in the era of AI must focus on one simple, yet critical question: which exposures actually matter for an attacker moving laterally through your environment?

To answer this, organizations must shift from reactive patching to Continuous Threat Exposure Management (CTEM). It is an operational pivot designed to align security exposure with actual business risk.

AI-enabled attackers don’t care about isolated findings. They chain exposures together into viable paths to your most critical assets. Your remediation strategy needs to account for that same reality: focus on the convergence points where multiple exposures intersect, where one fix eliminates dozens of routes.

The ordinary operational decisions your teams made this morning can become a viable attack path before lunch. Close the paths faster than AI can compute them, and you reclaim the window of exploitation.

Note: This article was thoughtfully written and contributed for our audience by Erez Hasson, Director of Product Marketing at XM Cyber.




from The Hacker News https://ift.tt/kHinIFr
via IFTTT

Fake IPTV Apps Spread Massiv Android Malware Targeting Mobile Banking Users

Cybersecurity researchers have disclosed details of a new Android trojan called Massiv that's designed to facilitate device takeover (DTO) attacks for financial theft.

The malware, according to ThreatFabric, masquerades as seemingly harmless IPTV apps to deceive victims, indicating that the activity is primarily singling out users looking for online TV applications.

"This new threat, while only seen in a limited number of rather targeted campaigns, already poses a great risk to the users of mobile banking, allowing its operators to remotely control infected devices and perform device takeover attacks with further fraudulent transactions performed from the victim's banking accounts," the Dutch mobile security company said in a report shared with The Hacker News.

Like various Android banking malware families, Massiv supports a wide range of features to facilitate credential theft through a number of methods: screen streaming through Android's MediaProjection API, keylogging, SMS interception, and fake overlays served atop banking and financial apps. The overlay asks users to enter their credentials and credit card details.

One such campaign has been found to target gov.pt, a Portuguese public administration app that allows users to store identification documents and manage the Digital Mobile Key (aka Chave Móvel Digital or CMD). The overlay tricks users into entering their phone number and PIN code, likely in an effort to bypass Know Your Customer (KYC) verification.

ThreatFabric said it identified cases where scammers used the information captured through these overlays to open new banking accounts in the victim's name, allowing them to be used for money laundering or getting loans approved without the actual victim's knowledge.

In addition, it serves as a fully functional remote-control tool, granting the operator the ability to access the victim's device stealthily while showing a black screen overlay to conceal the malicious activity. These techniques, realized by abusing Android's accessibility services, have also been observed in several other Android bankers like Crocodilus, Datzbro, and Klopatra.

"However, some applications implement protection against screen capture," the company explained. "To bypass it, Massiv uses so-called UI-tree mode -- it traverses AccessibilityWindowInfo roots and recursively processes AccessibilityNodeInfo objects."

This is done to build a JSON representation of visible text and content descriptions, UI elements, screen coordinates, and interaction flags that indicate whether the UI element is clickable, editable, focused, or enabled. Only nodes that are visible and have text are exported to the attacker, who can then determine the next course of action by issuing specific commands to interact with the device.
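ThreatFabric describes this traversal at the level of Android's accessibility APIs; as an illustration only, the same export logic (recurse through the tree, keep visible text-bearing nodes, serialize to JSON) can be sketched over a plain dictionary tree in Python, with node fields standing in for AccessibilityNodeInfo properties:

```python
import json

def export_visible_text(node: dict) -> list[dict]:
    # Recursively collect nodes that are visible and carry text,
    # mirroring the "UI-tree mode" described above.
    out = []
    if node.get("visible") and node.get("text"):
        out.append({
            "text": node["text"],
            "bounds": node.get("bounds"),
            "clickable": node.get("clickable", False),
            "editable": node.get("editable", False),
        })
    for child in node.get("children", []):
        out.extend(export_visible_text(child))
    return out

root = {"visible": True, "text": "", "children": [
    {"visible": True, "text": "Transfer", "bounds": [0, 0, 100, 40], "clickable": True},
    {"visible": False, "text": "hidden", "children": []},
]}
print(json.dumps(export_visible_text(root)))
```

The key property, as with the real malware, is that screen-capture protections are irrelevant: the attacker reads the UI's semantic tree, not its pixels.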

The malware is equipped to carry out a wide range of malicious actions -

  • Enable black overlay, mute sounds and vibration
  • Send device information
  • Perform click and swipe actions
  • Alter clipboard with specific text
  • Disable black screen
  • Turn on/off screen streaming
  • Unlock device with pattern
  • Serve overlays for an app, device pattern lock, or PIN
  • Download ZIP archive with overlays for targeted applications
  • Download and install APK files
  • Open Battery Optimization, Device Admin, and Play Protect settings screens
  • Request permissions to access SMS messages and install APK packages
  • Clear log databases on the device

Massiv is distributed in the form of dropper apps mimicking IPTV apps via SMS phishing. Once installed and launched, the dropper prompts the victim to install an "important" update by granting it permissions to install software from external sources. The names of the malicious artifacts are listed below -

  • IPTV24 (hfgx.mqfy.fejku) - Dropper
  • Google Play (hobfjp.anrxf.cucm) - Massiv

"In most of the cases observed, it is just masquerading," ThreatFabric said. "No actual IPTV applications were infected or initially contained malicious code. Usually, the dropper that mimics an IPTV app opens a WebView with an IPTV website in it, while the actual malware is already installed and running on the device."

The majority of Android malware campaigns using TV-related droppers have targeted Spain, Portugal, France, and Turkey over the past six months.

Massiv is the latest entrant to an already crowded Android threat landscape, reflecting the continuing demand for such turnkey solutions among cybercriminals.

"While not yet observed being promoted as Malware-as-a-Service, Massiv's operator shows clear signs of going this path, introducing API keys to be used in malware communication with the backend," ThreatFabric said. "Code analysis revealed ongoing development, with more features likely to be introduced in the future."



from The Hacker News https://ift.tt/ge5mot6
via IFTTT

CRESCENTHARVEST Campaign Targets Iran Protest Supporters With RAT Malware

Cybersecurity researchers have disclosed details of a new campaign dubbed CRESCENTHARVEST, likely targeting supporters of Iran's ongoing protests to conduct information theft and long-term espionage.

The Acronis Threat Research Unit (TRU) said it observed the activity after January 9, with the attacks designed to deliver a malicious payload that serves as a remote access trojan (RAT) and information stealer to execute commands, log keystrokes, and exfiltrate sensitive data. It's currently not known if any of the attacks were successful.

"The campaign exploits recent geopolitical developments to lure victims into opening malicious .LNK files disguised as protest-related images or videos," researchers Subhajeet Singha, Eliad Kimhy, and Darrel Virtusio said in a report published this week.

    "These files are bundled with authentic media and a Farsi-language report providing updates from 'the rebellious cities of Iran.' This pro-protest framing appears to be intended to increase credibility and to attract Farsi-speaking Iranians seeking protest-related information."

CRESCENTHARVEST, although unattributed, is believed to be the work of an Iran-aligned threat group. The discovery makes it the second such campaign identified as going after specific individuals in the aftermath of the nationwide protests in Iran that began towards the end of 2025.

Last month, French cybersecurity company HarfangLab detailed a threat cluster dubbed RedKitten that targeted non-governmental organizations and individuals involved in documenting recent human rights abuses in Iran with an aim to infect them with a custom backdoor known as SloppyMIO.

According to Acronis, the exact initial access vector used to distribute the malware is not known. However, it's suspected that the threat actors are relying on spear-phishing or "protracted social engineering efforts" in which the operators build rapport with the victims over time before sending the malicious payloads.

It's worth noting that Iranian hacking groups like Charming Kitten and Tortoiseshell have a storied history of sophisticated social engineering attacks, approaching prospective targets under fake personas and cultivating a relationship with them, in some cases stretching over years, before weaponizing that trust to infect them with malware.

"The use of Farsi language content for social engineering and the distributed files depicting the protests in heroic terms suggest an intent to attract Farsi-speaking individuals of Iranian origin, who are in support of the ongoing protests," the Swiss-based security company noted.

The starting point of the attack chain is a malicious RAR archive that claims to contain information related to the Iranian protests, including various images and videos, along with two Windows shortcut (LNK) files that masquerade as an image or a video file by using the double extension trick (*.jpg.lnk or *.mp4.lnk).

Once launched, the deceptive file executes PowerShell code that retrieves another ZIP archive while simultaneously opening a harmless image or video, tricking the victim into believing they have interacted with a benign file.
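The double-extension masquerade described above is straightforward to screen for. As a hedged illustration (not part of the article or the campaign's tooling), a defender could flag any .LNK file whose name embeds a second, media-style extension:

```python
import os

# Illustrative list of media extensions abused by the *.jpg.lnk / *.mp4.lnk
# trick; extend as needed.
MEDIA_EXTS = {".jpg", ".jpeg", ".png", ".gif", ".mp4", ".avi", ".mov"}

def is_masquerading_lnk(filename: str) -> bool:
    """Return True if a .lnk file hides behind a media-file extension."""
    base, outer = os.path.splitext(filename.lower())
    if outer != ".lnk":
        return False
    _, inner = os.path.splitext(base)  # the extension the victim actually sees
    return inner in MEDIA_EXTS

print(is_masquerading_lnk("protest_footage.mp4.lnk"))  # True
print(is_masquerading_lnk("report.pdf"))               # False
```

A plain `photo.lnk` would not trip this check; it targets only the double-extension pattern the researchers cite.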

Present within the ZIP archive is a legitimate Google-signed binary ("software_reporter_tool.exe") shipped as part of Chrome's cleanup utility and several DLL files, including two rogue libraries that are sideloaded by the executable to realize the threat actor's objectives -

  • urtcbased140d_d.dll, a C++ implant that extracts and decrypts Chrome's app-bound encryption keys through COM interfaces. It shares overlaps with an open-source project known as ChromElevator.
  • version.dll (aka CRESCENTHARVEST), a remote access tool that lists installed antivirus products and security tools, enumerates local user accounts on the device, loads DLLs, harvests system metadata, browser credentials, Telegram desktop account data, and keystrokes.
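DLL side-loading works because Windows checks the application's own directory before system directories when resolving imports, so a rogue `version.dll` placed beside a signed executable gets loaded in its stead. A minimal, hypothetical triage sketch (the DLL name list is an assumption, not from the article) is to flag DLLs in an application directory that shadow well-known system library names:

```python
from pathlib import Path

# Hypothetical watchlist: system DLL names commonly abused for side-loading.
# version.dll appears in this campaign; the others are illustrative.
COMMON_SYSTEM_DLLS = {"version.dll", "userenv.dll", "dbghelp.dll", "wtsapi32.dll"}

def suspicious_sideload_candidates(app_dir: str) -> list[str]:
    """List DLLs in app_dir that shadow well-known system DLL names."""
    return sorted(
        p.name.lower()
        for p in Path(app_dir).glob("*.dll")
        if p.name.lower() in COMMON_SYSTEM_DLLS
    )
```

Real triage would also verify the DLL's signature and compare its hash against the genuine system copy; this sketch only surfaces the naming collision.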

CRESCENTHARVEST employs the Windows WinHTTP APIs to communicate with its command-and-control (C2) server ("servicelog-information[.]com"), allowing it to blend in with regular traffic. Some of the supported commands are listed below -

  • Anti, to run anti-analysis checks
  • His, to steal browser history
  • Dir, to list directories
  • Cwd, to get the current working directory
  • Cd, to change directory
  • GetUser, to get user information
  • ps, to run PowerShell commands (not working)
  • KeyLog, to activate keylogger
  • Tel_s, to steal Telegram session data
  • Cook, to steal browser cookies
  • Info, to steal system information
  • F_log, to steal browser credentials
  • Upload, to upload files
  • shell, to run shell commands
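A keyword list like the one above implies a simple command-dispatch loop inside the implant: the C2 response is matched against a table of handlers. The sketch below is a generic illustration of that pattern, not recovered CRESCENTHARVEST code; the handler names and bodies are assumptions.

```python
import os

# Two benign stand-in handlers for the "Cwd" and "Dir" keywords.
def handle_cwd() -> str:
    return os.getcwd()

def handle_dir() -> str:
    return "\n".join(sorted(os.listdir(".")))

# Each C2 keyword from the list above ("His", "KeyLog", "Cook", ...)
# would map to a handler the same way.
DISPATCH = {
    "Cwd": handle_cwd,
    "Dir": handle_dir,
}

def execute(command: str) -> str:
    """Look up the C2 keyword and run its handler."""
    handler = DISPATCH.get(command)
    return handler() if handler else "unknown command"
```

The flat keyword-to-handler table is what lets analysts enumerate a RAT's capabilities quickly once the dispatch routine is found in a disassembler.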

"The CRESCENTHARVEST campaign represents the latest chapter in a decade-long pattern of suspected nation-state cyber espionage operations targeting journalists, activists, researchers, and diaspora communities globally," Acronis said. "Much of what we observed in CRESCENTHARVEST reflects well-established tradecraft: LNK-based initial access, DLL side-loading through signed binaries, credential harvesting and social engineering aligned to current events."



from The Hacker News https://ift.tt/lqRXdiW
via IFTTT