Saturday, February 21, 2026

AI-Assisted Threat Actor Compromises 600+ FortiGate Devices in 55 Countries

A Russian-speaking, financially motivated threat actor has been observed taking advantage of commercial generative artificial intelligence (AI) services to compromise over 600 FortiGate devices located in 55 countries.

That's according to new findings from Amazon Threat Intelligence, which said it observed the activity between January 11 and February 18, 2026.

"No exploitation of FortiGate vulnerabilities was observed—instead, this campaign succeeded by exploiting exposed management ports and weak credentials with single-factor authentication, fundamental security gaps that AI helped an unsophisticated actor exploit at scale," CJ Moses, Chief Information Security Officer (CISO) of Amazon Integrated Security, said in a report.

The tech giant described the threat actor as having limited technical capabilities, a constraint they overcame by relying on multiple commercial generative AI tools to implement various phases of the attack cycle, such as tool development, attack planning, and command generation.

While one AI tool served as the primary backbone of the operation, the attackers also relied on a second AI tool as a fallback to assist with pivoting within a specific compromised network. The names of the AI tools were not disclosed.

The threat actor is assessed to be driven by financial gain and not associated with any advanced persistent threat (APT) with state-sponsored resources. As recently highlighted by Google, generative AI tools are being increasingly adopted by threat actors to scale and accelerate their operations, even if they don't equip them with novel uses of the technology.

If anything, the emergence of AI tools illustrates how capabilities that were once off-limits to novice or technically limited threat actors are becoming increasingly accessible, further lowering the barrier to entry for cybercrime and enabling such actors to devise attack methodologies that would otherwise be beyond their reach.

"They are likely a financially motivated individual or small group who, through AI augmentation, achieved an operational scale that would have previously required a significantly larger and more skilled team," Moses said.

Amazon's investigation into the threat actor's activity has revealed that they have successfully compromised multiple organizations’ Active Directory environments, extracted complete credential databases, and even targeted backup infrastructure, likely in a lead-up to ransomware deployment.

What's interesting here is that rather than devising ways to persist within hardened environments or those with sophisticated security controls, the threat actor dropped such targets altogether and moved on to softer victims. This suggests the actor used AI to bridge a skill gap in pursuit of easy pickings, not to defeat mature defenses.

Amazon said it identified publicly accessible infrastructure managed by the attackers that hosted various artifacts pertinent to the campaign. This included AI-generated attack plans, victim configurations, and source code for custom tooling. The entire modus operandi is akin to an "AI-powered assembly line for cybercrime," the company added.

At its core, the attacks enabled the threat actor to breach FortiGate appliances and extract full device configurations, yielding credentials, network topology details, and other configuration data.

This involved systematic scanning of FortiGate management interfaces exposed to the internet across ports 443, 8443, 10443, and 4443, followed by attempts to authenticate using commonly reused credentials. The activity was sector-agnostic, indicating automated mass scanning for vulnerable appliances. The scans originated from the IP address 212.11.64[.]250.
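For defenders, that kind of exposure is easy to audit from the outside. The sketch below is a minimal, hypothetical exposure check (a plain TCP connect test, not a product scanner, and not the actor's tooling) that reports which of those management ports answer on a given address:

```python
import socket

# Ports the campaign scanned for exposed FortiGate management interfaces
MGMT_PORTS = [443, 8443, 10443, 4443]

def check_port(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def exposed_mgmt_ports(host: str) -> list[int]:
    """List which management ports on a host accept connections."""
    return [p for p in MGMT_PORTS if check_port(host, p)]
```

Running this against your own appliance's public address from outside the network should, ideally, return an empty list.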

The stolen data was then used to burrow deeper into targeted networks and conduct post-exploitation activities, including vulnerability scanning with Nuclei, Active Directory compromise, credential harvesting, and efforts to access backup infrastructure that align with typical ransomware operations.

Data gathered by Amazon shows that the scanning activity resulted in organizational-level compromise, causing multiple FortiGate devices belonging to the same entity to be accessed. The compromised clusters have been detected across South Asia, Latin America, the Caribbean, West Africa, Northern Europe, and Southeast Asia.

"Following VPN access to victim networks, the threat actor deploys a custom reconnaissance tool, with different versions written in both Go and Python," the company said.

"Analysis of the source code reveals clear indicators of AI-assisted development: redundant comments that merely restate function names, simplistic architecture with disproportionate investment in formatting over functionality, naive JSON parsing via string matching rather than proper deserialization, and compatibility shims for language built-ins with empty documentation stubs."

Some of the other steps undertaken by the threat actor following the reconnaissance phase are listed below -

  • Achieve domain compromise via DCSync attacks.
  • Move laterally across the network via pass-the-hash/pass-the-ticket attacks, NTLM relay attacks, and remote command execution on Windows hosts.
  • Target Veeam Backup & Replication servers to deploy credential harvesting tools and programs aimed at exploiting known Veeam vulnerabilities (e.g., CVE-2023-27532 and CVE-2024-40711).

Another noteworthy finding is the threat actor's pattern of repeatedly running into failures when trying to exploit anything beyond the "most straightforward, automated attack paths," with their own documentation recording that targets had patched the affected services, closed the required ports, or offered no vulnerable exploitation vectors.

With Fortinet appliances becoming an attractive target for threat actors, it's essential that organizations ensure management interfaces are not exposed to the internet, change default and common credentials, rotate SSL-VPN user credentials, implement multi-factor authentication for administrative and VPN access, and audit for unauthorized administrative accounts or connections.

It's also essential to isolate backup servers from general network access, ensure all software programs are up-to-date, and monitor for unintended network exposure.

"As we expect this trend to continue in 2026, organizations should anticipate that AI-augmented threat activity will continue to grow in volume from both skilled and unskilled adversaries," Moses said. "Strong defensive fundamentals remain the most effective countermeasure: patch management for perimeter devices, credential hygiene, network segmentation, and robust detection for post-exploitation indicators."



from The Hacker News https://ift.tt/xbkKhuP
via IFTTT

EC-Council Expands AI Certification Portfolio to Strengthen U.S. AI Workforce Readiness and Security

With $5.5 trillion in global AI risk exposure and 700,000 U.S. workers needing reskilling, four new AI certifications and Certified CISO v4 help close the gap between AI adoption and workforce readiness.

EC-Council, creator of the world-renowned Certified Ethical Hacker (CEH) credential and a global leader in applied cybersecurity education, today launched its Enterprise AI Credential Suite, with four new role-based AI certifications debuting alongside Certified CISO v4, an overhauled executive cyber leadership program. The dual launch is the largest single expansion of EC-Council’s portfolio in its 25-year history. It addresses a structural gap that no single tool, platform, or policy can solve alone: AI is scaling faster than the workforce trained to run, secure, and govern it.

The launch aligns with U.S. priorities on workforce development and applied AI education outlined in Executive Order 14179, the July 2025 AI Action Plan’s workforce development pillar, and Executive Orders 14277 and 14278, which emphasize expanding AI education pathways and building job-relevant skills across professional and skilled-trade roles, at a time when organizations are moving AI from pilot projects into everyday operations and decision-making.

That urgency is visible in both economic exposure and workforce capacity. IDC estimates that unmanaged AI risk could reach $5.5 trillion globally, while Bain & Company projects a 700,000-person AI and cybersecurity reskilling gap in the United States. The International Monetary Fund (IMF) and the World Economic Forum (WEF) have also pointed to workforce readiness, rather than access to technology, as a primary constraint on AI-driven productivity and growth, especially as adoption accelerates across sectors.

Security pressure is rising in parallel with adoption. Eighty-seven percent of organizations report AI-driven attacks, and generative AI traffic has surged by 890 percent, expanding attack surfaces that many teams are still learning how to defend. Meanwhile, AI capability remains concentrated: 67 percent of AI talent is located in just 15 U.S. cities, and women represent only 28 percent of the AI workforce, highlighting persistent access and participation gaps as demand increases.

“AI is moving from experimentation to infrastructure, and the workforce has to move with it,” said Jay Bavisi, Group President, EC-Council. “These programs are built to give professionals practical capability across adoption, security, and governance, so organizations can scale AI with confidence and clear accountability.”

Role-Aligned Certifications

The Enterprise AI Credential Suite is structured to mirror how AI capability is developed in practice. Artificial Intelligence Essentials (AIE) serves as the baseline, building practical AI fluency and responsible usage across roles, and it is supported by EC-Council’s proprietary Adopt. Defend. Govern. (ADG) framework, which defines how AI should be operationalized at scale in real environments.

Adopt: Prepare teams to deploy AI deliberately, with readiness and safeguards

Defend: Secure AI systems against threats such as prompt injection, data poisoning, model exploitation, and AI supply-chain compromise

Govern: Embed accountability, oversight, and risk management into AI systems from the outset

Within this structure, the four new certifications align directly to specific workforce needs across the AI lifecycle.

  • Artificial Intelligence Essentials (AIE) builds foundational AI literacy.
  • Certified AI Program Manager (CAIPM) equips professionals to translate AI strategy into execution, aligning teams, governance, and delivery to drive measurable ROI and enterprise-scale intelligence.
  • Certified Offensive AI Security Professional (COASP) builds elite capabilities to test vulnerabilities in LLMs, simulate exploits, and secure AI infrastructure, hardening enterprises against emerging threats.
  • The Certified Responsible AI Governance & Ethics (CRAGE) credential focuses on responsible AI, governance, and ethics at enterprise scale, with alignment to NIST and ISO standards.

Alongside the new AI certifications, Certified CISO v4 updates executive cyber leadership education for AI-driven risk environments, strengthening leadership readiness as intelligent systems become part of core business operations and security decision-making.

“Security leaders are now accountable for systems that learn, adapt, and influence outcomes at speed,” Bavisi added. “Certified CISO v4 prepares leaders to manage AI-driven risk with clarity, strengthen governance, and make informed decisions when responsibility is on the line.”

The portfolio also builds on EC-Council’s long-standing work with government and defense organizations, including its existing DoD 8140 baseline certification recognition, as AI security and workforce readiness take on greater national importance.

To explore the full range of training and certification opportunities, visit the EC-Council AI Courses library.

About EC-Council:

EC-Council is the creator of the Certified Ethical Hacker (CEH) program and a leader in cybersecurity education. Founded in 2001, EC-Council’s mission is to provide high-quality training and certifications for cybersecurity professionals to keep organizations safe from cyber threats. EC-Council offers over 200 certifications and degrees in various cybersecurity domains, including forensics, security analysis, threat intelligence, and information security.

An ISO/IEC 17024 accredited organization, EC-Council has certified over 350,000 professionals worldwide, with clients ranging from government agencies to Fortune 100 companies. EC-Council is the gold standard in cybersecurity certification, trusted by the U.S. Department of Defense, the Army, Navy, Air Force, and leading global corporations.

For more information, visit: www.eccouncil.org

This article is a contributed piece from one of our valued partners.



from The Hacker News https://ift.tt/CaRzV5g
via IFTTT

Friday, February 20, 2026

State of Agentic AI Report: Key Findings

Based on Docker’s State of Agentic AI report, a global survey of more than 800 developers, platform engineers, and technology decision-makers, this blog summarizes key findings of what’s really happening as agentic AI scales within organizations. Drawing on insights from decision-makers and purchase influencers worldwide, we’ll give you a preview on not only where teams are seeing early wins but also what’s still missing to move from experimentation to enterprise-grade adoption.

Rapid adoption, early maturity

60% of organizations already have AI agents in production, and 94% view building agents as a strategic priority, but most deployments remain internal and focused on productivity and operational efficiency.

Security and complexity are the top barriers

40% of respondents cite security as the #1 challenge in scaling agentic AI, with 45% struggling to ensure tools are secure and enterprise-ready. Technical complexity compounds the challenge. One in three organizations (33%) report orchestration difficulties as multi-model and multi-cloud environments proliferate (79% of organizations run agents across two or more environments).

MCP shows promise but isn’t enterprise-ready

85% of teams are familiar with the Model Context Protocol (MCP), yet most report significant security, configuration, and manageability issues that prevent production-scale deployment.

Want the full picture? Download the latest State of Agentic AI report to explore deeper insights and practical recommendations for scaling agentic AI in your organization.

Fear of vendor lock-in is real

Enterprises worry about dependencies in core agent and agentic infrastructure layers such as model hosting, LLM providers, and even cloud platforms. Seventy-six percent of global respondents report active concerns about vendor lock-in, rising to 88% in France, 83% in Japan, and 82% in the UK.

Containerization remains foundational

94% use containers for agent development or production, and 98% follow the same cloud-native workflows as traditional software, establishing containers as the proven substrate for agentic AI infrastructure.

Long-term outlook

Rather than a “year of the agents,” the data points to a decade-long transformation. Organizations are laying the governance and trust foundations now for scalable, enterprise-grade agent ecosystems.


The path forward

The path forward doesn’t require reinvention so much as consolidation around a trust layer: access to trusted content and components that can be safely discovered and reused; secure-by-default runtimes; standardized orchestration and policy; and portable, auditable packaging. Agentic AI’s near-term value is already real in internal workflows; unlocking the next wave depends on standardizing how we secure, orchestrate, and ship agents. Teams that invest now in this trust layer, on top of the container foundations they already know, will be first to scale agents from local productivity to durable, enterprise-wide outcomes.

Download the full Agentic AI report for more insights and recommendations on how to scale agents for enterprise.  




from Docker https://ift.tt/JVmlhNx
via IFTTT

BeyondTrust Flaw Used for Web Shells, Backdoors, and Data Exfiltration

Threat actors have been observed exploiting a recently disclosed critical security flaw impacting BeyondTrust Remote Support (RS) and Privileged Remote Access (PRA) products to conduct a wide range of malicious actions, including deploying VShell and Spark RAT.

The vulnerability, tracked as CVE-2026-1731 (CVSS score: 9.9), allows attackers to execute operating system commands in the context of the site user.

In a report published Thursday, Palo Alto Networks Unit 42 said it detected the security flaw being actively exploited in the wild for network reconnaissance, web shell deployment, command-and-control (C2), backdoor and remote management tool installs, lateral movement, and data theft.

The campaign has targeted financial services, legal services, high technology, higher education, wholesale and retail, and healthcare sectors across the U.S., France, Germany, Australia, and Canada.

The cybersecurity company described the vulnerability as a sanitization failure in the affected "thin-scc-wrapper" script, which is reachable via a WebSocket interface and can be abused to inject and execute arbitrary shell commands in the context of the site user.

"While this account is distinct from the root user, compromising it effectively grants the attacker control over the appliance's configuration, managed sessions and network traffic," security researcher Justin Moore said.

The current scope of attacks exploiting the flaw ranges from reconnaissance to backdoor deployment -

  • Using a custom Python script to gain access to an administrative account.
  • Installing multiple web shells across directories, including a PHP backdoor capable of executing arbitrary PHP code without writing new files to disk, as well as a bash dropper that establishes a persistent web shell.
  • Deploying malware such as VShell and Spark RAT.
  • Using out-of-band application security testing (OAST) techniques to validate successful code execution and fingerprint compromised systems.
  • Executing commands to stage, compress and exfiltrate sensitive data, including configuration files, internal system databases and a full PostgreSQL dump, to an external server.

"The relationship between CVE-2026-1731 and CVE-2024-12356 highlights a localized, recurring challenge with input validation within distinct execution pathways," Unit 42 said.

"CVE-2024-12356's insufficient validation was using third-party software (postgres), while CVE-2026-1731's insufficient validation problem occurred in the BeyondTrust Remote Support (RS) and older versions of the BeyondTrust Privileged Remote Access (PRA) codebase."

With CVE-2024-12356 exploited by China-nexus threat actors like Silk Typhoon, the cybersecurity company noted that CVE-2026-1731 could also be a target for sophisticated threat actors.

The development comes as the U.S. Cybersecurity and Infrastructure Security Agency (CISA) updated its Known Exploited Vulnerabilities (KEV) catalog entry for CVE-2026-1731 to confirm that the bug has been exploited in ransomware campaigns.



from The Hacker News https://ift.tt/JPkxWXy
via IFTTT

Cline CLI 2.3.0 Supply Chain Attack Installed OpenClaw on Developer Systems

In yet another software supply chain attack, the open-source, artificial intelligence (AI)-powered coding assistant Cline CLI was updated to stealthily install OpenClaw, a self-hosted autonomous AI agent that has become exceedingly popular in the past few months.

"On February 17, 2026, at 3:26 AM PT, an unauthorized party used a compromised npm publish token to publish an update to Cline CLI on the NPM registry: cline@2.3.0," the maintainers of the Cline package said in an advisory. "The published package contains a modified package.json with an added postinstall script: 'postinstall": "npm install -g openclaw@latest.'"

As a result, OpenClaw is installed on the developer's machine when Cline version 2.3.0 is installed. Cline said no additional modifications were introduced to the package and no malicious behavior was observed. However, it noted that the installation of OpenClaw was not authorized or intended.

The supply chain attack affects all users who installed the Cline CLI package published on npm, specifically version 2.3.0, during an approximately eight-hour window between 3:26 a.m. PT and 11:30 a.m. PT on February 17, 2026. The incident does not impact Cline's Visual Studio Code (VS Code) extension and JetBrains plugin.

To mitigate the unauthorized publication, Cline maintainers have released version 2.4.0. Version 2.3.0 has since been deprecated and the compromised token has been revoked. Cline also said the npm publishing mechanism has been updated to support OpenID Connect (OIDC) via GitHub Actions.

In a post on X, the Microsoft Threat Intelligence team said it observed a "small but noticeable uptick" in OpenClaw installations on February 17, 2026, as a result of the supply chain compromise of the Cline CLI package. According to StepSecurity, the compromised Cline package was downloaded roughly 4,000 times during the eight-hour stretch.

Users are advised to update to the latest version, check their environment for any unexpected installation of OpenClaw, and remove it if not required.
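Checking for unexpected lifecycle hooks is straightforward to automate. The following is a minimal, standalone sketch (not an official npm tool) that walks a directory tree and reports every package.json declaring an install-time script, which is exactly the mechanism this attack abused:

```python
import json
from pathlib import Path

# npm lifecycle scripts that run automatically at install time
INSTALL_HOOKS = ("preinstall", "install", "postinstall")

def find_install_hooks(root: str) -> list[tuple[str, str, str]]:
    """Return (package.json path, hook name, command) for every
    install-time script declared under root (e.g. a node_modules tree)."""
    findings = []
    for pkg in Path(root).rglob("package.json"):
        try:
            scripts = json.loads(pkg.read_text()).get("scripts", {})
        except (json.JSONDecodeError, OSError):
            continue
        for hook in INSTALL_HOOKS:
            if hook in scripts:
                findings.append((str(pkg), hook, scripts[hook]))
    return findings
```

Any hit referencing a package you did not expect, such as a global install of openclaw, warrants investigation.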

"Overall impact is considered low, despite high download counts: OpenClaw itself is not malicious, and the installation does not include the installation/start of the Gateway daemon," Endor Labs researcher Henrik Plate said.

"Still, this event emphasizes the need for package maintainers to not only enable trusted publishing, but also disable publication through traditional tokens – and for package users to pay attention to the presence (and sudden absence) of corresponding attestations."

Leveraging Clinejection to Leak Publication Secrets

While it's currently not clear who is behind the breach of the npm package and what their end goals were, it comes after security researcher Adnan Khan discovered that attackers could steal the repository's authentication tokens through prompt injection by taking advantage of the fact that it is configured to automatically triage any incoming issue raised on GitHub.

"When a new issue is opened, the workflow spins up Claude with access to the repository and a broad set of tools to analyze and respond to the issue," Khan explained. "The intent: automate first-response to reduce maintainer burden."

But a misconfiguration in the workflow meant that it gave Claude excessive permissions to achieve arbitrary code execution within the default branch. This aspect, combined with a prompt injection embedded within the GitHub issue title, could be exploited by an attacker with a GitHub account to trick the AI agent into running arbitrary commands and compromise production releases.

This shortcoming, which builds on PromptPwnd, has been codenamed Clinejection. It was introduced in a source code commit made on December 21, 2025. The attack chain is outlined below -

  • Prompt Claude to run arbitrary code in issue triage workflow
  • Evict legitimate cache entries by filling the cache with more than 10GB of junk data, triggering GitHub's Least Recently Used (LRU) cache eviction policy
  • Set poisoned cache entries matching the nightly release workflow's cache keys
  • Wait for the nightly publish to run at around 2 a.m. UTC and trigger on the poisoned cache entry
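The eviction step relies on nothing more exotic than how any bounded least-recently-used cache behaves: once capacity is reached, inserting new entries pushes the oldest ones out. A toy LRU cache in Python (illustrative only; GitHub's actual cache service is of course not this code) shows the principle the attack abuses:

```python
from collections import OrderedDict

class LRUCache:
    """Toy bounded cache: inserting past capacity evicts the least
    recently used entry, the behavior the attack abuses by flooding
    the cache with junk until legitimate entries fall out."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the oldest entry

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)  # mark as recently used
        return self.store[key]
```

Once the legitimate entry is evicted, the attacker is free to re-insert a poisoned payload under the same key, and the next workflow run restores it without complaint.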

"This would allow an attacker to obtain code execution in the nightly workflow and steal the publication secrets," Khan noted. "If a threat actor were to obtain the production publish tokens, the result would be a devastating supply chain attack."

"A malicious update pushed through compromised publication credentials would execute in the context of every developer who has the extension installed and set to update automatically."

In other words, the attack sequence employs GitHub Actions cache poisoning to pivot from the triage workflow to a highly privileged workflow, such as the Publish Nightly Release and Publish NPM Nightly workflows, and steal the nightly publication credentials, which have the same access as those used for production releases.

As it turns out, this is exactly what happened, with the unknown threat actor weaponizing an active npm publish token (referred to as NPM_RELEASE_TOKEN or NPM_TOKEN) to authenticate with the npm registry and publish Cline version 2.3.0.

"We have been talking about AI supply chain security in theoretical terms for too long, and this week it became an operational reality," Chris Hughes, VP of Security Strategy at Zenity, said in a statement shared with The Hacker News. "When a single issue title can influence an automated build pipeline and affect a published release, the risk is no longer theoretical. The industry needs to start recognizing AI agents as privileged actors that require governance."



from The Hacker News https://ift.tt/nQ7s1dY
via IFTTT

ClickFix Campaign Abuses Compromised Sites to Deploy MIMICRAT RAT

Cybersecurity researchers have disclosed details of a new ClickFix campaign that abuses compromised legitimate sites to deliver a previously undocumented remote access trojan (RAT) called MIMICRAT (aka AstarionRAT).

"The campaign demonstrates a high level of operational sophistication: compromised sites spanning multiple industries and geographies serve as delivery infrastructure, a multi-stage PowerShell chain performs ETW and AMSI bypass before dropping a Lua-scripted shellcode loader, and the final implant communicates over HTTPS on port 443 using HTTP profiles that resemble legitimate web analytics traffic," Elastic Security Labs said in a Friday report.

According to the enterprise search and cybersecurity company, MIMICRAT is a custom C++ RAT with support for Windows token impersonation, SOCKS5 tunneling, and a set of 22 commands for comprehensive post-exploitation capabilities. The campaign was discovered earlier this month.

It's also assessed to share tactical and infrastructural overlaps with another ClickFix campaign documented by Huntress that leads to the deployment of the Matanbuchus 3.0 loader, which then serves as a conduit for the same RAT. The end goal of the attack is suspected to be ransomware deployment or data exfiltration.

In the infection sequence highlighted by Elastic, the entry point is bincheck[.]io, a legitimate Bank Identification Number (BIN) validation service that was breached to inject malicious JavaScript code that's responsible for loading an externally hosted PHP script. The PHP script then proceeds to deliver the ClickFix lure by displaying a fake Cloudflare verification page and instructing the victim to copy and paste a command into the Windows Run dialog to address the issue.

This, in turn, leads to the execution of a PowerShell command, which then contacts a command-and-control (C2) server to fetch a second-stage PowerShell script that patches Event Tracing for Windows (ETW) and the Antimalware Scan Interface (AMSI) before dropping a Lua-based loader. In the final stage, the Lua script decrypts shellcode and executes it in memory, delivering MIMICRAT.

The Trojan communicates with its C2 server over HTTPS, through which it accepts its 22 commands for process and file system control, interactive shell access, token manipulation, shellcode injection, and SOCKS proxy tunneling.

"The campaign supports 17 languages, with the lure content dynamically localized based on the victim's browser language settings to broaden its effective reach," security researcher Salim Bitam said. "Identified victims span multiple geographies, including a USA-based university and multiple Chinese-speaking users documented in public forum discussions, suggesting broad opportunistic targeting."



from The Hacker News https://ift.tt/02TtW9z
via IFTTT

Ukrainian National Sentenced to 5 Years in North Korea IT Worker Fraud Case

A 29-year-old Ukrainian national has been sentenced to five years in prison in the U.S. for his role in facilitating North Korea's fraudulent information technology (IT) worker scheme.

In November 2025, Oleksandr "Alexander" Didenko pleaded guilty to wire fraud conspiracy and aggravated identity theft for stealing the identities of U.S. citizens and selling them to IT workers to help them land jobs at 40 U.S. companies and draw regular salaries, which were then funneled back to the regime to support its weapons programs. He was apprehended by Polish authorities in late 2024, and later extradited to the U.S.

Didenko has also been ordered to serve 12 months of supervised release and to pay $46,547.28 in restitution. Last year, Didenko also agreed to forfeit more than $1.4 million, which includes about $181,438 in U.S. dollars and cryptocurrency seized from him and his co-conspirators.

The defendant is said to have run a website named Upworksell[.]com since the start of 2021 to help overseas IT workers buy or rent stolen or borrowed identities. The IT workers abused these identities to apply for jobs on freelance work platforms based in California and Pennsylvania. The site was seized by authorities on May 16, 2024.

In addition, Didenko paid individuals in the U.S. to receive and host laptops at their residences in Virginia, Tennessee and California. The idea was to give the impression that the workers were located in the country, when, in reality, they were connecting remotely from countries like China, where they were dispatched to.

As part of the criminal scheme, Didenko managed as many as 871 proxy identities and facilitated the operation of at least three U.S.-based laptop farms. One of the computers was sent to a laptop farm run by Christina Marie Chapman in Arizona. Chapman was arrested in May 2024 and sentenced to 102 months in prison in July 2025 for participating in the scheme.

Furthermore, he enabled his North Korean clients to access the U.S. financial system through Money Service Transmitters instead of having to open an account at a bank within the U.S. These money transfer services were used to move employment income to foreign bank accounts. Officials said Didenko's clients were paid hundreds of thousands of dollars for their work.

"Defendant Didenko's scheme funneled money from Americans and U.S. businesses, into the coffers of North Korea, a hostile regime," said U.S. Attorney Jeanine Ferris Pirro. "Today, North Korea is not only a threat to the homeland from afar, it is an enemy within."

"By using stolen and fraudulent identities, North Korean actors are infiltrating American companies, stealing information, licensing, and data that is harmful to any business. But more than that, money paid to these so-called employees goes directly to munitions programs in North Korea."

Despite continued law enforcement actions, the Hermit Kingdom's conspiracy shows no signs of stopping. If anything, the operation has continued to evolve with new tactics and techniques to evade detection.

According to a report from threat intelligence firm Security Alliance (SEAL) last week, the IT workers have begun to apply for remote positions using real LinkedIn accounts of individuals they're impersonating in an effort to make their fraudulent applications look authentic.



from The Hacker News https://ift.tt/8QnOKWd
via IFTTT

FBI Reports 1,900 ATM Jackpotting Incidents Since 2020, $20M Lost in 2025

The U.S. Federal Bureau of Investigation (FBI) has warned of an increase in ATM jackpotting incidents across the country, leading to losses of more than $20 million in 2025.

The agency said 1,900 ATM jackpotting incidents have been reported since 2020, of which 700 took place last year. In December 2025, the U.S. Department of Justice (DoJ) said about $40.73 million has been collectively lost to jackpotting attacks since 2021.

"Threat actors exploit physical and software vulnerabilities in ATMs and deploy malware to dispense cash without a legitimate transaction," the FBI said in a Thursday bulletin.

The jackpotting attacks involve the use of specialized malware, such as Ploutus, to infect ATMs and force them to dispense cash. In most cases, cybercriminals have been observed gaining unauthorized access to the machines by opening an ATM face with widely available generic keys.


The malware is deployed in at least one of two ways: the attackers remove the ATM's hard drive, connect it to their own computer, copy the malware onto it, reattach it, and reboot the ATM; or they replace the drive entirely with a foreign hard drive preloaded with the malware and reboot the machine.

Regardless of the method used, the end result is the same. The malware is designed to interact directly with the ATM hardware, thereby getting around any security controls present in the original ATM software.

Because the malware does not require a connection to an actual bank card or customer account to dispense cash, it can be used against ATMs of different manufacturers with little to no code changes, as the underlying Windows operating system is exploited during the attack.

Ploutus was first observed in Mexico in 2013. Once installed, it grants threat actors complete control over an ATM, enabling them to trigger cash-outs that the FBI said can occur in minutes and are harder to detect until after the money is withdrawn.

"Ploutus malware exploits the eXtensions for Financial Services (XFS), the layer of software that instructs an ATM what to physically do," the FBI explained.


"When a legitimate transaction occurs, the ATM application sends instructions through XFS for bank authorization. If a threat actor can issue their own commands to XFS, they can bypass bank authorization entirely and instruct the ATM to dispense cash on demand."

The agency has outlined a long list of recommendations that organizations can adopt to mitigate jackpotting risks. This includes tightening physical security by installing threat sensors, setting up security cameras, and changing standard locks on ATM devices.

Other measures involve auditing ATM devices, changing default credentials, configuring an automatic shutdown mode once indicators of compromise are detected, enforcing device allowlisting to prevent connection of unauthorized devices, and maintaining logs.




from The Hacker News https://ift.tt/uaRLPy0
via IFTTT

Thursday, February 19, 2026

What will knowledge work be in 18 months? Look at what AI is doing to coding right now.

There’s a lot of buzz about something big happening in software engineering thanks to the latest batch of AI models. However, most knowledge workers think this is just a “coding thing” which doesn’t apply to them. They’re wrong.

Dan Shapiro, Glowforge CEO and Wharton Research Fellow, recently published a five-level framework which maps the level of AI assistance for coding from simple searches all the way to a “dark factory,” where AI is essentially just a black box that turns specs into software.

I want to walk through those five levels, because I think this pattern also applies to knowledge work, and we knowledge workers are not far behind coders in this regard.

Shapiro’s five levels of AI use in coding

(I’ve condensed but mostly used his words here)

  • Level 0: AI is spicy autocomplete. You’re doing manual coding and not a character hits the disk without your approval. You might use AI as a super search engine, or occasionally accept a suggestion, but the code is unmistakably yours.
  • Level 1: AI is a coding intern. You offload discrete tasks to AI. “Write a unit test.” “Add a docstring.” You’re seeing speedups, but you’re still moving at the rate you type.
  • Level 2: AI is a junior developer. You’re a “pair programmer” with AI and now have a junior buddy to hand off all your boring stuff to. You code in a flow state and are more productive than you’ve ever been. Shapiro says 90% of “AI-native” developers are here, and the danger is that at Level 2, and every level after it, the coder feels like they’ve maxed out and they’re done. But they’re not.
  • Level 3: AI is a developer. You’re not the developer anymore. (That’s your AI’s job.) You’re the manager. You’re the human in the loop. Your coding agent is always running in multiple tabs, and you spend your days reviewing code and changes. For many people, this feels like things got worse. Almost everyone tops out here.
  • Level 4: AI is an engineering team. Now you’re not even a developer manager, you’re a product manager. You write specs, argue with the AI about specs, craft skills, plan schedules, then leave for 12 hours and check if the tests pass. (Shapiro says he’s here.)
  • Level 5: AI is a dark software factory. You’re the engineering manager who sets the goals of the system in plain English. The AI defines implementation, writes code, tests, fixes bugs, and ships. It’s not really a software process anymore. It’s a black box that turns specs into software.

AI + coding future thinking also applies to AI + knowledge work

Nate B. Jones covers the AI-and-software-engineering beat better than almost anyone. His YouTube videos are “required watching” for me. I realized recently that everything he says about how AI is impacting software engineering also applies to AI impacting knowledge workers. For example, here are some quotes of his from recent videos about coding that apply verbatim to knowledge workers using AI:

“The bottleneck has shifted. You are now the manager of however many agents you can keep track of productively. Your productive capacity is limited now only by your attention span and your ability to scope tasks well.”

“These are supervision problems, not capability problems. And the solution isn’t to do the work yourself. It’s to get better at your management skills.”

If we do some simple Mad Libs style find-and-replace, Nate’s also a pretty good “future of work” strategist! Just swap out:

  • code → deliverables / work product / output
  • engineer → knowledge worker
  • technical leader → business leader
  • implementation → producing deliverables
  • system → outcome
  • tests → success criteria
  • codebase → work stream
  • syntax → formatting
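The swap table above really is just find-and-replace. A toy sketch in Python (the mapping comes from the list above; the `translate` helper and the longest-match-first ordering are my own illustrative choices):

```python
# Apply a subset of the code-to-knowledge-work swap table to a sentence.
# Longer keys come first so "codebase" is matched before "code";
# this naive approach would still mangle e.g. "engineering" -> toy only.
SWAPS = {
    "codebase": "work stream",
    "code": "deliverables",
    "engineer": "knowledge worker",
    "technical leader": "business leader",
    "implementation": "producing deliverables",
    "tests": "success criteria",
    "syntax": "formatting",
}

def translate(sentence: str) -> str:
    """Mad Libs translator: swap coding terms for knowledge-work terms."""
    for old, new in SWAPS.items():  # dicts preserve insertion order (3.7+)
        sentence = sentence.replace(old, new)
    return sentence

print(translate("It's less time writing code."))
# It's less time writing deliverables.
```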

Let’s try that on some more quotes from his recent videos:

“It’s less time writing code → deliverables. It’s much more time defining what you want. It’s much more time evaluating whether you got there.”

“Most engineers → knowledge workers have spent years developing their intuitions around implementation → producing deliverables and those are now not super useful. The new skill is describing the system → outcome precisely enough that AI can build it, and then writing tests → success criteria that capture what you actually need, and reviewing AI-generated code → output for subtle conceptual errors rather than simple syntax → formatting mistakes.”

“If you’re not thinking through what you want done, the speed can lead you to very quickly build a giant pile of code → work product that’s not very useful. That is a superpower that everyone has been handed for better or worse and we are about to see who is actually able to think well.”

“We need to think as technical → business leaders about where engineers → knowledge workers should stand in relation to the code → AI-generated output based on the risk profile of that codebase → work stream itself.”

That last one illustrates the power of this perfectly. That concept applies to software engineering, but I never would have thought about it in the context of knowledge work. Yet it 100% applies there as well. Which strategic deliverables require human review and which can you trust to the Dark Knowledge Factory?

The five levels of AI use in knowledge work

Now let’s take Shapiro’s five levels of AI use in coding and translate them to knowledge work. (Some of these loosely map to my own 7-stage roadmap for human-AI collaboration in the workplace from six months ago, though Shapiro’s levels address the relationship between humans and AI, whereas I focused on the mechanics of the collaboration.)

Putting Shapiro’s coding levels through our Mad Libs code-to-knowledge work translator:

  • Level 0: AI is a spicy search engine. You’re doing the knowledge work and not a word hits the page without your approval. You might use AI as a super search engine, or occasionally accept a suggested sentence, but the deliverable is unmistakably yours. This is most enterprise knowledge workers today.
  • Level 1: AI is a research intern. You offload discrete tasks to AI. “Summarize this document.” “Draft a response to this email.” You’re seeing speedups, but you’re still moving at the rate you type. You’re still the one producing the deliverable. This is most people’s experience with Office Copilot.
  • Level 2: AI is a junior analyst. You’re “pair working” with AI and now have a junior buddy to hand off all your boring stuff to. You’re in a flow state and more productive than you’ve ever been. Workers at this level use persistent AI collaboration spaces, like Google NotebookLM, Claude Projects, or Copilot Notebooks. Like their coding counterparts, at Level 2 and every level after it, knowledge workers feel like they’ve maxed out and they’re done once they get here. But they haven’t.
  • Level 3: AI is an analyst. You’re not the one producing work anymore. (That’s your AI’s job.) You’re the manager. You’re the human in the loop. Your AI is always running and you spend your days reviewing and editing everything it generates. Strategy decks, market analyses, competitive intelligence, communications. Your life is tracked changes. For some workers, this feels like things got worse. Almost everyone tops out here. This is where workers using a personal AI knowledge system / “second brain” are. This is where I am.
  • Level 4: AI is a strategy team. Now you’re not even a manager, you’re a director. You don’t write deliverables or even review them line by line. You write specs for deliverables. You define what a good competitive analysis looks like, what the acceptance criteria are, and what scenarios it needs to handle. You craft the prompts, system instructions, and the evaluation rubrics. Then you walk away and check if the output passes your scenarios.
  • Level 5: AI is a dark knowledge factory. You are the executive who sets the goals of the organization in plain English. The AI defines the approach, produces deliverables, evaluates quality, iterates, and ships. It’s not really a work process anymore. It’s a black box that turns business intent into business outcomes. A handful of people run what used to be an entire analyst function. The verification framework is the intellectual property, not the reports themselves.

But how do you know the AI’s work is any good?

I feel like I can follow the analogy through Level 3, but Levels 4 and 5 seem weird to me, and it’s hard to see exactly how they would apply to knowledge work. (Heh, funny: I’m personally at Level 3, and as Shapiro wrote, people past Level 2 think that whatever level they’re at is the top.)

The hardest question at Levels 4 and 5 is the same whether you’re writing code or strategy memos: how do you verify the output without a human reviewing every piece?

In code, the answer turned out to be end-to-end behavioral tests stored separately from the codebase (so the AI can’t cheat). For knowledge work, I think it maps to something like:

  • You define what “good” looks like (for a strategy recommendation, a presentation, etc.) and you deliberately keep those separate from the AI so it can’t game the criteria. These need to be real and deep things, like “Does this account for the competitor’s likely response? Does this identify second-order effects? Would the CFO approve this?”
  • Then once the main AI generates the content, a different AI uses your verification docs and is prompted to be a skeptical board member, a hostile competitor, or a regulatory lawyer and tries to find flaws. So this way, the verification loop isn’t human, it’s AI verifying AI against criteria that humans defined.

(There will be a lot of interesting work done here in the next year!)
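One way to picture that AI-verifying-AI loop is a short sketch. Everything here is a hypothetical stand-in, not a real API: `generate_draft` represents the main AI, `critique` represents a second AI prompted as a skeptical board member, and the criteria live outside both so neither can game them:

```python
# Hypothetical sketch of an AI-verifies-AI loop: a generator produces a
# draft, a critic attacks it against human-defined criteria, and the
# draft only ships when no flaws remain. Both model calls are stubbed.

CRITERIA = [
    "Does this account for the competitor's likely response?",
    "Does this identify second-order effects?",
    "Would the CFO approve this?",
]

def generate_draft(spec: str, feedback: list[str]) -> str:
    # Stand-in for a call to the "main" AI, incorporating prior feedback.
    return f"Draft for: {spec} (revisions: {len(feedback)})"

def critique(draft: str, criteria: list[str]) -> list[str]:
    # Stand-in for the adversarial reviewer AI. For illustration,
    # pretend the first revision resolves every flaw.
    return criteria if "revisions: 0" in draft else []

def produce(spec: str, max_rounds: int = 3) -> str:
    feedback: list[str] = []
    for _ in range(max_rounds):
        draft = generate_draft(spec, feedback)
        flaws = critique(draft, CRITERIA)
        if not flaws:
            return draft      # passed every criterion
        feedback = flaws      # feed flaws into the next revision
    raise RuntimeError("draft never passed verification")

print(produce("Q3 competitive analysis"))
```

The key design point is the one from the bullets above: the humans own `CRITERIA`, and the loop terminates only when the critic finds nothing.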

AI’s true impact to knowledge work is just beginning

As I wrote in the opening, most talk about AI’s impact today is focused on software engineering. But coding was the beachhead, not the destination. Software was first because code has built-in verification layers, specific syntax, and billions of pages on the internet about how to write good code.

Knowledge work is next, but the timeline will be more compressed. (We have better AI now and lots of lessons from the software world.) If frontier coding teams are at Level 4-5 today while frontier knowledge workers are only at Level 1-2, a pretty good way to know what knowledge work looks like in 18 months is to look at what coders are doing right now.

As we progress towards this future, remember that the bottleneck keeps moving. At Level 1 it’s “how fast can you produce work?” At Level 4 it’s “how precisely can you specify what should exist?” By Level 5 it’s “how rigorously can you verify that it’s good?” Level 5 of knowledge work will introduce a governance problem that nobody has a playbook for yet. Who owns the specs? Who defines the verifications? Who’s making sure the Dark Knowledge Factory isn’t producing hallucinated strategy recommendations that look right but fall apart under scrutiny?

Most enterprises don’t have the governance infrastructure for any of this, and it’s coming whether they’re ready or not.


Read more & connect

Join the conversation and discuss this post on LinkedIn. You can find all my posts on my author page (or via RSS).



from Citrix Blogs https://ift.tt/yGRPxQp
via IFTTT

ThreatsDay Bulletin: OpenSSL RCE, Foxit 0-Days, Copilot Leak, AI Password Flaws & 20+ Stories

  1. Privacy model hardening

    Google announced the first beta version of Android 17, with two privacy and security enhancements: the deprecation of Cleartext Traffic Attribute and support for HPKE Hybrid Cryptography to enable secure communication using a combination of public key and symmetric encryption (AEAD). "If your app targets (Android 17) or higher and relies on usesCleartextTraffic='true' without a corresponding Network Security Configuration, it will default to disallowing cleartext traffic," Google said. "You are encouraged to migrate to Network Security Configuration files for granular control."

  2. RaaS expands cross-platform reach

    A new analysis of the LockBit 5.0 ransomware has revealed that the Windows version packs in various defense evasion and anti-analysis techniques, including packing, DLL unhooking, process hollowing, patching Event Tracing for Windows (ETW) functions, and log clearing. "What's notable among the multiple systems support is its proclaimed capability to 'work on all versions of Proxmox,'" Acronis said. "Proxmox is an open-source virtualization platform and is being adopted by enterprises as an alternative to commercial hypervisors, which makes it another prime target of ransomware attacks." The latest version also introduces dedicated builds tailored for enterprise environments, highlighting the continued evolution of ransomware-as-a-service (RaaS) operations.

  3. Mac users lured via nested obfuscation

    Cybersecurity researchers have detailed a new evolution of the ClickFix social engineering tactic targeting macOS users. "Dubbed Matryoshka due to its nested obfuscation layers, this variant uses a fake installation/fix flow to trick victims into executing a malicious Terminal command," Intego said. "While the ClickFix tactic is not new, this campaign introduces stronger evasion techniques — including an in-memory, compressed wrapper and API-gated network communications — designed to hinder static analysis and automated sandboxes." The campaign primarily targets users attempting to visit software review sites, leveraging typosquatting in the URL name to redirect them to fake sites and activate the infection chain.

  4. Loader pipeline drives rapid domain takeover

    Another new ClickFix campaign detected in February 2026 has been observed delivering a malware-as-a-service (MaaS) loader known as Matanbuchus 3.0. Huntress, which dissected the attack chain, said the ultimate objective of the intrusion was to deploy ransomware or exfiltrate data based on the fact that the threat actor rapidly progressed from initial access to lateral movement to domain controllers via PsExec, rogue account creation, and Microsoft Defender exclusion staging. The attack also led to the deployment of a custom implant dubbed AstarionRAT that supports 24 commands to facilitate credential theft, SOCKS5 proxy, port scanning, reflective code loading, and shell execution. According to data from the cybersecurity company, ClickFix fueled 53% of all malware loader activity in 2025.

  5. Typosquat chain targets macOS credentials

    In yet another ClickFix campaign, threat actors are relying on the "reliable trick" to host malicious instructions on fake websites disguised as Homebrew ("homabrews[.]org") to trick users into pasting them on the Terminal app under the pretext of installing the macOS package manager. In the attack chain documented by Hunt.io, the commands in the typosquatted Homebrew domain are used to deliver a credential-harvesting loader and a second-stage macOS infostealer dubbed Cuckoo Stealer. "The injected installer looped on password prompts using 'dscl . -authonly,' ensuring the attacker obtained working credentials before deploying the second stage," Hunt.io said. "Cuckoo Stealer is a full-featured macOS infostealer and RAT: It establishes LaunchAgent persistence, removes quarantine attributes, and maintains encrypted HTTPS command-and-control communications. It collects browser credentials, session tokens, macOS Keychain data, Apple Notes, messaging sessions, VPN and FTP configurations, and over 20 cryptocurrency wallet applications." The use of "dscl . -authonly" has been previously observed in attacks deploying Atomic Stealer.

  6. Phobos affiliate detained in Europe

    Authorities from Poland's Central Bureau for Combating Cybercrime (CBZC) have detained a 47-year-old man over suspected ties to the Phobos ransomware group. He faces a potential prison sentence of up to five years. The CBZC said the "47-year-old used encrypted messaging to contact the Phobos criminal group, known for conducting ransomware attacks," adding the suspect's devices contained logins, passwords, credit card numbers, and server IP addresses that could have been used to launch "various attacks, including ransomware." The arrest is part of Europol's Operation Aether, which targets the 8Base ransomware group, believed to be linked to Phobos. It has been almost exactly a year since international law enforcement dismantled the 8Base crew. More than 1,000 organizations around the world have been targeted in Phobos ransomware attacks, and the cybercriminals are believed to have obtained over $16 million in ransom payments.

  7. Industrial ransomware surge accelerates

    There has been a sharp rise in the number of ransomware groups targeting industrial organizations as cybercriminals continue to exploit vulnerabilities in operational technology (OT) and industrial control systems (ICS), Dragos warned. A total of 119 ransomware groups targeting industrial organizations were tracked during 2025, a 49% increase from the 80 tracked in 2024. 2025 saw 3,300 industrial organizations around the world hit by ransomware, compared with 1,693 in 2024. The most targeted sector was manufacturing, followed by transportation. In addition, a hacking group tracked as Pyroxene has been observed conducting "supply chain-leveraged attacks targeting defense, critical infrastructure, and industrial sectors, with operations expanding from the Middle East into North America and Western Europe." It often leverages initial access provided by PARISITE to enable movement from IT into OT networks. Pyroxene overlaps with activity attributed to Imperial Kitten (aka APT35), a threat actor affiliated with the cyber arm of the Islamic Revolutionary Guard Corps (IRGC).

  8. Copilot bypassed DLP safeguards

    Microsoft confirmed a bug (CW1226324) that let Microsoft 365 Copilot summarize confidential emails from Sent Items and Drafts folders since January 21, 2026, without users' permission, bypassing data loss prevention (DLP) policies put in place to safeguard sensitive data. A fix was deployed by the company on February 3, 2026. However, the company did not disclose how many users or organizations were affected. "Users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat," Microsoft said. "The Microsoft 365 Copilot 'work tab' Chat is summarizing email messages even though these email messages have a sensitivity label applied, and a DLP policy is configured. A code issue is allowing items in the sent items and draft folders to be picked up by Copilot even though confidential labels are set in place."

  9. Jira trials weaponized for spam

    Threat actors are abusing the trust and reputation associated with Atlassian Jira Cloud and its connected email system to run automated spam campaigns and bypass traditional email security. To accomplish this, the operators created Atlassian Cloud trial accounts using randomized naming conventions, allowing them to generate disposable Jira Cloud instances at scale. "Emails were tailored to target specific language groups, targeting English, French, German, Italian, Portuguese, and Russian speakers — including highly skilled Russian professionals living abroad," Trend Micro said. "These campaigns not only distributed generic spam, but also specifically targeted sectors such as government and corporate entities." The attacks, active from late December 2025 through late January 2026, primarily targeted organizations using Atlassian Jira. The goal was to get recipients to open the emails and click on malicious links, which would initiate a redirect chain powered by the Keitaro Traffic Distribution System (TDS) and then finally lead them to pages peddling investment scams and online casino landing sites, suggesting that financial gain was likely the main objective.

  10. GitLab SSRF now federally mandated patch

    The U.S. Cybersecurity and Infrastructure Security Agency (CISA), on February 18, 2026, added CVE-2021-22175 to its Known Exploited Vulnerabilities (KEV) catalog, requiring Federal Civilian Executive Branch (FCEB) agencies to apply the patch by March 11, 2026. "GitLab contains a server-side request forgery (SSRF) vulnerability when requests to the internal network for webhooks are enabled," CISA said. In March 2025, GreyNoise revealed that a cluster of about 400 IP addresses was actively exploiting multiple SSRF vulnerabilities, including CVE-2021-22175, to target susceptible instances in the U.S., Germany, Singapore, India, Lithuania, and Japan.

  11. Telegram bots fuel Fortune 500 phishing

    An elusive, financially motivated threat actor dubbed GS7 has been targeting Fortune 500 companies in a new phishing campaign that leverages trusted company branding with lookalike websites aimed at harvesting credentials via Telegram bots. The campaign, codenamed Operation DoppelBrand, targets top financial institutions, including Wells Fargo, USAA, Navy Federal Credit Union, Fidelity Investments, and Citibank, as well as technology, healthcare, and telecommunications firms worldwide. Victims are lured through phishing emails and redirected to counterfeit pages where credentials are harvested and transmitted to Telegram bots controlled by the attacker. According to SOCRadar, however, the group itself has a history stretching back to 2022. The threat actor is said to have registered more than 150 malicious domains in recent months using registrars such as NameCheap and OwnRegistrar, and routing traffic through Cloudflare to evade detection. GS7's end goals include not only harvesting credentials, but also downloading remote management and monitoring (RMM) tools like LogMeIn Resolve on victim systems to enable remote access or the deployment of malware. This has raised the possibility that the group may even act as an initial access broker (IAB), selling the access to ransomware groups or other affiliates.

  12. Remcos shifts to live C2 surveillance

    Phishing emails disguised as invoices, job offers, or government notices are being used to distribute a new variant of Remcos RAT to facilitate comprehensive surveillance and control over infected systems. "The latest Remcos variant has been observed exhibiting a significant change in behaviour compared to previous versions," Point Wild said. "Instead of stealing and storing data locally on the infected system, this variant establishes direct online command-and-control (C2) communication, enabling real-time access and control. In particular, it leverages the webcam to capture live video streams, allowing attackers to monitor targets remotely. This shift from local data exfiltration to live, online surveillance represents an evolution in Remcos’ capabilities, increasing the risk of immediate espionage and persistent monitoring."

  13. China-made vehicles restricted on bases

    Poland's Ministry of Defence has banned Chinese cars, and other motor vehicles equipped with technology to record position, images, or sound, from entering protected military facilities due to national security concerns and to "limit the risk of access to sensitive data." The ban also extends to connecting work phones to infotainment systems in motor vehicles produced in China. The ban isn't permanent: the Defence Ministry has called for the development of a vetting process to allow carmakers to undergo a security assessment that, if passed, can allow their vehicles to enter protected facilities. "Modern vehicles equipped with advanced communication systems and sensors can collect and transmit data, so their presence in protected zones requires appropriate safety regulations," the Polish Army said. "The measures introduced are preventive and comply with the practices of NATO countries and other allies to ensure the highest standards of defense infrastructure protection. They are part of a wider process of adapting security procedures to the changing technological environment and current requirements for the protection of critical infrastructure."

  14. DKIM replay fuels invoice scams

    Bad actors are abusing legitimate invoices and dispute notifications from trusted vendors, such as PayPal, Apple, DocuSign, and Dropbox Sign (formerly HelloSign), to bypass email security controls. "These platforms often allow users to enter a 'seller name' or add a custom note when creating an invoice or notification," Casey-owned INKY said. "Attackers abuse this functionality by inserting scam instructions and a phone number into those user-controlled fields. They then send the resulting invoice or dispute notice to an email address they control, ensuring the malicious content is embedded in a legitimate, vendor-generated message." Because these emails originate from a legitimate company, they bypass checks like Domain-based Message Authentication, Reporting and Conformance (DMARC). As soon as the legitimate email is received, the attacker proceeds to forward it to the intended targets, allowing the "authentic looking" message to land in the victims' inboxes. The attack is known as a DKIM replay attack.

  15. RMM abuse surges 277%

    A new report from Huntress has revealed that the abuse of Remote Monitoring and Management (RMM) software surged 277% year-over-year, accounting for 24% of all observed incidents. Threat actors have begun to increasingly favor these tools because they are ubiquitous in enterprise environments, and the trusted nature of the RMM software allows malicious activity to blend in with legitimate usage, making detection harder for defenders. They also offer increased stealth, persistence, and operational efficiency. "As cybercriminals built entire playbooks around these legitimate, trusted tools to drop malware, steal credentials, and execute commands, the use of traditional hacking tools plummeted by 53%, while remote access trojans and malicious scripts dropped by 20% and 11.7%, respectively," the company said.

  16. Texas targets China-linked tech firms

    Texas Attorney General Ken Paxton has sued TP-Link for "deceptively marketing its networking devices and allowing the Chinese Communist Party ('CCP') to access American consumers' devices in their homes." Paxton's lawsuit alleges that TP-Link's products have been used by Chinese hacking groups to launch cyber attacks against the U.S. and that the company is subject to Chinese data laws, which it said require firms operating in the country to support its intelligence services by "divulging Americans' data." In a second lawsuit, Paxton also accused Anzu Robotics of misleading Texas consumers about the "origin, data practices, and security risks of its drones." Paxton's office described the company's products as a "21st century Trojan horse linked to the CCP."

  17. MetaMask backdoor expands DPRK campaign

    The North Korea-linked campaign known as Contagious Interview is designed to target IT professionals working in cryptocurrency, Web3, and artificial intelligence sectors to steal sensitive data and financial information using malware such as BeaverTail and InvisibleFerret. However, recent iterations of the campaign have expanded their data theft capabilities by tampering with the MetaMask wallet extension (if it's installed) through a lightweight JavaScript backdoor that shares the same functionality as InvisibleFerret, according to security researcher Seongsu Park. "Through the backdoor, attackers instruct the infected system to download and install a fake version of the popular MetaMask cryptocurrency wallet extension, complete with a dynamically generated configuration file that makes it appear legitimate," Park said. "Once installed, the compromised MetaMask extension silently captures the victim's wallet unlock password and transmits it to the attackers’ command-and-control server, giving them complete access to cryptocurrency funds."

  18. Booking.com kits hit hotels, guests

    Bridewell has warned of a resurgence in malicious activity targeting the hotel and retail sector. "The primary motivation driving this incident is financial fraud, targeting two victims: hotel businesses and hotel customers, in sequential order," security researcher Joshua Penny said. "The threat actor(s) utilize impersonation of the Booking.com platform through two distinct phishing kits dedicated to harvesting credentials and banking information from each victim, respectively." It's worth noting that the activity shares overlap with a prior activity wave disclosed by Sekoia in November 2025, although the use of a dedicated phishing kit is a new approach by either the same or new operators.

  19. EPMM exploits enable persistent access

    The recently disclosed security flaws in Ivanti Endpoint Manager Mobile (EPMM) have been exploited by bad actors to establish a reverse shell, deliver JSP web shells, conduct reconnaissance, and download malware, including Nezha, cryptocurrency miners, and backdoors for remote access. The two critical vulnerabilities, CVE-2026-1281 and CVE-2026-1340, allow unauthenticated attackers to remotely execute arbitrary code on target servers, granting them full control over mobile device management (MDM) infrastructure without requiring user interaction or credentials. According to Palo Alto Networks Unit 42, the campaign has affected state and local government, healthcare, manufacturing, professional and legal services, and high technology sectors in the U.S., Germany, Australia, and Canada. "Threat actors are accelerating operations, moving from initial reconnaissance to deploying dormant backdoors designed to maintain long-term access even after organizations apply patches," the cybersecurity company said. In a related development, Germany's Federal Office for Information Security (BSI) has reported evidence of exploitation since the summer of 2025 and has urged organizations to audit their systems for indicators of compromise (IoCs) as far back as July 2025.

  20. AI passwords lack true randomness

    New research by Irregular has found that passwords generated directly by a large language model (LLM) may appear strong but are fundamentally insecure, as "LLMs are designed to predict tokens – the opposite of securely and uniformly sampling random characters." The artificial intelligence (AI) security company said it has detected LLM-generated passwords in real-world code development tasks, used in place of traditional secure password generation methods. "People and coding agents should not rely on LLMs to generate passwords," the company said. "LLMs are optimized to produce predictable, plausible outputs, which is incompatible with secure password generation. AI coding agents should be directed to use secure password generation methods instead of relying on LLM-output passwords. Developers using AI coding assistants should review generated code for hardcoded credentials and ensure agents use cryptographically secure methods or established password managers."
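    As a point of contrast, a minimal sketch of the kind of approach the researchers recommend: sampling characters uniformly from a cryptographically secure random source, here with Python's `secrets` module. The function name, default length, and alphabet are illustrative choices, not anything prescribed by the research.

    ```python
    import secrets
    import string

    def generate_password(length: int = 20) -> str:
        """Build a password by uniformly sampling printable characters
        from a CSPRNG, rather than asking an LLM to 'invent' one."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))
    ```

    Unlike LLM output, every character here is drawn independently and uniformly, so the entropy of the result is a simple, auditable function of the alphabet size and length.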

  21. PDF engine flaws enable account takeover

    Cybersecurity researchers have discovered more than a dozen vulnerabilities (including CVE-2025-70401, CVE-2025-70402, and CVE-2025-66500) in popular PDF platforms from Foxit and Apryse, potentially allowing attackers to exploit them for account takeover, session hijacking, data exfiltration, and arbitrary JavaScript execution. "Rather than isolated bugs, the issues cluster around recurring architectural failures in how PDF platforms handle untrusted input across layers," Novee Security researchers Lidor Ben Shitrit, Elad Meged, and Avishai Fradlis said. "Several vulnerabilities were exploitable with a single request and affected trusted domains commonly embedded inside enterprise applications." The issues have been addressed by both Apryse and Foxit through product updates.

  22. Training labs expose cloud backdoors

    A "widespread" security issue has been discovered where security vendors inadvertently expose deliberately vulnerable training applications, such as OWASP Juice Shop, DVWA, bWAPP, and Hackazon, to the public internet. This can open organizations to severe security risks when these applications run under a privileged cloud account. "Primarily deployed for internal testing, product demonstrations, and security training, these applications were frequently left accessible in their default or misconfigured states," Pentera Labs said. "These critical flaws not only allowed attackers full control over the compromised compute engine but also provided pathways for lateral movement into sensitive internal systems. Violations of the principle of least privilege and inadequate sandboxing measures further facilitated privilege escalation, endangering critical infrastructure and sensitive organizational data." Further analysis has determined that threat actors are exploiting this blind spot to plant web shells, cryptocurrency miners, and persistence mechanisms on compromised systems.

  23. Evasion loader refines C2 stealth

    The malware loader known as Oyster (aka Broomstick or CleanUpLoader) has continued to evolve into early 2026, fine-tuning its C2 infrastructure and obfuscation methods, per findings from Sekoia. The malware is distributed mainly through fake websites offering installers for legitimate software like Microsoft Teams, with the core payload often deployed as a DLL for persistent execution. "The initial stage leverages excessive legitimate API call hammering and simple anti-debugging traps to thwart static analysis," the company said. "The core payload is delivered in a highly obfuscated manner. The final stage implements a robust C2 communication protocol that features a dual-layer server infrastructure and highly-customized data encoding."

  24. Stealer taunts researchers in code

    Noodlophile is the name given to an information-stealing malware that has been distributed via fake AI tools promoted on Facebook. Assessed to be the work of a threat actor based in Vietnam, it was first documented by Morphisec in May 2025. Since then, there have been other reports detailing various campaigns, such as UNC6229 and PXA Stealer, orchestrated by Vietnamese cybercriminals. Morphisec's latest analysis of Noodlophile has revealed that the threat actor "padded the malware with millions of repeats of a colorful Vietnamese phrase translating to 'f*** you, Morphisec,'" suggesting that the operators were not thrilled about getting exposed. "Not just to vent frustration over disrupted campaigns, but also to bloat the file and crash AI-based analysis tools that are based on the Python disassemble library – dis.dis(obj)," security researcher Michael Gorelik said.
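    The anti-analysis angle of that padding can be illustrated with a small, hypothetical sketch: disassembly output from Python's `dis` module grows roughly in proportion to the amount of code it processes, so a sample padded with enough throwaway content can overwhelm pipelines built on it. The source strings and padding count below are invented for illustration and are far smaller than the "millions of repeats" in the real sample.

    ```python
    import dis
    import io

    def dis_output_len(source: str) -> int:
        """Compile source and return the size of dis.dis()'s text output."""
        code = compile(source, "<sample>", "exec")
        buf = io.StringIO()
        dis.dis(code, file=buf)
        return len(buf.getvalue())

    # A normal-sized module versus one bloated with thousands of junk
    # assignments, a stand-in for the repeated-phrase padding technique.
    lean_src = 'MSG = "hello"\n'
    padded_src = "".join(f'pad_{i} = "junk"\n' for i in range(2000))
    ```

    Each junk assignment adds its own instructions to the bytecode, so the disassembly text balloons long before the padded module does anything different at runtime.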

  25. Crypto library RCE risk patched

    The OpenSSL project has patched a stack buffer overflow flaw that can lead to remote code execution attacks under certain conditions. The vulnerability, tracked as CVE-2025-15467, resides in how the library processes Cryptographic Message Syntax data. Threat actors can use CMS packets with maliciously crafted AEAD parameters to crash OpenSSL and run malicious code. CVE-2025-15467 is one of 12 issues that were disclosed by AISLE late last month. Another high-severity vulnerability is CVE-2025-11187, which could trigger a stack-based buffer overflow due to a missing validation.

  26. Machine accounts expand delegation risk

    New research from Silverfort has challenged a "common assumption" about Kerberos delegation -- which allows a service to request resources or perform actions on behalf of a user -- by demonstrating that it applies not only to human users, but also to machine accounts. In other words, a service trusted for delegation can impersonate highly privileged machine identities such as domain controllers. "That means a service trusted for delegation can act not just on behalf of other users, but also on behalf of machine accounts, the most critical non-human identities (NHIs) in any domain," Silverfort researcher Dor Segal said. "The risk is obvious. If an adversary can leverage delegation, it can act on behalf of sensitive machine accounts, which in many environments hold privileges equivalent to Domain Administrator." To counter the risk, it's advised to run "Set-ADAccountControl -Identity 'HOST01$' -AccountNotDelegated $true" for each sensitive machine account.



from The Hacker News https://ift.tt/MVv5Ftb