Monday, December 15, 2025

FreePBX Patches Critical SQLi, File-Upload, and AUTHTYPE Bypass Flaws Enabling RCE

Dec 15, 2025 | Ravie Lakshmanan | Vulnerability / Software Security

Multiple security vulnerabilities have been disclosed in the open-source private branch exchange (PBX) platform FreePBX, including a critical flaw that could result in an authentication bypass under certain configurations.

The shortcomings, discovered by Horizon3.ai and reported to the project maintainers on September 15, 2025, are listed below -

  • CVE-2025-61675 (CVSS score: 8.6) - Numerous authenticated SQL injection vulnerabilities impacting four unique endpoints (basestation, model, firmware, and custom extension) and 11 affected parameters that enable read and write access to the underlying SQL database
  • CVE-2025-61678 (CVSS score: 8.6) - An authenticated arbitrary file upload vulnerability that allows an attacker to exploit the firmware upload endpoint to upload a PHP web shell after obtaining a valid PHPSESSID and run arbitrary commands to leak the contents of sensitive files (e.g., "/etc/passwd")
  • CVE-2025-66039 (CVSS score: 9.3) - An authentication bypass vulnerability that occurs when the "Authorization Type" (aka AUTHTYPE) is set to "webserver," allowing an attacker to log in to the Administrator Control Panel via a forged Authorization header

It's worth mentioning here that the authentication bypass is not exploitable in the default configuration of FreePBX, given that the "Authorization Type" option is only displayed when the following three values in the Advanced Settings Details are set to "Yes":

  • Display Friendly Name
  • Display Readonly Settings, and
  • Override Readonly Settings

However, once the prerequisite is met, an attacker could send crafted HTTP requests to sidestep authentication and insert a malicious user into the "ampusers" database table, effectively accomplishing something similar to CVE-2025-57819, another flaw in FreePBX that was disclosed as having been actively exploited in the wild in September 2025.

"These vulnerabilities are easily exploitable and enable authenticated/unauthenticated remote attackers to achieve remote code execution on vulnerable FreePBX instances," Horizon3.ai security researcher Noah King said in a report published last week.

The issues have been addressed in the following versions -

  • CVE-2025-61675 and CVE-2025-61678 - 16.0.92 and 17.0.6 (Fixed on October 14, 2025)
  • CVE-2025-66039 - 16.0.44 and 17.0.23 (Fixed on December 9, 2025)

In addition, the option to choose an authentication provider has been removed from Advanced Settings; users must now set it manually from the command line using fwconsole. As temporary mitigations, FreePBX has recommended that users set "Authorization Type" to "usermanager," set "Override Readonly Settings" to "No," apply the new configuration, and reboot the system to disconnect any rogue sessions.

"If you did find that web server AUTHTYPE was enabled inadvertently, then you should fully analyze your system for signs of any potential compromise," it said.

Users are also shown a warning on the dashboard stating that "webserver" may offer reduced security compared to "usermanager." For optimal protection, it's advised to avoid using this authentication type.

"It's important to note that the underlying vulnerable code is still present and relies on authentication layers in front to provide security and access to the FreePBX instance," King said. "It still requires passing an Authorization header with a Basic base64 encoded username:password."

"Depending on the endpoint, we noticed a valid username was required. In other cases, such as the file upload shared above, a valid username is not required, and you can achieve remote code execution with a few steps, as outlined. It is best practice not to use the authentication type webserver as it appears to be legacy code."
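
For readers who want to picture what that last requirement looks like on the wire, HTTP Basic authentication is simply a base64-encoded "username:password" string carried in the Authorization header. Below is a minimal Python sketch of the header format only; the credentials are placeholders, not values from the Horizon3.ai research, and this does not demonstrate the bypass itself.

```python
import base64

# Placeholder values for illustration only; none of these come from the research.
username = "admin"      # some endpoints reportedly require a valid username
password = "anything"   # under the "webserver" AUTHTYPE, authentication is delegated upstream

# HTTP Basic authentication: the Authorization header carries base64("username:password").
token = base64.b64encode(f"{username}:{password}".encode()).decode()
headers = {"Authorization": f"Basic {token}"}
print(headers)  # {'Authorization': 'Basic YWRtaW46YW55dGhpbmc='}
```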



from The Hacker News https://ift.tt/g6ZzJuw
via IFTTT

A Browser Extension Risk Guide After the ShadyPanda Campaign

In early December 2025, security researchers exposed a cybercrime campaign that had quietly hijacked popular Chrome and Edge browser extensions on a massive scale.

A threat group dubbed ShadyPanda spent seven years playing the long game, publishing or acquiring harmless extensions, letting them run clean for years to build trust and gain millions of installs, then suddenly flipping them into malware via silent updates. In total, about 4.3 million users installed these once-legitimate add-ons, which suddenly went rogue with spyware and backdoor capabilities.

This tactic was essentially a browser extension supply-chain attack.

The ShadyPanda operators even earned featured and verified badges in the official Chrome Web Store and Microsoft Edge Add-ons site for some extensions, reinforcing user confidence. Because extension updates happen automatically in the background, the attackers were able to push out malicious code without users noticing a thing.

Once activated in mid-2024, the compromised extensions became a fully fledged remote code execution (RCE) framework inside the browser. They could download and run arbitrary JavaScript with full access to the browser's data and capabilities. This gave the attackers a range of spyware powers, from monitoring every URL and keystroke, to injecting malicious scripts into web pages, to exfiltrating browsing data and credentials.

One of the worst capabilities was session cookie and token theft, stealing the authentication tokens that websites use to keep users logged in. The extensions could even impersonate entire SaaS accounts (like Microsoft 365 or Google Workspace) by hijacking those session tokens.

Why Browser Extensions Are a SaaS Security Nightmare

For SaaS security teams, ShadyPanda's campaign carries several lessons. It proved that a malicious browser extension can effectively become an intruder with keys to your company's SaaS kingdom. If an extension grabs a user's session cookie or token, it can unlock that user's accounts in Slack, Salesforce, or any other web service they're logged into.

In this case, millions of stolen session tokens could have led to unauthorized access to enterprise emails, files, chat messages, and more, all without triggering the usual security alarms. Traditional identity defenses like MFA were bypassed, because the browser session was already authenticated and the extension was piggybacking on it.

The risk extends beyond just the individual user. Many organizations allow employees to install browser extensions freely, without the scrutiny applied to other software. Browser extensions often slip through without oversight, yet they can access cookies, local storage, cloud auth sessions, active web content, and file downloads.

This blurs the line between endpoint security and cloud security. A malicious extension runs on the user's device (an endpoint issue), but it directly compromises cloud accounts and data (an identity/SaaS issue). ShadyPanda vividly shows the need to bridge endpoint and SaaS identity defense: security teams should think about treating the browser as an extension of the SaaS attack surface.

Steps to Reduce Browser Extension Risk

So based on all of this, what can organizations do to reduce the risk of another ShadyPanda situation? Below is a practical guide with steps to tighten your defenses against malicious browser extensions.

1. Enforce Extension Allow Lists and Governance

Start by regaining control over which extensions can run in your environment. Conduct an audit of all extensions installed across the company's browsers (both corporate-managed and BYOD if possible) and remove any that are unnecessary, unvetted, or high risk.

It's wise to require business justification for extensions that need broad permissions (for example, any add-on that can read all website data). Use enterprise browser management tools to implement an allow list so that only approved extensions can be installed. This policy ensures new or unknown extensions are blocked by default, cutting off the long tail of random installs.
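
To make the allow-list idea concrete: Chromium-based browsers support enterprise policies such as ExtensionInstallBlocklist and ExtensionInstallAllowlist, which can express a default-deny stance. A Python sketch that generates such a policy file is below; the extension IDs are placeholders, and exact policy names, formats, and deployment paths vary by browser, version, and platform, so treat it as a starting point rather than a drop-in configuration.

```python
import json

# Placeholder extension IDs; replace with the IDs your security team has vetted.
APPROVED_EXTENSION_IDS = [
    "aaaabbbbccccddddeeeeffffgggghhhh",  # hypothetical password manager
    "bbbbccccddddeeeeffffgggghhhhiiii",  # hypothetical ticketing helper
]

# Block everything by default, then explicitly allow only the vetted IDs.
policy = {
    "ExtensionInstallBlocklist": ["*"],
    "ExtensionInstallAllowlist": APPROVED_EXTENSION_IDS,
}

# On Linux, Chrome reads managed policy JSON from /etc/opt/chrome/policies/managed/;
# Windows uses GPO/registry and macOS uses configuration profiles instead.
with open("extension_allowlist_policy.json", "w") as fh:
    json.dump(policy, fh, indent=2)

print(json.dumps(policy, indent=2))
```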

Remember that popular extensions aren't automatically safe: ShadyPanda's malware hid in popular, trusted extensions that people had used for years. Treat all extensions as guilty until proven innocent by vetting them through your security team's approval process.

2. Treat Extension Access Like OAuth Access

Shift your mindset to treat browser extensions similarly to third-party cloud apps in terms of the access they grant. In practice, this means integrating extension oversight into your identity and access management processes.

Just as you might keep a catalog of authorized OAuth integrations, do the same for extensions. Map out what SaaS data or actions an extension could touch - for example, if an extension can read all web traffic, it effectively can read your SaaS application data in transit; if it can read cookies, it can impersonate the user on any service.

Because malicious extensions can steal session tokens, your identity security tools should watch for signs of session hijacking: configure alerts for bizarre login patterns, like an OAuth token being used from two different locations, or an access attempt that bypasses MFA checks.
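
One rough way to operationalize the "token used from two different locations" signal is to correlate authentication events per session token and flag geographically inconsistent use within a short window. A minimal sketch, assuming you can already export (token, timestamp, location) events from your identity provider or SaaS audit logs; the event feed and threshold below are illustrative.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical event feed: (session_token, timestamp, coarse_location) tuples.
events = [
    ("tok-123", datetime(2025, 12, 12, 9, 0), "Berlin, DE"),
    ("tok-123", datetime(2025, 12, 12, 9, 7), "Singapore, SG"),
    ("tok-456", datetime(2025, 12, 12, 9, 2), "Austin, US"),
]

WINDOW = timedelta(minutes=30)  # tune to your environment

def flag_suspicious_tokens(events):
    """Flag tokens seen from more than one location within WINDOW."""
    by_token = defaultdict(list)
    for token, ts, location in events:
        by_token[token].append((ts, location))

    alerts = []
    for token, uses in by_token.items():
        uses.sort()
        for (t1, loc1), (t2, loc2) in zip(uses, uses[1:]):
            if loc1 != loc2 and (t2 - t1) <= WINDOW:
                alerts.append((token, loc1, loc2, t2 - t1))
    return alerts

for token, loc1, loc2, gap in flag_suspicious_tokens(events):
    print(f"ALERT: {token} used from {loc1} and {loc2} within {gap}")
```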

The key point is to manage extensions with the same caution as any app that has been granted access to your data. Limit extension permissions where possible, and if an employee leaves the company or changes roles, ensure that high-risk extensions are removed just as you would revoke unneeded app access.

3. Audit Extension Permissions Regularly

Make extension review a recurring part of your security program, similar to quarterly access reviews or app assessments. Every few months, inventory the extensions and their permissions in use across your organization.

Pay attention to what data or browser features each extension can access. For each extension, ask: Do we still need this? Has it requested any new permissions? Has its developer or ownership changed?

Attackers often buy out benign extensions or slip in new maintainers before pushing bad updates. By reviewing the extension publisher and update history, you can spot red flags.

Also, watch for any extension that suddenly asks for broader permissions than before – that's a clue it may have turned malicious.

4. Monitor for Suspicious Extension Behavior

Because browsers usually auto-update extensions silently, a trusted add-on can become malicious overnight with no obvious warning to the user. Security teams should therefore implement monitoring to catch silent compromise.

This can include technical measures and user-awareness cues.

On the technical side, consider logging and analyzing extension activity: for example, monitor browser extension installations, update events, or unusual network calls from extensions (like frequent communication with unknown external domains).

Some organizations inspect browser logs or use endpoint agents to flag if an extension's files change unexpectedly. If possible, you might restrict or stage extension updates - for instance, testing updates on a subset of machines before wide deployment.
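
For teams that want to experiment with the "flag if an extension's files change unexpectedly" idea, a rough baseline-and-diff sketch is shown below. The extension directory is an assumption (it matches Chrome's default profile layout on Linux); in practice an endpoint agent would run this per machine and store baselines centrally.

```python
import hashlib
import json
from pathlib import Path

# Assumed location of installed extensions for Chrome on Linux; adjust per OS and browser.
EXT_ROOT = Path.home() / ".config/google-chrome/Default/Extensions"
BASELINE_FILE = Path("extension_baseline.json")

def snapshot(root: Path) -> dict:
    """Map each file under the extensions directory to a SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

if not EXT_ROOT.is_dir():
    raise SystemExit(f"Extension directory not found: {EXT_ROOT}")

current = snapshot(EXT_ROOT)

if BASELINE_FILE.exists():
    baseline = json.loads(BASELINE_FILE.read_text())
    added = set(current) - set(baseline)
    removed = set(baseline) - set(current)
    changed = {p for p in set(current) & set(baseline) if current[p] != baseline[p]}
    for label, paths in (("ADDED", added), ("REMOVED", removed), ("CHANGED", changed)):
        for p in sorted(paths):
            print(f"{label}: {p}")  # feed these events into your SIEM or ticketing flow
else:
    print("No baseline yet; recording current state.")

BASELINE_FILE.write_text(json.dumps(current, indent=2))
```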

On the user side, educate employees to report if an extension that has been installed for a long time suddenly starts behaving differently (new UI changes, unexpected pop-ups, or performance issues could hint at a malicious update). The goal is to shorten the window between an extension going bad and your team detecting and removing it.

Bridging Endpoint and SaaS Security (How Reco Can Help)

The ShadyPanda incident shows that attackers don't always need zero-day exploits to infiltrate our systems; sometimes, they just need patience, user trust, and an overlooked browser extension. For security teams, it's a lesson that browser extensions are part of your attack surface.

The browser is effectively an endpoint that sits between your users and your SaaS applications, so it's important to bring extension management and monitoring into your overall security strategy. By enforcing allow lists, auditing permissions, monitoring updates, and treating extensions like the powerful third-party apps they are, you can drastically reduce the risk of an extension becoming your weakest link.

Finally, consider how modern SaaS security platforms can support these efforts.

New solutions, such as dynamic SaaS security platforms, are emerging to help organizations get a handle on these kinds of risks. Reco's Dynamic SaaS Security platform is designed to continuously map and monitor SaaS usage (including risky connected apps and extensions) and provide identity-driven threat detection.

With the right platform, you can gain unified visibility into extensions across your environment and detect suspicious activity in real time. Reco can help bridge the gap between endpoint and cloud by correlating browser-side risks with SaaS account behavior, giving security teams a cohesive defense. By taking these proactive steps and leveraging tools like Reco to automate and scale your SaaS security, you can stay one step ahead of the next ShadyPanda.

Request a Demo: Get Started With Reco.

Note: This article is expertly written and contributed by Gal Nakash, Co-founder & CPO of Reco. Gal is a former Lieutenant Colonel in the Israeli Prime Minister's Office. He is a tech enthusiast with a background as a security researcher and hacker. Gal has led teams in multiple cybersecurity areas, with expertise in the human element.




from The Hacker News https://ift.tt/iBs7dvE
via IFTTT

The CIO’s M&A integration dilemma: speed vs. security

A merger or acquisition can be one of the most defining and demanding moments in a CIO’s career. While the CEO and CFO stand in the spotlight of the press release, the CIO is tasked with the complex reality on the ground: seamlessly combining two distinct, complex, and often culturally different IT ecosystems – and often hoping that there are no bad surprises, beyond the ones they already know about. The mandate is clear: drive the integration that unlocks the deal’s promised synergies. Yet, this must be balanced with the critical responsibility of being the guardian of the company’s digital assets, standing vigilant against the security breaches that often loom over M&A projects.

The traditional playbook, which focuses heavily on infrastructure integration, often struggles to keep pace with the demands of modern business. This forces a critical question onto the CIO’s desk: How can we provide immediate, seamless access to vital applications and data to accelerate synergy realization without introducing catastrophic security exposures?

The core conflict: productivity vs. protection

During an acquisition, the CIO is inevitably squeezed by two powerful and often opposing forces that define the integration dilemma.

The productivity mandate

The business expects immediacy. Newly acquired employees need to be productive on Day One. Every day that access to critical systems—such as CRM or ERP platforms—is delayed represents lost value and lost deal synergy. Momentum can stall, new coworkers may feel disconnected and frustrated, and the deal’s expected Return on Investment (ROI) begins to erode. The pressure to deliver quick, uninhibited access is intense.

The protection mandate

On the flip side, integration introduces a security nightmare. The acquiring company often has zero immediate visibility into the acquired company’s endpoints, security policies, or user behaviors. Questions abound: Is their network already compromised? Are their systems fully patched? Have their employees been adequately trained to spot sophisticated phishing attempts? Are there insider threats or disgruntled employees seeking to steal data?

Integrating two corporate networks is more than a technical exercise; it’s an inheritance of risk. An IBM survey highlighted the significant danger, disclosing that “More than one in three executives surveyed said they have experienced data breaches that can be attributed to M&A activity during integration.” Directly merging networks is, in many cases, an open invitation to malware, ransomware, and insider threats. You inherit all the technical debt and potential cyber risk.

Why the traditional network merge approach can fail

The conventional “big bang” approach, centered on complete merging of the two corporate networks, is frequently slow, expensive, and fragile. The problems here include:

  • Lack of agility: This all-or-nothing strategy lacks the necessary speed and flexibility for a modern M&A timeline.
  • High risk of failure: A single misconfiguration can lead to cascading failures, resulting in disruptive outages for both the acquiring and acquired entities.

This approach also presents a false choice, and both outcomes can be very damaging: either insecurely rush the integration to meet business demands, or securely delay it for months or even years, putting the deal value at risk.

This is why many CIOs are moving beyond the infrastructure-centric model to explore strategies that prioritize more granular user and application access over full network consolidation.

A user- and app-centric model

Instead of attempting the complex task of merging two distinct network architectures, a modern strategy suggests creating a secure, single point of access, a virtual front door, through which all users can reach the resources they need, regardless of the physical location of those resources. This is where platforms built on the principles of Zero Trust offer a compelling strategic advantage.

Zero Trust operates on a simple, powerful principle: “Never trust, always verify.” It fundamentally assumes that threats can exist anywhere—both outside and inside the network—effectively eliminating the idea of a traditional, secure network perimeter.

Consider the contrast with a traditional VPN:

Traditional VPN:

  • Acts like an old-fashioned key: Grants entry to the building, allowing a user to freely roam the internal network after initial access.
  • Allows broad network access, which is cited as a critical weakness. A VPN Exposure Report found that “69% of breaches stemmed from third-party VPN access.”

Zero Trust Access:

  • Functions like a smart badge: Doesn’t just grant building entry, but dynamically grants access only to the specific, authorized resources for a specific time.
  • Access is logged, analyzed in real-time, and based on the principle of least privilege.

For M&A, this shift turns a complex, years-long network integration challenge into a more manageable, granular security exercise.

Components of a secure integration strategy

A strategic integration plan leveraging modern access control offers distinct benefits that address both the productivity and security mandates simultaneously.

The secure digital perimeter

Modern access platforms replace the traditional, network-centric VPN with a model that creates virtual fences around applications.

  • Granular access: When an employee from the acquired company needs to use a critical resource, they are granted a secure connection only to that specific application (e.g., the acquired company’s CRM).
  • Lateral movement prevention: The user is never placed “on” the underlying network. This is crucial, as it means their machine cannot scan for other vulnerable servers, nor can an attacker move laterally across the internal network if the endpoint is compromised.
  • Least Privilege: This granular, least-privilege access model is the foundation for significantly de-risking the M&A integration process.

Modern desktop and application virtualization platforms can deliver secure, centralized access to business-critical apps and data. This helps simplify management, improve scalability, and support a consistent user experience. This strategy safeguards sensitive data from leakage on unmanaged devices and allows newly acquired employees to be productive on day one, without the costs and risks traditionally tied to VPNs or the logistics of shipping managed hardware.

Enforcing continuous and adaptive trust

A core tenet of modern security is that verification shouldn’t end once a user is “in the door”—it must be a continuous process throughout the entire session.

  • Proactive threat detection: Systems can utilize machine learning to establish a baseline of normal user behavior and proactively detect anomalies that may signal potential threats.
  • Automated adaptive controls: If a suspicious event occurs, for example, if a newly integrated employee attempts to download a massive customer list at 3 AM from an unusual geographical location, the system can automatically apply adaptive security controls. This might include presenting an additional Multi-Factor Authentication (MFA) challenge, automatically disabling clipboard or printing functions, or even immediately terminating the session.
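
The kind of adaptive control described above ultimately boils down to scoring a handful of session signals and mapping the score to an action. A minimal sketch, with made-up signal names and thresholds purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class SessionEvent:
    user: str
    hour_utc: int        # 0-23
    country: str
    usual_country: str
    download_mb: float   # size of the attempted export

def decide_action(event: SessionEvent) -> str:
    """Map simple risk signals to an adaptive response. Thresholds are illustrative."""
    score = 0
    if event.country != event.usual_country:
        score += 2       # unusual geography
    if event.hour_utc < 6 or event.hour_utc > 22:
        score += 1       # outside normal working hours
    if event.download_mb > 500:
        score += 2       # bulk data export

    if score >= 4:
        return "terminate_session"
    if score >= 2:
        return "require_mfa_and_disable_clipboard_printing"
    return "allow"

# The 3 AM bulk-download scenario from the text:
event = SessionEvent("new_hire", hour_utc=3, country="BR", usual_country="DE", download_mb=1200.0)
print(decide_action(event))  # -> terminate_session
```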

This automated vigilance can significantly reduce the intense burden on the security operations team during a chaotic, high-stakes M&A integration.

The strategic shift in your M&A playbook

By adopting a strategy that fundamentally decouples application access from the underlying network access, CIOs can transform the M&A integration challenge. The internal conversation fundamentally shifts:

  • Old question: “How long will it take to merge the two networks?”
  • New question: “Which user groups need access to which specific applications today?”

This change in focus allows IT to deliver value immediately and securely. IT moves from being an integration bottleneck to becoming a strategic enabler of M&A success.

Navigating this complex, strategic shift requires a clear and deliberate plan. To help de-risk technical integration and accelerate value capture from your next acquisition, take a deeper dive into our whitepaper, The CIO’s M&A Playbook: Accelerating value and de-risking integration and companion e-book How Citrix cuts months off M&A time to value.



from Citrix Blogs https://ift.tt/C0ScF52
via IFTTT

Phantom Stealer Spread by ISO Phishing Emails Hitting Russian Finance Sector

Cybersecurity researchers have disclosed details of an active phishing campaign targeting a wide range of sectors in Russia with emails that deliver Phantom Stealer via malicious ISO optical disc images.

The activity, codenamed Operation MoneyMount-ISO by Seqrite Labs, has primarily singled out finance and accounting entities, with those in the procurement, legal, and payroll verticals emerging as secondary targets.

"This campaign employs a fake payment confirmation lure to deliver the Phantom information-stealing malware through a multi-stage attachment chain," the cybersecurity company said.

The infection chain begins with a phishing email that masquerades as legitimate financial communications, urging recipients to confirm a recent bank transfer. Attached to the email is a ZIP archive that claims to contain additional details, but, instead, contains an ISO file that, when launched, mounts on the system as a virtual CD drive.

The ISO image ("Подтверждение банковского перевода.iso" or "Bank transfer confirmation.iso") contains an executable that's designed to launch Phantom Stealer by means of an embedded DLL ("CreativeAI.dll").

Phantom Stealer is capable of extracting data from cryptocurrency wallet browser extensions installed in Chromium-based browsers and desktop wallet apps, as well as grabbing files, Discord authentication tokens, and browser-related passwords, cookies, and credit card details.

It also monitors clipboard content, logs keystrokes, and runs a series of checks to detect virtualized, sandboxed, or analysis environments, aborting execution if any are found. Data exfiltration is achieved via a Telegram bot or an attacker-controlled Discord webhook. On top of that, the stealer can transfer files to an FTP server.

In recent months, Russian organizations, mainly human resources and payroll departments, have also been targeted by phishing emails that employ lures related to bonuses or internal financial policies to deploy a previously undocumented implant named DUPERUNNER that loads AdaptixC2, an open-source command-and-control (C2) framework.

Dubbed DupeHike, the campaign has been attributed to a threat cluster named UNG0902.

"The ZIP has been used as a preliminary source of spear-phishing-based infection containing decoys with PDF and LNK extension, which downloads the implant DUPERUNNER, which finally executes the Adaptix C2 Beacon," Seqrite said.

The LNK file ("Документ_1_О_размере_годовой_премии.pdf.lnk" or "Document_1_On_the_amount_of_the_annual_bonus.pdf.lnk"), in turn, proceeds to download DUPERUNNER from an external server using "powershell.exe." The primary responsibility of the implant is to retrieve and display a decoy PDF and launch AdaptixC2 by injecting it into a legitimate Windows process like "explorer.exe," "notepad.exe," and "msedge.exe."

Other phishing campaigns have taken aim at finance, legal, and aerospace sectors in Russia to distribute Cobalt Strike and malicious tools like Formbook, DarkWatchman, and PhantomRemote that are capable of data theft and hands-on keyboard control. The email servers of compromised Russian companies are used to send the spear-phishing messages.

French cybersecurity company Intrinsec has attributed the intrusion set targeting the Russian aerospace industry to hacktivists aligned with Ukrainian interests. The activity, detected between June and September 2025, shares overlaps with Hive0117, Operation CargoTalon, and Rainbow Hyena (aka Fairy Trickster, Head Mare, and PhantomCore).

Some of these efforts have also been found to redirect users to phishing login pages hosted on the InterPlanetary File System (IPFS) and Vercel, designed to steal credentials associated with Microsoft Outlook and Bureau 1440, a Russian aerospace company.

"The campaigns observed between June and September 2025 [...] aimed at compromising entities actively cooperating with Russia's army amidst the current conflict with Ukraine, largely assessed by the Western sanctions imposed on them," Intrinsec said.



from The Hacker News https://ift.tt/C8iLcaE
via IFTTT

VolkLocker Ransomware Exposed by Hard-Coded Master Key Allowing Free Decryption

Dec 15, 2025 | Ravie Lakshmanan | Ransomware / Cybercrime

The pro-Russian hacktivist group known as CyberVolk (aka GLORIAMIST) has resurfaced with a new ransomware-as-a-service (RaaS) offering called VolkLocker that suffers from implementation lapses in its test artifacts, allowing victims to decrypt files without paying an extortion fee.

According to SentinelOne, VolkLocker (aka CyberVolk 2.x) emerged in August 2025 and is capable of targeting both Windows and Linux systems. It's written in Golang.

"Operators building new VolkLocker payloads must provide a bitcoin address, Telegram bot token ID, Telegram chat ID, encryption deadline, desired file extension, and self-destruct options," security researcher Jim Walter said in a report published last week.

Once launched, the ransomware attempts to escalate privileges and performs reconnaissance and system enumeration, including checking local MAC address prefixes against known virtualization vendors like Oracle and VMware. In the next stage, it lists all available drives and determines the files to be encrypted based on the embedded configuration.

VolkLocker uses AES-256 in Galois/Counter Mode (GCM) for encryption, generating key material with Golang's "crypto/rand" package. Every encrypted file is assigned a custom extension such as .locked or .cvolk.

However, an analysis of the test samples has uncovered a fatal flaw where the locker's master keys are not only hard-coded in the binaries, but are also used to encrypt all files on a victim system. More importantly, the master key is also written to a plaintext file in the %TEMP% folder ("C:\Users\AppData\Local\Temp\system_backup.key").

Since this backup key file is never deleted, the design blunder enables self-recovery. That said, VolkLocker has all the hallmarks typically associated with a ransomware strain. It makes Windows Registry modifications to thwart recovery and analysis, deletes volume shadow copies, and terminates processes associated with Microsoft Defender Antivirus and other common analysis tools.
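
Because the master key sits in a plaintext file that is never removed, recovery from these test builds is conceptually just "read the leaked key and AES-256-GCM-decrypt each file," something like the Python sketch below (using the third-party cryptography package). It is illustrative only: the on-disk layout of encrypted files (nonce length and placement, key encoding) is an assumption rather than something documented in the report, so a working decryptor would have to be validated against real samples, and any recovery attempt should be performed on copies of the encrypted data.

```python
import os
from pathlib import Path
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# Leaked key location described in the report: %TEMP%\system_backup.key
KEY_FILE = Path(os.environ.get("TEMP", ".")) / "system_backup.key"
NONCE_LEN = 12  # ASSUMPTION: a standard 96-bit GCM nonce prepended to each ciphertext

def load_key(path: Path) -> bytes:
    raw = path.read_text().strip()
    return bytes.fromhex(raw)  # ASSUMPTION: key stored as hex text; adjust to the real format

def decrypt_file(encrypted: Path, key: bytes) -> None:
    blob = encrypted.read_bytes()
    nonce, ciphertext = blob[:NONCE_LEN], blob[NONCE_LEN:]
    plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)  # raises if the key or layout is wrong
    encrypted.with_suffix("").write_bytes(plaintext)          # drop the ".locked"/".cvolk" suffix

if __name__ == "__main__":
    key = load_key(KEY_FILE)
    for path in Path(".").rglob("*.locked"):
        decrypt_file(path, key)
        print(f"decrypted {path}")
```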

However, where it stands out is in the use of an enforcement timer, which wipes the content of user folders, viz. Documents, Desktop, Downloads, and Pictures, if victims fail to pay within 48 hours or enter the wrong decryption key three times.

CyberVolk's RaaS operations are managed through Telegram, costing prospective customers between $800 and $1,100 for either a Windows or Linux version, or between $1,600 and $2,200 for both operating systems. VolkLocker payloads come with built-in Telegram automation for command-and-control, allowing users to message victims, initiate file decryption, list active victims, and get system information.

As of November 2025, the threat actors have advertised a remote access trojan and keylogger, both priced at $500 each, indicating a broadening of their monetization strategy.

CyberVolk launched its own RaaS in June 2024. Known for conducting distributed denial-of-service (DDoS) and ransomware attacks on public and government entities to support Russian government interests, it's believed to be of Indian origin.

"Despite repeated Telegram account bans and channel removals throughout 2025, CyberVolk has reestablished its operations and expanded its service offerings," Walter said. "Defenders should see CyberVolk's adoption of Telegram-based automation as a reflection of broader trends among politically-motivated threat actors. These groups continue to lower barriers for ransomware deployment while operating on platforms that provide convenient infrastructure for criminal services."



from The Hacker News https://ift.tt/p7mAh6n
via IFTTT

Sunday, December 14, 2025

Private MCP Catalogs and the Path to Composable Enterprise AI

Most discussions about Model Context Protocol infrastructure ask how to govern thousands of AI tools and monitor which MCP servers are running. This question is table stakes but undershoots the possibilities. A better question is how we can unleash MCP to drive developer creativity from a trusted foundation.

The first question produces a phone book of curated, controlled, static resources. The second points toward an AI playground where agents and developers interact and learn from each other. What if private catalogs of MCP servers become composable playlists that encourage mixing, reshaping, and myriad combinations of tool calls? This requires treating MCP catalogs as OCI artifacts, not databases.

Cloud-native computing created feedback loops where infrastructure became code, deployments became declarative, and operational knowledge became shareable artifacts. MCP catalogs need to follow the same path. OCI artifacts, immutable versioning, and container-native workflows provide the model because they represent a well-understood approach that balances trust with creative evolution.

Trust Boundaries That Expand and Learn

iTunes provided a store. Spotify provided a store plus algorithmic discovery, playlist sharing, and taste profiles that improved over time. Private MCP catalogs can enable the same evolution. Today, this means curated, verified collections. Tomorrow, this becomes the foundation for self-improving discovery systems.

Tens of thousands of MCP servers are scattered across GitHub, registries, and forums. Community registries like mcp.so, Smithery, Glama, and PulseMCP are attempting to organize this ecosystem, but provenance remains unclear and quality varies wildly. Private catalogs with tighter access controls offer centralized discovery, enhanced security through vetted servers, and visibility into which tools developers actually use. Organizations can build curated subsets of approved servers, add proprietary internal servers, and selectively import from community registries. This solves the phone book problem. 

When Output Becomes Input


The real opportunity arrives when the work agents do automatically creates shareable artifacts and organizational learning. Your agent faces a complex problem analyzing customer churn across three data sources. The MCP gateway then constructs a profile capturing the tools, API keys, sequence of operations, and documentation about what worked. That profile becomes an OCI artifact in your registry.

Next month, another team faces a similar problem. Their agent pulls your profile as a starting point, adapts it, and pushes a refined version. The customer success team creates a churn profile combining data warehouse connectors, visualization tools, and notification servers. The sales team imports that profile, adds CRM connectors, and uses it to strategize on renewals. They publish their enhanced version back to the catalog. Teams stop rebuilding identical solutions and instead reuse or remix. Knowledge is captured, shared, and refined.

Why OCI Makes This Possible

Treating catalogs as immutable OCI artifacts lets agents pin to versions or profiles. Your production agents use catalog v2.3 while QA uses v2.4, and they do not drift. Without this, Agent A mysteriously fails because the database connector it relied on got silently updated with breaking changes. Audit trails become straightforward. You can prove which tools were available when incident X occurred. OCI-based catalogs are the only approach that makes catalogs and agents first-class infrastructure fully addressable with GitOps tooling.
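
The practical difference between a pinned catalog like v2.3 and a mutable reference is whether agents resolve a human-friendly tag or a content digest. A small illustrative sketch of that pinning discipline follows; the reference format mirrors standard OCI image references, and the registry and repository names are made up.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class CatalogRef:
    registry: str                 # e.g. an internal OCI registry
    repository: str               # e.g. "ai/mcp-catalog"
    tag: Optional[str] = None     # mutable, human-friendly pointer such as "v2.3"
    digest: Optional[str] = None  # immutable content address ("sha256:...")

    def reference(self) -> str:
        if self.digest:
            return f"{self.registry}/{self.repository}@{self.digest}"
        return f"{self.registry}/{self.repository}:{self.tag or 'latest'}"

def resolve_for(environment: str, ref: CatalogRef) -> str:
    """Require digest-pinned catalogs in production so agents cannot drift."""
    if environment == "prod" and not ref.digest:
        raise ValueError("production agents must pin the catalog by digest")
    return ref.reference()

qa_catalog = CatalogRef("registry.example.com", "ai/mcp-catalog", tag="v2.4")
prod_catalog = CatalogRef("registry.example.com", "ai/mcp-catalog",
                          tag="v2.3", digest="sha256:" + "ab" * 32)

print(resolve_for("qa", qa_catalog))      # tag reference, acceptable for QA
print(resolve_for("prod", prod_catalog))  # digest reference: immutable and auditable
```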

OCI with containers delivers two benefits that matter for MCP. First, containers provide hermetic but customizable and context-rich security boundaries. The MCP server runs in a sandboxed container with explicit network policies, filesystem isolation, and resource limits. Secret injection happens through standard mechanisms with no credentials in prompts. This is key if MCP servers execute arbitrary code or have filesystem access.

Second, containers and the associated OCI versioning add reusable governance tooling in just the right way, matching the other governance tooling in your general container stack and workflow. Because catalogs are OCI artifacts, image scanning works the same. Signing and provenance use Cosign on catalogs just like images. Harbor, Artifactory, and other registries already have sophisticated access controls. Policy enforcement through OPA applies to catalog usage as it does to container deployments. Your FedRAMP-approved container registry handles MCP catalogs too. Your security team does not need to learn new tools.

From Phone Books and iTunes to Intelligent Platforms and Spotify

Organizations can evolve to dynamic discovery within trust boundaries. An MCP gateway allows the agent to query the catalog at runtime, select the appropriate tool, and instantiate only what it needs. With Docker’s Dynamic MCPs in the MCP Gateway, the agent can also call built-in tools like mcp-find and mcp-add to search curated catalogs, pull and start new MCP servers on demand, and drop them when they are no longer needed, instead of hard-coding tool lists and configs. Dynamic MCPs keep unused tools out of the model’s context, reduce token bloat, and let agents assemble just-in-time workflows from a much larger pool of MCP servers.

The longer-term vision goes further. The gateway captures semantic intelligence around how users interact with MCPs, learns which tools combine effectively, and suggests relevant servers based on how similar problems were previously solved. Teams both learn from and add to this knowledge feedback loop: private catalog users discover new MCPs, mix MCPs in useful ways, and develop new ways of doing things, inspired by their own ideas and by suggestions from the MCP gateway. This process also provides live reinforcement learning, imparting wisdom and context to the system that can benefit everyone using the gateway. This is organizational memory as infrastructure, emergent from actual agent work that blends human and machine intelligence in unlimited ways.

The container-native approach using private catalogs, dynamic MCP for runtime discovery, profiles as OCI artifacts, and sandboxed execution builds a composable, secure foundation for this future AI playground. How can we unleash MCP to drive developer creativity from a trusted foundation? Treat it like we treated containers, but afford it the privileges that AI deserves as an agentic, intelligent system. Private MCP catalogs endowed with semantic intelligence and context understanding, built atop OCI-versioned infrastructure, running in safe agent sandboxes, are the first step toward that vision.



from Docker https://ift.tt/GcKlDvR
via IFTTT

Will there be a market for expert AI agents?

How difficult will it be to train and build an AI Agent that has expertise in a given domain? Will it happen in the next year, or 3 years or 10 years? And who will benefit in the marketplace from this evolution? 

SHOW: 984

SHOW TRANSCRIPT: The Cloudcast #984 Transcript

SHOW VIDEO: https://youtube.com/@TheCloudcastNET 

CLOUD NEWS OF THE WEEK: http://bit.ly/cloudcast-cnotw

CHECK OUT OUR NEW PODCAST: "CLOUDCAST BASICS"

SHOW SPONSORS:

  • [Mailtrap] Try Mailtrap for free
  • [Interconnected] Interconnected is a new series from Equinix diving into the infrastructure that keeps our digital world running. With expert guests and real-world insights, we explore the systems driving AI, automation, quantum, and more. Just search “Interconnected by Equinix”.

SHOW NOTES

WHAT WOULD BE THE STAGES OF AN EXPERT AGENT?

  • Train it on a set of standard knowledge (e.g. Masters of Accounting, Auditing, International Tax)
  • Train it on a set of well-defined case studies, to provide industry context
  • Train it on a set of adjacent case studies and other domains (business, law, specific industries)
  • How to train corner cases?
  • How to train gray areas like ethics, morality, or cost-benefit analysis? 
  • Who is motivated to train these experts? 
  • What would the cost of these experts be? Can it be similar to a human, or need to be a fraction, or a premium? 
  • Is there a way to build memory (e.g. experience) without disclosing client information? 
  • Is there a way to build shareable knowledge between agents for reinforcement training/learning?


FEEDBACK?



from The Cloudcast (.NET) https://ift.tt/x91GqtX
via IFTTT

Saturday, December 13, 2025

CISA Adds Actively Exploited Sierra Wireless Router Flaw Enabling RCE Attacks

Dec 13, 2025 | Ravie Lakshmanan | Network Security / Vulnerability

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Friday added a high-severity flaw impacting Sierra Wireless AirLink ALEOS routers to its Known Exploited Vulnerabilities (KEV) catalog, following reports of active exploitation in the wild.

CVE-2018-4063 (CVSS score: 8.8/9.9) refers to an unrestricted file upload vulnerability that could be exploited to achieve remote code execution by means of a malicious HTTP request.

"A specially crafted HTTP request can upload a file, resulting in executable code being uploaded, and routable, to the webserver," the agency said. "An attacker can make an authenticated HTTP request to trigger this vulnerability."

Details of the six-year-old flaw were publicly shared by Cisco Talos in April 2019, describing it as an exploitable remote code execution vulnerability in the ACEManager "upload.cgi" function of Sierra Wireless AirLink ES450 firmware version 4.9.3. Talos reported the flaw to the Canadian company in December 2018.

"This vulnerability exists in the file upload capability of templates within the AirLink 450," the company said. "When uploading template files, you can specify the name of the file that you are uploading."

"There are no restrictions in place that protect the files that are currently on the device, used for normal operation. If a file is uploaded with the same name of the file that already exists in the directory, then we inherit the permissions of that file."

Talos noted that some of the files that exist in the directory (e.g., "fw_upload_init.cgi" or "fw_status.cgi") have executable permissions on the device, meaning an attacker can send HTTP requests to the "/cgi-bin/upload.cgi" endpoint to upload a file with the same name to achieve code execution.

This is compounded by the fact that ACEManager runs as root, thereby causing any shell script or executable uploaded to the device to also run with elevated privileges.
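
Defenders who still operate these routers and cannot upgrade immediately can at least triage web-server access logs for requests to the vulnerable endpoint. A rough Python sketch is below; the combined-style log format is an assumption and field positions will differ on real AirLink devices, so treat it as a starting point for hunting rather than a detection rule.

```python
import re
from collections import Counter

# ASSUMPTION: combined/common log format lines such as
# 203.0.113.5 - admin [12/Dec/2025:10:15:32 +0000] "POST /cgi-bin/upload.cgi HTTP/1.1" 200 512
LOG_LINE = re.compile(r'^(?P<src>\S+) \S+ (?P<user>\S+) \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+)')

SUSPICIOUS_PATH = "/cgi-bin/upload.cgi"

def triage(log_path: str) -> Counter:
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = LOG_LINE.match(line)
            if m and m.group("path").startswith(SUSPICIOUS_PATH):
                hits[(m.group("src"), m.group("user"), m.group("method"))] += 1
    return hits

if __name__ == "__main__":
    for (src, user, method), count in triage("access.log").most_common():
        print(f"{count:5d}  {method:6s} {SUSPICIOUS_PATH} from {src} as {user}")
```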

The addition of CVE-2018-4063 to the KEV catalog comes a day after a honeypot analysis conducted by Forescout over a 90-day period revealed that industrial routers are the most attacked devices in operational technology (OT) environments, with threat actors attempting to deliver botnet and cryptocurrency miner malware families like RondoDox, Redtail, and ShadowV2 by exploiting the following flaws -

Attacks have also been recorded from a previously undocumented threat cluster named Chaya_005 that weaponized CVE-2018-4063 in early January 2024 to upload an unspecified malicious payload with the name "fw_upload_init.cgi." No further successful exploitation efforts have been detected since then.

"Chaya_005 appears to be a broader reconnaissance campaign testing multiple vendor vulnerabilities rather than focusing on a single one," Forescout Research – Vedere Labs said, adding it's likely the cluster is no longer a "significant threat."

In light of active exploitation of CVE-2018-4063, Federal Civilian Executive Branch (FCEB) agencies are advised to update their devices to a supported version or discontinue the use of the product by January 2, 2026, since it has reached end-of-support status.



from The Hacker News https://ift.tt/KvZ7iRe
via IFTTT

Friday, December 12, 2025

New Advanced Phishing Kits Use AI and MFA Bypass Tactics to Steal Credentials at Scale

Cybersecurity researchers have documented four new phishing kits named BlackForce, GhostFrame, InboxPrime AI, and Spiderman that are capable of facilitating credential theft at scale.

BlackForce, first detected in August 2025, is designed to steal credentials and perform Man-in-the-Browser (MitB) attacks to capture one-time passwords (OTPs) and bypass multi-factor authentication (MFA). The kit is sold on Telegram forums for anywhere between €200 ($234) and €300 ($351).

The kit, according to Zscaler ThreatLabz researchers Gladis Brinda R and Ashwathi Sasi, has been used to impersonate over 11 brands, including Disney, Netflix, DHL, and UPS. It's said to be in active development.

"BlackForce features several evasion techniques with a blocklist that filters out security vendors, web crawlers, and scanners," the company said. "BlackForce remains under active development. Version 3 was widely used until early August, with versions 4 and 5 being released in subsequent months."

Phishing pages connected to the kit have been found to use JavaScript files with what has been described as "cache busting" hashes in their names (e.g., "index-[hash].js"), thereby forcing the victim's web browser to download the latest version of the malicious script instead of using a cached version.

In a typical attack using the kit, victims who click on a link are redirected to a malicious phishing page, after which a server-side check filters out crawlers and bots, before serving them a page that's designed to mimic a legitimate website. Once the credentials are entered on the page, the details are captured and sent to a Telegram bot and a command-and-control (C2) panel in real-time using an HTTP client called Axios.

When the attacker attempts to log in with the stolen credentials on the legitimate website, an MFA prompt is triggered. At this stage, the MitB techniques are used to display a fake MFA authentication page to the victim's browser through the C2 panel. Should the victim enter the MFA code on the bogus page, it's collected and used by the threat actor to gain unauthorized access to their account.

"Once the attack is complete, the victim is redirected to the homepage of the legitimate website, hiding evidence of the compromise and ensuring the victim remains unaware of the attack," Zscaler said.

GhostFrame Fuels 1M+ Stealth Phishing Attacks

Another nascent phishing kit that has gained traction since its discovery in September 2025 is GhostFrame. At the heart of the kit's architecture is a simple HTML file that appears harmless while hiding its malicious behavior within an embedded iframe, which leads victims to a phishing login page to steal Microsoft 365 or Google account credentials.

"The iframe design also allows attackers to easily switch out the phishing content, try new tricks or target specific regions, all without changing the main web page that distributes the kit," Barracuda security researcher Sreyas Shetty said. "Further, by simply updating where the iframe points, the kit can avoid being detected by security tools that only check the outer page."

Attacks using the GhostFrame kit commence with typical phishing emails that claim to be about business contracts, invoices, and password reset requests, but are designed to take recipients to the fake page. The kit uses anti-analysis and anti-debugging to prevent attempts to inspect it using browser developer tools, and generates a random subdomain each time someone visits the site.

The visible outer pages come with a loader script that's responsible for setting up the iframe and responding to any messages from the HTML element. This can include changing the parent page's title to impersonate trusted services, modifying the site favicon, or redirecting the top-level browser window to another domain.

In the final stage, the victim is sent to a secondary page containing the actual phishing components through the iframe delivered via the constantly changing subdomain, thereby making it harder to block the threat. The kit also incorporates a fallback mechanism in the form of a backup iframe appended at the bottom of the page in the event the loader JavaScript fails or is blocked.

InboxPrime AI Phishing Kit Automates Email Attacks

If BlackForce follows the same playbook as other traditional phishing kits, InboxPrime AI goes a step further by leveraging artificial intelligence (AI) to automate mass mailing campaigns. It's advertised on a 1,300-member-strong Telegram channel under a malware-as-a-service (MaaS) subscription model for $1,000, granting purchasers a perpetual license and full access to the source code.

"It is designed to mimic real human emailing behavior and even leverages Gmail's web interface to evade traditional filtering mechanisms," Abnormal researchers Callie Baron and Piotr Wojtyla said.

"InboxPrime AI blends artificial intelligence with operational evasion techniques and promises cybercriminals near-perfect deliverability, automated campaign generation, and a polished, professional interface that mirrors legitimate email marketing software."

The platform employs a user-friendly interface that allows customers to manage accounts, proxies, templates, and campaigns, mirroring commercial email automation tools. One of its core features is a built-in AI-powered email generator, which can produce entire phishing emails, including the subject lines, in a manner that mimics legitimate business communication.

In doing so, these services further lower the barrier to entry for cybercrime, effectively eliminating the manual work that goes into drafting such emails. In its place, attackers can configure parameters, such as language, topic, or industry, email length, and desired tone, which the toolkit uses as inputs to generate convincing lures that match the chosen theme.

What's more, the dashboard enables users to save the produced email as a reusable template, complete with support for spintax to create variations of the email messages by substituting certain template variables. This ensures that no two phishing emails look identical and helps them bypass signature-based filters that look for similar content patterns.
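
Spintax itself is a simple, widely used templating trick: segments such as {Hello|Hi|Greetings} are expanded by picking one option per message. Seeing how little machinery it takes helps explain why signature-based filters struggle with these campaigns; a minimal expander sketch, for illustration only:

```python
import random
import re

SPINTAX = re.compile(r"\{([^{}]*)\}")  # innermost {option1|option2|...} groups

def spin(template: str, rng: random.Random) -> str:
    """Expand spintax by repeatedly resolving innermost {a|b|c} groups."""
    while True:
        match = SPINTAX.search(template)
        if not match:
            return template
        choice = rng.choice(match.group(1).split("|"))
        template = template[:match.start()] + choice + template[match.end():]

template = "{Hello|Hi|Dear customer}, your {invoice|statement} is {ready|attached}."
for seed in range(3):
    print(spin(template, random.Random(seed)))  # superficially different variants of one lure
```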

Some of the other supported features in InboxPrime AI are listed below -

  • A real-time spam diagnostic module that can analyze a generated email for common spam-filter triggers and suggest precise corrections
  • Sender identity randomization and spoofing, enabling attackers to customize display names for each Gmail session

"This industrialization of phishing has direct implications for defenders: more attackers can now launch more campaigns with more volume, without any corresponding increase in defender bandwidth or resources," Abnormal said. "This not only accelerates campaign launch time but also ensures consistent message quality, enables scalable, thematic targeting across industries, and empowers attackers to run professional-looking phishing operations without copywriting expertise."

Spiderman Creates Pixel-Perfect Replicas of European Banks

The fourth phishing kit to come under researchers' radar is Spiderman, which permits attackers to target customers of dozens of European banks and online financial services providers, such as Blau, CaixaBank, Comdirect, Commerzbank, Deutsche Bank, ING, O2, Volksbank, Klarna, and PayPal.

"Spiderman is a full-stack phishing framework that replicates dozens of European banking login pages, and even some government portals," Varonis researcher Daniel Kelley said. "Its organized interface provides cybercriminals with an all-in-one platform to launch phishing campaigns, capture credentials, and manage stolen session data in real-time."

What's notable about the modular kit is that its seller is marketing the solution in a Signal messenger group that has about 750 members, marking a departure from Telegram. Germany, Austria, Switzerland, and Belgium are the primary targets of the phishing service.

Like BlackForce, Spiderman utilizes various techniques such as ISP allowlisting, geofencing, and device filtering to ensure that only the intended targets can access the phishing pages. The toolkit is also equipped to capture cryptocurrency wallet seed phrases, intercept OTP and PhotoTAN codes, and trigger prompts to gather credit card data.

"This flexible, multi-step approach is particularly effective in European banking fraud, where login credentials alone often aren't enough to authorize transactions," Kelley explained. "After capturing credentials, Spiderman logs each session with a unique identifier so the attacker can maintain continuity through the entire phishing workflow."

Hybrid Salty-Tycoon 2FA Attacks Spotted

BlackForce, GhostFrame, InboxPrime AI, and Spiderman are the latest additions to a long list of phishing kits like Tycoon 2FA, Salty 2FA, Sneaky 2FA, Whisper 2FA, Cephas, and Astaroth (not to be confused with a Windows banking trojan of the same name) that have emerged over the past year.

In a report published earlier this month, ANY.RUN said it observed a new Salty-Tycoon hybrid that's already bypassing detection rules tuned to either of them. The new attack wave coincides with a sharp drop in Salty 2FA activity in late October 2025, with early stages matching Salty2FA, while later stages load code that reproduces Tycoon 2FA's execution chain.

"This overlap marks a meaningful shift; one that weakens kit-specific rules, complicates attribution, and gives threat actors more room to slip past early detection," the company said.

"Taken together, this provides clear evidence that a single phishing campaign, and, more interestingly, a single sample, contains traces of both Salty2FA and Tycoon, with Tycoon serving as a fallback payload once the Salty infrastructure stopped working for reasons that are still unclear."



from The Hacker News https://ift.tt/LElTI2R
via IFTTT

Is AI the New Insider Threat?


Insider threats have always been difficult to manage because they blur the line between trusted access and risky behavior. 

With generative AI, these risks aren’t tied to malicious insiders misusing credentials or bypassing controls; they come from well-intentioned employees simply trying to get work done faster. Whether it’s developers refactoring code, analysts summarizing long reports, or marketers drafting campaigns, the underlying motivation is almost always productivity and efficiency.

Unfortunately, that’s precisely what makes this risk so difficult to manage. Employees don’t see themselves as creating security problems; they’re solving bottlenecks. Security is an afterthought at best. 

This gap in perception creates an opportunity for missteps. By the time IT or security teams realize an AI tool has been widely adopted, patterns of risky use may already be deeply embedded in workflows.

Right now, AI use in the workplace is a bit of a free-for-all. And when everyone’s saying “it’s fun” and “everyone’s doing it”, it feels like being back in high school: no one wants to be *that* person telling them to stop because it’s risky. 

But, as security, we do have a responsibility.

In this article, I explore the risks of unmanaged AI use, why existing security approaches fall short, and suggest one thing I believe we can do to balance users’ enthusiasm with responsibility (without being the party pooper).

Examples of Risky AI Use

The risks of AI use in the workplace usually fall into one of three categories:

  • Sensitive data breaches: A single pasted transcript, log, or API key may seem minor, but once outside company boundaries, it’s effectively gone, subject to provider retention and analysis.
  • Intellectual property leakage: Proprietary code, designs, or research drafts fed into AI tools can erode competitive advantage if they become training data or are exposed via prompt injection.
  • Regulatory and compliance violations: Uploading regulated data (HIPAA, GDPR, etc.) into unsanctioned AI systems can trigger fines or legal action, even if no breach occurs.

What makes these risks especially difficult is their subtlety. They emerge from everyday workflows, not obvious policy violations, which means they often go unnoticed until the damage is done.

Shadow AI

For years, Shadow IT has meant unsanctioned SaaS apps, messaging platforms, or file storage systems. 

Generative AI is now firmly in this category. 

Employees don’t think that pasting text into a chatbot like ChatGPT introduces a new system to the enterprise. In practice, however, they’re moving data into an external environment with no oversight, logging, or contractual protection.

What’s different about Shadow AI is the lack of visibility: unlike past technologies, it often leaves no obvious logs, accounts, or alerts for security teams to follow. With cloud file-sharing, security teams could trace uploads, monitor accounts created with corporate emails, or detect suspicious network traffic. 

But AI use often looks like normal browser activity. And while some security teams do scan what employees paste into web forms, those controls are limited. 

Which brings us to the real problem: we don’t really have the tools to manage AI use properly. Not yet, at least.

Controls Are Lacking

We all see people trying to get work done faster, and we know we should be putting some guardrails in place, but the options out there are either expensive, complicated, or still figuring themselves out.

The few available AI governance and security tools have clear limitations (even though their marketing might try to convince you otherwise):

  • Emerging AI governance platforms offer usage monitoring, policy enforcement, and guardrails around sensitive data, but they’re often expensive, complex, or narrowly focused.
  • Traditional controls like DLP and XDR catch structured data such as phone numbers, IDs, or internal customer records, but they struggle with more subtle, hard-to-detect information: source code, proprietary algorithms, or strategic documents.

Even with these tools, the pace of AI adoption means security teams are often playing catch-up. The reality is that while controls are improving, they rarely keep up with how quickly employees are exploring AI.

Lessons from Past Security Blind Spots

Employees charging ahead with new tools while security teams scramble to catch up is not so different from the early days of cloud file sharing: employees flocked to Dropbox or Google Drive before IT had sanctioned solutions. Or think back to the rise of “bring your own device” (BYOD), when personal phones and laptops started connecting to corporate networks without clear policies in place.

Both movements promised productivity, but they also introduced risks that security teams struggled to manage retroactively.

Generative AI is repeating this pattern, only at a much faster rate. While cloud tools or BYOD require some setup, or at least a decision to connect a personal device, AI tools are available instantly in a browser. The barrier to entry is practically zero. That means adoption can spread through an organization long before security leaders even realize it’s happening. 

And as with cloud and BYOD, the sequence is familiar: employee adoption comes first, controls follow later, and those retroactive measures are almost always costlier, clumsier, and less effective than proactive governance.

So What Can We Do?

Remember: AI-driven insider risk isn’t about bad actors but about good people trying to be productive and efficient. (OK, maybe with a few lazy ones thrown in for good measure.) It’s ordinary rather than malicious behavior that’s unfortunately creating unnecessary exposure. 

That means there’s one measure every organization can implement immediately: educating employees.

Education works best when it’s practical and relatable. Think less “compliance checkbox,” and more “here’s a scenario you’ve probably been in.” That’s how you move from fuzzy awareness to actual behavior change.

Here are three steps that make a real difference:

  • Build awareness with real examples. Show how pasting code, customer details, or draft plans into a chatbot can have the same impact as posting them publicly. That’s the “aha” moment most people need.
  • Emphasize ownership. Employees already know they shouldn’t reuse passwords or click suspicious links; AI use should be framed in the same personal-responsibility terms. The goal is a culture where people feel they’re protecting the company, not just following rules.
  • Set clear boundaries. Spell out which categories of data are off-limits (PII, source code, unreleased products, regulated records) and offer safe alternatives like internal AI sandboxes. Clarity reduces guesswork and removes the temptation of convenience.

Until governance tools mature, these low-friction steps form the strongest defense we have.

If you can enable people to harness AI’s productivity while protecting your critical data, you reduce today’s risks. And you’re better prepared for the regulations and oversight that are certain to follow.



from Docker https://ift.tt/Imkd1lp
via IFTTT

The Good, the Bad and the Ugly in Cybersecurity – Week 50

The Good | U.S. & Spanish Officials Crack Down on Hacktivist & Identity Theft Activities

U.S. officials have charged Ukrainian national Victoria Dubranova for allegedly supporting Russian state-backed hacktivist groups in global critical infrastructure attacks. Extradited earlier this year, Dubranova faces trials in February and April 2026 tied to her suspected involvement in NoName057(16) and CyberArmyofRussia_Reborn (CARR), respectively.

The indictment states that NoName057(16) operated as a state-sanctioned effort involving multiple threat actors and a government-created IT center. Their tooling includes a custom DDoS tool called ‘DDoSia’, used to launch attacks against government and financial agencies as well as critical transportation infrastructure.

Prosecutors say Russia’s military intelligence service funded and directed CARR, a hacktivist group with over 75,000 Telegram followers and a long record of attacks. Damage to U.S. water systems, an ammonia leak at a Los Angeles facility, and targeting of nuclear and election infrastructure are all attributed to CARR. Dubranova faces up to 27 years on CARR-related charges and 5 years on NoName charges. Multi-million dollar rewards are in place for information on either threat group.

In Spain, authorities have arrested a 19-year-old hacker for the alleged theft and sale of 64 million records stolen from nine organizations. The suspect faces charges including cybercrime, unauthorized access, and privacy violations.

The investigation first started in June after breaches at the unnamed firms were reported. Police later confirmed that the suspect possessed millions of stolen records containing full names, addresses, emails, phone numbers, DNI numbers, and IBAN codes. He reportedly tried to sell the data on multiple forums using six accounts and five pseudonyms.

While officers have seized cryptocurrency wallets containing proceeds from the alleged sales, the total number of individuals affected remains unclear. Given the scale of the crime, Spanish authorities emphasize the seriousness of attempting to monetize stolen personal information.

The Bad | Malicious VS Code Extensions Deploy Stealthy Infostealer Malware

Two malicious Visual Studio Code extensions, Bitcoin Black and Codo AI, were recently discovered on Microsoft’s VS Code Marketplace, infecting developers with information-stealing malware. Disguised as a harmless color theme and an AI coding assistant, respectively, the extensions were published under the alias ‘BigBlack’. While download counts are still low at the time of this writing, both packages point to a clear intent to compromise developer environments.

Researchers note that earlier versions of Bitcoin Black used a PowerShell script to fetch a password-protected payload, briefly flashing a visible window that could alert users. The latest version now has a hidden batch script that quietly downloads a DLL and executable via curl, significantly reducing detection risk. Meanwhile, Codo AI delivers legitimate code-completion via ChatGPT or DeepSeek but embeds a malicious payload alongside these features.

Both extensions deploy the Lightshot screenshot tool paired with a malicious DLL that uses DLL hijacking to load an infostealer called runtime.exe. Once executed, the malware creates a directory under %APPDATA%\Local\ and begins exfiltrating sensitive data from system details and clipboard content to WiFi passwords, screenshots, installed software lists, and running processes. Finally, it launches Chrome and Edge in headless mode to extract cookies and hijack active sessions, targeting several crypto wallets including Phantom, MetaMask, and Exodus.

VirusTotal report for Lightshot.dll (Source: Koi.ai)

Microsoft has since removed both extensions from the Marketplace and the malicious DLL is already flagged by 29 of 72 antivirus engines on VirusTotal. Developers are advised to install extensions only from trusted publishers and stay alert to atypical extension behavior.

The Ugly | CyberVolk Resurfaces With New Telegram-Powered RaaS ‘VolkLocker’

CyberVolk, a pro-Russia hacktivist persona first identified in late 2024, resurfaced this August with a revamped ransomware-as-a-service (RaaS) offering known as VolkLocker (CyberVolk 2.x). SentinelLABS reported this week that the group has pivoted to using Telegram for both automation and customer interaction; however, operations are being undercut by payloads that retain artifacts, allowing victims to recover their files.

VolkLocker is written in Golang and supports both Windows and Linux. Payloads are distributed largely unprotected, with RaaS operators instructed to use UPX for packing. Builders must supply key configuration values including a Bitcoin address, Telegram bot token ID, encryption deadline, file extension, and more.

On execution, the ransomware attempts privilege escalation via the “ms-settings” UAC bypass, performs system and VM checks, and enumerates drives for encryption. A dynamic HTML ransom note then displays a 48-hour countdown, while a separate enforcement timer corrupts the system if the deadline expires or decryption attempts fail.

Telegram serves as the backbone of the RaaS, offering operators an administrative panel, victim enumeration, broadcast messaging, and optional extensions such as RAT and keylogger control. Recent ads show CyberVolk expanding into standalone tooling with tiered pricing models.

Decryption triggered via backed-up key file

The encryption routine uses AES-256 in GCM mode with a hardcoded master key. Crucially, the key is written in plaintext to a file in %TEMP%, alongside the victim’s unique identifier and the attacker’s Bitcoin address – an apparent leftover test artifact that allows victims to decrypt their own files.

Despite repeated account bans on Telegram, CyberVolk continues to evolve its services. The plaintext key flaw, however, reveals quality-control issues that limit the real-world impact of VolkLocker as-is. SentinelOne’s Singularity Platform detects and blocks behaviors and payloads linked to CyberVolk.



from SentinelOne https://ift.tt/ZfJs6Fz
via IFTTT

How to Add MCP Servers to ChatGPT with Docker MCP Toolkit

ChatGPT is great at answering questions and generating code. But here’s what it can’t do: execute that code, query your actual database, create a GitHub repo with your project, or scrape live data from websites. It’s like having a brilliant advisor who can only talk, never act.

Docker MCP Toolkit changes this completely. 

Here’s what that looks like in practice: You ask ChatGPT to check MacBook Air prices across Amazon, Walmart, and Best Buy. If competitor prices are lower than yours, it doesn’t just tell you, it acts: automatically adjusting your Stripe product price to stay competitive, logging the repricing decision to SQLite, and pushing the audit trail to GitHub. All through natural conversation. No manual coding. No copy-pasting scripts. Real execution.

“But wait,” you might say, “ChatGPT already has a shopping research feature.” True. But ChatGPT’s native shopping can only look up prices. Only MCP can execute: creating payment links, generating invoices, storing data in your database, and pushing to your GitHub. That’s the difference between an advisor and an actor.

By the end of this guide, you’ll build exactly this: a Competitive Repricing Agent that checks competitor prices on demand, compares them to yours, and automatically adjusts your Stripe product prices when competitors are undercutting you.

Here’s how the pieces fit together:

  • ChatGPT provides the intelligence: understanding your requests and determining what needs to happen
  • Docker MCP Gateway acts as the secure bridge: routing requests to the right tools
  • MCP Servers are the hands: executing actual tasks in isolated Docker containers

The result? ChatGPT can query your SQL database, manage GitHub repositories, scrape websites, process payments, run tests, and more—all while Docker’s security model keeps everything contained and safe.

In this guide, you’ll learn how to add seven MCP servers to ChatGPT by connecting to Docker MCP Toolkit. We’ll use a handful of must-have MCP servers: Firecrawl for web scraping, SQLite for data persistence, GitHub for version control, Stripe for payment processing, Node.js Sandbox for calculations, Sequential Thinking for complex reasoning, and Context7 for documentation. Then, you’ll build the Competitive Repricing Agent shown above, all through conversation.

What is Model Context Protocol (MCP)?

Before we dive into the setup, let’s clarify what MCP actually is.

Model Context Protocol (MCP) is the standardized way AI agents like ChatGPT and Claude connect to tools, APIs, and services. It’s what lets ChatGPT go beyond conversation and perform real-world actions like querying databases, deploying containers, analyzing datasets, or managing GitHub repositories.

In short: MCP is the bridge between ChatGPT’s reasoning and your developer stack. And Docker? Docker provides the guardrails that make it safe.
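
Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages. As a rough illustration of the message shape only (a real client also performs the initialize handshake and session management, which ChatGPT and the gateway handle for you; the tool name and arguments below are hypothetical), here is what tool discovery and a tool call look like, sketched in Python:

import json

# Illustrative only: JSON-RPC 2.0 messages in the shape MCP uses.
# The tool name and arguments are hypothetical examples.
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

tool_call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "firecrawl_scrape",
        "arguments": {"url": "https://example.com/product"},
    },
}

print(json.dumps(list_tools, indent=2))
print(json.dumps(tool_call, indent=2))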

Why Use Docker MCP Toolkit with ChatGPT?

I’ve been working with AI tools for a while now, and this Docker MCP integration stands out for one reason: it actually makes ChatGPT productive.

Most AI integrations feel like toys: impressive demos that break in production. Docker MCP Toolkit is different. It creates a secure, containerized environment where ChatGPT can execute real tasks without touching your local machine or production systems.

Every action happens in an isolated container. Every MCP server runs in its own security boundary. When you’re done, containers are destroyed. No residue, no security debt, complete reproducibility across your entire team.

What ChatGPT Can and Can’t Do Without MCP

Let’s be clear about what changes when you add MCP.

Without MCP

You ask ChatGPT to build a system to regularly scrape product prices and store them in a database. ChatGPT responds with Python code, maybe 50 lines using BeautifulSoup and SQLite. Then you must copy the code, install dependencies, create the database schema, run the script manually, and set up a scheduler if you want it to run regularly.
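
That hand-off usually looks something like the sketch below, assuming a hypothetical product page and CSS selector (both placeholders), and you still have to run and schedule it yourself:

import sqlite3

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/product/macbook-air"  # placeholder, not a real retailer page

def scrape_price(url: str) -> float:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    price_text = soup.select_one(".price").get_text(strip=True)  # e.g. "$549.99" (selector is a placeholder)
    return float(price_text.replace("$", "").replace(",", ""))

def store_price(price: float, db_path: str = "prices.db") -> None:
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS price_history "
        "(id INTEGER PRIMARY KEY, price REAL, scraped_at TEXT DEFAULT CURRENT_TIMESTAMP)"
    )
    conn.execute("INSERT INTO price_history (price) VALUES (?)", (price,))
    conn.commit()
    conn.close()

if __name__ == "__main__":
    store_price(scrape_price(URL))  # still needs cron/Task Scheduler to run regularly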

Yes, ChatGPT remembers your conversation and can store memories about you. But those memories live on OpenAI’s servers—not in a database you control.

With MCP

You ask ChatGPT the same thing. Within seconds, it calls Firecrawl MCP to actually scrape the website. It calls SQLite MCP to create a database on your machine and store the data. It calls GitHub MCP to save a report to your repository. The entire workflow executes in under a minute.

Real data gets stored in a real database on your infrastructure. Real commits appear in your GitHub repository. Close ChatGPT, come back tomorrow, and ask “Show me the price trends.” ChatGPT queries your SQLite database and returns results instantly because the data lives in a database you own and control, not in ChatGPT’s conversation memory.

The data persists in your systems, ready to query anytime; no manual script execution required.

Why This Is Different from ChatGPT’s Native Shopping

ChatGPT recently released a shopping research feature that can track prices and make recommendations. Here’s what it can and cannot do:

What ChatGPT Shopping Research can do:

  • Track prices across retailers
  • Remember price history in conversation memory
  • Provide comparisons and recommendations

What ChatGPT Shopping Research cannot do:

  • Automatically update your product prices in Stripe
  • Execute repricing logic based on competitor changes
  • Store pricing data in your database (not OpenAI’s servers)
  • Push audit trails to your GitHub repository
  • Create automated competitive response workflows

With Docker MCP Toolkit, ChatGPT becomes a competitive pricing execution system. When you ask it to check prices and competitors are undercutting you, it doesn’t just inform you, it acts: updating your Stripe prices to match or beat competitors, logging decisions to your database, and pushing audit records to GitHub. The data lives in your infrastructure, not OpenAI’s servers.

Setting Up ChatGPT with Docker MCP Toolkit

Prerequisites

Before you begin, ensure you have:

  • A machine with at least 8 GB of RAM (16 GB recommended)
  • Docker Desktop installed
  • A ChatGPT Plus, Pro, Business, or Enterprise account
  • An ngrok account (free tier works) for exposing the gateway publicly

Step 1. Enable ChatGPT developer mode

  • Head over to ChatGPT and sign in to your account.
  • Click on your profile icon at the top left corner of the ChatGPT page and select “Settings”. Select “Apps & Connectors” and scroll down to the end of the page to select “Advanced Settings.”

Settings → Apps & Connectors → Advanced → Developer Mode (ON)


ChatGPT Developer Mode provides full Model Context Protocol (MCP) client support for all tools, both read and write operations. This feature was announced in the first week of September 2025, marking a significant milestone in AI-developer integration. ChatGPT can perform write actions—creating repositories, updating databases, modifying files—all with proper confirmation modals for safety.

Key capabilities:

  • Full read/write MCP tool support
  • Custom connector creation
  • OAuth and authentication support
  • Explicit confirmations for write operations
  • Available on Plus, Pro, Business, Enterprise, and Edu plans

Step 2. Create MCP Gateway

This creates and starts the MCP Gateway container that ChatGPT will connect to.

docker mcp server init --template=chatgpt-app-basic test-chatgpt-app

Successfully initialized MCP server project in test-chatgpt-app (template: chatgpt-app-basic)
Next steps:
  cd test-chatgpt-app
  docker build -t test-chatgpt-app:latest .

Step 3. List out all the project files

ls -la
total 64
drwxr-xr-x@   9 ajeetsraina  staff   288 16 Nov 16:53 .
drwxr-x---+ 311 ajeetsraina  staff  9952 16 Nov 16:54 ..
-rw-r--r--@   1 ajeetsraina  staff   165 16 Nov 16:53 catalog.yaml
-rw-r--r--@   1 ajeetsraina  staff   371 16 Nov 16:53 compose.yaml
-rw-r--r--@   1 ajeetsraina  staff   480 16 Nov 16:53 Dockerfile
-rw-r--r--@   1 ajeetsraina  staff    88 16 Nov 16:53 go.mod
-rw-r--r--@   1 ajeetsraina  staff  2576 16 Nov 16:53 main.go
-rw-r--r--@   1 ajeetsraina  staff  2254 16 Nov 16:53 README.md
-rw-r--r--@   1 ajeetsraina  staff  6234 16 Nov 16:53 ui.html

Step 4. Examine the Compose file

services:
  gateway:
    image: docker/mcp-gateway                # Official Docker MCP Gateway image
    command:
      - --servers=test-chatgpt-app           # Name of the MCP server to expose
      - --catalog=/mcp/catalog.yaml          # Path to server catalog configuration
      - --transport=streaming                # Use streaming transport for real-time responses
      - --port=8811                           # Port the gateway listens on
    environment:
      - DOCKER_MCP_IN_CONTAINER=1            # Tells gateway it's running inside a container
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # Allows gateway to spawn sibling containers
      - ./catalog.yaml:/mcp/catalog.yaml           # Mount local catalog into container
    ports:
      - "8811:8811"                           # Expose gateway port to host


Step 5. Bringing up the compose services

docker compose up -d
[+] Running 2/2
 ✔ Network test-chatgpt-app_default      Created                                            0.0s
 ✔ Container test-chatgpt-app-gateway-1  Started  

docker ps | grep test-chatgpt-app
eb22b958e09c   docker/mcp-gateway   "/docker-mcp gateway…"   21 seconds ago   Up 20 seconds   0.0.0.0:8811->8811/tcp, [::]:8811->8811/tcp   test-chatgpt-app-gateway-1

Step 6. Verify the MCP session

curl http://localhost:8811/mcp
GET requires an active session

This response is expected and confirms the gateway is listening. A full MCP client (such as ChatGPT’s connector) first establishes a session through the MCP initialize handshake before making requests.

Step 7. Expose with Ngrok

Install ngrok and expose your local gateway. You will need to sign up for an ngrok account to obtain an auth token.

brew install ngrok
ngrok config add-authtoken <your_token_id>
ngrok http 8811

Note the public URL (like https://91288b24dc98.ngrok-free.app). Keep this terminal open.

Step 8. Connect ChatGPT

In ChatGPT, go to Settings → Apps & Connectors → Create.


Step 9. Create connector:

Settings → Apps & Connectors → Create

- Name: Test MCP Server
- Description: Testing Docker MCP Toolkit integration
- Connector URL: https://[YOUR_NGROK_URL]/mcp
- Authentication: None
- Click "Create"

Test it by asking ChatGPT to call the greet tool. If it responds, your connection works. Here’s how it looks:


Real-World Demo: Competitive Repricing Agent

Now that you’ve connected ChatGPT to Docker MCP Toolkit, let’s build something that showcases what only MCP can do—something ChatGPT’s native shopping feature cannot replicate.

We’ll create a Competitive Repricing Agent that checks competitor prices on demand, and when competitors are undercutting you, automatically adjusts your Stripe product prices to stay competitive, logs the repricing decision to SQLite, and pushes audit records to GitHub.

Time to build: 15 minutes  

Monthly cost: $0 for Stripe (test mode) + $1.50-$15 for the Firecrawl API

Infrastructure: $0 (SQLite is free)

The Challenge

E-commerce businesses face a constant dilemma:

  • Manual price checking across multiple retailers is time-consuming and error-prone
  • Comparing competitor prices and calculating optimal repricing requires multiple tools
  • Executing price changes across your payment infrastructure requires context-switching
  • Historical trend data is scattered across spreadsheets
  • Strategic insights require manual analysis and interpretation

Result: Missed opportunities, delayed reactions, and losing sales to competitors with better prices.

The Solution: On-Demand Competitive Repricing Agent

Docker MCP Toolkit transforms ChatGPT from an advisor into an autonomous agent that can actually execute. The architecture routes your requests through a secure MCP Gateway that orchestrates specialized tools: Firecrawl scrapes live prices, Stripe creates payment links and invoices, SQLite stores data on your infrastructure, and GitHub maintains your audit trail. Each tool runs in an isolated Docker container: secure, reproducible, and under your control.

The 7 MCP Servers We’ll Use

Server               Purpose              Why It Matters
-------------------  -------------------  --------------------------------------------------------
Firecrawl            Web scraping         Extracts live prices from any website
SQLite               Data persistence     Stores 30+ days of price history
Stripe               Payment management   Updates your product prices to match or beat competitors
GitHub               Version control      Audit trail for all reports
Sequential Thinking  Complex reasoning    Multi-step strategic analysis
Context7             Documentation        Up-to-date library docs for code generation
Node.js Sandbox      Calculations         Statistical analysis in isolated containers

The Complete MCP Workflow (Executes in under 3 minutes)


Step 1. Scrape and Store (30 seconds)

  • Agent scrapes live prices from Amazon, Walmart, and Best Buy 
  • Compares against your current Stripe product price

Step 2: Compare Against Your Price (15 seconds) 

  • Best Buy drops to $509.99—undercutting your $549.99
  • Agent calculates optimal repricing strategy
  • Determines new competitive price point

Step 3: Execute Repricing (30 seconds)

  • Updates your Stripe product with the new competitive price
  • Logs repricing decision to SQLite with full audit trail
  • Pushes pricing change report to GitHub

Step 4: Stay Competitive (instant)

  • Your product now priced competitively
  • Complete audit trail in your systems
  • Historical data ready for trend analysis
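
To make the decision in Step 2 concrete, here is a rough Python sketch of the undercut logic; the rounding convention and margin floor are assumptions, and the agent reasons this out in conversation rather than running this exact code:

def choose_new_price(your_price, competitor_prices, undercut_pct=0.01, margin_floor=450.00):
    """Return a new price if any competitor undercuts you, else None."""
    lowest_rival, lowest_price = min(competitor_prices.items(), key=lambda kv: kv[1])
    if lowest_price >= your_price:
        return None  # nobody is undercutting you; hold the current price
    candidate = round(lowest_price * (1 - undercut_pct), 2)  # roughly 1% below the lowest rival
    return max(candidate, margin_floor)  # never reprice below the margin floor

if __name__ == "__main__":
    rivals = {"Amazon": 699.00, "Walmart": 669.00, "Best Buy": 509.99}
    print(choose_new_price(549.99, rivals))  # ~504.89; the demo run settles on a $504.99 price point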

The Demo Setup: Enable Docker MCP Toolkit

Open Docker Desktop and enable the MCP Toolkit from the Settings menu.

To enable:

  1. Open Docker Desktop
  2. Go to Settings → Beta Features
  3. Toggle Docker MCP Toolkit ON
  4. Click Apply

Click MCP Toolkit in the Docker Desktop sidebar, then select Catalog to explore available servers.

For this demonstration, we’ll use seven MCP servers:

  • SQLite – RDBMS with advanced analytics, text and vector search, geospatial capabilities, and intelligent workflow automation
  • Stripe – Updates your product prices to match or beat competitors for automated repricing workflows
  • GitHub – Handles version control and deployment
  • Firecrawl – Web scraping and content extraction
  • Node.js Sandbox – Runs tests, installs dependencies, validates code (in isolated containers)
  • Sequential Thinking – Debugs failing tests and optimizes code
  • Context7 – Provides code documentation for LLMs and AI code editors

Let’s configure each one step by step.

1. Configure SQLite MCP Server

The SQLite MCP Server requires no external database setup. It manages database creation and queries through its 25 built-in tools.

To setup the SQLite MCP Server, follow these steps:

  1. Open Docker Desktop → access MCP Toolkit → Catalog
  2. Search “SQLite”
  3. Click + Add
  4. No configuration is needed; just click Start MCP Server

docker mcp server ls
# Should show sqlite-mcp-server as enabled

That’s it. ChatGPT can now create databases, tables, and run queries through conversation.

2. Configure Stripe MCP Server

The Stripe MCP server gives ChatGPT full access to payment infrastructure—listing products, managing prices, and updating your catalog to stay competitive.

Get Stripe API Key

  1. Go to dashboard.stripe.com
  2. Navigate to Developers → API Keys
  3. Copy your Secret Key:
    • Use sk_test_... for sandbox/testing
    • Use sk_live_... for production

Configure in Docker Desktop

  1. Open Docker Desktop → MCP Toolkit → Catalog
  2. Search for “Stripe”
  3. Click + Add
  4. Go to the Configuration tab
  5. Add your API key:
    • Field: stripe.api_key
    • Value: Your Stripe secret key
  6. Click Save and Start Server

Or via CLI:

docker mcp secret set STRIPE.API_KEY="sk_test_your_key_here"
docker mcp server enable stripe
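
For context, the repricing steps those Stripe MCP tools perform map onto a couple of Stripe API calls. Here is a rough sketch using the official stripe Python library (the key and product ID are placeholders); because Stripe prices are immutable, you create a new Price and point the product’s default_price at it:

import stripe

stripe.api_key = "sk_test_your_key_here"  # placeholder test-mode key

def reprice_product(product_id: str, new_amount_usd: float) -> str:
    """Create a new Price for the product and make it the default; returns the new price ID."""
    new_price = stripe.Price.create(
        product=product_id,
        unit_amount=int(round(new_amount_usd * 100)),  # Stripe amounts are in cents
        currency="usd",
    )
    stripe.Product.modify(product_id, default_price=new_price.id)
    return new_price.id

# Example with a placeholder product ID:
# reprice_product("prod_XXXXXXXXXXXX", 504.99)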

3. Configure GitHub Official MCP Server

The GitHub MCP server lets ChatGPT create repositories, manage issues, review pull requests, and more.

Option 1: OAuth Authentication (Recommended)

OAuth is the easiest and most secure method:

  1. In MCP Toolkit → Catalog, search “GitHub Official”
  2. Click + Add
  3. Go to the OAuth tab in Docker Desktop
  4. Find the GitHub entry
  5. Click “Authorize”
  6. Your browser opens GitHub’s authorization page
  7. Click “Authorize Docker” on GitHub
  8. You’re redirected back to Docker Desktop
  9. Return to the Catalog tab, find GitHub Official
  10. Click Start Server

Advantage: No manual token creation. Authorization happens through GitHub’s secure OAuth flow with automatic token refresh.

Option 2: Personal Access Token

If you prefer manual control or need specific scopes:

Step 1: Create GitHub Personal Access Token

  1. Go to https://github.com and sign in
  2. Click your profile picture → Settings
  3. Scroll to “Developer settings” in the left sidebar
  4. Click “Personal access tokens” → “Tokens (classic)”
  5. Click “Generate new token” → “Generate new token (classic)”
  6. Name it: “Docker MCP ChatGPT”
  7. Select scopes:
    • repo (Full control of repositories)
    • workflow (Update GitHub Actions workflows)
    • read:org (Read organization data)
  8. Click “Generate token”
  9. Copy the token immediately (you won’t see it again!)

Step 2: Configure in Docker Desktop

In MCP Toolkit → Catalog, find GitHub Official:

  1. Click + Add (if not already added)
  2. Go to the Configuration tab
  3. Select “Personal Access Token” as the authentication method
  4. Paste your token
  5. Click Start Server

Or via CLI:

docker mcp secret set GITHUB.PERSONAL_ACCESS_TOKEN="github_pat_YOUR_TOKEN_HERE"

Verify GitHub Connection

docker mcp server ls

# Should show github as enabled

4. Configure Firecrawl MCP Server

The Firecrawl MCP server gives ChatGPT powerful web scraping and search capabilities.

Get Firecrawl API Key

  1. Go to https://www.firecrawl.dev
  2. Create an account (or sign in)
  3. Navigate to API Keys in the sidebar
  4. Click “Create New API Key”
  5. Copy the API key

Configure in Docker Desktop

  1. Open Docker Desktop → MCP Toolkit → Catalog
  2. Search for “Firecrawl”
  3. Find Firecrawl in the results
  4. Click + Add
  5. Go to the Configuration tab
  6. Add your API key:
    • Field: firecrawl.api_key
    • Value: Your Firecrawl API key
  7. Leave all other entries blank
  8. Click Save and Add Server

Or via CLI:

docker mcp secret set FIRECRAWL.API_KEY="fc-your-api-key-here"
docker mcp server enable firecrawl

What You Get

6+ Firecrawl tools, including:

  • firecrawl_scrape – Scrape content from a single URL
  • firecrawl_crawl – Crawl entire websites and extract content
  • firecrawl_map – Discover all indexed URLs on a site
  • firecrawl_search – Search the web and extract content
  • firecrawl_extract – Extract structured data using LLM capabilities
  • firecrawl_check_crawl_status – Check crawl job status

5. Configure Node.js Sandbox MCP Server

The Node.js Sandbox enables ChatGPT to execute JavaScript in isolated Docker containers.

Note: This server requires special configuration because it uses Docker-out-of-Docker (DooD) to spawn containers.

Understanding the Architecture

The Node.js Sandbox implements the Docker-out-of-Docker (DooD) pattern by mounting /var/run/docker.sock. This gives the sandbox container access to the Docker daemon, allowing it to spawn ephemeral sibling containers for code execution.

When ChatGPT requests JavaScript execution:

  1. Sandbox container makes Docker API calls
  2. Creates temporary Node.js containers (with resource limits)
  3. Executes code in complete isolation
  4. Returns results
  5. Auto-removes the container

Security Note: Docker socket access is a privilege escalation vector (effectively granting root-level host access). This is acceptable for local development but requires careful consideration for production use.
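
To make the pattern concrete, here is a rough sketch of spawning an ephemeral sibling container with the Docker SDK for Python; this is an illustration of DooD, not the sandbox server’s actual implementation:

import docker

# Connects through the Docker socket (the same /var/run/docker.sock the sandbox mounts).
client = docker.from_env()

# Run a throwaway Node.js container with resource limits, capture its output,
# and remove it automatically when it exits.
output = client.containers.run(
    image="node:20-alpine",
    command=["node", "-e", "console.log(21 * 2)"],
    mem_limit="256m",        # cap memory
    nano_cpus=500_000_000,   # cap CPU at half a core
    network_disabled=True,   # no network access for the executed code
    remove=True,             # auto-remove the container afterwards
)
print(output.decode().strip())  # -> 42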

Add Via Docker Desktop

  1. MCP Toolkit → Catalog
  2. Search “Node.js Sandbox”
  3. Click + Add

Unfortunately, the Node.js Sandbox requires manual configuration that can’t be done entirely through the Docker Desktop UI. We’ll need to configure ChatGPT’s connector settings directly.

Prepare Output Directory

Create a directory for sandbox output:

# macOS/Linux
mkdir -p ~/Desktop/sandbox-output

# Windows
mkdir %USERPROFILE%\Desktop\sandbox-output

Configure Docker File Sharing

Ensure this directory is accessible to Docker:

  1. Docker Desktop → Settings → Resources → File Sharing
  2. Add ~/Desktop/sandbox-output (or your Windows equivalent)
  3. Click Apply & Restart

6. Configure Sequential Thinking MCP Server

The Sequential Thinking MCP server gives ChatGPT the ability to work through problems via dynamic, reflective thought sequences. Adding it is straightforward and doesn’t require an API key: just search for Sequential Thinking in the Catalog and add it to your MCP server list.

In Docker Desktop:

  1. Open Docker Desktop → MCP Toolkit → Catalog
  2. Search for “Sequential Thinking”
  3. Find Sequential Thinking in the results
  4. Click “Add MCP Server” to add without any configuration

The Sequential Thinking MCP server should now appear under “My Servers” in Docker MCP Toolkit.

What you get:

  • A single Sequential Thinking tool that includes:
    • sequentialthinking – A detailed tool for dynamic and reflective problem-solving through thoughts. This tool helps analyze problems through a flexible thinking process that can adapt and evolve. Each thought can build on, question, or revise previous insights as understanding deepens.

7. Configure Context7 MCP Server

The Context7 MCP server gives ChatGPT access to up-to-date code documentation for LLMs and AI code editors. Adding it is straightforward and doesn’t require an API key: just search for Context7 in the Catalog and add it to your MCP server list.

In Docker Desktop:

  1. Open Docker Desktop → MCP Toolkit → Catalog
  2. Search for “Context7”
  3. Find Context7 in the results
  4. Click “Add MCP Server” to add without any configuration

The Context7 MCP server should now appear under “My Servers” in Docker MCP Toolkit.

What you get:

  • 2 Context7 tools including:
    • get-library-docs – Fetches up-to-date documentation for a library.
    • resolve-library-id – Resolves a package/product name to a Context7-compatible library ID and returns a list of matching libraries. 

Verify that all the MCP servers are available and running:

docker mcp server ls

MCP Servers (7 enabled)

NAME                   OAUTH        SECRETS      CONFIG       DESCRIPTION
------------------------------------------------------------------------------------------------
context7               -            -            -            Context7 MCP Server -- Up-to-da...
fetch                  -            -            -            Fetches a URL from the internet...
firecrawl              -            ✓ done       partial    Official Firecrawl MCP Server...
github-official        ✓ done       ✓ done       -            Official GitHub MCP Server, by ...
node-code-sandbox      -            -            -            A Node.js–based Model Context P...
sequentialthinking     -            -            -            Dynamic and reflective problem-...
sqlite-mcp-server      -            -            -            The SQLite MCP Server transform...
stripe                 -            ✓ done       -            Interact with Stripe services o...

Tip: To use these servers, connect to a client (IE: claude/cursor) with docker mcp client connect <client-name>

Configuring ChatGPT App and Connector

Use the following Compose file to let ChatGPT discover all the tools in the Docker MCP Catalog:

services:
  gateway:
    image: docker/mcp-gateway
    command:
      - --catalog=/root/.docker/mcp/catalogs/docker-mcp.yaml
      - --servers=context7,firecrawl,github-official,node-code-sandbox,sequentialthinking,sqlite-mcp-server,stripe
      - --transport=streaming
      - --port=8811
    environment:
      - DOCKER_MCP_IN_CONTAINER=1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ~/.docker/mcp:/root/.docker/mcp:ro
    ports:
      - "8811:8811"


Bring the gateway back up with this Compose file (docker compose up -d); you should then be able to view all the MCP tools under ChatGPT Developer Mode.


Let’s Test it Out

Now we give ChatGPT its intelligence. Copy this system prompt and paste it into your ChatGPT conversation:

You are a Competitive Repricing Agent that monitors competitor prices, automatically adjusts your Stripe product prices, and provides strategic recommendations using 7 MCP servers: Firecrawl (web scraping), SQLite (database), Stripe (price management), GitHub (reports), Node.js Sandbox (calculations), Context7 (documentation), and Sequential Thinking (complex reasoning).

DATABASE SCHEMA

Products table: id (primary key), sku (unique), name, category, brand, stripe_product_id, stripe_price_id, current_price, created_at
Price_history table: id (primary key), product_id, competitor, price, original_price, discount_percent, in_stock, url, scraped_at
Price_alerts table: id (primary key), product_id, competitor, alert_type, old_price, new_price, change_percent, created_at
Repricing_log table: id, product_name, competitor_triggered, competitor_price, old_stripe_price, new_stripe_price, repricing_strategy, stripe_price_id, triggered_at, status

Indexes: idx_price_history_product on (product_id, scraped_at DESC), idx_price_history_competitor on (competitor)

WORKFLOW

On-demand check: Scrape (Firecrawl) → Store (SQLite) → Analyze (Node.js) → Report (GitHub)
Competitive repricing: Scrape (Firecrawl) → Compare to your price → Update (Stripe) → Log (SQLite) → Report (GitHub)

STRIPE REPRICING WORKFLOW

When competitor price drops below your current price:
1. list_products - Find your existing Stripe product
2. list_prices - Get current price for the product
3. create_price - Create new price to match/beat competitor (prices are immutable in Stripe)
4. update_product - Set the new price as default
5. Log the repricing decision to SQLite

Price strategies:
- "match": Set price equal to lowest competitor
- "undercut": Set price 1-2% below lowest competitor
- "margin_floor": Never go below your minimum margin threshold

Use Context7 when: Writing scripts with new libraries, creating visualizations, building custom scrapers, or needing latest API docs

Use Sequential Thinking when: Making complex pricing strategy decisions, planning repricing rules, investigating market anomalies, or creating strategic recommendations requiring deep analysis

EXTRACTION SCHEMAS

Amazon: title, price, list_price, rating, reviews, availability
Walmart: name, current_price, was_price, availability  
Best Buy: product_name, sale_price, regular_price, availability

RESPONSE FORMAT

Price Monitoring: Products scraped, competitors covered, your price vs competitors
Repricing Triggers: Which competitor triggered, price difference, strategy applied
Price Updated: New Stripe price ID, old vs new price, margin impact
Audit Trail: GitHub commit SHA, SQLite log entry, timestamp

TOOL ORCHESTRATION PATTERNS

Simple price check: Firecrawl → SQLite → Response
Trend analysis: SQLite → Node.js → Response
Strategy analysis: SQLite → Sequential Thinking → Response
Competitive repricing: Firecrawl → Compare → Stripe → SQLite → GitHub
Custom tool development: Context7 → Node.js → GitHub
Full intelligence report: Firecrawl → SQLite → Node.js → Sequential Thinking → GitHub

KEY USAGE PATTERNS

Use Stripe for: Listing products, listing prices, creating new prices, updating product default prices

Use Sequential Thinking for: Pricing strategy decisions (match, undercut, or hold), market anomaly investigations (why did competitor prices spike), multi-factor repricing recommendations

Use Context7 for: Getting documentation before coding, learning new libraries on-the-fly, ensuring code uses latest API conventions

Use Node.js for: Statistical calculations (moving averages, standard deviation, volatility), chart generation, margin calculations

BEST PRACTICES

Space web scraping requests 2 seconds apart to respect rate limits
Calculate price difference as (your_price - competitor_price)
Trigger repricing when competitor drops below your current price
Log all repricing decisions to SQLite with Stripe IDs for audit trail
Push pricing reports to GitHub for compliance
Always use Context7 before writing code with unfamiliar libraries
Respect margin floors—never reprice below minimum acceptable margin

COMMAND RECOGNITION PATTERNS

"Check X prices and stay competitive" → Full repricing pipeline: scrape → compare → if competitor lower: Stripe update + SQLite + GitHub
"Match competitor price for X" → Stripe: list_products → list_prices → create_price (matching)
"Undercut competitors on X" → Stripe: create_price (1-2% below lowest)
"Show price history" → SQLite query → format results
"Analyze pricing strategy for X" → Sequential Thinking analysis
"Why did competitor prices change" → Query data → Sequential Thinking investigation

CORE PRINCIPLE

When competitors change prices, don't just report—ACT. Update your Stripe prices to stay competitive, log decisions to SQLite, push audit records to GitHub. Transform competitor intelligence into automated repricing. All data stays on YOUR infrastructure.



Here’s the prompt:

Set up a competitive repricing agent:
1. I sell MacBook Air M3 on my store - current Stripe price is $549.99
2. Monitor competitor prices on Amazon, Walmart, Best Buy
3. When ANY competitor drops below my price:
   - Automatically update my Stripe product price to match or beat them
   - Use "undercut" strategy (price 1% below lowest competitor)
   - Log the repricing decision to SQLite
   - Push pricing change report to GitHub

Check prices now and reprice if needed.

Here’s how ChatGPT responds – orchestrating all 7 MCP servers in a single workflow:

Competitive Repricing Agent – Execution Complete

Within 3 minutes, ChatGPT orchestrated all 7 MCP servers to analyse the market and automatically reprice your product.

Repricing Triggered and Executed:

Metrics           Before                  After
----------------  ----------------------  ------------------
Your Price        $549.99                 $504.99
Price Change      -$45.00 (-8.2%)
Market Position   3rd (behind Best Buy)   #1 Lowest

Competitor Price Scan Results:

Retailer     Price      Vs. Your New Price
-----------  ---------  ------------------------
Your Store   $504.99    Market Leader
Best Buy     $509.99    +$5.00 (you beat by 1%)
Walmart      $669.00    +$164.01 higher
Amazon       $699.00    +$194.01 higher

What the Agent did (6 Steps):

  1. Installed SQLite3 and created database schema with 4 tables
  2. Created Stripe product (prod_TZaK0ARRJ5OJJ8) with initial $549.99 price 
  3. Scraped live competitor prices via Firecrawl from Amazon, Best Buy, and Walmart 
  4. Analysed pricing strategy with Sequential Thinking — detected Best Buy at $509.99 below your price
  5. Executed repricing — created new Stripe price at $504.99 (price_1ScRCVI9l1vmUkzn0hTnrLmW)
  6. Pushed audit report to GitHub (commit `64a488aa`)

All data stored on your infrastructure – not OpenAI’s servers. 
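
For reference, here is a rough sqlite3 sketch of the four-table schema described in the system prompt. The column types, constraints, and database file name are assumptions; the agent generates its own DDL through the SQLite MCP server:

import sqlite3

# Assumed DDL for the schema outlined in the system prompt; types are illustrative.
SCHEMA = """
CREATE TABLE IF NOT EXISTS products (
    id INTEGER PRIMARY KEY, sku TEXT UNIQUE, name TEXT, category TEXT, brand TEXT,
    stripe_product_id TEXT, stripe_price_id TEXT, current_price REAL,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS price_history (
    id INTEGER PRIMARY KEY, product_id INTEGER REFERENCES products(id), competitor TEXT,
    price REAL, original_price REAL, discount_percent REAL, in_stock INTEGER, url TEXT,
    scraped_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS price_alerts (
    id INTEGER PRIMARY KEY, product_id INTEGER REFERENCES products(id), competitor TEXT,
    alert_type TEXT, old_price REAL, new_price REAL, change_percent REAL,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS repricing_log (
    id INTEGER PRIMARY KEY, product_name TEXT, competitor_triggered TEXT, competitor_price REAL,
    old_stripe_price REAL, new_stripe_price REAL, repricing_strategy TEXT, stripe_price_id TEXT,
    triggered_at TEXT DEFAULT CURRENT_TIMESTAMP, status TEXT
);
CREATE INDEX IF NOT EXISTS idx_price_history_product ON price_history (product_id, scraped_at DESC);
CREATE INDEX IF NOT EXISTS idx_price_history_competitor ON price_history (competitor);
"""

conn = sqlite3.connect("repricing.db")
conn.executescript(SCHEMA)
conn.close()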

To check prices again, simply ask ChatGPT to ‘check MacBook Air M3 competitor prices’—it will scrape, compare, and reprice automatically. Run this check daily, weekly, or whenever you want competitive intelligence.

Explore the Full Demo

View the complete repricing report and audit trail on GitHub: https://github.com/ajeetraina/competitive-repricing-agent-mcp

Want true automation? This demo shows on-demand repricing triggered by conversation. For fully automated periodic checks, you could build a simple scheduler that calls the OpenAI API every few hours to trigger the same workflow, turning this into a hands-free competitive intelligence system.
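
Here is a rough sketch of that idea. It assumes OpenAI’s Responses API with its remote MCP tool support; the model name, gateway URL, and interval are placeholders, and the tool parameter names should be checked against the current API reference:

import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GATEWAY_URL = "https://YOUR-NGROK-URL.ngrok-free.app/mcp"  # placeholder
PROMPT = "Check MacBook Air M3 competitor prices and reprice if needed."

def run_repricing_check() -> None:
    # Assumes the Responses API's remote MCP tool type; verify field names against current docs.
    response = client.responses.create(
        model="gpt-4.1",  # placeholder model name
        input=PROMPT,
        tools=[{
            "type": "mcp",
            "server_label": "docker-mcp-gateway",
            "server_url": GATEWAY_URL,
            "require_approval": "never",  # unattended runs can't click confirmation prompts
        }],
    )
    print(response.output_text)

if __name__ == "__main__":
    while True:
        run_repricing_check()
        time.sleep(6 * 60 * 60)  # every six hours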

Wrapping Up

You’ve just connected ChatGPT to Docker MCP Toolkit and configured multiple MCP servers. What used to require context-switching between multiple tools, manual query writing, and hours of debugging now happens through natural conversation, safely executed in Docker containers.

This is the new paradigm for AI-assisted development. ChatGPT isn’t just answering questions anymore. It’s querying your databases, managing your repositories, scraping data, and executing code—all while Docker ensures everything stays secure and contained.

Ready to try it? Open Docker Desktop and explore the MCP Catalog. Start with SQLite, add GitHub, experiment with Firecrawl. Each server unlocks new capabilities.

The future of development isn’t writing every line of code yourself. It’s having an AI partner that can execute tasks across your entire stack securely, reproducibly, and at the speed of thought.

Learn More



from Docker https://ift.tt/Y7dmMhr
via IFTTT