Friday, December 12, 2025

New Advanced Phishing Kits Use AI and MFA Bypass Tactics to Steal Credentials at Scale

Cybersecurity researchers have documented four new phishing kits named BlackForce, GhostFrame, InboxPrime AI, and Spiderman that are capable of facilitating credential theft at scale.

BlackForce, first detected in August 2025, is designed to steal credentials and perform Man-in-the-Browser (MitB) attacks to capture one-time passwords (OTPs) and bypass multi-factor authentication (MFA). The kit is sold on Telegram forums for anywhere between €200 ($234) and €300 ($351).

The kit, according to Zscaler ThreatLabz researchers Gladis Brinda R and Ashwathi Sasi, has been used to impersonate over 11 brands, including Disney, Netflix, DHL, and UPS. It's said to be in active development.

"BlackForce features several evasion techniques with a blocklist that filters out security vendors, web crawlers, and scanners," the company said. "BlackForce remains under active development. Version 3 was widely used until early August, with versions 4 and 5 being released in subsequent months."

Phishing pages connected to the kit have been found to use JavaScript files with what has been described as "cache busting" hashes in their names (e.g., "index-[hash].js"), thereby forcing the victim's web browser to download the latest version of the malicious script instead of using a cached version.
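The "cache busting" trick is ordinary front-end engineering turned against the victim. A minimal sketch of the naming scheme (the hash length and the SHA-256 choice here are illustrative, not taken from BlackForce itself):

```python
import hashlib

def cache_busted_name(script: bytes) -> str:
    """Derive an "index-[hash].js" style filename from the script content.

    Because the hash changes whenever the payload changes, the browser sees
    a brand-new URL each time and cannot serve a stale cached copy.
    """
    digest = hashlib.sha256(script).hexdigest()[:8]
    return f"index-{digest}.js"

print(cache_busted_name(b"console.log('v4');"))
print(cache_busted_name(b"console.log('v5');"))  # different content, different name
```

The same hashed-filename convention is produced by legitimate bundlers, which is part of why it blends in.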

In a typical attack using the kit, victims who click on a link are redirected to a malicious phishing page, after which a server-side check filters out crawlers and bots, before serving them a page that's designed to mimic a legitimate website. Once the credentials are entered on the page, the details are captured and sent to a Telegram bot and a command-and-control (C2) panel in real-time using an HTTP client called Axios.
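On the defensive side, the Telegram Bot API's fixed URL shape makes this kind of exfiltration easy to spot in proxy logs. A sketch of the idea (the token pattern is approximate and the log lines are hypothetical; this is not from the Zscaler report):

```python
import re

# Telegram's Bot API is always reached at api.telegram.org/bot<token>/<method>,
# so outbound requests matching that shape are a useful exfiltration signal.
TELEGRAM_BOT_RE = re.compile(
    r"https://api\.telegram\.org/bot\d+:[A-Za-z0-9_-]{30,}/\w+"
)

def flag_telegram_exfil(log_lines):
    """Return proxy-log lines that call the Telegram Bot API."""
    return [line for line in log_lines if TELEGRAM_BOT_RE.search(line)]

logs = [
    "GET https://example.com/index-9f2c1ab4.js",
    "POST https://api.telegram.org/bot110201543:AAHdqTcvCH1vGWJxfSeofSAs0K5PALDsaw/sendMessage",
]
print(flag_telegram_exfil(logs))  # only the Bot API call is flagged
```

Legitimate business use of Telegram bots exists, so hits like these warrant review rather than automatic blocking.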

When the attacker attempts to log in with the stolen credentials on the legitimate website, an MFA prompt is triggered. At this stage, the MitB techniques are used to display a fake MFA authentication page to the victim's browser through the C2 panel. Should the victim enter the MFA code on the bogus page, it's collected and used by the threat actor to gain unauthorized access to their account.

"Once the attack is complete, the victim is redirected to the homepage of the legitimate website, hiding evidence of the compromise and ensuring the victim remains unaware of the attack," Zscaler said.

GhostFrame Fuels 1M+ Stealth Phishing Attacks


Another nascent phishing kit that has gained traction since its discovery in September 2025 is GhostFrame. At the heart of the kit's architecture is a simple HTML file that appears harmless while hiding its malicious behavior within an embedded iframe, which leads victims to a phishing login page to steal Microsoft 365 or Google account credentials.

"The iframe design also allows attackers to easily switch out the phishing content, try new tricks or target specific regions, all without changing the main web page that distributes the kit," Barracuda security researcher Sreyas Shetty said. "Further, by simply updating where the iframe points, the kit can avoid being detected by security tools that only check the outer page."

Attacks using the GhostFrame kit commence with typical phishing emails that claim to be about business contracts, invoices, and password reset requests, but are designed to take recipients to the fake page. The kit uses anti-analysis and anti-debugging to prevent attempts to inspect it using browser developer tools, and generates a random subdomain each time someone visits the site.

The visible outer pages come with a loader script that's responsible for setting up the iframe and responding to any messages from the HTML element. This can include changing the parent page's title to impersonate trusted services, modifying the site favicon, or redirecting the top-level browser window to another domain.

In the final stage, the victim is sent to a secondary page containing the actual phishing components through the iframe delivered via the constantly changing subdomain, thereby making it harder to block the threat. The kit also incorporates a fallback mechanism in the form of a backup iframe appended at the bottom of the page in the event the loader JavaScript fails or is blocked.
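One way defenders cope with rotating subdomains is to score how random the labels look: machine-generated subdomains have measurably higher character entropy than human-chosen ones. A hedged sketch (the threshold is illustrative and would need tuning against real traffic):

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character of the label; random strings score higher
    than dictionary words, a rough signal for generated subdomains."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_random(subdomain: str, threshold: float = 3.5) -> bool:
    # Short labels are skipped; the threshold is a starting point, not a rule.
    return len(subdomain) >= 10 and shannon_entropy(subdomain) >= threshold

print(looks_random("mail"))            # short dictionary word -> False
print(looks_random("x7kq9v2mzt4p8r"))  # high-entropy label -> True
```

Entropy alone produces false positives (CDN hostnames also look random), so in practice it is one feature among several.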

InboxPrime AI Phishing Kit Automates Email Attacks

If BlackForce follows the same playbook as other traditional phishing kits, InboxPrime AI goes a step further by leveraging artificial intelligence (AI) to automate mass mailing campaigns. It's advertised on a 1,300-member-strong Telegram channel under a malware-as-a-service (MaaS) subscription model for $1,000, granting purchasers a perpetual license and full access to the source code.

"It is designed to mimic real human emailing behavior and even leverages Gmail's web interface to evade traditional filtering mechanisms," Abnormal researchers Callie Baron and Piotr Wojtyla said.

"InboxPrime AI blends artificial intelligence with operational evasion techniques and promises cybercriminals near-perfect deliverability, automated campaign generation, and a polished, professional interface that mirrors legitimate email marketing software."

The platform employs a user-friendly interface that allows customers to manage accounts, proxies, templates, and campaigns, mirroring commercial email automation tools. One of its core features is a built-in AI-powered email generator, which can produce entire phishing emails, including the subject lines, in a manner that mimics legitimate business communication.

In doing so, these services further lower the barrier to entry for cybercrime, effectively eliminating the manual work that goes into drafting such emails. In its place, attackers can configure parameters, such as language, topic, or industry, email length, and desired tone, which the toolkit uses as inputs to generate convincing lures that match the chosen theme.

What's more, the dashboard enables users to save the produced email as a reusable template, complete with support for spintax to create variations of the email messages by substituting certain template variables. This ensures that no two phishing emails look identical and helps them bypass signature-based filters that look for similar content patterns.
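Spintax itself is a simple, well-known templating trick: each `{option1|option2}` group is replaced with one randomly chosen option. A minimal expander, as a sketch of the general technique rather than InboxPrime AI's actual code:

```python
import random
import re

SPIN_RE = re.compile(r"\{([^{}]*)\}")  # matches an innermost {a|b|c} group

def spin(template: str, rng: random.Random) -> str:
    """Expand spintax by resolving each {option|option} group to one choice.

    Innermost groups match first, so nested spintax also works.
    """
    while True:
        match = SPIN_RE.search(template)
        if match is None:
            return template
        choice = rng.choice(match.group(1).split("|"))
        template = template[: match.start()] + choice + template[match.end():]

rng = random.Random(7)
template = "{Hello|Hi|Dear} {customer|valued user}, your {invoice|statement} is ready."
print(spin(template, rng))
print(spin(template, rng))  # a second draw usually picks different options
```

With even a handful of groups the number of distinct variants grows multiplicatively, which is exactly what defeats naive signature matching.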

Some of the other supported features in InboxPrime AI are listed below:

  • A real-time spam diagnostic module that can analyze a generated email for common spam-filter triggers and suggest precise corrections
  • Sender identity randomization and spoofing, enabling attackers to customize display names for each Gmail session

"This industrialization of phishing has direct implications for defenders: more attackers can now launch more campaigns with more volume, without any corresponding increase in defender bandwidth or resources," Abnormal said. "This not only accelerates campaign launch time but also ensures consistent message quality, enables scalable, thematic targeting across industries, and empowers attackers to run professional-looking phishing operations without copywriting expertise."

Spiderman Creates Pixel-Perfect Replicas of European Banks

The fourth phishing kit that has come under the radar of cybersecurity researchers is Spiderman, which lets attackers target customers of dozens of European banks and online financial services providers, such as Blau, CaixaBank, Comdirect, Commerzbank, Deutsche Bank, ING, O2, Volksbank, Klarna, and PayPal.

"Spiderman is a full-stack phishing framework that replicates dozens of European banking login pages, and even some government portals," Varonis researcher Daniel Kelley said. "Its organized interface provides cybercriminals with an all-in-one platform to launch phishing campaigns, capture credentials, and manage stolen session data in real-time."

What's notable about the modular kit is that its seller is marketing the solution in a Signal messenger group that has about 750 members, marking a departure from Telegram. Germany, Austria, Switzerland, and Belgium are the primary targets of the phishing service.

As with BlackForce, Spiderman uses techniques such as ISP allowlisting, geofencing, and device filtering to ensure that only the intended targets can access the phishing pages. The toolkit is also equipped to capture cryptocurrency wallet seed phrases, intercept OTP and PhotoTAN codes, and trigger prompts to gather credit card data.

"This flexible, multi-step approach is particularly effective in European banking fraud, where login credentials alone often aren't enough to authorize transactions," Kelley explained. "After capturing credentials, Spiderman logs each session with a unique identifier so the attacker can maintain continuity through the entire phishing workflow."

Hybrid Salty-Tycoon 2FA Attacks Spotted

BlackForce, GhostFrame, InboxPrime AI, and Spiderman are the latest additions to a long list of phishing kits like Tycoon 2FA, Salty 2FA, Sneaky 2FA, Whisper 2FA, Cephas, and Astaroth (not to be confused with a Windows banking trojan of the same name) that have emerged over the past year.

In a report published earlier this month, ANY.RUN said it observed a new Salty-Tycoon hybrid that's already bypassing detection rules tuned to either of them. The new attack wave coincides with a sharp drop in Salty 2FA activity in late October 2025: early stages of the new samples match Salty 2FA, while later stages load code that reproduces Tycoon 2FA's execution chain.

"This overlap marks a meaningful shift; one that weakens kit-specific rules, complicates attribution, and gives threat actors more room to slip past early detection," the company said.

"Taken together, this provides clear evidence that a single phishing campaign, and, more interestingly, a single sample, contains traces of both Salty2FA and Tycoon, with Tycoon serving as a fallback payload once the Salty infrastructure stopped working for reasons that are still unclear."



from The Hacker News https://ift.tt/LElTI2R
via IFTTT

Is AI the New Insider Threat?


Insider threats have always been difficult to manage because they blur the line between trusted access and risky behavior. 

With generative AI, these risks aren’t tied to malicious insiders misusing credentials or bypassing controls; they come from well-intentioned employees simply trying to get work done faster. Whether it’s developers refactoring code, analysts summarizing long reports, or marketers drafting campaigns, the underlying motivation is almost always productivity and efficiency.

Unfortunately, that’s precisely what makes this risk so difficult to manage. Employees don’t see themselves as creating security problems; they’re solving bottlenecks. Security is an afterthought at best. 

This gap in perception creates an opportunity for missteps. By the time IT or security teams realize an AI tool has been widely adopted, patterns of risky use may already be deeply embedded in workflows.

Right now, AI use in the workplace is a bit of a free-for-all. And when everyone’s saying “it’s fun” and “everyone’s doing it”, it feels like being back in high school: no one wants to be *that* person telling them to stop because it’s risky. 

But, as security professionals, we do have a responsibility.

In this article, I explore the risks of unmanaged AI use, why existing security approaches fall short, and suggest one thing I believe we can do to balance users’ enthusiasm with responsibility (without being the party pooper).

Examples of Risky AI Use

The risks of AI use in the workplace usually fall into one of three categories:

  • Sensitive data breaches: A single pasted transcript, log, or API key may seem minor, but once outside company boundaries, it’s effectively gone, subject to provider retention and analysis.
  • Intellectual property leakage: Proprietary code, designs, or research drafts fed into AI tools can erode competitive advantage if they become training data or are exposed via prompt injection.
  • Regulatory and compliance violations: Uploading regulated data (HIPAA, GDPR, etc.) into unsanctioned AI systems can trigger fines or legal action, even if no breach occurs.

What makes these risks especially difficult is their subtlety. They emerge from everyday workflows, not obvious policy violations, which means they often go unnoticed until the damage is done.

Shadow AI

For years, Shadow IT has meant unsanctioned SaaS apps, messaging platforms, or file storage systems. 

Generative AI is now firmly in this category. 

Employees don’t think that pasting text into a chatbot like ChatGPT introduces a new system to the enterprise. In practice, however, they’re moving data into an external environment with no oversight, logging, or contractual protection.

What’s different about Shadow AI is the lack of visibility: unlike past technologies, it often leaves no obvious logs, accounts, or alerts for security teams to follow. With cloud file-sharing, security teams could trace uploads, monitor accounts created with corporate emails, or detect suspicious network traffic. 

But AI use often looks like normal browser activity. And while some security teams do scan what employees paste into web forms, those controls are limited. 

Which brings us to the real problem: we don’t really have the tools to manage AI use properly. Not yet, at least.

Controls Are Lacking

We all see people trying to get work done faster, and we know we should be putting some guardrails in place, but the options out there are either expensive, complicated, or still figuring themselves out.

The few available AI governance and security tools have clear limitations (even though their marketing might try to convince you otherwise):

  • Emerging AI governance platforms offer usage monitoring, policy enforcement, and guardrails around sensitive data, but they’re often expensive, complex, or narrowly focused.
  • Traditional controls like DLP and XDR catch structured data such as phone numbers, IDs, or internal customer records, but they struggle with more subtle, hard-to-detect information: source code, proprietary algorithms, or strategic documents.
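The gap is easy to see in miniature: a classic DLP rule is a regex over structured identifiers, and no comparable regex exists for proprietary logic. A toy illustration (the patterns are deliberately simplistic, not any vendor's actual rules):

```python
import re

# Rule-based DLP boils down to pattern matching, and decent patterns
# exist for structured identifiers like emails and phone numbers.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def dlp_hits(text: str):
    """Names of the patterns that fire on the given text."""
    return sorted(name for name, pat in PATTERNS.items() if pat.search(text))

print(dlp_hits("Call 555-867-5309 or mail jane@example.com"))
# There is no regex for "this pricing model is proprietary":
print(dlp_hits("def price_model(margin): return margin * 1.17"))
```

The second call returns nothing even though the pasted code may be far more sensitive than a phone number, which is the blind spot the bullet above describes.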

Even with these tools, the pace of AI adoption means security teams are often playing catch-up. The reality is that while controls are improving, they rarely keep up with how quickly employees are exploring AI.

Lessons from Past Security Blind Spots

Employees charging ahead with new tools while security teams scramble to catch up is not so different from the early days of cloud file sharing: employees flocked to Dropbox or Google Drive before IT had sanctioned solutions. Or think back to the rise of “bring your own device” (BYOD), when personal phones and laptops started connecting to corporate networks without clear policies in place.

Both movements promised productivity, but they also introduced risks that security teams struggled to manage retroactively.

Generative AI is repeating this pattern, only at a much faster rate. While cloud tools or BYOD require some setup, or at least a decision to connect a personal device, AI tools are available instantly in a browser. The barrier to entry is practically zero. That means adoption can spread through an organization long before security leaders even realize it’s happening. 

And as with cloud and BYOD, the sequence is familiar: employee adoption comes first, controls follow later, and those retroactive measures are almost always costlier, clumsier, and less effective than proactive governance.

So What Can We Do?

Remember: AI-driven insider risk isn’t about bad actors but about good people trying to be productive and efficient. (OK, maybe with a few lazy ones thrown in for good measure.) It’s ordinary rather than malicious behavior that’s unfortunately creating unnecessary exposure. 

That means there’s one measure every organization can implement immediately: educating employees.

Education works best when it’s practical and relatable. Think less “compliance checkbox,” and more “here’s a scenario you’ve probably been in.” That’s how you move from fuzzy awareness to actual behavior change.

Here are three steps that make a real difference:

  • Build awareness with real examples. Show how pasting code, customer details, or draft plans into a chatbot can have the same impact as posting them publicly. That’s the “aha” moment most people need.
  • Emphasize ownership. Employees already know they shouldn’t reuse passwords or click suspicious links; AI use should be framed in the same personal-responsibility terms. The goal is a culture where people feel they’re protecting the company, not just following rules.
  • Set clear boundaries. Spell out which categories of data are off-limits (PII, source code, unreleased products, regulated records) and offer safe alternatives like internal AI sandboxes. Clarity reduces guesswork and removes the temptation of convenience.

Until governance tools mature, these low-friction steps form the strongest defense we have.

If you can enable people to harness AI’s productivity while protecting your critical data, you reduce today’s risks. And you’re better prepared for the regulations and oversight that are certain to follow.



from Docker https://ift.tt/Imkd1lp
via IFTTT

The Good, the Bad and the Ugly in Cybersecurity – Week 50

The Good | U.S. & Spanish Officials Crack Down on Hacktivist & Identity Theft Activities

U.S. officials have charged Ukrainian national Victoria Dubranova for allegedly supporting Russian state-backed hacktivist groups in global critical infrastructure attacks. Extradited earlier this year, Dubranova faces trials in February and April 2026 tied to her suspected involvement in NoName057(16) and CyberArmyofRussia_Reborn (CARR), respectively.

The indictment states that NoName057(16) operated as a state-sanctioned effort involving multiple threat actors and a government-created IT center. Its tooling includes a custom DDoS tool called ‘DDoSia’, used to launch attacks against government and financial agencies as well as critical transportation infrastructure.

Prosecutors say Russia’s military intelligence service funded and directed CARR, a hacktivist group with over 75,000 Telegram followers and a long record of attacks. Damage to U.S. water systems, an ammonia leak at a Los Angeles facility, and targeting of nuclear and election infrastructure are all attributed to CARR. Dubranova faces up to 27 years on CARR-related charges and 5 years on NoName charges. Multi-million dollar rewards are in place for information on either threat group.

In Spain, authorities have arrested a 19-year-old hacker for the alleged theft and sale of 64 million records stolen from nine organizations. The suspect faces charges including cybercrime, unauthorized access, and privacy violations.

The investigation first started in June after breaches at the unnamed firms were reported. Police later confirmed that the suspect possessed millions of stolen records containing full names, addresses, emails, phone numbers, DNI numbers, and IBAN codes. He reportedly tried to sell the data on multiple forums using six accounts and five pseudonyms.

While officers have seized cryptocurrency wallets containing proceeds from the alleged sales, the total number of individuals affected remains unclear. Given the scale of the crime, Spanish authorities emphasize the seriousness of attempting to monetize stolen personal information.

The Bad | Malicious VS Code Extensions Deploy Stealthy Infostealer Malware

Two malicious Visual Studio Code extensions, Bitcoin Black and Codo AI, were recently discovered on Microsoft’s VS Code Marketplace, infecting developers with information-stealing malware. Disguised as a harmless color theme and an AI coding assistant, respectively, the extensions were published under the alias ‘BigBlack’. While download counts are still low at the time of this writing, both packages point to a clear intent to compromise developer environments.

Researchers note that earlier versions of Bitcoin Black used a PowerShell script to fetch a password-protected payload, briefly flashing a visible window that could alert users. The latest version now has a hidden batch script that quietly downloads a DLL and executable via curl, significantly reducing detection risk. Meanwhile, Codo AI delivers legitimate code-completion via ChatGPT or DeepSeek but embeds a malicious payload alongside these features.

Both extensions deploy the Lightshot screenshot tool paired with a malicious DLL that uses DLL hijacking to load an infostealer called runtime.exe. Once executed, the malware creates a directory under %APPDATA%\Local\ and begins exfiltrating sensitive data from system details and clipboard content to WiFi passwords, screenshots, installed software lists, and running processes. Finally, it launches Chrome and Edge in headless mode to extract cookies and hijack active sessions, targeting several crypto wallets including Phantom, MetaMask, and Exodus.

VirusTotal report for Lightshot.dll (Source: Koi.ai)

Microsoft has since removed both extensions from the Marketplace and the malicious DLL is already flagged by 29 of 72 antivirus engines on VirusTotal. Developers are advised to install extensions only from trusted publishers and stay alert to atypical extension behavior.

The Ugly | CyberVolk Resurfaces With New Telegram-Powered RaaS ‘VolkLocker’

CyberVolk, a pro-Russia hacktivist persona first identified in late 2024, resurfaced this August with a revamped ransomware-as-a-service (RaaS) offering known as VolkLocker (CyberVolk 2.x). SentinelLABS reported this week that the group has pivoted to using Telegram for both automation and customer interaction; however, operations are being undercut by payloads that retain artifacts, allowing victims to recover their files.

VolkLocker is written in Golang and supports both Windows and Linux. Payloads are distributed largely unprotected, with RaaS operators instructed to use UPX for packing. Builders must supply key configuration values including a Bitcoin address, Telegram bot token ID, encryption deadline, file extension, and more.

On execution, the ransomware attempts privilege escalation via the “ms-settings” UAC bypass, performs system and VM checks, and enumerates drives for encryption. A dynamic HTML ransom note then displays a 48-hour countdown, while a separate enforcement timer corrupts the system if deadlines or decryption attempts fail.

Telegram serves as the backbone of the RaaS, offering operators an administrative panel, victim enumeration, broadcast messaging, and optional extensions such as RAT and keylogger control. Recent ads show CyberVolk expanding into standalone tooling with tiered pricing models.

Decryption triggered via backed-up key file

The encryption routine uses AES-256 in GCM mode with a hardcoded master key. Crucially, the key is written in plaintext to a file in %TEMP%, alongside the victim’s unique identifier and the attacker’s Bitcoin address – an apparent leftover test artifact that allows victims to decrypt their own files.

Despite repeated account bans on Telegram, CyberVolk continues to evolve its services. The plaintext key flaw, however, reveals quality-control issues that limit the real-world impact of VolkLocker as-is. SentinelOne’s Singularity Platform detects and blocks behaviors and payloads linked to CyberVolk.



from SentinelOne https://ift.tt/ZfJs6Fz
via IFTTT

How to Add MCP Servers to ChatGPT with Docker MCP Toolkit

ChatGPT is great at answering questions and generating code. But here’s what it can’t do: execute that code, query your actual database, create a GitHub repo with your project, or scrape live data from websites. It’s like having a brilliant advisor who can only talk, never act.

Docker MCP Toolkit changes this completely. 

Here’s what that looks like in practice: You ask ChatGPT to check MacBook Air prices across Amazon, Walmart, and Best Buy. If competitor prices are lower than yours, it doesn’t just tell you; it acts: automatically adjusting your Stripe product price to stay competitive, logging the repricing decision to SQLite, and pushing the audit trail to GitHub. All through natural conversation. No manual coding. No copy-pasting scripts. Real execution.

“But wait,” you might say, “ChatGPT already has a shopping research feature.” True. But ChatGPT’s native shopping can only look up prices. Only MCP can execute: creating payment links, generating invoices, storing data in your database, and pushing to your GitHub. That’s the difference between an advisor and an actor.

By the end of this guide, you’ll build exactly this: a Competitive Repricing Agent that checks competitor prices on demand, compares them to yours, and automatically adjusts your Stripe product prices when competitors are undercutting you.

Here’s how the pieces fit together:

  • ChatGPT provides the intelligence: understanding your requests and determining what needs to happen
  • Docker MCP Gateway acts as the secure bridge: routing requests to the right tools
  • MCP Servers are the hands: executing actual tasks in isolated Docker containers

The result? ChatGPT can query your SQL database, manage GitHub repositories, scrape websites, process payments, run tests, and more—all while Docker’s security model keeps everything contained and safe.

In this guide, you’ll learn how to add seven MCP servers to ChatGPT by connecting to Docker MCP Toolkit. We’ll use a handful of must-have MCP servers: Firecrawl for web scraping, SQLite for data persistence, GitHub for version control, Stripe for payment processing, Node.js Sandbox for calculations, Sequential Thinking for complex reasoning, and Context7 for documentation. Then, you’ll build the Competitive Repricing Agent shown above, all through conversation.

What is Model Context Protocol (MCP)?

Before we dive into the setup, let’s clarify what MCP actually is.

Model Context Protocol (MCP) is the standardized way AI agents like ChatGPT and Claude connect to tools, APIs, and services. It’s what lets ChatGPT go beyond conversation and perform real-world actions like querying databases, deploying containers, analyzing datasets, or managing GitHub repositories.

In short: MCP is the bridge between ChatGPT’s reasoning and your developer stack. And Docker? Docker provides the guardrails that make it safe.
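Concretely, MCP messages are JSON-RPC 2.0: a tool invocation is a `tools/call` request whose params name the tool and its arguments. A minimal sketch of the wire format (the `greet` tool and its argument are placeholders):

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP `tools/call` request.

    MCP is JSON-RPC 2.0 underneath, so every request carries `jsonrpc`,
    `id`, `method`, and `params`.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

print(mcp_tool_call(1, "greet", {"name": "world"}))
```

In practice ChatGPT and the Gateway exchange these messages for you; seeing the shape just demystifies what "calling a tool" means.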

Why Use Docker MCP Toolkit with ChatGPT?

I’ve been working with AI tools for a while now, and this Docker MCP integration stands out for one reason: it actually makes ChatGPT productive.

Most AI integrations feel like toys: impressive demos that break in production. Docker MCP Toolkit is different. It creates a secure, containerized environment where ChatGPT can execute real tasks without touching your local machine or production systems.

Every action happens in an isolated container. Every MCP server runs in its own security boundary. When you’re done, containers are destroyed. No residue, no security debt, complete reproducibility across your entire team.

What ChatGPT Can and Can’t Do Without MCP

Let’s be clear about what changes when you add MCP.

Without MCP

You ask ChatGPT to build a system to regularly scrape product prices and store them in a database. ChatGPT responds with Python code, maybe 50 lines using BeautifulSoup and SQLite. Then you must copy the code, install dependencies, create the database schema, run the script manually, and set up a scheduler if you want it to run regularly.

Yes, ChatGPT remembers your conversation and can store memories about you. But those memories live on OpenAI’s servers—not in a database you control.

With MCP

You ask ChatGPT the same thing. Within seconds, it calls Firecrawl MCP to actually scrape the website. It calls SQLite MCP to create a database on your machine and store the data. It calls GitHub MCP to save a report to your repository. The entire workflow executes in under a minute.

Real data gets stored in a real database on your infrastructure. Real commits appear in your GitHub repository. Close ChatGPT, come back tomorrow, and ask “Show me the price trends.” ChatGPT queries your SQLite database and returns results instantly because the data lives in a database you own and control, not in ChatGPT’s conversation memory.

The data persists in your systems, ready to query anytime; no manual script execution required.
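The persistence layer need not be elaborate. A sketch of what it can look like, using a hypothetical `prices` table (the schema, retailer names, and figures are illustrative, not what the SQLite MCP server necessarily creates):

```python
import sqlite3

# Hypothetical schema for the price history the workflow persists.
conn = sqlite3.connect(":memory:")  # use a file path for real persistence
conn.execute(
    "CREATE TABLE prices (retailer TEXT, product TEXT, price_usd REAL, "
    "checked_at TEXT DEFAULT CURRENT_TIMESTAMP)"
)
rows = [
    ("Amazon", "MacBook Air", 999.00),
    ("Walmart", "MacBook Air", 989.00),
    ("Best Buy", "MacBook Air", 1049.00),
]
conn.executemany(
    "INSERT INTO prices (retailer, product, price_usd) VALUES (?, ?, ?)", rows
)

# "Show me the price trends" boils down to queries like this one.
lowest = conn.execute(
    "SELECT retailer, MIN(price_usd) FROM prices WHERE product = ?",
    ("MacBook Air",),
).fetchone()
print(lowest)  # ('Walmart', 989.0)
```

Because the database file lives on your machine, any later conversation (or any other tool) can run the same query against it.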

Why This Is Different from ChatGPT’s Native Shopping

ChatGPT recently released a shopping research feature that can track prices and make recommendations. Here’s what it can and cannot do:

What ChatGPT Shopping Research can do:

  • Track prices across retailers
  • Remember price history in conversation memory
  • Provide comparisons and recommendations

What ChatGPT Shopping Research cannot do:

  • Automatically update your product prices in Stripe
  • Execute repricing logic based on competitor changes
  • Store pricing data in your database (not OpenAI’s servers)
  • Push audit trails to your GitHub repository
  • Create automated competitive response workflows

With Docker MCP Toolkit, ChatGPT becomes a competitive pricing execution system. When you ask it to check prices and competitors are undercutting you, it doesn’t just inform you; it acts: updating your Stripe prices to match or beat competitors, logging decisions to your database, and pushing audit records to GitHub. The data lives in your infrastructure, not OpenAI’s servers.

Setting Up ChatGPT with Docker MCP Toolkit

Prerequisites

Before you begin, ensure you have:

  • A machine with at least 8 GB of RAM (16 GB recommended)
  • Docker Desktop installed
  • A ChatGPT Plus, Pro, Business, or Enterprise account
  • An ngrok account (the free tier works) for exposing the Gateway publicly

Step 1. Enable ChatGPT developer mode

  • Head over to ChatGPT and sign in to your account.
  • Click on your profile icon at the top left corner of the ChatGPT page and select “Settings”. Select “Apps and Connectors” and scroll down to the end of the page to select “Advanced Settings.”

Settings → Apps & Connectors → Advanced → Developer Mode (ON)


ChatGPT Developer Mode provides full Model Context Protocol (MCP) client support for all tools, both read and write operations. This feature was announced in the first week of September 2025, marking a significant milestone in AI-developer integration. ChatGPT can perform write actions—creating repositories, updating databases, modifying files—all with proper confirmation modals for safety.

Key capabilities:

  • Full read/write MCP tool support
  • Custom connector creation
  • OAuth and authentication support
  • Explicit confirmations for write operations
  • Available on Plus, Pro, Business, Enterprise, and Edu plans

Step 2. Create MCP Gateway

This scaffolds an MCP server project from a template. You’ll build the image and bring up the MCP Gateway container that ChatGPT connects to in the steps that follow.

docker mcp server init --template=chatgpt-app-basic test-chatgpt-app

Successfully initialized MCP server project in test-chatgpt-app (template: chatgpt-app-basic)
Next steps:
  cd test-chatgpt-app
  docker build -t test-chatgpt-app:latest .

Step 3. List out all the project files

ls -la
total 64
drwxr-xr-x@   9 ajeetsraina  staff   288 16 Nov 16:53 .
drwxr-x---+ 311 ajeetsraina  staff  9952 16 Nov 16:54 ..
-rw-r--r--@   1 ajeetsraina  staff   165 16 Nov 16:53 catalog.yaml
-rw-r--r--@   1 ajeetsraina  staff   371 16 Nov 16:53 compose.yaml
-rw-r--r--@   1 ajeetsraina  staff   480 16 Nov 16:53 Dockerfile
-rw-r--r--@   1 ajeetsraina  staff    88 16 Nov 16:53 go.mod
-rw-r--r--@   1 ajeetsraina  staff  2576 16 Nov 16:53 main.go
-rw-r--r--@   1 ajeetsraina  staff  2254 16 Nov 16:53 README.md
-rw-r--r--@   1 ajeetsraina  staff  6234 16 Nov 16:53 ui.html

Step 4. Examine the Compose file

services:
  gateway:
    image: docker/mcp-gateway                # Official Docker MCP Gateway image
    command:
      - --servers=test-chatgpt-app           # Name of the MCP server to expose
      - --catalog=/mcp/catalog.yaml          # Path to server catalog configuration
      - --transport=streaming                # Use streaming transport for real-time responses
      - --port=8811                           # Port the gateway listens on
    environment:
      - DOCKER_MCP_IN_CONTAINER=1            # Tells gateway it's running inside a container
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # Allows gateway to spawn sibling containers
      - ./catalog.yaml:/mcp/catalog.yaml           # Mount local catalog into container
    ports:
      - "8811:8811"                           # Expose gateway port to host


Step 5. Bringing up the compose services

docker compose up -d
[+] Running 2/2
 ✔ Network test-chatgpt-app_default      Created                                            0.0s
 ✔ Container test-chatgpt-app-gateway-1  Started  

docker ps | grep test-chatgpt-app
eb22b958e09c   docker/mcp-gateway   "/docker-mcp gateway…"   21 seconds ago   Up 20 seconds   0.0.0.0:8811->8811/tcp, [::]:8811->8811/tcp   test-chatgpt-app-gateway-1

Step 6. Verify the MCP session

curl http://localhost:8811/mcp
GET requires an active session
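
That rejection is expected: the streaming transport requires a session, which MCP clients open by POSTing a JSON-RPC `initialize` request before anything else. A minimal sketch of that first request body (the protocol version string is an assumption based on a recent MCP spec revision, and `curl-test` is a made-up client name):

```python
import json

def initialize_request(client_name: str = "curl-test") -> dict:
    """Build the JSON-RPC 'initialize' request an MCP client sends first."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",  # assumed spec revision
            "capabilities": {},
            "clientInfo": {"name": client_name, "version": "0.0.1"},
        },
    }

body = json.dumps(initialize_request())
print(body)
```

Seeing the "active session" error on a bare GET is therefore a reasonable smoke test that the gateway is up and listening on port 8811.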

Step 7. Expose with Ngrok

Install ngrok and expose your local gateway. You will need to sign up for an ngrok account to obtain an auth token.

brew install ngrok
ngrok config add-authtoken <your_token_id>
ngrok http 8811

Note the public URL (like https://91288b24dc98.ngrok-free.app). Keep this terminal open.

Step 8. Connect ChatGPT

In ChatGPT, go to Settings → Apps & Connectors → Create.


Step 9. Create connector:

Settings → Apps & Connectors → Create

- Name: Test MCP Server
- Description: Testing Docker MCP Toolkit integration
- Connector URL: https://[YOUR_NGROK_URL]/mcp
- Authentication: None
- Click "Create"

Test it by asking ChatGPT to call the greet tool. If it responds, your connection works. Here’s how it looks:


Real-World Demo: Competitive Repricing Agent

Now that you’ve connected ChatGPT to Docker MCP Toolkit, let’s build something that showcases what only MCP can do—something ChatGPT’s native shopping feature cannot replicate.

We’ll create a Competitive Repricing Agent that checks competitor prices on demand, and when competitors are undercutting you, automatically adjusts your Stripe product prices to stay competitive, logs the repricing decision to SQLite, and pushes audit records to GitHub.

Time to build: 15 minutes  

Monthly cost: Free Stripe (test mode) + $1.50-$15 (Firecrawl API)

Infrastructure: $0 (SQLite is free)

The Challenge

E-commerce businesses face a constant dilemma:

  • Manual price checking across multiple retailers is time-consuming and error-prone
  • Comparing competitor prices and calculating optimal repricing requires multiple tools
  • Executing price changes across your payment infrastructure requires context-switching
  • Historical trend data is scattered across spreadsheets
  • Strategic insights require manual analysis and interpretation

Result: Missed opportunities, delayed reactions, and losing sales to competitors with better prices.

The Solution: On-Demand Competitive Repricing Agent

Docker MCP Toolkit transforms ChatGPT from an advisor into an autonomous agent that can actually execute. The architecture routes your requests through a secure MCP Gateway that orchestrates specialized tools: Firecrawl scrapes live prices, Stripe creates payment links and invoices, SQLite stores data on your infrastructure, and GitHub maintains your audit trail. Each tool runs in an isolated Docker container: secure, reproducible, and under your control.

The 7 MCP Servers We’ll Use

| Server | Purpose | Why It Matters |
|---|---|---|
| Firecrawl | Web scraping | Extracts live prices from any website |
| SQLite | Data persistence | Stores 30+ days of price history |
| Stripe | Payment management | Updates your product prices to match or beat competitors |
| GitHub | Version control | Audit trail for all reports |
| Sequential Thinking | Complex reasoning | Multi-step strategic analysis |
| Context7 | Documentation | Up-to-date library docs for code generation |
| Node.js Sandbox | Calculations | Statistical analysis in isolated containers |

The Complete MCP Workflow (Executes in under 3 minutes)


Step 1: Scrape and Store (30 seconds)

  • Agent scrapes live prices from Amazon, Walmart, and Best Buy 
  • Compares against your current Stripe product price

Step 2: Compare Against Your Price (15 seconds) 

  • Best Buy drops to $509.99—undercutting your $549.99
  • Agent calculates optimal repricing strategy
  • Determines new competitive price point

Step 3: Execute Repricing (30 seconds)

  • Updates your Stripe product with the new competitive price
  • Logs repricing decision to SQLite with full audit trail
  • Pushes pricing change report to GitHub

Step 4: Stay Competitive (instant)

  • Your product now priced competitively
  • Complete audit trail in your systems
  • Historical data ready for trend analysis

The Demo Setup: Enable Docker MCP Toolkit

Open Docker Desktop and enable the MCP Toolkit from the Settings menu.

To enable:

  1. Open Docker Desktop
  2. Go to Settings → Beta Features
  3. Toggle Docker MCP Toolkit ON
  4. Click Apply

Click MCP Toolkit in the Docker Desktop sidebar, then select Catalog to explore available servers.

For this demonstration, we’ll use seven MCP servers:

  • SQLite – RDBMS with advanced analytics, text and vector search, geospatial capabilities, and intelligent workflow automation
  • Stripe – Updates your product prices to match or beat competitors for automated repricing workflows
  • GitHub – Handles version control and deployment
  • Firecrawl – Web scraping and content extraction
  • Node.js Sandbox – Runs tests, installs dependencies, validates code (in isolated containers)
  • Sequential Thinking – Debugs failing tests and optimizes code
  • Context7 – Provides code documentation for LLMs and AI code editors

Let’s configure each one step by step.

1. Configure SQLite MCP Server

The SQLite MCP Server requires no external database setup. It manages database creation and queries through its 25 built-in tools.

To setup the SQLite MCP Server, follow these steps:

  1. Open Docker Desktop → access MCP Toolkit → Catalog
  2. Search “SQLite”
  3. Click + Add
  4. No configuration needed, just click Start MCP Server
docker mcp server ls
# Should show sqlite-mcp-server as enabled

That’s it. ChatGPT can now create databases, tables, and run queries through conversation.
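
To see what that looks like under the hood, here is a minimal stdlib-`sqlite3` sketch of the kind of audit table the agent maintains (column names follow the `repricing_log` schema used in the system prompt later in this walkthrough; the inserted row mirrors the demo's Best Buy scenario):

```python
import sqlite3

# An in-memory database stands in for the file the MCP server manages.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE repricing_log (
        id INTEGER PRIMARY KEY,
        product_name TEXT,
        competitor_triggered TEXT,
        competitor_price REAL,
        old_stripe_price REAL,
        new_stripe_price REAL,
        repricing_strategy TEXT,
        status TEXT
    )
""")
conn.execute(
    "INSERT INTO repricing_log (product_name, competitor_triggered, competitor_price,"
    " old_stripe_price, new_stripe_price, repricing_strategy, status)"
    " VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("MacBook Air M3", "Best Buy", 509.99, 549.99, 504.99, "undercut", "completed"),
)
row = conn.execute(
    "SELECT competitor_triggered, new_stripe_price FROM repricing_log"
).fetchone()
print(row)
```

The difference with the MCP server is simply that ChatGPT issues these statements through tool calls instead of you writing them.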

2. Configure Stripe MCP Server

The Stripe MCP server gives ChatGPT full access to payment infrastructure—listing products, managing prices, and updating your catalog to stay competitive.

Get Stripe API Key

  1. Go to dashboard.stripe.com
  2. Navigate to Developers → API Keys
  3. Copy your Secret Key:
    • Use sk_test_... for sandbox/testing
    • Use sk_live_... for production

Configure in Docker Desktop

  1. Open Docker Desktop → MCP Toolkit → Catalog
  2. Search for “Stripe”
  3. Click + Add
  4. Go to the Configuration tab
  5. Add your API key:
    • Field: stripe.api_key
    • Value: Your Stripe secret key
  6. Click Save and Start Server

Or via CLI:

docker mcp secret set STRIPE.API_KEY="sk_test_your_key_here"
docker mcp server enable stripe

3. Configure GitHub Official MCP Server

The GitHub MCP server lets ChatGPT create repositories, manage issues, review pull requests, and more.

Option 1: OAuth Authentication (Recommended)

OAuth is the easiest and most secure method:

  1. In MCP Toolkit → Catalog, search “GitHub Official”
  2. Click + Add
  3. Go to the OAuth tab in Docker Desktop
  4. Find the GitHub entry
  5. Click “Authorize”
  6. Your browser opens GitHub’s authorization page
  7. Click “Authorize Docker” on GitHub
  8. You’re redirected back to Docker Desktop
  9. Return to the Catalog tab, find GitHub Official
  10. Click Start Server

Advantage: No manual token creation. Authorization happens through GitHub’s secure OAuth flow with automatic token refresh.

Option 2: Personal Access Token

If you prefer manual control or need specific scopes:

Step 1: Create GitHub Personal Access Token

  1. Go to https://github.com and sign in
  2. Click your profile picture → Settings
  3. Scroll to “Developer settings” in the left sidebar
  4. Click “Personal access tokens” → “Tokens (classic)”
  5. Click “Generate new token” → “Generate new token (classic)”
  6. Name it: “Docker MCP ChatGPT”
  7. Select scopes:
    • repo (Full control of repositories)
    • workflow (Update GitHub Actions workflows)
    • read:org (Read organization data)
  8. Click “Generate token”
  9. Copy the token immediately (you won’t see it again!)

Step 2: Configure in Docker Desktop

In MCP Toolkit → Catalog, find GitHub Official:

  1. Click + Add (if not already added)
  2. Go to the Configuration tab
  3. Select “Personal Access Token” as the authentication method
  4. Paste your token
  5. Click Start Server

Or via CLI:

docker mcp secret set GITHUB.PERSONAL_ACCESS_TOKEN="github_pat_YOUR_TOKEN_HERE"

Verify GitHub Connection

docker mcp server ls

# Should show github as enabled

4. Configure Firecrawl MCP Server

The Firecrawl MCP server gives ChatGPT powerful web scraping and search capabilities.

Get Firecrawl API Key

  1. Go to https://www.firecrawl.dev
  2. Create an account (or sign in)
  3. Navigate to API Keys in the sidebar
  4. Click “Create New API Key”
  5. Copy the API key

Configure in Docker Desktop

  1. Open Docker Desktop → MCP Toolkit → Catalog
  2. Search for “Firecrawl”
  3. Find Firecrawl in the results
  4. Click + Add
  5. Go to the Configuration tab
  6. Add your API key:
    • Field: firecrawl.api_key
    • Value: Your Firecrawl API key
  7. Leave all other entries blank
  8. Click Save and Add Server

Or via CLI:

docker mcp secret set FIRECRAWL.API_KEY="fc-your-api-key-here"
docker mcp server enable firecrawl

What You Get

6+ Firecrawl tools, including:

  • firecrawl_scrape – Scrape content from a single URL
  • firecrawl_crawl – Crawl entire websites and extract content
  • firecrawl_map – Discover all indexed URLs on a site
  • firecrawl_search – Search the web and extract content
  • firecrawl_extract – Extract structured data using LLM capabilities
  • firecrawl_check_crawl_status – Check crawl job status
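
For a sense of what `firecrawl_scrape` sends on your behalf, here is a hedged sketch of a request body for Firecrawl's hosted scrape endpoint (field names are assumptions based on Firecrawl's v1 REST API, and the product URL is made up; the MCP server wraps all of this for you):

```python
import json

def scrape_payload(url: str) -> dict:
    """Build a request body for a Firecrawl-style /v1/scrape call."""
    return {
        "url": url,
        "formats": ["markdown"],   # ask for LLM-friendly markdown output
        "onlyMainContent": True,   # strip nav/footer page chrome
    }

payload = scrape_payload("https://www.bestbuy.com/site/macbook-air-m3")
print(json.dumps(payload))
```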

5. Configure Node.js Sandbox MCP Server

The Node.js Sandbox enables ChatGPT to execute JavaScript in isolated Docker containers.

Note: This server requires special configuration because it uses Docker-out-of-Docker (DooD) to spawn containers.

Understanding the Architecture

The Node.js Sandbox implements the Docker-out-of-Docker (DooD) pattern by mounting /var/run/docker.sock. This gives the sandbox container access to the Docker daemon, allowing it to spawn ephemeral sibling containers for code execution.

When ChatGPT requests JavaScript execution:

  1. Sandbox container makes Docker API calls
  2. Creates temporary Node.js containers (with resource limits)
  3. Executes code in complete isolation
  4. Returns results
  5. Auto-removes the container
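
The sibling-container step can be sketched as the kind of `docker run` invocation a DooD sandbox issues against the host daemon it reaches through the mounted socket (the image tag and resource limits here are illustrative, not what the sandbox actually pins):

```python
# Assemble an ephemeral, resource-limited Node.js container invocation.
code = "console.log(6 * 7)"
cmd = [
    "docker", "run",
    "--rm",               # auto-remove the container when it exits
    "--network", "none",  # no network access inside the sandbox
    "--memory", "256m",   # cap memory for the ephemeral container
    "--cpus", "0.5",      # cap CPU
    "node:20-alpine",     # illustrative image tag
    "node", "-e", code,
]
print(" ".join(cmd))
```

Running this command (e.g. via `subprocess.run(cmd)`) would execute the snippet in isolation and leave nothing behind, which is exactly the lifecycle described above.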

Security Note: Docker socket access is a privilege escalation vector (effectively granting root-level host access). This is acceptable for local development but requires careful consideration for production use.

Add Via Docker Desktop

  1. MCP Toolkit → Catalog
  2. Search “Node.js Sandbox”
  3. Click + Add

Unfortunately, the Node.js Sandbox requires manual configuration that can’t be done entirely through the Docker Desktop UI. We’ll need to configure ChatGPT’s connector settings directly.

Prepare Output Directory

Create a directory for sandbox output:

# macOS/Linux
mkdir -p ~/Desktop/sandbox-output

# Windows
mkdir %USERPROFILE%\Desktop\sandbox-output

Configure Docker File Sharing

Ensure this directory is accessible to Docker:

  1. Docker Desktop → Settings → Resources → File Sharing
  2. Add ~/Desktop/sandbox-output (or your Windows equivalent)
  3. Click Apply & Restart

6. Configure Sequential Thinking MCP Server

The Sequential Thinking MCP server gives ChatGPT the ability to perform dynamic and reflective problem-solving through thought sequences. Adding it is straightforward – it doesn’t require an API key. Just search for Sequential Thinking in the Catalog and add it to your MCP server list.

In Docker Desktop:

  1. Open Docker Desktop → MCP Toolkit → Catalog
  2. Search for “Sequential Thinking”
  3. Find Sequential Thinking in the results
  4. Click “Add MCP Server” to add without any configuration

The Sequential Thinking MCP server should now appear under “My Servers” in Docker MCP Toolkit.

What you get:

  • A single Sequential Thinking tool that includes:
    • sequentialthinking – A detailed tool for dynamic and reflective problem-solving through thoughts. This tool helps analyze problems through a flexible thinking process that can adapt and evolve. Each thought can build on, question, or revise previous insights as understanding deepens.

7. Configure Context7 MCP Server

The Context7 MCP server enables ChatGPT to access up-to-date code documentation for LLMs and AI code editors. Adding it is straightforward. It doesn’t require an API key. Just search for Context7 in the Catalog and add it to your MCP server list.

In Docker Desktop:

  1. Open Docker Desktop → MCP Toolkit → Catalog
  2. Search for “Context7”
  3. Find Context7 in the results
  4. Click “Add MCP Server” to add without any configuration

The Context7 MCP server should now appear under “My Servers” in Docker MCP Toolkit.

What you get:

  • 2 Context7 tools including:
    • get-library-docs – Fetches up-to-date documentation for a library.
    • resolve-library-id – Resolves a package/product name to a Context7-compatible library ID and returns a list of matching libraries. 

Verify that all the MCP servers are available and running:

docker mcp server ls

MCP Servers (7 enabled)

NAME                   OAUTH        SECRETS      CONFIG       DESCRIPTION
------------------------------------------------------------------------------------------------
context7               -            -            -            Context7 MCP Server -- Up-to-da...
fetch                  -            -            -            Fetches a URL from the internet...
firecrawl              -            ✓ done       partial    Official Firecrawl MCP Server...
github-official        ✓ done       ✓ done       -            Official GitHub MCP Server, by ...
node-code-sandbox      -            -            -            A Node.js–based Model Context P...
sequentialthinking     -            -            -            Dynamic and reflective problem-...
sqlite-mcp-server      -            -            -            The SQLite MCP Server transform...
stripe                 -            ✓ done       -            Interact with Stripe services o...

Tip: To use these servers, connect to a client (IE: claude/cursor) with docker mcp client connect <client-name>

Configuring ChatGPT App and Connector

Use the following compose file in order to let ChatGPT discover all the tools under Docker MCP Catalog:

services:
  gateway:
    image: docker/mcp-gateway
    command:
      - --catalog=/root/.docker/mcp/catalogs/docker-mcp.yaml
      - --servers=context7,firecrawl,github-official,node-code-sandbox,sequentialthinking,sqlite-mcp-server,stripe
      - --transport=streaming
      - --port=8811
    environment:
      - DOCKER_MCP_IN_CONTAINER=1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ~/.docker/mcp:/root/.docker/mcp:ro
    ports:
      - "8811:8811"


By now, you should be able to view all the MCP tools under ChatGPT Developer Mode.


Let’s Test it Out

Now we give ChatGPT its intelligence. Copy this system prompt and paste it into your ChatGPT conversation:

You are a Competitive Repricing Agent that monitors competitor prices, automatically adjusts your Stripe product prices, and provides strategic recommendations using 7 MCP servers: Firecrawl (web scraping), SQLite (database), Stripe (price management), GitHub (reports), Node.js Sandbox (calculations), Context7 (documentation), and Sequential Thinking (complex reasoning).

DATABASE SCHEMA

Products table: id (primary key), sku (unique), name, category, brand, stripe_product_id, stripe_price_id, current_price, created_at
Price_history table: id (primary key), product_id, competitor, price, original_price, discount_percent, in_stock, url, scraped_at
Price_alerts table: id (primary key), product_id, competitor, alert_type, old_price, new_price, change_percent, created_at
Repricing_log table: id, product_name, competitor_triggered, competitor_price, old_stripe_price, new_stripe_price, repricing_strategy, stripe_price_id, triggered_at, status

Indexes: idx_price_history_product on (product_id, scraped_at DESC), idx_price_history_competitor on (competitor)

WORKFLOW

On-demand check: Scrape (Firecrawl) → Store (SQLite) → Analyze (Node.js) → Report (GitHub)
Competitive repricing: Scrape (Firecrawl) → Compare to your price → Update (Stripe) → Log (SQLite) → Report (GitHub)

STRIPE REPRICING WORKFLOW

When competitor price drops below your current price:
1. list_products - Find your existing Stripe product
2. list_prices - Get current price for the product
3. create_price - Create new price to match/beat competitor (prices are immutable in Stripe)
4. update_product - Set the new price as default
5. Log the repricing decision to SQLite

Price strategies:
- "match": Set price equal to lowest competitor
- "undercut": Set price 1-2% below lowest competitor
- "margin_floor": Never go below your minimum margin threshold

Use Context7 when: Writing scripts with new libraries, creating visualizations, building custom scrapers, or needing latest API docs

Use Sequential Thinking when: Making complex pricing strategy decisions, planning repricing rules, investigating market anomalies, or creating strategic recommendations requiring deep analysis

EXTRACTION SCHEMAS

Amazon: title, price, list_price, rating, reviews, availability
Walmart: name, current_price, was_price, availability  
Best Buy: product_name, sale_price, regular_price, availability

RESPONSE FORMAT

Price Monitoring: Products scraped, competitors covered, your price vs competitors
Repricing Triggers: Which competitor triggered, price difference, strategy applied
Price Updated: New Stripe price ID, old vs new price, margin impact
Audit Trail: GitHub commit SHA, SQLite log entry, timestamp

TOOL ORCHESTRATION PATTERNS

Simple price check: Firecrawl → SQLite → Response
Trend analysis: SQLite → Node.js → Response
Strategy analysis: SQLite → Sequential Thinking → Response
Competitive repricing: Firecrawl → Compare → Stripe → SQLite → GitHub
Custom tool development: Context7 → Node.js → GitHub
Full intelligence report: Firecrawl → SQLite → Node.js → Sequential Thinking → GitHub

KEY USAGE PATTERNS

Use Stripe for: Listing products, listing prices, creating new prices, updating product default prices

Use Sequential Thinking for: Pricing strategy decisions (match, undercut, or hold), market anomaly investigations (why did competitor prices spike), multi-factor repricing recommendations

Use Context7 for: Getting documentation before coding, learning new libraries on-the-fly, ensuring code uses latest API conventions

Use Node.js for: Statistical calculations (moving averages, standard deviation, volatility), chart generation, margin calculations

BEST PRACTICES

Space web scraping requests 2 seconds apart to respect rate limits
Calculate price difference as (your_price - competitor_price)
Trigger repricing when competitor drops below your current price
Log all repricing decisions to SQLite with Stripe IDs for audit trail
Push pricing reports to GitHub for compliance
Always use Context7 before writing code with unfamiliar libraries
Respect margin floors—never reprice below minimum acceptable margin

COMMAND RECOGNITION PATTERNS

"Check X prices and stay competitive" → Full repricing pipeline: scrape → compare → if competitor lower: Stripe update + SQLite + GitHub
"Match competitor price for X" → Stripe: list_products → list_prices → create_price (matching)
"Undercut competitors on X" → Stripe: create_price (1-2% below lowest)
"Show price history" → SQLite query → format results
"Analyze pricing strategy for X" → Sequential Thinking analysis
"Why did competitor prices change" → Query data → Sequential Thinking investigation

CORE PRINCIPLE

When competitors change prices, don't just report—ACT. Update your Stripe prices to stay competitive, log decisions to SQLite, push audit records to GitHub. Transform competitor intelligence into automated repricing. All data stays on YOUR infrastructure.


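
The match/undercut/margin_floor strategies in the prompt above boil down to a small pricing function. A minimal sketch, with the margin floor modeled as a clamp rather than a separate strategy, and simple two-decimal rounding as an assumption (the demo run lands on psychological $x.99 price points instead):

```python
def compute_reprice(current: float, competitor_low: float,
                    strategy: str = "undercut",
                    undercut_pct: float = 0.01,
                    margin_floor: float = 0.0) -> float:
    """Return a new price in response to the lowest competitor price."""
    if competitor_low >= current:
        return current  # nobody is undercutting us; hold the price
    if strategy == "match":
        new_price = competitor_low
    elif strategy == "undercut":
        new_price = competitor_low * (1 - undercut_pct)
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    # Never reprice below the minimum acceptable margin.
    return round(max(new_price, margin_floor), 2)

print(compute_reprice(549.99, 509.99, "match"))     # match Best Buy exactly
print(compute_reprice(549.99, 509.99, "undercut"))  # about 1% below Best Buy
```

In the full workflow the result of this calculation is what the agent passes to Stripe's `create_price` tool, since Stripe prices are immutable and a new Price object must replace the old default.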

Here’s the prompt:

Set up a competitive repricing agent:
1. I sell MacBook Air M3 on my store - current Stripe price is $549.99
2. Monitor competitor prices on Amazon, Walmart, Best Buy
3. When ANY competitor drops below my price:
   - Automatically update my Stripe product price to match or beat them
   - Use "undercut" strategy (price 1% below lowest competitor)
   - Log the repricing decision to SQLite
   - Push pricing change report to GitHub

Check prices now and reprice if needed.

Here’s how ChatGPT responds – orchestrating all 7 MCP servers in a single workflow:

Competitive Repricing Agent – Execution Complete

Within 3 minutes, ChatGPT orchestrated all 7 MCP servers to analyse the market and automatically reprice your product.

Repricing Triggered and Executed:

| Metrics | Before | After |
|---|---|---|
| Your Price | $549.99 | $504.99 |
| Price Change | | -$45.00 (-8.2%) |
| Market Position | 3rd (behind Best Buy) | #1 Lowest |

Competitor Price Scan Results:

| Retailer | Price | Vs. Your New Price |
|---|---|---|
| Your Store | $504.99 | Market Leader |
| Best Buy | $509.99 | +$5.00 (you beat by 1%) |
| Walmart | $669.00 | +$164.01 higher |
| Amazon | $699.00 | +$194.01 higher |

What the Agent did (6 Steps):

  1. Installed SQLite3 and created database schema with 4 tables
  2. Created Stripe product (prod_TZaK0ARRJ5OJJ8) with initial $549.99 price 
  3. Scraped live competitor prices via Firecrawl from Amazon, Best Buy, and Walmart 
  4. Analysed pricing strategy with Sequential Thinking — detected Best Buy at $509.99 below your price
  5. Executed repricing — created new Stripe price at $504.99 (price_1ScRCVI9l1vmUkzn0hTnrLmW)
  6. Pushed audit report to GitHub (commit `64a488aa`)

All data stored on your infrastructure – not OpenAI’s servers. 

To check prices again, simply ask ChatGPT to ‘check MacBook Air M3 competitor prices’ and it will scrape, compare, and reprice automatically. Run this check daily, weekly, or whenever you want competitive intelligence.

Explore the Full Demo

View the complete repricing report and audit trail on GitHub: https://github.com/ajeetraina/competitive-repricing-agent-mcp

Want true automation? This demo shows on-demand repricing triggered by conversation. For fully automated periodic checks, you could build a simple scheduler that calls the OpenAI API every few hours to trigger the same workflow—turning this into a hands-free competitive intelligence system.

Wrapping Up

You’ve just connected ChatGPT to Docker MCP Toolkit and configured multiple MCP servers. What used to require context-switching between multiple tools, manual query writing, and hours of debugging now happens through natural conversation, safely executed in Docker containers.

This is the new paradigm for AI-assisted development. ChatGPT isn’t just answering questions anymore. It’s querying your databases, managing your repositories, scraping data, and executing code—all while Docker ensures everything stays secure and contained.

Ready to try it? Open Docker Desktop and explore the MCP Catalog. Start with SQLite, add GitHub, experiment with Firecrawl. Each server unlocks new capabilities.

The future of development isn’t writing every line of code yourself. It’s having an AI partner that can execute tasks across your entire stack securely, reproducibly, and at the speed of thought.



from Docker https://ift.tt/Y7dmMhr
via IFTTT

The 10 Cloud Trends Set to Define 2026

After more than a decade spent building IaaS infrastructures, I’ve learned to recognise when a market is on the edge of real transformation, and 2025 has absolutely been one of those moments. It sets the stage for where I believe the industry is heading in 2026.

2025 has been a standout year for ShapeBlue and Apache CloudStack, with unprecedented adoption driven by the global shift away from VMware and the growing demand for open, sovereign, and cost-efficient private cloud platforms. CloudStack’s maturity and reliability, together with ShapeBlue’s leadership in enterprise delivery, support, and innovation, have made our ecosystem one of the most trusted alternatives available today.

But what have we learnt this year at ShapeBlue, and what do we see emerging for next year?  Here’s my top 10:

 

1.   Accelerated AI-driven Infrastructure Demand (but not all NVIDIA)

AI-driven infrastructure demand is accelerating and diversifying. Across a wide range of customers, we’re seeing rapid expansion of GPU-backed private cloud, but with a noticeable shift away from NVIDIA-only estates. Organisations increasingly want AI-capable infrastructure built on AMD, Intel, and emerging accelerators, driven by price considerations, power constraints, and the need for greater supply-chain resilience.

 

2.   ARM Moves Into the Mainstream Under Energy and Cost Pressures

Energy and cost pressures are pushing ARM servers into serious consideration. Customers operating large compute estates — or dealing with datacentre power limitations — are increasingly evaluating ARM as a mainstream option, particularly for predictable, scale-out workloads.

 

3.   VMware Licensing Turmoil Becomes the Biggest Catalyst for Cloud Strategy Shifts

VMware licensing disruption remains the largest trigger for cloud strategy change, particularly in the enterprise.  For many customers, the Broadcom/VMware shift is still forcing a strategic review of their private cloud architectures. As a result, we’re seeing strong momentum in KVM/CloudStack evaluations and more structured migration planning across the board. For a growing number of organisations, “cloud first” is quickly becoming “open source first.”

 

4.   Regulatory and Sovereignty Considerations Are Driving Projects from Day One

Sovereignty, compliance, and regulation are now front-loaded into every major project. NIS2, DORA, the EU Data Act, and UK regulatory frameworks are shaping procurement decisions. Customers increasingly want to understand portability, auditability, logging, and operational controls before even discussing feature sets.

Giles Panel

5.   Reversible, Low-Lock-In Multicloud Strategies on the Rise

Rather than adopting “multicloud for resilience,” customers are increasingly planning platforms that allow easy exit, stronger negotiation leverage, and reduced dependency on any single provider. Lower egress and switching costs are driving this shift in mindset.

 

6.   Small, Autonomous Edge Clouds See Growing Demand

Industrial, retail, and telco customers are actively exploring micro-cloud footprints that can operate independently, survive intermittent connectivity, and deploy compute closer to devices and sensors.

 

7.   Confidential Computing and Zero-Trust Models Enter Mainstream RFPs

We’re seeing more organisations — particularly in finance, pharma, and the public sector — requiring workload isolation assurances, encryption everywhere, and support for hardware-backed trusted execution environments.

 

8.   Sustainability Reporting Becomes Mandatory

Sustainability reporting is no longer optional. Customers face mandatory reporting on power, cooling, and carbon metrics, and they want infrastructure platforms that expose meaningful data, not just utilisation counters.

 

9.   Network Design Trends Favour Simpler, Scalable Fabrics

Many customers are consolidating towards Ethernet leaf-spine, EVPN/VXLAN, and intent-based tooling. They prefer architectures that minimise operational overhead while supporting modern multi-cloud networking patterns.

 

10.   Power and Cooling Constraints Are Now Core Design Drivers

In customer scoping sessions, power availability, high-density cooling, and plans for future GPU expansion are shaping capacity planning far more than rack space or raw compute density.

 

In summary, 2026 will be about flexibility, efficiency, and control. The next generation of cloud is open, resilient, and built for real-world constraints, and organisations that embrace these trends will be best positioned to lead. ShapeBlue and Apache CloudStack are positioned to help organisations navigate the new dynamics, delivering open, reliable, and enterprise-ready infrastructure that meets these emerging demands.

 

 

 

The post The 10 Cloud Trends Set to Define 2026 appeared first on ShapeBlue.



from CloudStack Consultancy & CloudStack... https://ift.tt/O9z3Cdl
via IFTTT

Securing GenAI in the Browser: Policy, Isolation, and Data Controls That Actually Work

The browser has become the main interface to GenAI for most enterprises: from web-based LLMs and copilots, to GenAI‑powered extensions and agentic browsers like ChatGPT Atlas. Employees are leveraging the power of GenAI to draft emails, summarize documents, work on code, and analyze data, often by copying/pasting sensitive information directly into prompts or uploading files.

Traditional security controls were not designed to understand this new prompt‑driven interaction pattern, leaving a critical blind spot where risk is highest. Security teams are simultaneously under pressure to enable more GenAI platforms because they clearly boost productivity.

Simply blocking AI is unrealistic. The more sustainable approach is to secure GenAI platforms where they are accessed by users: inside the browser session.

The GenAI browser threat model#

The GenAI‑in‑the‑browser threat model must be approached differently from traditional web browsing due to several key factors.

  1. Users routinely paste entire documents, code, customer records, or sensitive financial information into prompt windows. This can lead to data exposure or long‑term retention in the LLM system.
  2. File uploads create similar risks when documents are processed outside of approved data‑handling pipelines or regional boundaries, putting organizations in jeopardy of violating regulations.
  3. GenAI browser extensions and assistants often require broad permissions to read and modify page content. This includes data from internal web apps that users never intended to share with external services.
  4. Mixed use of personal and corporate accounts in the same browser profile complicates attribution and governance.

Taken together, these behaviors create a risk surface that is invisible to many legacy controls.

Policy: defining safe use in the browser#

A workable GenAI security strategy in the browser starts with a clear, enforceable policy that defines what "safe use" means.

CISOs should categorize GenAI tools into sanctioned services and allowed or disallowed public tools and applications, each with its own risk treatment and monitoring level. After setting clear boundaries, enterprises can then align browser‑level enforcement so that the user experience matches the policy intent.

A strong policy consists of specifications around which data types are never allowed in GenAI prompts or uploads. Common restricted categories can include regulated personal data, financial details, legal information, trade secrets, and source code. The policy language should also be concrete and consistently enforced by technical controls rather than relying on user judgment.

Behavioral guardrails that users can live with#

Beyond allowing or disallowing applications, enterprises need guardrails that define how employees should access and use GenAI in the browser. Requiring single sign‑on and corporate identities for all sanctioned GenAI services can improve visibility and control while reducing the likelihood that data ends up in unmanaged accounts.

Exception handling is equally important, as teams such as research or marketing may require more permissive GenAI access. Others, like finance or legal, may need stricter guardrails. A formal process for requesting policy exceptions, time‑based approvals, and review cycles allows flexibility. These behavioral elements make technical controls more predictable and acceptable to end users.

Isolation: containing risk without harming productivity#

Isolation is the second major pillar of securing browser-based GenAI use. Instead of a binary allow-or-block model, organizations can use targeted approaches to reduce risk while GenAI is being accessed. Dedicated browser profiles, for example, create boundaries between sensitive internal apps and GenAI‑heavy workflows.

Per‑site and per‑session controls provide another layer of defense. For example, a security team may allow GenAI access to designated "safe" domains while restricting the ability of AI tools and extensions to read content from high‑sensitivity applications like ERP or HR systems.

This approach enables employees to continue using GenAI for generic tasks while reducing the likelihood that confidential data is being shared with third‑party tools accessed inside the browser.
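A per‑site control like the one described above can be sketched as a simple domain policy lookup. The domain names and patterns below are hypothetical placeholders; the shape of the check, a default‑deny with an explicit safe list and an always‑blocked list, is the technique the text describes.

```python
from fnmatch import fnmatch

# Hypothetical per-site policy: domains where GenAI tools may read page
# content, and high-sensitivity apps (ERP, HR) where reading is blocked.
GENAI_SAFE_DOMAINS = ["docs.example.com", "*.wiki.example.com"]
RESTRICTED_APPS = ["erp.example.com", "hr.example.com"]

def genai_may_read(domain: str) -> bool:
    """Return True if AI tools/extensions may read content from this domain."""
    # Restricted apps win over any allow rule; everything else is default-deny.
    if any(fnmatch(domain, pattern) for pattern in RESTRICTED_APPS):
        return False
    return any(fnmatch(domain, pattern) for pattern in GENAI_SAFE_DOMAINS)
```

Because the default is deny, a newly adopted internal app is protected until someone deliberately adds it to the safe list.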

Data controls: precision DLP for prompts and pages#

Policy defines the intent, and isolation limits exposure. Data controls provide the precise enforcement mechanism at the browser edge. Inspecting user actions like copy/paste, drag‑and‑drop, and file uploads at the point where they leave trusted apps and enter GenAI interfaces is critical.

Effective implementations should support multiple enforcement modes: monitor‑only, user warnings, just‑in‑time education, and hard blocks for clearly prohibited data types. This tiered approach helps reduce user friction while preventing serious leaks.
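The tiered enforcement idea can be expressed as a mapping from detected data sensitivity to the least disruptive action that still satisfies policy. The sensitivity labels and tier assignments below are assumptions for illustration; the key design choice is that an unknown sensitivity fails closed.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"      # monitor-only: log the event, let it through
    EDUCATE = "educate"  # just-in-time education message, then permit
    WARN = "warn"        # explicit user warning, permit on confirmation
    BLOCK = "block"      # hard block for clearly prohibited data types

# Illustrative mapping from detected sensitivity to enforcement mode.
ENFORCEMENT_TIERS = {
    "none": Action.ALLOW,
    "internal": Action.EDUCATE,
    "confidential": Action.WARN,
    "regulated": Action.BLOCK,
}

def enforce(sensitivity: str) -> Action:
    """Pick the least disruptive action consistent with the data's sensitivity."""
    return ENFORCEMENT_TIERS.get(sensitivity, Action.BLOCK)  # fail closed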

Managing GenAI browser extensions#

GenAI‑powered browser extensions and side panels are a tricky risk category. Many offer convenient features such as page summarization, drafting replies, or data extraction, but doing so often requires extensive permissions to read and modify page content, keystrokes, and clipboard data. Without oversight, these extensions can become an exfiltration channel for sensitive information.

CISOs must be aware of the AI‑powered extensions in use at their enterprise, classify them by risk level, and enforce either a default‑deny policy or an allow list with restrictions. Using a Secure Enterprise Browser (SEB) for continuous monitoring of newly installed or updated extensions helps identify changes in permissions that may introduce new risks over time.
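Spotting a risky permission change across an extension update reduces to a set difference against a watch list. The high‑risk permission names below are drawn from the Chrome extension permission model (`<all_urls>`, `clipboardRead`, `webRequest`, `tabs`); which ones count as high risk is a policy assumption, not a fixed standard.

```python
# Permissions this (hypothetical) policy treats as high risk for exfiltration.
HIGH_RISK_PERMISSIONS = {"<all_urls>", "clipboardRead", "webRequest", "tabs"}

def new_risky_permissions(before: set[str], after: set[str]) -> set[str]:
    """Permissions added by an extension update that fall in the high-risk set.

    `before` and `after` are the permission sets from the manifest prior to
    and following the update; only newly granted high-risk ones are flagged.
    """
    return (after - before) & HIGH_RISK_PERMISSIONS
```

An update that quietly adds `clipboardRead` to a previously benign summarizer is exactly the kind of change this check surfaces for review.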

Identity, accounts, and session hygiene#

Identity and session handling are central to GenAI browser security because they determine which data belongs to which account. Enforcing SSO for sanctioned GenAI platforms and tying usage back to enterprise identities will simplify logging and incident response. Browser‑level controls can help prevent cross‑access between work and personal contexts. For example, organizations can block copying content from corporate apps into GenAI applications when the user has not been authenticated into a corporate account.
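The cross‑context copy rule at the end of that paragraph is, at its core, a two‑input decision. This is a deliberately minimal sketch of that rule; the boolean inputs stand in for signals a real browser control would derive from app classification and the SSO session state.

```python
def copy_allowed(source_is_corporate_app: bool, corporate_session: bool) -> bool:
    """Allow copying into a GenAI app unless the content comes from a
    corporate app and the user lacks an authenticated corporate (SSO) session."""
    return (not source_is_corporate_app) or corporate_session
```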

Visibility, telemetry, and analytics#

Ultimately, a working GenAI security program relies on accurate visibility into how employees are using browser-based GenAI tools. Tracking which domains and apps are accessed, what content is entered into prompts, and how often policies trigger warnings or blocks is all necessary. Aggregating this telemetry into existing logging and SIEM infrastructure allows security teams to identify patterns, outliers, and incidents.

Analytics built on this data can help highlight genuine risk. For example, enterprises can distinguish between non‑sensitive snippets and proprietary source code being entered into prompts. Using this information, SOC teams can refine rules, adjust isolation levels, and target training where it will provide the greatest impact.
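The telemetry feeding that analysis can be emitted as structured events the SIEM already knows how to ingest. The field names and the `browser-genai` source label below are illustrative assumptions; the technique is simply serializing each policy decision as a self‑describing JSON record.

```python
import json
import time

def genai_event(user: str, domain: str, action: str, policy_result: str) -> str:
    """Build a structured JSON event suitable for forwarding to a SIEM."""
    return json.dumps({
        "ts": int(time.time()),
        "source": "browser-genai",       # hypothetical event source name
        "user": user,
        "domain": domain,
        "action": action,                # e.g. "paste", "upload", "prompt"
        "policy_result": policy_result,  # e.g. "allow", "warn", "block"
    }, sort_keys=True)
```

Because every warn or block lands in the same pipeline as other security events, SOC teams can correlate GenAI activity with the rest of their detection logic instead of maintaining a separate silo.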

Change management and user education#

CISOs with successful GenAI security programs invest time in explaining the "why" behind restrictions. Sharing concrete scenarios that resonate with different roles reduces the chances of the program failing: developers need examples related to IP, while sales and support staff benefit from stories about customer trust and contract details. Delivering scenario‑based content to the relevant parties reinforces good habits at the right moments.

When employees understand that guardrails are designed to preserve their ability to use GenAI at scale, not hinder them, they are more likely to follow the guidelines. Aligning communications with broader AI governance initiatives helps position browser‑level controls as part of a cohesive strategy rather than an isolated one.

A practical 30‑day rollout approach#

Many organizations are looking for a pragmatic path to move from ad‑hoc browser-based GenAI usage to a structured, policy‑driven model.

One effective way of doing so is utilizing a Secure Enterprise Browser (SEB) platform that provides the necessary visibility and reach. With the right SEB, you can map the GenAI tools currently used within your enterprise and make policy decisions such as monitoring‑only or warn‑and‑educate modes for clearly risky behaviors. Over the following weeks, enforcement can be expanded to more users and higher‑risk data types, supported by FAQs and training.

By the end of a 30‑day period, many organizations can formalize their GenAI browser policy, integrate alerts into SOC workflows, and establish a cadence for adjusting controls as usage evolves.

Turning the browser into the GenAI control plane#

As GenAI continues to spread across SaaS apps and web pages, the browser remains the central interface through which most employees access them. These protections simply cannot be retrofitted into legacy perimeter controls. Enterprises achieve the best results by treating the browser as the primary control plane, which gives security teams meaningful ways to reduce data leakage and compliance risk while preserving the productivity benefits that make GenAI so powerful.

With well‑designed policies, measured isolation strategies, and browser‑native data protections, CISOs can move from reactive blocking to confident, large‑scale enablement of GenAI across their entire workforce.

To learn more about Secure Enterprise Browsers (SEB) and how they can secure GenAI use at your organization, speak to a Seraphic expert.

The Hacker News

This article is a contributed piece from one of our valued partners.


