Tuesday, May 7, 2024

APT42 Hackers Pose as Journalists to Harvest Credentials and Access Cloud Data

The Iranian state-backed hacking outfit called APT42 is making use of enhanced social engineering schemes to infiltrate target networks and cloud environments.

Targets of the attack include Western and Middle Eastern NGOs, media organizations, academia, legal services and activists, Google Cloud subsidiary Mandiant said in a report published last week.

"APT42 was observed posing as journalists and event organizers to build trust with their victims through ongoing correspondence, and to deliver invitations to conferences or legitimate documents," the company said.

"These social engineering schemes enabled APT42 to harvest credentials and use them to gain initial access to cloud environments. Subsequently, the threat actor covertly exfiltrated data of strategic interest to Iran, while relying on built-in features and open-source tools to avoid detection."

APT42 (aka Damselfly and UNC788), first documented by the company in September 2022, is an Iranian state-sponsored cyber espionage group tasked with conducting information collection and surveillance operations against individuals and organizations of strategic interest to the Iranian government.

It's assessed to be a subset of another infamous threat group tracked as APT35, which is also known by various names, including CALANQUE, CharmingCypress, Charming Kitten, ITG18, Mint Sandstorm (formerly Phosphorus), Newscaster, TA453, and Yellow Garuda.

Both groups are affiliated with Iran's Islamic Revolutionary Guard Corps (IRGC) but operate with different sets of goals.

While Charming Kitten focuses more on long-term, malware-intensive operations targeting organizations and companies in the U.S. and the Middle East to steal data, APT42 targets specific individuals and organizations that the regime has its eye on for the purposes of domestic politics, foreign policy, and regime stability.

Earlier this January, Microsoft linked the Charming Kitten actor to phishing campaigns targeting high-profile individuals working on Middle Eastern affairs at universities and research organizations in Belgium, France, Gaza, Israel, the U.K., and the U.S. since November 2023.

Attacks mounted by the group are known to involve extensive credential harvesting operations to gather Microsoft, Yahoo, and Google credentials via spear-phishing emails containing malicious links to lure documents that redirect the recipients to a fake login page.

In these campaigns, the adversary has been observed sending emails from domains typosquatting the original entities and masquerading as news outlets; legitimate services like Dropbox, Google Meet, LinkedIn, and YouTube; and mailer daemons and URL shortening tools.

The credential-grabbing attacks are complemented by data exfiltration activities targeting the victims' public cloud infrastructure to get hold of documents that are of interest to Iran, but only after gaining their trust – something Charming Kitten is well-versed in.

Cyber Espionage Campaigns
Known malware families associated with APT42

"These operations began with enhanced social engineering schemes to gain the initial access to victim networks, often involving ongoing trust-building correspondence with the victim," Mandiant said.

"Only then the desired credentials are acquired and multi-factor authentication (MFA) is bypassed, by serving a cloned website to capture the MFA token (which failed) and later by sending MFA push notifications to the victim (which succeeded)."

In an effort to cover up its tracks and blend in, the adversary has been found relying on publicly available tools, exfiltrating files to a OneDrive account masquerading as the victim's organization, and employing VPN and anonymized infrastructure to interact with the compromised environment.

Also used by APT42 are two custom backdoors that act as a jumping point to deploy additional malware or to manually execute commands on the device -

  • NICECURL (aka BASICSTAR) - A backdoor written in VBScript that can download additional modules to be executed, including data mining and arbitrary command execution
  • TAMECAT - A PowerShell toehold that can execute arbitrary PowerShell or C# content

It's worth noting that NICECURL was previously dissected by cybersecurity company Volexity in February 2024 in connection with a series of cyber attacks aimed at Middle East policy experts.

"APT42 has remained relatively focused on intelligence collection and targeting similar victimology, despite the Israel-Hamas war that has led other Iran-nexus actors to adapt by conducting disruptive, destructive, and hack-and-leak activities," Mandiant concluded.

"The methods deployed by APT42 leave a minimal footprint and might make the detection and mitigation of their activities more challenging for network defenders."


China-Linked Hackers Used ROOTROT Webshell in MITRE Network Intrusion

May 07, 2024 | Newsroom | Vulnerability / Network Security

The MITRE Corporation has offered more details into the recently disclosed cyber attack, stating that the first evidence of the intrusion now dates back to December 31, 2023.

The attack, which came to light last month, singled out MITRE's Networked Experimentation, Research, and Virtualization Environment (NERVE) through the exploitation of two Ivanti Connect Secure zero-day vulnerabilities tracked as CVE-2023-46805 and CVE-2024-21887.

"The adversary maneuvered within the research network via VMware infrastructure using a compromised administrator account, then employed a combination of backdoors and web shells to maintain persistence and harvest credentials," MITRE said.

While the organization had previously disclosed that the attackers performed reconnaissance of its networks starting in January 2024, the latest technical deep dive puts the earliest signs of compromise in late December 2023, with the adversary dropping a Perl-based web shell called ROOTROT for initial access.

ROOTROT, per Google-owned Mandiant, is embedded into a legitimate Connect Secure .ttc file located at "/data/runtime/tmp/tt/setcookie.thtml.ttc" and is the handiwork of a China-nexus cyber espionage cluster dubbed UNC5221, which is also linked to other web shells such as BUSHWALK, CHAINLINE, FRAMESTING, and LIGHTWIRE.

Following the web shell deployment, the threat actor profiled the NERVE environment and established communication with multiple ESXi hosts, ultimately establishing control over MITRE's VMware infrastructure and dropping a Golang backdoor called BRICKSTORM and a previously undocumented web shell referred to as BEEFLUSH.

"These actions established persistent access and allowed the adversary to execute arbitrary commands and communicate with command-and-control servers," MITRE researcher Lex Crumpton explained. "The adversary utilized techniques such as SSH manipulation and execution of suspicious scripts to maintain control over the compromised systems."

Further analysis has determined that the threat actor also deployed another web shell known as WIREFIRE (aka GIFTEDVISITOR) a day after the public disclosure of the twin flaws on January 11, 2024, to facilitate covert communication and data exfiltration.

Besides using the BUSHWALK web shell for transmitting data from the NERVE network to command-and-control infrastructure on January 19, 2024, the adversary is said to have attempted lateral movement and maintained persistence within NERVE from February to mid-March.

"The adversary executed a ping command for one of MITRE's corporate domain controllers and attempted to move laterally into MITRE systems but was unsuccessful," Crumpton said.


Creating AI-Enhanced Document Management with the GenAI Stack

Organizations must deal with countless reports, contracts, research papers, and other documents, but managing, deciphering, and extracting pertinent information from these documents can be challenging and time-consuming. In such scenarios, an AI-powered document management system can offer a transformative solution.

Developing Generative AI (GenAI) technologies with Docker offers endless possibilities: not only summarizing lengthy documents, but also categorizing them, generating detailed descriptions, and even surfacing insights you may have missed through prompts. This multi-faceted approach, powered by AI, changes the way organizations interact with textual data, saving both time and effort.

In this article, we’ll look at how to integrate Alfresco, a robust document management system, with the GenAI Stack to open up possibilities such as enhancing document analysis, automating content classification, transforming search capabilities, and more.


High-level architecture of Alfresco document management 

Alfresco is an open source content management platform designed to help organizations manage, share, and collaborate on digital content and documents. It provides a range of features for document management, workflow automation, collaboration, and records management.

You can find the Alfresco Community platform on Docker Hub. The Docker image for the UI, named alfresco-content-app, has more than 10 million pulls, while other core platform services have more than 1 million pulls.

Alfresco Community platform (Figure 1) provides various open source technologies to create a Content Service Platform, including:

  • Alfresco content repository: The core of Alfresco, responsible for storing and managing content. This component exposes a REST API to perform operations in the repository.
  • Database: PostgreSQL, among others, serves as the database management system, storing the metadata associated with a document.
  • Apache Solr: Enhancing search capabilities, Solr enables efficient content and metadata searches within Alfresco.
  • Apache ActiveMQ: As an open source message broker, ActiveMQ enables asynchronous communication between various Alfresco services. Its Messaging API handles asynchronous messages in the repository.
  • UI reference applications: Share and Alfresco Content App provide intuitive interfaces for user interaction and accessibility.

For detailed instructions on deploying Alfresco Community with Docker Compose, refer to the official Alfresco documentation.

 Illustration of Alfresco Community platform architecture, showing PostgreSQL, ActiveMQ, repo, share, Alfresco content app, and more.
Figure 1: Basic diagram for Alfresco Community deployment with Docker.

Why integrate Alfresco with the GenAI Stack?

Integrating Alfresco with the GenAI Stack unlocks a powerful suite of GenAI services that significantly enhance document management capabilities. This integration offers several benefits:

  • Use different deployments according to resources available: Docker allows you to easily switch between different Large Language Models (LLMs) of different sizes. Additionally, if you have access to GPUs, you can deploy a container with a GPU-accelerated LLM for faster inference. Conversely, if GPU resources are limited or unavailable, you can deploy a container with a CPU-based LLM.
  • Portability: Docker containers encapsulate the GenAI service, its dependencies, and runtime environment, ensuring consistent behavior across different environments. This portability allows you to develop and test the AI model locally and then deploy it seamlessly to various platforms.
  • Production-ready: The stack provides support for GPU-accelerated computing, making it well suited for deploying GenAI models in production environments. Docker’s declarative approach to deployment allows you to define the desired state of the system and let Docker handle the deployment details, ensuring consistency and reliability.
  • Integration with applications: Docker facilitates integration between GenAI services and other applications deployed as containers. You can deploy multiple containers within the same Docker environment and orchestrate communication between them using Docker networking. This integration enables you to build complex systems composed of microservices, where GenAI capabilities can be easily integrated into larger applications or workflows.

How does it work?

Alfresco provides two main APIs for integration purposes: the Alfresco REST API and the Alfresco Messaging API (Figure 2).

  • The Alfresco REST API provides a set of endpoints that allow developers to interact with Alfresco content management functionalities over HTTP. It enables operations such as creating, reading, updating, and deleting documents, folders, users, groups, and permissions within Alfresco. 
  • The Alfresco Messaging API provides a messaging infrastructure for asynchronous communication built on top of Apache ActiveMQ and follows the publish-subscribe messaging pattern. Integration with the Messaging API allows developers to build event-driven applications and workflows that respond dynamically to changes and updates within the Alfresco Repository.

The Alfresco Repository can be updated with the enrichment data provided by the GenAI Service using both APIs:

  • The Alfresco REST API can retrieve metadata and content from existing repository nodes to send to the GenAI Service, and then write the results back to the node (a Python sketch of such a call follows Figure 2).
  • The Alfresco Messaging API can be used to consume events for new and updated nodes in the repository and enrich them with the results from the GenAI Service.
 Illustration showing integration of two main Alfresco APIs: REST API and Messaging API.
Figure 2: Alfresco provides two main APIs for integration purposes: the Alfresco REST API and the Alfresco Messaging API.
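
To make the REST-based path concrete, here is a minimal Python sketch (using the requests library) of how enrichment results could be written back to a repository node through the Alfresco public REST API. It is illustrative only and not the actual alfresco-ai-applier code: the host, credentials, node ID, and genai:* property names are assumptions for the example.

import requests

# Assumptions for illustration; the real integration may use different names,
# and the node is assumed to already carry the custom GenAI aspect.
ALFRESCO = "http://localhost:8080"
AUTH = ("admin", "admin")
NODE_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical node ID

# Enrichment results as returned by the GenAI service (shape assumed for the example)
enrichment = {
    "summary": "The text discusses...",
    "tags": "Golang, Merkle, Difficulty",
    "model": "mistral",
}

# Update the node's metadata via the Alfresco public REST API (PUT /nodes/{id})
response = requests.put(
    f"{ALFRESCO}/alfresco/api/-default-/public/alfresco/versions/1/nodes/{NODE_ID}",
    auth=AUTH,
    json={
        "properties": {
            "genai:summary": enrichment["summary"],  # property names are assumptions
            "genai:tags": enrichment["tags"],
            "genai:llm": enrichment["model"],
        }
    },
)
response.raise_for_status()
print("Updated node:", response.json()["entry"]["id"])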

Technically, the Docker deployment includes both the Alfresco and GenAI Stack platforms running over the same Docker network (Figure 3). 

The GenAI Stack works as a REST API service with endpoints available at genai:8506, whereas Alfresco uses a REST API client (named alfresco-ai-applier) and a Messaging API client (named alfresco-ai-listener) to integrate with the AI services. Both clients can also be run as containers.

 Illustration of deployment architecture, showing Alfresco and GenAI Stack.
Figure 3: Deployment architecture for Alfresco integration with GenAI Stack services.

The GenAI Stack service provides the following endpoints:

  • summary: Returns a summary of a document together with several tags. It allows some customization, like the language of the response, the number of words in the summary and the number of tags.
  • classify: Returns a term from a list that best matches the document. It requires a list of terms as input in addition to the document to be classified.
  • prompt: Replies to a custom user prompt using retrieval-augmented generation (RAG) for the document to limit the scope of the response.
  • describe: Returns a text description for an input picture.

The implementation of the GenAI Stack services loads the document text as chunks into the Neo4j vector database to improve QA chains with embeddings and prevent hallucinations in the response. Pictures are processed using an LLM with a visual encoder (LLaVA) to generate descriptions (Figure 4). Note that the Docker GenAI Stack allows for the use of multiple LLMs for different goals.

 Illustration of GenAI Stack Services, showing Document Loader, LLM embeddings, VectorDB, QA Chain, and more.
Figure 4: The GenAI Stack services are implemented using RAG and an LLM with a visual encoder (LLaVA) for describing pictures.

Getting started 

To get started, check the following:

You can obtain the amount of RAM available to Docker Desktop using the following command:

docker info --format '{{.MemTotal}}'

If the result is under 20 GiB, follow the instructions in Docker official documentation for your operating system to boost the memory limit for Docker Desktop.

Clone the repository

Use the following command to clone the repository:

git clone https://github.com/aborroy/alfresco-genai.git

The project includes the following components:

  • genai-stack folder uses the https://github.com/docker/genai-stack project to build a REST endpoint that provides AI services for a given document.
  • alfresco folder includes a Docker Compose template to deploy Alfresco Community 23.1.
  • alfresco-ai folder includes a set of projects related to Alfresco integration.
    • alfresco-ai-model defines a custom Alfresco content model to store summaries, terms and prompts to be deployed in Alfresco Repository and Share App.
    • alfresco-ai-applier uses the Alfresco REST API to apply summaries or terms for a populated Alfresco Repository.
    • alfresco-ai-listener listens to messages and generates summaries for created or updated nodes in Alfresco Repository.
  • compose.yaml file describes a deployment for Alfresco and GenAI Stack services using the include directive.

Starting Docker GenAI service

The Docker GenAI Service for Alfresco, located in the genai-stack folder, is based on the Docker GenAI Stack project and provides AI services (such as summarization) as REST endpoints to be consumed by the Alfresco integration.

cd genai-stack

Before running the service, modify the .env file to adjust available preferences:

# Choose any of the on premise models supported by ollama
LLM=mistral
LLM_VISION=llava
# Any language name supported by chosen LLM
SUMMARY_LANGUAGE=English
# Number of words for the summary
SUMMARY_SIZE=120
# Number of tags to be identified with the summary
TAGS_NUMBER=3

Start the Docker Stack using the standard command:

docker compose up --build --force-recreate

After the service is up and ready, the summary REST endpoint becomes accessible. You can test its functionality using a curl command.

Use a local PDF file (file.pdf in the following sample) to obtain a summary and a number of tags.

curl --location 'http://localhost:8506/summary' \
--form 'file=@"./file.pdf"'
{ 
  "summary": " The text discusses...", 
  "tags": " Golang, Merkle, Difficulty", 
  "model": "mistral"
}

Use a local PDF file (file.pdf in the following sample) and a list of terms (such as Japanese or Spanish) to obtain a classification of the document.

curl --location \
'http://localhost:8506/classify?termList=%22Japanese%2CSpanish%22' \
--form 'file=@"./file.pdf"'
{
    "term": " Japanese",
    "model": "mistral"
}

Use a local PDF file (file.pdf in the following sample) and a prompt (such as “What is the name of the son?”) to obtain a response regarding the document.

curl --location \
'http://localhost:8506/prompt?prompt=%22What%20is%20the%20name%20of%20the%20son%3F%22' \
--form 'file=@"./file.pdf"'
{
    "answer": " The name of the son is Musuko.",
    "model": "mistral"
}

Use a local picture file (picture.jpg in the following sample) to obtain a text description of the image.

curl --location 'http://localhost:8506/describe' \
--form 'image=@"./picture.jpg"'
{
    "description": " The image features a man standing... ",
    "model": "llava"
}

Note that, in this case, the LLaVA LLM is used instead of Mistral.

Make sure to stop Docker Compose before continuing to the next step.

Starting Alfresco

The Alfresco Platform, located in the alfresco folder, provides a sample deployment of the Alfresco Repository including a customized content model to store results obtained from the integration with the GenAI Service.

Because we want to run both Alfresco and GenAI together, we’ll use the compose.yaml file located in the project’s main folder.

include:
  - genai-stack/compose.yaml
  - alfresco/compose.yaml
#  - alfresco/compose-ai.yaml

In this step, we're deploying only the GenAI Stack and Alfresco, so make sure to leave the alfresco/compose-ai.yaml line commented out.

Start the stack using the standard command:

docker compose up --build --force-recreate

After the service is up and ready, the Alfresco Repository becomes accessible. You can test the platform using default credentials (admin/admin) in the following URLs:

Enhancing existing documents within Alfresco 

The AI Applier application, located in the alfresco-ai/alfresco-ai-applier folder, contains a Spring Boot application that retrieves documents stored in an Alfresco folder, obtains the response from the GenAI Service and updates the original document in Alfresco.

Before running the application for the first time, you’ll need to build the source code using Maven.

cd alfresco-ai/alfresco-ai-applier
mvn clean package

As we have GenAI Service and Alfresco Platform up and running from the previous steps, we can upload documents to the Alfresco Shared Files/summary folder and run the program to update the documents with the summary.

java -jar target/alfresco-ai-applier-0.8.0.jar \
--applier.root.folder=/app:company_home/app:shared/cm:summary \
--applier.action=SUMMARY
...
Processing 2 documents of a total of 2
END: All documents have been processed. The app may need to be executed again for nodes without existing PDF rendition.

Once the process has been completed, every Alfresco document in the Shared Files/summary folder will include the information obtained by the GenAI Stack service: summary, tags, and LLM used (Figure 5).

Screenshot of Document details in Alfresco, showing Document properties, Summary, tags, and LLM used.
Figure 5: The document has been updated in Alfresco Repository with summary, tags and model (LLM).

You can now upload documents to the Alfresco Shared Files/classify folder to prepare the repository for the next step.

The classify action can be applied to documents in the Alfresco Shared Files/classify folder using the following command. The GenAI Service will pick the term from the list (English, Spanish, Japanese) that best matches each document in the folder.

java -jar target/alfresco-ai-applier-0.8.0.jar \
--applier.root.folder=/app:company_home/app:shared/cm:classify \
--applier.action=CLASSIFY \
--applier.action.classify.term.list=English,Spanish,Japanese
...
Processing 2 documents of a total of 2
END: All documents have been processed. The app may need to be executed again for nodes without existing PDF rendition.

Upon completion, every Alfresco document in the Shared Files/classify folder will include the information obtained by the GenAI Stack service: a term from the list of terms and the LLM used (Figure 6).

Screenshot showing document classification update in Alfresco Repository.
Figure 6: The document has been updated in Alfresco Repository with term and model (LLM).

You can upload pictures to the Alfresco Shared Files/picture folder to prepare the repository for the next step.

To obtain a text description from pictures, create a new folder named picture under the Shared Files folder. Upload any image file to this folder and run the following command:

java -jar target/alfresco-ai-applier-0.8.0.jar \
--applier.root.folder=/app:company_home/app:shared/cm:picture \
--applier.action=DESCRIBE
...
Processing 1 documents of a total of 1
END: All documents have been processed. The app may need to be executed again for nodes without existing PDF rendition.

Following this process, every Alfresco image in the picture folder will include the information obtained by the GenAI Stack service: a text description and the LLM used (Figure 7).

Screenshot showing document description update in Alfresco repository.
Figure 7: The document has been updated in Alfresco Repository with text description and model (LLM).

Enhancing new documents uploaded to Alfresco

The AI Listener application, located in the alfresco-ai/alfresco-ai-listener folder, contains a Spring Boot application that listens to Alfresco messages, obtains the response from the GenAI Service and updates the original document in Alfresco.

Before running the application for the first time, you'll need to build the source code using Maven and build the Docker image.

cd alfresco-ai/alfresco-ai-listener
mvn clean package
docker build . -t alfresco-ai-listener

As we are using the AI Listener application as a container, stop the Alfresco deployment and uncomment the alfresco/compose-ai.yaml line (which adds the alfresco-ai-listener service) in the compose.yaml file.

include:
  - genai-stack/compose.yaml
  - alfresco/compose.yaml
  - alfresco/compose-ai.yaml

Start the stack using the standard command:

docker compose up --build --force-recreate

After the service is again up and ready, the Alfresco Repository becomes accessible. You can verify that the platform is working by using default credentials (admin/admin) in the following URLs:

Summarization

Next, upload a new document and apply the “Summarizable with AI” aspect to the document. After a while, the document will include the information obtained by the GenAI Stack service: summary, tags, and LLM used.

Description

If you want to use AI enhancement, you might want to set up a folder that automatically applies the necessary aspect, instead of doing it manually.

Create a new folder named pictures in Alfresco Repository and create a rule with the following settings in it:

  • Name: description
  • When: Items are created or enter this folder
  • If all criteria are met: All Items
  • Perform Action: Add “Descriptable with AI” aspect

Upload a new picture to this folder. After a while, without manual setting of the aspect, the document will include the information obtained by the GenAI Stack service: description and LLM used.

Classification

Create a new folder named classifiable in Alfresco Repository. Apply the "Classifiable with AI" aspect to this folder and add a list of terms separated by commas in the "Terms" property (such as English, Japanese, Spanish).

Create a new rule for the classifiable folder with the following settings:

  • Name: classifiable
  • When: Items are created or enter this folder
  • If all criteria are met: All Items
  • Perform Action: Add “Classified with AI” aspect

Upload a new document to this folder. After a while, the document will include the information obtained by the GenAI Stack service: term and LLM used.

A degree of automation can be achieved when using classification with AI. To do this, a simple Alfresco Repository script named classify.js needs to be created in the folder "Repository/Data Dictionary/Scripts" with the following content.

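// Move the document into the child folder (under the classifiable folder) whose
// name matches the term the GenAI service stored in the genai:term property.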
document.move(
  document.parent.childByNamePath(    
    document.properties["genai:term"]));

Create a new rule for the classifiable folder to apply this script, with the following settings:

  • Name: move
  • When: Items are updated
  • If all criteria are met: All Items
  • Perform Action: Execute classify.js script

Create a child folder of the classifiable folder for each term defined in the "Terms" property. 

When you set up this configuration, any documents uploaded to the folder will automatically be moved to a subfolder based on the identified term. This means that the documents are classified automatically.

Prompting

Finally, to use the prompting GenAI feature, apply the “Promptable with AI” aspect to an existing document. Type your question in the “Question” property.

After a while, the document will include the information obtained by the GenAI Stack service: answer and LLM used.

A new era of document management

By embracing this framework, you can not only unlock a new level of efficiency, productivity, and user experience but also lay the foundation for limitless innovation. With Alfresco and GenAI Stack, the possibilities are endless — from enhancing document analysis and automating content classification to revolutionizing search capabilities and beyond.

If you’re unsure about any part of this process, check out the following video, which demonstrates all the steps live:


New Case Study: The Malicious Comment

May 07, 2024 | The Hacker News | Regulatory Compliance / Cyber Threat

How safe is your comments section? Discover how a seemingly innocent 'thank you' comment on a product page concealed a malicious vulnerability, underscoring the necessity of robust security measures. Read the full real-life case study here.

When is a 'Thank you' not a 'Thank you'? When it's a sneaky bit of code that's been hidden inside a 'Thank You' image that somebody posted in the comments section of a product page! The guilty secret hidden inside this particular piece of code was designed to let hackers bypass security controls and steal the personal identifying information of online shoppers, which could have meant big trouble for them and the company.

The page in question belongs to a global retailer. User communities are often a great source of unbiased advice from fellow enthusiasts, which was why a Nikon camera owner was posting there. They were looking for the ideal 50mm lens and asked for a recommendation. They offered thanks in advance to whoever might take the trouble to respond, and even left a little image that said, "Thank you," too, and to the naked eye it looked fine.

The comment and image stayed on the site for three years(!), but when the company started using the continuous web threat management solution from Reflectiz, a leading web security firm, it detected something troubling within this innocent-looking graphic during a routine monitoring scan. In this article, we give a broad overview of what happened, but if you'd prefer a deeper explanation, along with more details on how you can protect your own comments pages, you can download the full, in-depth case study here.

Altered Images

One of the best things about the web is the fact that you can easily share images, but as a human looking at hundreds of them every day, it's easy to forget that each one is made up of code, just like any other digital asset on a webpage. That being the case, malicious actors often try to hide their own code within them, which brings us to the practice of steganography. This is the term for hiding one piece of information inside another. It isn't the same as cryptography, which turns messages into gibberish so they can't be understood. Instead, steganography hides data in plain sight, in this case, inside an image.

Anatomy of a Pixel

You may be aware that computer monitors display images using a mosaic of dots called pixels and that each pixel can emit a mixture of red, green, and blue light. The strength of each color in one of these RGB pixels is determined by a value between 0 and 255, so 255,0,0 gives us red, 0,255,0 gives us green, and so on.

255,0,0 is the strongest red that the screen can display, and while 254,0,0 is slightly less strong, it would look exactly the same to the human eye. By making lots of these small alterations to the values of selected pixels, malicious actors can hide code in plain sight. By changing enough of them, they can create a sequence of values that a computer can read as code, and in the case of the one posted in the photography retailer's comments section, the altered image contained hidden instructions and the address of a compromised domain. It was a surprise to find that the JavaScript on the page was using the hidden information to communicate with it.
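
To make the pixel arithmetic concrete, here is a minimal, self-contained Python sketch of the least-significant-bit technique using the Pillow library. It is purely illustrative and unrelated to the attackers' actual tooling; the function names and payload layout are invented for the example.

from PIL import Image  # Pillow

def hide_payload(cover_path: str, payload: bytes, out_path: str) -> None:
    """Write each payload bit into the lowest bit of a red channel value."""
    img = Image.open(cover_path).convert("RGB")
    pixels = list(img.getdata())
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("payload is too large for this image")
    stego = []
    for i, (r, g, b) in enumerate(pixels):
        if i < len(bits):
            r = (r & ~1) | bits[i]  # e.g. 255 becomes 254 at most -- invisible to the eye
        stego.append((r, g, b))
    out = Image.new("RGB", img.size)
    out.putdata(stego)
    out.save(out_path)

def read_payload(stego_path: str, length: int) -> bytes:
    """Recover `length` bytes by collecting the lowest bit of each red channel value."""
    pixels = Image.open(stego_path).convert("RGB").getdata()
    bits = [r & 1 for (r, g, b) in pixels][: length * 8]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n:n + 8]))
        for n in range(0, len(bits), 8)
    )

In practice the carrier would need to be a lossless format such as PNG, since lossy compression would scramble the low-order bits; defenders, conversely, can look for statistical anomalies in exactly those bits.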

Consequences

The big problem for anyone operating an e-commerce website is that malicious actors are always looking for opportunities to steal customer PII and payment card details, and altering image files is only one of many possible methods they use. Legislators in a growing number of territories, as well as rule makers in areas like the payment card industry, have responded by implementing detailed regulatory frameworks that impose stringent security requirements on providers along with big fines if they fail.

GDPR requires anybody selling to European Union customers to follow its large and detailed framework. Anytime an e-commerce retailer succumbs to steganography or any other kind of attack that compromises customer information, it can attract fines in the millions of dollars, trigger class action lawsuits, and create bad publicity that leads to reputational damage. That's why it's so crucial to understand how to defend your website from such attacks, which the full case study explains.

Continuous Protection

The case study goes into depth on how this threat was uncovered and controlled, but the short explanation is that the platform's monitoring technology detected suspicious activity in a web component, then cross referenced certain details with its extensive threat database.

The system routinely identifies and blocks any third-party web components that track user activity without their permission. It detects which third-party components get hold of users' geo-location, camera, and microphone permissions without their consent and it maps all web components that can access sensitive information.

In this case, human security specialists at Reflectiz alerted the company to the vulnerability, gave its security staff clear mitigation steps, and investigated the suspicious code to understand how the attackers managed to put it there. You can read about their findings here, in the full case study, as well as learn what security steps to prioritize in order to avoid the same thing happening to your own comments pages.


Google Simplifies 2-Factor Authentication Setup (It's More Important Than Ever)

May 07, 2024 | Newsroom | Online Security / Data Breach

Google on Monday announced that it's simplifying the process of enabling two-factor authentication (2FA) for users with personal and Workspace accounts.

Also called 2-Step Verification (2SV), it aims to add an extra layer of security to users' accounts to prevent takeover attacks in case passwords are stolen.

The new change entails adding a second-step method, such as an authenticator app or a hardware security key, before turning on 2FA, thus eliminating the need to use less secure SMS-based authentication.

"This is particularly helpful for organizations using Google Authenticator (or other equivalent time-based one-time password (TOTP) apps)," the company said. "Previously, users had to enable 2SV with a phone number before being able to add Authenticator."

Users with hardware security keys have two options for adding them to their accounts: registering a FIDO1 credential on the hardware key, or assigning a passkey (i.e., a FIDO2 credential) to one.

Google notes that Workspace accounts may still be required to enter their passwords alongside their passkey if the admin policy for "Allow users to skip passwords at sign-in by using passkeys" is turned off.

In another noteworthy update, users who opt to turn off 2FA from their account settings will now no longer have their enrolled second steps automatically removed.

"When an administrator turns off 2SV for a user from the Admin console or via the Admin SDK, the second factors will be removed as before, to ensure user off-boarding workflows remain unaffected," Google said.

The development comes as the search giant said over 400 million Google accounts have started using passkeys over the past year for passwordless authentication.

Modern authentication methods and standards like FIDO2 are designed to resist phishing and session hijacking attacks by leveraging cryptographic keys generated by and linked to smartphones and computers in order to verify users as opposed to a password that can be easily stolen via credential harvesting or stealer malware.

However, new research from Silverfort has found that a threat actor could get around FIDO2 by staging an adversary-in-the-middle (AitM) attack that can hijack user sessions in applications that use single sign-on (SSO) solutions like Microsoft Entra ID, PingFederate, and Yubico.

"A successful MitM attack exposes the entire request and response content of the authentication process," security researcher Dor Segal saidsaid.

"When it ends, the adversary can acquire the generated state cookie and hijack the session from the victim. Put simply, there is no validation by the application after the authentication ends."

The attack is made possible owing to the fact that most applications do not protect the session tokens created after authentication is successful, thus permitting a bad actor to gain unauthorized access.

What's more, there is no validation carried out on the device that requested the session, meaning any device can use the cookie until it expires. This makes it possible to bypass the authentication step by acquiring the cookie by means of an AitM attack.

To ensure that the authenticated session is used solely by the client, it's advised to adopt a technique known as token binding, which allows applications and services to cryptographically bind their security tokens to the Transport Layer Security (TLS) protocol layer.
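
As a rough illustration of the binding idea (not the Token Binding or DBSC specifications themselves), the Python sketch below has the server store a client-held public key alongside the session at sign-in and then require a signature over a fresh challenge on later requests, so a stolen cookie alone is useless. The names, cookie value, and challenge format are invented for the example; it uses the cryptography package.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At sign-in, the client generates a keypair and the server records the public key
# next to the session cookie it issues (storage is a plain dict for the sketch).
client_key = Ed25519PrivateKey.generate()
sessions = {"session-cookie-abc123": client_key.public_key()}

def request_is_valid(cookie: str, challenge: bytes, signature: bytes) -> bool:
    """Accept a request only if the caller proves possession of the key bound to the session."""
    bound_key = sessions.get(cookie)
    if bound_key is None:
        return False
    try:
        bound_key.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

# Legitimate client: signs the server-issued challenge with its private key.
challenge = b"fresh-server-nonce"
assert request_is_valid("session-cookie-abc123", challenge, client_key.sign(challenge))

# An attacker holding only the stolen cookie cannot produce a valid signature.
forged = Ed25519PrivateKey.generate().sign(challenge)
assert not request_is_valid("session-cookie-abc123", challenge, forged)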

While token binding is limited to Microsoft Edge, Google last month announced a new feature in Chrome called Device Bound Session Credentials (DBSC) to help protect users against session cookie theft and hijacking attacks.


Russian Operator of BTC-e Crypto Exchange Pleads Guilty to Money Laundering

May 07, 2024 | Newsroom | Cryptocurrency / Cybercrime

A Russian operator of the now-dismantled BTC-e cryptocurrency exchange has pleaded guilty to money laundering charges related to the exchange's operation from 2011 to 2017.

Alexander Vinnik, 44, was charged in January 2017 and taken into custody in Greece in July 2017. He was subsequently extradited to the U.S. in August 2022. Vinnik and his co-conspirators have been accused of owning and managing BTC-e, which allowed its criminal customers to trade in Bitcoin with high levels of anonymity.

BTC-e is said to have facilitated transactions for cybercriminals worldwide, receiving illicit proceeds from numerous computer intrusions and hacking incidents, ransomware scams, identity theft schemes, corrupt public officials, and narcotics distribution rings.

The crypto exchange received more than $4 billion worth of bitcoin over the course of its operation, according to the U.S. Department of Justice (DoJ). It also processed over $9 billion worth of transactions and served over one million users worldwide, several of them in the U.S.

In addition, the entity was not registered as a money services business with the U.S. Department of Treasury despite doing substantial business in the U.S. and did not enforce any anti-money laundering (AML) or Know Your Customer (KYC) guidelines as required by federal law, making it an attractive choice for criminals looking to obscure their ill-gotten funds.

Vinnik was previously charged with one count of operation of an unlicensed money service business, one count of conspiracy to commit money laundering, 17 counts of money laundering, and two counts of engaging in unlawful monetary transactions.

"BTC-e was one of the primary ways by which cyber criminals around the world transferred, laundered, and stored the criminal proceeds of their illegal activities," the DoJ said. "Vinnik operated BTC-e with the intent to promote these unlawful activities and was responsible for a loss amount of at least $121 million."

Earlier this February, the U.S. government charged another BTC-e operator, a Belarusian and Cypriot national named Aliaksandr Klimenka, with money laundering and operating an unlicensed money services business.

Shortly following Vinnik's arrest in 2017, the U.S. Department of the Treasury's Financial Crimes Enforcement Network (FinCEN) announced [PDF] it assessed a $110 million civil money penalty against BTC-e for violating AML laws and an additional $12 million penalty against Vinnik.


Monday, May 6, 2024

New UI for on-premises in Tech Preview

Late last year, Citrix announced the launch of a new cloud-based user interface. Citrix has now announced the tech preview of this UI for StoreFront 2311 CR and subsequent versions, marking a significant step forward in providing a consistent experience across both cloud and on-premises environments. Available now in technical preview, this UI revamp is set to redefine how users interact with Citrix apps and desktops, enhancing simplicity, functionality, and overall user satisfaction.

The image below showcases the transition from the previous Citrix StoreFront UI to the enhanced Citrix user experience. 

Enabling the New User Experience

For those eager to test out the new UI, upgrade to StoreFront 2311 or later. Creating a new store and enabling the experience are the first steps. This approach allows for thorough testing with a limited user base, ensuring a seamless transition for all. After the store is created, head to “Manage Receiver for Website” and click “Configure.” Under “Classic/Next-Gen Experience,” select “Enable Next-Gen Experience” as shown in the screenshots below. 

Key Benefits of the New UI

User-Friendly Interface: 

With a focus on reducing visual complexity and providing easy access to essential features, the new UI streamlines the user experience, making navigation a breeze. From intuitive navigation to personalized favorites, the new UI offers a revamped layout designed to enhance productivity and user engagement.

  • Simple View: Users with fewer than 20 apps are presented with a streamlined view that combines all apps and desktops onto a single page. This view does not include tabs or categories, making navigation straightforward. Apps marked as favorites are prioritized at the beginning of the list, followed by the rest in alphabetical order.
  • Home Page: Users with more than 20 apps are directed to the Home page upon sign-in. This page lets users favorite apps and showcases up to five of the most recently used apps and desktops for quick retrieval. Additionally, apps mandated by admins are prominently marked with a star icon, indicating their importance. Users cannot remove these apps from their favorites list.
  • Apps Page: If the admin has not enabled the home page, users are directed to the Apps page. Here, favorite apps are listed first, followed by all other apps in alphabetical order. If app categories are created by the admin, users can click on these categories to locate specific apps more efficiently.
  • The Desktops tab offers quick access to virtual desktops, organized similarly to the Apps page, with favorites at the top for efficient navigation.

Categorization of Apps: 

Say goodbye to cluttered interfaces. The new UI improves the multi-level folder structures that adapt to the user’s screen size, ensuring a clutter-free experience that enhances overall satisfaction. 

  • Across industries such as education and manufacturing, organizing applications into intuitive categories streamlines workflows and enhances user productivity. 
  • For example, in an IT organization managing a diverse range of applications, categorizing apps based on deployment environments allows administrators to segregate production-ready applications from those in development or testing phases. 
  • Similarly, in a multinational corporation with geographically dispersed server infrastructure, categorizing apps based on regions and server locations facilitates efficient resource allocation and load balancing.

Activity Manager: 

A game-changer in resource management, the Activity Manager empowers users to take quick actions on active virtual apps and desktops, optimizing performance and efficiency.

By providing users with visibility and control over active sessions, it transforms how resources are managed. 

  • From disconnecting sessions to shutting down desktops, users can take action with ease, reducing help desk tickets and optimizing resource utilization. 
  • In the healthcare and financial sectors, the Activity Manager optimizes virtual desktop session management. Healthcare professionals and clinicians rely on virtual desktop sessions for accessing patient records and medical apps. With the Activity Manager, they can swiftly disconnect between patients or shut down desktops post-rounds, ensuring patient data security and enhancing workflow efficiency while meeting privacy regulations. 
  • Similarly, in financial institutions, employees can log out or shut down desktops promptly, reducing unauthorized access to sensitive data and aligning with regulatory requirements.

Improved Search Capabilities: 

Finding what you need is now faster and more intuitive than ever. The search automatically displays recently used apps, saving time by surfacing them before you type anything. Instant search results appear as soon as you type the first letter, eliminating the need to enter three characters before results are shown. Fuzzy search returns the closest matches to the typed text, even with spelling mistakes, enhancing search accuracy.

What is in the pipeline

Instantaneous UI Rendering: We understand the value of time in the digital workspace. To further optimize user experiences, we are parallelizing many startup processes, including bridge setup and authentication handshakes. This enhancement aims to render the UI within milliseconds, significantly reducing the current 3-5 second spinner delay.

Activity Manager – Hibernate and resume: Users can hibernate active or disconnected sessions in the Activity Manager if the underlying desktop has the capability. Upon selecting the ‘Hibernate’ action, users are informed of the benefits, such as energy savings and session state retention, with a progress indicator displayed during hibernation. Hibernated sessions are segregated in the Activity Manager, and users can resume them from there.

Enhanced Search Results: Search results will now include categories. This empowers users to distinguish between different apps belonging to various categories. The search results will feature a convenient dropdown menu, enabling users to take further actions like favoriting and accessing app descriptions.

Please note the known limitations and issues in our documentation here.

Try it today!

You can test the preview today and offer your valuable feedback. Your insights are crucial in shaping our platform’s evolution and ensuring it meets your needs effectively.

Additionally, we’d love to hear about any customizations you’re actively using. This information will help us explore options to provide some of them out of the box, simplifying your experience and enhancing usability. You can share your feedback and customization details through our Podio form here.

In conclusion, the new UI for on-premises stores represents a significant leap forward in Citrix’s commitment to delivering innovative solutions that meet the evolving needs of users. With its user-friendly interface, enhanced functionality, and customization options, the new UI is poised to redefine how users interact with Citrix apps and desktops. Try it today!




7 ways to optimize cloud spend with Terraform

The statistics are clear: Nearly every IT organization is overspending on cloud. For example, 94% of respondents to the 2023 HashiCorp State of Cloud Strategy Survey experienced some form of avoidable cloud costs (aka “cloud waste”). S&P Global estimates there is $24 billion in untapped cloud savings annually across the enterprises they surveyed. And the Cloud Native Computing Foundation’s latest survey on cloud native FinOps and cloud financial management found that 70% of respondents are losing money from overprovisioning while 43% are failing to deactivate idle resources after they’ve been used.

There’s no mystery around why this is happening. Operators know that the most common sources of cloud waste are:

  • Overprovisioned resources
  • Idle or underused resources

Yet companies still struggle to address these sources with the right skills, tools, and processes.

This post looks at why cloud waste is so tough to control and explores seven HashiCorp Terraform capabilities that can help mitigate it.

For a comprehensive look at cloud and operational cost savings strategies at each level of cloud adoption maturity, read our white paper:

Cloud waste: Where does it come from?

Why are so many companies spending more on cloud resources than they need to? It often comes down to inexperience, reliability concerns, a lack of tracking, and insufficient processes and guardrails.

Inexperience and reliability concerns

Early in a cloud migration, developers are often given a lot of leeway to buy the cloud infrastructure they think they need. When infrastructure first moves to the cloud, there’s no longer an ops team or sysadmin to stop developers from clicking a few buttons, entering a company credit card, and provisioning 50 cloud compute instances. Developers can easily make errors when manually entering infrastructure parameters for every deployment — provisioning 21 instances instead of 12, for example.

While overprovisioning is often unintentional, some developers prefer to overprovision infrastructure so that they don’t unexpectedly run out of compute and cause application outages. Similarly, teams may buy cloud services with more features and capabilities than they really need. These types of overprovisioning may be due to fear and a lack of clarity around cost vs. reliability tradeoffs, but simple inexperience also plays a part.

According to the HashiCorp State of Cloud Strategy Survey, 90% of organizations face gaps in their cloud-related skill sets, and that was a primary cause of cloud waste for 43% of respondents. While senior developers and experienced operations/platform engineers may know how to right-size cloud resources, they may not have an automated way to propagate their knowledge and best practices to junior developers and others across the organization.

Lack of tracking and process

With many teams working on different projects, it’s not uncommon for organizations to simply lose track of non-production infrastructure that’s still incurring costs. Developers and sales engineers often create demo or sandbox environments but forget to decommission them when they are no longer needed.

Without tracking or tools to clean up non-production resources automatically, the organization may not even know what it’s spending money on. Team members may have to expend effort just to track down and gather context on all of these resources when managers finally start to notice the waste.

How Terraform helps

If your entire organization hasn’t yet adopted infrastructure as code through Terraform, doing so is a high-ROI first step that will organically reduce cloud-management costs. Migrating to Terraform brings a level of standardization and core stability to cloud infrastructure usage that:

  • Increases productivity by eliminating inefficient manual processes
  • Reduces security risks and the risk of human error
  • Increases infrastructure visibility and architectural clarity

South African banking group Nedbank, for example, was able to complete projects at 25% lower resource costs, deploying more than 1,000 virtual machines a month using the HCP Terraform pipeline.


Terraform provides features to help teams manage costs, enforce policies, and increase productivity throughout the cloud adoption process. Here are some key features Terraform offers for optimizing cloud spend:

1. Modules

Terraform modules allow you to templatize and reuse common, org-approved Terraform configurations to reduce code duplication and standardize organizational requirements and best practices. HCP Terraform and Terraform Enterprise include a built-in private registry, allowing platform teams to test and publish modules for easy discovery and self-service consumption by downstream teams.

Using Terraform’s private registry as a central, org-wide internal infrastructure repository lets engineering teams from across the organization pool their best infrastructure practices into one platform as code, eliminating large swaths of unnecessary work by separate teams trying to write configurations that have already been created elsewhere in the organization. And by encouraging developers to use “golden” modules approved by the security, operations, governance, and platform teams, with security and operational best practices baked in, organizations not only limit errors and security risks, they also reduce costs because cost-efficient infrastructure choices are also baked in.

These golden modules help less experienced developers use the cost-efficient infrastructure resources that senior engineers and architects have selected for their most common use cases.

2. No-code provisioning

HCP Terraform and Terraform Enterprise further simplify the golden module workflow with a feature called no-code provisioning.

No-code provisioning gives platform teams a process for building fully push-button, self-service provisioning workflows that application developers and even non-technical stakeholders can use. No knowledge of Terraform, HashiCorp Configuration Language (HCL), or the command line is required. Users of no-code workspaces and modules don’t need to manage Terraform configurations or directly use version control systems. They simply select no-code modules from a menu and with a few clicks, they have secure, cost-optimized infrastructure crafted by the experts on their organization’s platform team.

These simplified no-code workflows empower more people within an organization to cost-efficiently self-serve their deployments, which also saves time for platform teams since they can delegate provisioning tasks to more users without having to step in and help.

3. Automated policy guardrails

Even with golden modules, code reviews are needed in the provisioning process because errors can still creep in and security holes can open up. Organizations may also want to enforce cost-control policies and reliability best practices that can’t be contained in modules.

Manual policy checks become productivity bottlenecks, demanding an approver’s time and energy. That’s why automating as much of the approval criteria as possible is imperative for controlling time costs and optimizing cloud costs.

HCP Terraform and Terraform Enterprise let platform teams compose and manage automation for those policy checks with HashiCorp Sentinel and Open Policy Agent (OPA). Sentinel and OPA are policy as code frameworks that enable platform teams to write policy rule automations as customizable and auditable code. And just as infrastructure as code allows configuration sharing among industry experts, high-quality policy as code libraries are also free and discoverable in the public Terraform Registry.

Sentinel policies can put guardrails around security and compliance, but they can also be used to control cloud costs by limiting the types and size of compute and storage resources, or by limiting the cloud services and regions that can be provisioned.

For illustration, imagine a golden module with an input variable for the number of instances to provision. Even though the module will build the resources according to the platform team’s security and cost specifications, the user-entered data won’t always be controllable. If a user accidentally enters 200 instances for provisioning instead of 20, the module might not stop that action, but an automatic policy check can. It’s not hard to see how policy checks can make a big difference in optimizing cloud costs.
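
Sentinel and OPA each have their own policy languages, but the shape of such a guardrail is easy to see in a plain script. The hedged Python sketch below reads the JSON form of a plan (as produced by terraform show -json plan.out) and fails the run if it would create more than an allowed number of instances; the resource type and the limit are assumptions for the example, not a real organizational policy.

import json
import sys

MAX_NEW_INSTANCES = 20          # assumed budget guardrail for the example
GUARDED_TYPE = "aws_instance"   # assumed resource type to count

def count_new_instances(plan: dict) -> int:
    """Count planned resources of the guarded type whose actions include 'create'."""
    created = 0
    for change in plan.get("resource_changes", []):
        actions = change.get("change", {}).get("actions", [])
        if change.get("type") == GUARDED_TYPE and "create" in actions:
            created += 1
    return created

if __name__ == "__main__":
    # Usage: terraform show -json plan.out | python check_instance_count.py
    plan = json.load(sys.stdin)
    created = count_new_instances(plan)
    if created > MAX_NEW_INSTANCES:
        print(f"Policy violation: plan creates {created} {GUARDED_TYPE} resources "
              f"(limit {MAX_NEW_INSTANCES}).")
        sys.exit(1)
    print(f"OK: plan creates {created} {GUARDED_TYPE} resources.")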

4. Run tasks

HCP Terraform and Terraform Enterprise also contain deep policy automation integrations with many popular testing and scanning tools. These integrations are called run tasks and they are built in collaboration with partner tool vendors.

Using run tasks in addition to Sentinel or OPA policies, organizations can harness third-party cloud cost optimization products in their provisioning pipelines, such as Infracost, Kion, and Bridgecrew. For a full list of Terraform run task integrations, visit the Terraform Registry.

Together, modules, policies, and run tasks help organizations bake security and budget compliance into their provisioning pipelines by default, with little to no human intervention required. But what about Day 2?

5. Drift detection

Even with a secure, compliant initial provisioning process, changes may occur outside of the normal workflow. These might include changes made directly through the cloud console, emergency actions taken during an outage, or even changes in Terraform provider defaults. These out-of-band changes can cause configuration drift, which leads to unexpected behavior and performance issues that can undercut cost management efforts.

Drift detection is a feature of the health assessments system in HCP Terraform and Terraform Enterprise. Terraform drift detection regularly scans for infrastructure drift by comparing the actual state of resources to the last saved state. Proactive notifications help teams catch and track unexpected changes while also identifying potential policy or process issues that need to be addressed. Terraform’s health dashboard also lets users quickly roll back infrastructure that has drifted away from its desired state, preventing unexpected budget overages due to drift.
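
As a small sketch (the workspace and organization names are hypothetical, and the assessments_enabled argument is taken from the tfe provider, so confirm it against your provider version), drift detection can be switched on per workspace when the workspace itself is managed as code:

    # Manage an HCP Terraform / Terraform Enterprise workspace as code and opt it
    # in to health assessments, which include scheduled drift detection.
    resource "tfe_workspace" "app" {
      name                = "app-production"   # hypothetical workspace name
      organization        = "my-org"           # hypothetical organization
      assessments_enabled = true               # enable health assessments (drift detection)
    }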

6. Continuous validation

Approved modules and policy checks are excellent ways to ensure compliance before provisioning, but they can't prevent every issue. Checks are also necessary in production, where live infrastructure is running. Terraform’s ongoing health assessments monitor infrastructure from the moment it is provisioned, on Day 2 and beyond.

Continuous validation in HCP Terraform and Terraform Enterprise provides these ongoing checks to make sure infrastructure is working as expected. Just like approved modules and policy checks, continuous validation can be used to control budgets and close security gaps. For example, it could be used to continuously check whether a set of infrastructure complies with AWS Budgets parameters set by the finance and platform teams. It can also detect certificate expiration and check simple information such as whether a virtual machine (VM) is up and running.

Continuous validation uses custom assertions created by platform teams to trigger notifications that alert infrastructure owners as soon as a check fails (a minimal sketch follows the list below). This helps avoid:

  • Budget non-compliance
  • Downtime due to expiring certificates
  • Security issues from outdated images
  • and many more scenarios
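
The sketch below shows one such assertion, written as a Terraform check block (available in Terraform 1.5 and later, using the hashicorp/http provider); the health-check URL is a placeholder:

    # Post-provisioning assertion evaluated during health assessments:
    # verify that the deployed service still answers on its health endpoint.
    check "service_health" {
      data "http" "health" {
        url = "https://app.example.com/healthz"   # placeholder endpoint
      }

      assert {
        condition     = data.http.health.status_code == 200
        error_message = "The application health endpoint did not return HTTP 200."
      }
    }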

7. Ephemeral workspaces

According to the HashiCorp State of Cloud Strategy Survey, half of all respondents struggle with idle or underused resources. Lack of expiration dates on temporary cloud resources impacted the bottom line of 39% of organizations.

Idle or underutilized resources are big contributors to avoidable cloud spend. Most organizations have few, if any, processes to clean up temporary infrastructure deployments, and many don’t take advantage of the ability to deprovision non-production infrastructure outside of work hours.

Terraform’s ephemeral workspaces let customers automate the destruction of temporary resources at a set time or after a period of inactivity. Administrators can require ephemeral workspaces in certain cases, cutting infrastructure costs and management toil by eliminating manual clean-up and simplifying workspace management.
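
As a rough sketch (the auto_destroy_activity_duration argument is assumed from recent versions of the tfe provider, so verify the exact name and accepted values in the provider docs; the workspace and organization names are hypothetical), a short-lived development workspace might be configured to clean itself up after two weeks of inactivity:

    # Hypothetical short-lived workspace for a feature branch: if no runs occur
    # for 14 days, HCP Terraform queues a destroy run and removes its resources.
    resource "tfe_workspace" "feature_env" {
      name                           = "feature-env-1234"   # hypothetical name
      organization                   = "my-org"             # hypothetical organization
      auto_destroy_activity_duration = "14d"                # assumed argument; check provider docs
    }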

A systematic approach to cloud cost optimization

Reducing cloud waste requires systematic management from Day 0 to Day N. HCP Terraform offers a comprehensive set of features to curb cloud waste, and it's free to try as your organization scales its infrastructure provisioning practices.

For a deeper dive into the topic of cloud and operations cost savings, visit our cloud cost optimization page and download the “3 phases of optimizing cloud and operator spend with Terraform” white paper.

We’d love to learn about your team’s challenges and discuss how HashiCorp can help your organization optimize cloud spending. Have a chat with our sales and solutions engineers.



from HashiCorp Blog https://ift.tt/KT86k1h
via IFTTT

PinnacleOne ExecBrief | Digital Sovereignty and Splinternets in Cloud, AI & Space

Last week, PinnacleOne reviewed the collision of commercial interests and state competition in space.

This week, we step back and examine the growing trend towards digital sovereignty, manifesting in national competition to secure and lead increasingly strategic cloud, AI, and space networks.

Please subscribe to read future issues — and forward this newsletter to interested colleagues.

Contact us directly with any comments or questions: pinnacleone-info@sentinelone.com

Insight Focus | Digital Sovereignty and Splinternets in Cloud, AI, and Space

The concept of digital sovereignty has gained significant traction in recent years as nations seek to assert greater control over critical economic and military capabilities at the technical frontier. This trend – driven by geopolitical competition and the strategic importance of data, cloud computing, artificial intelligence (AI), and space technologies – has significant implications for global businesses. As nations pursue sovereign capabilities across these domains, corporate leaders must navigate an increasingly complex and fragmented digital and security landscape.

Data/Cloud Sovereignty

Nations are establishing sovereign cloud services to maintain control over their data and ensure compliance with local regulations and privacy requirements. The partnership between Microsoft and G42 in the United Arab Emirates exemplifies this trend, offering secure access to cloud and AI features while adhering to local data sovereignty requirements. Microsoft is also expanding its Azure services footprint in the UAE via Khazna Data Centers, a joint venture between G42 and e& to support this initiative.

In the words of Secretary Raimondo, “When it comes to emerging technology, you cannot be both in China’s camp and our camp.” It remains to be seen which side will end up benefiting more from this deal, given how much the U.S. had to offer to (apparently) woo G42 from its Chinese entanglements. Nevertheless, the forces of geopolitical network competition are clearly multipolar – this gives middle powers juice to make deals with multinational cloud providers on favorable terms, including respect for data sovereignty and localization of frontier capabilities.

AI Sovereignty

The strategic importance of AI is leading more nations to pursue AI sovereignty, recognizing the need to develop and (attempt to) control this transformative technology. Industry leaders like Jensen Huang of Nvidia and Arvind Krishna of IBM have advocated for countries to build their own “sovereign AI” capabilities, tailored to their specific language, cultural, and business needs.

Leading and guiding AI technologies is seen as critical for defending national interests and ensuring economic and military security. Examples of sovereign AI strategies include India’s plan to organize and make available Indian data for AI model creation, Singapore’s Southeast Asia AI plan, the Netherlands’ generative AI vision, and Taiwan’s sovereign model strategy to counter the influence of Chinese AI tools. As a sign of the times, some tech investors are eyeing the idea of “sovereign computational stacks” hosted on floating, undersea-cable-connected platforms that would help sanctioned entities skirt regulators.

Space Sovereignty

Nations are also seeking to establish their own satellite constellations for secure, reliable, high-bandwidth communications, commercial space-based observation, scientific research, and defense purposes. The United States’ Proliferated Warfighter Space Architecture (PWSA), a secure low-Earth orbit (LEO) network, and China’s plans for a LEO broadband constellation highlight the growing importance of space sovereignty in the LEO domain, currently dominated by SpaceX. The European Union has also approved plans for the IRIS 2 constellation, a multi-orbit satellite system designed to bolster Europe’s governmental and institutional communication services and digital sovereignty.

The Emirates has formed its own national space champion, Space42, by merging its AI-driven geospatial intelligence provider Bayanat with Yahsat, the UAE’s principal satellite firm. The link between space and AI is explicit, per the Space42 chairman: “Building upon its enormous capabilities, the new entity is poised to play a significant role in realizing the ambitious objectives outlined by the National Space Strategy 2030 and the National Strategy for Artificial Intelligence 2031.”

As we examined last week, these developments have significant implications for the blurred lines between commercial interests and national imperatives as the space domain becomes increasingly contested and potentially a field of conflict.

Compliance and Cybersecurity Challenges

As nations assert digital sovereignty, companies operating globally will face a complex web of data governance, privacy, and operational regulations across multiple jurisdictions. Compliance with diverse requirements for data localization, storage, processing, and access will be a significant challenge. Moreover, the fragmentation of digital infrastructure and the proliferation of sovereign systems may introduce new cybersecurity risks, as companies must ensure the security and integrity of their data and systems across multiple platforms and jurisdictions.

Market Access and Data Flow Implications

The rise of sovereign cloud services, AI capabilities, and space and terrestrial communication networks may restrict the free flow of data across borders and limit market access for foreign companies. Nations may prioritize domestic providers or impose barriers to entry for foreign firms, particularly in strategic sectors. For example, China’s LEO broadband constellation could hinder outside attempts to garner market share within the country or its allies. Executives must anticipate potential disruptions to their global operations and supply chains while exploring partnerships or localization strategies to maintain access to key markets.

Navigating the Fragmented Digital Landscape

The proliferation of sovereign digital infrastructures could lead to a fragmented global digital landscape, often referred to as the “splinternet”. This fragmentation may hinder interoperability, collaboration, and innovation across borders, impacting the ability of multinational companies to leverage digital technologies effectively. Leaders must consider the long-term implications of a splintered digital ecosystem and develop strategies to navigate this increasingly complex environment while ensuring the security and resilience of their digital assets.

Strategic Considerations for Corporate Leaders

  1. Assess compliance and cybersecurity requirements – Evaluate the impact of digital sovereignty regulations in each market and ensure compliance with data governance, privacy, and operational requirements while addressing the cybersecurity challenges posed by fragmented digital infrastructures.
  2. Mitigate market access risks – Anticipate potential disruptions to global operations and supply chains due to restricted data flows and market access barriers. Consider partnerships or localization strategies to maintain a presence in key markets.
  3. Adapt to a fragmented digital landscape – Develop strategies to navigate the complexities of a splintered digital ecosystem, addressing interoperability challenges, potential barriers to collaboration and innovation, and the cybersecurity implications of operating across multiple sovereign platforms.
  4. Invest in resilient and secure digital infrastructure – Build resilient and adaptable digital infrastructure that can withstand the challenges posed by digital sovereignty trends and ensure the security and integrity of data and systems across multiple jurisdictions.
  5. Engage in policy dialogues – Actively participate in policy discussions and industry forums to advocate for balanced approaches that safeguard national interests, promote global collaboration and innovation, and address the cybersecurity challenges posed by digital sovereignty.

Going Forward

The pursuit of digital sovereignty by nations has significant implications for the global digital landscape, potentially leading to a fragmented “splinternet” and introducing new cybersecurity and enterprise architecture challenges. Corporate leaders must navigate an increasingly complex web of compliance requirements, market access barriers, interoperability issues, and cybersecurity risks.

By proactively assessing the impact of digital sovereignty trends, adapting strategies accordingly, investing in secure and resilient digital infrastructure, and engaging in policy dialogues, executives can position their organizations to thrive in an increasingly complex and fragmented digital world.



from SentinelOne https://ift.tt/aBCkvrj
via IFTTT