Thursday, May 9, 2024

Wasm vs. Docker: Performant, Secure, and Versatile Containers

Docker and WebAssembly (Wasm) represent two pivotal technologies that have reshaped the software development landscape. You’ve probably started to hear more about Wasm in the past few years as it has gained in popularity, and perhaps you’ve also heard about the benefits of using it in your application stack. This may have led you to think about the differences between Wasm and Docker, especially because the technologies work together so closely.

In this article, we’ll explore how these two technologies can work together to enable you to deliver consistent, efficient, and secure environments for deploying applications. By pairing the two, developers can combine the performance benefits of WebAssembly with familiar containerized development workflows.


What’s Wasm?

Wasm is a compact binary instruction format governed by the World Wide Web Consortium (W3C). It’s a portable compilation target for more than 40 programming languages, like C/C++, C#, JavaScript, Go, and Rust. In other words, Wasm is a bytecode format encoded to run on a stack-based virtual machine.

Similar to the way Java can be compiled to Java bytecode and executed on the Java Virtual Machine (JVM), which can then be compiled to run on various architectures, a program can be compiled to Wasm bytecode and then executed by a Wasm runtime, which can be packaged to run on different architectures, such as Arm and x86.
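As a minimal sketch of that workflow, here is what compiling and running a program as Wasm might look like using the Rust toolchain and the Wasmtime runtime (both assumed to be installed; the project and file names are hypothetical):

```shell
# Add the WASI compilation target and build the project to Wasm bytecode
rustup target add wasm32-wasi
cargo build --release --target wasm32-wasi

# Run the resulting .wasm file with a Wasm runtime; the same file runs
# unchanged wherever the runtime itself is available (Arm, x86, ...)
wasmtime target/wasm32-wasi/release/hello.wasm
```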

Figure 1: A program is compiled to Wasm bytecode and executed by a Wasm runtime, which can be packaged to run on different architectures, such as Arm and x86.

What’s a Wasm runtime?

Wasm runtimes bridge the gap between portable bytecode and the underlying hardware architecture. They also provide APIs to communicate with the host environment and enable interoperability with other languages, such as JavaScript.

At a high level, a Wasm runtime runs your bytecode in three semantic phases:

  1. Decoding: Processing the module to convert it to an internal representation
  2. Validation: Checking to see that the decoded module is valid
  3. Execution: Installing and invoking a valid module

Wasm runtime examples include Spin, Wasmtime, WasmEdge, and Wasmer. Major browsers also embed Wasm runtimes: Firefox uses SpiderMonkey and Chrome uses V8.

Why use Wasm?

To understand why you might want to use WebAssembly in your application stack, let’s examine its main benefits: security without sacrificing performance, and versatility across use cases.

Security without sacrificing performance

Wasm enables code to run at near-native speed within a secure, sandboxed environment, protecting systems from malicious software. This performance is achieved through just-in-time (JIT) compilation of WebAssembly bytecode directly into machine code, bypassing the need for transpiling into an intermediate format. 

Wasm also uses shared linear memory — a contiguous block of memory that simplifies data exchange between modules or between WebAssembly and JavaScript. This design allows for efficient communication and enables developers to blend the flexibility of JavaScript with the robust performance of WebAssembly in a single application.

The security of this system is further enhanced by the design of the host runtime environment, which acts as a sandbox. It restricts the Wasm module from accessing anything outside of the designated memory space and from performing potentially dangerous operations like file system access, network requests, and system calls. WebAssembly’s requirement for explicit imports and exports to access host functionality adds another layer of control, ensuring a secure execution environment.

Use case versatility

Finally, WebAssembly is relevant for more than traditional web platforms (contrary to its name). It’s also an excellent tool for server-side applications, edge computing, game development, and cloud/serverless computing. If performance, security, or target device resources are a concern, consider using this compact binary format.

During the past few years, WebAssembly has become more prevalent on the server side because of the WebAssembly System Interface (or WASI). WASI is a modular API for Wasm that provides access to operating system features like files, filesystems, and clocks. 
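With Wasmtime’s CLI, for example, WASI capabilities are granted explicitly at launch; a module sees no files unless a directory is mapped in. A minimal sketch (the module name is hypothetical):

```shell
# Without --dir, the module has no filesystem access at all
wasmtime run app.wasm

# Preopen the current directory so the module can read and write files
# beneath it; everything else on the host stays invisible to the module
wasmtime run --dir=. app.wasm
```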

Docker vs. Wasm: How are they related?

After reading about WebAssembly code, you might be wondering how Docker is relevant. Doesn’t WebAssembly handle sandboxing and portability? How does Docker fit in the picture? Let’s discuss further.

Docker helps developers build, run, and share applications, including those that use Wasm. This is especially true because Wasm is a complementary technology to Linux containers. However, handling these containers without a solid developer experience can quickly become a roadblock to application development.

That’s where Docker comes in with a smooth developer experience for building with Wasm and/or Linux containers.

Benefits of using Docker and Wasm together

Using Docker and Wasm together affords great developer experience benefits as well, including:

  • Consistent development environments: Developers can use Docker to containerize their Wasm runtime environments. This approach allows for a consistent Wasm development and execution environment that works the same way across any machine, from local development to production.
  • Efficient deployment: By packaging Wasm applications within Docker, developers can leverage efficient image management and distribution capabilities. This makes deploying and scaling these types of applications easier across various environments.
  • Security and isolation: Although Docker isolates applications at the operating system level, Wasm provides a sandboxed execution environment. When used together, the technologies offer a robust layered security model against many common vulnerabilities.
  • Enhanced performance: Developers can use Docker containers to deploy Wasm applications in serverless architectures or as microservices. This lets you take advantage of Wasm’s performance benefits in a scalable and manageable way.

How to enable Wasm on Docker Desktop

If you’re interested in running WebAssembly containers, you’re in luck! Support for Wasm workloads is now in beta, and you can enable it on Docker Desktop by checking Enable Wasm on the Features in development tab under Settings (Figure 2).

Note: Make sure you have containerd image store support enabled first.
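On Docker Engine (outside Docker Desktop), the equivalent switch lives in /etc/docker/daemon.json; the sketch below shows that configuration, while Desktop users can simply use the Settings toggle:

```shell
# Sketch: switch Docker Engine to the containerd image store
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "features": {
    "containerd-snapshotter": true
  }
}
EOF
sudo systemctl restart docker
```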

Figure 2: Enable Wasm in Docker Desktop.

After enabling Wasm in Docker Desktop, you’re ready to go. Docker currently supports many Wasm runtimes, including Spin, WasmEdge, and Wasmtime. You can also find detailed documentation that explains how to run these applications.
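For example, a Wasm container can be started by naming a Wasm shim as the runtime and wasi/wasm as the platform. The image below is an example published in Docker’s documentation; treat the exact names as illustrative:

```shell
# Run a Wasm module as a container using the WasmEdge shim
docker run --rm \
  --runtime=io.containerd.wasmedge.v1 \
  --platform=wasi/wasm \
  secondstate/rust-example-hello:latest
```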

How Docker supports WebAssembly

To explain how Docker supports WebAssembly, we’ll need to quickly review how the Docker Engine works.

The Docker Engine builds on a higher-level container runtime called containerd. This runtime provides fundamental functionality to control the container lifecycle. Using a shim process, containerd can leverage runc (a low-level runtime) under the hood. Then, runc can interact directly with the operating system to manage various aspects of containers.


What’s neat about this design is that anyone can write a shim to integrate other runtimes with containerd, including WebAssembly runtimes. As a result, various Wasm runtimes, such as WasmEdge, Spin, and Wasmtime, can be plugged into Docker interchangeably.

The future of WebAssembly and Docker

WebAssembly is continuously evolving, so you’ll need to keep a close eye on ecosystem developments. One recent advancement relates to how the new WebAssembly Component Model will impact shims for the various container runtimes. At Docker, we’re working to make it simple for developers to create Wasm containers and to enhance the developer experience.

In a famous 2019 tweet thread, Docker founder Solomon Hykes described a future of cloud computing in which Docker runs Windows, Linux, and WebAssembly containers side by side. Given all the recent developments in the ecosystem, that future is well and truly here.

Recent advancements include:

  • The launch of WASI Preview 2, which fully rebases WASI on the Component Model’s type system and semantics. This makes it modular, fully virtualizable, and accessible to a variety of source languages.
  • The release of the SpinKube open source project by Fermyon, Microsoft, SUSE, Liquid Reply, and others. SpinKube provides a straightforward path for deploying Wasm-based serverless functions into Kubernetes clusters. Developers can use SpinKube with Docker via k3d, a lightweight wrapper that runs Rancher Labs’ minimal Kubernetes distribution, k3s, in Docker. Docker Desktop also includes the shim, which enables you to run Kubernetes containers on your local machine.

In 2024, we expect the combination of Wasm and containers to be highly regarded for its efficiency, scalability, and cost.

Wrapping things up

In this article, we explained how Docker and Wasm work together and how to use Docker for Wasm workloads. We’re excited to see Wasm’s adoption grow in the coming years and will continue to enhance our support to meet developers both where they’re at and where they’re headed. 




from Docker https://ift.tt/xELcG5W
via IFTTT

Terraform Enterprise adds Podman support and workflow enhancements

HashiCorp Terraform Enterprise is the self-hosted distribution of HCP Terraform (formerly Terraform Cloud) for customers with strict regulatory, data residency, or air-gapped networking requirements. With version 202404-2, we are excited to announce that Terraform Enterprise is now fully supported for deployment on Podman with Red Hat Enterprise Linux 8 and above.

Flexible Terraform deployment options

In September 2023, HashiCorp announced new flexible deployment options for Terraform Enterprise, with initial support for Docker Engine and cloud-managed Kubernetes services (Amazon EKS, Microsoft AKS, and Google GKE).

Many enterprises choose Red Hat Enterprise Linux (RHEL) as their standard Linux distribution because of its reputation for stability, security, and enterprise support. Prior to version 8.0, RHEL included a fully supported Docker runtime; in RHEL 8.0 and above, it was replaced with Podman, a container runtime developed by Red Hat engineers. With the end of life of RHEL 7.x scheduled for June 2024, customers faced the lack of an end-to-end supported option for running Terraform Enterprise on RHEL.

With the general availability of Podman support in Terraform Enterprise v202404-2 and above, organizations that are standardized on Red Hat Enterprise Linux can continue to leverage their preferred platform with a fully supported upgrade path.
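At a high level, a Podman deployment mirrors the existing Docker Engine option: pull the Terraform Enterprise image and start it from a Kubernetes-style pod spec with podman play kube. The registry path, tag, and file name below are illustrative only; follow the official installation documentation for the supported procedure:

```shell
# Illustrative only: pull the TFE release image on a RHEL 8/9 host
podman login images.releases.hashicorp.com
podman pull images.releases.hashicorp.com/hashicorp/terraform-enterprise:v202404-2

# Start Terraform Enterprise from a Kubernetes-style pod spec
podman play kube tfe.yaml
```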

Migration from Replicated-based installs

Customers still running a Replicated deployment of Terraform Enterprise are strongly encouraged to migrate to one of the new flexible deployment options. The final Replicated release of Terraform Enterprise is scheduled for November 2024. While HashiCorp will support this release until April 1, 2026, migrating by November ensures organizations will continue to receive the latest features and fixes.

As of the April 2024 release, the flexible deployment options for Terraform Enterprise now include:

  • Docker Engine on any supported Linux distribution
  • Cloud-managed Kubernetes: Amazon EKS, Azure AKS, or Google GKE
  • Podman on Red Hat Enterprise Linux 8 or 9

Customers can contact their HashiCorp representative for more information and to validate their migration and upgrade path.

Recent Terraform feature enhancements

Since our last recap in January, Terraform Enterprise has gained additional enhancements and new features to improve efficiency and management flexibility.

Upgrade now

To learn more about installing Terraform Enterprise on Podman, review the requirements and installation documentation. To learn about everything new and changed in recent Terraform Enterprise versions, check out the release notes.

To learn more about standardizing the infrastructure lifecycle with Terraform, explore our hosted and self-managed delivery options by visiting the Terraform product page or contacting HashiCorp sales.



from HashiCorp Blog https://ift.tt/pyjbrCe
via IFTTT

macOS Cuckoo Stealer | Ensuring Detection and Defense as New Samples Rapidly Emerge

Infostealers targeting macOS devices have been on the rise for well over a year now, with variants such as Atomic Stealer (Amos), RealStealer (Realst), MetaStealer and others widely distributed in the wild through malicious websites, cracked applications and trojan installers. These past few weeks have seen a new macOS malware family appear that researchers have dubbed ‘Cuckoo Stealer’, drawing attention to its abilities to act both as an infostealer and as spyware.

In this post, we review Cuckoo Stealer’s main features and logic from a detection point of view and offer extended indicators of compromise to aid threat hunters and defenders. At the time of writing the latest version of XProtect, version 2194, does not block execution of Cuckoo Stealer malware. SentinelOne customers are protected from macOS Cuckoo Stealer.

More Cuckoo Stealers Appearing

Since the initial report on the emergence of this malware family on April 30, we have seen a rise in new samples and trojanized applications, from the four originally reported by Kandji to 18 unique trojanized applications at the time of writing, with new samples appearing daily.

The trojanized apps are various kinds of “potentially unwanted programs” offering dubious services such as PDF or music converters, cleaners and uninstallers (a full list appears in the IoCs at the end) such as:

  • App Uninstaller.app
  • DumpMedia Amazon Music Converter.app
  • FoneDog Toolkit for Android on Mac.app
  • iMyMac PDF Compressor.app
  • PowerUninstall.app
  • TuneSolo Apple Music Converter.app

As reported previously, these applications contain a malicious binary in the MacOS folder named upd. The most recent binaries – in ‘fat’ and ‘thin’ versions for both Intel x86 and arm64 architectures – are ad hoc codesigned and share the same bundle identifier, upd.upd.

Apple’s codesign utility will provide identical output for all these samples:

codesign -dv file
…
Identifier=upd.upd
Format=Mach-O thin (x86_64)
CodeDirectory v=20400 size=1536 flags=0x2(adhoc) hashes=38+7 location=embedded
Signature=adhoc
Info.plist=not bound
TeamIdentifier=not set
Sealed Resources=none
Internal requirements count=0 size=12

Some protection is offered to unsuspecting users by Apple’s Gatekeeper, which will by default throw a warning that the application is not notarized. The malware authors have anticipated this and provided the user with instructions on how to run the application.


The malware is written in C++ and was created in build 12B45b of Xcode, version 12.2, a rather old version that was released in November 2020, using a device still running macOS 11 Big Sur (build 20A2408) from the same year.

The code signature and App Info.plist containing this information make current samples relatively easy to identify.

Simple Obfuscation Helps Cuckoo to Hide in Apple’s Nest

A noticeable characteristic of the malware is the heavy use of XOR’d strings in an attempt to hide its behavior from simple static signature scanners. The samples use different XOR keys (see the list of IoCs at the end of this post) of varying lengths to decrypt the main strings and functionality dynamically.

Though the binary is stripped and lacks function names, the decrypt routine is readily identifiable from the large number of cross references to it in the rest of the code. Current samples call the decrypt routine precisely 223 times.

Cuckoo decryption function

By breaking on this function in a debugger, it is relatively straightforward to output the decrypted strings to understand the malware’s behavior.

However, not all obfuscated strings are processed through this function. The decryption key and routine can be found independently in other places in the code as well.

Among the few unobfuscated strings in the current binary is one representing an array of file extensions, indicating the kinds of information the malware authors are interested in stealing.

{"txt", "rtf", "doc", "docx", "xls", "xlsx", "key", "wallet", "jpg", "dat", "pdf", "pem", "asc", "ppk", "rdp", "sql", "ovpn", "kdbx", "conf", "json"}

Looking for cross references to ‘wallet’ (one of the items in the array), we find the array is consumed in a function which calls both the decrypt function and another function that implements the same XOR routine and key.


Cuckoo Stealer Observable Behavior

Despite these attempts at obfuscation, analysis of Cuckoo Stealer reveals that, unsurprisingly, it uses many of the same techniques as other infostealers we have encountered in the last 12 months or so. In particular, it makes various uses of AppleScript to duplicate files and folders of interest and to steal the user’s admin password in plain text.

This is achieved through a simple AppleScript dialog using the “hidden answer” option, a ploy that macOS attackers have been using since at least 2008, as we observed recently in relation to Atomic Stealer.

With Cuckoo Stealer, if the user enters anything other than a valid admin password, the malware will repeatedly display the dialog until the right password is provided. This remains true even if the user presses the ‘Cancel’ button.

The underlying mechanism for how the password is checked was nicely elucidated by Kandji researchers here. The scraped password is then saved in clear text in a file named pw.dat in a hidden subfolder of the User’s home directory. The hidden folder’s name is a combination of .local- and a randomly generated UUID identifier. The following regexes can be used to find paths or commands containing this pattern:

\.local-[[:xdigit:]]{8}-[[:xdigit:]]{4}-[[:xdigit:]]{4}-[[:xdigit:]]{4}-[[:xdigit:]]{12}/

// alternatively:
\.local-[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}/
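As a minimal sketch, the second variant can be dropped straight into grep -E to sweep a list of observed file paths (the sample paths below are fabricated for illustration):

```shell
# Flag any path containing the malware's hidden .local-<UUID> directory
printf '%s\n' \
  '/Users/user1/.local-6635DD81-94DD-59E3-9D84-20BD41C51999/pw.dat' \
  '/Users/user1/.localized/notes.txt' |
grep -E '\.local-[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}/'
# prints only the first path
```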

In addition, the malware attempts to install a persistence LaunchAgent with the label com.user.loginscript. The name of the property list file itself takes the form of the parent application bundle. For example, the trojan DumpMedia Spotify Music Converter.app will create a plist called ~/Library/LaunchAgents/com.dumpmedia.spotifymusicconverter.plist, while iMyMac Video Converter.app will write the same plist out as com.imymac.videoconverter.plist.

Cuckoo Stealer LaunchAgent

This persistence agent will point to a copy of the upd binary located in the same hidden .local-<UUID> directory mentioned above.
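Defenders can sweep the LaunchAgents folder for that label. Here is a minimal sketch that fabricates a sample plist in a temporary folder to demonstrate the hunt; on a real system you would point grep at ~/Library/LaunchAgents instead:

```shell
# Fabricated sample mirroring the reported persistence artifact
mkdir -p /tmp/LaunchAgents
cat <<'EOF' > /tmp/LaunchAgents/com.dumpmedia.spotifymusicconverter.plist
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0"><dict>
  <key>Label</key><string>com.user.loginscript</string>
  <key>ProgramArguments</key>
  <array><string>/Users/user1/.local-6635DD81-94DD-59E3-9D84-20BD41C51999/DumpMediaSpotifyMusicConverter</string></array>
</dict></plist>
EOF

# Hunt: list any agent file carrying the suspicious label
grep -l '<string>com.user.loginscript</string>' /tmp/LaunchAgents/*.plist
# prints the path of the matching plist
```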

The malware also makes use of several Living Off the Land utilities including xattr, osascript and system_profiler for discovery.

Observed commands and arguments include:

  awk '/Hardware UUID/{print $(NF)}'
  launchctl load -w "/Users/user1/Library/LaunchAgents/com.dumpmedia.spotifymusicconverter.plist"
  osascript -e 'display dialog "macOS needs to access System Settings" default answer "" with title "System Preferences" with icon caution with hidden answer'
  system_profiler SPHardwareDataType | awk '/Hardware UUID/{print $(NF)}'
  xattr -d com.apple.quarantine "/Users/user1/.local-6635DD81-94DD-59E3-9D84-20BD41C51999/DumpMediaSpotifyMusicConverter"
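The awk stage of the system_profiler pipeline above simply prints the last whitespace-separated field of any line matching /Hardware UUID/, which can be demonstrated on a sample line:

```shell
printf '      Hardware UUID: 6635DD81-94DD-59E3-9D84-20BD41C51999\n' |
awk '/Hardware UUID/{print $(NF)}'
# → 6635DD81-94DD-59E3-9D84-20BD41C51999
```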
Cuckoo Stealer execution chain

SentinelOne Protects Against Cuckoo Stealer

SentinelOne Singularity detects Cuckoo Stealer and prevents its execution when the policy is set to Protect/Protect. In Detect mode, the agent will allow analysts to observe and investigate malicious behavior, as shown below.

SentinelOne Console detects Cuckoo Stealer

Agent version 23.4.1.7125 and later offer an extensive set of behavioral indicators including reference to MITRE TTPs specific to macOS infostealers.

Conclusion

The actors behind the Cuckoo Stealer campaign have clearly invested some resources into developing a novel infostealer rather than buying any of the ready-made offerings currently circulating in various Telegram channels and darknet forums. This, along with the rising numbers of samples we have observed since initial reporting of this threat, suggests that we will likely see further variants of this malware in the future.

Enterprises are advised to use a third-party security solution such as SentinelOne Singularity to ensure that devices in the fleet are protected against this and other threats targeting macOS.

To learn more about how SentinelOne can help protect your organization, contact us or request a free demo.

Indicators of Compromise

Bundle Identifier
upd.upd

Observed Application Names
App Uninstaller.app
DumpMedia Amazon Music Converter.app
DumpMedia DeezPlus.app
DumpMedia Pandora Music Converter.app
DumpMedia Spotify Music Converter.app
DumpMedia Video Converter.app
DumpMedia YouTube Music Converter.app
FoneDog Data Recovery.app
FoneDog iPhone Cleaner.app
FoneDog PDF Compressor.app
FoneDog Toolkit for Android on Mac.app
FoneDog Toolkit for iOS on Mac.app
FoneDog Video Converter.app
iMyMac PDF Compressor.app
iMyMac Video Converter.app
PowerUninstall.app
TunesFun Apple Music Converter.app
TuneSolo Apple Music Converter.app

Observed Mach-Os (SHA1)
04a572b2a17412bba6c875a43289aac521f7b98d
0e3e58a2b19072823df2ec52f09e51acf0d0d724
127c486eab9398a2f42208d96aa12dd8fcfb68b5
1ef1f94d39931b6e625167b021a718f3cfe6bb80
1f49bb334ebcec6b2493d157caf90a8146fb68d9
219f57e9afe201ad4088340cd5b191223d4c4227
24c311abe5d93d21172a6928ba3a211528aa04f9
266f48c38efbb5a6d49fb74194c74fe68d02d62a
298c9ab225d7262a2106bc7bec0993eaa1210a0d
2a422057790bae755c3225aff3e47977df234b11
2c7ec5358b69f8e36c35c53501e4ba6efce25689
2cdda89c50c2aa1eb4b828350b7086748c58fe08
35d75565de813e89a765718ed31c1bfebfd3c11c
4cf895c391557498d2586cee3ace3c32a3a83a4e
4cfdf872051900df8a959b95a03f6c906ad4596e
50360b325aad398a5d580a2adc9aef597eb98855
5220a53c1930ea93849caa88850cb6628a06cd90
57a1f3d3cbbc33b92177660ee620bff4f1c5b229
63eb1abe69b11c8ae04092ccf822633d1e1ff648
69c6c1f09f8a1ad61f1c48527ff27e56847a716f
6aba0ebabccea1902ba2ab7ac183a4bd22617555
71fddbccb15904b14b5773e689f611bfd5a0d111
82c70c956f5f66cf642991285fd631a9094abbf4
873fd2fc21457e707832c859534d596a7c803a46
8bab36fe676c8296ef3889d5ef0afcc4b3f017f3
8bc02ae4262eaf2cbb2454709db7f95cebcc9432
8bee44d0e4e22d3a85cfb9d00d00cb7d85433c9d
8c10459be56dde03c75cda993a489373a8251abf
9ac058d4541aa0e7ba222d25c55c407451f318a7
9d4b45104b3eb3734cb0ba45ca365b95a4c88505
9efa91a0cba44334b1071344314853699155814f
ac755f6da9877a4fc161d666f866a1d82e6de1b0
ac948abaa90b4f1498e699706407ac0c6d4164c7
b49a69fa41a2d7f5f81dbc2be9ea7cfc45c1f3df
b4bd11aa174d1a2f75aff276a2f9c50c4b6a4a1d
b4da5459ccd0556357f8ccd3471a63eebfa6e3b7
b65880c2aecc15db8afa80f027ed0650be23e8f9
bd5cdf05db06c3a81b0509e9f85c26feb34cea81
c5c8335ed343d14d2150a9ba90e182ca739bde8a
c8a6e4a3b16adf5be7c37b589d36cb2bd9706a92
c98d92e01423800404c77f6f82d62e5e7516d46d
cd04a6df24ab7852267619d388dee17f20c66deb
cf069bcafb6510282c8aeab7282e19abc46d558f
db180e1664e566a3393d884a52b93b35bb33911e
db19034d60973d0bcaa237c24252fe969803bc7c
dfed0ca9d883a45a40b2c23c29557ac4679ef698
e57b537f5f3307c6c59f5477e6320f17a9ba5046
e68f0f0e6102a1cd78d5d32ec7807b2060d08f79
e6fa7fcbaf339df464279b8090f6908fed7b325a
e9180ee202c42e2b94689c7e3fb2532dd5179fad
ecca309e0b43cd7f4517a863b95abf7b89be4584
f4999331606b753daaf6d6ad84917712f1420c85
f6e9081e36ca28bf619aebb40a67c56a2de2806e
fad49cac81011214d7fe3db7fc0bd663ef7bb353

Observed XOR Keys
0dhIscuDmR6xn3VMAG9ZYjBKC4VDeXGbyDyWjHM
4E72G6aXPne5ejcUgAfae6khJB3c871V0QUmkI
6neCM1yILp7V3BbMpgfgYYE6KY
7ricF8bWO0eBNiKEravcj2iIXohSNt
7Y9lGDAyEf9vxEmFgRqpDwYM52NFPbsUc
GXMSjRLvCPrrFnc1xa3xvYd43DfM8
HhvDDxmmfm7QuLH4rP63Fzn2eyW5BzuM3N
Hnyl2YPkOMLTNOndVtQwON
JB3k62Vtqymx09aJtnF9lZrCeIc
JsGqCdROAT1VDpSnxrAyZY45uQvRFP
LydNPzURb22Lxk4fxPkdd
MTGpOAycVm9btlQyEa5xVQPiz
Qmi5gstd6Oc27AJLXJQtEqGMxXzHUx
QssogTgvuTaZzPYZQynw0d
aZeTZw0X2lXM083cgmJQvnmCn9kmt
coOwAdmPtzt5Ps9rvUGOMEeFYajX2nJaismV
rzdbcSkVHXHefChUJQFGjAm12oinXwlyH2sHfiY
vLiOnPSKZ1bqjlp1dwuDvmmeQ3QN



from SentinelOne https://ift.tt/4Nx2dVL
via IFTTT

Kremlin-Backed APT28 Targets Polish Institutions in Large-Scale Malware Campaign

May 09, 2024 | Newsroom | Mobile Security / Cyber Attack

Polish government institutions have been targeted as part of a large-scale malware campaign orchestrated by a Russia-linked nation-state actor called APT28.

"The campaign sent emails with content intended to arouse the recipient's interest and persuade him to click on the link," the computer emergency response team, CERT Polska, said in a Wednesday bulletin.

Clicking on the link redirects the victim to the domain run.mocky[.]io, which, in turn, is used to redirect to another legitimate site named webhook[.]site, a free service that allows developers to inspect data that's being sent via a webhook, in an effort to evade detection.

The next step involves the download of a ZIP archive file from webhook[.]site, which contains a copy of the Windows Calculator binary masquerading as a JPG image file ("IMG-238279780.jpg.exe"), a hidden batch script file, and another hidden DLL file ("WindowsCodecs.dll").

Should a victim run the application, the malicious DLL file is side-loaded by means of a technique called DLL side-loading to ultimately run the batch script, while images of an "actual woman in a swimsuit along with links to her real accounts on social media platforms" are displayed in a web browser to maintain the ruse.

The batch script simultaneously downloads a JPG image ("IMG-238279780.jpg") from webhook[.]site that's subsequently renamed to a CMD script ("IMG-238279780.cmd") and executed, after which it retrieves the final-stage payload to gather information about the compromised host and send the details back.

CERT Polska said the attack chain bears similarities to a previous campaign that propagated a custom backdoor called HeadLace.

It's worth noting that the abuse of legitimate services like Mocky and webhook[.]site is a tactic repeatedly adopted by APT28 actors to sidestep detection by security software.

"If your organization does not use the above-mentioned services, we recommend that you consider blocking the above-mentioned domains on edge devices," it added.

"Regardless of whether you use the above-mentioned websites, we also recommend filtering emails for links in webhook.site and run.mocky.io, because cases of their legitimate use in the email content are very rare."
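As a rough sketch of that filtering recommendation, message bodies can be screened for either abused domain with a single extended regex (the sample messages below are fabricated):

```shell
printf '%s\n' \
  'Please review: https://run.mocky.io/v3/abcd1234' \
  'Quarterly report attached; no links inside.' |
grep -E 'webhook\.site|run\.mocky\.io'
# prints only the first message
```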

The development comes days after NATO countries accused the Kremlin-backed group of conducting a long-term cyber espionage campaign targeting their political entities, state institutions, and critical infrastructure.

APT28's malicious activities have also expanded to target iOS devices with the XAgent spyware, which was first detailed by Trend Micro in connection with a campaign dubbed Operation Pawn Storm in February 2015.

"Primarily targeting political and government entities in Western Europe, XAgent possesses capabilities for remote control and data exfiltration," Broadcom-owned Symantec said.

"It can gather information on users' contacts, messages, device details, installed applications, screenshots, and call records. This data could potentially be used for social engineering or spear-phishing campaigns."

News of APT28's attacks on Polish entities also follows a spike in financially motivated attacks by Russian e-crime groups like UAC-0006 targeting Ukraine in the second half of 2023, even as organizations in Russia and Belarus have been targeted by a nation-state actor known as Midge to deliver malware capable of plundering sensitive information.




from The Hacker News https://ift.tt/Vr9epTH
via IFTTT

CrowdStrike Enhances Cloud Asset Visualization to Accelerate Risk Prioritization

The massive increase in cloud adoption has driven adversaries to focus their efforts on cloud environments — a shift that led to cloud intrusions increasing by 75% in 2023, emphasizing the need for stronger cloud security.

Larger scale leads to larger risk. As organizations increase their quantity of cloud assets, their attack surface grows. Each asset brings its own set of security concerns. Large cloud environments are prone to more cloud misconfigurations, which provide more opportunities for adversaries to breach the perimeter. Furthermore, when breaches do occur, tracing lateral movement to stop malicious activity is challenging in a complex cloud environment.

CrowdStrike, a proven cloud security leader, has enhanced its CrowdStrike Falcon® Cloud Security capabilities to ensure security analysts can easily visualize their cloud assets’ connections so they can better understand and prioritize risks. Today we’re expanding our asset graph to help modern organizations secure everything they build in the cloud.

Stop Adversaries with Attack Path Analysis

We continue to expand our attack path analysis capabilities. Today, we’re announcing support for key AWS services including EC2, S3, IAM, RDS and container images.

With this enhanced support, CrowdStrike customers can quickly understand where their cloud weaknesses would allow adversaries to:

  • Gain initial access to their AWS environment
  • Move laterally to access vital compute resources
  • Extract data from storage buckets

Investigating cyberattacks can be a grueling, stressful task. The CrowdStrike Falcon® platform stops breaches and empowers security analysts to find the root cause of each attack. As Falcon’s attack path analysis extends further into the cloud, customers can leverage CrowdStrike® Asset Graph to more quickly investigate attacks and proactively resolve cloud weaknesses.

Figure 1. A filtered view of cloud assets shows all EC2 instances in the AWS account.

 

In this example, we are investigating an EC2 instance with a vulnerable instance metadata service version enabled. We see the EC2 instance is open to global traffic, so we select “Asset Graph” to investigate.

In Asset Graph, an adversary’s potential entry point is automatically flagged for us. The access control list is misconfigured and accepts traffic from every IP address. Upon inspection, we quickly visualize how the adversary would move laterally to access our EC2 instance. To resolve this issue, we first restrict the access control list to company-specific IP addresses. Then, we update the metadata service version used by the EC2 instance.
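Assuming an AWS environment, the two remediation steps described above could be sketched with the AWS CLI (all IDs and the CIDR block below are placeholders); requiring IMDSv2 session tokens disables the vulnerable metadata version:

```shell
# 1. Tighten the network ACL entry from 0.0.0.0/0 to a company-specific CIDR
aws ec2 replace-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --ingress --rule-number 100 --rule-action allow \
  --protocol tcp --port-range From=443,To=443 \
  --cidr-block 203.0.113.0/24

# 2. Require IMDSv2 session tokens on the affected instance
aws ec2 modify-instance-metadata-options \
  --instance-id i-0123456789abcdef0 \
  --http-tokens required --http-endpoint enabled
```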

Figure 2. The EC2’s attack path analysis reveals a potential entry point for adversaries.

 

Both indicators of attack (IOAs) and indicators of misconfiguration (IOMs) are available for each managed cloud asset. With this knowledge, security teams can quickly identify each asset that allows for initial access to their cloud. Furthermore, sensitive compute and storage assets are automatically traced to upstream security groups and network access lists that allow for initial access. Using Falcon’s attack path analysis, security teams quickly see the remediation steps required to protect their cloud from adversaries.

Investigate Findings with Query Builder

Speed and agility are massive cloud benefits. However, the ability to quickly spin up cloud resources can result in asset sprawl — an unexpectedly large number of cloud assets in a live environment. For example, in some environments, a single S3 bucket can be accessible to many IAM roles. Each of those IAM roles may contain access to a large quantity of other storage buckets. Security teams need a way to sift through massive cloud estates to find the services requiring attention.

Figure 3. A CrowdStrike Asset Graph view reveals the many connections between cloud assets.

The Falcon query builder capabilities allow security teams to ask questions like:

  • Which EC2 instances are internet-facing and contain critical security risks?
  • Have any IOAs appeared on my AWS assets in the last seven days?

Figure 4. A query checking for internet-facing EC2 instances with critical security risks.

With Falcon’s query builder, pinpointing cloud weaknesses becomes an efficient process. Graphical views of cloud assets can be daunting. Building queries with Falcon enables teams to focus their attention on the assets that matter most: those that are prone to exploitation by adversaries.

Delivered from the Unified CrowdStrike Falcon Platform

The expansion of cloud asset visualization is another step toward providing a single console that addresses every cloud security concern. By integrating IOAs and IOMs with a connected asset map, CrowdStrike offers a robust, efficient solution for investigating today’s cloud security challenges. 

Unlike other vendors that may offer disjointed security components, CrowdStrike’s approach integrates elements across the entire cloud infrastructure. From hybrid to multi-cloud environments, everything is managed through a single, intuitive console within the AI-native CrowdStrike Falcon platform. This unified cloud-native application protection platform (CNAPP) ensures organizations achieve the highest standards of security, effectively shielding against breaches with an industry-leading cloud security solution. The cloud asset visualization, while pivotal, is just one component of this comprehensive CNAPP approach, underscoring CrowdStrike’s commitment to delivering unparalleled security solutions that meet and anticipate the adversaries’ attacks on cloud environments.

Get a free Cloud Security Health Check and see Falcon Cloud Security in action for yourself.  

During the review, you will engage in a one-on-one session with a cloud security expert, evaluate your current cloud environment, and identify misconfigurations, vulnerabilities and potential cloud threats. 


from Cybersecurity Blog | CrowdStrike https://ift.tt/zxcIpS4
via IFTTT

CrowdStrike Cloud Security Defines the Future of an Evolving Market

Today’s businesses are building their future in the cloud. They rely on cloud infrastructure and services to operate, develop new products and deliver greater value to their customers. The cloud is the catalyst for digital transformation among organizations of all sizes and industries.

But while the cloud powers immeasurable speed, growth and innovation, it also presents risk. The adoption of cloud technologies and modern software development practices have driven an explosion in the number of services, applications and APIs organizations rely on. For many, the attack surface is larger than ever — and rapidly expanding.

Adversaries are taking advantage of the shift. Last year, CrowdStrike observed a 75% increase in cloud intrusions and a 110% spike in cloud-conscious incidents, indicating threat actors are increasingly adept at breaching and navigating cloud environments. Cloud is the new battleground for modern cyber threats, but most organizations are not prepared to fight on it.

It’s time for a pivotal change in how organizations secure their cloud environments. CrowdStrike’s vision is to simplify and scale cloud security through a single, unified platform so security teams can protect the business with the same agility as their engineering colleagues. Our leadership in cloud security demonstrates our results so far: Most recently, we were recognized as a leader in The Forrester Wave™: Cloud Workload Security, Q1 2024 and a global leader in Frost & Sullivan’s Frost Radar: Cloud-Native Application Protection Platforms, 2023.

Today, our commitment to cloud security innovation continues. I’m thrilled to announce the general availability of CrowdStrike Falcon Application Security Posture Management (ASPM) and the expansion of our cloud detection and response (CDR) capabilities. Let’s dive into the details.

CrowdStrike CNAPP Extends Cloud Security to Applications

With the integration of ASPM into Falcon Cloud Security, CrowdStrike brings together the most critical CNAPP capabilities in a single, cloud-native platform, delivering the deep visibility, DevOps workflow integrations and incident response capabilities teams need to secure their cloud infrastructure and applications.

The demand for strong application security has never been greater: 71% of organizations report releasing application updates at least once a week, 23% push updates multiple times per week and 19% push updates multiple times per day. Only 54% of major code changes undergo a full security review before they’re deployed to production. And 90% of security teams use 3+ tools to detect and prioritize application vulnerabilities, making prioritization a top challenge for most.

CrowdStrike now delivers a unified CNAPP platform that sets a new standard for modern cloud security with:

  • Business Threat Context: DevSecOps teams can quickly understand and prioritize high-risk threats and vulnerabilities affecting sensitive data and the mission-critical applications organizations rely on most.
  • Deep Runtime Visibility: With comprehensive monitoring across runtime environments, security teams can rapidly identify vulnerabilities across cloud infrastructure, workloads, applications, APIs, GenAI and data to eliminate security gaps.
  • Runtime Protection: Fueled by industry-leading threat intelligence, Falcon Cloud Security detects and prevents cloud-based threats in real time.
  • Industry-Leading MDR and CDR: By unifying industry-leading managed threat hunting and deep visibility across cloud, identity and endpoints, CrowdStrike’s CDR accelerates detection and response across every stage of a cloud attack, even as threats move laterally from cloud to endpoint.
  • Shift-Left Security: By embedding security early in the application development lifecycle, Falcon Cloud Security enables teams to proactively address potential issues, streamlining development and driving efficiency across development and security operations.

Application security is cloud security — but no vendor has successfully incorporated a way to protect the apps that companies build to support business-critical functions and drive growth and revenue. CrowdStrike now provides a single, holistic solution for organizations to secure everything they create and run in the cloud.

CrowdStrike Expands Cloud Detection and Response Leadership

CrowdStrike’s unified approach to CDR brings together world-class adversary intelligence, elite 24/7 threat hunting services and the industry’s most complete CNAPP. We are expanding our threat hunting with unified visibility across and within clouds, identities and endpoints to stop every stage of a cloud attack — even as threats move laterally from cloud to endpoint.

Our new CDR innovations are built to deliver the industry’s most comprehensive CDR service, drive consolidation across cloud security operations and stop breaches. This release empowers users to:

  • Protect Cloud Control Planes: Beginning with Microsoft Azure, CrowdStrike expands visibility into cloud control plane activity, complementing existing threat hunting for cloud runtime environments.
  • Stop Cloud Identity Threats: Our unified platform approach enables cloud threat hunters to monitor and prevent compromised users and credentials from being exploited in cloud attacks.
  • Prevent Lateral Movement: The CrowdStrike Falcon platform enables CrowdStrike cloud threat hunters to track lateral movement from cloud to endpoint, facilitating rapid response and actionable insights for decisive remediation from indicators to root cause.

By uniting industry-leading managed threat hunting and deep visibility across cloud, identity and endpoints, CrowdStrike accelerates detection and response at every stage of a cloud attack. Our threat hunters rapidly detect, investigate and respond to suspicious behaviors and new attacker tradecraft while alerting customers of the complete attack path analysis of cloud-based threats.

Stop Breaches from Code to Cloud with CrowdStrike

Traditional approaches to securing cloud environments and applications have proven slow and ineffective. Security teams are overwhelmed with cybersecurity tools and alerts but struggle to gain the visibility they need to prioritize threats. Security engineers, often outnumbered by developers, must secure applications developed at a rapid pace. Tool fragmentation and poor user experience have led to more context switching, stress and frustration among security practitioners, and greater risk for organizations overall.

CrowdStrike, the pioneer of cloud-native cybersecurity, was born in the cloud to protect the cloud. We have been consistently recognized for our industry-leading cloud security strategy. Our innovations announced today continue to demonstrate our commitment to staying ahead of modern threats and building the technology our customers need to stop breaches.

Businesses must act now to protect their cloud environments — and the mission-critical applications and data within them — from modern adversaries. CrowdStrike is here to help.



from Cybersecurity Blog | CrowdStrike https://ift.tt/OhbQHj6
via IFTTT

Using Git submodules in your main Azure DevOps repository – Part I

While working with Azure and Bicep for Infrastructure as Code, I needed to write down instructions I could give DevOps teams on using Git submodules. I noticed that people often lack a lab to learn from. So, I decided to start with an accessible lab environment where they could play with the concept. That way, they get a better grasp of it before they use it anywhere near production. So, let’s dive in!

Prepare a small lab

For this lab, you need two projects in your Azure DevOps organization. You will also learn about the security settings required to make this work. If you do not have an Azure DevOps account and organization yet, please see Sign up for Azure DevOps – Azure DevOps Services | Microsoft Learn

It is free! Next, you’ll need an Azure tenant to deploy resources, though for this lab you don’t have to deploy anything. But remember that by using temporary lab email addresses to create Microsoft accounts, you can have successive 30-day trials with $200 in credits for your needs.

Creating the projects

Log in to Azure DevOps and select your organization. Here, you create ProjectOne and ProjectTwo.

You do so by clicking on “New Project” and …

Azure DevOps | Creating the projects

  1. Fill out the name and description.
  2. By default, the project is private, which is what we want for this exercise.
  3. We leave the advanced options on their defaults.

Azure DevOps | Create new project

Creating the repositories in Azure DevOps

Now, you need some repositories to work with. So, create three repositories, two of which are in ProjectOne and one in ProjectTwo. The structure looks like this.

ProjectOne

  • MainRepoProjectOne
  • SubRepoProjectOne

ProjectTwo

  • SubRepoProjectTwo

Now select ProjectOne. There is a default repository that the tooling created for you. Select it, and use the menu to rename it to MainRepoProjectOne.

The second repository, the one for the submodule in ProjectOne, you create yourself.

Note that the repository type defaults to Git, which is what we want. Uncheck “Add a README.” In this lab, we start from an uninitialized repository.

Now, go to ProjectTwo and rename the default repository there to SubRepoProjectTwo.

Create a folder structure for the repositories on our workstation

Create a root folder on your local volume of choice. I do this on OneDrive, as it carries my local folder structure over to any desktop or laptop where I log in to OneDrive. Granted, this is a bit easier on Windows than on Linux. Both Windows and Linux support Visual Studio Code, the editor I use.

In the root of your volume (OneDrive, in my case), create one folder:

  • AzureDevOps

In AzureDevOps, create one subfolder with the name of your organization:

  • WorkingHardInIT

In WorkingHardInIT, create two subfolders, one per project:

  • ProjectOne
  • ProjectTwo

In ProjectOne, create two subfolders:

  • MainRepoProjectOne
  • SubRepoProjectOne

In ProjectTwo, create one subfolder:

  • SubRepoProjectTwo

So you have a structure that looks like below.

That’s it. The repositories are ready to demonstrate using Git submodules in your main Azure DevOps repository. All right!
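If you prefer the command line, the whole layout above can be created in one go. This is a bash sketch, run from your chosen root folder (on Windows PowerShell, `New-Item -ItemType Directory -Force` with the same paths does the equivalent):

```shell
# Create the full lab folder tree in one command.
# The -p flag creates any missing parent folders along the way.
mkdir -p AzureDevOps/WorkingHardInIT/ProjectOne/MainRepoProjectOne \
         AzureDevOps/WorkingHardInIT/ProjectOne/SubRepoProjectOne \
         AzureDevOps/WorkingHardInIT/ProjectTwo/SubRepoProjectTwo
```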

Visual Studio Code and Git!

Install Visual Studio Code: winget install -e Microsoft.VisualStudioCode

Install Git: winget install -e Git.Git

These are the minimal requirements for the lab exercise. You use extra extensions in Visual Studio Code and possibly other tools for actual work, but this gets you going. While not a hard-core requirement, I also install PowerShell Core.

Install PowerShell Core: winget install Powershell --source winget

Get going with Git and remote repositories

Initialize the git repository locally

I initialize the Git repository locally and push it to Azure DevOps from the command line. You can also use the Git GUI tools or a Visual Studio Git extension. In the root folder of our repository (MainRepoProjectOne), execute the following.

git init

That’s it. Locally, you now have a Git repository. I do something else before creating files and adding and committing them to the remote repositories. I configure my username and email for Git.
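If you are curious what git init actually did: it only added a hidden .git folder, and that folder is the repository. A quick check, using a hypothetical scratch folder name:

```shell
# Initialize a scratch repository and inspect it.
git init -q my-lab-repo
# The hidden .git folder is what makes this directory a Git repository.
ls -a my-lab-repo
```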

Configure the Git username and email

I push work to Azure DevOps or GitHub organizations owned by different businesses or departments. If I don’t set a username and email, Git derives them from my local workstation account, using the local user name and user@workstation as the email address. That is messy. If I set them globally, Git uses the same username and email address everywhere, no matter the organization or business. So I like to set something generic globally and be more specific per Git repository.

So, while in the root of the Git main repository I just created, I executed the lines below.

git config user.name "WorkingHardInIT"

git config user.email workinghardinit@workinghardsmartint.work

So I can have a better-suited name indication and email that matches the organization where the repo lives. It prevents the organization from seeing a name like WorkingHardInIT (Black Bear Consulting) with the email workinghardinit@blackbear.com while committing to their repo at Polar Bear Consulting. You get the idea.

For the global config, I execute the following:

git config --global user.name "Smokey Bear"

git config --global user.email "smokeybear@capitan.bears"

That prevents the username and email in the commits from being autogenerated from the logged-in user and the email address from being generated from the computer’s name, like demouser@windows11vm01.datawisetech.corp or so. Still, it is generic and does not point to one specific organization.
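You can verify that the repo-local setting wins over the global one. A minimal sketch, using a hypothetical scratch repository and the identities from above:

```shell
# Create a scratch repo and give it a repo-local identity.
git init -q demo-repo
git -C demo-repo config user.name "WorkingHardInIT"
git -C demo-repo config user.email "workinghardinit@workinghardsmartint.work"
# Git resolves config from the local scope before the global scope,
# so inside this repo the local value wins.
git -C demo-repo config user.name    # prints: WorkingHardInIT
```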

Adding some files and subfolders to the local Git main repository

First, we’ll put a file in the root of our Git repository. I create an HTML file because it gets pushed to a remote repo even when it is empty, whereas a text file needs content. In the root of the Git repository, execute

ni HelloFromMainRepo.html

Next, I create a subfolder and add a file to it. In the root of the Git repository, execute

  • md InMainRepo
  • cd .\InMainRepo\
  • ni HelloFromSubFolderInMainRepo.html

Now, let’s run git status to see the state of the repository. Yes, there are changes!

Navigate to the root of the Git repository and run

  • git add .
  • git commit -m "Added first files and subfolder to MainRepoProjectOne"

Cool. But this main Git repository still only exists on our local machine. I will now push it to the Azure DevOps repository I created earlier. Make sure you are in the root folder of your Git Repo (MainRepoProjectOne) and execute the following.

git remote add origin https://workinghardinit@dev.azure.com/workinghardinit/ProjectOne/_git/MainRepoProjectOne

Now run

  • git push -u origin --all

When I refresh the MainRepoProjectOne in Azure DevOps, I see the files and the commit message. The remote repository is now synced with the local changes we made, added, and committed to Git.

I will repeat this process in the folders for the two other repositories. I list the commands for each setup below.

SubRepoProjectOne

From the root folder for SubRepoProjectOne

  • git init
  • git config user.name "WorkingHardInIT working on SubRepoProjectOne"
  • git config user.email workinghardinit@workinghardinint.work
  • git status
  • ni HelloFromSubRepoProjectOne.html
  • md InSubRepoProjectOne
  • cd .\InSubRepoProjectOne\
  • ni HelloFromSubFolderInSubRepoProjectOne.html
  • git status

[Navigate back to the root folder SubRepoProjectOne]

  • cd ..
  • git status
  • git add .
  • git commit -m "Added first files and subfolder to SubRepoProjectOne"
  • git status
  • git remote add origin https://workinghardinit@dev.azure.com/workinghardinit/ProjectOne/_git/SubRepoProjectOne
  • git push -u origin --all

SubRepoProjectTwo

From the root folder for SubRepoProjectTwo

  • git init
  • git config user.name "WorkingHardInIT working on SubRepoProjectTwo"
  • git config user.email workinghardinit@workinghardinint.work
  • git status
  • ni HelloFromSubRepoProjectTwo.html
  • md InSubRepoProjectTwo
  • cd .\InSubRepoProjectTwo\
  • ni HelloFromSubFolderInSubRepoProjectTwo.html
  • git status

[Navigate back to the root folder SubRepoProjectTwo]

  • cd ..
  • git status
  • git add .
  • git commit -m "Added first files and subfolder to SubRepoProjectTwo"
  • git status
  • git remote add origin https://workinghardinit@dev.azure.com/workinghardinit/ProjectTwo/_git/SubRepoProjectTwo
  • git push -u origin --all

Can I play with submodules now?

Yes! The prep work is complete; in Part II, we’ll dive into submodules. For now, wrap your head around the lab we set up. Practice some basic Git commands. Play with a remote repo! See how it behaves. If you break it, don’t worry: delete the lab and start over. The Git & DevOps scene requires practice and a “wax on, wax off” mentality to learn.

Note that I created three independent Git repositories. They exist locally in their dedicated folder structures and separate repositories in two different projects in Azure DevOps.

Let’s learn how to use git log to see what I last did in Git.

  • git log
  • git log --oneline
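If you want a safe place to try these, here is a self-contained sketch (the scratch repo name, identity, and commit message are made up) that creates one commit and inspects it with git log --oneline:

```shell
# Build a throwaway repo with a single commit, then view the compact log.
git init -q log-demo
git -C log-demo config user.name "Demo"
git -C log-demo config user.email "demo@example.test"
touch log-demo/HelloFromLogDemo.html
git -C log-demo add .
git -C log-demo commit -q -m "first commit"
# One line per commit: abbreviated hash followed by the commit subject.
git -C log-demo log --oneline
```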

Add an extra HTML file to the MainRepoProjectOne root folder

I add an extra HTML file to the MainRepoProjectOne root folder

  • ni NewFileInMainRepoRoot.html
  • git add .
  • git commit -m "Added NewFileInMainRepoRoot.html to the MainRepoProjectOne"

I still need to push this to the remote repository in Azure DevOps. For now, the changes are local. In the log, you can see that I added and committed the last file to git. Usually, I’d push this to the remote repository, but I am using it to show the relationship and behavior between the main remote repository and the submodule remote repository.
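To see exactly which commits exist locally but not on the remote yet, you can ask git log for the range between the upstream branch and HEAD. The sketch below is self-contained: it simulates the Azure DevOps remote with a local bare repository, and all folder names and messages are made up:

```shell
# A bare repo stands in for the remote; a working repo pushes one commit,
# then makes a second commit that stays local.
git init -q --bare fake-remote.git
git init -q work
git -C work config user.name "Demo"
git -C work config user.email "demo@example.test"
git -C work remote add origin ../fake-remote.git
touch work/first.html
git -C work add .
git -C work commit -q -m "pushed commit"
git -C work push -q -u origin HEAD
touch work/second.html
git -C work add .
git -C work commit -q -m "local-only commit"
# List commits that the upstream (origin) does not have yet.
git -C work log --oneline "@{u}..HEAD"
```

Only the second, unpushed commit shows up in the range, which is exactly the situation the main repo is in right now.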

Now, it’s time to add a Git submodule, but that will be for Part II of this series. Do keep reading and follow the flow! Remember that when you close this repo in Visual Studio Code, you can come back and continue anytime. Git knows where you left off. To be continued in Part II!



from StarWind Blog https://ift.tt/gE2oe8d
via IFTTT