Friday, September 13, 2024

Madrid update: Our first weeks as Founders

In April, HashiCorp opened our new Madrid Tech Hub, marking a significant milestone in our commitment to grow our European presence and support companies with the Infrastructure Lifecycle Management (ILM) and Security Lifecycle Management (SLM) software that has become essential for enterprises around the world. To build and support the new location, Julia Friedman and Will Farley were given the opportunity to be part of the Madrid Founders Program. Here’s an update on their individual experiences since the Madrid Tech Hub launch; these personal accounts offer a glimpse into the human side of business expansion.

What have been your first impressions of your first few months in Madrid?

Julia Friedman: The first few months in Madrid have been full of discovery and excitement. The city is beautiful and vibrant, the people are warm and welcoming, and there’s so much to see and do at all hours of the day and night. What’s been most enjoyable, though, has been the steady stream of new things to see, do, and try. I take my dog for a long walk every morning and I always manage to find some new restaurant or coffee shop I’d like to try, a new museum to visit, or a new park to stroll through. I’ve been keeping a running list of places and have been working through it as much as I can. Aside from the obvious excitement of being in a new place, and a world-class city at that, it’s been interesting reestablishing the normal rhythms of life in a new country. There’s seemingly always something new to figure out, even in mundane tasks like “finding the best way to commute to the office” or “figuring out the right brand of dish soap.” It’s amusing that even the most routine errands turn up something novel to catch your attention.

Can you describe a typical day at the Madrid Tech Hub?

Will Farley: The early days of the hub have been focused mainly on enablement: ensuring the team here has the right skills to be successful. It has been great to see everyone throwing themselves into the deep end with our tech and growing in confidence day by day.

What are some challenges and opportunities you have come across?

Will: As expected, the language. That said, being in this environment is great for immersing myself in it.

What are some of your favorite places to visit in Madrid?

Will: Hands down Retiro Park - one of the most beautiful parks I have ever been to in my life!

Julia: I’d second Will’s pick of Retiro Park: it’s an amazingly beautiful park complete with walking trails and botanical gardens. We visited the Rose Garden in the peak of the rose bloom and it was a riot of color. Madrid is also a city of plazas, and I live near two that are amazing places to gather: the Plaza de Olavide in the Chamberí neighborhood and the Plaza Dos de Mayo in the Malasaña neighborhood. Both plazas are lined with restaurants, bars, and coffee shops and it’s wonderful to sit in the sun and enjoy a coffee or a Tinto de Verano.

How have you/your family adjusted to life in Madrid?

Will: First things first… straight to the shops to buy some sun cream.

Julia: It’s been a process for sure, but we’ve adjusted to life here quite well. One of the most interesting things has been shifting our schedules and lives to better fit the rhythms of the Spanish day. The sun rises and sets later in the day here, which partially explains why Spain has dinner at 10 p.m., or why I get to go watch a football match tonight at 9:30 p.m. after work. Seeing how Madrid comes alive in the evening has been one of the most fun parts of living here: the restaurants and plazas are full of people late into the night and it’s so fun being in a city that’s just getting started at 11 p.m. (unlike San Francisco, where we used to live). My wife and I are both settling in and making friends, though, and our dog has never been more popular than he’s been here: we usually have to add an extra 10-15 minutes to our walks because everyone on the street wants to pet him!

What local customs or traditions have you particularly enjoyed or found interesting?

Will: Personally, as an avid sports player, I have recently embraced Padel, which is not popular in the UK. Since it’s such a huge part of the sporting scene in Spain, it’s been brilliant for keeping in shape and also for meeting people, thanks to the social side of the game.

Julia: I’d definitely second Padel, it’s such a fun sport and such a good way to meet new people. I’ve also really enjoyed participating in some of the local cultural observances and neighborhood festivals of Madrid: we held an office event earlier in the summer celebrating Día de San Isidro (the feast day of the patron saint of Madrid), in which we learned to dance some customary dances called chotis. In August, as the city heats up, many of the neighborhoods also hold parties called verbenas to celebrate their area, and getting to see some of these events has been so much fun. There’s food, drinks, music, dancing and so much more to see and do.

What is next for the Madrid Tech Hub, and what is next for you?

Will: Unleashing a pool of incredibly talented people to roll their sleeves up to support the field and wider technical community at HashiCorp. I’m looking forward to seeing the efficiencies we will drive and for the success stories to start rolling in.

Julia: I’m most excited to see how we’re able to drive innovation for our customers from Madrid, both in terms of new ways to use our products to solve new technical challenges for our customers, and in terms of creating the best ways to enable our customers to adopt the cloud at scale. Our Customer Lifecycle Management team is smart, driven, ambitious, and creative, and those are the right ingredients to help our customers thrive. As for me, I’m excited to see what’s next on my Spanish adventure. Who knows, this may just be my first chapter here…

Any final thoughts you'd like to share about your experience in Madrid so far?

Will: I have loved every second of this experience so far here in Madrid. Professionally, it’s been incredible to build something into what it is today, and personally, to live in a different country. I’m looking forward to seeing the team’s growth over the next couple of months.

Julia: It’s been an incredible adventure so far, and I’m excited to see what the next few months have in store, both for me and for our amazing team here.



from HashiCorp Blog https://ift.tt/FzWhxwZ
via IFTTT

Apple Vision Pro Vulnerability Exposed Virtual Keyboard Inputs to Attackers

Sep 13, 2024 | Ravie Lakshmanan | Virtual Reality / Vulnerability

Details have emerged about a now-patched security flaw impacting Apple's Vision Pro mixed reality headset that, if successfully exploited, could allow malicious attackers to infer data entered on the device's virtual keyboard.

The attack, dubbed GAZEploit, has been assigned the CVE identifier CVE-2024-40865.

"A novel attack that can infer eye-related biometrics from the avatar image to reconstruct text entered via gaze-controlled typing," a group of academics from the University of Florida said.

"The GAZEploit attack leverages the vulnerability inherent in gaze-controlled text entry when users share a virtual avatar."

Following responsible disclosure, Apple addressed the issue in visionOS 1.3 released on July 29, 2024. It described the vulnerability as impacting a component called Presence.

"Inputs to the virtual keyboard may be inferred from Persona," it said in a security advisory, adding it resolved the problem by "suspending Persona when the virtual keyboard is active."

In a nutshell, the researchers found that it was possible to analyze a virtual avatar's eye movements (or "gaze") to determine what the user wearing the headset was typing on the virtual keyboard, effectively compromising their privacy.

As a result, a threat actor could, hypothetically, analyze virtual avatars shared via video calls, online meeting apps, or live streaming platforms and remotely perform keystroke inference. This could then be exploited to extract sensitive information such as passwords.

The attack, in turn, is accomplished by means of a supervised learning model trained on Persona recordings, eye aspect ratio (EAR), and eye gaze estimation to differentiate between typing sessions and other VR-related activities (e.g., watching movies or playing games).

In the subsequent step, the gaze estimation directions on the virtual keyboard are mapped to specific keys in order to determine the potential keystrokes in a manner such that it also takes into account the keyboard's location in the virtual space.

"By remotely capturing and analyzing the virtual avatar video, an attacker can reconstruct the typed keys," the researchers said. "Notably, the GAZEploit attack is the first known attack in this domain that exploits leaked gaze information to remotely perform keystroke inference."




from The Hacker News https://ift.tt/f8QwDOk
via IFTTT

Announcing the 11th Annual Flare-On Challenge

Written by: Nick Harbour


When it's pumpkin spice season, that means it's also Flare-On Challenge season. The Flare-On Challenge is a reverse engineering contest held every year by the FLARE team, and this marks its eleventh year running. It draws thousands of players from around the world every year, and is the foremost single-player CTF-style challenge for current and aspiring reverse engineers. It provides individual players with a gauntlet of increasingly challenging puzzles to test their ability, and earn a position in our hall of fame. Veteran competitors who have been following the live countdown over at flare-on.com may have already marked their calendar for the contest launch at 8:00pm ET on Sept. 27th, 2024. It will run for six weeks, ending at 8:00pm ET on Nov. 8th, 2024.

The Flare-On contest always features a diverse array of architectures, with a strong representation of Windows binaries. This year’s contest may be the most diverse ever, with 10 challenges covering Windows, Linux, JavaScript, .NET, YARA, UEFI, Verilog, and Web3. Yes, you read that correctly: there is a YARA challenge this year. The challenges are often designed to reflect real-world reverse engineering problems the FLARE team has encountered on the front lines of cybersecurity.

If you successfully crush all 10 challenges, you will be eligible to receive a prize, which will be revealed later. This crucial bit of gear will distinguish you from your colleagues who have not mastered the arcane art of reverse engineering, and will thus be an object of their envy. Your name or handle, should you choose to be included, will be permanently etched into the Hall of Fame on the Flare-On website.

Please check the Flare-On website for the live countdown and, upon launch, the link to the game server. Early account registration will open approximately two days before launch. While you’re there, check out last year’s challenges and official solutions to prepare yourself. For official news and information, we will be using the Twitter hashtag #flareon11.



from Threat Intelligence https://ift.tt/Ru2QJb0
via IFTTT

17-Year-Old Arrested in Connection with Cyber Attack Affecting Transport for London

Sep 13, 2024 | Ravie Lakshmanan | Cyber Attack / Crime

British authorities on Thursday announced the arrest of a 17-year-old male in connection with a cyber attack affecting Transport for London (TfL).

"The 17-year-old male was detained on suspicion of Computer Misuse Act offenses in relation to the attack, which was launched on TfL on 1 September," the U.K. National Crime Agency (NCA) said.

The teenager, who's from Walsall, is said to have been arrested on September 5, 2024, following an investigation that was launched in the incident's aftermath.

The law enforcement agency said the unnamed individual was questioned and subsequently released on bail.

"Attacks on public infrastructure such as this can be hugely disruptive and lead to severe consequences for local communities and national systems," Deputy Director Paul Foster, head of the NCA's National Cyber Crime Unit, said.

"The swift response by TfL following the incident has enabled us to act quickly, and we are grateful for their continued cooperation with our investigation, which remains ongoing."

TfL has since confirmed that the security breach has led to the unauthorized access of bank account numbers and sort codes for around 5,000 customers and that it will be directly contacting those impacted.

"Although there has been very little impact on our customers so far, the situation is evolving and our investigations have identified that certain customer data has been accessed," TfL said.

"This includes some customer names and contact details, including email addresses and home addresses where provided."

It's worth noting that West Midlands police previously arrested a 17-year-old boy, also from Walsall, in July 2024 in connection with a ransomware attack on MGM Resorts. The incident was attributed to the infamous Scattered Spider group.

It's currently not clear if these two events refer to the same individual. Back in June, another 22-year-old U.K. national was arrested in Spain for his alleged involvement in several ransomware attacks carried out by Scattered Spider.

The dangerous e-crime group is part of a larger collective called The Com, a loose-knit ecosystem of various groups that have engaged in cybercrime, swatting, and physical violence. It's also tracked as 0ktapus, Octo Tempest, and UNC3944.

According to a new report from EclecticIQ, Scattered Spider's ransomware operations have increasingly homed in on cloud infrastructure within the insurance and financial sectors, echoing a similar analysis from Resilience Threat Intelligence in May 2024.

The group has a well-documented history of gaining persistent access to cloud environments via sophisticated social engineering tactics, as well as purchasing stolen credentials, executing SIM swaps, and utilizing cloud-native tools.

"Scattered Spider frequently uses phone-based social engineering techniques like voice phishing (vishing) and text message phishing (smishing) to deceive and manipulate targets, mainly targeting IT service desks and identity administrators," security researcher Arda Büyükkaya said.

"The cybercriminal group abuses legitimate cloud tools such as Azure's Special Administration Console and Data Factory to remotely execute commands, transfer data, and maintain persistence while avoiding detection."




from The Hacker News https://ift.tt/hP5XQza
via IFTTT

The Good, the Bad and the Ugly in Cybersecurity – Week 37

The Good | Cybercrime Syndicate Members Arrested In Singapore & Dark Market Admins Indicted for Fraud

Singaporean authorities conducted an island-wide raid on suspects who were being monitored for their links to a global cybercrime syndicate. As a result of the operation, 160 officers from various law enforcement departments joined forces to arrest five Chinese nationals and one Singaporean individual.

The suspects were found in possession of hacking tools and devices, a specialized backdoor called PlugX used to control compromised systems, stolen personally identifiable information (PII), credentials for cybercriminal-controlled servers, and a substantial amount of cash and cryptocurrency. The operation is part of ongoing investigations into the greater syndicate’s activities. PlugX is a remote access trojan (RAT) largely associated with PRC state-sponsored threat groups such as APT10, APT41, and Earth Preta.

Electronic devices containing stolen data, hacking tools, and PlugX (Source: Singapore Police Force)

In a related development, two individuals have been indicted in the U.S. for their alleged involvement in managing a dark web marketplace known as WWH Club (wwh-club[.]ws). The platform specialized in the sale of sensitive personal and financial information. WWH Club also offered online courses on cybercrime techniques to over 353,000 aspiring cybercriminals, even allowing buyers to pay extra for training materials.

The accused, identified as Alex Khodyrev (a Kazakhstan national) and Pavel Kublitskii (a Russian national), face charges of conspiracy to commit access device fraud and conspiracy to commit wire fraud. Khodyrev and Kublitskii are said to have acted as the main administrators of not only WWH Club but also its various sister sites, all of which functioned as dark markets, forums, and cybercrime training centers. If convicted, the defendants could face up to 20 years in federal prison. The case highlights ongoing efforts by law enforcement agencies to tackle cybercrime at its roots.

The Bad | Cryptocurrency Industry Reached $5.6 Billion in Losses in 2023, FBI Reports

Crypto took a major hit last year, with losses exceeding $5.6 billion, mainly driven by investment fraud, tech support scams, and social engineering via government impersonation. The latest findings, published by the FBI’s Internet Crime Complaint Center (IC3) and based on almost 70,000 complaints, mark this 45% rise as a new record high for the industry. The U.S. alone accounts for $4.8 billion of these reported losses, followed by the Cayman Islands, Mexico, Canada, the U.K., India, and Australia.

(Source: FBI)

The report laid out several fraud and scam trends, including fake investment sites, pig butchering schemes linked to dating apps and professional networking platforms, and liquidity mining scams that offer high returns for staking assets. Another popular method saw criminals launching fake blockchain-based gaming apps to trick users into connecting their cryptocurrency wallets. Scammers also showed no mercy in their tactics, reportedly targeting victims of previous frauds in secondary scams by offering fake cryptocurrency recovery services and charging upfront fees for retrieving stolen assets.

Cybercriminals continue their attacks on the cryptocurrency industry, taking advantage of its decentralized nature. With no central authority oversight in place, fraudulent transactions and illicit financial activities are very difficult to trace or reverse. Cryptocurrency also offers anonymity and easy ways to obscure money trails – both attractive qualities sought out by online attackers and scammers.

The FBI’s findings are followed closely by reports of misconfigured, Internet-exposed Selenium Grid instances being targeted by malicious actors for illicit cryptocurrency mining and proxyjacking. Exploiting these public systems, threat actors leverage their victims’ Internet bandwidth to inject malicious scripts, drop cryptocurrency miners, and more.

The Ugly | New Malware Launched in Iranian-Backed Campaign Against Iraqi Government Entities

Iraqi government entities have been targeted in a sophisticated campaign led by OilRig (aka APT34), an Iranian state-sponsored threat group. In a new report released this week, security researchers identified the Iraqi Prime Minister’s Office and the nation’s Ministry of Foreign Affairs as key victims in this campaign featuring a novel set of malware families.

OilRig has been active in the threat scene since 2014 and specializes in leveraging phishing techniques to infiltrate Middle Eastern networks. Often, OilRig deploys custom backdoors to steal sensitive data. In this recent operation, the group introduced two new malware strains, Veaty and Spearal, designed to execute PowerShell commands and extract sensitive files. Researchers noted that the strains employ unique command-and-control (C2) mechanisms: a custom DNS tunneling protocol and an email-based C2 channel tailor-made for the campaign.

Specifically, Spearal, a .NET backdoor, uses DNS tunneling for communication, while Veaty relies on compromised email accounts for its command-and-control (C2) operations. OilRig then leverages social engineering tactics and phishing emails containing deceptive files, such as Avamer.pdf.exe or IraqiDoc.docx.rar, to deliver these malware tools.

Installer deploying Spearal malware showing logo of the Iraqi General Secretariat of the Council of Ministers (Source: Check Point)

According to the report, OilRig’s attack methods consistently exploit previously compromised email accounts and utilize custom DNS tunneling protocols. The campaign targeting Iraqi government infrastructure also involved the discovery of additional backdoors, such as CacheHttp.dll, which targets Microsoft’s Internet Information Services (IIS) servers. Advanced persistent threats (APTs) like OilRig continue to develop specialized techniques for maintaining C2 channels as they build elaborate cyber-espionage campaigns against high-value adversarial targets. Learn more about the mechanisms behind OilRig’s C2 communications and the group’s TTPs in this LABScon Replay presentation.



from SentinelOne https://ift.tt/dtHzAMi
via IFTTT

Progress WhatsUp Gold Exploited Just Hours After PoC Release for Critical Flaw

Sep 13, 2024 | Ravie Lakshmanan | Software Security / Threat Intelligence

Malicious actors are likely leveraging publicly available proof-of-concept (PoC) exploits for recently disclosed security flaws in Progress Software WhatsUp Gold to conduct opportunistic attacks.

The activity is said to have commenced on August 30, 2024, a mere five hours after a PoC was released for CVE-2024-6670 (CVSS score: 9.8) by security researcher Sina Kheirkhah of the Summoning Team, who is also credited with discovering and reporting CVE-2024-6671 (CVSS score: 9.8).

Both the critical vulnerabilities, which allow an unauthenticated attacker to retrieve a user's encrypted password, were patched by Progress in mid-August 2024.

"The timeline of events suggests that despite the availability of patches, some organizations were unable to apply them quickly, leading to incidents almost immediately following the PoC's publication," Trend Micro researchers Hitomi Kimura and Maria Emreen Viray said in a Thursday analysis.

The attacks observed by the cybersecurity company involve bypassing WhatsUp Gold authentication to exploit the Active Monitor PowerShell Script and ultimately download various remote access tools for gaining persistence on the Windows host.

This includes Atera Agent, Radmin, SimpleHelp Remote Access, and Splashtop Remote, with both Atera Agent and Splashtop Remote installed by means of a single MSI installer file retrieved from a remote server.

"The polling process NmPoller.exe, the WhatsUp Gold executable, seems to be able to host a script called Active Monitor PowerShell Script as a legitimate function," the researchers explained. "The threat actors in this case chose it to perform for remote arbitrary code execution."

While no follow-on exploitation actions have been detected, the use of several remote access tools points to the involvement of a ransomware actor.

This is the second time security vulnerabilities in WhatsUp Gold have been actively weaponized in the wild. Early last month, the Shadowserver Foundation said it had observed exploitation attempts against CVE-2024-4885 (CVSS score: 9.8), another critical bug that was resolved by Progress in June 2024.

The disclosure comes weeks after Trend Micro also revealed that threat actors are exploiting a now-patched security flaw in Atlassian Confluence Data Center and Confluence Server (CVE-2023-22527, CVSS score: 10.0) to deliver the Godzilla web shell.

"The CVE-2023-22527 vulnerability continues to be widely exploited by a wide range of threat actors who abuse this vulnerability to perform malicious activities, making it a significant security risk to organizations worldwide," the company said.




from The Hacker News https://ift.tt/b7ZD5Jg
via IFTTT

Thursday, September 12, 2024

New Android Malware 'Ajina.Banker' Steals Financial Data and Bypasses 2FA via Telegram

Sep 12, 2024 | Ravie Lakshmanan | Mobile Security / Financial Fraud

Bank customers in the Central Asia region have been targeted by a new strain of Android malware codenamed Ajina.Banker since at least November 2023, with the goal of harvesting financial information and intercepting two-factor authentication (2FA) messages.

Singapore-headquartered Group-IB, which discovered the threat in May 2024, said the malware is propagated via a network of Telegram channels set up by the threat actors under the guise of legitimate applications related to banking, payment systems, and government services, or everyday utilities.

"The attacker has a network of affiliates motivated by financial gain, spreading Android banker malware that targets ordinary users," security researchers Boris Martynyuk, Pavel Naumov, and Anvar Anarkulov said.

Targets of the ongoing campaign include countries such as Armenia, Azerbaijan, Iceland, Kazakhstan, Kyrgyzstan, Pakistan, Russia, Tajikistan, Ukraine, and Uzbekistan.

There is evidence to suggest that some aspects of the Telegram-based malware distribution process may have been automated for improved efficiency. The numerous Telegram accounts are designed to serve crafted messages containing links (either to other Telegram channels or external sources) and APK files to unwitting targets.

The use of links pointing to Telegram channels that host the malicious files has an added benefit in that it bypasses security measures and restrictions imposed by many community chats, thereby allowing the accounts to evade bans when automatic moderation is triggered.

Besides abusing the trust users place in legitimate services to maximize infection rates, the modus operandi also involves sharing the malicious files in local Telegram chats by passing them off as giveaways and promotions that claim to offer lucrative rewards and exclusive access to services.

"The use of themed messages and localized promotion strategies proved to be particularly effective in regional community chats," the researchers said. "By tailoring their approach to the interests and needs of the local population, Ajina was able to significantly increase the likelihood of successful infections."

The threat actors have also been observed bombarding Telegram channels with several messages using multiple accounts, at times simultaneously, indicating a coordinated effort that likely employs some sort of an automated distribution tool.

The malware in itself is fairly straightforward: once installed, it establishes contact with a remote server and requests the victim to grant it permission to access SMS messages, phone number APIs, and current cellular network information, among other things.

Ajina.Banker is capable of gathering SIM card information, a list of installed financial apps, and SMS messages, which are then exfiltrated to the server.

New versions of the malware are also engineered to serve phishing pages in an attempt to collect banking information. Furthermore, they can access call logs and contacts, as well as abuse Android's accessibility services API to prevent uninstallation and grant themselves additional permissions.

"The hiring of Java coders, created Telegram bot with the proposal of earning some money, also indicates that the tool is in the process of active development and has support of a network of affiliated employees," the researchers said.

"Analysis of the file names, sample distribution methods, and other activities of the attackers suggests a cultural familiarity with the region in which they operate."

The disclosure comes as Zimperium uncovered links between two Android malware families tracked as SpyNote and Gigabud (which is part of the GoldFactory family that also includes GoldDigger).

"Domains with really similar structure (using the same unusual keywords as subdomains) and targets used to spread Gigabud samples and were also used to distribute SpyNote samples," the company said. "This overlap in distribution shows that the same threat actor is likely behind both malware families, pointing to a well-coordinated and broad campaign."




from The Hacker News https://ift.tt/FjJevVR
via IFTTT

Urgent: GitLab Patches Critical Flaw Allowing Unauthorized Pipeline Job Execution

Sep 12, 2024 | Ravie Lakshmanan | DevSecOps / Vulnerability

GitLab on Wednesday released security updates to address 17 security vulnerabilities, including a critical flaw that allows an attacker to run pipeline jobs as an arbitrary user.

The issue, tracked as CVE-2024-6678, carries a CVSS score of 9.9 out of a maximum of 10.0.

"An issue was discovered in GitLab CE/EE affecting all versions starting from 8.14 prior to 17.1.7, starting from 17.2 prior to 17.2.5, and starting from 17.3 prior to 17.3.2, which allows an attacker to trigger a pipeline as an arbitrary user under certain circumstances," the company said in an alert.

The vulnerability, along with three high-severity, 11 medium-severity, and two low-severity bugs, has been addressed in versions 17.3.2, 17.2.5, and 17.1.7 for GitLab Community Edition (CE) and Enterprise Edition (EE).

It's worth noting that CVE-2024-6678 is the fourth such flaw that GitLab has patched over the past year after CVE-2023-5009 (CVSS score: 9.6), CVE-2024-5655 (CVSS score: 9.6), and CVE-2024-6385 (CVSS score: 9.6).

While there is no evidence of active exploitation of the flaws, users are recommended to apply the patches as soon as possible to mitigate against potential threats.

Earlier this May, U.S. Cybersecurity and Infrastructure Security Agency (CISA) revealed that a critical GitLab vulnerability (CVE-2023-7028, CVSS score: 10.0) had come under active exploitation in the wild.




from The Hacker News https://ift.tt/Tp8FBhG
via IFTTT

Beware: New Vo1d Malware Infects 1.3 Million Android TV Boxes Worldwide

Sep 12, 2024 | Ravie Lakshmanan | Malware / IoT Security

Nearly 1.3 million Android-based TV boxes running outdated versions of the operating system and belonging to users spanning 197 countries have been infected by a new malware dubbed Vo1d (aka Void).

"It is a backdoor that puts its components in the system storage area and, when commanded by attackers, is capable of secretly downloading and installing third-party software," Russian antivirus vendor Doctor Web said in a report published today.

A majority of the infections have been detected in Brazil, Morocco, Pakistan, Saudi Arabia, Argentina, Russia, Tunisia, Ecuador, Malaysia, Algeria, and Indonesia.

It's currently not known what the source of the infection is, although it's suspected to have involved either a prior compromise that allowed root privileges to be gained or the use of unofficial firmware versions with built-in root access.

The following TV models have been targeted as part of the campaign -

  • KJ-SMART4KVIP (Android 10.1; KJ-SMART4KVIP Build/NHG47K)
  • R4 (Android 7.1.2; R4 Build/NHG47K)
  • TV BOX (Android 12.1; TV BOX Build/NHG47K)

The attack entails the substitution of the "/system/bin/debuggerd" daemon file (with the original file moved to a backup file named "debuggerd_real"), as well as the introduction of two new files – "/system/xbin/vo1d" and "/system/xbin/wd" – which contain the malicious code and operate concurrently.

"Before Android 8.0, crashes were handled by the debuggerd and debuggerd64 daemons," Google notes in its Android documentation. "In Android 8.0 and higher, crash_dump32 and crash_dump64 are spawned as needed."

Two different files shipped as part of the Android operating system – install-recovery.sh and daemonsu – have been modified as part of the campaign to trigger the execution of the malware by starting the "wd" module.

"The trojan's authors probably tried to disguise one if its components as the system program '/system/bin/vold,' having called it by the similar-looking name 'vo1d' (substituting the lowercase letter 'l' with the number '1')," Doctor Web said.

The "vo1d" payload, in turn, starts "wd" and ensures it's persistently running, while also downloading and running executables when instructed by a command-and-control (C2) server. Furthermore, it keeps tabs on specified directories and installs the APK files that it finds in them.

"Unfortunately, it is not uncommon for budget device manufacturers to utilize older OS versions and pass them off as more up-to-date ones to make them more attractive," the company said.




from The Hacker News https://ift.tt/E7B1wWh
via IFTTT

StarWind Virtual SAN (VSAN) vs Microsoft Storage Spaces Direct (S2D) – Part 2: NVMe over RDMA Performance Comparison

Introduction

In the fast-paced world of hyperconverged infrastructure (HCI), performance and efficiency aren’t just buzzwords – they’re essential. As organizations push the boundaries of what their IT infrastructure can deliver, selecting the most effective solution becomes a critical decision. In this context, StarWind Virtual SAN (VSAN) and Microsoft Storage Spaces Direct (S2D) are two software-defined storage products that offer distinct approaches to leveraging NVMe and RDMA for high-performance HCI storage.

This article is the second in a series exploring the performance of StarWind VSAN and Microsoft S2D in a 2-node Hyper-V cluster setup. In the first article, we compared these two solutions using NVMe-oF over TCP, exploring their performance, capacity efficiency, and practical application. If you missed it, you can catch up here. Now, we’re turning our attention to RDMA-based configurations to give you an even clearer picture of which solution might be your ideal fit.

In this article, we’ll evaluate how these solutions perform in a 2-node Hyper-V cluster across two key scenarios:

  • StarWind VSAN NVMe over RDMA 
    • Host Mirroring + MDRAID-5.
  • Microsoft Storage Spaces Direct over RDMA 
    • Mirror-accelerated parity, workload placed in the mirror tier.
    • Mirror-accelerated parity, workload placed in both tiers – mirror and parity.

By examining these configurations, we aim to provide insights into how each solution performs under varying workloads and how these performance characteristics translate into real-world benefits. In the sections that follow, we’ll walk you through our testbed setup, benchmarking methodology, and the results of our performance tests.

Solution diagram:

StarWind Virtual SAN NVMe over RDMA scenario:


StarWind Virtual SAN (VSAN) setup was designed to leverage the full potential of NVMe drives and RDMA for high-performance storage. Here’s how it was configured:

  • NVMe drives: Each Hyper-V node was equipped with 5x NVMe drives, which were directly passed through to the StarWind VSAN Controller Virtual Machine (CVM). This direct pass-through ensures that the drives can fully leverage the speed and performance benefits of NVMe technology.
  • RDMA: To enable RDMA (Remote Direct Memory Access) and achieve ultra-low latency communication between the nodes, Mellanox NICs were used. These NICs were configured with SR-IOV (Single Root I/O Virtualization), allowing their Virtual Functions to be passed through to the StarWind VSAN CVM. This setup provides the necessary RDMA compatibility for high-speed data transfer.
  • MDRAID-5 array creation: Inside the StarWind VSAN CVM, the 5x NVMe drives were assembled into an MDRAID-5 array, a RAID configuration that balances performance, capacity, and redundancy (see the sketch after this list).
  • High Availability (HA): On top of the MDRAID-5 array, we created two StarWind High Availability (HA) devices. These HA devices replicate data between the two nodes, ensuring continuous availability even in the event of a node failure.
  • NVMe-oF connectivity: The StarWind HA devices were connected to the nodes using StarWind NVMe-oF Initiator. The NVMe initiator plays a key role in establishing the high-speed NVMe-oF connection across the RDMA network, which is critical for maintaining low-latency and high-throughput operations.
  • Cluster Shared Volumes: Finally, Cluster Shared Volumes (CSVs) were created on top of the connected HA devices. These CSVs allow both nodes to access the same storage simultaneously, enabling efficient load balancing and resource utilization.
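
For illustration, here is a minimal sketch of how such an array might be assembled inside the Linux-based CVM with mdadm; the device names are assumptions for this example, not the exact commands StarWind uses:

# Assemble the five passed-through NVMe drives into a RAID-5 array (device names assumed)
mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
# Watch the initial sync and verify the array state
cat /proc/mdstat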

It’s worth noting that we used StarWind NVMe-oF Initiator because, currently, Microsoft does not offer a native NVMe-oF initiator. Microsoft has announced plans to release an NVMe initiator for Windows Server 2025, but it will support NVMe over TCP only, with no confirmation yet regarding RDMA support.

Microsoft Storage Spaces Direct over RDMA scenario – Mirror-accelerated parity:


For the S2D setup, we implemented a mirror-accelerated parity configuration, which offers an optimal balance between performance and capacity efficiency. This setup allows us to evaluate how well S2D handles different workloads, particularly in scenarios where the workload is either fully placed in the high-performance mirror tier or spread across both the mirror and parity tiers.

Here’s how we structured the solution:

  • Storage tiers: We created two distinct storage tiers, each configured to optimize specific aspects of data handling:
    • NestedPerformance tier: Configured with the mirror resiliency setting, this tier uses SSDs and ensures high data redundancy by storing four copies of each piece of data. The command used to create this tier was:
New-StorageTier -StoragePoolFriendlyName s2d-pool -FriendlyName NestedPerformance -ResiliencySettingName Mirror -MediaType SSD -NumberOfDataCopies 4
    • NestedCapacity tier: This tier focuses on capacity efficiency, using a parity resiliency setting. It stores two copies of each piece of data with one parity stripe, configured using the following command:
New-StorageTier -StoragePoolFriendlyName s2d-pool -FriendlyName NestedCapacity -ResiliencySettingName Parity -MediaType SSD -NumberOfDataCopies 2 -PhysicalDiskRedundancy 1 -NumberOfGroups 1 -FaultDomainAwareness StorageScaleUnit -ColumnIsolation PhysicalDisk -NumberOfColumns 4
  • Volumes setup: Following Microsoft’s recommendations, two volumes were created across these tiers:
    • Volume01 and Volume02: Both volumes were configured with 20% of their data in the high-performance mirror tier and the remaining 80% in the capacity-focused parity tier. This setup allows us to observe how the system handles data as it moves between tiers, particularly when the mirror tier reaches its capacity limits. The commands used to create these volumes were:
New-Volume -StoragePoolFriendlyName s2d-pool -FriendlyName Volume01 -StorageTierFriendlyNames NestedPerformance, NestedCapacity -StorageTierSizes 820GB, 3276GB

New-Volume -StoragePoolFriendlyName s2d-pool -FriendlyName Volume02 -StorageTierFriendlyNames NestedPerformance, NestedCapacity -StorageTierSizes 820GB, 3276GB
  • ReFS data movement: The Resilient File System (ReFS) is configured to automatically move data between the tiers when the mirror tier reaches 85% capacity. This threshold was left at its default setting to simulate a typical production environment.
  • Testing Scenarios:
    • Scenario 1: Workload in the mirror tier: Here, the entire workload was placed within the mirror tier, leveraging its high performance and redundancy.
    • Scenario 2: Workload spilling into the parity tier: In the second scenario, we explored the performance impact when the workload exceeds the mirror tier’s capacity, forcing ReFS to start moving data to the slower parity tier. We also simulated conditions where writes were directed straight to the parity tier, representing a worst-case scenario in terms of performance.

In real-world applications, performance would likely fall somewhere between these two scenarios, depending on the specific workload and how much data resides in each tier. This dual-tier approach provides valuable insights into how S2D manages different types of data and how it balances performance with capacity efficiency.
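
Before benchmarking a setup like this, it can be worth confirming the tier and volume layout from PowerShell. A minimal sketch, reusing the pool and volume names from the commands above:

# List the nested tiers defined on the pool
Get-StorageTier | Format-Table FriendlyName, ResiliencySettingName, MediaType, NumberOfDataCopies
# Verify the volumes and their on-pool footprint
Get-VirtualDisk -FriendlyName Volume01, Volume02 | Format-Table FriendlyName, ResiliencySettingName, Size, FootprintOnPool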

Capacity efficiency:

In evaluating the capacity efficiency of these configurations, it’s essential to understand how each solution optimizes storage use while balancing performance and resiliency.

  • StarWind Virtual SAN
    Achieves a capacity efficiency of 40%, thanks to its combination of host mirroring and MDRAID-5: two-way mirroring halves the raw capacity, and RAID-5 across five drives keeps 4/5 of the remainder (0.5 × 4/5 = 40%).
  • Microsoft S2D mirror-accelerated parity
    Delivers a capacity efficiency of 35.7% (20% mirror, 80% parity), though this can vary depending on the percentage of the volume allocated to the mirror tier. For more details on how to calculate capacity efficiency for mirror-accelerated parity, refer to Microsoft's documentation.

Microsoft also recommends keeping some storage capacity unallocated, about 20% of the total pool size, to enable “in-place” repairs if drives fail. This reserve space, in our case 5.82 TB (roughly 20% of the pool’s raw capacity across ten 3.2 TB drives), allows for immediate parallel repairs, meaning your data remains safe and the system stays resilient even if something goes wrong. This happens automatically, and it’s an added layer of security that can be very important in maintaining uptime and performance.

Capacity

So, when you’re planning your storage solution, it’s definitely something to keep in mind.

Testbed overview:

Our testbed setup is designed to push the limits of both StarWind VSAN and Microsoft S2D in a high-performance environment.

Hardware:

Server model: Supermicro SYS-220U-TNR
CPU: Intel(R) Xeon(R) Platinum 8352Y @ 2.2GHz
Sockets: 2
Cores/Threads: 64/128
RAM: 256GB
NICs: 2x Mellanox ConnectX®-6 EN 200GbE (MCX613106A-VDA)
Storage: 5x NVMe Micron 7450 MAX U.3 3.2TB

Software:

Windows Server: Windows Server 2022 Datacenter 21H2, OS build 20348.2527
StarWind VSAN: V8 (build 15469, CVM 20240530) (kernel 5.15.0-113-generic)
StarWind NVMe-oF Initiator: StarWind NVMe-oF Initiator.2.0.0.672 (rev 674).Setup.486

StarWind VSAN CVM parameters:

CPU: 24 vCPU
RAM: 32GB
NICs: 1x network adapter for management; 4x Mellanox ConnectX-6 Virtual Function network adapters (SR-IOV)
Storage: MDRAID-5 (5x NVMe Micron 7450 MAX U.3 3.2TB)

Testing methodology:

To accurately assess the performance of both StarWind VSAN and Microsoft S2D, we conducted a series of benchmarks using the FIO utility in client/server mode. Here’s a breakdown of the testing setup and methodology:

Virtual Machine Configuration:

  • Total VMs: 20 (10 per host)
  • VM Specs:
    • vCPUs: 4 per VM
    • RAM: 8GB per VM
    • Disks: 3x RAW virtual disks per VM, each connected to a separate SCSI controller

Virtual Disk Sizes:

  • For Microsoft S2D (Mirror-accelerated parity):
    • Mirror-only: 10GB per virtual disk
    • Both tiers: 100GB per virtual disk
  • For StarWind VSAN NVMe-oF: 100GB per virtual disk

Preparation:

  • Virtual disks were pre-filled with random data to simulate real-world usage conditions before running the tests.

Test Patterns: We evaluated the performance using the following I/O patterns:

  • 4k random read
  • 4k random read/write (70/30)
  • 4k random write
  • 64k random read
  • 64k random write
  • 1M read
  • 1M write

Warm-Up Procedures:

  • 4k random read/write (70/30) and 4k random write patterns: VM disks were warmed up using the 4k random write pattern for 4 hours.
  • 64k random write pattern: VM disks were warmed up using the 64k random write pattern for 2 hours.

Test Execution:

  • Each test was conducted three times, and the average result was used as the final performance metric.
  • Duration:
    • Read tests: 600 seconds
    • Write tests: 1800 seconds
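
To make this concrete, here is a minimal sketch of how one such run can be launched with FIO in client/server mode; the VM host name, test file path, and ioengine are assumptions for illustration:

# On each test VM, start the FIO server
fio --server
# From the control host, run the 4k random read pattern against a VM (numjobs, iodepth, and runtime match the methodology above)
fio --client=vm01 --name=4k-randread --ioengine=windowsaio --filename=D\:\fio-test.dat --size=100g --direct=1 --rw=randread --bs=4k --numjobs=3 --iodepth=32 --time_based --runtime=600 --group_reporting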

Microsoft S2D Specifics:

  • Following Microsoft’s recommendations, the testing VMs were placed on the node that owns the volume. This setup minimizes network utilization by ensuring local data reads without using the network stack, thus reducing latency during write operations.
  • Each VHDX file was placed in different subdirectories, which helps optimize ReFS performance by minimizing metadata operation size and allowing parallel execution, reducing overall application latency.

StarWind VSAN Specifics:

  • VMs were evenly distributed across both hosts without being pinned to the node that owns the volume, which ensures a balanced load.
  • Similar to the S2D setup, each VHDX file was placed in different subdirectories to optimize performance.

Benchmarking local NVMe performance:

Before diving into our performance verification, we took a moment to set the stage with the vendor-claimed performance figures for the NVMe drives, shown in the image below:

Vendor-claimed performance

Using the FIO utility in client/server mode, we conducted a series of tests on a single Micron 7450 MAX U.3 3.2TB NVMe drive. The following results were observed:

1x NVMe Micron 7450 MAX: U.3 3.2TB
Pattern Numjobs IOdepth IOPs MiB/s Latency (ms)
4k random read 6 32 997,000 3,894 0.192
4k random read/write 70/30 6 16 531,000 2,073 0.142
4k random write 4 4 385,000 1,505 0.041
64k random read 8 8 92,900 5,807 0.688
64k random write 2 1 27,600 1,724 0.072
1M read 1 8 6,663 6,663 1.200
1M write 1 2 5,134 5,134 0.389

Our tests confirmed that the NVMe drive’s performance is fully in line with the vendor’s claims. This validation step is crucial for ensuring that our subsequent benchmarks are based on accurate and trustworthy hardware performance.

Benchmark results in a table:

The benchmarking results are presented in tables to illustrate performance metrics such as IOPS, throughput (MiB/s), latency (ms), and CPU usage. An additional metric, “IOPS per 1% CPU usage,” highlights the performance dependency on the CPU usage for 4k random read/write patterns. This parameter is calculated using the following formula:

IOPS per 1% CPU usage = IOPS / Node count / Node CPU usage

Where:

  • IOPS represents the number of I/O operations per second for each pattern.
  • Node count is 2 nodes in our case.
  • Node CPU usage denotes the CPU usage of one node during the test.

By incorporating this additional metric, we aimed to provide deeper insights into how CPU usage correlates with IOPS, offering a more nuanced understanding of performance characteristics.
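
As a worked example, take the first StarWind 4k random read row from the table below: 893,000 IOPS across 2 nodes at 44% CPU usage per node gives

IOPS per 1% CPU usage = 893,000 / 2 / 44 ≈ 10,148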

Now let’s delve into the detailed benchmark results for each storage configuration.

StarWind VSAN NVMe over RDMA scenario

The table provides a detailed breakdown of StarWind VSAN’s performance under the Hyper-V NVMe over RDMA scenario, focusing on various workload patterns and configurations.

For 4k random reads, the IOPS ranges from 893,000 at lower queue depths to 1,624,000 at higher depths.

In mixed 4k random read/write (70%/30%) scenarios, the solution delivers up to 856,000 IOPS, maintaining strong performance even under mixed workloads.

For larger workloads, such as the 64k random read pattern, StarWind VSAN achieves up to 19,062 MiB/s while maintaining consistent latency and CPU utilization. In write-heavy scenarios like the 1024k write pattern, the throughput peaks at 4,479 MiB/s, with latency increasing as queue depth rises, yet the CPU usage remains stable between 16% and 19%.

VM count Pattern Numjobs IOdepth IOPs MiB/s Latency (ms) Node CPU usage % IOPs per 1% CPU usage
20 4k random read 3 4 893,000 3,488 0.267 44.00% 10,148
4k random read 3 8 1,092,000 4,266 0.438 45.00% 12,133
4k random read 3 16 1,399,000 5,465 0.683 50.00% 13,990
4k random read 3 32 1,624,000 6,344 1.172 53.00% 15,321
4k random read 3 64 1,558,000 6,086 2.461 53.00% 14,698
4k random read 3 128 1,551,000 6,059 4.967 52.00% 14,913
4k random read/write (70%/30%) 3 2 396,000 1,547 0.355 32.00% 6,188
4k random read/write (70%/30%) 3 4 596,000 2,328 0.487 41.00% 7,268
4k random read/write (70%/30%) 3 8 756,000 2,953 0.785 47.00% 8,043
4k random read/write (70%/30%) 3 16 856,000 3,344 1.346 48.00% 8,917
4k random read/write (70%/30%) 3 32 854,000 3,336 2.656 47.00% 9,085
4k random read/write (70%/30%) 3 64 736,000 2,875 6.001 41.00% 8,976
4k random write 3 2 201,000 785 0.595 25.00% 4,020
4k random write 3 4 288,000 1,126 0.826 31.00% 4,645
4k random write 3 8 341,000 1,332 1.406 34.00% 5,015
4k random write 3 16 330,000 1,290 2.906 32.00% 5,156
4k random write 3 32 196,000 766 9.818 21.00% 4,667
64k random read 3 2 243,000 15,187 0.493 25.00%
64k random read 3 4 280,000 17,500 0.856 26.00%
64k random read 3 8 297,000 18,562 1.613 27.00%
64k random read 3 16 302,000 18,875 3.182 28.00%
64k random read 3 32 305,000 19,062 6.292 28.00%
64k random write 3 1 42,200 2,638 1.420 17.00%
64k random write 3 2 48,800 3,050 2.459 18.00%
64k random write 3 4 52,900 3,306 4.532 18.00%
64k random write 3 8 57,800 3,613 8.312 19.00%
64k random write 3 16 62,300 3,894 15.389 19.00%
64k random write 3 32 67,100 4,194 28.611 21.00%
1024k read 1 1 13,800 13,800 1.451 15.00%
1024k read 1 2 16,200 16,200 2.433 16.00%
1024k read 1 4 17,600 17,600 4.551 17.00%
1024k read 1 8 18,300 18,300 8.759 18.00%
1024k read 1 16 18,900 18,900 16.976 18.00%
1024k write 1 1 3,703 3,703 5.399 16.00%
1024k write 1 2 3,744 3,744 10.636 17.00%
1024k write 1 4 3,853 3,853 20.747 18.00%
1024k write 1 8 4,479 4,479 35.707 19.00%

Overall, StarWind VSAN shows great performance at 4k random read/write patterns, consistent read and write performance regardless of VM location, and good capacity efficiency at 40%.

Microsoft Storage Spaces Direct over RDMA scenario (Mirror tier only)

The next table presents S2D’s performance with a mirror-accelerated parity configuration, focusing on workloads in the mirror tier.

For 4k random read patterns, IOPS ranges from 858,000 at lower queue depths to 2,615,000 at higher depths, with corresponding latencies between 0.278 ms and 2.921 ms.

In the 4k random read/write (70%/30%) scenarios, IOPS ranges from 58,200 to 941,000, with latency fluctuating from 0.305 ms to 8.247 ms as queue depth increases. The node CPU usage varies from 3% to 52%, reflecting how the system manages mixed workloads.

For larger data patterns like the 64k random read and 1024k write, S2D demonstrates robust throughput, reaching up to 10,500 MiB/s in the 1024k write pattern. Latency remains relatively low at the lower queue depths but increases significantly as the queue depth rises. CPU utilization is kept within a range of 5% to 26% for these larger workloads, showing the system’s ability to handle high-throughput tasks efficiently.

VM count Pattern Numjobs IOdepth IOPs MiB/s Latency (ms) Node CPU usage % IOPs per 1% CPU usage
20 4k random read 3 4 858,000 3,352 0.278 28.00% 15,321
4k random read 3 8 782,000 3,055 0.620 21.00% 18,619
4k random read 3 16 1,079,000 4,216 0.888 29.00% 18,603
4k random read 3 32 1,615,000 6,308 1.189 41.00% 19,695
4k random read 3 64 2,306,000 9,008 1.663 54.00% 21,352
4k random read 3 128 2,615,000 10,215 2.921 67.00% 19,515
4k random read/write (70%/30%) 3 2 410,000 1,602 0.305 29.00% 7,069
4k random read/write (70%/30%) 3 4 113,400 443 2.112 7.00% 8,100
4k random read/write (70%/30%) 3 8 58,200 227 8.247 3.00% 9,700
4k random read/write (70%/30%) 3 16 667,000 2,605 1.607 38.00% 8,776
4k random read/write (70%/30%) 3 32 908,000 3,547 2.791 48.00% 9,458
4k random read/write (70%/30%) 3 64 941,000 3,676 6.017 52.00% 9,048
4k random write 3 2 102,000 398 1.171 13.00% 3,923
4k random write 3 4 50,100 196 4.794 7.00% 3,579
4k random write 3 8 34,300 134 13.994 5.00% 3,430
4k random write 3 16 66,100 258 14.504 8.00% 4,131
4k random write 3 32 294,000 1,149 6.527 34.00% 4,324
64k random read 3 2 319,000 19,938 0.374 17.00%
64k random read 3 4 504,000 31,500 0.475 26.00%
64k random read 3 8 439,000 27,438 1.081 22.00%
64k random read 3 16 611,000 38,187 1.572 27.00%
64k random read 3 32 851,000 53,187 2.252 38.00%
64k random write 3 1 120,000 7,475 0.500 19.00%
64k random write 3 2 130,000 8,153 0.919 20.00%
64k random write 3 4 51,150 3,197 4.696 7.00%
64k random write 3 8 38,700 2,419 12.334 6.00%
64k random write 3 16 46,500 2,906 20.895 6.00%
64k random write 3 32 161,000 10,063 11.905 26.00%
1024k read 1 1 19,900 19,900 1.004 5.00%
1024k read 1 2 31,800 31,800 1.257 7.00%
1024k read 1 4 44,000 44,000 1.815 11.00%
1024k read 1 8 50,300 50,300 3.176 14.00%
1024k read 1 16 52,300 52,300 6.114 16.00%
1024k write 1 1 9,887 9,887 2.022 8.00%
1024k write 1 2 10,150 10,150 3.912 8.00%
1024k write 1 4 10,200 10,200 7.841 9.00%
1024k write 1 8 10,500 10,500 15.250 10.00%

Microsoft Storage Spaces Direct over RDMA scenario (Mirror + Parity tiers)

The performance metrics for the dual-tier configuration in S2D highlight workload management across both mirror and parity tiers.

In 4k random read patterns, IOPS ranges from 803,000 to 2,450,000, with latencies increasing from 0.297 ms to 3.133 ms as queue depth rises. Node CPU usage scales from 26% to 68%, with IOPS per 1% CPU usage showing efficient resource utilization, peaking at 19,773.

For the 4k random read/write (70%/30%) pattern, IOPS spans from 102,600 to 298,700, and latency escalates from 1.035 ms to 20.281 ms as queue depths increase. Node CPU usage varies between 20% and 50%, highlighting the system’s ability to manage mixed workloads, although the efficiency, measured by IOPS per 1% CPU usage, peaks at a more modest 3,075.

In the large-block patterns, read throughput remains substantial, reaching up to 48,500 MiB/s for 64k random reads and 49,600 MiB/s for 1024k reads, but write performance declines sharply: 1024k write throughput peaks at just 2,341 MiB/s, with latency climbing to 68.424 ms at higher queue depths. Despite the high node CPU efficiency in read scenarios, write performance shows noticeable degradation once data spans both tiers.

VM count Pattern Numjobs IOdepth IOPs MiB/s Latency (ms) Node CPU usage % IOPs per 1% CPU usage
20 4k random read 3 4 803,000 3,137 0.297 27.00% 14,870
4k random read 3 8 774,000 3,023 0.620 26.00% 14,885
4k random read 3 16 977,000 3,816 0.982 29.00% 16,845
4k random read 3 32 1,531,000 5,980 1.252 42.00% 18,226
4k random read 3 64 2,175,000 8,496 1.764 55.00% 19,773
4k random read 3 128 2,450,000 9,570 3.133 68.00% 18,015
4k random read/write (70%/30%) 3 2 152,700 598 1.035 32.00% 2,386
4k random read/write (70%/30%) 3 4 157,200 614 1.924 32.00% 2,456
4k random read/write (70%/30%) 3 8 102,600 400 4.926 20.00% 2,565
4k random read/write (70%/30%) 3 16 260,200 1,016 4.759 45.00% 2,891
4k random read/write (70%/30%) 3 32 298,700 1,167 9.019 50.00% 2,987
4k random read/write (70%/30%) 3 64 282,900 1,105 20.281 46.00% 3,075
4k random write 3 2 57,500 225 2.085 29.00% 991
4k random write 3 4 70,600 276 3.398 33.00% 1,070
4k random write 3 8 83,300 326 5.761 37.00% 1,126
4k random write 3 16 89,000 348 10.774 41.00% 1,085
4k random write 3 32 86,800 339 22.360 39.00% 1,113
64k random read 3 2 312,000 19,500 0.383 18.00%
64k random read 3 4 470,000 29,375 0.510 26.00%
64k random read 3 8 386,000 24,125 1.259 22.00%
64k random read 3 16 555,600 34,725 1.728 27.00%
64k random read 3 32 776,000 48,500 2.474 38.00%
64k random write 3 1 14,100 881 4.258 13.00%
64k random write 3 2 13,700 856 8.771 14.00%
64k random write 3 4 14,300 894 16.719 14.00%
64k random write 3 8 15,400 962 31.095 16.00%
64k random write 3 16 14,800 925 64.890 19.00%
64k random write 3 32 14,800 925 129.896 18.00%
1024k read 1 1 19,700 19,700 1.015 5.00%
1024k read 1 2 31,000 31,000 1.256 8.00%
1024k read 1 4 41,800 41,800 1.914 11.00%
1024k read 1 8 47,600 47,600 3.358 13.00%
1024k read 1 16 49,600 49,600 6.452 16.00%
1024k write 1 1 1,904 1,904 10.707 4.00%
1024k write 1 2 1,810 1,810 22.290 5.00%
1024k write 1 4 1,981 1,981 40.353 5.00%
1024k write 1 8 2,341 2,341 68.424 5.00%

Overall, S2D shows exceptional performance in both test cases; however, its storage capacity efficiency is about 35.7% and could be even lower if additional space is reserved for in-place repairs.

Benchmarking results in graphs:

With all benchmarks completed and data collected, we can now compare the results using graphical charts for a clearer understanding.

4k random read:

Figure 1: 4K RR (IOPS)


Let’s start with the 4K random read test, where Figure 1 demonstrates the performance in IOPS.

StarWind VSAN NVMe over RDMA starts off strong, delivering 893,000 IOPS at queue depth 4 and climbing to an impressive 1,624,000 IOPS at queue depth 32 before declining slightly at deeper queues.

Microsoft Storage Spaces Direct (S2D) in both configurations (“mirror-only” and “mirror + parity”) showed significant variability. The “mirror-only” setup achieved a peak of 2,615,000 IOPS at queue depth 128, while “mirror + parity” peaked slightly lower at 2,450,000 IOPS. StarWind’s peak at queue depth 32 was about 62% of the S2D “mirror-only” peak and 66% of the “mirror + parity” peak.

This significant variability in S2D’s performance can be traced back to its sophisticated use of Cluster Shared Volumes (CSV). The CSV architecture enables multiple hosts to share access to the same disk, effectively coordinating read and write operations through the SMB 3.0 multichannel protocol. This approach is what gives S2D its impressive peak performance, especially in scenarios where the VM runs on the node that owns the volume. In this case, it can read data directly from the local disk, bypassing the network stack. This local read path minimizes latency and maximizes performance, leading to impressive IOPS numbers (if you want to explore this topic in more detail, please read here or check this article).

However, the very nature of CSV that boosts performance also introduces complexity. S2D’s architecture demands careful monitoring to ensure that VMs are optimally placed, as any deviation can lead to performance dips.

 

Figure 2: 4K RR (Latency)


Latency is a critical factor, and in Figure 2 we analyze latency metrics for the 4K random read test.

We can see that latency increased with queue depth across all configurations. StarWind began with a low latency of 0.267 ms, rising to 4.967 ms at maximum queue depth.

The S2D “mirror-only” configuration had a low starting latency of 0.278 ms but escalated to 2.921 ms at depth 128. The “mirror + parity” setup followed a similar trend, starting at 0.297 ms and ending at 3.133 ms. At maximum queue depth, StarWind’s latency was approximately 70% higher than “mirror-only” and 59% higher than “mirror + parity”.

The latency advantage of S2D is again attributed to local reads. While S2D enjoys lower latency, StarWind VSAN’s performance remains unaffected by VM location, offering simplicity at the cost of slightly higher latency.

 

Figure 3: 4K RR (IOPS per 1% CPU Usage)



Figure 3 showcases the results of the 4K random read test with numjobs=3, measuring IOPS per 1% CPU usage.

StarWind demonstrated a steady increase in IOPS per 1% CPU usage, peaking at 15,321 at queue depth 32 before a slight drop.

S2D “mirror-only” showed the highest efficiency, reaching 21,352 IOPS per 1% CPU at depth 64. The “mirror + parity” configuration had a similar peak of 19,773 IOPS per 1% CPU at the same depth. StarWind’s efficiency was around 72% of S2D “mirror-only” and 77% of “mirror + parity” at their most efficient points.

 

4k random read/write 70/30:

Figure 4: 4K RR/RW 70%/30% (IOPS)


In virtualized environments, the mixed 4K random read/write workload serves as the backbone of daily operations. The ability to maintain high performance with mixed I/O across varied queue depths is critical. Figure 4 shows IOPS for the 4K random read/write (70%/30%) pattern.

Interestingly, with Storage Spaces Direct, there’s a noticeable drop in performance at queue depths 4 and 8. This performance drop is not observed in StarWind VSAN tests. StarWind maintains consistent performance, hitting 596,000 IOPS at queue depth 4 and 756,000 IOPS at queue depth 8.

StarWind holds its ground well and demonstrates impressive stability, achieving 856,000 IOPS at queue depth 16 before experiencing a slight dip. In comparison, the S2D “mirror-only” configuration reached a higher peak of 941,000 IOPS at queue depth 64, while the “mirror + parity” setup lagged behind with a peak of 298,700 IOPS.

StarWind’s peak performance was about 91% of the S2D “mirror-only” configuration, but it significantly outshone the “mirror + parity” setup, delivering nearly three times the IOPS.

 

The main reason for the lower performance in the S2D “mirror + parity” scenario is the overhead of ReFS, which has to move new data from the mirror to the parity tier, leading to performance degradation. As a result, S2D records 152,700 IOPS at queue depth 2, drops to a low of 102,600 IOPS at QD=8, and then peaks at 298,700 IOPS at queue depth 32. In contrast, StarWind’s more consistent performance makes it a strong contender, especially in virtualization environments where mixed workloads are common.
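For readers who want to reproduce this pattern, a sketch follows. The parameter names in the tables (numjobs, IOdepth) match fio’s terminology, so we assume fio was the load generator; the target device and runtime below are placeholders:

```python
# A sketch (assuming fio as the load generator, which the numjobs/IOdepth
# parameters suggest) of driving the 4k 70/30 mixed pattern from inside a
# test VM. The target device and runtime are placeholders.
import subprocess

def run_mixed_4k(iodepth: int, numjobs: int = 3) -> None:
    subprocess.run([
        "fio",
        "--name=randrw-70-30",
        "--filename=/dev/sdb",     # placeholder test device
        "--rw=randrw",
        "--rwmixread=70",          # 70% reads, 30% writes
        "--bs=4k",
        "--direct=1",              # bypass the page cache
        "--ioengine=libaio",
        f"--iodepth={iodepth}",
        f"--numjobs={numjobs}",
        "--time_based", "--runtime=60",
        "--group_reporting",
    ], check=True)

for qd in (2, 4, 8, 16, 32, 64):   # the queue depths from the table above
    run_mixed_4k(qd)
```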

 

Figure 5: 4K RR/RW 70%/30% (Latency)



Figure 5 reveals the latency associated with the 4K random read/write (70%/30%) workload.

Here, the picture is the same: S2D’s mirror-accelerated parity setup struggles, especially when the workload spans both mirror and parity tiers, causing data movement delays.

StarWind’s consistent latency, starting with 0.355 ms at queue depth 2 and rising to 6.001 ms at queue depth 64, ensures smoother operations without the need for complex configurations.

StarWind’s latency at maximum depth was almost identical to S2D “mirror-only” but 70% lower compared to the S2D “mirror + parity” configuration.

 

Figure 6: 4K RR/RW 70%/30% (IOPS per 1% CPU Usage)


In Figure 6, the IOPS per 1% CPU usage for the 4K random read/write (70%/30%) pattern is depicted.

StarWind VSAN shows strong efficiency with 9,085 IOPS per 1% CPU usage at 32 IO depth, nearing the performance of S2D’s “mirror-only” setup, while far surpassing “mirror + parity” configuration. StarWind’s efficiency was approximately 96% of S2D “mirror-only” and three times better than S2D in the “dual-tier” scenario.

For IOPS per 1% CPU usage, S2D’s performance is uneven, fluctuating with workload intensity, whereas StarWind provides steady and reliable results.

 

4k random write:

Figure 7: 4K RW (IOPS)


The 4K random write performance pattern, as shown in Figure 7, further highlights the disparities between Microsoft Storage Spaces Direct and StarWind VSAN.

S2D’s performance varies greatly depending on the workload’s placement within the mirror or parity tier, with significant drops in performance at higher queue depths. StarWind, meanwhile, maintains stable performance, unaffected by workload placement or queue depth.

In pure 4K random write operations, StarWind stands out, achieving 341,000 IOPS at queue depth 8, which is 16% higher than S2D “mirror-only”’s peak of 294,000 IOPS at QD=32.

The S2D “mirror + parity” configuration struggles even more, peaking at only 89,000 IOPS at QD=16. Here, StarWind delivers a remarkable 283% higher write performance than S2D in the “dual-tier” scenario, making it an obvious choice for environments where write speed is critical.

 

Figure 8: 4K RW (Latency)


Latency during 4K random write operations, depicted in Figure 8, confirms StarWind VSAN’s dominance in this test pattern. Starting at 0.595 ms, write latency increases to 9.818 ms, which is still considerably lower than Storage Spaces Direct with the workload in the mirror tier, which begins at 1.171 ms and peaks at 14.504 ms at 16 IO depth.

When comparing StarWind VSAN to Microsoft S2D with workload within “mirror + parity”, the performance gap is even more pronounced, with its latency climbing to 22.360 ms at a 32 IO depth. StarWind’s maximum latency was about 68% of the S2D’s “mirror-only” latency and 44% of “mirror + parity” setup.

In 4K RW pattern, we see that latency under S2D can spike, particularly when ReFS is forced to shuffle data between tiers, while StarWind VSAN’s latency remains consistently lower.

 

Figure 9: 4K RW (IOPS per 1% CPU Usage)


Efficiency in 4K random write workloads is measured in IOPS per 1% CPU usage, as shown in Figure 9.

StarWind’s efficiency in write operations is impressive, with 5,156 IOPS per 1% CPU usage at 32 IO depth, outpacing Storage Spaces Direct with the workload in the mirror tier by about 19%. The “mirror + parity” configuration, once again, falls short, peaking at 1,126 IOPS per 1% CPU at 8 IO depth – 77.6% lower than StarWind.

 

64k random read:

Figure 10: 64K RR (Throughput)


As we shift to larger block sizes, Figure 10 presents throughput for the 64K random read test.

StarWind started with a throughput of 15,187 MiB/s, increasing to 19,062 MiB/s at 32 IO depth.

Storage Spaces Direct with the workload in the mirror tier reached a peak throughput of 53,187 MiB/s at 32 IO depth, and “mirror + parity” setup had a slightly lower peak of 48,500 MiB/s at the same IO depth. StarWind’s maximum throughput was approximately 36% of “mirror-only” and 39% of “mirror + parity”.

With 64K random reads, Microsoft S2D shines again by leveraging local data access to push throughput to impressive levels.

Figure 11: 64K RR (Latency)



Figure 11 delves into latency during 64K random reads. The results align with the throughput data discussed earlier.

StarWind’s latency started at 0.493 ms and increased to 6.292 ms at 32 IO depth.

S2D with the workload in the mirror tier began with a lower latency of 0.374 ms, peaking at 2.252 ms, while the “mirror + parity” configuration began at 0.383 ms, increasing to 2.474 ms at 32 IO depth.

 

Figure 12: 64K RR (CPU Usage)


In Figure 12, we explore CPU usage during 64K random reads.

StarWind shows stable CPU usage, ranging from 25% to 28%, across various queue depths.

The S2D “mirror-only” scenario started at 17% and increased to 38% at IO depth=32. The “mirror + parity” setup followed a similar trend, starting at 18% and reaching 38%.

 

64k random write:

Figure 13: 64K RW (Throughput)



Figure 13 vividly illustrates the stark differences in 64K random write throughput between StarWind VSAN and Microsoft Storage Spaces Direct (S2D).

The performance of Storage Spaces Direct with the workload in the mirror tier shows considerable fluctuations. It starts strong, scoring 7,475 MiB/s at IO depth=1 and 8,153 MiB/s at IO depth=2. At 4 IO depth, S2D achieves 3,197 MiB/s, which drops to 2,419 MiB/s at 8 IO depth before slightly increasing to 2,906 MiB/s at 16 IO depth and rebounding to a high of 10,063 MiB/s at IO depth=32, outrunning StarWind by 139.9%. This erratic pattern mirrors the behavior observed in other tests, such as the 4K random write operations.

StarWind starts off with a lower initial throughput of 2,638 and 3,050 MiB/s at IO depths 1 and 2, but delivers much more consistent performance as the test progresses. At IO depth=4, StarWind clocks in at 3,306 MiB/s, outpacing S2D by 3.4%. The gap widens at 8 IO depth, where StarWind reaches 3,613 MiB/s – a 49.4% lead over S2D. At IO depth=16, StarWind is still ahead with 3,894 MiB/s, outperforming S2D by 33.9%.

A different story unfolds when we examine S2D performance in “mirror + parity” tests. It struggles at IO depth 1, with a low of 881 MiB/s, peaks at 962 MiB/s at IO depth 8, and drops to 925 MiB/s at IO depth 32.

 

Figure 14: 64K RW (Latency)


Latency for 64K random writes is detailed in Figure 14, where StarWind’s performance remains more consistent, avoiding the severe latency spikes observed with S2D at IO depths 4, 8, and 16.

 

Figure 15: 64K RW (CPU usage)


Let’s move on to Figure 15, which compares CPU usage during 64K random writes.

Here, CPU usage follows a similar trend as in the previous 64K random writes figures: Microsoft S2D’s efficiency is better only under specific conditions, while StarWind delivers more reliable usage metrics, ranging from 17% to 21%.

 

1M read:

Figure 16: 1024K R (Throughput)


In Figure 16, we see the throughput results for 1024K read operations.

Microsoft S2D again benefits from local data access, achieving high throughput that significantly outpaces StarWind VSAN. StarWind’s 1024K read throughput ranged from 13,800 MiB/s to 18,900 MiB/s at 16 IO depth. The S2D “mirror-only” setup peaked at 52,300 MiB/s, while “mirror + parity” reached a slightly lower peak of 49,600 MiB/s. StarWind’s peak throughput was approximately 36% of “mirror-only” and 38% of “mirror + parity”.

 

Figure 17: 1024K R (Latency)



Figure 17 shows the latency results during the 1024K read test.

The resulting latency is predictably lower in Microsoft Storage Spaces Direct. StarWind’s latency increased from 1.451 ms to 16.976 ms at a 16 IO depth, whereas “mirror-only” S2D showed lower latency, starting at 1.004 ms and peaking at 6.114 ms at a 16 IO depth.

“Mirror + parity” followed a similar pattern, starting at 1.015 ms and peaking at 6.452 ms. StarWind’s maximum latency was roughly 2.8× that of “mirror-only” and 2.6× that of S2D with workloads in both mirror and parity tiers.

 

Figure 18: 1024K R (CPU Usage)



Figure 18 highlights CPU usage during 1024K reads.

StarWind’s CPU usage ranged from 15% to 18%, while the S2D “mirror-only” setup started at 5% and increased to 16% at 16 IO depth. Even when workloads span both tiers, S2D maintains almost the same CPU usage levels as in the “mirror-only” benchmarks.

As IO depth increases, the CPU usage gap between StarWind VSAN and S2D narrows. S2D consistently uses less CPU across all IO depths, with the difference being most pronounced at lower IO depths (66.67% less at 1 IO depth than StarWind VSAN) and gradually decreasing to 11.11% less at 16 IO depth.
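For clarity, here is how those percentages fall out of the CPU figures quoted above; it is simple relative-difference arithmetic:

```python
# How the quoted CPU-saving percentages are derived: relative difference
# between StarWind's and S2D "mirror-only" node CPU in the 1024k read test.
starwind = {1: 15, 16: 18}    # % node CPU at IO depth 1 and 16
s2d_mirror = {1: 5, 16: 16}

for qd in (1, 16):
    saving = (starwind[qd] - s2d_mirror[qd]) / starwind[qd] * 100
    print(f"IO depth {qd}: S2D uses {saving:.2f}% less CPU")
# IO depth 1: S2D uses 66.67% less CPU
# IO depth 16: S2D uses 11.11% less CPU
```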


1M write:

Figure 19: 1024K W (Throughput)


When we shift our focus to 1024K sequential write throughput, Figure 19 underlines some clear distinctions in performance between StarWind VSAN and Storage Spaces Direct (S2D).

At IO depth=1, S2D in mirror-accelerated parity mode with the workload in the mirror tier reaches a throughput of 9,887 MiB/s, while StarWind VSAN manages 3,703 MiB/s. This represents an impressive 167% higher throughput for S2D.

As the IO depth increases to 8, S2D maintains its lead, achieving 10,500 MiB/s compared to StarWind’s 4,479 MiB/s. This results in a 134% higher throughput for S2D at this IO depth.

However, this performance advantage for S2D is primarily evident when the workload does not spill out of the mirror tier.

If the workload hits both tiers – mirror and parity – the results change significantly. Under these conditions, StarWind VSAN exhibits a more stable performance curve, delivering 94% higher throughput than S2D in “mirror + parity” at 1 IO depth to an impressive 91% higher at an 8 IO depth.

 

Figure 20: 1024K W (Latency)


Latency during 1024K writes, as shown in Figure 20, paints much the same picture.

StarWind’s latency increases from 5.399 ms at 1 IO depth to 35.707 ms at 8 IO depth, while the S2D “mirror-only” configuration has a lower latency peak of 15.250 ms. The “mirror + parity” setup, however, suffers from extremely high latency, peaking at 68.188 ms. Against this configuration, StarWind VSAN demonstrates significantly lower latency, with a maximum roughly 48% below S2D’s peak.

 

Figure 21: 1024K W (CPU Usage)


Lastly, Figure 21 compares CPU usage during 1024K writes, with StarWind being significantly outmatched by both S2D setups.

StarWind VSAN’s CPU utilization increases from 16% at 1 IO depth to 19% as the queue depth rises. The S2D “mirror-only” configuration demonstrates a much lower CPU usage, capping at 10% at its highest throughput at IO depth=8. This efficiency gives S2D “mirror-only” an edge in terms of IOPS per CPU usage.

What’s really interesting is that when the workload spans both tiers of S2D, it exhibits even lower CPU usage, starting at just 4% at 1 IO depth and rising modestly to 5% at 2, 4, and 8 IO depths.

Additional benchmarking: 1 VM, 1 numjobs, 1 iodepth.

To gain a deeper understanding of how StarWind Virtual SAN and Storage Spaces Direct (S2D) perform under specific synthetic conditions, we conducted additional benchmarks focusing on a single-thread, single-queue scenario (numjobs=1, iodepth=1). Typically, this is the most effective way to measure storage access latency in an ideal scenario. The benchmarks focus on 4k random read and write patterns, including synchronous write operations.
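A sketch of the three latency probes follows; as before, we assume fio as the load generator, and the device path is a placeholder:

```python
# A sketch of the three single-thread latency probes (again assuming fio;
# the device path is a placeholder). sync=1 opens the target with O_SYNC,
# so every write must be durable before fio issues the next one.
import subprocess

def latency_probe(rw: str, sync: bool = False) -> None:
    cmd = [
        "fio", f"--name=latency-{rw}",
        "--filename=/dev/sdb",            # placeholder test device
        f"--rw={rw}", "--bs=4k",
        "--direct=1", "--ioengine=libaio",
        "--iodepth=1", "--numjobs=1",     # one thread, one outstanding IO
        "--time_based", "--runtime=60",
        "--group_reporting",
    ]
    if sync:
        cmd.append("--sync=1")            # synchronous write variant
    subprocess.run(cmd, check=True)

latency_probe("randread")
latency_probe("randwrite")
latency_probe("randwrite", sync=True)
```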

Benchmark results in a table:

StarWind VSAN NVMe-oF HA (RDMA) – Host mirroring + MDRAID5 (1 VM)
Pattern Numjobs IOdepth IOPs MiB/s Latency (ms)
4k random read 1 1 2,974 12 0.335
4k random write 1 1 2,379 10 0.419
4k random write (synchronous) 1 1 967 4 1.032

 

Storage Spaces Direct (RDMA) – Nested mirror accelerated parity – Data in mirror tier (1 VM)
Pattern Numjobs IOdepth IOPs MiB/s Latency (ms)
4k random read 1 1 7,231 28 0.137
4k random write 1 1 5,660 22 0.175
4k random write (synchronous) 1 1 2,816 11 0.353

 

Storage Spaces Direct (RDMA) – Nested mirror accelerated parity – Data in mirror and parity tiers (1 VM)
Pattern Numjobs IOdepth IOPs MiB/s Latency (ms)
4k random read 1 1 5,922 23 0.167
4k random write 1 1 2,575 10 0.387
4k random write (synchronous) 1 1 1,754 7 0.568

Benchmark results in graphs:

This section presents visual comparisons of the performance and latency metrics across storage configurations under research.

4k random read:

Figure 1: 4K RR (IOPS)



Figure 1 demonstrates IOPS for the 4K random read test at 1 IO depth with numjobs=1.

Here, Storage Spaces Direct (S2D) with data in the mirror tier outshines the other configurations. It achieves 7,231 IOPS, which is 143% higher than StarWind VSAN’s 2,974 IOPS.

This superior performance is again due to S2D’s ability to perform local reads at the host level, whereas StarWind VSAN operates within a VM, leading to a longer IO datapath.

 

Even when data spans both the mirror and parity tiers, S2D still leads with 5,922 IOPS, outperforming StarWind by 99%.

Figure 2: 4K RR (Latency)



Latency metrics for the 4K random read test at 1 IO depth, as shown in Figure 2, similarly favor Storage Spaces Direct with the workload in the mirror tier, which records a swift 0.137 ms – 59% lower than StarWind’s 0.335 ms.

Even when data spans both tiers, S2D maintains a respectable 0.167 ms, still 50% lower than StarWind.

 

4k random write:

Figure 3: 4K RW (IOPS)



Figure 3 showcases the results of the 4K random write test at IO depth=1 with numjobs=1.

For 4k random writes, S2D with data in the mirror tier proves its prowess, achieving 5,660 IOPS, which is 138% higher than StarWind’s 2,379 IOPS. The superior performance of S2D is due to the direct writing to the mirror tier, bypassing the need to calculate parity, which is resource-intensive. For a more detailed explanation of how reading and writing occur in a mirror-accelerated parity scenario, please refer to the following link.

In scenarios where data spans both the mirror and parity tiers, S2D’s performance drops to 2,575 IOPS, but it still edges out StarWind by 8%. The additional step of invalidating data in the parity tier in S2D slightly reduces performance compared to when the workload is fully contained within the mirror tier. In contrast, StarWind VSAN writes directly to the MDRAID5 array, resulting in read-modify-write (RMW) operations, which further reduce performance.
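The RMW cost mentioned above is the classic RAID-5 small-write penalty: a sub-stripe write must read the old data block and old parity before writing both back, so each logical write costs four backend IOs. A rough, illustrative model (the backend IOPS figure is hypothetical, not a measured value):

```python
# Illustrative model of the RAID-5 read-modify-write penalty behind the
# MDRAID5 numbers above (hypothetical backend IOPS, not measured values).
# A sub-stripe random write must read the old data block and old parity,
# then write new data and new parity: four backend IOs per logical write.
RAID5_WRITE_PENALTY = 4
MIRROR_WRITE_PENALTY = 2   # a two-way mirror just writes both copies

def effective_write_iops(backend_iops: int, penalty: int) -> float:
    return backend_iops / penalty

backend = 10_000  # hypothetical raw IOPS of the drives behind the array
print(effective_write_iops(backend, RAID5_WRITE_PENALTY))   # 2500.0
print(effective_write_iops(backend, MIRROR_WRITE_PENALTY))  # 5000.0
```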

 

Figure 4: 4K RW (Latency)



Moving on to Figure 4, we examine the latency metrics for 4K random writes.

No surprises here. S2D in the mirror tier shows a clear advantage with a latency of 0.175 ms – about 58% lower than StarWind’s 0.419 ms. This advantage stems from S2D’s direct writing to the mirror tier, bypassing the parity calculations that slow down write operations.

When S2D data is spread across both tiers, the latency increases to 0.387 ms but remains about 8% lower than StarWind’s.

These results suggest that S2D can more effectively manage latency in 4K write operations at IO depth=1 with numjobs=1, ensuring quicker data processing, while StarWind’s longer IO datapath from inside a VM increases latency.

 

4k random write (synchronous):

Figure 5: 4K RW Synchronous (IOPS)



In our synchronous 4K RW single-threaded IO tests, as shown in Figure 5, S2D in the mirror tier reaches 2,816 IOPS, again outperforming StarWind’s 967 IOPS by a significant 191%. This difference is again due to S2D’s ability to write directly to the mirror tier, avoiding the overhead of parity calculations.

When S2D data is distributed across both tiers, the performance drops to 1,754 IOPS but still surpasses StarWind by 81%.

Figure 6: 4K RW Synchronous (Latency)


The latency figures for synchronous 4K RW single-threaded IO, depicted in Figure 6, tell a similar story, with S2D’s mirror tier configuration offering a quick 0.353 ms – roughly a third of StarWind’s 1.032 ms. Even with data in both tiers, S2D’s latency of 0.568 ms is about 45% lower than StarWind’s.

This consistency in performance highlights S2D’s capability in managing synchronous write operations efficiently, while StarWind’s VM-based operation leads to a longer IO datapath and higher latency.

Conclusion

In conclusion, both Storage Spaces Direct and StarWind VSAN bring distinct strengths and weaknesses to the table, each catering to different needs within your IT infrastructure.

Storage Spaces Direct, being a native Microsoft solution, excels in read performance, especially when virtual machines are aligned with their corresponding volume-owning nodes. However, this advantage hinges on careful workload management. If the VMs aren’t perfectly aligned, or if the workload spills over from the mirror tier to the parity tier, you might see a significant dip in write performance as we observed during 4K and 64K random-write tests. Additionally, S2D’s capacity efficiency is somewhat compromised, especially when you factor in the need to reserve extra space for fault tolerance.

On the flip side, StarWind VSAN shines in environments that demand consistent write and mixed IO performance. Its stable read and write performance, regardless of VM placement, and superior capacity efficiency make it a compelling option. However, the absence of local read optimization and the need to deploy an additional VM (StarWind VSAN CVM) are considerations that might tip the scale depending on your specific needs.

Ultimately, if your priority is top-notch read performance and you’re prepared to closely monitor your workloads, Storage Spaces Direct could be your go-to. But if you’re looking for reliable write performance and better capacity efficiency, StarWind VSAN might be the better fit.


