Thursday, February 22, 2024

Announcing Microsoft’s open automation framework to red team generative AI Systems

Today we are releasing an open automation framework, PyRIT (Python Risk Identification Toolkit for generative AI), to empower security professionals and machine learning engineers to proactively find risks in their generative AI systems.

At Microsoft, we believe that security practices and generative AI responsibilities need to be a collaborative effort. We are deeply committed to developing tools and resources that enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances. This tool, and the previous investments we have made in red teaming AI since 2019, represents our ongoing commitment to democratize securing AI for our customers, partners, and peers.   

The need for automation in AI Red Teaming

Red teaming AI systems is a complex, multistep process. Microsoft’s AI Red Team leverages a dedicated interdisciplinary group of security, adversarial machine learning, and responsible AI experts. The Red Team also leverages resources from the entire Microsoft ecosystem, including the Fairness center in Microsoft Research; AETHER, Microsoft’s cross-company initiative on AI Ethics and Effects in Engineering and Research; and the Office of Responsible AI. Our red teaming is part of our larger strategy to map AI risks, measure the identified risks, and then build scoped mitigations to minimize them.

Over the past year, we have proactively red teamed several high-value generative AI systems and models before they were released to customers. Through this journey, we found that red teaming generative AI systems is markedly different from red teaming classical AI systems or traditional software in three prominent ways.

1. Probing both security and responsible AI risks simultaneously

We first learned that while red teaming traditional software or classical AI systems mainly focuses on identifying security failures, red teaming generative AI systems includes identifying both security risk as well as responsible AI risks. Responsible AI risks, like security risks, can vary widely, ranging from generating content that includes fairness issues to producing ungrounded or inaccurate content. AI red teaming needs to explore the potential risk space of security and responsible AI failures simultaneously.

A diagram of a generative AI system. The input prompt is processed by App Specific Logic and then passed to the Generative AI Model, which may use additional skills, functions, or plugins if needed. The Generative AI Model’s response is then processed by the App Specific Logic to provide the GenAI Created Content as the system’s response.

2. Generative AI is more probabilistic than traditional red teaming

Secondly, we found that red teaming generative AI systems is more probabilistic than traditional red teaming. Put differently, executing the same attack path multiple times on traditional software systems would likely yield similar results. However, generative AI systems have multiple layers of non-determinism; in other words, the same input can provide different outputs. This could be because of the app-specific logic; the generative AI model itself; the orchestrator that controls the output of the system can engage different extensibility or plugins; and even the input (which tends to be language), with small variations can provide different outputs. Unlike traditional software systems with well-defined APIs and parameters that can be examined using tools during red teaming, we learned that generative AI systems require a strategy that considers the probabilistic nature of their underlying elements.

3. Generative AI systems architecture varies widely 

Finally, the architecture of these generative AI systems varies widely: from standalone applications to integrations in existing applications to the input and output modalities, such as text, audio, images, and videos.

These three differences make a triple threat for manual red team probing. To surface just one type of risk (say, generating violent content) in one modality of the application (say, a chat interface on browser), red teams need to try different strategies multiple times to gather evidence of potential failures. Doing this manually for all types of harms, across all modalities across different strategies, can be exceedingly tedious and slow.

This does not mean automation is always the solution. Manual probing, though time-consuming, is often needed for identifying potential blind spots. Automation is needed for scaling but is not a replacement for manual probing. We use automation in two ways to help the AI red team: automating our routine tasks and identifying potentially risky areas that require more attention.

In 2021, Microsoft developed and released a red team automation framework for classical machine learning systems. Although Counterfit still delivers value for traditional machine learning systems, we found that for generative AI applications, Counterfit did not meet our needs, as the underlying principles and the threat surface had changed. Because of this, we re-imagined how to help security professionals to red team AI systems in the generative AI paradigm and our new toolkit was born.

We like to acknowledge out that there have been work in the academic space to automate red teaming such as PAIR and open source projects including garak.

PyRIT for generative AI Red teaming 

PyRIT is battle-tested by the Microsoft AI Red Team. It started off as a set of one-off scripts as we began red teaming generative AI systems in 2022. As we red teamed different varieties of generative AI systems and probed for different risks, we added features that we found useful. Today, PyRIT is a reliable tool in the Microsoft AI Red Team’s arsenal.

A diagram of interactions between three components, the PyRIT Agent, the Target Gen AI System, and the PyRIT Scoring Engine. The PyRIT Agent first communicates with the Target Gen AI System. Then, it scores the response with the PyRIT Scoring Engine. Finally, it sends a new prompt to the Target Gen AI System based on scoring feedback.

The biggest advantage we have found so far using PyRIT is our efficiency gain. For instance, in one of our red teaming exercises on a Copilot system, we were able to pick a harm category, generate several thousand malicious prompts, and use PyRIT’s scoring engine to evaluate the output from the Copilot system all in the matter of hours instead of weeks.

PyRIT is not a replacement for manual red teaming of generative AI systems. Instead, it augments an AI red teamer’s existing domain expertise and automates the tedious tasks for them. PyRIT shines light on the hot spots of where the risk could be, which the security professional than can incisively explore. The security professional is always in control of the strategy and execution of the AI red team operation, and PyRIT provides the automation code to take the initial dataset of harmful prompts provided by the security professional, then uses the LLM endpoint to generate more harmful prompts.

However, PyRIT is more than a prompt generation tool; it changes its tactics based on the response from the generative AI system and generates the next input to the generative AI system. This automation continues until the security professional’s intended goal is achieved.

PyRIT components

Abstraction and Extensibility is built into PyRIT. That’s because we always want to be able to extend and adapt PyRIT’s capabilities to new capabilities that generative AI models engender. We achieve this by five interfaces: target, datasets, scoring engine, the ability to support multiple attack strategies and providing the system with memory.

An overview of PyRIT components including local and remote targets, static and dynamic datasets, the scoring engine with PyRIT itself or via API, attack strategies for single or multi-turn conversations, and memory with storage and other utilities.
  • Targets: PyRIT supports a variety of generative AI target formulations—be it as a web service or embedded in application. PyRIT out of the box supports text-based input and can be extended for other modalities as well. ​PyRIT supports integrating with models from Microsoft Azure OpenAI Service, Hugging Face, and Azure Machine Learning Managed Online Endpoint, effectively acting as an adaptable bot for AI red team exercises on designated targets, supporting both single and multi-turn interactions. 
  • Datasets: This is where the security professional encodes what they want the system to be probed for. It could either be a static set of malicious prompts or a dynamic prompt template. Prompt templates allow the security professionals to automatically encode multiple harm categories—security and responsible AI failures—and leverage automation to pursue harm exploration in all categories simultaneously. To get users started, our initial release includes prompts that contain well-known, publicly available jailbreaks from popular sources.
  • Extensible scoring engine: The scoring engine behind PyRIT offers two options for scoring the outputs from the target AI system: using a classical machine learning classifier or using an LLM endpoint and leveraging it for self-evaluation. Users can also use Azure AI Content filters as an API directly.  
  • Extensible attack strategy: PyRIT supports two styles of attack strategy. The first is single-turn; in other words, PyRIT sends a combination of jailbreak and harmful prompts to the AI system and scores the response. It also supports multiturn strategy, in which the system sends a combination of jailbreak and harmful prompts to the AI system, scores the response, and then responds to the AI system based on the score. While single-turn attack strategies are faster in computation time, multiturn red teaming allows for more realistic adversarial behavior and more advanced attack strategies.
  • Memory: PyRIT’s tool enables the saving of intermediate input and output interactions providing users with the capability for in-depth analysis later on. The memory feature facilitates the ability to share the conversations explored by the PyRIT agent and increases the range explored by the agents to facilitate longer turn conversations.

Get started with PyRIT

PyRIT was created in response to our belief that the sharing of AI red teaming resources across the industry raises all boats. We encourage our peers across the industry to spend time with the toolkit and see how it can be adopted for red teaming your own generative AI application.

  1. Get started with the PyRIT project here. To get acquainted with the toolkit, our initial release has a list of demos including common scenarios notebooks, including how to use PyRIT to automatically jailbreak using Lakera’s popular Gandalf game.
  2. We are hosting a webinar on PyRIT to demonstrate how to use it in red teaming generative AI systems. If you would like to see PyRIT in action, please register for our webinar in partnership with the Cloud Security Alliance.
  3. Learn more about what Microsoft’s AI Red Team is doing and explore more resources on how you can better prepare your organization for securing AI.
  4. Watch Microsoft Secure online to explore more product innovations to help you take advantage of AI safely, responsibly, and securely. 

Contributors 

Project created by Gary Lopez; Engineering: Richard Lundeen, Roman Lutz, Raja Sekhar Rao Dheekonda, Dr. Amanda Minnich; Broader involvement from Shiven Chawla, Pete Bryan, Peter Greko, Tori Westerhoff, Martin Pouliot, Bolor-Erdene Jagdagdorj, Chang Kawaguchi, Charlotte Siska, Nina Chikanov, Steph Ballard, Andrew Berkley, Forough Poursabzi, Xavier Fernandes, Dean Carignan, Kyle Jackson, Federico Zarfati, Jiayuan Huang, Chad Atalla, Dan Vann, Emily Sheng, Blake Bullwinkel, Christiano Bianchet, Keegan Hines, eric douglas, Yonatan Zunger, Christian Seifert, Ram Shankar Siva Kumar. Grateful for comments from Jonathan Spring. 

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Announcing Microsoft’s open automation framework to red team generative AI Systems appeared first on Microsoft Security Blog.



from Microsoft Security Blog https://ift.tt/TukZQ8d
via IFTTT

Get the most out of Microsoft Copilot for Security with good prompt engineering

The process of writing, refining, and optimizing inputs—or “prompts”—to encourage generative AI systems to create specific, high-quality outputs is called prompt engineering. It helps generative AI models organize better responses to a wide range of queries—from the simple to the highly technical. The basic rule is that good prompts equal good results.

Prompt engineering is a way to “program” generative AI models in natural language, without requiring coding experience or deep knowledge of datasets, statistics, and modeling techniques. Prompt engineers play a pivotal role in crafting queries that help generative AI models learn not just the language, but also the nuance and intent behind the query. A high-quality, thorough, and knowledgeable prompt in turn influences the quality of AI-generated content, whether it’s images, code, data summaries, or text.

Prompt engineering is important because it allows AI models to produce more accurate and relevant outputs. By creating precise and comprehensive prompts, an AI model is better able to synthesize the task it is performing and generate responses that are more useful to humans.

The benefits of prompt engineering include:

Icon of an odometer.

Improving the speed and efficiency of generative AI tasks, such as writing complex queries, summarizing data, and generating content.

Graphic icon of two text bubbles.

Enhancing the skills and confidence of generative AI users—especially novices—by providing guidance and feedback in natural language.

Graphic icon of three building blocks.

Leveraging the power of foundation models, which are large language models built on transformer architecture and packed with information, to produce optimal outputs with few revisions.

Icon of a sliding graph chart.

Helping mitigate biases, confusion, and errors in generative AI outputs by fine-tuning effective prompts.

Icon of a bridge.

Helping bridge the gap between raw queries and meaningful AI-generated responses—and reduce the need for manual review and post-generation editing.

Why good prompts are important

Prompt engineering is a skill that can be learned and improved over time by experimenting with different prompts and observing the results. There are also tools and resources that can help people with prompt engineering, such as prompt libraries, prompt generators, or prompt evaluators.

The following examples demonstrate the importance of clarity, specificity, and context in crafting effective prompts for generative AI.

Examples of poor prompts for Copilot for Security.

How to use prompts in security

Prompting is very important in Copilot, as it is the main way to query the generative AI system and get the desired outputs. Prompting is the process of writing, refining, and optimizing inputs—or “prompts”—to encourage Copilot for Security to create specific, high-quality outputs.

Example of an optimal prompt providing specific instructions for the “top Microsoft Sentinel incidents created overnight.”

Effective prompts give Copilot for Security adequate and useful parameters to generate valuable responses. Security analysts or researchers should include the following elements when writing a prompt:

  • Goal: Specific, security-related information that you need.
  • Context: Why you need this information or how you’ll use it.
  • Expectations: Format or target audience you want the response tailored to.
  • Source: Known information, data source(s), or plugins Copilot for Security should use.

By creating precise and comprehensive prompts, Copilot for Security can better understand the task it is performing and generate responses that are more useful to humans. Prompting also helps mitigate biases, confusion, and errors in Copilot for Security outputs by fine-tuning effective prompts.

Output of the “top 5 Microsoft Sentinel Incidents created overnight” prompt output showing list of the requested incidents.

Save time with top prompts

Featured prompts are a set of predefined prompts that are designed to help you accomplish common security-related tasks with Copilot for Security. They are based on best practices and feedback from security experts and customers.

Featured prompts are a set of predefined prompts that are designed to help you accomplish common security-related tasks with Copilot for Security. They are based on best practices and feedback from security experts and customers.

You can also access the featured prompts by typing a forward slash (/) in the prompt bar and selecting the one that matches your objective. For example, you can use the featured prompt “Analyze a script or command” to get information on a suspicious script or command.

Some of the featured prompts available in Copilot for Security are:

  • Analyze a script or command: This prompt helps you analyze and interpret a command or script. It identifies the script language, the purpose of the script, the potential risks, and the recommended actions.
  • Summarize a security article: This prompt helps you summarize a security article or blog post. It extracts the main points, the key takeaways, and the implications for your organization.
  • Generate a security query: This prompt helps you generate a security query for a specific data source, such as Microsoft Sentinel, Microsoft Defender XDR, or Microsoft Azure Monitor. It converts your natural language request into a query language, such as Kusto Query Language (KQL) or Microsoft Graph API.
  • Generate a security report: This prompt helps you generate a security report for a specific audience, such as executives, managers, or analysts. It uses the information from your previous prompts and responses to create a concise and informative report.
Example of the “featured prompts” tab within Microsoft Security for Copilot.

Use promptbooks to save time

A promptbook is a collection of prompts that have been put together to accomplish a specific security-related task—such as incident investigation, threat actor profile, suspicious script analysis, or vulnerability impact assessment. You can use the existing promptbooks as templates or examples and modify them to suit your needs.

Dashboard view of Promptbook examples in Microsoft Copilot for Security.

Using promptbooks in Copilot is a way to accomplish specific security-related tasks with a series of prompts that run in sequence. Each promptbook requires a specific input—such as an incident number, a threat actor name, or a script string—and then generates a response based on the input and the previous prompts. For example, the incident investigation promptbook can help you summarize an incident, assess its impact, and provide remediation steps.

Some of the promptbooks available in Copilot for Security are:

  • Incident investigation: This promptbook helps you investigate an incident by using either the Microsoft Sentinel or Microsoft Defender XDR plugin. It generates an executive report for a nontechnical audience that summarizes the investigation.
  • Threat actor profile: This promptbook helps you get an executive summary about a specific threat actor. It searches for any existing threat intelligence articles about the actor, including known tools, tactics, and procedures (TTPs) and indicators, and provides remediation suggestions.
  • Suspicious script analysis: This promptbook helps you analyze and interpret a command or script. It identifies the script language, the purpose of the script, the potential risks, and the recommended actions.
  • Vulnerability impact assessment: This promptbook helps you assess the impact of a publicly disclosed vulnerability on your organization. It provides information on the vulnerability, the affected products, the exploitation status, and the mitigation steps.

To use a promptbook, you can either type an asterisk (*) in the prompt bar and select the promptbook you want to use or select the Promptbooks button above the prompt area. Then you can provide the required input and wait for Copilot for Security to generate the response. You can also ask follow-up questions or provide feedback in the same session.

Dashboard view of how to access and search for Promptbooks within Microsoft Copilot for Security.

Common Copilot prompts

The following list of prompts is an excerpt of the Top 10 prompts infographic, which provides prompts utilized and recommended by customers and partners with great success. Use them to spark ideas for creating your own prompts.

Ten examples of suggested prompts within Microsoft Copilot for Security.

Get started with prompts in Copilot

We know creating precise and comprehensive prompts produces accurate, relevant responses. By understanding the fundamentals of good prompt engineering, security analysts can improve the speed and efficiency of generative AI tasks, mitigate biases, reduce output errors, and more—all without requiring coding experience or deep knowledge of datasets, statistics, and modeling techniques. The prompt engineering best practices described here, along with featured prompts and promptbooks included in Copilot for Security, can help security teams utilize the power of generative AI to improve their workflow, focus on higher-level tasks, and minimize tedious work.

Learn more about Microsoft Copilot for Security.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Get the most out of Microsoft Copilot for Security with good prompt engineering appeared first on Microsoft Security Blog.



from Microsoft Security Blog https://ift.tt/On0vGKD
via IFTTT

Navigating NIS2 requirements with Microsoft Security solutions

The Network and Information Security Directive 2 (NIS2) is a continuation and expansion of the previous European Union (EU) cybersecurity directive introduced back in 2016. With NIS2, the EU expands the original baseline of cybersecurity risk management measures and reporting obligations to include more sectors and critical organizations. The purpose of establishing a baseline of security measures for digital service providers and operators of essential services is to mitigate the risk of cyberthreats and improve the overall level of cybersecurity in the EU. It also introduces more accountability—through strengthened reporting obligations and increased sanctions or penalties. Organizations have until October 17, 2024, to improve their security posture before they’ll be legally obligated to live up to the requirements of NIS2. The broadened directive stands as a critical milestone for tech enthusiasts and professionals alike. Our team at Microsoft is excited to lead the charge in decoding and navigating this new regulation—especially its impact on compliance and how cloud technology can help organizations adapt. In this blog, we’ll share the key features of NIS2 for security professionals, how your organization can prepare, and how Microsoft Security solutions can help. And for business leaders, check out our downloadable guide for high-level insights into the people, plans, and partners that can help shape effective NIS2 compliance strategies. 

NIS2 key features 

As we take a closer look at the key features of NIS2, we see the new directive includes risk assessments, multifactor authentication, security procedures for employees with access to sensitive data, and more. NIS2 also includes requirements around supply chain security, incident management, and business recovery plans. In total, the comprehensive framework ups the bar from previous requirements to bring: 

  • Stronger requirements and more affected sectors.
  • A focus on securing business continuity—including supply chain security.
  • Improved and streamlined reporting obligations.
  • More serious repercussions—including fines and legal liability for management.
  • Localized enforcement in all EU Member States. 

Preparing for NIS2 may take considerable effort for organizations still working through digital transformation. But it doesn’t have to be overwhelming. 

logo, company name

NIS2 guiding principles guide

Get started on your transformation with three guiding principles for preparing for NIS2.

Proactive defense: The future of cloud security

At Microsoft, our approach to NIS2 readiness is a blend of technical insight, innovative strategies, and deep legal understanding. We’re dedicated to nurturing a security-first mindset—one that’s ingrained in every aspect of our operations and resonates with the tech community’s ethos. Our strategy for NIS2 compliance addresses the full range of risks associated with cloud technology. And we’re committed to ensuring that Microsoft’s cloud services set the benchmark for regulatory compliance and cybersecurity excellence in the tech world. Now more than ever, cloud technology is integral to business operations. With NIS2, organizations are facing a fresh set of security protocols, risk management strategies, and incident response tactics. Microsoft cloud security management tools are designed to tackle these challenges head-on, helping to ensure a secure digital environment for our community.  

NIS2 compliance aligns to the same Zero Trust principles addressed by Microsoft Security solutions, which can help provide a solid wall of protection against cyberthreats across any organization’s entire attack surface. If your security posture is aligned with Zero Trust, you’re well positioned to assess and help assure your organization’s compliance with NIS2. 

Diagram conveying the multiple cyber threats across an organizations entire attack surface.
Figure 1. Risks associated with securing an organizations external attack surface. 

For effective cybersecurity, it takes a fully integrated approach to protection and streamlined threat investigation and response. Microsoft Security solutions provide just that, with: 

  • Microsoft Sentinel – Gain visibility and manage threats across your entire digital estate with a modern security information and event management (SIEM). 
  • Microsoft XDR – Stop attacks and coordinate response across assets with extended detection and response (XDR) built into Microsoft 365 and Azure. 
  • Microsoft Defender Threat Intelligence – Expose and eliminate modern threats using dynamic cyberthreat intelligence. 

Next steps for navigating new regulatory terrain 

The introduction of NIS2 is reshaping the cybersecurity landscape. We’re at the forefront of this transformation, equipping tech professionals—especially Chief Information Security Officers and their teams—with the knowledge and tools to excel in this new regulatory environment. To take the next step for NIS2 in your organization, download our NIS2 guiding principles guide or reach out to your Microsoft account team to learn more. 

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 

The post Navigating NIS2 requirements with Microsoft Security solutions appeared first on Microsoft Security Blog.



from Microsoft Security Blog https://ift.tt/h4odl6m
via IFTTT

Introducing more access options with Multi-Workspace URL!

How users access their resources is a crucial part of the Citrix environment. Enterprises want  to make access as easy as possible while maintaining security and delivering the right resources. This is often done by separating the access layer into different URLs. Though secure, this process takes time for the administrators to implement to ensure they are maintaining the required security and resources for each use case. 

Our teams are continuously working to simplify processes. To address this need to simplify user access, we are excited to announce that the Multi-Workspace URL functionality is now available natively within Citrix Cloud! We recognize that many of our enterprise customers have mature access tiers often requiring more than a singular access URL. You can now create up to 10 URLs in your Citrix Cloud environment!

This functionality helps address three key concerns with multiple access points: branding, authentication methods, and resource filtering. Read further about how our new functionality can address these key areas.

Branding

There are often multiple groups of users accessing a Citrix environment. These users may be from a variety of business units such as HR and IT or even third parties like contractors or vendors. In order to help users identify that they are at their correct access URL, companies often employ different branding and customizations to make them easily identifiable at a glance. You have the ability to customize your branding per URL or by user group, or both! Leveraging Multi-Workspace URL allows admins to configure store URLs that are easily remembered by users. This simplifies access and increases productivity. 

Theme prioritization is also supported with Multi-Workspace URL. The theme that will be applied is the highest priority theme that applies to a given user group membership and Workspace URL, or both. Now you can ensure your Citrix UI looks great, and make it easier for your users to know they’re in the right spot!

Authentication

A common reason for different access points is that user groups require different authentication methods. To utilize different authentication methods within your Citrix environment, we will soon be introducing a new feature to private Tech Preview called Conditional Authentication. With Conditional Authentication, customers will be able to select authentication methods based on conditionals like a user’s domain,group, or Workspace URL. If you are interested in being a part of the private Tech preview, please fill out this form

Adaptive Authentication can also be used with multi-URL to implement nFactor flows. These nFactor flows allow for different authentication methods depending on whatever factors you configure — including your different Workspace URLs.

With Multi-Workspace URL, you can ensure your end users use  the right authentication methods for their use case. If migrating from an on-prem deployment, these features will help ensure your users can continue to use the URL and authentication methods they are familiar with, ensuring a smooth transition and user experience. 

Resource Filtering

Once a user logs on into the environment, they need to have access to all their vital resources to get their work done. With users potentially accessing multiple Workspace URLs, you must  ensure they only see their specific resources in each store. This is where our new resource filtering functionality can help.

With this feature rollout, we are adding new smart access functionality to  the Citrix UI. This allows for filtering within a delivery group based on the specific URL the user is accessing from.  These access policy filters in the following ways:

  • Match any: The access policy allows access if any of the given filter criteria is matched by the incoming request.
  • Match all: The access policy allows access only if all of the given filter criteria are matched by the incoming request.
  • Exclude: The access policy will exclude the rule if this filter criteria is met. 

Control access to your resources like never before with our new smart access policies for Workspace URLs.

Learn More

Check out our product documentation to learn more about this new feature.


Disclaimer: This publication may include references to the planned testing, release and/or availability of Cloud Software Group, Inc. products and services. The information provided in this publication is for informational purposes only, its contents are subject to change without notice, and it should not be relied on in making a purchasing decision. The information is not a commitment, promise or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for products remains at the sole discretion of Cloud Software Group, Inc.



from Citrix Blogs https://ift.tt/Y8eEopl
via IFTTT

Gemini models are coming to Performance Max

New improvements and AI-powered features are coming to Performance Max, including Gemini models for text generation.

from AI https://ift.tt/OlDJKdo
via IFTTT

How to Leverage Case Playbooks for Compliance 

Mature security processes should involve leveraging playbooks to guide their responses to potential breaches and ensure compliance with regulations. These playbooks serve as dynamic blueprints, outlining predefined steps, protocols, and best practices tailored to specific scenarios. Harnessing the power of playbooks in cybersecurity not only streamlines incident response but also empowers organizations to preemptively mitigate risks, fortify defenses, and maintain regulatory adherence. 

How LogRhythm Case Playbooks Help with Compliance 

LogRhythm Case Playbooks make it easier and more repeatable for analysts to respond to incidents and security events within a security information and event management (SIEM) platform. Through the rich feature set of Case Playbooks, analysts can not only create their own playbooks, but modify existing playbooks, as well as attach company policies and procedures to the playbook. All these features let the analyst react faster and more efficiently.  

Because Playbooks are a documented set of procedures, they also lend themselves well to performing activities related to compliance requirements. Maintaining proper compliance can be challenging, but implementing more compliance-focused playbooks or taking your existing security-focused use case playbooks and reviewing them for compliance considerations will benefit your company. This will be beneficial for new and inexperienced analysts with less experience with compliance. 

Cybersecurity Playbook Examples 

In this blog, the LogRhythm Labs compliance research team highlights two examples where Playbooks can help strengthen your compliance maturity and reduce gaps in your compliance. For LogRhythm customers, both Playbooks can be found in LogRhythm Community.

Playbook #1: Data Exfiltration 

In this first example, you can observe appropriate actions to consider when tackling potential data exfiltration from a compliance perspective. The goal of this playbook is to provide a template guidance for new analysts when mitigating a detected data breach while remaining compliant with relevant data protection laws/frameworks (in this case GDPR). Data breaches are one of the most common security risks an organization can experience, so it’s pivotal to establish an action plan that alleviates the pressure of singularity and encourages efficient collaboration.  

SIEM data exfiltration playbook for compliance

In the context of compliance and technical analysis, here are preliminary and closing steps to consider when building out a reaction plan towards data breaches: 

  1. Stay Calm and Composed 
    • Maintain a calm demeanor and focus on the task at hand. Security breaches can be stressful, but a composed response is crucial. 
  2. Incident or Event 
    • Determine if you are investigating an incident or event. 
  3. Identify the Affected Host 
    • Identify the host involved with the signaled data exfiltration alarm. 
  4. Validate Detection 
    • Collaborate with the security operations center (SOC) department to confirm the validity of the alert, whether it is a False Positive or a True Positive. Then check if the alarm context aligns with your organization’s policies and expectations regarding data exfiltration. 
  5. Notify Management and Incident Response Team 
    • If the detection is confirmed to be a True Positive immediately inform your supervisor or the designated incident response team within your organization about the breach. Time is critical so swift action is required. 
  6. Document the Incident 
    • Start documenting all relevant details about the breach within a breach report, including the date and time it was discovered, the method of breach, affected systems, and the potential impact. Ensure all information is accurate and well organized. 
  7. Activate the Incident Response Plan 
    • Refer to your organization’s incident response plan and follow the predefined steps for addressing security incidents. This plan should outline roles and responsibilities during a breach. 
  8.  Data Breach Notification 
    • Determine whether it constitutes a data breach as defined by GDPR and other relevant laws. If the breach meets the criteria for notification, work with legal and compliance teams to prepare and submit notifications to regulatory authorities and affected individuals within the required timeframe. 
  9. Communication 
    • Coordinate internal and external communication efforts. Ensure that affected parties are informed in a timely and appropriate manner. Maintain a record (for example emails, instant messages, etc.) of all communication related to the breach. 
  10. Legal and Regulatory Compliance 
    • Assist in the collaboration with legal counsel to ensure that all actions taken are compliant with GDPR and other data protection laws/regulations. 
  11. Continuous Improvement 
    • Continuously assess and improve data security and compliance measures based on lessons learned from the breach and changes in regulations. 

Handling data breaches from a compliance perspective requires a combination of technical knowledge, legal understanding, and strong communication skills. Collaboration with IT, legal, and internal security teams is crucial to effectively manage and mitigate the impact of a breach. Please ensure to employ this document as a template and refer to your organization’s policies/best practices when creating your playbook. 

Playbook #2: “Material” Incidents 

This second example is related to the SEC’s new rules for public companies around the disclosure of “material” cyber incidents. This has been a popular and often contentious topic amongst industry professionals as the requirements of the new rule have parameters that are open to interpretation, specifically around what constitutes a material incident and the timeline for disclosure.  

Materiality, as described by the SEC in the final rule, means that “there is a substantial likelihood that a reasonable shareholder would consider it important.” Once an incident is determined to be material, an organization has four days to disclose. By this definition, materiality is not necessarily a simple financial threshold or set of events that must occur, but a culmination of data points that in aggregate could be material to a reasonable investor. Given the nature of this independent evaluation of materiality, each organization is likely to have a decision maker, or set of decision makers, that evaluates the circumstances to determine the materiality.  

Because of this, our playbook template does not provide a step-by-step guide to determining if a given incident is material. This playbook is intended to help security analysts track the path of an incident across internal and external systems, capture related evidence, and communicate that information to appropriate stakeholders for ultimate evaluation of materiality. By following the guidance in this template, the appropriate stakeholders should be informed in a timely manner and have enough evidence to assess materiality and the need to make public disclosures. 

LogRhythm SIEM playbook for compliance

Figure 1: Material Incident Playbook Procedures in LogRhythm SIEM

  1. Determine if you are investigating an incident or event 
    • This step could take further investigation to complete depending on your organization’s definition of event and incident, but is a key delineation that should be made 
  2. Acquire, preserve, secure, and document evidence
    • By creating the case it is likely that some evidence related to the incident is already attached, but it is critical to gather all related data into the case for evaluation by yourself and others reviewing.  
  3. Identify the hosts impacted involved with the incident and obtain a forensic image of the system 
    • The importance of preserving evidence is critical. Additionally, this is a perfect opportunity to make note of whether the systems are in scope for any compliance efforts and make initial contact with those system owners.  
  4. Identify the services impacted 
    • Document the nature of the impact on any impacted services and the timing related to that impact. This information will help decision-makers evaluate the scope and size of the incident.  
  5. Identify the method of attack
    • This step may not be readily apparent but will be important for decision makers to understand and potentially include when disclosing. 
  6. Identify the source of attack 
    • If possible, identify and document the source of the attack. Use OSINT resources as necessary to source the activity and link it to potential threat actors. Like the step prior, this step may take time, but will also be important for decision makers in evaluating the overall significance of the incident and ongoing risk. 
  7. Disable any affected user accounts 
    • Any user accounts utilized during the attack, either compromised or newly created, should be tracked and recorded within case. This will be important in both stopping the potential spread of the incident and again tracking the overall scope of what occurred.  
  8. Identify data classification of the data involved 
    • Identify what data may have been impacted in relation to this incident. Once identified, determine the impact to affected data (read, exfiltrated, modified, deleted) and determine ownership of data. This step can be difficult without a robust and mature data governance program in place, but narrowing down the impacted data involved will again help define the overall scope and potential impact of the incident.  
  9. Notify Security Leader 
    • By this point it is important to engage the appropriate security and/or compliance leader of the active incident investigation and interim status along with an expected evaluation completion. This will allow decision makers to begin evaluating initial facts and considerations for disclosure. 
  10. Identify how the incident occurred 
    • Identify how the incident occurred by evaluating method of attack, impacted resources, and evaluation of host logs. 
  11. Provide feedback and lessons learned to reduce chances of a reoccurring incident 
    • Identify all vulnerabilities that were a part of this incident and provide steps to harden against such vulnerabilities in the future to the system, application, and data owners. Furthermore, provide a summary to the directors of those owners. Depending on the severity of the incident, you may also want to hold a lessons learned meeting with the directors and their managers. 

Both examples are prime compliance use-case templates. When considering these playbooks or any other template for compliance purposes, it is important to continuously reevaluate their value, alignment with internal procedures, and reducing duplication.  

To learn more about how LogRhythm can help you achieve compliance, visit our webpage overview.

The post How to Leverage Case Playbooks for Compliance  appeared first on LogRhythm.



from LogRhythm https://ift.tt/jeMgULS
via IFTTT

Setting up StarWind Virtual SAN (VSAN) as Hardened Repository for Veeam B&R

Introduction

Hi, fellas. If you’re in charge of keeping your business data safe, you know that protecting it from ransomware and other threats is a top priority. One of the best ways to do this is using a 3-2-1-1 backup practice. That extra “1” in 3-2-1-1 stands for using immutable storage, such as on a Linux-based hardened repository.

In this article, I will show you how to turn your existing or aging server hardware into immutable backup storage to keep your data safe against ransomware. All you need is Veeam Backup & Replication and StarWind VSAN that will perform a role of Hardened Repository. This is a super easy and efficient way to keep your data safe, and I’ll walk you through the entire process step by step.

Why immutability?

Firstly, let’s talk about why backups have to be immutable. Essentially, it means that once a backup is made, it can’t be deleted or changed. This is important for crucial data because it means that even if a hacker gets into your system and tries to remove your backups, they won’t be able to. This is why write-once-read-many (WORM) storage media like tape libraries and optical media are commonly used for backups. However, these types of media can be a bit of a pain to manage because they need to be rotated and replaced regularly to align with retention policies.

What’s the easiest way to enable immutability? 

This is where Veeam B&R comes in – it’s an industry-standard backup solution that works with most storage media types, including physical and virtual tapes. What’s more, in Veeam B&R v11, they added support for  Hardened Backup Repositories, allowing to enable immutability for backups without using object storage or any specialized third-party solutions. You can find the deployment documentation for this on Veeam B&R KB.

Now, you could go through the process of setting up a Linux server and configuring it to work with Veeam B&R, but it’s not exactly a walk in the park. However, there’s good news – we’ve made it super simple with StarWind VSAN. Diagram: StarWind VSAN as Hardened Repository for Veeam B&R

We’ve developed a set of management tools that are pre-configured in the web console, so you can easily set up a storage server using commodity hardware. All you need to do is use a few wizards in StarWind Virtual SAN  web-console, and you’ll have a hardened repository for Veeam B&R in no time.

How to set up?

In today’s article, I won’t be covering the initial setup process. For details on configuring the StarWind Controller Virtual Machine (CVM), its networking, and storage, refer to our previous article: ‘How to Create a File Share with StarWind VSAN‘. Make sure to review it before proceeding. We’ll also post instructions for the bare-metal StarWind VSAN deployment in a separate blog post later on, so stay tuned.

Assuming you’ve completed the preliminary steps and created a storage pool, we can now move on to creating a new volume in the Virtual SAN Web UI.

Once the storage pool is created, navigate to the “Volumes” tab and click the “+” button to open the “Create volume” wizard:

“Volumes” tab | “Create volume”

Now select the storage pool that you are going to use for the new volume and click “Next”:

Create volume wizard | Select storege pool

Specify the name of the new volume and select the required size:

Create volume | Specify settings

Now select the filesystem for your volume. Select the “Backup repository” option, because it is already configured according to Veaam best practices and recommendations.

Create volume | Choose Filesystem settings

Review your settings and click “Create” to create the new volume:

Create volume | Review summary

After this, you need to add a Veeam user to the CVM to provide Veeam access to the storage. For this, in the “Volumes” tab, select your newly created backup volume and click “Manage VHR (Veeam Hardened Repository) user.

In the “Manage VHR user” pop-up window, click the “+” button:

Manage VHR user | Create Veeam user

Specify the credentials for the new user:

Create Veeam user | Specify the credentials for the new user

Select the newly created user and enable SSH access for it, and click “Save”:

Manage VHR user | Select the newly created user and enable SSH

Congrats! You have completed the StarWind VSAN configuration. You’ll need to connect the created volume to Veeam B&R as the new backup repository. To do that, open the Veeam Backup and Replication console, navigate to “Backup Infrastructure”, and select “Backup Repositories”:

Veeam Backup and Replication console | Navigate to “Backup Infrastructure”, and select “Backup Repositories”

Click “Add Repository”  and select “Direct attached storage:

Add Backup Repository | Select “Direct attached storage

Next, select “Linux (Hardened Repository)”:

Direct Attached Storage | Select “Linux (Hardened Repository)”

In the “New Backup Repository” wizard, specify the name and description for the new repository and click “Next”:

“New Backup Repository” wizard | Specify the name and description for the new repository

In the next step, click “Add New”:

New Backup Repository wizard | Add New

In the “New Linux Server” wizard, specify the IP address or the DNS name of your StarWind CVM and click “Next”:

New Linux Server wizard | Specify the IP address or the DNS

Click “Add” and specify the credentials of the VHR user account that you created in StarWind VSAN Web UI:

New Linux Server | Specify the credentials of the VHR user account

SSH Connection | Provide Credentials

Review the installed components and click “Apply”:

New Linux Server | Review the installed components

Wait until the installation is completed and then click the “Next” button:

New Linux Server | Wait until the installation is completed

Review the summary and click “Finish”:

New Linux Server | Review the summary

In the “New Backup Repository” wizard, select the newly added Repository server from the drop-down menu:

New Backup Repository wizard | Select the newly added Repository server

Now, select the path to the volume on the StarWind VSAN appliance. Also, check that the “Use fast cloning on XFS volumes” setting is enabled and specify the required retention period for immutability:

New Backup Repository | Select the path to the volume on the StarWind VSAN appliance

After that, check the Mount Server settings, where you will be doing fast restores of your backups:

New Backup Repository | Check the Mount Server settings

Check the components that will be installed and click “Apply”:

New Backup Repository | Check the components that will be installed

Wait until the process is completed and click “Next”:

New Backup Repository | Wait until the process is completed

Review the summary and click “Finish”:

New Backup Repository | Review the summary

To secure the server from potential local threats such as credentials theft, in the StarWind VSAN Web UI, navigate to the “Volumes” page, launch the “Manage VHR user” wizard, and disable SSH for the VHR user account:

Manage VHR user wizard | Disable SSH for the VHR user account

 

Additionally, keep in mind that this method makes it easy to set up a hardened repository and secure your backups. It’s still important to regularly test and verify your backups to ensure they are working properly and can be restored in case of an emergency. Ensure your software and hardware up to date, and to have a disaster recovery plan in place in case the worst happens.

Conclusion

To sum up, using StarWind VSAN as hardened repository for Veeam B&R is a great way to protect your business data from any threats, while also making the process of setting it up easy and straightforward. With the help of our management tools, you can have a secure and reliable backup solution up and running in no time. So, if you’re looking to upgrade your backup strategy and keep your data safe, give StarWind VSAN a try.

This material has been prepared in collaboration with Asah Syxtus Mbuo, Technical Writer at StarWind.



from StarWind Blog https://ift.tt/0zrfQO6
via IFTTT