Monday, September 18, 2017

Backing up configs with the Ansible NCLU module



----
Backing up configs with the Ansible NCLU module
// Cumulus Networks Blog

With the release of Ansible 2.3, the Cumulus Linux NCLU module is now part of Ansible core. This means when you `apt-get install ansible`, you get the NCLU module pre-installed! This blog post will focus on using the NCLU module to back up and restore configs on Cumulus Linux. To read more about the NCLU module from its creator, Barry Peddycord, click here.

The consulting team uses Ansible very frequently when helping customers fully automate their data centers. A lot of our playbooks use the Ansible template module because it is very efficient and idempotent, and Cumulus Linux has been built with very robust reload capabilities for both networking and Quagga/FRR. This reload capability allows the box to perform a diff on either `/etc/network/interfaces` or `/etc/quagga/Quagga.conf`, so when a flat-file is overwritten with the template module, only the "diff" (or difference) is applied. This means if swp1-10 were already working and we added configuration for swp11-20, an ifreload will only add the additional config and be non-disruptive for swp1-10. This reload capability is essential to data centers, and our customers couldn't live without it.
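For reference, here is a minimal sketch of that flat-file-plus-reload pattern. The inventory group, template file names and handler names are illustrative, not taken from an actual customer playbook:

- hosts: cumulus_switches          # illustrative inventory group
  become: true
  tasks:
    - name: Render /etc/network/interfaces from a Jinja2 template
      template:
        src: interfaces.j2         # illustrative template name
        dest: /etc/network/interfaces
      notify: reload networking

    - name: Render /etc/quagga/Quagga.conf from a Jinja2 template
      template:
        src: Quagga.conf.j2        # illustrative template name
        dest: /etc/quagga/Quagga.conf
      notify: reload quagga

  handlers:
    - name: reload networking
      command: ifreload -a                         # applies only the diff of the interfaces file
    - name: reload quagga
      command: systemctl reload quagga.service     # picks up only the routing config changes

Because the handlers only fire when a rendered file actually changes, a run that changes nothing touches nothing on the switch.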

However, many customers also want to build configs with NCLU (or the net commands) when they are first introduced to Cumulus Linux. Instead of starting from a flat-file and templating it out, they are going command by command as they learn Cumulus Linux. It is still fairly easy to build a configuration with NCLU, then pull the rendered configuration with the `net show config files` command and build a template out of it.

That being said, I wanted to provide an alternate method of backing up and restoring configs in a very simple playbook that does not require templating or configuration of flat-files. Check out the following GitHub repo: https://github.com/seanx820/ansible_nclu

NCLU module

There is a README.md on the GitHub page, but I will briefly explain here what I am trying to accomplish. The pull_nclu.yml playbook will grab all the net commands from the Cumulus Linux switch(es) and store them locally on the server that ran the playbook. It literally just connects to the Cumulus Linux switch(es), runs `net show config commands` and stores the output locally. The push_nclu.yml playbook will then re-push these net commands to the Cumulus Linux switch(es) line by line, but in an idempotent way. This means if the configuration being applied is already present, it will skip it and let you know it is already configured.
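The repo's README is the authoritative reference, but to give a feel for the approach, here is a rough sketch of what a pull play and a push play can look like with the NCLU module. The file paths, play layout and commit description are mine, not necessarily what the repo uses:

- hosts: cumulus_switches
  tasks:
    - name: Gather the running configuration as net commands
      command: net show config commands
      register: nclu_backup
      changed_when: false          # a read-only show command never changes anything

    - name: Save the commands to a per-switch file on the Ansible control machine
      copy:
        content: "{{ nclu_backup.stdout }}"
        dest: "./configs/{{ inventory_hostname }}.txt"
      delegate_to: localhost

- hosts: cumulus_switches
  tasks:
    - name: Replay the saved commands idempotently with the NCLU module
      nclu:
        # strip the leading "net " from each saved line, since the module prepends it
        commands: "{{ lookup('file', './configs/' + inventory_hostname + '.txt').splitlines()
                      | map('regex_replace', '^net ', '') | list }}"
        commit: true
        description: "Restore configuration from backup"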

There are advantages and trade-offs versus templating, in my opinion, and I will quickly go over some:

Advantages of the NCLU module backup method:

  • The NCLU module is literally typing what a user would type to configure the box. This means it is very easy for someone to figure out what the playbook is doing, and troubleshoot any issues.
  • The playbooks provided are so simple that only a rudimentary understanding of how Ansible works is required.
  • The NCLU module is still idempotent, so we are not just firing commands off; it knows if something is already configured.
  • No knowledge of Jinja2 or other templating mechanisms is needed. These playbooks are simply replaying commands in a smart and logical manner.

Things to consider with the NCLU module backup method:

  • Speed! This method literally replays each net command line by line back to the box. This will always be inherently slower than just overwriting a couple of flat-files and performing an ifreload and a systemctl reload quagga.service. Configuring Cumulus Linux with templates can require as few as four Ansible tasks, whereas NCLU can literally involve hundreds of lines of net commands. Does it matter? Maybe? It depends on the situation and how you are using Ansible.
  • Config management: While the NCLU module is idempotent, unless you perform a `net del all` and reset the config, you don't know if another user or program has configured the box in addition to you. Meaning, if your config never configured swp10 but someone else did, this method would have no concept of swp10 being configured. All the commands you configured will be restored correctly, but there is no concept of an end-state for the box holistically. With templates we know the entire config, not just the part we configured (because we are literally overwriting the entire configuration every time and doing a diff).

There you have it! I always like to think of the adoption of network automation by network engineers in three stages: crawl, walk and run. I see this method somewhere under the crawl and walk stages. I find myself using it during network POC (proof of concept) labs and when teaching Ansible to first-timers. Let me know what you think in the comments below.

The post Backing up configs with the Ansible NCLU module appeared first on Cumulus Networks Blog.


----


Automating network troubleshooting with NetQ + Ansible



----
Automating network troubleshooting with NetQ + Ansible
// Cumulus Networks Blog

Network Automation is so hot right now! Joking aside, DevOps tools like Ansible, Puppet, Chef and Salt, as well as proprietary tools like Apstra, are becoming all the rage in computer networks everywhere. There are Python courses, network automation classes and even automation-focused events for the first time in the history of computer networks (or at least it feels like it).

For this blog post I want to focus on automating network troubleshooting, the forgotten stepchild of network automation tasks. I think most automation tools focus on provisioning (or first-time configuration) because so many network engineers are new to network automation in general. While I think that is great (and I want to encourage everyone to automate!), I think there is so much more potential for network automation. I am introducing Sean's third category of automation use-cases — OPS!


I want to combine Cumulus NetQ, a fabric validation system, with Ansible to:

  • Figure out IF there is a problem (solved by NetQ)
  • Figure out WHAT the problem is (solved by NetQ)
  • FIX the problem (solved by Ansible)
  • AUTOMATE the above 3 tasks (solved by Ansible)

Because I think looking at terminal windows is super boring (no matter how cool I think it is), I am going to combine this with my favorite chat app Slack. This category of automation is conveniently called ChatOps.

NetQ has an ability called check where it can give us a list of Python dictionaries of every broken node in the network. Specifically I am going to use `netq check bgp`, since BGP is the most common IP fabric used by Cumulus Networks customers. NetQ returns JSON and Ansible can easily parse through JSON, so this is ridiculously simple.
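To make that concrete, here is a minimal sketch (not the actual repo playbook) of capturing and parsing that output. The play targets the oob-mgmt-server where the netq CLI lives, and the failed_nodes variable name is mine:

- hosts: oob-mgmt-server
  tasks:
    - name: Run the NetQ BGP check and capture the JSON output
      command: netq check bgp json
      register: netq_bgp
      changed_when: false          # a read-only check never changes anything

    - name: Parse the JSON and keep only the failed sessions
      set_fact:
        failed_nodes: "{{ (netq_bgp.stdout | from_json).failedNodes | default([]) }}"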

For this scenario, to "break" the network fabric, I manually logged into leaf01 and performed an ifdown swp51. Since network engineers are visual people, the diagram I am using for this scenario is the Cumulus Networks reference topology:

[Diagram: Cumulus Networks reference topology]

Here is the output of `netq check bgp json`:

cumulus@oob-mgmt-server:~/cldemo-bgp-tshoot$ netq check bgp json
{
        "failedNodes": [
                {
                        "node": "leaf01",
                        "reason": "Interface down",
                        "peerId": "-",
                        "neighbor": "swp51",
                        "time": "5m ago"
                },
                {
                        "node": "spine01",
                        "reason": "Hold Timer Expired",
                        "peerId": "-",
                        "neighbor": "swp1",
                        "time": "5m ago"
                }
        ],
        "summary": {
                "checkedNodeCount": 12,
                "failedSessionCount": 2,
                "failedNodeCount": 2,
                "totalSessionCount": 24
        }
}

We can see from the JSON output above that we have two failed nodes (leaf01 and spine01).
The playbook I wrote will:

  • Grab the output provided above
  • Report the broken nodes into Slack
  • Fix the broken node

For this simple example, the ONLY use case I can fix is an interface being down. The point of this is to showcase the approach, not to build an all-inclusive "troubleshoot anything" playbook. Check out the GitHub project here: https://github.com/seanx820/autonetq/
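The repo holds the real playbook; purely to show the shape of the report-and-fix logic, the tasks below continue the parsing sketch from earlier. The Slack token variable and the ifup-based fix are illustrative assumptions, not the repo's exact contents:

    - name: Report each failed BGP session to Slack
      slack:
        token: "{{ slack_token }}"      # assumed to be defined elsewhere, e.g. in a vaulted vars file
        msg: "NetQ: {{ item.node }} {{ item.neighbor }} - {{ item.reason }}"
      with_items: "{{ failed_nodes }}"
      delegate_to: localhost

    - name: Bring a downed interface back up (the only failure this sketch knows how to fix)
      command: "ifup {{ item.neighbor }}"
      become: true
      delegate_to: "{{ item.node }}"
      with_items: "{{ failed_nodes }}"
      when: item.reason == "Interface down"

In the NetQ output above, the neighbor field for the "Interface down" failure is the local interface name (swp51 on leaf01), so delegating an ifup of that interface to the reporting node is enough to clear this particular failure.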

Let's go ahead and run this playbook:

ansible-playbook autonetq.yml

And the Slack Ansible module lets us know what is happening:

[Screenshot: Slack notifications from the playbook run]

I don't know about you…but I think this is really, really cool. I hope you can take this playbook and expand on it, or at least expand on the idea of Network Automation being used for network troubleshooting.

The post Automating network troubleshooting with NetQ + Ansible appeared first on Cumulus Networks Blog.


----


Fly be free: introducing Cumulus in the Cloud



----
Fly be free: introducing Cumulus in the Cloud
// Cumulus Networks Blog

I get really excited watching people use the technology that we develop at Cumulus Networks, and we're always looking to make it easier for people to get their heads and hands wrapped around our products and tools. Our first product, Cumulus Linux, is pretty easy; a curious techie can download our free Cumulus VX virtual machine and use it standalone or in concert with other virtual machines. If they want to see the rubber meet the road with a physical experience, they can buy a switch/license and experiment in a live network.

Cumulus VX

The introduction of Cumulus NetQ and Cumulus Host Pack upped the ante in demonstrability. These products work together to allow for high scale, operationally sane infrastructure. We wanted the curious to be able to explore all of our products in a comfortable setting. Thus was born a project we call Cumulus in the Cloud.

Cumulus in the Cloud

The awesome team here at Cumulus leveraged modern technology to set up a personal mini data center infrastructure complete with four servers and a multi-rack leaf/spine network. Then we put that technology to work in infrastructure related architectures that are meaningful to customers.

Leaf/spine

Our first personalization is a container deployment leveraging Mesos and Docker. An explorer sees a web page with a console into their own data center along with links to relevant user interfaces and a set of "Getting Started" steps. These steps let them learn about the environment itself as well as learn about the Cumulus technology used to make it operational.

Cumulus in the Cloud console

The feedback on Cumulus in the Cloud has been really rewarding — the curious have been able to explore the Cumulus technology, gaining comfort with what is in front of them, and they are able to start out on their own adventures, taking deeper looks into the infrastructure and making changes to see the effect.

Stay tuned to Cumulus in the Cloud. Over the next few weeks, we'll be rolling out an OpenStack environment and a bare metal environment based on BGP-EVPN.

Fly be free, JR

The post Fly be free: introducing Cumulus in the Cloud appeared first on Cumulus Networks Blog.


----


Network automation best practices for DevOps



----
Network automation best practices for DevOps
// Cumulus Networks Blog

Optimizing a network for maximum efficiency almost always requires some level of automation. From provisioning resources to configuring processes and applications, network automation can improve upon the consistency of network operations while also reducing the resources needed to maintain the network. That being said, network automation can be exceedingly complex as well. Following network automation best practices is necessary to ensure that automation doesn't interfere with or compromise the network.

Create a centralized hub for automated services

As networks grow, it can be tempting to add new services and tools one by one. Unfortunately, piecemeal additions can quickly become haphazard and difficult to maintain. Automated services should always be controlled through a single API or centralized hub, to improve upon reporting, maintenance, consistency and optimization.

Network automation suites have been developed to be robust enough that they can use the same code base for computing, networking, and storage, thereby significantly simplifying network optimization and other related processes. Ansible is one example of a network automation tool that can help you embrace DevOps as a network automation best practice, though there are many others. IT departments will find the process of automation easier to manage and maintain when filtered through a centralized monitoring and reporting system.

Use DevOps principles as network automation best practices

For a better and more stable environment, it's a good idea to apply standard DevOps best practices to the specific processes of automation. DevOps best practices are specifically designed to reduce risk and improve upon efficiency, which works extremely well for network automation overall.

  • Propose changes regarding automation. Before any work is done, a comprehensive proposal should be completed. This eliminates any network changes that may be unnecessary and reveals any potential concerns early on.
  • Add network automation to version control. Rather than adding network automation to a system separately, it should be included within the version control for the system. Network automation impacts the entirety of the infrastructure and should be treated as such.
  • Test and approve network management optimization. Before network automation is completed and approved, it has to be tested in an environment that is as similar as possible to the live environment. This will reveal issues before they become a problem.
  • Promote changes into production after testing and approval. Changes should only be promoted to production once carefully vetted.

Using these best practices will avoid two major issues: resources being consumed at greater than predicted rates and potential issues with security or functionality.
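As one concrete way to apply the "test and approve" practice above with a tool like Ansible, a change can first be exercised in check mode against a staging inventory before it is promoted. The group and role names here are illustrative:

- hosts: staging_switches          # illustrative staging inventory group
  become: true
  check_mode: true                 # report what would change without changing anything
  diff: true                       # show the differences for templated files
  roles:
    - network_config               # illustrative role kept under version control

Once the check-mode run comes back clean and has been reviewed, the same role is promoted by running it against the production inventory without check mode.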

Resist the temptation of custom coding

There was a time when custom coding was considered to be a best practice in and of itself. Developing solutions from scratch allowed for optimal resource usage, paring down to strictly what was needed. But as technology has changed and advanced, the benefits of custom coding have been outweighed by the negatives. Custom coding now takes more time to maintain and can be easily broken by upgrades and updates.

It can also be prohibitively difficult for custom programs to maintain the level of security necessary for a network's services. Though a custom coded automated service may be well-optimized, it may not be able to work with a load balancing system or other network technology. Overall, a single code base or API will make the task ahead far easier, both in terms of implementation and maintenance.

Begin automating from the ground up

Automation is most frequently applied to configuration and provisioning first. From there, the most basic and simple services — such as simple error recovery — should be automated next. By beginning from the ground up, an organization can achieve the most significant efficiency gains without a substantial investment of resources. As systems become higher level and more complex, the network will experience diminishing returns from automation.

Remember the "black list"

There are certain items that should usually not be automated. While there may be some exceptions, automating these items could cause other issues.

  • Sensitive workloads. Automation is designed to improve upon business operations. When it comes to sensitive workloads, it is more likely to interfere and cause problems rather than resolve them.
  • New and advanced applications. Newer applications often manage their own automation and load balancing; adding on another layer may actually cause further problems.

In general, if a service or process is complex — and if the idea of automating it appears overwhelming — it's a bad idea to automate it. Automation is intended for simple tasks that do not usually require human intervention.

Always keep complete control over reporting

Reporting is by far the most important component of network automation. All automated processes must consistently report both their state and their results — and a failure of automated processes must be immediately prioritized. Without a comprehensive reporting process, it becomes difficult for administrators to track and identify problems — and small issues can eventually snowball into much larger and more significant ones.

By following the above network automation best practices, DevOps can integrate their system with automated processes without unnecessarily exposing their organization to potential risk factors. These best practices will create stable changes in a system targeted towards future-proofing and resource management. When properly managed, automation will reduce administrative overhead and make it far easier to scale a system moving forward.

Are you interested in automating your network? Check out NetQ, a telemetry-based fabric validation system that ensures your network is behaving as it should. Or, head over to our solutions page and read more about NetDevOps and network automation best practices.

The post Network automation best practices for DevOps appeared first on Cumulus Networks Blog.


----


BlueBorne: Critical Bluetooth Attack Puts Billions of Devices at Risk of Hacking



----
BlueBorne: Critical Bluetooth Attack Puts Billions of Devices at Risk of Hacking
// The Hacker News

If you are using a Bluetooth enabled device, be it a smartphone, laptop, smart TV or any other IoT device, you are at risk of malware attacks that can be carried out remotely to take over your device, even without requiring any interaction from your side. Security researchers have just discovered a total of 8 zero-day vulnerabilities in the Bluetooth protocol that impact more than 5.3 billion devices—from

----


Adobe Patches Two Critical RCE Vulnerabilities in Flash Player



----
Adobe Patches Two Critical RCE Vulnerabilities in Flash Player
// The Hacker News

Adobe may kill Flash Player by the end of 2020, but until then, the company will not stop providing security updates to the buggy software. As part of its monthly security updates, Adobe has released patches for eight security vulnerabilities across three of its products, including two vulnerabilities in Flash Player, four in ColdFusion, and two in RoboHelp—five of these are rated as critical.

----


Unpatched Windows Kernel Bug Could Help Malware Hinder Detection



----
Unpatched Windows Kernel Bug Could Help Malware Hinder Detection
// The Hacker News

A 17-year-old programming error has been discovered in Microsoft's Windows kernel that could prevent some security software from detecting malware at runtime when loaded into system memory. The security issue, described by enSilo security researcher Omri Misgav, resides in the kernel routine "PsSetLoadImageNotifyRoutine," which apparently impacts all versions of Windows operating systems

----


The Cloudcast #311 - Google Cloud & Kubernetes



----
The Cloudcast #311 - Google Cloud & Kubernetes
// The Cloudcast (.NET)

Brian talks with Chen Goldberg (@GoldbergChen, Director of Engineering, Container Engine & Kubernetes) and Mark Mandel (@Neurotic, Developer Advocate, Co-Host of the Google Cloud Platform podcast) at Google about the state of Kubernetes maturity, how Google manages cloud services and open-source projects, developers' perspectives on Kubernetes, "boring" applications, Kubernetes Federation, and trends around Kubernetes community participation.

Show Links:

Show Notes
  • Topic 1 - Where do you see Kubernetes today in terms of maturity, breadth of capabilities, ease of use, etc.?
  • Topic 2 - How is Google's Engineering structured to manage GKE/GCP services, open source contributions and partner engagements?
  • Topic 3 - What are some of the common use-cases with GKE usage? Do you also track Kubernetes usage on GCP services to see where you need to expand/improve GKE?
  • Topic 4 - Is Kubernetes easy enough, or robust enough for developers today? If not, where does it need to expand? Should this be within Kubernetes, or in other areas like Istio (or other)?
  • Topic 5 - How do you see Kubernetes Federation evolving? What might be possible for end-customers over time?
Feedback?

----


CCleanup: A Vast Number of Machines at Risk



----
CCleanup: A Vast Number of Machines at Risk
// Talos Blog

This post was authored by: Edmund Brumaghin, Ross Gibb, Warren Mercer, Matthew Molyett, and Craig Williams

Introduction

 

Supply chain attacks are a very effective way to distribute malicious software into target organizations. This is because with supply chain attacks, the attackers are relying on the trust relationship between a manufacturer or supplier and a customer. This trust relationship is then abused to attack organizations and individuals and may be performed for a number of different reasons. The Nyetya worm that was released into the wild earlier in 2017 showed just how potent these types of attacks can be. Frequently, as with Nyetya, the initial infection vector can remain elusive for quite some time. Luckily with tools like AMP the additional visibility can usually help direct attention to the initial vector.

Talos recently observed a case where the download servers used by a software vendor to distribute a legitimate software package were leveraged to deliver malware to unsuspecting victims. For a period of time, the legitimate signed version of CCleaner 5.33 being distributed by Avast also contained a multi-stage malware payload that rode on top of the installation of CCleaner. CCleaner boasted over 2 billion total downloads by November of 2016 with a growth rate of 5 million additional users per week. Given the potential damage that could be caused by a network of infected computers even a tiny fraction of this size, we decided to move quickly. On September 13, 2017 Cisco Talos immediately notified Avast of our findings so that they could initiate appropriate response activities. The following sections will discuss the specific details regarding this attack.

Technical Details

 

CCleaner is an application that allows users to perform routine maintenance on their systems. It includes functionality such as cleaning of temporary files, analyzing the system to determine ways in which performance can be optimized and provides a more streamlined way to manage installed applications.
Figure 1: Screenshot of CCleaner 5.33

On September 13, 2017 while conducting customer beta testing of our new exploit detection technology, Cisco Talos identified a specific executable which was triggering our advanced malware protection systems. Upon closer inspection, the executable in question was the installer for CCleaner v5.33, which was being delivered to endpoints by the legitimate CCleaner download servers. Talos began initial analysis to determine what was causing this technology to flag CCleaner. We identified that even though the downloaded installation executable was signed using a valid digital signature issued to Piriform, CCleaner was not the only application that came with the download. During the installation of CCleaner 5.33, the 32-bit CCleaner binary that was included also contained a malicious payload that featured a Domain Generation Algorithm (DGA) as well as hardcoded Command and Control (C2) functionality. We confirmed that this malicious version of CCleaner was being hosted directly on CCleaner's download server as recently as September 11, 2017.

In reviewing the Version History page on the CCleaner download site, it appears that the affected version (5.33) was released on August 15, 2017. On September 12, 2017 version 5.34 was released. The version containing the malicious payload (5.33) was being distributed between these dates. This version was signed using a valid certificate that was issued to Piriform Ltd by Symantec and is valid through 10/10/2018. Piriform was the company that Avast recently acquired and was the original company who developed the CCleaner software application.
Figure 2: Digital Signature of CCleaner 5.33

A second sample associated with this threat was discovered. This second sample was also signed using a valid digital certificate, however the signing timestamp was approximately 15 minutes after the initial sample was signed.

The presence of a valid digital signature on the malicious CCleaner binary may be indicative of a larger issue that resulted in portions of the development or signing process being compromised. Ideally this certificate should be revoked and untrusted moving forward. When generating a new certificate, care must be taken to ensure attackers have no foothold within the environment with which to compromise the new certificate. Only the incident response process can provide details regarding the scope of this issue and how to best address it.

Interestingly the following compilation artifact was found within the CCleaner binary that Talos analyzed:

        S:\workspace\ccleaner\branches\v5.33\bin\CCleaner\Release\CCleaner.pdb

Given the presence of this compilation artifact as well as the fact that the binary was digitally signed using a valid certificate issued to the software developer, it is likely that an external attacker compromised a portion of their development or build environment and leveraged that access to insert malware into the CCleaner build that was released and hosted by the organization. It is also possible that an insider with access to either the development or build environments within the organization intentionally included the malicious code or could have had an account (or similar) compromised which allowed an attacker to include the code.

It is also important to note that while previous versions of the CCleaner installer are currently still available on the download server, the version containing the malicious payloads has been removed and is no longer available.

Malware Installation and Operation


Within the 32-bit CCleaner v5.33 binary included with the legitimate CCleaner v5.33 installer, '__scrt_get_dyn_tls_init_callback' was modified to call the code at CC_InfectionBase(0x0040102C). This was done to redirect code execution flow within the CCleaner installer to the malicious code prior to continuing with the normal CCleaner operations. The code that is called is responsible for decrypting data which contains the two stages of the malicious payload, a PIC (Position Independent Code) PE loader as well as a DLL file that effectively functions as the malware payload. The malware author had tried to reduce detection of the malicious DLL by ensuring the IMAGE_DOS_HEADER was zeroed out, suggesting this attacker was trying to stay under the radar of normal detection techniques.

The installer then creates an executable heap using HeapCreate(HEAP_CREATE_ENABLE_EXECUTE,0,0). Space is then allocated to this new heap which is where the contents of the decrypted data containing the malware is copied. As the data is copied to the heap, the source data is erased. The PE loader is then called and begins its operation. Once the infection process has been initiated, the installer erases the memory regions that previously contained the PE loader and the DLL file, frees the previously allocated memory, destroys the heap and continues on with normal CCleaner operations.

The PE loader utilizes position independent coding practices in order to locate the DLL file within memory. It then maps the DLL into executable memory, calls the DLLEntryPoint to begin execution of the DLL being loaded and the CCleaner binary continues as normal. Once this occurs the malware begins its full execution, following the process outlined in the following sections.

CBkdr.dll


The DLL file (CBkdr.dll) was modified in an attempt to evade detection and had the IMAGE_DOS_HEADER zeroed out. The DLLEntryPoint creates an execution thread so that control can be returned to the loader. This thread is responsible for calling CCBkdr_GetShellcodeFromC2AndCall. It also sets up a Return Oriented Programming (ROP) chain that is used to deallocate the memory associated with the DLL and exit the thread.

CCBkdr_GetShellcodeFromC2AndCall


This function is responsible for much of the malicious operations that Talos observed while analyzing this malware. First, it records the current system time on the infected system. It then delays for 601 seconds before continuing operations, likely an attempt to evade automated analysis systems that are configured to execute samples for a predefined period of time or determine whether the malware is being executed in a debugger. In order to implement this delay functionality, the malware calls a function which attempts to ping 224.0.0.0 using a delay_in_seconds timeout set to 601 seconds. It then checks to determine the current system time to see if 600 seconds has elapsed. If that condition is not met, the malware terminates execution while the CCleaner binary continues normal operations. In situations where the malware is unable to execute IcmpCreateFile, it then falls back to using Sleep() to implement the same delay functionality. The malware also compares the current system time to the value stored in the following registry location:

        HKLM\SOFTWARE\Piriform\Agomo:TCID

If the value stored in TCID is in the future, the malware will also terminate execution.
Figure 3: Delay Routine

The malware then checks to determine the privileges assigned to the user running on the system. If the current user running the malicious process is not an administrator the malware will terminate execution.
Figure 4: Privilege Check

If the user executing the malware does have administrative privileges on the infected system, SeDebugPrivilege is enabled for the process. The malware then reads the value of 'InstallID' which is stored in the following registry location:

        HKLM\SOFTWARE\Piriform\Agomo:MUID

If this value does not exist, the malware creates it using '((rand()*rand() ^ GetTickCount())'.

Once the aforementioned activities have been performed, the malware then begins profiling the system and gathering system information which is later transmitted to the C2 server. System information is stored in the following data structure:
Figure 5: CCBkdr_System_Information Data Structure

Once the system information has been collected, it is encrypted and then encoded using modified Base64. The malware then establishes a Command and Control (C2) channel as described in the following section.

Command and Control (C2)


While analyzing this malware, Talos identified what appears to be a software bug present in the malicious code related to the C2 function. The sample that Talos analyzed reads a DGA computed IP address located in the following registry location, but currently does nothing with it:

        HKLM\SOFTWARE\Piriform\Agomo:NID

It is unknown what the purpose of this IP address is at this time, as the malware does not appear to make use of it during subsequent operations. In any event, once the previously mentioned system information has been collected and prepared for transmission to the C2 server, the malware will then attempt to transmit it using an HTTPS POST request to 216[.]126[.]225[.]148. The HTTPS communications leverage a hardcoded HTTP Host header that is set to speccy[.]piriform[.]com, a legitimate platform which is also created by Piriform for hardware monitoring. This could make dynamic analysis more difficult as the domain would appear to be legitimate and perhaps even expected depending on the victim infrastructure. The requests also leverage HTTPS but ignore all security errors as the server currently returns a self-signed SSL certificate that was issued to the subdomain defined in the Host header field. In cases where no response is received from the C2 server, the malware then fails back to a Domain Generation Algorithm (DGA) as described in the section 'Domain Generation Algorithm' of this post.

Once a C2 server has been identified for use by the malware, it then sends the encoded data containing system profile information and stores the C2 IP address in the following registry location:

        HKLM\SOFTWARE\Piriform\Agomo:NID

The malware then stores the value of the current system time plus two days into the following registry location:

       HKLM\SOFTWARE\Piriform\Agomo:TCID

Data received from the C2 server is then validated to confirm that the received data is in the correct format for a CCBkdr_ShellCode_Payload structure. An example is shown below:
Figure 6: CCBkdr_ShellCode_Payload Data Structure

The malware then confirms that the value of EncryptedInstallID matches the value that was previously transmitted to the C2 server. It then allocates memory for the final shellcode payload. The payload is then decoded using modified Base64 and stored into the newly allocated memory region. It is then decrypted and called with the addresses of LoadLibraryA and GetProcAddress as parameters. Once the payload has been executed, the memory is deallocated and the following registry value is set to the current system time plus seven days:

        HKLM\SOFTWARE\Piriform\Agomo:TCID

The received buffer is then zeroed out and deallocated. The CCBkdr_ShellCode_Payload structure is also deallocated and the malware then continues with normal CCleaner operations. A diagram describing the high level operation of this malware is below:
Figure 7: Malware Operation Process Flow

Domain Generation Algorithm


In situations where the primary C2 server does not return a response to the HTTP POST request described in the previous section, the malware fails back to using a DGA algorithm. The algorithm used by this malware is time-based and can be calculated using the values of year and month. A list of DGA domains is below:
Figure 8: 12 Month DGA Generation
The malware will initiate DNS lookups for each domain generated by the DGA algorithm. If the DNS lookup does not result in the return of an IP address, this process will continue. The malware will perform a DNS query of the active DGA domain and expects that two IP addresses will be returned from the name server managing the DGA domain's namespace. The malware will then compute a secondary C2 server by performing a series of bit operations on the returned IP address values and combine them to determine the actual fallback C2 server address to use for subsequent C2 operations. A diagram showing this process is below:
Figure 9: C2 Process Diagram

Cisco Talos observed during analysis that the DGA domains had not been registered, so we registered and sinkholed them to prevent attackers from being able to use them for malicious purposes.

Potential Impact


The impact of this attack could be severe given the extremely high number of systems possibly affected. CCleaner claims to have over 2 billion downloads worldwide as of November 2016 and is reportedly adding new users at a rate of 5 million a week.
Figure 10: CCleaner Consumer Demographics

If even a small fraction of those systems were compromised an attacker could use them for any number of malicious purposes. Affected systems need to be restored to a state before August 15, 2017 or reinstalled. Users should also update to the latest available version of CCleaner to avoid infection. At the time of this writing that is version 5.34. It is important to note that according to the CCleaner download page, the free version of CCleaner does not provide automated updates, so this might be a manual process for affected users.
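As an aside, on an Ansible-managed Windows fleet that installs CCleaner through Chocolatey, the upgrade step could be pushed along these lines; the inventory group and the Chocolatey package name are assumptions rather than anything specific to this incident:

- hosts: windows_workstations      # assumed inventory group for the affected machines
  tasks:
    - name: Upgrade CCleaner to the latest available version
      win_chocolatey:
        name: ccleaner             # assumes the machines already use the Chocolatey "ccleaner" package
        state: latest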

In analyzing DNS-based telemetry data related to this attack, Talos identified a significant number of systems making DNS requests attempting to resolve the domains associated with the aforementioned DGA domains. As these domains have never been registered, it is reasonable to conclude that the only conditions in which systems would be attempting to resolve the IP addresses associated with them is if they had been impacted by this malware. While most of the domains associated with this DGA have little to no request traffic associated with them, the domains related to the months of August and September (which correlates with when this threat was active in the wild) show significantly more activity.

Looking at the DNS-related activity observed by Cisco Umbrella for the month of July 2017 (prior to CCleaner 5.33 being released), we observed very little in the way of DNS requests to resolve the IP address for the DGA domain associated with this malware:
Figure 11: DNS Activity for July 2017 DGA Domain

As mentioned earlier in this post, the version of CCleaner that included this malware was released on August 15, 2017. The following graph shows a significant increase in the amount of DNS activity associated with the DGA domain used in August 2017:
Figure 12: DNS Activity for August 2017 DGA Domain

Likewise, the DGA domain associated with September 2017 reflects the following activity with regards to attempts to resolve the IP associated with it:
Figure 13: DNS Activity for September 2017 DGA Domain

Note that on September 1, 2017 it appears that the DNS activity shifted from the DGA domain previously used in August to the one used in September, which matches the time-based DGA algorithm described in the "Domain Generation Algorithm" section of this blog post. After reaching out to Avast we noted that the server was taken down and became unavailable to already infected systems. As a result, we saw a significant increase in the amount of requests that were being directed at the failback DGA domains used by the malware.
Figure 14: Traffic Spike Following Server Takedown

It is also worth noting that at the time of this post, antivirus detection for this threat remains very low (The detections are at 1/64 at the time of this writing).
Figure 15: VirusTotal Detections for CCleaner Binary

As part of our response to this threat, Cisco Talos has released comprehensive coverage to protect customers. Details related to this coverage can be found in the "Coverage" section of this post.

Conclusion

 

This is a prime example of the lengths that attackers are willing to go to in their attempts to distribute malware to organizations and individuals around the world. By exploiting the trust relationship between software vendors and the users of their software, attackers can benefit from users' inherent trust in the files and web servers used to distribute updates. In many organizations, data received from commonly used software vendors rarely receives the same level of scrutiny as that which is applied to what is perceived as untrusted sources. Attackers have shown that they are willing to leverage this trust to distribute malware while remaining undetected. Cisco Talos continues to monitor all aspects of the threat landscape to quickly identify new and innovative techniques used by attackers to target organizations and individuals around the world.

Coverage


The following ClamAV signatures have been released to detect this threat: 6336251, 6336252.

Additional ways our customers can detect and block this threat are listed below.



Advanced Malware Protection (AMP) is ideally suited to prevent the execution of the malware used by these threat actors.

CWS or WSA web scanning prevents access to malicious websites and detects malware used in these attacks.

AMP Threat Grid helps identify malicious binaries and build protection into all Cisco Security products.

Umbrella, our secure internet gateway (SIG), blocks users from connecting to malicious domains, IPs, and URLs, whether users are on or off the corporate network.

Indicators of Compromise (IOCs)

File Hashes

6f7840c77f99049d788155c1351e1560b62b8ad18ad0e9adda8218b9f432f0a9
1a4a5123d7b2c534cb3e3168f7032cf9ebf38b9a2a97226d0fdb7933cf6030ff
36b36ee9515e0a60629d2c722b006b33e543dce1c8c2611053e0651a0bfdb2e9

DGA Domains

ab6d54340c1a[.]com
aba9a949bc1d[.]com
ab2da3d400c20[.]com
ab3520430c23[.]com
ab1c403220c27[.]com
ab1abad1d0c2a[.]com
ab8cee60c2d[.]com
ab1145b758c30[.]com
ab890e964c34[.]com
ab3d685a0c37[.]com
ab70a139cc3a[.]com

IP Addresses

216[.]126[.]225[.]148

----


Contributing to the pfSense project gets easier!



----
Contributing to the pfSense project gets easier!
// Netgate Blog

In March 2014, in order to protect the pfSense community and project from any legal challenge as to ownership of contributions, we introduced a contributor license agreement, modeled on the CLA for the Apache project.


----


Changes to Hotfixes and Maintenance for XenServer Releases



----
Changes to Hotfixes and Maintenance for XenServer Releases
// Latest blog entries

XenServer (XS) is changing as a product with more aggressive release cycles as well as the recent introduction of both Current Release (CR) and Long Term Service Release (LTSR) options. Citrix recently announced changes to the way XenServer updates (hotfixes) will be regulated and this article summarizes the way these will be dealt with looking forward.

To first define a few terms and clarify some points, the CR model is intended for those who always want to install the latest and greatest versions, and these are being targeted for roughly quarterly releases. The LTSR model is primarily intended for installations that are required to run with a fixed release. These are by definition required to be under a service contract in order to gain access to periodic updates, known as Cumulative Updates or CUs. The first of these will be CU1. Since Citrix is targeting support for LTSR versions for ten years, this incurs quite a bit of cost over time and is a much longer period than a given CR is maintained, hence the requirement for active support. The support for those running XS as part of licensed versions of XenApp/XenDesktop will remain the same, as these CR licenses are provided as part of the XenApp/XenDesktop purchase. There will still be a free version of XenServer, and the only change at this point will be that on-demand hotfixes for a given CR will no longer be available once the next CR is released. Instead, updates to quarterly releases will be needed to access and stay on top of hotfixes. More will be discussed about this later on. End-of-life timelines should help customers in planning for these changes. Licensed versions of XenServer need to fall under active Customer Success Services (CSS) contracts to be eligible for access to the corresponding types of updates and releases that fall outside of current product cycles.

 

At a Glance

The summary of how various XS versions are treated looking forward is presented in the following table, which provides an overview at a glance.

XenServer Version | Active Hotfix Cut-Off  | Action Required
6.2 & 6.5 SP1     | EOL June 2018          | None. Hotfixes will continue to be available as before until EOL.
7.0               | December 2017          | Upgrade or buy CSS.
7.1               | December 2017          | Upgrade or buy CSS (to stay on the LTSR stream by deploying CU1).
7.2               | At release of next CR  | Upgrade to latest CR, or buy CSS to obtain 4 months more of hotfixes (and thus skip one CR).

 

In the future, the only way to get hotfixes for XS 7.2 without paying for a CSS will be to upgrade to the most current CR once released. Customers who do pay for CSS will also be able to access hotfixes on 7.2 for a further four months after the next CR is released. This would in principle allow you to skip a release and transition to the next one beyond it.

 

What Stays the Same

While the changes are fairly significant, especially for users of the free edition of XS, some aspects will not change. These include:

  • There will still be free versions of XenServer released as part of the quarterly CR cycle.
  • The free version of XS can still be updated with all released hotfixes until the next CR becomes generally available.
  • XS code still remains open source.
  • Any hotfixes that have already been released will remain publicly available.
  • There will be no change to XS 6.2 through 6.5 SP1. They are only going to get security hotfixes at this point, anyway, until being EOLed in June 2018.

 

In Summary 

First off, stay calm and keep on maintaining XS pretty much as before except for the cut-offs on how long hotfixes will be available for non-paid-for versions. The large base of free XS users should not fear that all access to hotfixes is getting completely cut off. You will still be able to maintain your installation exactly as before, up to the point of a new CR. This will take place roughly every quarter. This is a pretty small amount of extra effort to keep up, especially given how much easier updating has become, in particular for pools. Most users would do well to keep up with current releases to benefit from improvements and to not have to apply huge numbers of hotfixes whenever installing a base version; the CR releases will include all hotfixes accumulated up to that point, which will be more convenient.

 

What To Do 

Any customers who already have active CSS contracts will be unaffected by any of these changes. Included are servers that support actively licensed XenApp/XenDesktop instances, which, by the way, get all the XS Enterprise features incorporated into the latest XS releases – for all editions of XA/XD. Customers on XS 7.0 and 7.1 LTSR who have allowed their software maintenance to lapse and have not renewed/upgraded to CSS subscriptions should seriously consider that route.

Citrix will provide additional information as these changes are put into place.


----


Citrix Announces First Cumulative Update for XenServer 7.1



----
Citrix Announces First Cumulative Update for XenServer 7.1
// Latest blog entries

Citrix has posted a blog announcing the availability of XenServer 7.1 CU1, the first cumulative update for the XenServer 7.1 Long-Term Support Release (LTSR).

CU1 rolls up all hotfixes released on XenServer 7.1 to date. While there are no new features introduced in this update, a number of additional fixes are included to further enhance and simplify XenApp and XenDesktop environments. Information pertaining to these fixes can be found in the release notes.

Of particular significance, this cumulative update provides customers on Long-Term Support Release (LTSR) 7.1 with the latest platform fixes applicable to v7.1. Unlike Current Releases (CR), which provide access to new features on an almost quarterly basis, Long-Term Support Releases are designed for customers who, possessing the features they need to optimize their XenApp and XenDesktop environments, intend to remain on the LTSR for the foreseeable future. For these customers, using a Long-Term Support Release offers stability - Citrix will support the LTSR for up to 10 years – as well as access to fixes as they become available. You can learn more about the differences between Long-Term Support and Current Releases here.

Wondering if XenServer 7.1 Enterprise edition is right for your organization? Click here to download a free trial version.

Cheers!

Andy

 


----


Chef Community



----
Chef Community
// Food Fight

Learn about getting involved, contributing to, and learning from your peers. We will also discuss the upcoming Chef Community Summits and share a promotional code to save some money on the ticket!

Panel

Show Notes

Download

  • Video
  • Audio - Coming Soon!

The Food Fight Show is brought to you by Nathen Harvey and Nell Shamrell with help from other hosts and the awesome community of Chefs.

The show is sponsored, in part, by Chef.

Feedback, suggestions, and questions: info@foodfightshow.com or http://github.com/foodfight/showz.


----


What’s new in Chef Certification 2.0



----
What's new in Chef Certification 2.0
// Chef Blog

There are some exciting things happening in Chef Certification right now. If you dipped in previously and haven't progressed, then you should come back to check it out. Before you do, here are some highlights of the changes.

New: Blended exam format

If you've sat Local Cookbook Development or Extending Chef exams, you'll know that the Multiple Choice and Lab components of those exams are separate and you must pass Part 1 before you can schedule Part 2, and you must pass both components to obtain that badge.

Well, all that's about to change. Going forward, each badge will be earned through a single exam that covers both multiple choice questions and hands-on tasks.

This is an exciting new development for us, and it means we can include multiple choice questions based on a live environment, like, "Use knife to find the IP Address of the node with role[database]" and have the actual IP as one of the multiple choice options. How cool is that?

New: Open book testing

And there's more. In moving to this new format, the Local Cookbook Development and Extending Chef exams will now be open book. Yes, you heard that right – open book! You will have full access to Chef documentation during the exam, just as you would in real life when solving a problem. But there's a catch: we will be limiting the time for completing an exam to 90 minutes. So you may be able to look up one or two things, but you won't have time to look up every answer. And watch out: since you can look up the docs, we may include some situation-based problems for you to solve, and more difficult questions.

New: Individual certificates for each badge

One of the most frequently asked-for features is a certificate for each individual badge that can be shared on LinkedIn. Well, guess what? We've listened to the feedback and we're doing it now.

As of now, anyone who passes an exam will get a certificate for that badge. We will go back and update the accounts for anyone who obtained a badge up to now, so look out for an email coming through.

Of course, you'll also get a separate badge for the actual Certified Chef Developer certification once you obtain three badges.

New: Windows variant for Local Cookbook Development

Another common request is to have a Windows equivalent exam for Local Cookbook Development. Up till now the multiple choice questions have been platform agnostic, but for the hands-on part you were given an Ubuntu workstation and asked to perform Linux-based tasks. We fully understand this is less than ideal for Windows users, so now we're addressing it.

Now, when you schedule a Local Cookbook Development exam, you will be asked if you want to use the Linux or Windows variant. In both cases, the multiple choice questions are platform agnostic, but now the Windows version will ask you to perform Windows related tasks in a Windows environment.

New: "Chef Certification Prep" Learn Chef Rally track

One other popular question is, "do you have any sample questions so we know we are ready to take the exam?" Well the truth is we don't have sample questions, but we have gone one better.

Instead, we're giving you a bunch of tasks you can work through in your own time on Learn Chef Rally. Check out the new Chef Certification Prep track!

Learning Chef is one thing, but getting real, practical, hands on experience is something different entirely. Tutorials are great to help you learn, but they don't force you to think about the solutions.

These modules differ from other modules on Learn Chef Rally. Tutorials in other modules guide you through a task and give you steps to perform. Here you're on your own. You're told what needs to be done, but it's up to you to figure out how to complete it. If you're not sure how to do something, you can go search the docs and do your own research. Think of it as university-level Chef education, as opposed to secondary level where you're spoon-fed the answers.

If you want to become a Certified Chef Developer, then you really ought to be able to complete these tasks. Use them to gauge if you're ready to sit the exam, and also to get some real hands on experience.

Save on certification

In celebration of all the changes for Chef certification,  we're offering a 25% discount on all exams purchased through October. Go to https://training.chef.io/certification to schedule your exam and use the discount code "CERT25" at checkout.

The post What's new in Chef Certification 2.0 appeared first on Chef Blog.


----


Understanding Continuous Automation



----
Understanding Continuous Automation
// Chef Blog

High velocity companies – those that can quickly turn ideas into digital experiences for customers – use automation to get there. The concept of continuous automation is therefore generating a lot of interest. But what is continuous automation? What are the benefits? And how do you get there?

Defining Continuous Automation

Continuous Automation is the practice of automating every aspect of an application's lifecycle to build and deploy software and changes quickly, consistently, and safely. It integrates automation of infrastructure, applications, and compliance, defining elements as code to make it easy to manage multiple versions, test for a variety of conditions, change when needed, and apply at scale. It is a sophisticated approach to building, deploying, and managing software.

But like a lot of sophisticated approaches, it's easy to get distracted by details or organizational constraints, and miss the larger goal.

The Benefits of Continuous Automation

High velocity companies are skilled at moving software from idea to ship, repeatedly. Underlying that skill are three measurable outcomes: speed, efficiency, and risk.


The challenge, of course, is that the three outcomes are naturally in opposition to one another. In the quest for greater speed, organizations make mistakes and introduce errors that eat away at efficiency and increase the risk of opening security holes. What's needed is a step-wise path to continuous automation, enabling progress on all three dimensions simultaneously.

The Path to Continuous Automation

The most successful organizations pursue continuous automation in three stages: Detect, Correct, and Automate. First, gain visibility over the current state of your infrastructure and applications to detect security risks, performance inhibitors, and areas of concern. Next, correct priority issues in order to drive outcomes. Finally, automate by building the detect-and-correct cycle into how you operate on an ongoing basis.

As you automate more of the processes involved in the application lifecycle, detection and correction happen long before issues impact your business or your customers. And that provides the confidence to truly move with velocity.

Learn More

If you want to learn more about Continuous Automation and how to implement a detect, correct, and automate cycle into your workflow, join us at one of our upcoming Continuous Automation Summits.

To learn more about Agile, Lean, and DevOps practices, check out our white paper, Continuous Automation for the Continuous Enterprise.

Register: Continuous Automation Summit

The post Understanding Continuous Automation appeared first on Chef Blog.


----


DevOps on Federal News Radio



----
DevOps on Federal News Radio
// Chef Blog

DevOps is making its way through every industry yet is still a concept that is not widely understood. A recent episode of the Federal Tech Talk series provides a perspective on DevOps, including some insights on the definition, how to start practicing DevOps, and where to learn more. Check out "Three different perspectives on DevOps" on Federal News Radio.

The panelists for the show were Brent Wodicka of AIS, David Bock of Excella Consulting, and myself. The panel discussion was moderated by John Gilroy.

What is DevOps?

First a definition of DevOps:

A cultural and professional movement, focused on how to build and operate high-velocity organizations, born from the experience of its practitioners.

This definition gets to the heart of what DevOps is trying to achieve and should be familiar to any practitioner of Chef-style DevOps.

As panelists, we explored the idea of making sure that objectives and motivations across teams are fully aligned. Developers, operators, and everyone across the organization must align to the needs of the customers or constituents and the business or agency in order to deliver.

The scale of today's infrastructure and applications requires automation as a foundation of any software-driven organization. Scale comes not just from the number of servers but also from the number of engineers working on the systems and the complexity of the applications they deliver. As teams adopt DevOps, there is a lot of work to do in modernizing both the infrastructure and the applications. Chef customers like SAP NS2 and the National Geospatial-Intelligence Agency (NGA) are building on a foundation of Chef-powered automation to modernize their operations.

Community and Collaboration

The panel also discussed the importance of collaboration among software developers and other technology professionals, including sharing code on GitHub, working with open-source communities, and building internal communities of practice.

When it came time to discuss learning more about DevOps, the panelists had plenty of ideas.

The show also covered some of the historical context for DevOps and how it is really an extension of the agile methodologies that software development teams have used successfully for years.

The discussion wrapped up with some predictions of the future. Spoiler alert: DevOps is about helping technology deliver business value and it is here to stay.

Listen and Learn

The post DevOps on Federal News Radio appeared first on Chef Blog.


----


Take a test drive of Chef Automate on Microsoft Azure



----
Take a test drive of Chef Automate on Microsoft Azure
// Chef Blog

Chef Automate on Microsoft Azure is a Continuous Automation platform that provides the tools for automating modern and legacy architectures: a release pipeline model for test, review, and deployment automation; visibility into what's changing and why; and a comprehensive solution for compliance automation. Together, Chef Automate and Microsoft Azure give you everything you need to deliver infrastructure and applications quickly and safely.

One of the best ways to see how Chef Automate can solve your biggest compliance challenges is to take a test drive. The test drive for Chef Automate on Microsoft Azure puts you in the role of a DevOps professional, operating a critical application that runs on a fleet of Windows Servers in the cloud. You will identify servers vulnerable to the WannaCry exploit, reproduce the vulnerability in a test environment, apply remediation code across your server fleet, and confirm that the exploit has been mitigated.

While working in the test drive, you will triage the situation, then reproduce and confirm the vulnerability using InSpec and publicly available compliance profiles. Next, you will mitigate the vulnerability using publicly available Chef cookbooks. Finally, in Chef Automate you will confirm that the vulnerability has been successfully remediated. The test drive gives you first-hand experience of why Chef Automate is your best continuous automation platform.
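
As a rough illustration of the detect step, the sketch below shows the kind of InSpec control that could verify SMBv1 (the protocol WannaCry exploited) is disabled on a Windows server. It is not the actual test-drive profile; the control name, registry check, and feature name are illustrative assumptions.

# Hedged sketch only; the test drive itself uses publicly available compliance profiles.
control 'sketch-smbv1-disabled' do
  impact 1.0
  title 'SMBv1 should be disabled to reduce exposure to WannaCry'

  # One common check: the LanmanServer SMB1 registry value is set to 0.
  describe registry_key('HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters') do
    its('SMB1') { should eq 0 }
  end

  # Where the OS supports it, the SMBv1 feature can also be removed outright.
  describe windows_feature('FS-SMB1') do
    it { should_not be_installed }
  end
end

The corresponding remediation can be as small as a Chef resource that enforces the same registry value; again, this is only a sketch of what the publicly available cookbooks do:

# Hedged remediation sketch, not the official cookbook.
registry_key 'HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters' do
  values [{ name: 'SMB1', type: :dword, data: 0 }]
  action :create
end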

After the test drive, you have multiple options for using Chef Automate:

Don't have a Chef Automate license?

You can deploy Chef Automate from the Azure Marketplace without a license file, which enables a free 30-day trial. The trial is an ideal opportunity to use Chef Automate on Azure with your own data. Visit the Azure Marketplace at Chef Automate on Azure.

Purchase Chef Automate in the Azure Marketplace

Install Chef Automate in your Azure subscription and get all the benefits of Chef Automate in an easy-to-deploy model. Chef Automate on Azure uses the Bring Your Own License (BYOL) model, which means that you supply your current Chef Automate license and pay only for the compute time you use on Azure. Visit the Azure Marketplace at Chef Automate on Azure.

Self-Hosting

If you have an Azure subscription and want complete control of your Chef Automate installation, you can also install Chef Automate yourself on virtual machine instances.

Take a Test Drive

Try Chef Automate on Microsoft Azure today.

The post Take a test drive of Chef Automate on Microsoft Azure appeared first on Chef Blog.


----


GDPR as part of your corporate compliance profile



----
GDPR as part of your corporate compliance profile
// Chef Blog

With the changes in EU regulation that GDPR introduces, specifically relating to the processing of EU citizens' personal data, organisations are facing fresh challenges in how they prove compliance. GDPR brings particular burdens with the 'Privacy by Design' mandate that requires data privacy be part of the system design process from day one.

In previous GDPR blog posts I've spoken a lot about specific use cases, such as GDPR scanning in an application delivery pipeline. In this post, I will describe how the InSpec language makes it incredibly easy to develop your corporate compliance profile over time, adding new standards as they become relevant for your business. This approach makes it very simple to include a GDPR standard, or a superset of it, in your existing InSpec-based profiles.

Compliance as Code

InSpec, the open source testing and compliance framework from Chef, allows us to define traditional security controls in a high-level, human-readable language. We can then execute these control code blocks against our various systems (servers, containers, virtual machines) to audit and report on their state.

As you can see in the example below, the controls often found in standards documents map easily into the InSpec language.
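
For instance, a requirement such as "remote shells must only accept SSH protocol 2", as it might appear in an internal baseline document, could be expressed as a control along these lines (the control ID, impact, and wording are illustrative assumptions, not taken from a published standard):

control 'baseline-ssh-01' do
  impact 0.8
  title 'SSH servers must only accept protocol 2'
  desc 'Protocol 1 has known cryptographic weaknesses.'

  # The sshd_config resource parses the SSH daemon configuration on the target.
  describe sshd_config do
    its('Protocol') { should cmp 2 }
  end
end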

By grouping these executable security controls together, we can form 'profiles' reflecting a security standard: either an in-house security baseline or an industry standard such as PCI DSS or ISO 27001.

To learn more about how InSpec can execute audits against your systems, check out this video on GDPR Detection and Correction:

Building a corporate profile

Once an organisation begins to redefine compliance requirements in InSpec code, it gains the following benefits:

  • Flexible – it's easy to modify a small code block.
  • Modular – controls can be inherited or skipped depending on the business needs.
  • Collaborative – as with any code base, there is a rich ecosystem for collaboration.

For me, the standout benefit is the ability to adapt quickly as new security requirements impact a business. GDPR is a perfect example. Organisations need to move quickly, but adapting to GDPR can be time consuming. By evolving existing processes, especially InSpec-based auditing, it's possible to remain agile without compromising on security.

Flexible compliance profiles

InSpec's flexibility is a huge benefit. It natively supports inheritance, control skipping, and customisations, as you would expect of any code base! It's very easy to include community and Chef-supported test sets (profiles) in an existing InSpec code base.

By taking this approach, adding GDPR-specific tests to a profile is quick and simple, allowing GDPR auditing to form part of your existing auditing process.

Learn more about profile structures in the Compliance Automation track on Learn Chef.

Implementation

Here's an example of how you would add a GDPR profile (a sample link in this case) to an existing InSpec profile.

First, add the following to the profile's inspec.yml file.

depends:
  - name: gdpr-example
    url: https://github.com/grdnrio/gdpr-example/archive/example.tar.gz

Now that the dependency is specified it's possible to include the upstream profile into a set of tests.

include_controls 'gdpr-example'

When executing a compliance scan, the external GDPR profile will be retrieved from the remote location specified, for example GitHub.

Now GDPR scanning will automatically be applied as part of an InSpec and chef-client based auditing process.
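
If only a subset of the upstream standard applies to your estate, the same inheritance mechanism lets you wrap the profile and skip individual controls. A minimal sketch, using a made-up control ID:

include_controls 'gdpr-example' do
  # 'gdpr-example-1.2' is a hypothetical control ID in the upstream profile.
  skip_control 'gdpr-example-1.2'
end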

Wrapping up

By redefining our compliance requirements in InSpec code we can easily apply auditing at scale, and on a very short scan cycle. We also have the flexibility that comes with any code base – it's simple to make changes.

What makes InSpec special is the ability to pull in profiles from any location to immediately make them part of an audit and testing lifecycle. By applying this principle, adding GDPR scanning to your existing efforts becomes trivial.

You can find an example of profile inheritance here: https://github.com/adamleff/inspec-profile-wrapper-example

Stay tuned for updates about GDPR testing profiles from Chef.

The post GDPR as part of your corporate compliance profile appeared first on Chef Blog.


----
