Friday, April 29, 2016

Changes to How Chef Products Handle Licenses [feedly]

Changes to How Chef Products Handle Licenses
https://www.chef.io/blog/2016/04/26/changes-to-how-chef-products-handle-licenses/

-- via my feedly newsfeed 

We want to make you aware of two changes that will affect Chef products going forward:

  1. All Chef products will be making their license file and the license files of all included software easier to find.
  2. For proprietary Chef products, you will be required to explicitly accept the Chef Master License and Services Agreement (Chef MLSA) on the first reconfigure of the software. The first products to include this change will be Chef Analytics, Chef Compliance, Chef Manage, and Chef Reporting. This change will require updates to any automation that runs the reconfigure command. See the Accepting the License For Proprietary Products section below for further detail.

Finding License Information

All Chef products will be adding a new file during install: /opt/<PRODUCT-NAME>/LICENSE. This will identify the license that governs the entire product and it will include pointers to license files for any third-party software that has been used in the product. This will make it easier to find and track all the licenses. The directory /opt/<PRODUCT-NAME>/LICENSES will contain individual copies of all the licenses referenced in the license file.
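For example, once this lands you should be able to inspect the files directly. The product directory name below is illustrative; substitute whichever product you have installed:

# Hypothetical example using Chef Manage's install directory:
cat /opt/chef-manage/LICENSE      # the license governing the product, plus pointers
ls /opt/chef-manage/LICENSES/     # individual copies of the referenced licenses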

In addition, this information will be displayed on downloads.chef.io for each product.

Accepting the License For Proprietary Products

For Chef proprietary software, users must now explicitly accept the Chef MLSA. There are three possible ways to accept the license. Please read the documentation on the About Chef Licenses page for full details.  This information is also available on the install page for each of the affected products.

The initial products affected are Chef Analytics, Chef Compliance, Chef Manage, and Chef Reporting. Chef Delivery is not affected at this time, as it handles licenses in a different manner. As the change is added to each product, it will be noted in the release announcements on Discourse. This blog post details the process for signing up for release announcements.

Reconfigure Automation

If you have automation around the reconfigure command, you will need to ensure one of the ways of accepting the license has been implemented in your automation. This will ensure that the reconfigure command can continue to be run without a person present.  
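As a sketch, automation for a product such as Chef Manage might pass an acceptance flag straight to the reconfigure command. The flag name below is an assumption on my part; verify the exact mechanism for your product on the About Chef Licenses page before relying on it:

# Assumption: the ctl command accepts --accept-license to skip the interactive prompt.
chef-manage-ctl reconfigure --accept-license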


Watch: Automating Windows PowerShell DSC with Chef [feedly]

Watch: Automating Windows PowerShell DSC with Chef
https://www.chef.io/blog/2016/04/22/watch-automating-windows-powershell-dsc-with-chef/

-- via my feedly newsfeed


On April 14th, I presented a live webinar on Automating Windows PowerShell Desired State Configuration (DSC) with Chef. Automating PowerShell DSC with Chef makes it safer and easier to manage hundreds or thousands of servers.

Watch the recording below to learn how you can bring order to chaotic environments by using Chef and DSC together. At the end of this post, I've included some Q&A from the presentation.


How does Chef surface errors that a DSC resource throws? Can you see them in the logs or Chef console?

Errors from the resource run are surfaced in the Chef logs.

With dsc_script (the sample with the actual DSC declarative code), is Chef compiling that into a MOF and feeding that to the LCM, or is it parsing the data and feeding it directly to the LCM without the MOF?

dsc_script creates a configuration command in a PowerShell session and generates the MOF document, which is then supplied to the LCM to apply to the node.
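For readers who haven't seen it, here is a minimal dsc_script sketch in Chef's DSL (the resource name and path are illustrative):

# dsc_script compiles the embedded configuration into a MOF and hands it to the LCM.
dsc_script 'temp-dir' do
  code <<-EOH
    File TempDir
    {
      DestinationPath = 'C:\\Temp'
      Type            = 'Directory'
      Ensure          = 'Present'
    }
  EOH
end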

What's the recommended Chef agent version to use with dsc_resource? Where can I find a comprehensive list of resources I can use with DSC?

You can use dsc_resource starting with Chef Client 12.2, but there have been some enhancements and bug fixes since then, so I'd recommend starting at 12.8 or newer. As for a comprehensive list of DSC resources, I'd start with Find-DscResource, which searches the PowerShell Gallery for available resources. There are undoubtedly more out there, but that's the primary distribution hub.
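For example (the module and resource names are only illustrations):

# Requires WMF 5 / PowerShellGet.
Find-DscResource -Name xWebsite                  # search the PowerShell Gallery by resource name
Find-DscResource -ModuleName xWebAdministration  # list the DSC resources in one module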

Is the LCM configured for push or pull?

In the case of dsc_resource (with WMF 5 RTM), it doesn't matter whether the LCM's refresh mode is 'PUSH', 'PULL' or 'DISABLED'. The use of dsc_script will force the LCM into PUSH mode. If you are using DSC with Chef, you should have the LCM set to 'PUSH' or 'DISABLED', and you should definitely configure the LCM for ApplyOnly if you plan to use dsc_script.
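A minimal WMF 5 meta-configuration sketch for setting that up (the output path is illustrative):

[DSCLocalConfigurationManager()]
configuration LcmSettings
{
    Node 'localhost'
    {
        Settings
        {
            RefreshMode       = 'Push'
            ConfigurationMode = 'ApplyOnly'
        }
    }
}
LcmSettings -OutputPath 'C:\DscMeta'                 # emits localhost.meta.mof
Set-DscLocalConfigurationManager -Path 'C:\DscMeta'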

Is the LCM a totally different agent from the local Chef client on a VM? Or do they run interdependently?

The LCM – Local Configuration Manager – is a component of Windows Management Framework 5 and is independent of Chef. Chef uses the LCM for dsc_script and dsc_resource.

What's the best way to get DSC resources enhanced/fixed?

There are two avenues – first the OSS route. If you submit a fix for a resource, include tests that validate your fix makes things behave in a way that they are expected to. Second, if you are a Microsoft customer with a support agreement, use your TAM to lobby for support from Microsoft to drive fixes to the DSC resources.

What's your favorite sandwich?

I love sandwiches of all kinds, but a ham, turkey, bacon and cheese club is my go-to favorite.

What is the support of composite resources in Chef?

Composite resources are supported in dsc_script, but not via dsc_resource. dsc_resource directly invokes DSC resources via the LCM, and the LCM does not know about composite resources. Composite resources are a way to abstract your configuration, but when you run the configuration command, each resource specified in the composite configuration is included in the MOF that is passed to the LCM.
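For contrast with the dsc_script example earlier, here is a minimal dsc_resource sketch that invokes a single (non-composite) resource; the path and contents are illustrative:

# dsc_resource drives one DSC resource directly through the LCM, bypassing MOF generation.
dsc_resource 'motd' do
  resource :file
  property :destinationpath, 'C:\\motd.txt'
  property :contents, 'managed by Chef + DSC'
end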

What was the name of the editor you mentioned you really liked using for DSC coding at the beginning of the webinar again?

Visual Studio Code (with the PowerShell and Chef extensions).

What kind of reporting is there in Chef on the compliancy of DSC resources?

dsc_resource will report the compliance of each declared resource individually, and Chef reports a roll-up of the number of individual resources as well as how many were updated. dsc_script tries to parse the output, but due to the changing formats and the removal of '-WhatIf' from Start-DscConfiguration in WMF 5, it reports the resources as a unit.

Can Chef work without a Chef agent on the target system? Can Chef build MOF files which can be placed on a DSC pull server?

Chef can work against a remote node or can be used to generate MOF documents, but both of those scenarios remove much of the advantage of using Chef to manage the configuration of a node.

Are there any limitations on 2008 R2 server with WMF 5.0 utilizing dsc_resource with Chef, or with DSC in general? Does it work the same as 2012 R2/2016 server as long as WMF 5.0 is installed?

There are no WMF 5 DSC or LCM limitations on 2008 R2. 2008 R2 does, however, have some challenges. Being a much older operating system, it does not have the newer WMI APIs that drive many of the new PowerShell cmdlets. This means resources that rely on those cmdlets will not work on 2008 R2 without specific workarounds in the resource itself.

When using Chef Client 12.4.1, executing multiple dsc_script or dsc_resource resources executes PowerShell multiple times, which takes a long time. What is being done to improve the performance?

We are constantly looking for ways to improve performance. Part of that is (in later versions of Chef Client) using newer versions of Ruby, which offer better overall performance. We are also investigating new ways of calling PowerShell without the overhead we currently have.

DSC4 and DSC5 have a slightly different format when you build the config file. Does that come into play at all when using DSC within Chef?

Using DSC through Chef is agnostic as to whether you use WMF 4 or WMF 5 for dsc_script (dsc_resource does not support WMF 4). Because we generate the configuration on the node where it is applied, the WMF version is the same.

Is it correct that dsc_resource will not risk contention with the LCM like dsc_script might when executed via a Chef recipe?

dsc_resource doesn't leave behind a configuration that the LCM will try to periodically check, so it does not risk the same potential contention that dsc_script does. However, that contention can be avoided if the LCM is set to 'ApplyOnly'.

A question about leveraging DSC configurations and the dsc_resource resource: I find value in fully defining a DSC configuration and placing it in a PS module (maybe with some parameters). Now I could leverage Chef to call Install-Module | Import-Module | call the defined configuration, then run Start-DscConfiguration. However, if you want to leverage the dsc_resource resource, you have to set the refresh mode to disabled, which disables Start-DscConfiguration, Get-DscConfiguration, and Test-DscConfiguration.

So, this was true with the preview versions of WMF 5. With the RTM version, you do not need to disable the LCM, and dsc_script and dsc_resource can be used side by side. (This also requires at least Chef Client 12.6.)

What if neither resource has a solution for what you are trying to do? Which to go with to start from scratch?

I would use the same criteria – which would be easiest to test? Where does my team's strength lie? And what kind of community support is there for building that resource?

What about class-based resources? Can't they be used to enable resources to consume other resources?

Class-based resources can be used on some level, but there is not great support in the DSC DSL for doing that. Also, class-based resources currently do not support side-by-side deployment, so you may end up with conflicting dependencies and not actually be able to use any of the resources that depend on different versions of a class-based resource.

Is there an AWS OpsWorks-like service for Azure?

Azure Operations Management Suite is probably the closest in concept. Now, to use Chef in Azure, we have a VM Agent Extension that'll install Chef, we have Chef Server images in the gallery, and of course, there is hosted Chef.

What is the status of reboot handling for Windows cookbooks?

Reboot handling is baked into Chef Client 12 with the reboot_pending? helper and the reboot resource. dsc_resource can also queue a reboot if a DSC resource requests one (via the reboot_action property of the dsc_resource resource).
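A sketch of both mechanisms together (the resource names are illustrative, and the reboot_action values are as I recall them from the Chef docs; verify against your client version):

# Queue a reboot if the DSC resource reports that one is required.
dsc_resource 'web-server-role' do
  resource :windowsfeature
  property :name, 'Web-Server'
  reboot_action :request_reboot
end

# Reboot at the end of the run only if something is actually pending.
reboot 'pending reboot' do
  action :reboot_now
  only_if { reboot_pending? }
end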

Thursday, April 21, 2016

Are you giving your business a competitive advantage? [feedly]



----
Are you giving your business a competitive advantage?
// Virtualization Management Software & Data Center Control | VMTurbo » VMTurbo Blog

I regularly talk with a lot of different individuals who work in IT in some way, shape or form. Lately I am seeing a trend, stretching from systems engineers all the way up to CIOs, that I can't seem to wrap my head around. When I am working with a potential VMTurbo candidate, we will typically discuss things like how and why something was architected a particular way, or how they ensure they are getting the most out of the infrastructure resources they have already invested in.

Somewhere during these sessions the conversation almost always comes back to performance, and this is where I get somewhat lost. Usually I will ask something like "how do you typically identify performance issues in your environment today?" The answers I get range from "usually when an end user calls us" and "only when my phone rings" to "our goal is to keep the lights on; if it ain't broke, don't fix it."

On the surface these seem like completely logical answers, right? If you work with data center technologies, your goal is to maintain uptime and ensure that customers and end users can access the applications required to perform their day-to-day jobs. But is this type of logic actually aiding your organization in a sufficient manner? Are you actually allowing your organization to recognize the benefits of a skilled information technology team and providing it with a true competitive advantage, or are you simply keeping the lights on?

How do you support the business?

Think about it for a minute. What types of applications does your organization leverage? How do these applications support the business? How do YOU support these applications? It can be a scary thought for some if you actually take a step back and think about it.

The fact that Google apparently uses cheap homegrown servers with a KVM hypervisor on top gives them a competitive advantage. The fact that trading firms have identified that a 1 millisecond difference in latency can be worth upwards of $100 million per year gives them a competitive advantage. The fact that the Indianapolis Colts reported deflated footballs to the NFL in an attempt to get Tom Brady suspended from play was an attempt (a failed one) at a competitive advantage.

The point is, uptime isn't good enough anymore. Slow is the new down, and the quicker we start adopting that mindset, the more of an asset we become to the business.


A simple example

A colleague of mine used to use this simple example whenever he was working with a potential client, and as elementary as it seems, it makes a lot of sense when you think about delivering a competitive advantage.

Have you ever moved a virtual machine to fix a problem? Have you ever moved a virtual machine to prevent a problem?

That simple statement summarizes how IT operations teams manage environments today. We monitor things in the environment, wait for something negative to occur, and then take action as quickly as possible. If we want to help the business gain a competitive advantage, then the software we invest in needs to be a competitive advantage itself. We need to adopt software that aggressively puts our infrastructure in the best position possible at all times, versus continuing down the antiquated path of "why would I do anything at all when nothing seems to be a problem right now?"


The post Are you giving your business a competitive advantage? appeared first on Virtualization Management Software & Data Center Control | VMTurbo.


----

Shared via my feedly reader


Sent from my iPhone

Before You Begin: Why Your OpenStack Initiative Will Probably Fail [feedly]



----
Before You Begin: Why Your OpenStack Initiative Will Probably Fail
// Virtualization Management Software & Data Center Control | VMTurbo » VMTurbo Blog

If you are a data center professional who has been around the block, you understand that you either keep up with technology or you become obsolete. You are probably looking at some of the newer IaaS and PaaS offerings, perhaps investigating Docker or other containers for your organization. The public cloud providers are an option that becomes more real by the day; however, high barriers to usage will probably lead you to test OpenStack, the leading private cloud solution in today's market, and explore the possibilities of an open source IaaS while moving away from your vendor lock-ins.

Your First Challenge: The Learning Curve is Steep

There are only a handful of certifications (for example from openstack.org, Mirantis and Red Hat), which means that you will spend a lot of time trying things out for yourself. Fortunately there are some good resources on openstack.org, such as all the OpenStack Summit videos, and once you want some hands-on experience you can run DevStack, which is basically a way to run an OpenStack cloud on your laptop.
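If you want to try it, the quick start is roughly this (commands as of this writing; check the DevStack documentation on docs.openstack.org for the current procedure):

git clone https://git.openstack.org/openstack-dev/devstack
cd devstack
./stack.sh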

One of the challenges of learning an open source product as dynamic and fast-moving as this one is that there is a HUGE amount of documentation out there, and a lot of it is already outdated. In other words, you can spend months reading, and a lot of what you read doesn't even matter.

It is a technically challenging struggle, but eventually you get it. You dive into each component (Nova, Keystone etc.) and actually have a good understanding of how to get it all working. You set it all up.

CONGRATULATIONS! You have delivered an OpenStack private cloud!

Invite your users to start using it. This is where the real problem starts. YOU CREATED A MONSTER!

You actually enabled your customers to use your infrastructure like a cloud, and they want to take advantage of it. Your good old VMware environment was within your control (or so you thought); at least it was the devil you knew. Now you have VMs popping up everywhere – you don't know what they are running, whether they are important, whether they are right-sized, or when they are going away. Do you have enough capacity to handle all the demand from these VMs?

If you have no control over the workload demand, how will you manage it? This problem isn't specific to OpenStack. Any IaaS initiative will suffer from the same outcome. However, it is a lot more painful when you had to set up a whole new virtualization environment from the ground up.

As you set up your IaaS offering, you need to think about a control platform from an early stage. Any cloud provider has one, and if you want to be a cloud provider (internally), you need a control platform too.

The post Before You Begin: Why Your OpenStack Initiative Will Probably Fail appeared first on Virtualization Management Software & Data Center Control | VMTurbo.


----

Shared via my feedly reader


Sent from my iPhone

The Cloudcast #248 - Trouble Inside Your Containers [feedly]



----
The Cloudcast #248 - Trouble Inside Your Containers
// The Cloudcast (.NET)

Aaron and Brian talk with Tim Gerla (@tybstar, VP Product) and Dan Nurmi (@dannurmi, CTO) at @Anchore about their background as entrepreneurs, the challenges of container security, the challenges of CI/CD and security, and how to avoid slowing down developers.

Show Links:

Topic 1 - Get an understanding of the team and their background

Topic 2 - What customer challenge is Anchore attempting to solve? What are some of the scary security/container stories out there?

Topic 2a - Let's talk about awareness of this security challenge, especially amongst Dev teams vs. Ops teams.

Topic 3 - Anchore was founded by people from Eucalyptus and Ansible, which were very popular open-source projects. Does Anchore have an open-source model as well?

Topic 4 - How does Anchore fit into existing "container" systems and PaaS like Docker (or Docker Data Center), Cloud Foundry, Kubernetes, etc.? Don't some of those systems already have security elements built in?

Topic 5 - How does Anchore fit into some of the CI/CD systems, where developers could be adding random things into containers?

Topic 6 - How does the reporting and management framework fit into other existing security systems for the CISO?


----

Shared via my feedly reader


Sent from my iPhone

Announcing More Speakers for DockerCon 2016 [feedly]



----
Announcing More Speakers for DockerCon 2016
// Docker Blog

Today, we're excited to share with you another round of speakers selected by the DockerCon Community Review Committee. Once again, we'd like to thank everyone who took the time to both submit and review the proposals for DockerCon 2016! Everyone … Continued
----

Shared via my feedly reader


Sent from my iPhone

Building USDA’s Platform of the Future with Docker [feedly]



----
Building USDA's Platform of the Future with Docker
// Docker Blog

The United States Department of Agriculture (USDA) develops the federal government's policies on farming, agriculture, forestry and food and has a budget of over $139 billion. Its goal is to enable farmers and ranchers to thrive, to drive greater awareness … Continued
----

Shared via my feedly reader


Sent from my iPhone

Video Whiteboard Series for Docker Universal Control Plane [feedly]



----
Video Whiteboard Series for Docker Universal Control Plane
// Docker Blog

It's been less than two months since Docker Universal Control Plane (UCP) became generally available as part of the Docker Datacenter (DDC) subscription. With DDC, organizations are able to set up a Containers as a Service (CaaS) application environment on-premises/VPC … Continued
----

Shared via my feedly reader


Sent from my iPhone

A Look Back at One Year of Docker Security [feedly]



----
A Look Back at One Year of Docker Security
// Docker Blog

Security is one of the most important topics in the container ecosystem right now, and over the past year, our team and the community have been hard at work adding new security-focused features and improvements to the Docker platform.   … Continued
----

Shared via my feedly reader


Sent from my iPhone

Managing Secrets with Chef v2 [feedly]



----
Managing Secrets with Chef v2
// Chef Software

Managing infrastructure requires you to coordinate interactions between applications and individuals. Enabling secure communication between the two relies on a number of systems trusting one another. In practice, this means distributing keys, passwords, and certificates. Handling these secrets securely can definitely be a challenge. In this webinar, Franklin Webber, a training and technical content lead at Chef, will explain why it is important to manage secrets and how you can do it with Chef and tools such as encrypted data bags and Chef Vault. When you're done, you'll see why security through encryption is preferable to security through obscurity, be able to safely manage secrets with Chef, and know where to go to learn even more.
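As a taste of what the webinar covers, two illustrative commands (the bag, item, secret path, and search query are all hypothetical):

# Encrypted data bag item, encrypted with a shared secret file:
knife data bag create credentials db_creds --secret-file ~/.chef/encrypted_data_bag_secret

# chef-vault equivalent, scoping decryption to nodes matching a search:
knife vault create credentials db_creds '{"password": "s3cret"}' --search 'role:webserver'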
----

Shared via my feedly reader


Sent from my iPhone


Tuesday, April 19, 2016

Recovering From a vCenter Failure [feedly]



----
Recovering From a vCenter Failure
// CloudStack Consultancy & CloudStack...

While, in my opinion, VMware's vSphere is the best-performing and most stable hypervisor available, vCenter obstinately remains a single point of failure when using vSphere, and it's no different when leveraging vCenter in a CloudStack environment. Therefore, very occasionally there is a requirement to rebuild a vCenter server which was previously running in your CloudStack environment. Working with one of our clients, we found out (the hard way) how to do this.

Replacing a Pre-Existing vCenter Instance

Recreating a vCenter Instance

The recovery steps below apply to both the Windows-based vCenter server and the appliance.

The first step, which I won't cover in detail here, is to build a new vCenter server and attach the hosts to it.  Follow VMware standard procedure when building your new vCenter server.  To save some additional steps later on, reuse the host name and IP address of the previous vCenter server. Ensure that the permissions have been reapplied allowing CloudStack to connect at the vCenter level with full administrative privileges, and that datacenter and cluster names have been recreated accurately.  When you re-add the hosts, the VMs running on those hosts will automatically be pulled into the vCenter inventory.

Networking

The environment that we were working on was using standard vSwitches; therefore the configuration of those switches was held on each of the hosts independently and pulled into the vCenter inventory when the hosts were re-added to vCenter.  Distributed vSwitches (dvSwitches) have their configuration held in the vCenter.  We did not have to deal with this complication and so the recreation of dvSwitches is not covered here.

Roll-back

The changes to be made can easily be undone, as they make no change and trigger no change in the physical environment.  However, it is never a bad idea to have a backup; you shouldn't ever need to roll back the whole database, but it's good to have a record of your before/after states.

Update vCenter Password in CloudStack

If you have changed the vCenter password, it'll need updating in the CloudStack database in its encrypted form.  The following instructions are taken from the CloudStack documentation:

  • Generate the encrypted equivalent of your vCenter password:
$ java -classpath /usr/share/cloudstack-common/lib/jasypt-1.9.0.jar org.jasypt.intf.cli.JasyptPBEStringEncryptionCLI encrypt.sh input="_your_vCenter_password_" password="`cat /etc/cloudstack/management/key`" verbose=false
  • Store the output from this step; we will add it to the cluster_details and vmware_data_center tables in place of the plain text password
  • Find the ID of the row of cluster_details table that you have to update:
$ mysql -u <USERNAME> -p<PASSWORD>
mysql> select * from cloud.cluster_details;
  • Update the password with the new encrypted one
update cloud.cluster_details set value = '_ciphertext_from_step_1_' where id = _id_from_step_2_;
  • Confirm that the table is updated:
select * from cloud.cluster_details;
  • Find the ID of the correct row of vmware_data_center that you want to update
select * from cloud.vmware_data_center;
  • update the plain text password with the encrypted one:
update cloud.vmware_data_center set password = '_ciphertext_from_step_1_' where id = _id_from_step_5_;
  • Confirm that the table is updated:
select * from cloud.vmware_data_center;

Reallocating the VMware Datacenters to CloudStack

The next step is to recreate the cloud.zone property within vSphere, which is used to track VMware datacenters that have been connected to CloudStack.  Adding a VMware datacenter to CloudStack creates this custom attribute and sets it to "true".  CloudStack will check for this attribute and value and show an error in the management log if it attempts to connect to a vCenter server which does not have this attribute set.  While the property can be edited using the full vSphere client, it must be created using the PowerCLI utility.

Using the PowerCLI utility do the following:

Connect-VIServer <VCENTER_SERVER_IP>

(you'll be prompted for a username and password with sufficient permissions for the vCenter Server)

New-CustomAttribute -Name "cloud.zone" -TargetType Datacenter
Get-Datacenter <DATACENTER_NAME> | Set-Annotation -CustomAttribute "cloud.zone" -Value true

Reconnecting the ESXi Hosts to CloudStack

CloudStack will now be connected to the vCenter server, but the hosts will probably still all be marked as down.  This is because vCenter uses an internal identifier for hosts, so that names and IP addresses can change while it still keeps track of which host is which.

This identifier appears in two places in CloudStack's database: the 'guid' in the cloud.host table, and in the cloud.host_details table against the 'guid' value. The host table can be queried as follows:

SELECT id,name,status,guid FROM cloud.host WHERE hypervisor_type = 'VMware' AND status != 'Removed';

The guid takes the form:

HostSystem:<HOST_ID>@<VCENTER_NAME>

You can find the new host ID either through PowerCLI again, or via a utility like Robware's RVTools.  RVTools is a very lightweight way of viewing information from a vCenter; it also allows export to CSV, so it is very handy for manual reporting and analysis. The screen grab below shows a view called vHosts; note the Object ID.  This is vCenter's internal ID of the host.

rvtools

Through PowerCLI you would use:

Connect-VIServer <VCENTER_SERVER_IP>
Get-VMHost | ft -Property Name,Id -autosize

Now you can update the CloudStack tables with the new host ids for each of your hosts.

cloud.host.guid for  host '10.0.1.32' on vCenter '10.0.1.11' might previously have read:

HostSystem:host-99@10.0.1.11

should be updated to:

HostSystem:host-10@10.0.1.11

Remember to check both the host and host_details tables.
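Using the sample values above, the update for one host might look like this (an illustrative query; mirror the same change against the 'guid' row in host_details):

UPDATE cloud.host
   SET guid = 'HostSystem:host-10@10.0.1.11'
 WHERE guid = 'HostSystem:host-99@10.0.1.11';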

Once you have updated the entries for all of the active hosts in the zone, you can start the management service again, and all of the hosts should reconnect, as their pointers have been fixed in the CloudStack database.

Summary

This article explains how to re-insert a lost or failed vCenter server into CloudStack, and describes the use of PowerCLI to both set and retrieve values from vCenter.

This process could be used to migrate an appliance-based vCenter to a Windows-based environment in order to support additional capacity, as well as to replace a broken vCenter installation.

About The Author

Paul Angus is VP Technology & Cloud Architect at ShapeBlue, The Cloud Specialists. He has designed and implemented numerous CloudStack environments for customers across 4 continents, based on Apache CloudStack, Citrix CloudPlatform and Citrix CloudPortal.

When not building clouds, Paul likes to create Ansible playbooks that build clouds.


----

Shared via my feedly reader


Sent from my iPhone

Saturday, April 16, 2016

Widespread JBoss Backdoors a Major Threat [feedly]



----
Widespread JBoss Backdoors a Major Threat
// Talos Blog

Recently a large-scale ransomware campaign delivering Samsam changed the threat landscape for ransomware delivery. Targeting vulnerabilities in servers to spread ransomware is a new dimension to an already prolific threat. Due to information provided by our Cisco IR Services Team, stemming from a recent customer engagement, we began looking deeper into the JBoss vectors that were used as the initial point of compromise. Initially, we started scanning the internet for vulnerable machines. This led us to approximately 3.2 million at-risk machines.

As part of this investigation, we scanned for machines that were already compromised and potentially waiting for a ransomware payload. We found just over 2,100 backdoors installed across nearly 1,600 IP addresses. Over the last few days, Talos has been in the process of notifying affected parties, including schools, governments, aviation companies, and more.

Several of these systems had Follett "Destiny" software installed. Destiny is a library management system designed to track school library assets and is primarily used in K-12 schools across the globe. We contacted Follett, who described an impressive patching system that not only patches all systems from version 9.0-13.5, but also captures any non-Destiny files present on the system to help remove any existing backdoors. Follett technical support will then reach out to customers who are found to have suspicious files on their system. It is imperative, given the wide reach of this threat, that all Destiny users ensure they've taken advantage of this patch.

Follett asked us to share the following:

Based on our internal systems security monitoring and protocol, Follett identified the issue and immediately took actions to address and close the vulnerability on behalf of our customers.

Follett takes data security very seriously and as a result, we are continuously monitoring our systems and software for threats, and enhancing our technology environment with the goal of minimizing risks for the institutions we serve.

As part of this investigation, Talos and Follett will continue to work together to analyze the webshells found on compromised servers and to ensure that our customers are informed about how best to protect their networks.

In this process we've learned that there is normally more than one webshell on compromised JBoss servers and that it is important to review the contents of the jobs status page. We've seen several different backdoors including "mela", "shellinvoker", "jbossinvoker", "zecmd", "cmd", "genesis", "sh3ll" and possibly "Inovkermngrt" and "jbot". This implies that many of these systems have been compromised several times by different actors.
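One hedged way to sweep a server for the filenames listed in the Indicators section below (the base path is illustrative; extend the name list as needed):

find /opt/jboss -type f \( -name 'jbossass.jsp' -o -name 'shellinvoker.jsp' \
  -o -name 'mela.jsp' -o -name 'zecmd.jsp' -o -name 'cmd.jsp' \) -print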

US-CERT has published the following advisory regarding webshells:
https://www.us-cert.gov/ncas/alerts/TA15-314A

Webshells are a major security concern, as they indicate an attacker has already compromised a server and can remotely control it. As a result, a compromised web server could be used to pivot and move laterally within an internal network.

Given the severity of this problem, a compromised host should be taken down immediately, as it could be abused in a number of ways. These servers are hosting JBoss, which has recently been involved in a high-profile ransomware campaign.

The software for the shell itself can be found here.

Recommended Remediation


If you find that a webshell has been installed on a server there are several steps that need to be taken. Our first recommendation, if at all possible, is to remove external access to the server. This will prevent the adversaries from accessing the server remotely. Ideally, you would also re-image the system and install updated versions of the software. This is the best way to ensure that the adversaries won't be able to access the server. If for some reason you are unable to rebuild completely, the next best option would be to restore from a backup prior to the compromise and then upgrade the server to a non-vulnerable version before returning it to production.

For users of Follett Destiny, please respect the autoupdate notifications and ensure that you have patched correctly. This process, according to Follett, should remove unwanted backdoor shells.

As always, running a reputable anti-virus software is recommended.

Conclusion


With around 2,100 servers affected, there are a lot of stories about how this happened, but a consistent thread in them all is the need to patch. Patching is a key component of software maintenance. It is neglected far too often by both users and makers of software. Failures anywhere along the chain will ensure that this type of attack remains successful. With the addition of ransomware, the potential impact could be devastating for small and large businesses alike.

Indicators


This list is not meant to be comprehensive at this time, but provides for the basis to develop more Indicators that are present or left behind by various webshell and related actor tools.


jbossass.jsp, jbossass_jsp.class
shellinvoker.jsp, shellinvoker_jsp.class
mela.jsp, mela_jsp.class
zecmd.jsp, zecmd_jsp.class
cmd.jsp, cmd_jsp.class
wstats.jsp, wstats_jsp.class
idssvc.jsp, idssvc_jsp.class
iesvc.jsp, iesvc_jsp.class



Coverage


The following Snort rules address this threat. Please note that additional rules may be released at a future date and current rules are subject to change pending additional vulnerability information. For the most current rule information, please refer to your FireSIGHT Management Center or Snort.org.

Snort Rules

  • JBoss Server Vulnerabilities: 18794, 21516-21517, 24342-24343, 24642, 29909
  • Web Shell: 1090,21117-21140,23829,23830,27729-27732,27966-27968,28323,37245
  • Samas: 38279,38280, 38304,38360,38361


Additionally, Advanced Malware Protection (AMP) can help detect and prevent the execution of this malware on targeted systems.

Network Security encompasses IPS and NGFW. Both have up-to-date signatures to detect malicious network activity that this campaign exhibits.





----

Shared via my feedly reader


Sent from my iPhone

Dev report 1 on Xen Orchestra 5.0 [feedly]



----
Dev report 1 on Xen Orchestra 5.0
// Xen Orchestra

Hello everyone!

This is a blog post about our progress on Xen Orchestra 5.0, which is a complete rewrite of the web interface, from scratch (for those who wonder why, read this previous post about announcing Xen Orchestra 5.x).

Before releasing it, we want to reach 2 goals:

  • avoid any major feature regressions against 4.x
  • provide new UI possibilities to give you the best of XenServer

The first goal is obvious, but let me explain what we already have, despite it not being released yet.

This is work in progress and not a definitive design. But it will give you a good taste of what's coming!

XenServer the easy way

Because you will probably want to access various parts of the UI quickly, we decided to create a side menu. But that's not all: we also reduced the number of entries and added "usual actions":

This way, you can quickly create a VM, add an SR or host, and even import a VM.

The backup menu is also easier:

It's only in the New backup view that you choose between all our supported XenServer backup modes: rolling snapshots, basic backup, continuous delta backup, DR and continuous replication.

Better dashboard

Here is the new dashboard with more info than the current one (menu is collapsed):

On top of what 4.x offers, we already have:

  • total storage usage (on all VDI SRs)
  • number of alarm messages
  • number of pending tasks
  • number of users
  • top 5 SR usage in %

Enhanced VM view

The goal here is to provide a quick recap of the VM without giving too much info:

Here, you get:

  • number of vCPUs, RAM usage, number of networks, total disk usage
  • the last 10 minutes of activity for CPU, RAM, network and disks combined
  • relative start time, Xen virtualization mode, IP address and the distro
  • and finally tags (see below)

Tags

Tags are really useful for managing your infrastructure. We focused on this to give you the best UX with them:

Just click on the plus icon in the Tags section and it will open a text field:

Once validated with Enter, it creates a new text field:

This way you can enter multiple tags without leaving your keyboard!

Editable text

Just click on the VM description, title or whatever, and you can edit it:

You get a green check after a successful modification:

Revert your change just by hovering over the green check:

Better stats

Stats are great, and now you can even change the granularity: stats from the last 10 minutes, 2 hours, 1 week or even the last year!

Close up on the time selector:

Better console

The new console view allows you to keep control of VM usage thanks to the sparkline graphs:

Internationalization

We added support for other languages from the start. We already have a French translation, but others will come quickly (Portuguese, Chinese, and a lot more with community support).

The interface is switched instantly when you select the language, without refreshing anything.

Remember the stats? Change the language to French and now everything is translated, even the date format!

Conclusion

This is the first post about the new interface; more will come as soon as we can show you our results :)


----

Shared via my feedly reader


Sent from my iPhone

The Cloudcast #247 - How Do I Talk To An API? [feedly]



----
The Cloudcast #247 - How Do I Talk To An API?
// The Cloudcast (.NET)

Aaron and Brian talk to Kendrick "Kenny" Coleman (@kendrickcoleman; Developer Advocate @EMCcode) about the basics of interacting with APIs. They also discuss how the millennial generation is changing the media industry and if/how The Cloudcast should change.

Show Links:

Topic 1 - What are you up to these days?

Topic 2 - We need help. Everything is an API these days and we don't speak API. What are the basics we need to understand and the tools we need?

Topic 3 - Any good API tips and tricks for beginners?

Topic 4 - You were recently at a conference that discussed how the millennial generation is changing the media industry. What is changing, what do we need to know and how do we need to change the podcast?


----

Shared via my feedly reader


Sent from my iPhone

MaxScale 1.4.1 GA is available for download [feedly]



----
MaxScale 1.4.1 GA is available for download
// MariaDB blogs

Tue, 2016-04-12 16:06
Johan

We are pleased to announce that MaxScale 1.4.1 GA is now available for download!

If MaxScale is new to you, we recommend reading this page first.

MaxScale 1.4 brings:

  1. The Firewall Filter has been extended and can now be used for either black-listing or white-listing queries. In addition it is capable of logging both queries that match and queries that do not match.
  2. Client-side SSL has been available in MaxScale for a while, but it has been somewhat unreliable. We now believe that client-side SSL is fully functional and usable.

Additional improvements:

  • POSIX Extended Regular Expression Syntax can now be used in conjunction with qlafilter, topfilter and namedserverfilter.
  • Improved user grant detection.
  • Improved password encryption.
  • Compared to the earlier 1.4.0 Beta release, a number of bugs have been fixed.

    The release notes can be found here and the binaries can be downloaded from the portal.

    In case you want to build the binaries yourself, the source can be found at GitHub, tagged with 1.4.1.

    We hope you will download and use this stable release, and we encourage you to create a bug report in Jira for any bugs you might encounter.

    On behalf of the entire MaxScale team.


    About the Author


    Johan Wikman is a senior developer working on MaxScale at MariaDB Corporation. 


    ----

    Shared via my feedly reader


    Sent from my iPhone

    Database Firewall Filter in MaxScale 1.4.1 [feedly]



    ----
    Database Firewall Filter in MaxScale 1.4.1
    // MariaDB blogs

    Thu, 2016-04-14 08:27
    markusmakela

    New and Improved Functionality

    The recently released 1.4.1 version of MariaDB MaxScale contains a bundle of great improvements to the Database Firewall Filter, dbfwfilter. This article starts by describing the dbfwfilter module and how it is used. Next we'll find out what kinds of improvements were made to the filter in MaxScale 1.4.1 and we'll finish by looking at a few use cases for it.

    Here are the highlights of the new dbfwfilter functionality in the 1.4.1 release of MaxScale.

    • Configurable filter actions on rule match
      • Allow the query, block the query or ignore the match
    • Logging of matching and/or non-matching queries

    With these new features, you can easily implement various types of configurations including a dry-run mode where no action is taken but all matching and non-matching queries are logged.

    Later on we'll introduce the new features and explore how we can better secure our database environment by using these new features.

    What Is the Database Firewall Filter?

    The database firewall filter, dbfwfilter, is a module which acts as a firewall between the clients and the backend cluster. Similar to the iptables software found in most Linux based distributions, this module either allows or denies SQL queries based on a set of rules.

    A rule is defined by a small and simple syntax that describes the kind of content it matches. These rules can then be assigned to users to make sets of user and rule groups. For more details about the rule syntax, read the Database Firewall Filter Documentation.

    The dbfwfilter module allows you to control what kinds of queries are allowed. Because the filter understands the content that passes through it, it can prevent malicious attempts to execute SQL which can compromise your data.

    Here are a few examples of how the dbfwfilter can help improve the security of your database cluster; a rules sketch follows the list.

    • Block delete queries with no "WHERE" clause - preventing an attacker from mass-deleting data from tables and damaging customer data
    • Block select queries on certain tables (such as user data or customer data) with no "WHERE" clause - preventing an attacker from getting mass access to confidential user data
    • Only allow queries with certain columns on certain tables for a set of users, so these users will only have access to a subset of columns and will not be able to access any other data
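    As a sketch, rules along the lines of the bullets above might look like this (the column names are illustrative, and per-table scoping is simplified; check the rule syntax documentation before relying on it):

    rule no_unsafe_delete deny no_where_clause on_queries delete
    rule no_mass_select deny no_where_clause on_queries select
    rule sensitive_columns deny columns ssn salary
    users %@% match any rules no_unsafe_delete no_mass_select sensitive_columns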

    Configuring the Filter

    The best way to understand how the dbfwfilter works is to configure it for use. We start by defining the rules for the filter. We'll define a simple rule and apply it to all possible users. We already have MaxScale installed and configured for normal operation. For a good tutorial on setting up MaxScale, read the MaxScale Tutorial.

    The rule we'll create is a no_where_clause rule which matches if the query lacks a WHERE/HAVING clause. We'll also add an optional on_queries part to the rule, which allows us to limit the matching to updates only.

    rule protected_update deny no_where_clause on_queries update
    users %@% match any rules protected_update

    The first line defines the rule protected_update, while the second line applies this rule to all users. The match any means that any rule in the list will cause the query to be considered a match. Since we only have one rule, the value of match is not very important; the matching type allows you to combine simpler rules into a more complex one.

    The next step is to create a filter definition. The following filter definition uses the dbfwfilter module and defines the rules parameter, which tells the filter where its rules are. The rules we defined earlier have been saved to /home/markusjm/rules. The action parameter tells the filter what it should do when a query matches a rule. We'll set it to block so the filter blocks any query that matches a rule.

    [firewall-filter]
    type=filter
    module=dbfwfilter
    rules=/home/markusjm/rules
    action=block

    We will use the following service configuration.

    [RW Split Router]
    type=service
    router=readwritesplit
    servers=server1,server2,server3,server4
    user=maxuser
    passwd=maxpwd
    filters=firewall-filter

    Testing the Configuration

    After we've finished the configuration, we can start MaxScale and execute some queries. First we'll create a table for our test and insert some values into it.

    MySQL [test]> create table t1(id int);
    Query OK, 0 rows affected (0.41 sec)

    MySQL [test]> insert into test.t1 values (1), (2), (3);
    Query OK, 3 rows affected (0.08 sec)
    Records: 3  Duplicates: 0  Warnings: 0

    Next we'll try to update the values without defining a where clause.

    MySQL [test]> update t1 set id=0;
    ERROR 1141 (HY000): Access denied for user 'maxuser'@'127.0.0.1' to database 'test': Required WHERE/HAVING clause is missing.

    We can see that it was rejected because it matched the rule we defined. Let's try an update with a where clause.

    MySQL [test]> update t1 set id=0 where id=1;
    Query OK, 1 row affected (0.07 sec)
    Rows matched: 1  Changed: 1  Warnings: 0

    It works as expected. How about a select without a where clause?

    MySQL [test]> select * from t1;
    +------+
    | id   |
    +------+
    |    0 |
    |    2 |
    |    3 |
    +------+
    3 rows in set (0.00 sec)

    So our simple rule matches and works as expected. Next we'll show a simple use case where two dbfwfilter instances are combined to form a more complex rule set.

    Combining block and allow

    Once we have our basic setup, we can expand it by creating a second set of rules and a second filter definition. We'll then combine these two filters into one filter pipeline which blocks queries that match the rule we defined earlier and only allows queries that match our new rule set. We start by defining the new rules.

    rule safe_columns deny columns name email
    users %@% match any rules safe_columns

    This rule matches when either the name or email column is accessed. The rule is a simple one which allows us to restrict queries to a certain set of columns. We'll save the configuration in /home/markusjm/rules-whitelist and continue to configure the new filter definition.

    [whitelist-filter]
    type=filter
    module=dbfwfilter
    rules=/home/markusjm/rules-whitelist
    log_no_match=true
    action=allow

    The filter definition is similar to the one we defined before, apart from the different action value and the new log_no_match parameter. The filter will only allow queries that match the rules to be executed. In addition, all non-matching queries will be logged, so we'll know when an unexpected query is blocked.

    Once we've configured the second filter, we can combine them into a pipeline in the following way.

    [RW Split Router]
    type=service
    router=readwritesplit
    servers=server1,server2,server3,server4
    user=maxuser
    passwd=maxpwd
    filters=whitelist-filter|firewall-filter

    Now we can test how our new combined filters work. We'll test using a simple table with one row of data.

    MariaDB [(none)]> show create table test.t1\G
    *************************** 1. row ***************************
           Table: t1
    Create Table: CREATE TABLE `t1` (
      `name` varchar(60) DEFAULT NULL,
      `address` varchar(60) DEFAULT NULL,
      `email` varchar(120) DEFAULT NULL
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1
    1 row in set (0.00 sec)

    MariaDB [(none)]> select * from test.t1;
    +----------+---------------+-------------------+
    | name     | address       | email             |
    +----------+---------------+-------------------+
    | John Doe | Castle Hill 1 | johndoe@gmail.com |
    +----------+---------------+-------------------+
    1 row in set (0.00 sec)

    Let's try selecting name and email from the table.

    MariaDB [(none)]> select name, email from test.t1;
    +----------+-------------------+
    | name     | email             |
    +----------+-------------------+
    | John Doe | johndoe@gmail.com |
    +----------+-------------------+

    As expected, the query is successful. We can try to select only address from the table but we will be denied access.

    MySQL [(none)]> select address from test.t1;
    ERROR 1141 (HY000): Access denied for user 'maxuser'@'127.0.0.1'

    So only queries which target either the name or email column pass through the whitelist-filter we've configured. Next we can test if updates to name work.

    MySQL [(none)]> update test.t1 set name="No Name";
    ERROR 1141 (HY000): Access denied for user 'maxuser'@'127.0.0.1': Required WHERE/HAVING clause is missing.

    MySQL [(none)]> update test.t1 set name="No Name" where name="John Doe";
    Query OK, 1 row affected (0.11 sec)
    Rows matched: 1  Changed: 1  Warnings: 0

    MySQL [(none)]> select name from test.t1;
    +---------+
    | name    |
    +---------+
    | No Name |
    +---------+

    As we can see, the filter we previously configured still works, and by combining these two filters, we only allow queries that are in the set of allowed queries but not in the set of denied queries.
    Combining rule sets like this allows us to create rich sets of rules that all queries must conform to. Since we added the log_no_match parameter to the filter definition, we can see a log message with details about the non-matching query we executed.

    2016-04-02 16:21:42   notice : [RW Split Router] Query for '%@%' by maxuser@127.0.0.1 was not matched: select address from test.t1

    With this, we could implement a simple auditing mechanism at the cluster level which would allow us to detect unexpected queries and reveal information about the user who executed them.

    What does the future hold?

    In the near future, we're aiming to implement smarter firewalling functionality in MaxScale. The smart firewall aims for minimal configuration and maximal ease of use by automating rule generation, and is planned for inclusion in the next release of MaxScale.

    If you have any questions, feedback or great ideas, join us on maxscale@googlegroups.com for discussion about MaxScale. We also have the #maxscale IRC channel on FreeNode.

    About the Author


    Markus Mäkelä is a Software Engineer working on MaxScale. He graduated from Metropolia University of Applied Sciences in Helsinki, Finland.


    ----

    Shared via my feedly reader


    Sent from my iPhone

    Downloading MariaDB MaxScale binaries without registration [feedly]



    ----
    Downloading MariaDB MaxScale binaries without registration
    // MariaDB blogs

    Sat, 2016-04-16 16:17
    rasmusjohansson

    MariaDB MaxScale, the dynamic routing platform for MariaDB Server (and MySQL Server) had its first stable 1.0 GA release 15 months ago. Since then, the popularity of MariaDB MaxScale has grown exponentially. It has in many cases become a default piece of the architecture in clustered setups of MariaDB Server, in master-slave replication setups and in very large replication topologies making use of MariaDB MaxScale's Binlog Server functionality.

    MariaDB MaxScale has come far in a short time, and it's getting attention also from the point of view of how it's being distributed. Several active community members have pointed out that the MariaDB MaxScale binaries (and not just the source code) should be made available to the broader user community in a similar fashion to MariaDB Server. We want to address this by making the community version available as easily as possible. The binaries have therefore been made available from downloads.mariadb.com/files/MaxScale without registration.

    Please note that I wrote "the community version". That currently equals the latest and greatest version of MariaDB MaxScale, version 1.4.1 as of this writing. What it also implies is that we plan to release an Enterprise version of MariaDB MaxScale that will include additional functionality on top of the community version. We are working on the Terms of Use for that purpose. Stay tuned.

    It's now straightforward to download the MariaDB MaxScale community binaries. Get them here!


    About the Author


    Rasmus has worked with MariaDB since 2010 and was appointed VP Engineering in 2013. As such, he takes overall responsibility for the architecting and development of MariaDB Server, MariaDB Galera Cluster and MariaDB Enterprise.


    ----

    Shared via my feedly reader


    Sent from my iPhone

    Announcing the Black Belt Sessions at DockerCon 2016 [feedly]



    ----
    Announcing the Black Belt Sessions at DockerCon 2016
    // Docker Blog

    We are excited to announce the first talks in the curated Black Belt Track at DockerCon 2016! Attendees of this track will learn from technical deep dives that haven't been presented anywhere else by members of the Docker team and … Continued
    ----

    Shared via my feedly reader


    Sent from my iPhone