Friday, June 10, 2016

DevOps Handbook is coming! [feedly]



----
DevOps Handbook is coming!
// IT Revolution

DevOps Handbook: preview copy for each delegate at DevOps Enterprise Summit London 2016

As you likely know, I never learn more than during the DevOps Enterprise Summit, where we have technology leaders from large, complex organizations share their transformation stories. Part of the magic is that all the stories are in the form of 30-minute experience reports. We asked speakers to tell us about their organization and industry and then address these questions:

  1. What business problem were you trying to solve?
  2. Where did you start and why?
  3. What did you do?
  4. What were your outcomes?
  5. And what problems still remain?

It was these experience reports that formed the basis of many of the case studies in the DevOps Handbook. I'm proud to announce that the book has gone through editing and design, which means that every attendee at the DevOps Enterprise Summit in London will receive a 490-page print preview copy of the book!

Yes, that's right, the DevOps Handbook is done, and will launch later this year — but everyone who is at the London conference will get a custom print-run version. As an uncorrected proof, it will still contain some errors that will be fixed before the final release, but you'll be the first to actually see the book in its semi-final state.

I've kept a daily work log since 2010, and I looked for the first entry for the DevOps Handbook. It was February 15, 2011. Yes, that was over five years ago! The original plan was for the book to come out before The Phoenix Project, but there was so much we kept learning about how organizations are successfully putting DevOps principles and practices into place that it took far longer than planned.

Since 2011, my coauthors and I have put nearly 2,000 hours of work into creating the DevOps Handbook — as you can imagine, Jez Humble, Patrick Debois, John Willis and I are extremely excited that we're finally nearing production of the book!

Tickets are available for DevOps Enterprise Summit in London.
Enter HANDBOOK15 to receive 15% off the cost of the full-price ticket.

Register

Here are some other potentially interesting stats on the book:

  • 23 chapters
  • 48 case studies
  • 98,124 words
  • 48 images
  • 503 endnotes
  • 192 footnotes

And remember, there's only one way to get the galley of the book: attend the DevOps Enterprise Summit in London, which is only 21 days away!

If you have any interest in how large, complex organizations are creating amazing outcomes using DevOps principles and patterns, this is the conference for you!

Hope to see you there!

Gene

Use code HANDBOOK15 AND SAVE 15%!

DevOps Enterprise Summit London

June 30 & July 1, 2016

Hilton London Metropole

225 Edgware Road, London, UK



----

Shared via my feedly newsfeed



Shapeblue Security Advisory For CVE-2016-3085: Apache CloudStack Authentication Bypass Vulnerability [feedly]



----
Shapeblue Security Advisory For CVE-2016-3085: Apache CloudStack Authentication Bypass Vulnerability
// CloudStack Consultancy & CloudStack...

Overview

Apache CloudStack contains an authentication module providing "single sign-on" functionality via the SAML data format. Under certain conditions, a user could manage to access the user interface without providing proper credentials. As the SAML plugin is disabled by default, this issue only affects installations that have enabled and use SAML-based authentication.

Mitigation:
Users of Apache CloudStack using the SAML plugin should upgrade to one of the following versions, based on which release they are currently using: 4.5.2.1, 4.6.2.1, 4.7.1.1, or 4.8.0.1. These versions contain only security updates, and no other functionality change.

Versions affected:
CloudStack versions 4.5.0 and newer with SAML authentication enabled.

 

What is ShapeBlue Doing

ShapeBlue has analysed the impact of this issue on Apache CloudStack (ACS). The issue only affects users of CloudStack version 4.5.0 and newer who use the CloudStack SAML plugin. The vulnerability allows an attacker to bypass SAML authentication and log in as a SAML user using any non-empty password. ShapeBlue discovered the issue, reported it to the CloudStack security team and created the necessary security patches. We have since worked with the security team to create the security release(s) that all CloudStack operators are recommended to update to. It is also thought that this vulnerability affects commercial distributions of Apache CloudStack.
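
To make the class of flaw concrete, here is a deliberately simplified Python sketch. It is purely illustrative and is not CloudStack's actual code: the names and logic are invented to show how a login path that accepts any non-empty password for externally authenticated (SAML) accounts creates a bypass, and how the corrected behaviour differs.

    import hashlib
    import hmac

    class User:
        def __init__(self, name, auth_source, password_hash=b""):
            self.name = name
            self.auth_source = auth_source  # "LOCAL" or "SAML" (illustrative only)
            self.password_hash = password_hash

    def _hash(password):
        return hashlib.sha256(password.encode()).digest()

    def check_local_password(user, password):
        return hmac.compare_digest(user.password_hash, _hash(password))

    def authenticate_vulnerable(user, password):
        # Flawed pattern: a SAML-managed account reaching the password login
        # path is accepted whenever the password is non-empty, on the wrong
        # assumption that the identity provider already verified the user.
        if user.auth_source == "SAML":
            return len(password) > 0
        return check_local_password(user, password)

    def authenticate_fixed(user, password):
        # Corrected pattern: SAML-managed accounts never authenticate through
        # the local password path; only a validated SAML assertion is accepted.
        if user.auth_source == "SAML":
            return False
        return check_local_password(user, password)

    saml_user = User("alice", "SAML")
    print(authenticate_vulnerable(saml_user, "anything"))  # True: the bypass
    print(authenticate_fixed(saml_user, "anything"))       # False: rejected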

Security release upgrade procedure

Users of Apache CloudStack using the SAML plugin should upgrade to one of the security releases listed above (4.5.2.1, 4.6.2.1, 4.7.1.1, or 4.8.0.1), based on the release they are currently using. The RPM and Debian packages are available from ShapeBlue's CloudStack repositories; users can upgrade the cloudstack-management package from those repositories and restart their management server(s) to fix the issue.

Further information

For ShapeBlue support customers, please contact the support team for further information.

For other CloudStack users, please use the community mailing lists.

For users of commercial distributions of CloudStack, please contact your vendor.

 


----

Shared via my feedly newsfeed



X Marks the Spot: XenApp, XenDesktop & XenServer with Intel Xeon & HPE Moonshot! [feedly]



----
X Marks the Spot: XenApp, XenDesktop & XenServer with Intel Xeon & HPE Moonshot!
// Citrix Blogs

XenApp, XenDesktop and XenServer with Intel Xeon with Iris Pro Graphics powered by the HPE Moonshot m710x! Designing the right solution to solve business and technical challenges can sometimes feel like you're on a treasure hunt. In the IT world, the treasure you seek is often considered to be the one complete, all-up platform that can […]

  



----

Shared via my feedly newsfeed



Doctor, Doctor, Give Me the News … [feedly]



----
Doctor, Doctor, Give Me the News …
// Citrix Blogs

Now that we've announced the general availability of the latest major release of our virtualization platform, XenServer 7, we can finally talk about its significant new features and capabilities. The focus of this blog is the new XenServer Health Check and its integration with Citrix Insight Services (CIS). XenServer Health Check with Citrix Insight Services […]

  



----

Shared via my feedly newsfeed



Nominate an Awesome Community Chef [feedly]



----
Nominate an Awesome Community Chef
// Chef Blog

The Chef community is full of many awesome individuals who contribute and do exceptional things every day. Each year at ChefConf, individuals are awarded the Awesome Community Chef award. The Awesome Community Chef awards are a way for the community to recognize a few of the individuals who have made a dramatic impact and have helped further the cause.

Think about the people who have helped you succeed with Chef. Did they help you learn about Chef, build a tool that made Chef better, or do something awesome for the community?

We are planning to award the Awesome Community Chef award to three community members and one Chef employee this year.

Nominations will close on Thursday, June 23 at 12 PM (noon) Pacific Time.

Winners will be announced at ChefConf during the Awesome Chef Awards and Closing Ceremony, Wednesday, July 13, 2016 at 5:00PM.

Take a few minutes to nominate your favorite Awesome Community Chefs today.

Current Awesome Community Chefs

Previous awards were given in 2013, 2014, and 2015.

Nominate your favorite Awesome Community Chefs now!
----

Shared via my feedly newsfeed



Wednesday, June 8, 2016

“New” Citrix Best Practices 2.0 [feedly]



----
"New" Citrix Best Practices 2.0
// Citrix Blogs

It's been a couple years since I published the first "New Citrix Best Practices" article, so I wanted to publish another article for a couple reasons. The first is pretty obvious in that things change quickly in this industry – what we considered leading practices last year might not be anymore. Even I look back […]

  



----

Shared via my feedly newsfeed



Dev report 3 on Xen Orchestra 5.0 [feedly]



----
Dev report 3 on Xen Orchestra 5.0
// Xen Orchestra

Third dev report! You can check dev report 1 and dev report 2 if you missed them!

Let's see what's new on the performance side. Because we have some users willing to let us test, we could validate XO performance before a release. And that's very cool.

38 times faster

Our goal is to give you a UI which loads almost instantly, so we spent time optimizing our data model in xo-web to be blazing fast. And guess what? We did it!

A real example? The time needed to load the home view when you are coming from the VM view (it could be anywhere else in the application).

In milliseconds, in 2 cases:

  • with 50 VMs
  • and with 1200 VMs

Here is the loading time between XO 4 (current) and XO 5 (less is better):

[Chart: home view loading time in XO 4 vs XO 5, with 50 and 1,200 VMs]

Respectively 8 and 38 times faster!

In short: it doesn't depend on your infrastructure size anymore!

But that's not all: we also improved parts of xo-server to be more memory-efficient and faster. Even for larger infrastructures, the whole XOA system can still fit in 2 GB of RAM!

As you can see, this next release of Xen Orchestra will really be a huge leap in many ways!

XO 5 almost ready to take off


----

Shared via my feedly newsfeed



Planning for default changes in MariaDB Server 10.2 [feedly]



----
Planning for default changes in MariaDB Server 10.2
// MariaDB blogs

Wed, 2016-05-18 14:13
Colin Charles

MariaDB Server 10.2 has been a long time coming, as planning goes. We met in Amsterdam in October 2015 to start fleshing things out (and also managed a 10.1 GA then), and made a first alpha release in April 2016. If all goes well, 2016 will definitely see the GA release of MariaDB Server 10.2.

But this also means that it may be time to make changes in the server, and there is lively discussion on the maria-discuss/maria-developers mailing lists on some of these topics. In this post I'd like to bring your attention to removing reserved keywords, the syntax for aggregate stored functions, and default configuration changes in MariaDB Server 10.2.

One of the first discussions was started by developer Alexander Barkov, asking why UTC_TIME, UTC_DATE, and UTC_DATETIME are reserved keywords. The idea was to make them non-reserved keywords in MariaDB Server 10.2, and Peter Laursen and I started off with the idea that things should remain compatible with MySQL, maybe filing a bug to have them removed as reserved keywords there too. Sergei offered a good explanation as to why they were made reserved in the first place (i.e. a mistake), and in principle this made me OK with removing their reserved nature. Jean-François Gagné from Booking chimed in, suggesting that wider consensus would be a good idea -- hence this post! Will Fong, support engineer at MariaDB Corporation, cites this as a migration issue (and if this is the case, I'd like to see examples, so please feel free to drop a comment here). While Sergei believes this is a bikeshed colour issue, and the change won't happen in MariaDB Server 10.2, I obviously think it deserves more attention.
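
To see why reserved keywords can be a migration headache in practice, here is a minimal sketch in Python using the PyMySQL package. It assumes a locally running MariaDB server with made-up credentials and an invented table; it simply shows that an identifier clashing with a reserved keyword has to be backtick-quoted, which is exactly the kind of schema and query rewrite migrating applications face.

    import pymysql

    # Assumed: a local MariaDB server, a "test" database, and credentials
    # that are placeholders for this sketch.
    conn = pymysql.connect(host="localhost", user="app", password="secret",
                           database="test")
    try:
        with conn.cursor() as cur:
            try:
                # Rejected while UTC_TIME is a reserved keyword: the bare
                # identifier cannot be used as a column name.
                cur.execute("CREATE TABLE flights (utc_time DATETIME)")
            except pymysql.MySQLError as exc:
                print("unquoted identifier rejected:", exc)

            # Backtick-quoting the identifier works regardless, which is the
            # rewrite migrating users would have to make.
            cur.execute("CREATE TABLE flights (`utc_time` DATETIME)")
        conn.commit()
    finally:
        conn.close()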

For MariaDB Server 10.2, it's also worth noting that if we're focused on syntax, aggregate stored functions will require this decision soon, since they're part of Google Summer of Code 2016.

And while we're on planning MariaDB Server 10.2, Daniel Black has kicked off discussions on configuration changes to consider in MariaDB Server 10.2, with the attached Jira ticket, MDEV-7635 update defaults and simplify mysqld config parameters.

Looking forward to comments here, on the mailing lists, or even Jira tickets.

About the Author


Colin Charles has been the Chief Evangelist for MariaDB since 2009, with work ranging from speaking engagements to consultancy and engineering work around MariaDB. He lives in Kuala Lumpur, Malaysia, worked at MySQL from 2005, and has been a MySQL user since 2000. Before joining MySQL, he worked actively on the Fedora and OpenOffice.org projects. He's well known on the conference circuit, having spoken at many events over the course of his career.


----

Shared via my feedly newsfeed



MariaDB 10.1 Now Supports Amazon RDS [feedly]



----
MariaDB 10.1 Now Supports Amazon RDS
// MariaDB blogs

Fri, 2016-06-03 17:49
Jessica Taylor

Starting today, customers can launch MariaDB version 10.1 instances on Amazon RDS. They can also upgrade their existing Amazon RDS for MariaDB database instances from version 10.0 to 10.1 using either the console or API.

Since the launch of support for the open source MariaDB database in Amazon RDS in October 2015, thousands of customers have leveraged RDS to make it easy to set up, operate, and scale their MariaDB servers in the cloud.

MariaDB 10.1 is the latest major version release and offers a number of enhancements for better performance and scalability.

 

Some of the key new features in MariaDB 10.1 are:

 

  • XtraDB/InnoDB page compression

  • XtraDB/InnoDB data scrubbing

  • XtraDB/InnoDB defragmentation

  • Optimistic in-order parallel replication

  • ORDER BY optimization

  • WebScaleSQL patches

 

Amazon RDS for MariaDB 10.1 is available in all AWS regions. To learn more about Amazon RDS for MariaDB, please refer to the RDS documentation.

 

Learn more on our MariaDB AWS Partner Page


 


----

Shared via my feedly newsfeed



Open Source at Docker, Part 2: The Processes [feedly]



----
Open Source at Docker, Part 2: The Processes
// Docker Blog

The Docker open source project is among the most successful in recent history by every possible metric: number of contributors, GitHub stars, commit frequency, … Managing an open source project at that scale and preserving a healthy community doesn't come without challenges.

This post is the second of a 3-part series on how we deal with those challenges on the Docker Engine project. Part 1 was about the people, part 2 covers the processes.
 

Optimizing the right thing

The numbers we shared in this series' previous blog post demonstrate the scale at which we are operating, and the huge number of contributions that Docker receives from people who are neither maintainers nor employees. I believe one reason for that is that our processes are tailored for a great contributor experience first, and to optimize maintainer time (a very scarce resource!) second.




Contributors and maintainers have very different perspectives: the former have put a significant amount of effort into a pull request and want feedback in a timely manner, while the latter are battling to stay on top of an overwhelming flow of contributions (there is the typical "pets versus cattle" analogy to be made here). For both of these actors, visibility is key: understanding how far along a contribution is in the process, what the next step is, and who is responsible for it.

We improve visibility thanks to an extensive use of GitHub labels. In particular, we model the pull request reviewing workflow as an iterative process where every step is associated with a numbered label. The order of these steps is meant to avoid frustration for the contributor: for example, we don't want to have someone fix typos in the documentation unless we are reasonably confident the changeset will get merged (code and design approved).

 

[Image: the numbered pull request status labels]

 

Each pull request can only be in one of those states at a given point in time, and how we transition from one step to the next is pretty straightforward: for example, going from 2-code-review to 3-docs-review requires two maintainers to comment with "LGTM" (Looks Good To Me). But here again, it's the underlying trust and mutual respect that makes the project work smoothly while preserving a high level of quality. Participants' opinions are not taken lightly, and an established member of the community expressing legitimate concerns against a particular pull request will usually call for more discussion until consensus can hopefully be reached.
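
As a small illustration of how those numbered labels make the state of the queue visible, the sketch below uses GitHub's public issues API to count open pull requests per review stage. The two label names come from this post; everything else (the script itself, skipping authentication and pagination) is our own simplification rather than part of the project's tooling.

    import json
    import urllib.request

    REPO = "docker/docker"
    LABELS = ["2-code-review", "3-docs-review"]  # label names mentioned in the post

    for label in LABELS:
        url = ("https://api.github.com/repos/%s/issues"
               "?state=open&labels=%s&per_page=100" % (REPO, label))
        with urllib.request.urlopen(url) as resp:
            issues = json.loads(resp.read().decode("utf-8"))
        # The issues endpoint also returns pull requests; keep only those.
        prs = [item for item in issues if "pull_request" in item]
        print("%-15s %3d open pull request(s) on the first page" % (label, len(prs)))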

In addition to the reviewing workflow, we use a few special labels to keep track of pull requests that aren't making good progress and need maintainers' attention, or pull requests that are currently not passing tests.

 

[Images: the "failing CI" and "needs attention" status labels]

 

This is all about giving contributors and maintainers alike an understanding of progress at first sight. As with any other process in the project, its description is stored in the repository and managed as code: if you want to improve the pull request reviewing process, you'll have to send a pull request for it! And of course, feel free to copy our rules for your project (you can even easily replicate our labels).

 

Becoming a maintainer

I've already mentioned maintainers a lot in this blog series, but how can somebody become one? Well guess what: the rules are documented in a repository and managed as code.

We want to make it possible for anyone to become a maintainer, regardless of experience and the amount of time one can dedicate to the project. Hence, what we're looking at is regularity: it's about being an active member of the community for an extended period of time (over 3 months). The maintainers will notice that, and eventually they will start a vote on the mailing list to grant commit rights.

But being an active member of the community is not only about submitting code. It's also about helping to review other people's code, coaching them with their pull requests, answering questions, reproducing issues, … Really, the best way to become a maintainer is to start helping like one.




Finally, a fun fact is that even Docker employees must earn their maintainer rights. How so? Well, by being an active member of the community for an extended period of time and getting voted in by the other maintainers!
 

What's next

There is a lot more to say about our issue triage, our release process, and our maintainer meetings. Feel free to come ask questions on IRC if you are curious, or read through the documentation!

Maintainer activity, pull requests, issues, … All those things can be measured, some tasks can be automated, and this is exactly what the next and final post of the series will cover.





 

Learn More about Docker


----

Shared via my feedly newsfeed



Docker 101: Getting to Know Docker [feedly]



----
Docker 101: Getting to Know Docker
// Docker Blog

At Docker, we strive to create tools of mass innovation. But what exactly is Docker? And how can it benefit both your developers looking to build applications quickly and your IT team looking to manage the IT environment?

As part of our mission to educate practitioners on Docker and revolutionize the way that they build, ship and run their applications, Technical Evangelist Mike Coleman and I presented a Docker 101 webinar.

In this webinar, we discussed several introductory topics including:

  • The difference between containers and VMs
  • Key Docker terminology that beginners should familiarize themselves with
  • How Docker drives modern application initiatives taking place in the enterprise
  • An overview of the Docker Containers-as-a-Service platform
  • A live demo of deploying a website via Docker Cloud

At the end of Mike's presentation, we answered a few questions from the attendees. You can watch the webinar replay and read the answers from the Q&A section below:





Webinar Q&A

 

Q: What exactly is a container?

A: Containerization uses the kernel on the host operating system (Linux today, with Windows container support coming with Windows Server 2016) to run multiple root file systems. Each root file system is called a container. A container is a standard unit in which an application resides: it packages an application and everything it needs to run into one portable unit. Each container has its own processes, memory, devices and network stack. Containers are managed by the Docker Engine, which is responsible for creating and managing containers and can run in any physical, virtual or cloud environment.

 

Q: How is a container different from a VM?

A: It's important to realize that containers are NOT VMs. Containers leverage shared resources, are lighter weight, have faster instantiation, do not require a hypervisor and provide greater portability. They are ideal for microservices environments. Because of this, they can reduce costs for the enterprise (no hypervisor licensing costs and potentially more efficient hardware utilization) while accelerating application development.

VMs use isolated resources, require a full OS, take several minutes to boot, are hypervisor-based and are typically used for monolithic app architectures.

 

Q: What exactly is an image registry, and what are the Docker options?

A: A registry is where a Docker image is stored and secured. A Docker image is a snapshot of an application and serves as the basis of a container. Once an image is instantiated by the Docker Engine via the docker run command, the engine spins up a container: it starts a new process based on the image and adds a read/write layer on top of the image to create the container.

Here at Docker there are three registry options: the open source registry, Docker Hub, or Docker Trusted Registry. Docker Hub is our SaaS-hosted commercial registry, and Docker Trusted Registry (DTR) is our commercial on-premises registry.
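
As a hedged illustration of that registry-to-image-to-container flow, here is a minimal sketch using the Docker SDK for Python. It assumes the docker Python package is installed and a local Docker Engine is running; the nginx image is just an example.

    import docker

    client = docker.from_env()                         # talk to the local engine

    # Fetch the image from the registry (Docker Hub by default).
    image = client.images.pull("nginx", tag="latest")
    print("pulled image:", image.tags)

    # The engine starts a new process from the image and adds a read/write
    # layer on top of it; that running instance is the container.
    container = client.containers.run("nginx:latest", detach=True)
    print("running container:", container.short_id)

    container.stop()
    container.remove()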

 

Q: How can one container be aware of the other containers being run?

A: Containers themselves are isolated. With the help of the 200,000-person-strong Docker community, we have created Docker Networking. At its most basic level, Docker Networking enables containers to talk to one another. We support host-only networks as well as multi-host networks, where containers can talk across hosts. You can learn more about Docker Networking here.
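
Here is a hedged sketch of the basic single-host case using the Docker SDK for Python: create a user-defined bridge network and let one container reach another by name. It assumes the docker Python package and a local Docker Engine; the images and names are illustrative only.

    import docker

    client = docker.from_env()
    net = client.networks.create("demo-net", driver="bridge")

    web = client.containers.run("nginx:latest", name="web",
                                network="demo-net", detach=True)

    # On a user-defined network, Docker's embedded DNS resolves "web" to the
    # first container, so this second container can reach it by name.
    status = client.containers.run(
        "curlimages/curl",
        ["-s", "-o", "/dev/null", "-w", "%{http_code}", "http://web"],
        network="demo-net", remove=True)
    print("HTTP status from web:", status.decode())

    web.stop()
    web.remove()
    net.remove()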

 

Q: How can we deploy apps into cluster environments with Docker?

A: You can use Docker Swarm for orchestration of your dockerized applications. Docker Swarm is a scalable and production-ready Docker engine clustering and scheduling tool. It allows you to create clusters of Docker nodes and deploy apps across the various nodes within your environment. Swarm has built-in high availability, so if a node goes down, containers can automatically fail over to another node.

You can utilize different deployment strategies as well. For instance, you can use the "spread" method to evenly spread your code across the nodes within your environment. The "random" method will deploy code to random nodes within your environment. The "binpack" strategy allows you to fully load a node before deploying to another one.

NOTE: Docker Swarm is embedded with Docker Universal Control Plane (UCP), giving UCP the ability to manage production nodes and leverage Docker APIs.

Docker Compose is another tool that you can leverage for orchestration of your applications. Compose allows you to deploy multi-container applications into your environment.

 

Q: I'm joining a startup where we want to go all-in with Docker, and I as a person need to get everything done. How do I go from web developer to Docker DevOps hero?

A: Love this question. We are always happy to hear when companies are going all in on Docker. So first off, thank you. The good news is that you are already on your way. The best thing to do is come up to speed on the Docker technology. We have several Docker docs that can walk you through getting started with Docker.

In addition, here is our "Making the Journey to DevOps" whitepaper. Also take a look at our "Docker and the 3 Ways of DevOps" whitepaper. They both provide guidance on how Docker enables DevOps.

 

Q: What is the difference between OSS and Docker's commercial offerings?

A: It really comes down to your team. Our OSS provides the tools that "do it yourself" teams can use to build, ship and run their applications. We encourage users to leverage our OSS technology when looking to build a container-based platform themselves.

However, if your team is looking for an end-to-end Container-as-a-Service (CaaS) platform that is built by Docker and that you can leverage rather than building it yourselves, we have our commercial options. The commercial options offer key enterprise features (LDAP/AD integration, web UI, SLAs), security features (e.g. role-based access controls, image security scanning, on-premises deployment), as well as support from the Docker team. Docker Cloud and Docker Datacenter are our two commercial offerings. Docker Cloud is built for smaller teams in need of a CaaS platform that is hosted in the cloud. Docker Datacenter is ideal for larger mid-size to enterprise teams that require an on-premises CaaS platform.

 

Q: How can devs log into a cluster of containers to access logs or to config?

A: So you wouldn't actually log into the container to access logs. What you would do is push logs out to something like an ELK stack so you can look at them. It's essentially the same way that you would monitor today in an enterprise deployment. Docker Universal Control Plane also has the ability to push logs out to external logging services.
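
For completeness, here is a hedged sketch of reading a container's logs through the engine API with the Docker SDK for Python, the same interface a log-shipping agent would use before forwarding entries to something like an ELK stack. It assumes the docker Python package and a local Docker Engine; the image and container name are illustrative.

    import docker

    client = docker.from_env()
    container = client.containers.run("nginx:latest", name="web-demo", detach=True)

    # Fetch the most recent log lines; an agent would stream these and
    # forward them to a central store such as an ELK stack.
    print(container.logs(tail=20).decode("utf-8", errors="replace"))

    container.stop()
    container.remove()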

In terms of configuration: configuration is defined by the Dockerfile. Containers are stateless, so you simply kill or restart them. We recommend you keep the Dockerfile in something like GitHub.

 

Q: Can you speak more about load balancing?

A: Sure. For load balancing, Docker users can leverage a load balancer like HAProxy (which can run as a container) or NGINX. You could use these in combination with Interlock. Interlock monitors Docker events and updates NGINX or HAProxy when something is added to the load balancer. If a container is killed, Interlock also notifies the load balancers, and the load balancer will then remove the workload.

 

Q: Is there a way to learn about Docker from a professional standpoint?

A: Absolutely. We offer Docker Training, which will help teach you the skills you need to become a Docker professional and start helping your company reduce costs and accelerate the application development process.

 

As companies embrace DevOps, move to microservices and migrate to the cloud, we hope that they will consider Docker as their platform of choice.

Get started with Docker today by installing the tools.

 


 

Learn More about Docker


----

Shared via my feedly newsfeed



Open Source at Docker, Part 1: The People [feedly]



----
Open Source at Docker, Part 1: The People
// Docker Blog

The Docker open source project is among the most successful in recent history by every possible metric: number of contributors, GitHub stars, commit frequency, etc. Managing an open source project at that scale and preserving a healthy community doesn't come without challenges.

This post is the first of a 3-part series on how we deal with those challenges on the Docker Engine project, starting with the most important aspect of all: the people.

 

Open Source Culture

Do you know what Docker's IPv6 support, user namespaces, or the recently added docker update command have in common? They have all been contributed by members of the community.

There are as many open source cultures as there are open source projects, and the spectrum is extremely wide, ranging from "projects with open code" to projects that foster and encourage outside participation.

One of the things that I love the most about the Docker project is how much effort maintainers put into creating a welcoming environment. One part of it is defining the right processes supported by the right tooling (more on that in the upcoming posts in this series), but far more importantly, this is about being helpful, positive, and supportive.

For every pull request, Docker maintainers will do their best to "reach a yes". Of course we don't merge everything we receive, but we do merge 80% of pull requests! An important reason for that is that whether you are unfamiliar with git, with Go, or with Docker itself, maintainers will help you get to the point where we can push the green button. We know how intimidating it can be as a first-time contributor to open source because we have all been there: we hope we can encourage more people to become open source citizens this way, whether on our projects or on others.

 

The Shape of the Community

It won't come as a surprise to anyone that a fast-growing open source project with that much attention from the industry naturally attracts commercial interests, and that the Docker project is backed by the company Docker Inc. In that context, culture and respect are essential to preserve a healthy community and a welcoming environment for professionals and hobbyists alike.

And the numbers are here to prove it! In the past year, 58% of pull requests submitted to the Docker Engine were authored by people who are neither maintainers nor Docker employees, 12% by maintainers working for other companies, and 30% by Docker employees themselves.

[Chart: breakdown of Docker Engine pull request authors]

 

Putting It All Together

This is how the Docker Engine is made:

 

[Image: how the Docker Engine is made]

 

So really, the project would be nothing without its community, and although the 1,962 people who have contributed to the project so far would all deserve their picture on this post, we can start by giving a shout-out to some of our outstanding maintainers, past and present:

 

  • Aaron Lehmann (@aaronlehmann)
  • Alexander Morozov (@LK4D4)
  • David Calavera (@calavera)
  • Madhu Venugopal (@mavenugo)
  • Lei Jitang (@coolljt0725)
  • Morgan Bauer (@MHBauer)
  • Brian Goff (@cpuguy83)
  • Mary Anthony (@moxiegirl)
  • Michael Crosby (@crosbymichael)
  • Antonio Murdaca (@runcom)
  • Derek McGowan (@dmcgowan)
  • Stephen Day (@stevvooe)
  • Daniel Nephin (@dnephin)
  • Sebastiaan Van Stijn (@thaJeztah)
  • Doug Davis (@duglin)
  • Tianon Gravi (@tianon)
  • Erik Hollensbe (@erikh)
  • Tibor Vass (@tiborvass)
  • Eric Windisch (@ewindisch)
  • Tonis Tiigi (@tonistiigi)
  • Phil Estes (@estesp)
  • Uncle Jack (@unclejack)
  • Arnaud Porterie (@icecrime)
  • Vincent Batts (@vbatts)
  • Jessie Frazelle (@jfrazelle)
  • Vincent Demeester (@vdemeester)
  • John Howard (@jhowardmsft)
  • Victor Vieux (@vieux)
  • Justin Cormack (@justincormack)
  • Vish Kannan (@vishh)


 

Even with the most amazing maintainers, some amount of process is required to keep things going smoothly given this level of activity, and that is what the next post in this series will focus on. In the meantime, if you want to join the fun, check out one of our numerous repositories, read our contributing guides, or come chat on Freenode IRC in #docker-dev!





 

Learn More about Docker


----

Shared via my feedly newsfeed

