Tuesday, May 5, 2015

CoreOS State of the Union at CoreOS Fest

At CoreOS Fest we have much to celebrate with the open source community. Today over 800 people contribute to CoreOS projects and we want to thank all of you for being a part of our community.

We want to take this opportunity to reflect on where we started with CoreOS Linux. Below, we go into depth about each project, but first, a few highlights:

  • We've now shipped CoreOS Linux images for 674 days, since the CoreOS epoch on July 1, 2013.
  • We've rolled out 13 major releases of the Linux kernel from 3.8.0, released in February 2013, to the 4.0 release in April 2015.
  • In that time, we have tagged 329 releases of our images.
  • 500+ projects on GitHub mention etcd, including major users like Kubernetes.

CoreOS Linux

Our namesake project, CoreOS Linux, started with the idea of continuous delivery of a Linux operating system. Best practice in the industry is to ship applications regularly to get the latest security fixes and newest features to users – we think an operating system can be shipped in a similar way. And for nearly two years, since the CoreOS epoch on July 1, 2013, we have been shipping regular updates to CoreOS Linux machines.

In a way, CoreOS Linux is a kernel delivery system. The alpha channel has rolled through 13 major releases of the Linux kernel from 3.8.0 in February 2013 to the recent 4.0 release in April 2015. This doesn't include all of the minor patch releases we have bumped through as well. In that time we have tagged 329 releases of our images. To achieve this goal, CoreOS uses a transactional system so upgrades can happen automatically.

[Image: CoreOS Linux stats and community, shared at CoreOS Fest]

Community feedback has been incredibly important throughout this journey: users help us track down bugs in upstream projects like the Linux kernel, give us feedback on new features, and flag regressions that are missed by our testing.

A wide variety of companies are building their products and infrastructure on top of CoreOS Linux, including many participants at CoreOS Fest:

  • Deis, a project recently acquired by Engine Yard, spoke yesterday on "Lessons Learned From Building Platforms on Top of CoreOS"
  • Mesosphere DCOS uses CoreOS by default, and we are happy to have them sponsor CoreOS Fest
  • Salesforce Data.com spoke today on how they are using distributed systems and application containers
  • Coinbase presented a talk today on "Container Management & Analytics"

etcd

We built CoreOS Linux with just a single-host use case in mind, but we wanted people to trust CoreOS to update their entire fleet of machines. To solve this problem of automated yet controlled updates across a distributed set of systems, we built etcd.

etcd was initially created to provide an API-driven distributed "reboot lock" to a cluster of hosts, and it has been very successful serving this basic purpose. But over the last two years, adoption and usage of etcd have exploded: today it is used as a key part of projects like Google's Kubernetes, Cloud Foundry's Diego and Mailgun's Vulcan, as well as many custom service discovery and master election systems.
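
To make the original "reboot lock" use case concrete, here is a minimal sketch of the idea against etcd's v2 HTTP keys API, using only the Go standard library. The endpoint, key name, and holder value are illustrative assumptions, not the actual implementation used by CoreOS update tooling.

```go
// Minimal reboot-lock sketch: at most one machine may hold the lock key
// at a time, so only one machine in the cluster reboots to update.
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// Assumed local etcd endpoint and an illustrative key name.
const etcdURL = "http://127.0.0.1:2379/v2/keys/cluster/reboot-lock"

// tryAcquire atomically creates the lock key only if it does not already
// exist (prevExist=false), so the lock is granted to exactly one caller.
func tryAcquire(holder string) (bool, error) {
	body := url.Values{"value": {holder}}.Encode()
	req, err := http.NewRequest("PUT", etcdURL+"?prevExist=false", strings.NewReader(body))
	if err != nil {
		return false, err
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	// 200/201 means we created the key; 412 means another machine holds it.
	return resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusCreated, nil
}

// release deletes the key only if we still own it (compare-and-delete).
func release(holder string) error {
	req, err := http.NewRequest("DELETE", etcdURL+"?prevValue="+url.QueryEscape(holder), nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	resp.Body.Close()
	return nil
}

func main() {
	ok, err := tryAcquire("machine-1")
	if err != nil {
		fmt.Println("etcd unreachable:", err)
		return
	}
	if !ok {
		fmt.Println("another machine holds the reboot lock; try again later")
		return
	}
	fmt.Println("holding the reboot lock; safe to update and reboot")
	// ...apply the update and reboot; after coming back up, release the lock:
	_ = release("machine-1")
}
```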

At CoreOS Fest we saw demonstrations of a PostgreSQL master election system built by Compose.io and a MySQL master election system built by HP, as well as a discussion by Yodlr about how they use etcd for their internal microservice infrastructure. With feedback from all of these etcd users, we are planning an advanced v3 API and a next-generation disk-backed store, and we are writing punishing new long-running tests to ensure etcd remains a highly reliable component of distributed infrastructure.
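
Master election systems like those mentioned above typically combine etcd's compare-and-swap with a TTL: the leader keeps refreshing a key, and if it dies the key expires and a new election begins. Below is a compact, hypothetical sketch of that pattern over the v2 HTTP API; the key name, TTL, and endpoint are made up for illustration.

```go
// TTL-based leader election sketch against etcd's v2 keys API.
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"strings"
	"time"
)

// Illustrative election key; each service would use its own.
const electionKey = "http://127.0.0.1:2379/v2/keys/service/mysql/leader"

// campaign tries to become leader by creating the key with a TTL; on
// success it keeps refreshing the TTL with a compare-and-swap, so the key
// expires (and a new election starts) if this process dies.
func campaign(name string, ttl int) {
	for {
		if putKey(name, ttl, "prevExist=false") {
			fmt.Println(name, "is now the leader")
			for putKey(name, ttl, "prevValue="+url.QueryEscape(name)) {
				time.Sleep(time.Duration(ttl/2) * time.Second) // heartbeat
			}
			fmt.Println(name, "lost the leadership")
		}
		time.Sleep(time.Duration(ttl) * time.Second) // wait for the key to expire
	}
}

// putKey issues a conditional PUT and reports whether etcd accepted it.
func putKey(value string, ttl int, cond string) bool {
	body := url.Values{"value": {value}, "ttl": {fmt.Sprint(ttl)}}.Encode()
	req, _ := http.NewRequest("PUT", electionKey+"?"+cond, strings.NewReader(body))
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusCreated
}

func main() {
	campaign("db-0", 30)
}
```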

[Image: etcd stats and community, shared at CoreOS Fest]

fleet on top of etcd

After etcd, we built fleet, a scheduler system that ties together systemd and etcd into a distributed init system. fleet can be thought of as a logical extension of systemd that operates at the cluster level instead of the machine level.

The fleet project is low level and designed as a foundation for higher order orchestration: its goal is to be a simple and resilient init system for your cluster. It can be used to run containers directly and also as a tool to bootstrap higher-level software like Kubernetes, Mesos, Deis and others.

For more on fleet, see the documentation on launching containers with fleet.
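
As a rough illustration of what a cluster-level unit looks like, the sketch below embeds a hypothetical systemd unit with an [X-Fleet] scheduling section and writes it to disk, ready to be submitted with fleetctl start. The unit contents and names are illustrative, not copied from the fleet documentation.

```go
// Writes an example fleet-schedulable systemd unit to disk.
package main

import (
	"fmt"
	"os"
)

const helloUnit = `[Unit]
Description=Hello World service
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker kill hello
ExecStartPre=-/usr/bin/docker rm hello
ExecStart=/usr/bin/docker run --name hello busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
ExecStop=/usr/bin/docker stop hello

[X-Fleet]
# Scheduling hint: never place two copies of this unit on the same machine.
Conflicts=hello*.service
`

func main() {
	if err := os.WriteFile("hello.service", []byte(helloUnit), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The unit is then scheduled onto the cluster with: fleetctl start hello.service
	fmt.Println("wrote hello.service; submit it with: fleetctl start hello.service")
}
```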

[Image: fleet stats and community, shared at CoreOS Fest]

rkt

The youngest CoreOS project is rkt, a container runtime, which was launched in December 2014. rkt has security as a core focus and was designed to fit into the existing Unix process model so that it integrates well with tools like systemd and Kubernetes. rkt was also built to support the concept of pods: a group of processes run together in a shared context, sharing resources like the local network and IPC.
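
For a sense of what a pod looks like on disk, here is an illustrative Go sketch that emits a minimal manifest in the style of the appc PodManifest schema that rkt consumes; the app names, image names, version string and digests are made-up placeholders.

```go
// Emits a minimal, illustrative pod manifest describing two apps that
// run together and share the pod's network and IPC.
package main

import (
	"encoding/json"
	"fmt"
)

type image struct {
	Name string `json:"name"`
	ID   string `json:"id"`
}

type app struct {
	Name  string `json:"name"`
	Image image  `json:"image"`
}

type podManifest struct {
	ACVersion string `json:"acVersion"`
	ACKind    string `json:"acKind"`
	Apps      []app  `json:"apps"`
}

func main() {
	pod := podManifest{
		ACVersion: "0.5.1", // placeholder spec version
		ACKind:    "PodManifest",
		Apps: []app{
			// Both apps land in the same pod, so they can reach each other
			// over localhost and share IPC.
			{Name: "web", Image: image{Name: "example.com/web", ID: "sha512-..."}},
			{Name: "log-collector", Image: image{Name: "example.com/log-collector", ID: "sha512-..."}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```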

Where is rkt today? At CoreOS Fest we discussed how rkt was integrated into Kubernetes, and showed this functionality in a demo yesterday. rkt is also used in Tectonic, our new integrated container platform. Looking forward, we are planning improved UX around trust and image handling tools, advanced networking capabilities, and splitting the stage1 out from rkt to support other isolation mechanisms like KVM.

[Image: rkt stats and community, shared at CoreOS Fest]

Container networking

Containers are most useful when they can interact with other systems over the network. Today in the container ecosystem we have some fairly basic patterns for network configuration, but over time we will need to give users the ability to configure more complex topologies. CNI (Container Network Interface) defines the API between a runtime like rkt and the external plugins that actually join a container to a network. Our intention with CNI is to develop a generic networking solution supporting a variety of tools, with reusable plugins for different backend technologies like macvlan, ipvlan, Open vSwitch and more.
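
As a rough sketch of that plugin interface, the example below shows how a runtime might hand a network configuration to an external plugin binary: the config goes in on stdin and the container details travel in CNI_* environment variables. The plugin directory, subnet, and container ID are assumptions for illustration.

```go
// Sketch of a runtime invoking a CNI plugin to add a container to a network.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// A network configuration a runtime like rkt might load from disk.
// "type" names the plugin binary to execute (here a bridge plugin with
// host-local IPAM); the network name and subnet are made-up examples.
const netConf = `{
  "name": "mynet",
  "type": "bridge",
  "ipam": {
    "type": "host-local",
    "subnet": "10.10.0.0/16"
  }
}`

// addContainerToNetwork execs the plugin with CNI_COMMAND=ADD and returns
// whatever JSON result the plugin prints (typically the assigned IP).
func addContainerToNetwork(pluginDir, containerID, netnsPath string) ([]byte, error) {
	cmd := exec.Command(pluginDir + "/bridge") // plugin binary matching "type"
	cmd.Stdin = bytes.NewBufferString(netConf)
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND=ADD",
		"CNI_CONTAINERID="+containerID,
		"CNI_NETNS="+netnsPath,
		"CNI_IFNAME=eth0",
		"CNI_PATH="+pluginDir,
	)
	return cmd.Output()
}

func main() {
	out, err := addContainerToNetwork("/usr/lib/cni", "example-container", "/var/run/netns/example")
	if err != nil {
		fmt.Fprintln(os.Stderr, "plugin failed:", err)
		return
	}
	fmt.Printf("plugin result: %s\n", out)
}
```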

flannel is another important and useful component in container network environments. In our future work with flannel, we'd like to introduce a flannel server, integrate it into Kubernetes and add generic UDP encapsulation support.

Ignition: Machine Configuration

Ignition is a new utility for configuring machines on first boot. It provides mechanisms similar to coreos-cloudinit, but applies its configuration earlier in the boot process, before the system has fully come up. By configuring the system this early, problems like ordering around network configuration are more easily solved. Just like coreos-cloudinit, Ignition will also be able to mark services to start on boot and configure user accounts.

Ignition is still under heavy development, but we are hoping to be able to start shipping it in CoreOS in the next couple of months.

Participate!

We encourage all of you, as users of our systems, to continue having conversations with us. Please share ideas and tell us what is working well, what may not be working well, and how we can continue to have a useful feedback loop. In the GitHub repo for each of these projects, you can find a CONTRIBUTING.md and a ROADMAP.md that outline how to get started and where the project is going. Thank you to our contributors!

We will also have replays of the talks available at a later date, which will include a demo of Ignition and more.

