Wednesday, August 16, 2017

Install OpenVAS for Broad Vulnerability Assessment [feedly]

Install OpenVAS for Broad Vulnerability Assessment
https://null-byte.wonderhowto.com/how-to/install-openvas-for-broad-vulnerability-assessment-0179318/

-- via my feedly newsfeed

OpenVAS is a powerful vulnerability assessment tool. Forked from Nessus after Nessus became a proprietary product, OpenVAS stepped in to fill the niche. It really shines for information gathering on large networks, where manual scanning to establish a foothold can be time-consuming. It's also helpful for administrators who need to identify potential security issues on a network. In this article, I will demonstrate the installation and configuration of OpenVAS, the Open Vulnerability Assessment System, in Kali Linux. Step 1: We're Going to Need a Shell for This. The first step is to... more
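
For readers who want to follow along, the usual Kali flow is only a few commands. This is a minimal sketch assuming a 2017-era Kali install where the openvas metapackage is available; package and service names may differ on other distributions:

    sudo apt-get update
    sudo apt-get install -y openvas   # scanner, manager, and the Greenbone web UI
    sudo openvas-setup                # sync vulnerability feeds, create the admin user
    sudo openvas-start                # launch the services
    # Greenbone Security Assistant then listens on https://127.0.0.1:9392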

Chef Automate Release – August 2017 [feedly]

Chef Automate Release – August 2017
https://blog.chef.io/2017/08/15/chef-automate-release-august-2017/

-- via my feedly newsfeed

Last week, Chef announced Chef Automate 1.6, which provides significant new capabilities to help organizations reduce risk, improve efficiency, and increase speed. Headline features of the release include new capabilities to make it easier to detect and act on compliance issues, improved data handling to reduce storage requirements by 20%, and notifications to help teams respond quickly to critical problems.

Compliance Features to Help Detect & Correct Issues

Organizations often use Chef Automate to gain visibility into compliance status across the fleet. Updates in 1.6 put more power in operators' hands to assess current status, satisfy reporting requirements, and take corrective action. All compliance profiles have been updated with build numbers to make it easier to apply precisely the set of controls needed. A new filter allows users to search and filter by control, enabling more granular insight into status. And users can now see detailed compliance status and history by node. Importantly, Chef Automate 1.6 adds the ability to export reports to CSV, making it easier to satisfy audit requirements.

Improved Data Handling to Support Growth

Chef Automate is used in a variety of demanding environments, including fleets of more than 50,000 nodes. In the 1.6 release, we've improved data handling in Chef Automate and upgraded to the latest version of Elasticsearch. As a result, users can expect a 20% reduction in on-disk index size for converge and compliance data, improving capacity for growth and helping drive down costs.

Notifications for Faster Response

With the 1.6 release, we have made notifications available as an open beta program. Chef Automate now supports simple configuration of Slack or webhook notifications for Chef client run failures and critical compliance control failures. This capability helps shorten the detect-and-correct cycle to reduce security and compliance risk while maintaining application availability and performance.
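
Automate's notifier itself is configured in the product, but the underlying mechanism is an ordinary webhook POST. As a hedged illustration only (the webhook URL and node details below are placeholders), this is what a Slack incoming-webhook notification for a failed client run looks like:

    WEBHOOK_URL="https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder
    curl -X POST -H 'Content-Type: application/json' \
      -d '{"text":"Chef client run FAILED on node web01"}' \
      "$WEBHOOK_URL"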

Learn More

The post Chef Automate Release – August 2017 appeared first on Chef Blog.

Build your IT automation and DevOps skills this summer [feedly]

Build your IT automation and DevOps skills this summer
https://blog.chef.io/2017/08/14/build-your-it-automation-and-devops-skills-this-summer/

-- via my feedly newsfeed

The dog days of summer are upon us, making it a great time to stay inside with the air conditioning and focus on developing your IT automation and DevOps skills. While planning this blog post, I came across a quote from Bob Melk, President of Dice:

"Skills that were used a year ago may not be as prominent today; skills that are relevant today will evolve tomorrow. This creates a marketplace where both tech professionals and employers must keep their fingers on the pulse of skills training and demand."

Skill building for Career Growth

Here at Chef, we are staying on top of changing tech skills by providing on-demand training modules on Learn Chef Rally. This learning site is full of content and resources to help you develop the skills you need to grow your career.

Developing your DevOps skills and building a career around IT automation has many benefits. This year, DevOps Engineer is listed as #2 on the Glassdoor list of "best jobs of the year" with a median base pay of $110,000. Yowzer! But wait, it gets even better. According to a recent Dice.com salary survey, being proficient with Chef brings in an average annual salary of $112,523. The reports also indicate that there is strong demand for those skilled in DevOps and Chef.

Learn new skills on Learn Chef Rally

With those numbers in mind, and your motivation to learn new skills, the Learn Chef Rally site is the place to go. Create an account and log in each time you visit the site, and we'll help you monitor your progress and award you badges every time you complete a track.

Here are a few tracks I recommend to get started:

Infrastructure Automation

In this track you will discover how the test and repair approach enables you to turn infrastructure into code and serve it up quickly. You will explore the Chef basics and learn to configure a system using a mix of resources, recipes, and cookbooks.
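
If you have never converged a node before, the basics look something like this. A minimal sketch, assuming the ChefDK is installed, mirroring the single-resource exercises in the track:

    # write a one-resource recipe and converge it locally
    cat > hello.rb <<'EOF'
    file '/tmp/motd' do
      content 'hello world'
    end
    EOF
    sudo chef-client --local-mode hello.rb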


Local Development and Testing

This track teaches you to find errors in cookbooks by testing them on local machines. You will learn to set up a virtual environment, develop code, and use every tool in the kitchen to ensure that everything works.
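
The tool at the heart of this track is Test Kitchen. A hedged sketch of the basic loop, assuming a cookbook with a default .kitchen.yml:

    kitchen list       # show the configured test instances
    kitchen converge   # spin up a local VM and apply the cookbook
    kitchen verify     # run the tests against it
    kitchen destroy    # tear the instance down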


Integrated Compliance

Chef Automate now features extensive compliance automation capabilities and powerful reporting features. In this track you will use continuous automation to automatically detect and remediate compliance issues. Put your InSpec knowledge to the test and try your hand at ensuring that a service is HIPAA-compliant.
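
For a flavor of what the InSpec exercises look like, here is a minimal sketch; the host name and user are placeholders, and the single control below is illustrative rather than part of the track:

    # one InSpec control, executed against a remote node
    cat > ssh_check.rb <<'EOF'
    describe sshd_config do
      its('PermitRootLogin') { should eq 'no' }
    end
    EOF
    inspec exec ssh_check.rb -t ssh://admin@web01 --sudo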


Try Chef Automate

Not ready for a deep-dive learning experience but want an overview of Chef Automate? In this module, you'll get Chef Automate running in 3 steps. Once you are up and running, you will explore what Chef Automate can do, scan a few systems for compliance, and check whether they adhere to the recommended guidelines.

Start Learning

Don't let this summer slip away without taking steps to develop your skills and move your career forward with Chef and DevOps. As an added incentive, prove your skills with Chef Certification by August 31st and save 20% on exam fees with discount code SUMMERCERT.

The post Build your IT automation and DevOps skills this summer appeared first on Chef Blog.

Continuous Automation at Texas A&M University [feedly]

Continuous Automation at Texas A&M University
https://blog.chef.io/2017/08/11/continuous-automation-texas-am-university/

-- via my feedly newsfeed

Velocity increasingly drives the modern IT landscape. The convenience of being able to control nearly every aspect of our lives from the phones in our pockets brings with it an expectation that the companies and services we patronize and depend on evolve to meet our needs at breakneck speed. IT automation has grown up in that environment. Achieving velocity that would have seemed impossibly ambitious scant years ago is now not only possible but standard operating procedure in more and more organizations. Of course, new challenges continue to crop up, as they inevitably do. Now more than ever, it's critical that as we adapt to achieve the speed our industry demands, we don't lose sight of ensuring those demands are met efficiently and without increasing risk to our systems.

Balancing Speed and Risk in Universities

This combination of challenges is keenly felt in the academic community. Universities are subject to the same demand for increasingly rapid innovation as their peers in private industry. They must look to meet those demands while serving the needs of a diverse array of actors, from students to faculty to researchers, all while adhering to the myriad regulatory requirements inherent in managing sensitive student data or government research.

Using Chef at Texas A&M University

I recently had the opportunity to talk with two IT professionals at Texas A&M University on a live broadcast webinar. You can watch a recording of the presentation below. Adam Mikeal, Director of IT at the College of Architecture, discusses the regulatory landscape, the challenges it historically brings, and how automating compliance helps his team meet and exceed their goals. Blake Dworaczyk, Senior IT Professional at the College of Engineering, shares what implementing Chef has looked like for his team, and in particular how he met the challenge of expanding his team's adoption of Chef beyond the initial group of practitioners that put it in place.

Interested in learning more about automating your compliance? Check out the new Integrated Compliance track on Learn Chef Rally. If you'd like to get involved with our community, ask questions, or just hang out, come join us in community slack!

The post Continuous Automation at Texas A&M University appeared first on Chef Blog.

Tuesday, August 8, 2017

How to Map Networks & Connect to Discovered Devices Using Your Phone



----
How to Map Networks & Connect to Discovered Devices Using Your Phone
// Null Byte « WonderHowTo

Sharing your Wi-Fi password is like giving an unlimited pass to snoop around your network, allowing direct access even to LAN-connected devices like printers, routers, and security cameras. Most networks allow users to scan for and attempt to log in to these connected devices. And if you haven't changed the default passwords on these devices, an attacker can simply try the defaults. Network scanners are recon tools for finding vulnerabilities and are often seen as the first stage in an attack. While primarily an information gathering tool, the Fing network scanner actually allows us to... more
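
Fing is a phone app, but the same discovery scan is easy to picture in desktop terms. A rough analogue with nmap, assuming your LAN is 192.168.1.0/24 (adjust to your own subnet):

    nmap -sn 192.168.1.0/24              # ping sweep: which hosts are up?
    nmap -p 80,443,8080 192.168.1.50     # probe a discovered device's web/admin ports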


----

Read in my feedly


Sent from my iPhone

Critical Flaws Found in Solar Panels Could Shut Down Power Grids



----
Critical Flaws Found in Solar Panels Could Shut Down Power Grids
// The Hacker News

A Dutch security researcher has uncovered a slew of security vulnerabilities in an essential component of solar panels that could be exploited to cause widespread outages in European power grids. Willem Westerhof, a cybersecurity researcher at Dutch security firm ITsec, discovered 21 security vulnerabilities in Internet-connected inverters – an essential component of solar panels that…

----

Read in my feedly


Sent from my iPhone

GDPR Compliance: Don’t compromise your speed and efficiency



----
GDPR Compliance: Don't compromise your speed and efficiency
// Chef Blog

With the changes in EU regulation that GDPR introduces, specifically relating to how the personal data of EU citizens must be handled, organisations are facing fresh challenges in how they prove compliance. GDPR brings particular burdens with the 'Privacy by Design' mandate, which requires that data privacy be part of the system design process from day one.

Failing to comply with GDPR could result in fines equal to 4% of global revenue or €20m, whichever is greater.

Can we meet GDPR requirements without slowing down?

High velocity innovation is accepted as a necessity for remaining competitive in our increasingly digital industries, but with regulatory responsibilities such as GDPR, we need to guarantee we're not exposing our businesses to reputational, legal, or financial risk.

Many organisations tell me that they're trading away their ability to move fast in order to meet their security responsibilities. According to a Gartner report, 81% of IT operations professionals say they believe information security policies slow them down.

We're doing the DevOps; it makes our software deployments faster, and we're much better at shipping the things our customers want. The problem with moving quickly is that we're potentially shipping insecure system changes or code vulnerabilities more rapidly too.

I see a lot of organisations running scans on production systems only – at that point it's already too late. Others have quarterly audit cycles – what happens in between audits? Does configuration drift? Are there unknown risks? And what does meeting the audit requirements cost?

According to a recent Chef survey of IT practitioners and decision-makers, 22% of respondents test compliance inconsistently and 23% don't test at all. When GDPR becomes enforceable in May of 2018, this lack of visibility may become very costly. Many organisations are faced with an unpleasant choice: slow down and become less responsive to customers, or risk steep GDPR penalties.

Applying Continuous Automation to address GDPR

Continuous automation is the foundation of a high velocity, software-focused organisation. When we treat compliance this way, we get out of reactive mode and make our applications continuously compliant by applying the DevOps principle – everything as code – to the GDPR controls supporting the privacy by design mandate. We do this at the start of the project, not as an afterthought.

By doing this we can put our code-based compliance controls through the normal development workflow: we can test them, version them, apply them at scale, and easily modify them. Most importantly, treating them as any other code asset in your software development process makes the controls incredibly easy to collaborate on. Running compliance scans becomes as common as running unit tests.
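
Concretely, "compliance as code" can be as simple as keeping an InSpec profile in version control and running it like any other test suite. A hedged sketch (the repository URL, profile name, and host are made up):

    git clone https://example.com/compliance/gdpr-profile.git
    inspec check gdpr-profile                            # validate the profile, as you would lint code
    inspec exec gdpr-profile -t ssh://deploy@staging01   # scan a pre-production node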

Compliance becomes part of our development stage, our testing environments, and our production systems. We can execute scans every time we make a change, on a regular schedule, or as a triggered event. Anyone in our IT org, or the business as a whole, can access real-time compliance data on demand and use this information to correct any issues that need to be remediated.

The average dwell time before a system breach is identified is thought to be 200 days. In a GDPR audit this could cost your business 4% of its global turnover. Imagine if you could identify the issue on an engineering team's development workstation before it gets anywhere near a production-like system. How would this ability change your business?

Detect, correct, and automate compliance

Continuous automation provides an inherent solution for complying with the GDPR privacy by design mandate. At Chef, we help customers on a journey to continuous automation that starts by detecting issues that could impact GDPR compliance, moves on to correcting those issues and proving compliance, then puts in place automation to make applications continuously compliant. Our continuous automation platform, Chef Automate, is designed to help organisations achieve success on that journey while reducing risk, improving efficiency, and increasing speed at each step.

It's important that, as GDPR looms on the horizon, we make the necessary changes to ensure we're meeting the standard, but this is not easy. I see the introduction of GDPR as an opportunity to rethink how we handle our overall compliance responsibilities in our businesses, and how evolving our InfoSec operations can be part of a larger digital transformation. As a first step, get visibility across your fleet to detect existing compliance risks and prioritize subsequent actions.

Learn More

To find out more about implementing continuous compliance in your organisation, go to https://www.chef.io/solutions/compliance/.

The post GDPR Compliance: Don't compromise your speed and efficiency appeared first on Chef Blog.


----

Read in my feedly


Sent from my iPhone

Monday, August 7, 2017

Cybersecurity headhunter shares 10 secrets from Black Hat 2017



----
Cybersecurity headhunter shares 10 secrets from Black Hat 2017
// CSO Online

Recruiting cybersecurity talent has never been more difficult. Cybersecurity Ventures predicts there will be 3.5 million unfilled cybersecurity jobs by 2021, and the unemployment rate is holding steady at zero percent.

An unrelenting cybercrime epidemic has employers searching for the proverbial needle in a haystack when it comes to hiring experienced cybersecurity candidates. 

Thousands of security-minded professionals gathered under one roof at the popular Black Hat USA 2017 Conference last week in Las Vegas. Recruiters from executive search firms, large organizations, and technology vendors were busy networking with the hacker crowd. 



----

Read in my feedly


Sent from my iPhone

The MSRC 2017 list of “Top 100” security researchers



----
The MSRC 2017 list of "Top 100" security researchers
// MSRC

Security researchers play an essential role in Microsoft's security strategy and are key to community-based defense. To show our appreciation for their hard work and partnership, each year at BlackHat North America, the Microsoft Security Response Center highlights contributions of these researchers through the list of "Top 100" security researchers reporting to Microsoft.

This list ranks security researchers reporting directly to Microsoft according to the quantity and quality of all reports for which we've issued fixes. While one criterion for the ranking is the volume of reports a researcher has made, the severity and impact of those reports matter greatly: higher-impact issues carry more weight than lower-impact ones. This list does not include security researchers who report to our partners ZDI and iDefense, as we do not always have full information to recognize their efforts, but we very much appreciate the partnership with ZDI and iDefense as they ensure that we know about any reports affecting Microsoft products.

Given the number of individuals reporting to Microsoft, anyone ranked among the Top 100 is among the top talent in the industry. Regardless of where security researchers are ranked in this list, we appreciate their active and ongoing participation with the Microsoft Security Response Center, and encourage new researchers to report potential vulnerabilities to us at secure@microsoft.com. We're excited to see who's going to be on the list next year.

MSRC team


----

Read in my feedly


Sent from my iPhone

Early release Video! Elie Bursztein - How We Created the First SHA 1 Collision



----
Early release Video! Elie Bursztein - How We Created the First SHA 1 Collision
// DEF CON Announcements!


Today we bring you another Early Release Talk from DEF CON 25! This time it's a more nuts-and-bolts crypto talk about the creation of the first SHA-1 collision. In this talk, Elie Bursztein delves into the challenges his team faced, from developing a meaningful payload, to scaling the computation to a massive scale, to solving unexpected cryptanalytic challenges.
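
The result is easy to verify yourself with the two colliding PDFs published at shattered.io:

    curl -sO https://shattered.io/static/shattered-1.pdf
    curl -sO https://shattered.io/static/shattered-2.pdf
    sha1sum   shattered-1.pdf shattered-2.pdf   # identical SHA-1 digests
    sha256sum shattered-1.pdf shattered-2.pdf   # different SHA-256 digests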

As ever, enjoy and share the love. Pass it on.


----

Read in my feedly


Sent from my iPhone

Early Release Video from DEF CON 25!



----
Early Release Video from DEF CON 25!
// DEF CON Announcements!


Early release video from DEF CON 25 - Garry Kasparov's presentation 'The Brain's Last Stand'. As always, enjoy and make sure to pass it on!


----

Read in my feedly


Sent from my iPhone

On Conveying Doubt



----
On Conveying Doubt
// Talos Blog

This post was authored by Matt Olney.

Typically, Talos has the luxury of time when conducting research. We can carefully draft a report that clearly lays out the evidence and leads the reader to a clear understanding of our well supported findings. A great deal of time is spent ensuring that the correct words and logical paths are used so that we are both absolutely clear and absolutely correct.  Frequently, the goal is to inform and educate readers about specific threats or techniques.

There are times, however, when we are documenting our research in something very close to real-time. The recent WannaCry and Nyetya events are excellent examples of this. Our goal changes here, as does our process. Here we are racing the clock to get accurate, impactful, and actionable information to help customers react even while new information is coming in.

In these situations, and in certain other kinds of investigations, it is necessary for us to talk about something when we aren't 100% certain we are correct.  I'll provide two examples from our Nyetya blog posts:

Example 1:

"Given the circumstances of this attack, Talos assesses with high confidence that the intent of the actor behind Nyetya was destructive in nature and not economically motivated."

This is our response to customers who were asking, "If I pay, will I get my data back?" There were a number of indications that made us think this was unlikely, but at the time we published we couldn't necessarily prove there was no way it could occur. We weren't certain, but it was important to share our analysis quickly because customers needed information to make time-sensitive decisions, so we did so with a clear statement that there was room for error.

Example 2:

"This is a significant loss in operational capability, and the Threat Intelligence and Interdiction team assesses with moderate confidence that it is unlikely that they would have expended this capability without confidence that they now have or can easily obtain similar capability in target networks of highest priority to the threat actor."

Here we are speaking about an actor's thought process.  Obviously we aren't in a position to authoritatively speak about what is going through an actor's head.  But we can look at a broad set of circumstances, analyze them in the light of our past observations and experiences, and then try to understand what underlying meaning they might have.  Based on what we saw, we thought it important to express that the actor may have additional capability it had not shown, so again, we spoke in plain language that gave the reader information they could evaluate.

Speaking with doubt doesn't mean guessing.  At Talos it means applying experience and knowledge to a set of information that is incomplete and trying to extract actionable intelligence from that information.  When we document our findings externally, we are under an obligation to be crystal clear if we are engaging in some form of speculation in order to develop a thoughtful assessment based on strong indicators.  This doesn't make the information less valuable, but it does allow the reader to correctly weigh the information when prioritizing their own response.  As we move ahead, when Talos communicates doubt, we will do so using the following as guidance:

Phrase                                              Estimated % Confidence
Low Confidence / Possible / Unlikely                <35%
Moderate Confidence / Probable / Likely             35% - 69%
High Confidence / Highly Probable / Highly Likely   >70%

Our primary mission is to place into our readers' hands the information they need to defend their systems and their networks. We can't always wait until we are 100% certain of findings, particularly while we are in the midst of an incident. By utilizing this language, we can share findings earlier and give customers the ability to evaluate our information and apply it to their defenses if necessary.

----

Read in my feedly


Sent from my iPhone

What is containerd ?



----
What is containerd ?
// Docker Blog


We have done a few talks in the past on different features of containerd, how it was designed, and some of the problems that we have fixed along the way. Containerd is used by Docker, Kubernetes CRI, and a few other projects, but this is a post for people who may not know what containerd actually does within these platforms. I would like to do more posts on the feature set and design of containerd in the future, but for now we will start with the basics.

I think the container ecosystem can be confusing at times, especially with the terminology that we use. What's this? A runtime. And this? A runtime… containerd, as the name implies (not "contain nerd," as some would like to troll me with), is a container daemon. It was originally built as an integration point for OCI runtimes like runc, but over the past six months it has added a lot of functionality to bring it up to par with the needs of modern container platforms like Docker and Kubernetes.


There is no such thing as a Linux container in kernelspace; containers are just various kernel features tied together. When you are building a large platform or distributed system, you want an abstraction layer between your management code and the syscalls and duct tape of features used to run a container. That is where containerd lives. It provides a client layer of types that platforms can build on top of without ever having to drop down to the kernel level. It's so much nicer to work with Container, Task, and Snapshot types than it is to manage calls to clone() or mount().

Containerd was designed to be used by Docker and Kubernetes as well as any other container platform that wants to abstract away syscalls or OS-specific functionality to run containers on Linux, Windows, Solaris, or other OSes. With these users in mind, we wanted to make sure that containerd has only what they need and nothing that they don't. Realistically this is impossible, but at least that is what we try for. Things like networking are out of scope for containerd. The reason is that when you are building a distributed system, networking is a very central aspect. With SDN and service discovery today, networking is far more platform-specific than abstracting away netlink calls on Linux. Most of the new overlay networks are route-based and require routing tables to be updated each time a new container is created or deleted. Service discovery, DNS, and the rest all have to be notified of these changes as well. Supporting all the different network interfaces, hooks, and integration points would have added a large chunk of code to containerd. What we did instead was opt for a robust events system inside containerd, so that multiple consumers can subscribe to the events they care about. We also expose a task API that lets users create a running task, add interfaces to the network namespace of the container, and then start the container's process without the need for complex hooks at various points of a container's lifecycle.

Another area that has been added to containerd over the past few months is a complete storage and distribution system that supports both OCI and Docker image formats. You get a complete content-addressed storage system across the containerd API that works not only for images but also for metadata, checkpoints, and arbitrary data attached to containers.

We also took the time to rethink how "graphdrivers" work. These are the overlay or block-level filesystems that allow images to have layers and let you perform efficient builds. Graphdrivers were initially written by Solomon and me when we added support for devicemapper. Docker only supported AUFS at the time, so we modeled the graphdrivers after the overlay filesystem. However, making a block-level filesystem such as devicemapper/LVM act like an overlay filesystem proved to be much harder to do in the long run. The interfaces had to expand over time to support different features than what we originally thought would be needed. With containerd, we took a different approach: make overlay filesystems act like snapshotters instead of vice versa. This was much easier to do, as overlay filesystems provide much more flexibility than snapshotting filesystems like BTRFS, ZFS, and devicemapper, since they don't have a strict parent/child relationship. This helped us build out a smaller interface for the snapshotters while still fulfilling the requirements needed by things like a builder, and it reduced the amount of code needed, making it much easier to maintain in the long run.

So what do you actually get using containerd?  You get push and pull functionality as well as image management.  You get container lifecycle APIs to create, execute, and manage containers and their tasks. An entire API dedicated to snapshot management.  Basically everything that you need to build a container platform without having to deal with the underlying OS details.  I think the most important part of containerd is having a versioned and stable API that will have bug fixes and security patches backported.
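
For a quick feel of those APIs, containerd ships a test client called ctr. A hedged sketch; ctr's subcommands have shifted between releases, so treat these as illustrative:

    sudo ctr images pull docker.io/library/redis:alpine
    sudo ctr run -d docker.io/library/redis:alpine redis   # create a container and its task
    sudo ctr tasks ls                                      # list running tasks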





The post What is containerd ? appeared first on Docker Blog.


----

Read in my feedly


Sent from my iPhone

Ace Your Next Audit: Enroll in New Citrix Security Courses



----
Ace Your Next Audit: Enroll in New Citrix Security Courses
// Citrix Blogs

Information security is mission-critical to everyone, but especially to companies in highly regulated industries such as healthcare, government, and financial services. Your organization is constantly facing new security threats, from ransomware to mobile malware, as well as new vulnerabilities…




----

Read in my feedly


Sent from my iPhone

Sunday, August 6, 2017

Marcus Hutchins (MalwareTech) Gets $30,000 Bail, But Can't Leave United States



----
Marcus Hutchins (MalwareTech) Gets $30,000 Bail, But Can't Leave United States
// The Hacker News

Marcus Hutchins, the malware analyst who helped stop the global WannaCry menace, has reportedly pleaded not guilty to charges of creating and distributing the infamous Kronos banking malware and is set to be released on $30,000 bail on Monday. Hutchins, the 23-year-old who operates under the alias MalwareTech on Twitter, stormed to fame and was hailed as a hero over two months ago when he stopped a…

----

Read in my feedly


Sent from my iPhone

Friday, August 4, 2017

How to Think About Digital Workspaces (Part 1) [feedly]

How to Think About Digital Workspaces (Part 1)
https://www.citrix.com/blogs/2017/08/04/how-to-think-about-digital-workspaces-part-1/

-- via my feedly newsfeed

One of the latest phrases in the world of techie buzz-speak is the "Digital Workspace." But what, exactly, does it mean? Is it just a desktop? A virtual personal desktop? Must it reside in the Cloud? Does it need to…


Thursday, August 3, 2017

IT Starts with Docker



----
IT Starts with Docker
// Docker Blog

Happy SysAdmin Day! Cheers to all of you who keep your organizations running, keep our data secure, respond at a moment's notice and bring servers and apps back to life after a crash. Today we say, "Thank You!"


Anniversaries are a great time to reflect on the accomplishments of the last year: the projects you've completed, the occasions when you've saved your company money or time, the new technology you've learned. In a role like IT, so much can change each year as technology progresses, and it becomes ever more challenging to stay ahead of the curve. So this SysAdmin Day, we at Docker want to congratulate you on your past successes and prepare you for the year to come.

Containers are not just for developers anymore, and Docker is the standard for packaging all kinds of applications – Windows, Linux, traditional, and microservices. Over the next few months, we'll be covering how SysAdmins like you are enabling their organizations to innovate faster while saving their companies money by embracing containers with Docker Enterprise Edition.

Sign up here to start your journey and learn how IT Starts with Docker. 


This multi-part series will include:

  • How Docker Enterprise Edition is helping IT organizations free up money for new initiatives by changing the way applications are deployed and maintained, and how customers are seeing 50-75% infrastructure savings when running containers in production.
  • Hands-on learning around container management and security to see how organizations are using containers across a broad spectrum of applications and infrastructure platforms.
  • How IT can lead the containerization of traditional applications and gain application portability, security, and efficiency in just 5 days. We'll provide a closer look at the Modernize Traditional Applications (MTA) program, which is co-delivered by Docker and our strategic partners, and how you can leverage it to start your organization's modernization efforts.
  • Close examination and customer stories of the key use cases for Docker Enterprise Edition to help you apply this new knowledge to your own upcoming IT projects.

Sign up today and we'll make sure that by next year's SysAdmin Day, you'll be able to reflect on how Docker has helped you accomplish even more in your organization.





The post IT Starts with Docker appeared first on Docker Blog.


----

Read in my feedly


Sent from my iPhone

Docker 101: Introduction to Docker webinar recap



----
Docker 101: Introduction to Docker webinar recap
// Docker Blog


Docker is standardizing the way applications are packaged, making it easier for developers to code and build apps on their laptop or workstation and for IT to manage, secure, and deploy them onto a variety of infrastructure platforms.

In last week's webinar, Docker 101: An Introduction to Docker, we went from describing what a container is, all the way to what a production deployment of Docker looks like, including how large enterprise organizations and world-class universities are leveraging Docker Enterprise Edition (EE)  to modernize their legacy applications and accelerate public cloud adoption.

If you missed the webinar, you can watch the recording here:

We ran out of time to go through everyone's questions, so here are some of the top questions from the webinar:

Q: How does Docker get access to platform resources, such as I/O, networking, etc.? Is it a type of hypervisor?

A: Docker EE is not a type of hypervisor. Hypervisors create virtual hardware: they make one server appear to be many servers, but generally know little or nothing about the applications running inside them. Containers are the opposite: they make one OS or one application server appear to be many isolated instances. Containers explicitly must know the OS and application stack, but the hardware underneath is less important to the container. On Linux, the Docker Engine is a daemon that runs directly on the host operating system and uses kernel features to isolate and segregate the processes of the different containers running on it. Platform resources are accessed by the host operating system, and each container gets isolated access to those resources through segregated namespaces and control groups (cgroups). cgroups allow Docker to share available hardware resources with containers and optionally enforce limits and constraints. You can read more about this here.
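
A hedged illustration of those cgroup-backed limits surfaced through the docker CLI; --memory caps the container's memory cgroup and --cpus sets a CPU quota:

    docker run -d --name capped --memory 256m --cpus 0.5 nginx:alpine
    docker stats capped   # watch the enforced limits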

Q: Are containers secure since they run on the same OS?

A: Yes. Cgroups, namespaces, seccomp profiles, and the "secure by default" approach of Docker all contribute to the security of containers. Separate namespaces protect the processes running within a container, meaning a process cannot see, much less affect, processes running in another container or in the host system. Cgroups help ensure that each container gets its fair share of memory, CPU, and disk I/O and, more importantly, that a single container cannot bring the system down by exhausting one of those resources. And Docker is designed to limit containers' root access by default, meaning that even if an intruder manages to escalate to root within a container, it will be much harder to do serious damage or to escalate to the host. These are just some of the many ways Docker is designed to be secure by default. Read more about Docker security and security features here.
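
As a minimal sketch of tightening the defaults even further for a single container (the flags shown are standard docker run options):

    docker run --rm \
      --cap-drop ALL \
      --read-only \
      --security-opt no-new-privileges \
      alpine:3.6 id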

Docker Enterprise Edition includes additional advanced security options including role-based access control (RBAC), image signing to validate image integrity, secrets management, and image scanning to protect images from known vulnerabilities. These advanced capabilities provide an additional layer of security across the entire software supply chain, from developer's laptop to production.

Q: Can a Docker image created under one OS (e.g., Windows) be used to run on a different operating system (e.g., RHEL 7.x)?

A: Unlike VMs, Docker containers share the OS kernel of the underlying host, so containers can go from one Linux OS to another but not from Windows to Linux. So you cannot run a .NET app natively on a Linux machine, but you can run a RHEL-based container on a SUSE-based host because both leverage the same OS kernel.
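
You can see the shared kernel directly; both containers report the host's kernel version, whatever their userland:

    uname -r                               # host kernel
    docker run --rm centos:7 uname -r      # same kernel, CentOS userland
    docker run --rm ubuntu:16.04 uname -r  # same kernel again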

Q: Is there another advantage other than DevOps for implementing Docker in enterprise IT infrastructure?

A: Yes! Docker addresses many different IT challenges and aligns well with major IT initiatives including hybrid/multi-cloud, data center and app modernization. Legacy applications are difficult and expensive to maintain. They can be fragile and insecure due to neglect over time while maintaining them consumes a large portion of the overall IT budget. By containerizing these traditional applications, IT organizations save time and money and make these applications more nimble. For example:

  • Cloud portability: By containerizing applications, they can be easily deployed across different certified platforms without requiring code changes.
  • Easier application deployment and maintenance: Containers are based on images, which are defined in Dockerfiles (see the sketch after this list). This simplifies an application's dependencies, making it easier to move between dev, test, QA, and production environments and easier to update and maintain when needed. 62% of customers with Docker EE see a reduction in their mean time to resolution (MTTR).
  • Cost savings: Moving to containers increases overall utilization of available resources, which means customers often see up to 75% improved consolidation of virtual machines or CPU utilization. That frees up more budget to spend on innovation.
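
As a minimal sketch of the image/Dockerfile relationship mentioned above (the image name and content are made up):

    cat > Dockerfile <<'EOF'
    FROM nginx:alpine
    COPY site/ /usr/share/nginx/html/
    EOF
    docker build -t my-web:1.0 .
    docker run -d -p 8080:80 my-web:1.0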

To learn more about how IT can benefit from modernizing traditional applications with Docker, check out www.docker.com/MTA.

Q: Can you explain more about how Docker EE can be used to convert apps to microservices?

A: Replacing an existing application with a microservices architecture is often a large undertaking that requires significant investment in application development. Sometimes it is impossible as it requires systems of record that cannot be replaced. What we see many companies do is containerize an entire traditional application as a starting point. They then peel away pieces of the application and convert those to microservices rather than taking on the whole application. This allows the organization to modernize components like the web interface without complete re-architecture, allowing the application to have a modern interface while still accessing legacy data.

Q: Are there any tools that will help us manage private/corporate images? Can we host our own image repository in-house vs. using the cloud?

A: Yes! Docker Trusted Registry (DTR) is a private registry included in Docker Enterprise Edition Standard and Advanced. DTR also provides advanced capabilities around security (e.g., image signing and image scanning) and access controls (e.g., LDAP/AD integration and RBAC). It is intended to be a private registry for you to install either in your data center or in your virtual private cloud environment.

Q: Is there any way to access the host OS file system(s)? I want to put my security scan software in a Docker container but scan the host file system.

A: The best way to do this is to mount the host directory as a volume in the container with "-v /:/root_fs" so that the file system and directory are shared and visible in both places. More information about storage volumes, mounting shared volumes, backups, and more is here.
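
A hedged example of that pattern; "my-scanner" is a placeholder image, and the :ro suffix keeps the host filesystem read-only inside the container:

    docker run --rm -v /:/root_fs:ro my-scanner scan /root_fs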





The post Docker 101: Introduction to Docker webinar recap appeared first on Docker Blog.


----

Read in my feedly


Sent from my iPhone

Open Container Initiative Specifications are 1.0



----
Open Container Initiative Specifications are 1.0
// CoreOS Blog

Open Container Initiative Specifications are 1.0

July 19, 2017
• By Brandon Philips

After two years of work with major stakeholders in the community, we're excited to announce that the Open Container Initiative (OCI) image and runtime specifications have now hit version 1.0. This means there is now a stable industry standard for application containers that has been created and approved by leaders in the container industry. This is an important milestone for the OCI community, and we look forward to working with our partners to further facilitate standards and innovation.

CoreOS started the conversation about container specifications years ago, and we are pleased to have worked alongside the major leaders across the industry to create this release. As chair of the OCI Technical Oversight Board, we appreciate the work the open source community has done to reach this milestone. Users can expect the OCI to continue to help grow the market of interoperable and pluggable tools, giving them confidence that containers are here to stay. Further, Kubernetes users are already using parts of OCI specifications today and the community is actively working to ensure growing support for both the OCI Image and OCI Runtime specifications in future releases.

OCI: A history

At CoreOS, we believe that open standards are key to the success of the container ecosystem, and that the best way to achieve standards is by working closely with the community. The Open Container Initiative (OCI) is an open governance organization tasked with creating standards for container image formats and runtimes. CoreOS is a founding member of the OCI, and we've worked diligently with other leaders in the container industry – including such organizations as Docker, Microsoft, Red Hat, IBM and Google – to bring this project to its first stable release. The goal of the OCI is to create specifications that enable a compliant container to be portable across all major, compliant operating systems and platforms while minimizing technical barriers.

The origins of the OCI date back to 2015, when we met with Docker, Google and other stakeholders to discuss the future of the container industry. You can read more about it in this 2015 blog post. We all shared an interest in creating an industry standard which led directly to the creation of the OCI.

The OCI began with a focus on the runtime of containers; that is, creating a specification for the mechanisms and environment of an actively executing container. As the conversation developed, we agreed that a standard image format was even more critical to ensure application packagers could create one container image that could run in, and be ported to, any environment. In 2016, work on the OCI Image Specification began. Over the last two years, these two specifications have matured and have been adopted by a variety of implementations.

Thank you to the OCI and container community

We'd like to thank the founding members of the OCI and our Technical Oversight Board from CoreOS, Red Hat, Docker, Microsoft, Google, and the Linux Foundation, including: Chris Wright, Vincent Batts, Diogo Mónica, Michael Crosby, John Gossman, Jason Bouzane, Vishnu Kannan, and Greg Kroah-Hartman. We also want to thank Chris Aniszczyk and Jill Lovato, our leadership at the OCI. Finally, we'd like to thank the community for their help and for being the inspiration behind this project.

What to expect now?

The work done in the OCI will help ensure users can create container images using any number of OCI-conforming tools and be assured they will run on any number of container orchestration environments that can execute OCI-conforming images. This ensures teams can choose the build and runtime tools that best meet their needs, and with this 1.0 release users can be confident that any images and tooling built against it will receive wide support well into the future.

However, the work of the OCI isn't complete with this release. The maintainers of the OCI specifications will now turn their attention to a number of features and ideas that could wait until a post-1.0 release, including distribution, signing, and continued platform support. The OCI can also now begin building conformance processes so that tools can be recognized as OCI-conformant implementations. And finally, we anticipate that with this release a number of new and existing tools will implement the specifications, including the ecosystem of container engines, container orchestrators, and container build tools.

You can stay up to date on OCI implementations, as well as Kubernetes and product releases, by signing up for our newsletter, or join the OCI mailing list if you are interested in getting more involved.


----

Read in my feedly


Sent from my iPhone

August is back to school time! Join CoreOS for Kubernetes trainings and webinars, and meet us at Gartner Catalyst and PromCon



----
August is back to school time! Join CoreOS for Kubernetes trainings and webinars, and meet us at Gartner Catalyst and PromCon
// CoreOS Blog

August is back to school time! Join CoreOS for Kubernetes trainings and webinars, and meet us at Gartner Catalyst and PromCon

August 01, 2017
• By Nick Knight

Head back to school this August at our Kubernetes trainings in New York and San Francisco. We'll host a number of webinars on Kubernetes, Tectonic, and more. Also find the CoreOS team at Gartner Catalyst in San Diego, and at PromCon in Munich!

Meetups and events

Webinar: How to correctly and quickly set up Kubernetes environments: August 2, San Francisco, Online
On Wednesday, August 2, Rob Szumski (@robszumski), Tectonic product manager at CoreOS, leads a discussion on how to correctly and quickly set up Kubernetes environments. Be sure to join him live at 10:00 a.m. PT and ask him questions.


Webinar Demo: Introduction to CoreOS Tectonic: August 2, San Francisco, Online
Join Jordan Cooks, solutions engineer at CoreOS, at 1:00 p.m. PT for a webinar where he'll discuss Tectonic's features, including automated operations, portability, and much more. He'll also touch on how Tectonic is helping businesses prepare for the digital demands of tomorrow.


Webinar: Build Your Own Operators Pilot Program: August 9, San Francisco, Online
On August 9, don't miss Mike Metral, senior architect at CoreOS, at 11:00 a.m. PT for an introduction to Operators. He'll discuss the migration changes from Third Party Resources to Custom Resource Definitions in Kubernetes, and walk through a demo on how to create a new Operator for your Kubernetes cluster. Register today.


August Kubernetes Training @ MicroTek New York: August 10-11, New York
Join us August 10 and 11 from 9:00 a.m. to 4:00 p.m. in New York for a Kubernetes training session. Get up to speed with the latest updates from the Kubernetes 1.7 release, and learn from the distributed systems experts. Reserve your spot today.


Kubernetes Colorado Meetup @ Deis: August 15, Boulder, CO
Helm's a popular package manager for Kubernetes. On August 15, Scott Sumner, senior solutions engineer at CoreOS, will show how to use Helm to deploy ElasticSearch, Kibana, and Fluentd to monitor a Kubernetes cluster. Doors to the Kubernetes Colorado meetup, held at the Deis office, will open at 6:30 p.m.
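
If you want to try that stack before the meetup, a hedged sketch with Helm 2-era commands (chart names in the stable repository have changed over time, so treat these as illustrative):

    helm init                                            # install Tiller into the cluster
    helm install stable/elasticsearch --name logging-es
    helm install stable/kibana --name logging-ui
    helm search fluentd                                  # find a log shipper to pair with them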


SF Microservices Meetup @ MuleSoft: August 16, San Francisco
Hear the latest on zetcd, the ZooKeeper proxy for etcd, at the San Francisco Microservices Meetup. Anthony Romano, software engineer at CoreOS, will introduce the project and give a detailed explanation of how the proxy works. Doors open at 6:00 p.m.


PromCon @ Google Munich: August 17-18, Munich
Don't miss the Prometheus Conference, held at the Google offices in Munich. Join us for two days of discussions of monitoring distributed systems with the open source tool.


Webinar Demo: Introduction to CoreOS Tectonic: August 23, San Francisco, Online
On Wednesday, August 23 at 10 a.m. PT, Scott Summer, solutions engineer at CoreOS, will host a webinar and discuss Tectonic's features, including automated operations and portability across private and public cloud providers. Sign up to attend.


Gartner Catalyst @ Manchester Grand Hyatt San Diego: August 21-24, San Diego
We'll be traveling to sunny San Diego August 21-24 for Gartner Catalyst! Be sure to stop by Booth 2 from August 21-23 and ask our team any questions, or request a meeting.


August Kubernetes Training @ MicroTek San Francisco: August 29-30, San Francisco
From 9:00 a.m. to 4:00 p.m. PT on August 29 and 30 in San Francisco, we'll hold another Kubernetes training session. Get a better understanding of Kubernetes from the distributed systems experts from CoreOS. Seats are limited, so sign up today.


Webinar: Introduction to CoreOS Tectonic: August 30, New York, Online
On Wednesday, August 30 at 12:00 p.m. PT, Praveen Rajagopalan, solutions engineer at CoreOS, will lead a webinar on CoreOS Tectonic. He will discuss its features, and how they are helping businesses succeed as they modernize their infrastructure. Register to attend and see what distributed infrastructure is doing for businesses of all sizes.

Interested in hosting your own meetup or want to learn more about getting involved with the CoreOS community? Email us at community@coreos.com.


----

Read in my feedly


Sent from my iPhone