Tuesday, August 23, 2016

Speed AND Safety: No Longer at Odds [feedly]

Speed AND Safety: No Longer at Odds
https://blog.chef.io/2016/08/22/speed-safety-no-longer-odds/

-- via my feedly newsfeed

Today in IT, the need to maintain security and governance is often at odds with the imperative to move quickly. At this year's ChefConf, compliance and security were topics of much discussion among presenters and attendees alike. Enterprise IT teams are adopting a new way to deliver experiences for customers safely and quickly: by expressing compliance and security as code. This brings regulatory protocols into the build process earlier, allowing teams to deliver infrastructure and applications at velocity.

We caught up with Michael Hedgpeth, Senior Software Architect at NCR Corporation, to get his thoughts on how to marry these seemingly conflicting priorities.

"The reality is, there needs to be a partnership [between security and dev teams], and the only way that they're going to be able to audit at scale and velocity is if they automate that audit. Our security people have really gotten that and are getting behind InSpec," said Hedgpeth. "But that does fundamentally change their organization from that of spreadsheets and manual checking or scanning with software to coding, checking things in, and being a part of the development pipeline just like everybody else is."

There are few organizations that understand this need to bring compliance into the pipeline better than SAP NS2, which specializes in providing the SAP portfolio to federal organizations. Cheerag Patel, DevOps Manager at SAP NS2, says his organization meets this need by looking at the workflow holistically and incorporating compliance into the process from day one. Treating compliance as an endpoint invites trouble: environments built first and made compliant afterward often break, because they weren't designed with security in mind. Instead, Patel recommends implementing security controls at the outset of the workflow.
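To make "compliance as code" concrete, here is a minimal InSpec control of the kind such a pipeline could run. The control name and the SSH rule are illustrative assumptions, not taken from either organization:

control 'ssh-01' do
  impact 1.0
  title 'Disallow root login over SSH'
  desc 'An auditable, executable statement of policy.'

  # sshd_config is a built-in InSpec resource.
  describe sshd_config do
    its('PermitRootLogin') { should eq 'no' }
  end
end

Because a control like this is just code, security teams can check it in, review it, and run it automatically at every stage of the pipeline, which is exactly the organizational shift Hedgpeth describes.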


The Ruby Behind Chef [feedly]

The Ruby Behind Chef
https://www.chef.io/webinars/?commid=220923

-- via my feedly newsfeed

Chef is built in Ruby, a conscious choice for its great flexibility and developer friendliness. For some people, learning the language can feel difficult because most examples aren't written from your perspective as a Chef practitioner. In this interactive webinar, we invite you to follow along in your favorite editor as we dive through the source code to teach you core Ruby concepts.

Join us to learn:

  • fundamental Ruby concepts, and how they create the Recipe Domain Specific Language and the tools that power Chef
  • how to use Pry to navigate and query source code

Prerequisites: come with the Chef Development Kit installed.

Who should attend: Chefs with a basic understanding of writing recipes and cookbooks who want to gain a better understanding of the cookbooks they author and the tools they employ each day.
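To give a flavor of the Ruby behind a recipe-style DSL, here is a toy sketch showing how plain Ruby methods, blocks, and instance_eval can read like a declarative recipe. This is an illustration only, not Chef's actual implementation; ToyRecipe and its resource methods are made up for the example:

class ToyRecipe
  attr_reader :resources

  def initialize(&block)
    @resources = []
    instance_eval(&block)   # run the block as if inside this object
  end

  # Each "resource" is just a method that records what was declared.
  def package(name)
    @resources << [:package, name]
  end

  def service(name, action: :start)
    @resources << [:service, name, action]
  end
end

recipe = ToyRecipe.new do
  package 'nginx'
  service 'nginx', action: :enable
end

p recipe.resources   # => [[:package, "nginx"], [:service, "nginx", :enable]]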

Monday, August 22, 2016

Wrap-Up: AWS Summit in New York City [feedly]

Wrap-Up: AWS Summit in New York City
https://blog.chef.io/2016/08/19/wrap-up-aws-summit-nyc/

-- via my feedly newsfeed

We enjoyed meeting so many members of the community at the AWS Summit in New York City last week!

The Summit started Wednesday, August 10th with a full day of sessions including one on Securing Cloud Workloads with DevOps Automation. There was also an awesome startup simulation where participants acted as a DevOps lead.

Thursday, August 11th began with a keynote from Amazon.com CTO Werner Vogels. He presented with guests from Lyft, Airtime, and Comcast. This led into lightning talks and breakout sessions covering CI/CD, microservices, security, and everything in between.

Meanwhile, our team of twelve awesome Chefs was in the expo hall fueling the love of Chef with plenty of t-shirts, stickers, and great conversations. We met hundreds of attendees, from people who are just starting to think about automation, to advanced Chef users who have been part of our community for years. Our booth was full of great energy throughout the day with folks interested in learning about our new product, Chef Automate. In particular, I heard many conversations about the benefits of the Workflow feature. People were also excited to learn about our open source projects: Habitat for application automation and InSpec for compliance and security automation.

If you haven't been to an AWS Summit, they are awesome (and free). We're looking forward to returning to NYC next year!

Not in New York? Check out our full calendar of events to find out when we'll be in your area.

Learn More

Visit our AWS partner page for AWS and Chef resources.

You might also be interested in our upcoming webinar on September 21, "Automated DevOps Workflows with Chef on AWS."

Join us to learn:

  • How to develop at high velocity with Chef on AWS
  • How to create a culture of treating your AWS infrastructure as code
  • How Gannett uses Chef "cookbooks" on AWS to manage their USA Today infrastructure



Thursday, August 18, 2016

How to Disable WPAD on Your PC So Your HTTPS Traffic Won't Be Vulnerable to the Latest SSL Attack [feedly]



----
How to Disable WPAD on Your PC So Your HTTPS Traffic Won't Be Vulnerable to the Latest SSL Attack
// Null Byte « WonderHowTo

You may not know what HTTP is exactly, but you definitely know that every single website you visit starts with it. Without the Hypertext Transfer Protocol, there'd be no easy way to view all the text, media, and data that you're able to see online. However, all communication between your browser and a website is unencrypted, which means it can be eavesdropped on. This is where HTTPS comes in, the "S" standing for "Secure." It's an encrypted way to communicate between browser and website so that your data stays safe. While it was used mostly...


----

Shared via my feedly newsfeed



SSH keys management in Xen Orchestra [feedly]



----
SSH keys management in Xen Orchestra
// Xen Orchestra

Remember our article on playing with CloudInit and XenServer thanks to Xen Orchestra? We've improved the UI to make it even easier. Let's take a tour.

Manage your SSH keys

In your user zone, you can manage your keys (create, remove). Click on "New SSH key":

Then paste your key into the key field. We'll automatically pick up the key name (the user@host comment at the end):

You're done:

Create a VM to use your key(s)

Now that we have our "brucewayne@mypc" key, let's create a VM. Use a CloudInit-ready template with existing disks. For those who want a recap, don't forget to read our previous blog posts on this topic:

In the "Install settings" category, activate the "Config drive" option:

We automatically add the first existing key. If you create a VM right now with this setting, the user "brucewayne" can access the VM with his key, without a password.

But we did more.

Create keys on the fly

Maybe you want to add a key without going back to your user zone (because you've already set a lot of things in the current VM creation view). No problem: just fill in the "SSH key" field and click the "+" icon. The key will be added to the selection list and saved for later use:

And it's added:

If you create the VM now, both users with their keys can access the VM without a password.

And if you go back to your user zone, you can see all these new keys:

This feature lets you create VMs in a few clicks and a few seconds, with nothing to enter manually once your keys are saved.
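Under the hood, this kind of key injection relies on cloud-init user-data delivered via the config drive. Xen Orchestra generates all of this for you; the Ruby sketch below only illustrates the sort of cloud-config payload involved, with a made-up user and a placeholder key:

require 'yaml'

# Hypothetical inputs; Xen Orchestra collects these from your user zone.
user   = 'brucewayne'
pubkey = 'ssh-rsa AAAA... brucewayne@mypc'   # placeholder public key

# A cloud-config document granting passwordless SSH access via the key.
user_data = {
  'users' => [
    { 'name' => user, 'ssh_authorized_keys' => [pubkey] }
  ]
}

puts '#cloud-config'
puts user_data.to_yaml.sub(/\A---\n/, '')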


----

Shared via my feedly newsfeed



Install XenServer tools in your VM [feedly]



----
Install XenServer tools in your VM
// Xen Orchestra

This blog post is for people discovering XenServer who wonder how to install the XenServer tools (or Xen tools) in their VMs, and what those tools actually are.

It's also a guide to installing the tools on both Linux and Windows VMs.

First, we'll see how to check whether the tools are installed, and then how to install them if necessary, on both Linux and Windows.

Are tools installed?

It's really easy to check this with Xen Orchestra: in the home view, a running VM without an operating system icon doesn't have the tools:

See the difference with a "correct" VM (Debian logo):

You can also display all running VMs without tools thanks to the following search: power_state:running !xenTools:""

Here, two running VMs are without tools

In the VM view, you can also read "No Xen tools detected.":

Install XenServer tools

The next step is to install the tools. I'll also give you some tips along the way.

For any VM, go to the console view of your VM and insert the appropriate ISO:

  • xs-tools.iso for XenServer 6 and older
  • guest-tools.iso for XenServer 7 and higher

Then, each system is a bit different.

For Linux VMs

Debian, Ubuntu (deb based)

For a Debian VM, it's pretty simple, as root:

  • mount /dev/cdrom /mnt
  • bash /mnt/Linux/install.sh
  • umount /dev/cdrom

In a real example:

root@myVM:~# mount /dev/cdrom /mnt
mount: block device /dev/xvdd is write-protected, mounting read-only
root@myVM:~# bash /mnt/Linux/install.sh
Detected `Debian GNU/Linux 7.9 (wheezy)' (debian version 7).

The following changes will be made to this Virtual Machine:
  * update arp_notify sysctl.
  * packages to be installed/upgraded:
    - xe-guest-utilities_7.0.0-24_all.deb

Continue? [y/n] Y

(Reading database ... 37679 files and directories currently installed.)
Preparing to replace xe-guest-utilities 6.2.0-1133 (using .../xe-guest-utilities_7.0.0-24_all.deb) ...
Stopping xe daemon:  OK
Unpacking replacement xe-guest-utilities ...
Setting up xe-guest-utilities (7.0.0-24) ...
Installing new version of config file /etc/init.d/xe-linux-distribution ...
Detecting Linux distribution version: OK
Starting xe daemon:  OK

You should now reboot this Virtual Machine.
root@myVM:~#

That's all! You can eject the ISO now.

As soon as the .deb is installed, the tools will report their info: no need to reboot!

CentOS, RHEL (rpm based)

Same principle, and almost the same procedure as on a deb-based distro:

# mount /dev/cdrom /mnt/
mount: block device /dev/xvdd is write-protected, mounting read-only
[root@localhost ~]# bash /mnt/Linux/install.sh
Detected `CentOS release 6.6 (Final)' (centos version 6).

The following changes will be made to this Virtual Machine:
  * update arp_notify sysctl.
  * packages to be installed/upgraded:
    - xe-guest-utilities-7.0.0-24.x86_64.rpm
    - xe-guest-utilities-xenstore-7.0.0-24.x86_64.rpm

Continue? [y/n] y

Preparing...                ########################################### [100%]
   1:xe-guest-utilities-xens########################################### [ 50%]
   2:xe-guest-utilities     ########################################### [100%]

You should now reboot this Virtual Machine.
[root@localhost ~]#

Nope, rebooting is not mandatory.

For Windows VMs

After loading the appropriate ISO, you should see a CD with tools:

Start the setup.exe:

This time, you must reboot.

After the initial reboot, your Windows OS has the tools but also extra drivers for Xen (better performance), e.g. in your device manager:

Windows quiesce snapshots

You now have the management agent, but that's not enough if you want to take quiesced snapshots!

You can read more about quiesced snapshots in our previous blog post: XenServer quiesce snapshots

What do those tools do?

Basically, the goals of those tools are:

  • to report extra VM info (things that only the operating system can know, not your underlying hypervisor), like the VM IP address, kernel version, etc.
  • to communicate with the OS for quiesced snapshots (Windows and its VSS)
  • to allow sending signals to the OS (clean reboot, hardware hotplug, etc.)
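As an aside, the extra info reported by the tools is exposed through each VM's guest metrics in the XenAPI. Below is a rough Ruby sketch of reading it over XML-RPC; the host URL and credentials are placeholders, and the exact record fields may vary between XenServer versions:

require 'xmlrpc/client'   # standard library on older Rubies; `gem install xmlrpc` on 2.4+

xapi = XMLRPC::Client.new2('http://xenserver.example.com')   # placeholder host

def xapi_call(client, method, *args)
  res = client.call(method, *args)
  raise res.inspect unless res['Status'] == 'Success'
  res['Value']
end

session = xapi_call(xapi, 'session.login_with_password', 'root', 'secret')

xapi_call(xapi, 'VM.get_all_records', session).each_value do |vm|
  next if vm['is_a_template'] || vm['is_control_domain']
  gm = vm['guest_metrics']
  next if gm == 'OpaqueRef:NULL'   # no tools running in this VM
  nets = xapi_call(xapi, 'VM_guest_metrics.get_networks', session, gm)
  puts "#{vm['name_label']}: #{nets['0/ip']}"
end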

Update tools

Want to update the tools? Just follow the same procedure as for installing them.

XenServer Enterprise users can also let Windows Update handle updating those tools (sadly, that's not available in the free edition of XenServer).


----

Shared via my feedly newsfeed



Xen Orchestra 5.1 [feedly]



----
Xen Orchestra 5.1
// Xen Orchestra

And it's done: Xen Orchestra 5.1 is available!

In short:

  • the new UI is now the default; the old one is still accessible by adding "/v4" to the URL
  • we added a lot of great UI features (see the content below)
  • we fixed a lot of small issues in the fresh UI

In fact, that's a pretty big release: we closed 75 issues in one month!

UI improvements

Thanks to a lot of feedback from everywhere (customers, community), we were able to greatly improve the new UI. Let's take a tour of what you can do now.

Save your (default) search

That's the big step forward for a powerful AND customizable interface.

Save a search

Want to reuse a search you run often? Type it, check the result, and then use the "Save" icon.

Example: I want to only display my production running VMs. Let's say I use a "prod" tag for those VMs: power_state:running tags:prod

Now, if I click on "Save", I can save this search and give it a name:

Finally, I can find it in the existing Filter list (see "Prod"):

Obviously, you can use it (and combine it!) for far more advanced filters, like filtering by pool, hosts, virtualization mode, whatever you need!
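To make the filter semantics concrete, here is a toy Ruby equivalent of the power_state:running tags:prod search above (the VM list is made up):

vms = [
  { name: 'web1',  power_state: 'running', tags: ['prod'] },
  { name: 'db1',   power_state: 'halted',  tags: ['prod'] },
  { name: 'test1', power_state: 'running', tags: ['dev'] }
]

# Each search term must match, so the filter is a logical AND.
prod_running = vms.select do |vm|
  vm[:power_state] == 'running' && vm[:tags].include?('prod')
end

p prod_running.map { |vm| vm[:name] }   # => ["web1"]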

Managing saved searches

Now that you have saved a search, let's manage it: edit or remove it. Go to your user settings:

You can remove or edit an existing filter. You can also set a default filter for each type of view (VM, Host, or Pool). This way, every time you go to the home view, this default filter will be applied!

Smart migration system

Migrating multiple VMs at once can be complicated. Imagine you want to migrate various VMs from various pools to a specific host:

  • some VMs can be on the same host that you want to migrate to (no need to migrate)
  • some others can be on another host in the same pool (no need to migrate networks)
  • plus some of them can be on a shared SR (no VDI migration needed)
  • finally, others can be on a different pool (need to migrate both storage and network)

We now manage all of these cases at once: don't bother to think about it, just select any VMs you want to move and do it. We also have "smart network mapping": if we find a network with the same name on the destination, we'll bind to it.

Improved backup scheduler

We are now able to display the XOA timezone for scheduled backup. You can also select your own timezone when you set your backup/DR/whatever!

Better patches display

It's very easy now to see any missing patch. See our previous blog post: now you can't miss a XenServer patch.

Submenu for Home view

You can now directly select which type of object you want to display in the home view, via the menu:

Plugin presets

To help users manage their plugins, we added "presets" to guide you.


----

Shared via my feedly newsfeed



Installing MariaDB 10.1.16 on Mac OS X with Homebrew [feedly]



----
Installing MariaDB 10.1.16 on Mac OS X with Homebrew
// MariaDB blogs

Thu, 2016-08-18 15:27
Ben Stillman

Developing on your Mac? Get the latest stable MariaDB version set up on OS X easily with Homebrew. See this step-by-step guide on installing MariaDB 10.1.16.

 

1 Install Xcode

xcode-select --install

bens-mbp:~ ben$ xcode-select --install
xcode-select: note: install requested for command line developer tools

 

2 Install Homebrew

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Bens-MacBook-Pro:~ ben$ ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
==> This script will install:
/usr/local/bin/brew
/usr/local/Library/...
/usr/local/share/doc/homebrew
/usr/local/share/man/man1/brew.1
/usr/local/share/zsh/site-functions/_brew
/usr/local/etc/bash_completion.d/brew

Press RETURN to continue or any other key to abort
==> /usr/bin/sudo /bin/mkdir -p /Users/ben/Library/Caches/Homebrew
Password:
==> /usr/bin/sudo /bin/chmod g+rwx /Users/ben/Library/Caches/Homebrew
==> /usr/bin/sudo /usr/sbin/chown ben /Users/ben/Library/Caches/Homebrew
==> Downloading and installing Homebrew...
remote: Counting objects: 537, done.
remote: Compressing objects: 100% (478/478), done.
remote: Total 537 (delta 31), reused 341 (delta 28), pack-reused 0
Receiving objects: 100% (537/537), 817.70 KiB | 1.25 MiB/s, done.
Resolving deltas: 100% (31/31), done.
From https://github.com/Homebrew/brew
 * [new branch]      master     -> origin/master
HEAD is now at 984ed83 doctor: print check on --debug.
==> Tapping homebrew/core
Cloning into '/usr/local/Library/Taps/homebrew/homebrew-core'...
remote: Counting objects: 3716, done.
remote: Compressing objects: 100% (3603/3603), done.
remote: Total 3716 (delta 15), reused 1863 (delta 4), pack-reused 0
Receiving objects: 100% (3716/3716), 2.88 MiB | 3.74 MiB/s, done.
Resolving deltas: 100% (15/15), done.
Checking connectivity... done.
Tapped 3594 formulae (3,743 files, 8.9M)
==> Installation successful!
==> Next steps
Run `brew help` to get started
Further documentation: https://git.io/brew-docs
==> Homebrew has enabled anonymous aggregate user behaviour analytics
Read the analytics documentation (and how to opt-out) here:
  https://git.io/brew-analytics

 

3 Check Homebrew

brew doctor

bens-mbp:~ ben$ brew doctor
Your system is ready to brew.

 

4 Update Homebrew

brew update

bens-mbp:~ ben$ brew update
Already up-to-date.

 

5 Verify MariaDB Version in Homebrew Repo

brew info mariadb

Bens-MacBook-Pro:~ ben$ brew info mariadb
mariadb: stable 10.1.16 (bottled), devel 10.2.1
Drop-in replacement for MySQL
https://mariadb.org/
Conflicts with: mariadb-connector-c, mysql, mysql-cluster, mysql-connector-c, mytop, percona-server
Not installed
From: https://github.com/Homebrew/homebrew-core/blob/master/Formula/mariadb.rb
==> Dependencies
Build: cmake ✘
Required: openssl ✘
==> Options
--universal
    Build a universal binary
--with-archive-storage-engine
    Compile with the ARCHIVE storage engine enabled
--with-bench
    Keep benchmark app when installing
--with-blackhole-storage-engine
    Compile with the BLACKHOLE storage engine enabled
--with-embedded
    Build the embedded server
--with-libedit
    Compile with editline wrapper instead of readline
--with-local-infile
    Build with local infile loading support
--with-test
    Keep test when installing
--devel
    Install development version 10.2.1
==> Caveats
A "/etc/my.cnf" from another install may interfere with a Homebrew-built
server starting up correctly.

To connect:
    mysql -uroot

To have launchd start mariadb now and restart at login:
  brew services start mariadb
Or, if you don't want/need a background service you can just run:
  mysql.server start

 

6 Install MariaDB

brew install mariadb

Bens-MacBook-Pro:~ ben$ brew install mariadb
==> Installing dependencies for mariadb: openssl
==> Installing mariadb dependency: openssl
==> Downloading https://homebrew.bintray.com/bottles/openssl-1.0.2h_1.el_capitan.bottle.tar.gz
######################################################################## 100.0%
==> Pouring openssl-1.0.2h_1.el_capitan.bottle.tar.gz
==> Caveats
A CA file has been bootstrapped using certificates from the system
keychain. To add additional certificates, place .pem files in
  /usr/local/etc/openssl/certs

and run
  /usr/local/opt/openssl/bin/c_rehash

This formula is keg-only, which means it was not symlinked into /usr/local.

Apple has deprecated use of OpenSSL in favor of its own TLS and crypto libraries

Generally there are no consequences of this for you. If you build your
own software and it requires this formula, you'll need to add to your
build variables:

    LDFLAGS:  -L/usr/local/opt/openssl/lib
    CPPFLAGS: -I/usr/local/opt/openssl/include

==> Summary
  /usr/local/Cellar/openssl/1.0.2h_1: 1,691 files, 12M
==> Installing mariadb
==> Downloading https://homebrew.bintray.com/bottles/mariadb-10.1.16.el_capitan.bottle.tar.gz
######################################################################## 100.0%
==> Pouring mariadb-10.1.16.el_capitan.bottle.tar.gz
==> /usr/local/Cellar/mariadb/10.1.16/bin/mysql_install_db --verbose --user=ben --basedir=/usr/local/Cellar/mariadb/10.1.16 --datadir=/usr/local/var/mysql --tmpdir=/tmp
==> Caveats
A "/etc/my.cnf" from another install may interfere with a Homebrew-built
server starting up correctly.

To connect:
    mysql -uroot

To have launchd start mariadb now and restart at login:
  brew services start mariadb
Or, if you don't want/need a background service you can just run:
  mysql.server start
==> Summary
  /usr/local/Cellar/mariadb/10.1.16: 573 files, 137.1M

 

7 Run the Database Installer

mysql_install_db

Bens-MacBook-Pro:10.1.16 ben$ mysql_install_db
Installing MariaDB/MySQL system tables in '/usr/local/var/mysql' ...
2016-08-16 19:15:02 140735320776704 [Note] /usr/local/Cellar/mariadb/10.1.16/bin/mysqld (mysqld 10.1.16-MariaDB) starting as process 83824 ...
2016-08-16 19:15:02 140735320776704 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2016-08-16 19:15:02 140735320776704 [Note] InnoDB: The InnoDB memory heap is disabled
2016-08-16 19:15:02 140735320776704 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2016-08-16 19:15:02 140735320776704 [Note] InnoDB: Memory barrier is not used
2016-08-16 19:15:02 140735320776704 [Note] InnoDB: Compressed tables use zlib 1.2.5
2016-08-16 19:15:02 140735320776704 [Note] InnoDB: Using SSE crc32 instructions
2016-08-16 19:15:02 140735320776704 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2016-08-16 19:15:02 140735320776704 [Note] InnoDB: Completed initialization of buffer pool
2016-08-16 19:15:02 140735320776704 [Note] InnoDB: Highest supported file format is Barracuda.
2016-08-16 19:15:02 140735320776704 [Note] InnoDB: 128 rollback segment(s) are active.
2016-08-16 19:15:02 140735320776704 [Note] InnoDB: Waiting for purge to start
2016-08-16 19:15:02 140735320776704 [Note] InnoDB:  Percona XtraDB (http://www.percona.com) 5.6.30-76.3 started; log sequence number 1616819
2016-08-16 19:15:02 123145313034240 [Note] InnoDB: Dumping buffer pool(s) not yet started
OK
Filling help tables...
2016-08-16 19:15:04 140735320776704 [Note] /usr/local/Cellar/mariadb/10.1.16/bin/mysqld (mysqld 10.1.16-MariaDB) starting as process 83828 ...
2016-08-16 19:15:04 140735320776704 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2016-08-16 19:15:04 140735320776704 [Note] InnoDB: The InnoDB memory heap is disabled
2016-08-16 19:15:04 140735320776704 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2016-08-16 19:15:04 140735320776704 [Note] InnoDB: Memory barrier is not used
2016-08-16 19:15:04 140735320776704 [Note] InnoDB: Compressed tables use zlib 1.2.5
2016-08-16 19:15:04 140735320776704 [Note] InnoDB: Using SSE crc32 instructions
2016-08-16 19:15:04 140735320776704 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2016-08-16 19:15:04 140735320776704 [Note] InnoDB: Completed initialization of buffer pool
2016-08-16 19:15:04 140735320776704 [Note] InnoDB: Highest supported file format is Barracuda.
2016-08-16 19:15:04 140735320776704 [Note] InnoDB: 128 rollback segment(s) are active.
2016-08-16 19:15:04 140735320776704 [Note] InnoDB: Waiting for purge to start
2016-08-16 19:15:04 140735320776704 [Note] InnoDB:  Percona XtraDB (http://www.percona.com) 5.6.30-76.3 started; log sequence number 1616829
2016-08-16 19:15:04 123145313034240 [Note] InnoDB: Dumping buffer pool(s) not yet started
OK
Creating OpenGIS required SP-s...
2016-08-16 19:15:07 140735320776704 [Note] /usr/local/Cellar/mariadb/10.1.16/bin/mysqld (mysqld 10.1.16-MariaDB) starting as process 83833 ...
2016-08-16 19:15:07 140735320776704 [Note] InnoDB: Using mutexes to ref count buffer pool pages
2016-08-16 19:15:07 140735320776704 [Note] InnoDB: The InnoDB memory heap is disabled
2016-08-16 19:15:07 140735320776704 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2016-08-16 19:15:07 140735320776704 [Note] InnoDB: Memory barrier is not used
2016-08-16 19:15:07 140735320776704 [Note] InnoDB: Compressed tables use zlib 1.2.5
2016-08-16 19:15:07 140735320776704 [Note] InnoDB: Using SSE crc32 instructions
2016-08-16 19:15:07 140735320776704 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2016-08-16 19:15:07 140735320776704 [Note] InnoDB: Completed initialization of buffer pool
2016-08-16 19:15:07 140735320776704 [Note] InnoDB: Highest supported file format is Barracuda.
2016-08-16 19:15:07 140735320776704 [Note] InnoDB: 128 rollback segment(s) are active.
2016-08-16 19:15:07 140735320776704 [Note] InnoDB: Waiting for purge to start
2016-08-16 19:15:07 140735320776704 [Note] InnoDB:  Percona XtraDB (http://www.percona.com) 5.6.30-76.3 started; log sequence number 1616839
2016-08-16 19:15:07 123145313034240 [Note] InnoDB: Dumping buffer pool(s) not yet started
OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
To do so, start the server, then issue the following commands:

'/usr/local/Cellar/mariadb/10.1.16/bin/mysqladmin' -u root password 'new-password'
'/usr/local/Cellar/mariadb/10.1.16/bin/mysqladmin' -u root -h Bens-MacBook-Pro.local password 'new-password'

Alternatively you can run:
'/usr/local/Cellar/mariadb/10.1.16/bin/mysql_secure_installation'

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the MariaDB Knowledgebase at http://mariadb.com/kb or the
MySQL manual for more instructions.

You can start the MariaDB daemon with:
cd '/usr/local/Cellar/mariadb/10.1.16' ; /usr/local/Cellar/mariadb/10.1.16/bin/mysqld_safe --datadir='/usr/local/var/mysql'

You can test the MariaDB daemon with mysql-test-run.pl
cd '/usr/local/Cellar/mariadb/10.1.16/mysql-test' ; perl mysql-test-run.pl

Please report any problems at http://mariadb.org/jira

The latest information about MariaDB is available at http://mariadb.org/.
You can find additional information about the MySQL part at:
http://dev.mysql.com
Support MariaDB development by buying support/new features from MariaDB
Corporation Ab. You can contact us about this at sales@mariadb.com.
Alternatively consider joining our community based development effort:
http://mariadb.com/kb/en/contributing-to-the-mariadb-project/

 

8 Start MariaDB

mysql.server start

Bens-MacBook-Pro:10.1.16 ben$ mysql.server start
Starting MySQL
. SUCCESS!

 

9 Secure the Installation

mysql_secure_installation

Bens-MacBook-Pro:10.1.16 ben$ mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n]
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n]
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n]
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n]
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n]
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

 

10 Connect to MariaDB

mysql -u root -p

Bens-MacBook-Pro:10.1.16 ben$ mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 11
Server version: 10.1.16-MariaDB Homebrew

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
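If you develop in Ruby, you can also verify the connection programmatically. Here is a minimal sketch using the mysql2 gem (gem install mysql2); the password is a placeholder for whatever you set during mysql_secure_installation:

require 'mysql2'

client = Mysql2::Client.new(
  host:     'localhost',
  username: 'root',
  password: 'your-root-password'   # placeholder
)

puts client.query('SELECT VERSION() AS v').first['v']   # e.g. "10.1.16-MariaDB"
client.close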

About the Author


Ben Stillman is a Principal Consultant working with MariaDB and MySQL.


----

Shared via my feedly newsfeed



XenServer 7.0 performance improvements part 4: Aggregate I/O throughput improvements [feedly]



----
XenServer 7.0 performance improvements part 4: Aggregate I/O throughput improvements
// Latest blog entries

The XenServer team has made a number of significant performance and scalability improvements in the XenServer 7.0 release. This is the fourth in a series of articles that will describe the principal improvements. For the previous ones, see:

  1. http://xenserver.org/blog/entry/dundee-tapdisk3-polling.html
  2. http://xenserver.org/blog/entry/dundee-networking-multi-queue.html
  3. http://xenserver.org/blog/entry/dundee-parallel-vbd-operations.html

In this article we return to the theme of I/O throughput. Specifically, we focus on improvements to the total throughput achieved by a number of VMs performing I/O concurrently. Measurements show that XenServer 7.0 achieves aggregate network throughput more than three times that of XenServer 6.5, and also improves aggregate storage throughput.

What limits aggregate I/O throughput?

When a number of VMs are performing I/O concurrently, the total throughput that can be achieved is often limited by dom0 becoming fully busy, meaning it cannot do any additional work per unit time. The I/O backends (netback for network I/O and tapdisk3 for storage I/O) together consume 100% of available dom0 CPU time.

How can this limit be overcome?

Whenever there is a CPU bottleneck like this, there are two possible approaches to improving the performance:

  1. Reduce the amount of CPU time required to perform I/O.
  2. Increase the processing capacity of dom0, by giving it more vCPUs.

Surely approach 2 is easy and will give a quick win...? Intuitively, we might expect the total throughput to increase proportionally with the number of dom0 vCPUs.

Unfortunately it's not as straightforward as that. The following graph shows what happens to the aggregate network throughput on XenServer 6.5 when the number of dom0 vCPUs is artificially increased. (In this case, we are measuring the total network throughput of 40 VMs communicating amongst themselves on a single Dell R730 host.)

[Graph: aggregate network throughput of 40 VMs vs. number of dom0 vCPUs on XenServer 6.5]

Counter-intuitively, the aggregate throughput decreases as we add more processing power to dom0! (This explains why the default was at most 8 vCPUs in XenServer 6.5.)

So is there no hope for giving dom0 more processing power...?

The explanation for the degradation in performance is that certain operations run more slowly when there are more vCPUs present. In order to make dom0 work better with more vCPUs, we needed to understand what those operations are, and whether they can be made to scale better.

Three such areas of poor scalability were discovered deep in the innards of Xen by Malcolm Crossley and David Vrabel, and improvements were made for each:

  1. Maptrack lock contention – improved by http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=dff515dfeac4c1c13422a128c558ac21ddc6c8db
  2. Grant-table lock contention – improved by http://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=b4650e9a96d78b87ccf7deb4f74733ccfcc64db5
  3. TLB flush on grant-unmap – improved by https://github.com/xenserver/xen-4.6.pg/blob/master/master/avoid-gnt-unmap-tlb-flush-if-not-accessed.patch

The result of improving these areas is dramatic – see the green line in the following graph:

[Graph: aggregate network throughput vs. number of dom0 vCPUs, before and after the scalability fixes]

Now, throughput scales very well as the number of vCPUs increases. This means that, for the first time, it is now beneficial to allocate many vCPUs to dom0 – so that when there is demand, dom0 can deliver. Hence we have given XenServer 7.0 a higher default number of dom0 vCPUs.

How many vCPUs are now allocated to dom0 by default?

Most hosts will now get 16 vCPUs by default, but the exact number depends on the number of CPU cores on the host. The following graph summarises how the default number of dom0 vCPUs is calculated from the number of CPU cores on various current and historic XenServer releases:

[Graph: default number of dom0 vCPUs as a function of host CPU cores, across XenServer releases]

Summary of improvements

I will conclude with some aggregate I/O measurements comparing XenServer 6.5 and 7.0 under default settings (no dom0 configuration changes) on a Dell R730xd.

  1. Aggregate network throughput – twenty pairs of 32-bit Debian 6.0 VMs sending and receiving traffic generated with iperf 2.0.5.
    [Graph: aggregate network throughput, XenServer 6.5 vs. 7.0]
  2. Aggregate storage IOPS – twenty 32-bit Windows 7 SP1 VMs each doing single-threaded, serial, sequential 4KB reads with fio to a virtual disk on an Intel P3700 NVMe drive.
    [Graph: aggregate storage IOPS, XenServer 6.5 vs. 7.0]

----

Shared via my feedly newsfeed



XenServer 7.0 performance improvements part 3: Parallelised plug and unplug VBD operations in xenopsd [feedly]



----
XenServer 7.0 performance improvements part 3: Parallelised plug and unplug VBD operations in xenopsd
// Latest blog entries

The XenServer team has made a number of significant performance and scalability improvements in the XenServer 7.0 release. This is the third in a series of articles that will describe the principal improvements. For the first two, see here:

  1. http://xenserver.org/blog/entry/dundee-tapdisk3-polling.html
  2. http://xenserver.org/blog/entry/dundee-networking-multi-queue.html

The topic of this post is control plane performance. XenServer 7.0 achieves significant performance improvements through support for parallel VBD operations in xenopsd. With these improvements, xenopsd can plug and unplug many VBDs (virtual block devices) at the same time, substantially reducing the duration of VM lifecycle operations (start, migrate, shutdown) for VMs with many VBDs, and making it practical to operate VMs with up to 255 VBDs.

Background of the VM lifecycle operations

In XenServer, xenopsd is the dom0 component responsible for VM lifecycle operations:

  • during a VM start, xenopsd creates the VM container and then plugs the VBDs before starting the VCPUs;
  • during a VM shutdown, xenopsd stops the VCPUs and then unplugs the VBDs before destroying the VM container;
  • during a VM migrate, xenopsd creates a new VM container, unplugs the VBDs of the old VM container, and plugs the VBDs for the new VM before starting its VCPUs; while the VBDs are being unplugged and plugged, the user experiences VM downtime, because both the old and new VM containers are paused.

Measurements have shown that a large part, usually most, of the duration of these VM lifecycle operations is due to plugging and unplugging the VBDs, especially on slow or contended storage backends.

[Diagram: sequential VBD plug operations]

 

Why does xenopsd take some time to plug and unplug the VBDs?

The completion of a xenopsd VBD plug operation involves the execution of two storage layer operations, VDI attach and VDI activate (where VDI stands for virtual disk image). These VDI operations include control plane manipulation of daemons, block devices and disk metadata in dom0, which will take different amounts of time to execute depending on the type of the underlying Storage Repository (SR, such as LVM, NFS or iSCSI) used to hold the VDIs, and the current load on the storage backend disks and their types (SSDs or HDDs). Similarly, the completion of a xenopsd VBD unplug operation involves the execution of two storage layer operations, VDI deactivate and VDI detach, with the corresponding overhead of manipulating the control plane of the storage layer.

If the underlying physical disks are under high load, there may be contention preventing progress of the storage layer operations, and therefore xenopsd may need to wait many seconds before the requests to plug and unplug the VBDs can be served.

Originally, xenopsd would execute these VBD operations sequentially, and the total time to finish all of them for a single VM would depend on the number of VBDs in the VM. Essentially, it would be the sum of the time to operate each of the VBDs of this VM, which could mean several minutes of waiting for a lifecycle operation of a VM that had, for instance, 255 VBDs.

What are the advantages of parallel VBD operations?

Plugging and unplugging the VBDs in parallel in xenopsd:

  • provides a total duration for the VM lifecycle operations that is independent of the number of VBDs in the VM. This duration will typically be the duration of the longest individual VBD operation amongst the parallel VBD operations for that VM;
  • provides a significant, immediately noticeable improvement for the user across all operations involving more than one VBD per VM. The more devices involved, the larger the improvement, up to the saturation of the underlying storage layer;
  • applies immediately across all of the VM start, VM shutdown, and VM migrate lifecycle operations.

[Diagram: parallel VBD plug operations]

 

Are there any disadvantages or limitations?

Plugging and unplugging VBDs uses dom0 memory. The main disadvantage of doing these in parallel is that dom0 needs more memory to handle all the parallel operations. To prevent situations where a large number of such operations would cause dom0 to run out of memory, we have added two limits:

  • the maximum number of global parallel operations that xenopsd can request is the same as the number of xenopsd worker-pool threads, as defined by worker-pool-size in /etc/xenopsd.conf. This prevents regression in maximum dom0 memory usage compared to when xenopsd performed the VBD operations of each VM sequentially. Increasing this value increases the number of parallel VBD operations, at the cost of about 15MB of extra dom0 memory for each extra parallel VBD operation.
  • the maximum number of per-VM parallel operations that xenopsd can request is currently fixed at 10, which covers a wide range of VMs and still provides a 10x improvement in lifecycle operation times for VMs that have more than 10 VBDs.
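To illustrate the idea behind the per-VM limit, here is a toy Ruby sketch of bounded parallel plugs. It is not xenopsd's actual code (xenopsd is written in OCaml), and plug_vbd is a stand-in for the real storage-layer work:

PER_VM_LIMIT = 10

def plug_vbd(vbd)
  sleep(rand / 10)   # simulate a VDI attach + activate
  puts "plugged #{vbd}"
end

def plug_all(vbds)
  # Run at most PER_VM_LIMIT plugs at a time; the total duration is roughly
  # that of the slowest batch rather than the sum over all VBDs.
  vbds.each_slice(PER_VM_LIMIT) do |batch|
    batch.map { |vbd| Thread.new { plug_vbd(vbd) } }.each(&:join)
  end
end

plug_all((1..25).map { |i| "vbd-#{i}" })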

Where do I find the changes?

The changes that implemented this feature are available in github at https://github.com/xapi-project/xenopsd/pull/250

What sort of theoretical improvements should I expect in XenServer 7.0?

The exact numbers depend on the SR type, storage backend load characteristics, and the limits described in the previous section. Given those limits, the duration of VBD plugs for a single VM will follow the pattern in the following table:

Number n of VBDs/VM    Improvement of VBD operations
<= 10 VBDs/VM          n times faster
> 10 VBDs/VM           10 times faster

The table above assumes that the maximum number of global parallel operations discussed in the section above is not reached. If you want to guarantee the improvement in the table for x > 1 simultaneous VM lifecycle operations, at the expense of using more dom0 memory in the worst case, you will probably want to set worker-pool-size = (n * x) in /etc/xenopsd.conf, where n reflects the average number of VBDs/VM amongst all VMs, up to a maximum of n = 10. For example, with VMs averaging n = 8 VBDs and x = 2 simultaneous lifecycle operations, you would set worker-pool-size = 16.

What sort of practical improvements should I expect in XenServer 7.0?

The VBD plug and unplug operations are only part of the overall work needed to execute a VM lifecycle operation. The remaining parts, such as creation of the VM container and VIF plugs, dilute the VBD improvements of the previous section, though those improvements are still significant. Some examples, using an EXT SR on a local SSD storage backend:

VM lifecycle operation           Improvement with 8 VBDs/VM

Toolstack time to start a single VM

[Graph: toolstack time to start a single VM with 8 VBDs]

 

Toolstack time to bootstorm 125 VMs

[Graph: toolstack time to bootstorm 125 VMs with 8 VBDs each]

 

The approximately 2s improvement in single VM start time was caused by plugging the 8 VBDs in parallel. As we see in the second row of the table, this can be a significant advantage in a bootstorm.

In XenServer 7.0, not only does xenopsd execute VBD operations in parallel, but the storage layer operation times on VDIs have also improved, so in your XenServer 7.0 environment you may notice VM lifecycle time improvements beyond those expected from parallel VBD operations alone, compared to XenServer 6.5 SP1.

 


----

Shared via my feedly newsfeed



Chef Launches Certification and Training Program Amid Increasing Demand for DevOps Skills [feedly]



----
Chef Launches Certification and Training Program Amid Increasing Demand for DevOps Skills
// Chef Blog

New Certification Program Provides Developers, System Administrators, and IT Professionals with Automation Skills at the Core of DevOps

AUSTIN, TX – July 12, 2016 – From ChefConf 2016, Chef, the leader in automation for DevOps, today announced the Chef Certification Program, providing IT practitioners with the tools and resources necessary to build modern automation expertise. This new program combines with Chef's existing Partner Certification Program to offer skills training and best practices for Chef's ecosystem of technology and services providers to successfully deploy, service, and support Chef deployments.

Based on nearly 10 years of expertise in automation, the Chef Certification Program uses a combined knowledge and performance assessment approach to demonstrate Chef proficiency in real world scenarios. This means learning and testing Chef skills for a variety of capabilities, from local cookbook development to Windows automation. Combining the extensive curriculum of Learn Chef — including tutorials, best practice guides, in-person meetups, and online training — with customized assessments for corresponding skills "badges," the Chef Certification Program enables practitioners to attain the skills most applicable to their role, organization, and career goals. Assessment badges reflect a mastery of practical skills for solving the toughest IT and business challenges using automation.

As the enterprise increasingly employs DevOps initiatives to drive business velocity, automation skills are fundamental to achieving success. Automation skills also present a significant career opportunity for IT practitioners. As DevOps adoption continues to rise, the value of Chef capabilities is also dramatically increasing. The most recent Dice Technology Salary Survey found that practitioners who cultivate Chef skills can command more than $130,000 in annual salary, ranking among the top ten technology skills in the world.

Program Highlights

The Chef Certification Program offers credentials to developers, system administrators, and IT practitioners everywhere who demonstrate the skills needed for DevOps success using Chef products. This new program augments Chef's existing Partner Certification Program by providing a clear skills development path that partners can use to train their services and support teams on Chef Automate.

Badges immediately available include:

  • Basic Chef Fluency: This badge certifies the recipient understands and can explain Chef concepts and features. From mastering common Chef terminology to detailing how Chef works, practitioners who earn this badge are certified in Chef's basic design philosophy and value proposition across both open source and commercial platforms.
  • Local Cookbook Development: To obtain this badge, practitioners must demonstrate the ability to properly develop a basic Chef cookbook. This means the recipient can take an existing process and automate it using Chef recipes that follow data-driven and composable code patterns. The Local Cookbook Development badge certifies the recipient can compose a recipe, package it in a Cookbook, test, and deploy the code to model an automated solution for a previously manual process or task (see the minimal recipe sketch below).
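For a sense of what the Local Cookbook Development badge covers, here is a minimal recipe built from core Chef resources; the web server choice and file contents are just an example:

package 'nginx'

service 'nginx' do
  action [:enable, :start]
end

file '/var/www/html/index.html' do
  content '<h1>Hello from Chef</h1>'
  mode '0644'
end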

Badges for Chef on Windows and Extending Chef, both offering practitioners an opportunity to become certified for more specific and advanced skills, will be available later this year.

Chef certification assessments are delivered online in a scheduled, timed, and proctored exam environment at https://training.chef.io/certification. Certification is valid for two years.

Supporting Quotes

"Automation skills are at the heart of DevOps. Our new Certification Program gives practitioners a direct path to not only drive DevOps success within their organizations, but also advance their own careers. Plus, it's now even easier for our partners to get their teams certified on Chef."

  • Brian Turner, Director of Learning Services, Chef




----

Shared via my feedly newsfeed



Chef Unveils New Flagship Product Chef Automate [feedly]



----
Chef Unveils New Flagship Product Chef Automate
// Chef Blog

New Commercial Offering Unifies How Modern Application Teams Collaborate to Automate Software Delivery at Speed and Scale

AUSTIN, TX – July 12, 2016 – From ChefConf 2016, Chef, the leader in automation for DevOps, today announced Chef Automate, its new commercial offering that gives businesses unprecedented control and collaboration across the entire software delivery process. Chef Automate unifies Chef's entire product portfolio into a single offering, integrating a shared workflow pipeline for continuous deployment, formerly Chef Delivery, and Chef Compliance, which allows customers to meet security and regulatory requirements. In addition, Chef Automate includes a new Visibility feature that, for the first time, provides sophisticated analytics into all resources managed by Chef through a single interface. Chef Automate enables any enterprise to safely deploy infrastructure and applications at high velocity and scale.

Chef Automate builds on the company's widely adopted open-source projects: Chef for infrastructure and cloud automation, InSpec for encoding compliance policy, and the newly released Habitat for application automation.

Automation is the foundation of high velocity business. Chef's flexible, open source technology has become the automation standard upon which businesses build systems that respond quickly and safely to changing needs. Chef Automate brings together the best of Chef's proven automation capabilities into a unified offering, providing enterprises with a common workflow for testing and releasing software. This encourages successful collaboration and makes it easier to deploy projects with many dependencies that span multiple teams.

Product Highlights

Chef Automate is a complete solution for automating the entire technology stack — from IT system components all the way to the applications — through a fixed, efficient, and reusable workflow. Chef Automate enables DevOps success in the enterprise by delivering:

  • Comprehensive Visibility: Chef Automate provides system-wide insight across all applications and their supporting environments, such as development, quality assurance and production. For the first time, Chef is giving users the power to monitor and manage IT processes and environments all through a single interface. This helps companies improve efficiency, quickly identify system issues and reduce risk. Visibility delivers deep insight into operational, compliance and workflow events with:
    • A single dashboard with access to analytics, trending data, and health status about all Chef-managed resources, including Chef Automate, as well as open source Chef, InSpec, and Habitat.
    • Sophisticated search and filter capabilities to see the status of single resources or groups of resources.
    • Visualization of successful changes, failures and entire deployments to gain actionable insight into system trends.
  • Unified Workflow: Chef Automate includes a shared pipeline (previously Chef Delivery) that provides a fixed, efficient and reusable workflow delivering both speed and safety when changing infrastructure or applications. The new enterprise-grade workflow management console provides a single interface for monitoring progress and promoting changes as they move from development to production.
  • Proactive Compliance: Chef Automate lets companies proactively know whether they are operating according to their organization's security policies. Chef Automate enables you to identify and remediate any compliance issues early in the development process, well before deployment to production. This makes determining compliance a fast, automated process that occurs as the software moves through the pipeline. Chef Automate also includes prewritten compliance profiles based on Center for Internet Security (CIS) benchmarks. Because it is an integrated platform, Chef Automate visualizes the status of nodes in terms of their adherence to policy.

Chef Automate is available immediately through an annual subscription. It includes commercial support for Chef, InSpec and Habitat. These three open source projects are all under the Apache 2.0 license and available for free download.

Chef also today announced its new Chef Certification Program (see separate release). The program offers training and credentials for developers, system administrators and IT practitioners everywhere who demonstrate the skills needed for DevOps success using Chef products. This new program augments Chef's existing Partner Certification Program by providing a clear skills development path that partners can use to train their services and support teams on Chef Automate.

Company Momentum

Chef is at the forefront of the Agile, Lean and DevOps movements and continues to experience rapid business and community growth. The release of Chef Automate reflects the company's deep experience with hundreds of commercial customers who use automation to drive DevOps transformation.

Today, more than 80 percent of the company's revenue comes from enterprise organizations, with more than 900 commercial customers using Chef to become software-driven organizations. Chef has tens of millions of machines under management on any given day. To date, the Chef client has been downloaded more than 37 million times, with an average of more than 1.5 million downloads per month in 2016. This momentum contributed to Chef's total bookings growing more than 80 percent year-over-year in Q2 2016.

Regionally, Chef's EMEA business continues to rapidly expand with revenue, annual recurring revenue (ARR) and total number of customers all doubling in the last 12 months. Chef's EMEA business now serves more than 150 customers in the entire region, from South Africa to the United Kingdom to the Netherlands.

The partner ecosystem surrounding Chef continues to rapidly expand and mature. Chef's Partner Cookbook Program, which guides partners through best practices for Cookbook creation and maintenance, has more than doubled in size in just four months. The program began with 3Scale, Alert Logic, CloudPassage, Dynatrace, Infoblox, and NetApp all open-sourcing tested Cookbooks users can trust to automate the toughest enterprise IT environments. The program has expanded and additional partners now include Heavy Water Consulting, Datadog, Graylog, Rackspace, Sumo Logic, and Threat Stack, setting a new standard in code quality for automating everything from applications to storage resources. Partner Cookbooks are now available for download through the Chef Supermarket.

ChefConf 2016 brings together more than 1,500 IT leaders and DevOps practitioners at the JW Marriott in Austin, TX, today and tomorrow to take DevOps further than ever before. The conference will be live streamed at https://chefconf.chef.io/. Chef customers and partners, including Amazon Web Services, Adobe Systems, Alaska Airlines, Arista Networks, Booz Allen Hamilton, Facebook, GE Digital, Hearst Business Media, Hewlett Packard Enterprise, Liberty Mutual Insurance, Microsoft, NCR, National Football League, Nordstrom, SAP NS2, Samsung Electronics America, Standard Bank, Target, Texas A&M University, and WestPac NCZ are all presenting at this year's conference.

Supporting Quotes

"Velocity is the ultimate goal of every business and in today's software-driven economy, that can only happen through IT automation. However, many organizations are still forced to choose between speed and security. We think that's absurd. Chef Automate allows businesses everywhere to stop trading off between velocity and safety — you can have both."

  • Ken Cheney, Vice President of Business Development and Product Marketing, Chef




----

Shared via my feedly newsfeed



Hewlett Packard Enterprise and the Chef Partner Cookbook Program [feedly]



----
Hewlett Packard Enterprise and the Chef Partner Cookbook Program
// Chef Blog

I'd like to announce that Hewlett Packard Enterprise (HPE) is now part of the Chef Partner Cookbook Program. They have certified the OneView and iLO cookbooks, which provide interfaces via Chef to two flagship HPE products.

The OneView cookbook allows users to configure and manage HPE hardware using OneView's unified API.  Using this cookbook, users specify the types of infrastructure to be configured (e.g., an HPE server blade), and the way they need to be configured (e.g., attached to this storage pool and connected to these network sets). The cookbook allows complex infrastructure configurations to be fully automated without requiring complex choreography across servers, storage and networking APIs.

The iLO cookbook allows users to configure the Integrated Lights-Out remote server management layer for HPE Servers. You can easily apply configuration across your fleet of servers, taking the headache out of managing your data center. It also provides a mechanism for consolidating configuration information from the iLOs locally or on the Chef server, giving you visibility and reporting capabilities.

The Chef Partner Cookbook Program is a collaboration between Chef and the vendor to help validate cookbooks in our public supermarket.

Congratulations to Hewlett Packard Enterprise!



----

Shared via my feedly newsfeed



The Community Talks DevOps & Open Source at ChefConf [feedly]



----
The Community Talks DevOps & Open Source at ChefConf
// Chef Blog

The Chef Community came alive at ChefConf in Austin last week with more than 1,500 passionate DevOps leaders, practitioners, and innovators meeting to talk infrastructure, applications, automation, and driving business value through IT. Our CEO Barry Crist took the stage to talk about what it means to ship ideas in today's modern workplace. Our CTO and Co-founder Adam Jacob expressed the importance of designing technology with humanity in mind.

We heard many discussions about what it means to be a part of the DevOps community. But who better to tell you about those discussions than ChefConf attendees themselves!

What does it mean to practice DevOps?

"DevOps is the embodiment of process and improvements that other industries have learned, applied to IT and how companies can deliver value to customers." – Nirmal Mehta, Chief Technologist, Booz Allen Hamilton

"Alaska has always been an innovator looking to provide differentiated customer experience. In order to do that, you need backend tools that provide stability and the ability to push things out quickly." – Veresh Sita, CIO, Alaska Airlines

Why is DevOps the key to customer experience?

"Delivering on the final product is key. Your job's not done if the customer can't accomplish what they want. If the customer can't do it, it doesn't matter if your section of the code was good. The customer experience is all that matters." – Adam Mikeal, Director of Information Technology at the College of Architecture, Texas A&M University

What are the benefits to being part of an open source community?

"Open source is important because it's a community practice. It is building a people who are invested in driving a whole industry forward, not just our individual silos of information or product areas." – Naomi Reeves, Senior Engineer, Target

"When something is broken you want to be able to look under the hood and fix it. That's a key factor in open source. It helps you think better, contribute back to the community and fix bugs. It allows the community to experiment and build things – like GitHub – that didn't exist 10 years ago." – Matt Medeiros, Systems Engineer

Thanks for joining us at ChefConf – and if you weren't there, we hope you can join us for next year's event. See you in May 2017, and until then, stay weird, Austin!

The post The Community Talks DevOps & Open Source at ChefConf appeared first on Chef Blog.


----

Shared via my feedly newsfeed



Cask Data and the Chef Partner Cookbook Program [feedly]



----
Cask Data and the Chef Partner Cookbook Program
// Chef Blog

I'd like to announce that Cask Data is now part of the Chef Partner Cookbook Program. They have certified the Cask Data Application Platform Cookbook.

Cask Data is a startup in Palo Alto, California, focused on giving developers access to powerful big data technology without the steep learning curve. The Cask Data Application Platform (CDAP) is an open source framework for rapidly delivering solutions on Apache Hadoop™. It integrates and abstracts the underlying Hadoop technologies to provide a simple, consistent platform for building, deploying, and managing complex data analytics applications in the cloud or on premises.
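
If you want to try the certified cookbook, a minimal Berksfile pulling it from the public Supermarket might look like the sketch below; the cookbook name is an assumption, so confirm it on the Supermarket first.

    # Berksfile -- fetch the certified cookbook from the public Supermarket.
    source 'https://supermarket.chef.io'

    cookbook 'cdap'   # name assumed for illustration; confirm on the Supermarket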

The Chef Partner Cookbook Program is a collaboration between Chef and the vendor to help validate cookbooks in our public Supermarket.

Congratulations to Cask Data!

The post Cask Data and the Chef Partner Cookbook Program appeared first on Chef Blog.


----

Shared via my feedly newsfeed



[Watch] Chef Automate: Scale your use of automation [feedly]



----
[Watch] Chef Automate: Scale your use of automation
// Chef Blog

Chef Automate is our new approach to scaling up the use of automation across your organization. It builds on our open-source projects: Chef for infrastructure and cloud automation, InSpec for compliance automation, and the newly released Habitat for application automation.

Moving from manual processes to automated ones, incorporating compliance tests into your deployment pipeline, or better managing your applications throughout their lifecycle all present similar obstacles. With Chef Automate, your team has one product that brings end-to-end visibility, compliance, and a unified workflow to your organization's deployment pipelines.

Watch this recorded webinar to learn about the different capabilities of Chef Automate and how it unifies Chef, InSpec, and Habitat into a comprehensive automation strategy for any company in today's digital world.

We'll show you how:

  • Workflow provides a common pipeline for governance and dependency management.
  • Visibility gives you deep insight into what's happening in your organization, including serverless chef-client runs and data from multiple Chef servers.
  • Compliance enables automated compliance assessments as part of your workflow pipelines.

At the end of this post you'll find a summary of Q&A from the live presentation, including questions we didn't have time to answer during the live event.

Q&A From Live Presentation:

Q: Where does Test Kitchen fit into the Workflow feature of Chef Automate?

A: Workflow pipelines are driven by what is defined in the underlying build cookbook. For example, to deliver a cookbook through Workflow, you would use the delivery-truck build cookbook, which runs the InSpec tests you include with your cookbook.
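
For a sense of what those tests look like, here is a minimal InSpec sketch of the kind a cookbook might ship. The example is ours rather than from the webinar, and it assumes a cookbook that installs and runs nginx.

    # Assumed location, e.g. test/smoke/default/default_test.rb in the cookbook.
    # Verifies that the (hypothetical) cookbook installed and started nginx.
    describe package('nginx') do
      it { should be_installed }
    end

    describe service('nginx') do
      it { should be_enabled }
      it { should be_running }
    end

    describe port(80) do
      it { should be_listening }
    end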

Q: Does Chef Automate licensing replace Chef Delivery licensing? Does this replace the Chef server licensing or is it an additional license?

A: The Chef Automate licensing is designed to simplify the licensing historically used for the Chef product line. If you have previous licensing, it's best to reach out to your account manager or sales@chef.io to learn how to get started with Chef Automate.

Q: Is Chef Delivery integrated into Chef Automate?

A: Chef Delivery is now the Workflow feature of Chef Automate.

Q: Is there a migration path from chef-server -> chef-automate? e.g. to port over the organizations, nodes, and clients?

A: The Chef Server is still a core piece of Chef Automate. For information on ensuring your existing Chef Server implementation is ready for Chef Automate, please refer to https://docs.chef.io/install_chef_automate.html

Q: What does it look like switching from chef-server, chef-analytics, chef-reporting to Chef Automate? Both technically and license-wise?

A: Technically, the Chef Server API is part of Chef Automate, which means all of your cookbooks will just work. The functionality of Reporting and Analytics is now wrapped into the Visibility feature of Automate. License-wise, it's a simpler model too – just one license.

Q: Is Chef Automate a hosted offering? Is this different than Hosted Chef?

A: Chef Automate is not a hosted offering; it is designed to be installed in your environment. Hosted Chef is a great low-overhead option for using the core features of Chef; however, it is not planned to include the features of Chef Automate.

Q: Are there details on how to setup Chef Automate as a highly available system on the website or documentation?

A: Chef Automate does not support a high availability setup.

Q: How do I learn Chef from a basic level?

A: Learn Chef (learn.chef.io) is the best place to begin your journey of learning Chef.

Q: In my org we use git and Jenkins, will this be part of the workflow in Chef Automate?

A: Workflow can call out to your existing Jenkins implementation and serve as the primary way you interact with it.

Q: What are the default security standards that come with Compliance?

A: Compliance includes several InSpec profiles out of the box, covering baseline security practices as well as CIS standards.
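
As a rough illustration of what a profile's controls look like (this control is invented for the example, not taken from a shipped profile), an SSH hardening check might read:

    # Illustrative control -- the ID, title, and impact value are made up.
    control 'sshd-disallow-root-login' do
      impact 1.0
      title 'Disallow root SSH login'
      desc 'Root should not be able to log in directly over SSH.'
      describe sshd_config do
        its('PermitRootLogin') { should eq 'no' }
      end
    end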

Q: Compliance has two hard parts: all the manual work responding to requests, and audit requirements that change each time. We don't know what the auditors will ask until they arrive. How do we automate when we don't know the questions ahead of time? Any suggestions?

A: Compliance allows you to trigger ad-hoc audits to verify compliance against newly added profiles. Workflow allows you to integrate these profiles into your pipeline so that they are continually verified every time you ship. As you add to your compliance profiles over time, verifying the requirements you already know about, and quickly folding in new ones, becomes easier.

Q: Can Chef Automate run on EC2 in AWS?

A: Yes, the Chef Automate installation can be set up in AWS.

Q: We are a very large organization. I can't change everyone's workflow, but I want to use Chef Automate in Operations for Infra and Compliance. Can I skip Habitat and use Chef Automate without it? I don't ship software.

A: Definitely. Chef Automate will bring additional value whether you're using it for your entire stack or slowly adding to what is being managed through it.

Q: Does this workflow/pipeline support other languages and integration testing tools, such as PowerShell and Pester?

A: Absolutely! Workflow in Chef Automate is flexible about the content of the jobs it runs. For example, if you're building Windows boxes, then Pester is a great testing tool and PowerShell is the native shell on Windows – you'll want to use both, and Workflow supports that.

The post [Watch] Chef Automate: Scale your use of automation appeared first on Chef Blog.


----

Shared via my feedly newsfeed

