Tuesday, August 25, 2015

Apple Granted 45 Patents Covering Several Apple Watch Design Wins, Touch ID & NFC - Patently Apple

Sent from my iPhone

Apple patent hints that the company is considering a network-attached storage device — Apple World Today


Thursday, August 20, 2015

How to Set Up a VPN in Kali 2.0. Works Like a Charm! [feedly]

How to Set Up a VPN in Kali 2.0. Works Like a Charm!
// Null Byte « WonderHowTo

Kali 2.0 is out and she is awesome! In terms of functionality, one thing that has changed in this distribution is how you set up a VPN. Formerly, a user could add a VPN connection by right-clicking the small computer icon in the upper right-hand corner of the screen (the one that indicates Internet connectivity). Since that option is no longer available, this tutorial demonstrates how to set up a VPN on your new Kali 2.0 system. VPNs serve a number of purposes; from a hacker's perspective, the most important is hiding his or her IP address. Following this tutorial... more
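The full walkthrough is behind the link above, but the usual first step on Kali 2.0 is installing the NetworkManager VPN plugins, which the default install omits; once they are present, the "Add VPN" option reappears in the network menu. A provisioning sketch (package names assume a stock, Debian-based Kali 2.0; pick the plugins matching your VPN type):

```shell
# Assumption: stock Kali 2.0 with NetworkManager (not taken from the tutorial).
# Install NetworkManager plugins for common VPN types.
apt-get update
apt-get install -y network-manager-openvpn network-manager-openvpn-gnome \
                   network-manager-pptp network-manager-pptp-gnome \
                   network-manager-vpnc network-manager-vpnc-gnome
# Restart NetworkManager so it picks up the newly installed plugins.
service network-manager restart
```

After the restart, the VPN tab in the NetworkManager connection editor should offer the installed VPN types.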


Shared via my feedly reader

Creating Custom Reports and Dashboards Using MySQL Queries [feedly]

Creating Custom Reports and Dashboards Using MySQL Queries
// Virtualization Management Software & Data Center Control | VMTurbo » VMTurbo Blog

If I were to ask the owner of a traditional alerting or monitoring tool why they need reports of their virtual environment's state, it would probably seem like a silly question. Of course you need reports to help you determine … Continue Reading »

The post Creating Custom Reports and Dashboards Using MySQL Queries appeared first on Virtualization Management Software & Data Center Control | VMTurbo.


Using Virtual Machines to Improve Container Security with rkt v0.8.0 [feedly]

Using Virtual Machines to Improve Container Security with rkt v0.8.0
// CoreOS Blog

Today we are releasing rkt v0.8.0. rkt is an application container runtime built to be efficient, secure and composable for production environments.

This release includes new security features, including initial support for user namespaces and enhanced container isolation using hardware virtualization. We have also introduced a number of improvements such as host journal integration, container socket activation, improved image caching, and speed enhancements.

Intel Contributes rkt stage1 with Virtualization

Intel and rkt

The modular design of rkt enables different execution engines and containerization systems to be built and plugged in. This is achieved using a staged architecture, where the second stage ("stage1") is responsible for creating and launching the container. When we launched rkt, it featured a single, default stage1 which leverages Linux cgroups and namespaces (a combination commonly referred to as "Linux containers").

With the help of engineers at Intel, we have added a new rkt stage1 runtime that utilizes virtualization technology. This means an application running under rkt using this new stage1 can be isolated from the host kernel using the same hardware features that are used in hypervisors like Linux KVM.

In May, Intel announced a proof-of-concept of this feature built on top of rkt, as part of their Intel® Clear Containers effort to utilize hardware-embedded virtualization technology features to better secure container runtimes and isolate applications. We were excited to see this work taking place and being prototyped on top of rkt as it validated some of the early design choices we made, such as the concepts of runtime stages and pods. Here is what Arjan van de Ven from Intel's Open Source Technology Center had to say:

"Thanks to rkt's stage-based architecture, the Intel® Clear Containers team was able to rapidly integrate our work to bring the enhanced security of Intel® Virtualization Technology (Intel® VT-x) to the container ecosystem. We are excited to continue working with the rkt community to realize our vision of how we can enhance container security with hardware-embedded technology, while delivering the deployment benefits of containerized apps."

Since the prototype announcement in May we have worked closely with the team from Intel to ensure that features such as one IP-per-pod networking and volumes work in a similar way when using virtualization. Today's release of rkt sees this functionality fully integrated to make the lkvm backend a first-class stage1 experience. So, let's try it out!

In this example, we will first run a pod using the default cgroups/namespace-based stage1. Let's launch the container with systemd-run, which will construct a unit file on the fly and start it. Checking the status of this unit will show us what's going on under the hood.

$ sudo systemd-run --uid=0 \
    ./rkt run \
    --private-net --port=client:2379 \
    --volume data-dir,kind=host,source=/tmp/etcd \
    coreos.com/etcd,version=v2.2.0-alpha.0 \
    -- --advertise-client-urls="" \
       --listen-client-urls=""
Running as unit run-1377.service.

$ systemctl status run-1377.service
● run-1377.service
   CGroup: /system.slice/run-1377.service
           ├─1378 stage1/rootfs/usr/bin/systemd-nspawn
           ├─1425 /usr/lib/systemd/systemd
           └─system.slice
             ├─etcd.service
             │ └─1430 /etcd
             └─systemd-journald.service
               └─1426 /usr/lib/systemd/systemd-journald

Notice that we can see the complete process hierarchy inside the pod, including a systemd instance and the etcd process.

Next, let's launch the same container under the new KVM-based stage1 by adding the --stage1-image flag:

$ sudo systemd-run -t --uid=0 \
    ./rkt run --stage1-image=sha512-c5b3b60ed4493fd77222afcb860543b9 \
    --private-net --port=client:2379 \
    --volume data-dir,kind=host,source=/tmp/etcd2 \
    coreos.com/etcd,version=v2.2.0-alpha.0 \
    -- --advertise-client-urls="" \
    --listen-client-urls=""
...

$ systemctl status run-1505.service
● run-1505.service
   CGroup: /system.slice/run-1505.service
           └─1506 ./stage1/rootfs/lkvm

Notice that the process hierarchy now ends at lkvm. This is because the entire pod, including the systemd process and the etcd process, is being executed inside a KVM process: to the host system, it simply looks like a single virtual machine process. By adding a single flag to our container invocation, we have taken advantage of the same KVM technology that public clouds use to isolate tenants, adding another layer of security between our application container and the host.

Thank you to Piotr Skamruk, Paweł Pałucki, Dimitri John Ledkov, and Arjan van de Ven from Intel for their support and contributions. For more details on this feature, see the lkvm stage1 guide.

Seamless Integration with Host-Level Logging

On systemd hosts, the journal is the default log aggregation system. With the v0.8.0 release, rkt now automatically integrates with the host journal, if detected, to provide a systemd native log management experience. To explore the logs of a rkt pod, all you need to do is add a machine specifier like -M rkt-$UUID to a journalctl command on the host.

As a simple example, let's explore the logs of the etcd container we launched earlier. First we use machinectl to list the pods that rkt has registered with systemd:

$ machinectl list
MACHINE                                  CLASS     SERVICE
rkt-bccc16ea-3e63-4a1f-80aa-4358777ce473 container nspawn
rkt-c3a7fabc-9eb8-4e06-be1d-21d57cdaf682 container nspawn

2 machines listed.

We can see our etcd pod listed as the second machine known by systemd. Now we use the journal to directly access the logs of the pod:

$ sudo journalctl -M rkt-c3a7fabc-9eb8-4e06-be1d-21d57cdaf682
etcd[4]: 2015-08-18 07:04:24.362297 N | etcdserver: set the initial cluster version to 2.2.0

User Namespace Support

This release includes initial support for user namespaces to improve container isolation. By leveraging user namespaces, an application may run as the root user inside of the container but will be mapped to a non-root user outside of the container. This adds an extra layer of security by isolating containers from the real root user on the host. This early preview of the feature is experimental and uses privileged user namespaces, but future versions of rkt will improve on the foundation found in this release and offer more granular control.

To turn user namespaces on, two flags need to be added to our original example: --private-users and --no-overlay. The first turns on the user namespace feature and the second disables rkt's overlayfs subsystem, as it is not currently compatible with user namespaces:

$ ./rkt run --no-overlay --private-users \
    --private-net --port=client:2379 \
    --volume data-dir,kind=host,source=/tmp/etcd \
    coreos.com/etcd,version=v2.2.0-alpha.0 \
    -- --advertise-client-urls="" \
       --listen-client-urls=""

We can confirm this is working by using curl to verify etcd's functionality and then checking the permissions on the etcd data directory, noting that from the host's perspective the etcd member directory is owned by a very high user id:

$ curl
{"etcdserver":"2.2.0-alpha.0","etcdcluster":"2.2.0"}

$ ls -la /tmp/etcd
total 0
drwxrwxrwx  3 core       core        60 Aug 18 07:31 .
drwxrwxrwt 10 root       root       200 Aug 18 07:31 ..
drwx------  4 1037893632 1037893632  80 Aug 18 07:31 member
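That high owner uid is not arbitrary: with user namespaces, a uid inside the container is translated through the kernel's uid_map, whose lines read container-base, host-base, range. As an illustrative sketch (the map line below is an assumption for demonstration, not actual rkt output), the host-side uid for a given container uid can be computed like this:

```shell
# Hypothetical /proc/<pid>/uid_map line: <container-base> <host-base> <range>
line="0 1037893632 65536"
container_uid=0   # root inside the container
# host uid = host base + (container uid - container base)
echo "$line" | awk -v c="$container_uid" '{ print $2 + (c - $1) }'
# prints 1037893632 — a shifted uid like the owner of the member directory above
```

So container root ends up as an unprivileged, very high uid on the host, which is exactly the extra isolation layer described here.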

Adding user namespace support is an important step toward our goal of making rkt the most secure container runtime, and we will be working hard to improve this feature in coming releases; you can see the roadmap in this issue.

Open Containers Initiative Progress

With rkt v0.8.0 we are furthering our efforts with security hardening and moving closer to a 1.0 stable and production-ready release. We are also dedicated to ensuring that the container ecosystem continues down a path that enables people publishing containers to "build once, sign once, and run anywhere." Today rkt is an implementation of the App Container spec (appc), and in the future we hope to make rkt an implementation of the Open Container Initiative (OCI) specification. However, the OCI effort is still in its infancy and there is a lot of work left to do. To check on the progress of the effort to harmonize OCI and appc, you can read more about it on the OCI dev mailing list.

Contribute to rkt

One of the goals of rkt is to make it the most secure container runtime, and there is a lot of exciting work to be done as we move closer to 1.0. Join us on our mission: we welcome your involvement in the development of rkt, via discussion on the rkt-dev mailing list, filing GitHub issues, or contributing directly to the project.


Radically Simplify Management of Citrix Workloads and Enterprise Application Lifecycles [feedly]

Radically Simplify Management of Citrix Workloads and Enterprise Application Lifecycles
// Citrix Blogs

Citrix Lifecycle Management is a comprehensive cloud-based service lifecycle management solution to accelerate and simplify the design, deployment and ongoing management of Citrix workloads and enterprise applications.


Lights! Camera! Action! Announcing Americas Partner Demo Derby [feedly]

Lights! Camera! Action! Announcing Americas Partner Demo Derby
// Citrix Blogs

Demos are powerful tools that can help customers make decisions, and we are looking for the best Workspace Suite demos out there. Does your demo knock people's socks off? If so, have we got news for you! Submit your demo to America's Partner Demo Derby, so you get the recognition you deserve (and some cash prizes, too!)


Citrix Education eLearning Subscription: Now with Citrix Insider! [feedly]

Citrix Education eLearning Subscription: Now with Citrix Insider!
// Citrix Blogs

Citrix Insider is the ultimate look behind the curtain. It's the training material that the Citrix Support Readiness team produces to equip our internal support staff with the knowledge they need for new product and product version releases. This material was previously available only to Citrix employees, but for the first time ever, anyone with an active Citrix Education eLearning Subscription can access it!


Digging into PVS with PoolMon and WPA [feedly]

Digging into PVS with PoolMon and WPA
// Citrix Blogs

Using PoolMon to Analyze RAM Cache in Nonpaged Pool Memory

In case you missed it a couple of weeks ago, Andrew Morgan, one of our CTPs, posted a great article on how to accurately determine the size of the new RAM Cache. As Andrew pointed out in his article, we now use nonpaged pool memory, so it's fairly easy to fire up PoolMon and investigate. But…


Send Your Social Media Networks into Overdrive With Citrix Social [feedly]

Send Your Social Media Networks into Overdrive With Citrix Social
// Citrix Blogs

IT and business decision makers are not only starting their buying journey on social media, but they're also referencing it at various stages along the way. So much so that three in five have reported being influenced by at least one social network during their decision making process. Pretty neat if you're using social in your marketing mix. A bit of a missed opportunity if you're not.


Daily Hacker News for 2015-08-20 [feedly]

Radically Simplify Management of Citrix Workloads and Enterprise Application Lifecycles [feedly]

Radically Simplify Management of Citrix Workloads and Enterprise Application Lifecycles

-- via my feedly.com reader

Citrix Lifecycle Management is a comprehensive cloud-based service lifecycle management solution to accelerate and simplify the design, deployment and ongoing management of Citrix workloads and enterprise applications.

Now Available: Workspace Cloud Changes Everything [feedly]

Now Available: Workspace Cloud Changes Everything

Citrix is pleased to announce the general availability of Citrix Workspace Cloud, the industry's simplest way to build and deliver a complete workspace without compromise.

Citrix Workspace Cloud: A Partner-First Cloud Services Model [feedly]

Citrix Workspace Cloud: A Partner-First Cloud Services Model

Citrix Workspace Cloud uniquely adds value to Citrix partners, allowing them to speed customer success, increase value-added services and provide ongoing engagement with customers. 

Chef and AWS Bring DevOps to AWS London Loft [feedly]

Chef and AWS Bring DevOps to AWS London Loft

Chef and AWS Meet the Rising Demand for DevOps Training in the UK

LONDON – Aug. 20, 2015 – Chef, the leader in automation for DevOps, today announced it is bringing its DevOps expertise to the new Amazon Web Services (AWS) Pop-up Loft in London. The AWS Pop-up Loft will open on Sept. 10, 2015 and Chef will offer hosted sessions and training curriculum to developers, engineers, entrepreneurs and tech enthusiasts visiting the Loft.

Chef's support in the new London Loft builds on the success of the sessions hosted in the AWS Pop-up Lofts in San Francisco and New York City.

Justin Arbuckle, Chef's chief enterprise architect and vice president of EMEA, commented:

"We're seeing a surge of interest in DevOps and Chef in the UK. The AWS Pop-up Loft provides a unique opportunity for Chef to offer more dedicated training and resources in London. This type of hands-on effort is needed to accelerate IT initiatives in the UK. Practitioners are hungry to learn and we're excited to be at the forefront of this new effort with AWS."
The AWS Pop-up Loft in London opens Sept. 10, 2015 at 1 Fore Street, London, EC2Y 5EJ.

Join Chef at the following upcoming events:

  • The AWS Pop-up Loft opening party on Sept. 10, 2015
  • Chef training Sept. 21 – 24, 2015
  • Chef "Ask an Architect" Session on Sept. 25, 2015
  • Chef will host an evening event in October, more details coming soon.
Building on current field and engineering efforts, Chef and AWS are making it easier for businesses across the globe to migrate to, deploy, automate, and manage change in the cloud. Chef was recently made available on the AWS Marketplace and businesses of all sizes can now harness the power of Chef together with their investments on AWS to rapidly and safely deliver innovation through software.

To learn more about the AWS Pop-up Lofts, visit: http://awsloft.london

XenServer Dundee Alpha.3 Available [feedly]

XenServer Dundee Alpha.3 Available

The XenServer team is pleased to announce the availability of the third alpha release in the Dundee release train. This release includes a number of performance oriented items and includes three new functional areas.

  • Microsoft Windows 10 driver support is now present in the XenServer tools. The tools have yet to be WHQL certified and are not yet working for GPU use cases, but users can safely use them to validate Windows 10 support.
  • FCoE storage support has been enabled for the Linux Bridge network stack. Note that the default network stack is OVS, so users wishing to test FCoE will need to convert the network stack to Bridge and will need to be aware of the feature limitations in Bridge relative to OVS.
  • Docker support present in XenServer 6.5 SP1 is now also present in Dundee

Considerable work has been performed to improve overall I/O throughput on larger systems and to improve system responsiveness under heavy load. As part of this work, the number of vCPUs available to dom0 has been increased on systems with more than 8 pCPUs. Early results indicate a significant improvement in throughput compared to Creedence. We are particularly interested in hearing from users who have previously experienced responsiveness or I/O bottlenecks; please take a look at Alpha.3 and share your observations.

Dundee alpha.3 can be downloaded from the pre-release download page.     

Wednesday, August 19, 2015

MDT 2013 Update 1 Now Available [feedly]

MDT 2013 Update 1 Now Available

The Microsoft Deployment Toolkit (MDT) 2013 Update 1 is now available on the Microsoft Download Center. This update requires the Windows Assessment and Deployment Kit (ADK) for Windows 10, available on the Microsoft Hardware Dev Center. (Scroll to the bottom of the page to the section, "Customize, assess, and deploy Windows on your hardware." The page also includes other Windows kits; remember for deployment you only need the Windows ADK for Windows 10.)

Significant changes in MDT 2013 Update 1:

  • Support for the Windows Assessment and Deployment Kit (ADK) for Windows 10
  • Support for deployment and upgrade of Windows 10
  • Support for integration with System Center 2012 R2 Configuration Manager SP1 with the Windows 10 ADK (see this post on the Configuration Manager Team blog for more information on using the Windows 10 ADK with Configuration Manager)

Here is a more detailed list of some specific changes in this release:

  • Support for new Enterprise LTSB and Education editions of Windows 10
  • Support for modern app (.appx) dependencies and bundles
  • Improved support for split image files (.swm)
  • Switched to using DISM for imaging processes (instead of deprecated ImageX)
  • Deployment Workbench revisions for deprecated content
  • Enhanced accessibility within the Deployment Workbench
  • Revised lists of time zones, regions and languages in the Deployment Wizard
  • Removed Start menu shortcut for "Remove PXE Filter"
  • Several MVP recommended fixes for Windows Updates, password handling, and PowerShell cmdlets
  • Added missing OOBE settings to Unattend.xml
  • Unattend.xml default screen resolution changed to allow for automatic scaling
  • Updated task sequence binaries from System Center 2012 R2 Configuration Manager SP1
  • New GetMajorMinorVersion function for integer comparison of Windows version numbers
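The last item matters because comparing version strings lexically mis-orders "10.0" against "6.3". A minimal sketch of the idea in shell (illustrative only, not MDT's actual implementation) looks like:

```shell
# Collapse a "major.minor" version string into one comparable integer:
# major * 100 + minor, so ordering works arithmetically.
ver_to_int() {
  major=${1%%.*}   # text before the first dot
  minor=${1#*.}    # text after the first dot
  echo $(( major * 100 + minor ))
}

# String comparison would rank "6.3" (Windows 8.1) above "10.0" (Windows 10);
# integer comparison orders them correctly.
ver_to_int "10.0"   # prints 1000
ver_to_int "6.3"    # prints 603
```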

We are still working to update the MDT documentation on TechNet, so in the short-term will use this blog to share any additional necessary information regarding this release.

Please continue to use the MDT Connect site to file bugs and feedback.

-- Aaron Czechowski, Senior Program Manager

Announcing VMTurbo 5.3: QoS for Virtualized MS Exchange, MySQL Control, Nutanix Integration, and more! [feedly]

Announcing VMTurbo 5.3: QoS for Virtualized MS Exchange, MySQL Control, Nutanix Integration, and more!

With every new release we announce more proof points in our journey towards controlling any workload on any infrastructure at any time. Today Operations Manager 5.3 is generally available and you can update your VMTurbo appliance online or offline. Before … Continue Reading »

The post Announcing VMTurbo 5.3: QoS for Virtualized MS Exchange, MySQL Control, Nutanix Integration, and more! appeared first on Virtualization Management Software & Data Center Control | VMTurbo.

Riak and Mesos – Automating Scale [feedly]

Riak and Mesos – Automating Scale

Today we are excited to announce a beta framework for running Riak KV on Mesos. Some of you may be familiar with Mesos, for those who are new to Mesos, we will provide a brief overview.

Last year at RICON 2014, David Greenberg gave a presentation entitled Mesos: The Operating System for your Cluster. It provides a technical overview of Mesos itself, some of the common usage scenarios, and a series of tools to better understand why, and where, Mesos is used in production environments. We highly encourage those interested in learning more about Mesos to begin with this presentation. In short, Mesos is open-source software that provides a resource scheduling model and "common services" that enable multiple services and applications to run across a cluster of machines in a datacenter or cloud.

We will be demonstrating this framework at MesosCon, August 20 and 21, in Basho's booth #209 as well as in Cisco's booth. While the technical implementation is compelling, using technology (in particular datacenter-scale orchestration) to solve operational challenges at that scale is even more compelling.

Orchestration: A Customer Example

We know that elasticity and scalability are required by those who face the challenge of seasonal scale. None typify this need more than the eCommerce provider. Amazon is even known for saying that an additional 100ms of latency can cost it 1% of sales. If latency alone costs a percentage of sales, it is little surprise that the cost of outright downtime averages in the millions per hour.

To overcome this challenge, which encompasses both availability and scalability, an eCommerce company has chosen to deploy Mesosphere DCOS to manage their datacenter and cloud infrastructures and has chosen Riak KV as their datastore. Using the framework it is trivial to create a cluster, add nodes to the cluster, and view the status of the nodes via Riak Explorer.

In the current implementation, running on Mesosphere DCOS, installing the framework and adding three nodes to the cluster takes just a few commands.

A cluster is now in place, and you are enjoying the benefits of using Riak KV… but the holiday season approaches. As we know, the benefit of Riak KV's near-linear scale is that additional nodes increase not only capacity but also throughput. Given expected or actual increases in data volumes, you need to add nodes, and add them quickly.

Once again, the framework can be used to add Riak nodes with a simple command. That's it. It's that simple. The cluster can scale to meet your seasonal demands while keeping operational costs extremely low.

But in a complex environment we know that there is not just one production environment to consider; there are also staging and development environments. In fact, when using shared production resources, there could be multiple development teams who need to create, and remove, clusters frequently in their development and test environments. In traditional environments, such as RDBMS deployments, provisioning can be a lengthy and onerous process. With Riak KV and Mesos, it's as simple as issuing the command to create another cluster.

The above example shows the principal scenario we considered when building the demonstration being shown at MesosCon. In fact, you can walk through the same demo yourself on the GitHub page.

Architectural Considerations

To be clear, this is a beta version and an unsupported codebase. Much of the current code is expected to change as we harden and prepare it for a fully supported release. That said, there are some key architectural considerations worth exploring in greater detail. Many of the architectural decisions are based on the fact that Mesos implementations assume that resources are ephemeral. Or, put slightly differently, Mesos is stateless.

Our customers implement Riak KV for its characteristics of scalability and fault-tolerance. If Mesos can be used to assist in the scalability, that is a positive outcome. But it cannot come at the expense of fault-tolerance.

To that end, the Riak Mesos Framework scheduler currently attempts to spread Riak nodes across as many different Mesos agents as possible to increase fault tolerance. If more nodes are requested than there are agents available, the scheduler will then start adding more Riak nodes to existing agents.

In addition, there is an inherent assumption in client code that a cluster is a stable set of available resources. Due to the nature of Mesos and the potential for Riak nodes to come and go regularly, client applications using a Mesos-based cluster must be kept up to date on the cluster's current state. Instead of requiring this intelligence to be built into the Riak client libraries, we chose a smart proxy application that runs alongside client applications. This proxy communicates with ZooKeeper to track Riak cluster changes and, subsequently, updates its list of Riak connections for consumption by the client application.

What's next?

We are pleased to announce this work, in collaboration with Cisco. If you are at MesosCon please stop by the Basho booth (#209) for a live demonstration.

If you have a perspective on the usage of Riak KV with Mesos, please contact us to discuss.

Tyler Hannan

Basho and Cisco Collaborate to Integrate Apache Mesos to Support Efficient Distribution of Data Services at Global Scale [feedly]

Basho and Cisco Collaborate to Integrate Apache Mesos to Support Efficient Distribution of Data Services at Global Scale

Basho to demonstrate a beta framework for Riak KV running on Mesos at MesosCon

Seattle, Wash. and San Francisco, Calif. – August 19, 2015 – Basho Technologies, the creator and developer of Riak® software, announced that, in collaboration with Cisco, it has developed a framework enabling the Riak KV NoSQL database to run on Apache Mesos, bringing operational efficiency and high elasticity to big data services. The integration with Mesos, an open-source resource manager from the Apache Software Foundation, automates the data center infrastructure beneath Riak KV instances. Pairing Mesos with Riak's own automation and orchestration technology allows enterprise businesses to deploy distributed data services at global scale in support of next-generation Internet of Things (IoT) and Big Data applications, while ensuring efficient utilization of cloud resources.

Riak KV is a highly available, distributed database that remains operationally efficient at global scale. By integrating with Mesos, customers no longer need to make guesses about the infrastructure requirements of the Riak nodes as resource management is optimized by Mesos. With Riak KV managing the data tier and Mesos managing the underlying infrastructure, customers now have access to a highly efficient and easily scalable distributed data platform. The integration also allows for true "push button" scale up/scale down as Mesos can aggregate and re-aggregate resources for/from Riak nodes. Users will be able to write scripts to make scale up/down events automatic based on business logic, providing the unique ability for enterprises to auto-scale a global, multi-data center database.

"Enabling Riak KV with Mesos on Intercloud, we can seamlessly and efficiently manage the cloud resources required by a globally scalable NoSQL database, allowing us to provide the back-end for large-scale data processing, web, mobile and IoT applications. This integration will accelerate developers' ability to create innovative new cloud services for the Intercloud — the globally connected network of clouds Cisco is building with its partners, which will offer cloud services to help customers capture the multitude of opportunities created by the Internet of Everything (IoE)," said Ken Owens, Chief Technology Officer for Cisco Intercloud Services. "We're making it easier for customers to develop and deploy highly complex, distributed applications for big data and IoT."

Mesos, which provides efficient resource scheduling and sharing across multi-data center environments, decides how many resources to offer each framework, like Riak KV, and what application to execute on the available physical resources. With Mesos integration, users can easily scale an application from one to thousands of instances in minutes. Basho is developing an open source integration with Mesos that will also be commercialized around a supported enterprise offering. Basho also plans to incorporate the resource management capabilities of Mesos as a Core Service offering of the Basho Data Platform.

"Companies using traditional architectures are struggling to accommodate the need for distributed applications and distributed data services," said Dave McCrory, Chief Technology Officer at Basho. "By combining Basho's Riak KV with Mesos, we're able to deliver an easy-to-deploy platform for real-time data processing. We thereby enable a new class of modern data center developers who can break free of infrastructure restraints and give rise to a whole new class of hyper-scale applications."

Basho will demonstrate the beta framework for Riak KV running on Mesos at MesosCon, in the Basho booth and in the Cisco booth, which is set to take place August 20-21, 2015 in Seattle, Washington. Stop by the Basho booth #209 or the Cisco booth to learn more.

About Basho Technologies

Basho Technologies, Inc. is a distributed systems company dedicated to developing disruptive technology that simplifies enterprises' most critical data management challenges. Basho has attracted one of the most talented groups of engineers and technical experts ever assembled devoted exclusively to solving some of the most complex issues presented by scaling distributed systems and enjoys a large and growing following among influential programmers, architects and academics.

Basho's distributed database, Riak KV, the industry-leading distributed NoSQL database, and Basho's cloud storage software, Riak S2, are used by fast-growing Web businesses and by one-third of the Fortune 50 to power their critical Web, mobile and social applications, and their public and private cloud platforms. The Basho Data Platform was recently introduced to help enterprises control and simplify distributed Big Data.

Basho is the organizer of RICON – a distributed systems conference.

Riak is the registered trademark of Basho Technologies, Inc.  Apache Mesos is the trademark of the Apache Software Foundation.  The trademarks and names of other companies and products mentioned herein are the property of their respective owners.

Press Contact

Michael Kellner

BOCA Communications for Basho

basho@bocacommunications.com | +1-415-425-4773