After 30 years in a corporate learning setting, I wanted to try something different. So, I decided to experiment in the world of online teaching, knowing I could share my passion for big data and learning with an even larger audience (after all, the "M" in MOOC stands for "massive").
Armed with a topic I was passionate about (big data for learning), I worked with online learning platform Udemy to build my first course. Not only did this experience allow me the opportunity to expand my audience, but it also yielded a few best practices for others looking to do the same.
Lesson #1: The content itself is only half the battle
When it comes to the content, think about it in context. While the content of any course is key, I was surprised to find that it's just as important to emphasize the timing and format of the content's "unveiling."
Massive Open Online Course instructors have two routes they can take:
1: Release the content in separate installments
2: Release all the content at once
When trying to identify which route is best for you, consider whether your intended audience is likely to have the time to binge-watch. For example, an audience of busy working professionals likely only has the time (and patience) for courses that are cut into small installments. This type of audience consumes online content in a way best described as "primal" – i.e., they devour the skills and topics for which they are hungriest and those that are the most "nutritious," then quickly get on with their busy lives.
Instructors who clearly lay out the course's content in an introductory syllabus-style outline streamline the process for their students, allowing them to more directly access the course lessons that are most relevant to them. Releasing the content in phases has the added benefit of giving you the chance to incorporate audience feedback and make improvements along the way.
For instance, A trio of three-minute videos might be more digestible than a single video that is nine minutes long.
Lesson #2: Don't overvalue the "course completion rate" statistic
One of the most frequent – and quite frankly bogus – criticisms we hear about MOOCs is that course completion rates are extremely low, suggesting that students lose interest and ultimately learn nothing.
The beauty of the on-demand MOOC format (i.e., students start and stop their classes as they desire) is that the student is in the driver's seat. Asking "what are completion rates?" is the wrong question. Rather, you should be asking, "Did students learn what they needed to know?"
Online learning is different from a traditional academic setting; everyone comes in with a different level of understanding and expertise. Therefore, not everyone needs every segment of every course. A low rate of course completion is a meaningless statistic without any additional context.
Consider your course's student completion rates in tandem with student feedback. For example: If low completion rates are paired with negative student commentary, then the content may be to blame; however, if completion rates are low and yet the feedback is positive on the whole, this tells a different story (and is a good sign!). It means the student got what he/she wanted and moved along.
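The pairing described above can be sketched as a toy function; the thresholds and labels here are invented for illustration and are not drawn from any course data:

```python
# Illustrative sketch of reading completion rate alongside ratings.
# Thresholds (0.3 completion, 3.0/4.0 rating) are arbitrary examples.

def interpret_course_signal(completion_rate, avg_rating):
    """completion_rate in [0, 1]; avg_rating on a 1-5 scale."""
    if completion_rate < 0.3 and avg_rating < 3.0:
        return "content may be to blame"
    if completion_rate < 0.3 and avg_rating >= 4.0:
        return "students got what they wanted and moved on"
    return "no clear signal without more context"
```

The point is simply that neither number is meaningful alone; it is the combination that tells the story.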
Remember, skill seekers have enrolled in your course to gain a specific skill, so they are likely to focus on the segments of the course that are most relevant and of most value.
Lesson #3: Do pay attention to the numbers in general
The numbers that really matter are the reviews and ratings. Since this is an online learning marketplace, it is important to understand what students perceive the value of your course to be.
But don't stop there. Each of the various MOOC platforms offers insight into what adjustments can be made to improve the course, or even highlights opportunities to create other courses that may be in demand.
While some data points look exactly as you would expect them to look (e.g., a course on Microsoft Word may have more baby boomers than millennials enrolled), some of the insights will be unexpected and beneficial. For example, I was surprised that none of my students took my lectures over the weekend, and most chose to learn during the day rather than in the evening. Most of my content was consumed between the hours of 4 and 5 p.m. — interestingly during the last hour of their work day.
Rather than make assumptions about when your audience might find it convenient to take your course, offer it on-demand and let them decide.
Data also showed that my students still returned to my lectures to review the content more than three months after it originally launched, which, to me, reinforces the evergreen nature of the content itself. Instructors who create content with a practical, on-the-job application are likely to see students refer back to it in a similar way.
Metrics at your disposal will not only provide insight into possible adjustments, but will shed light on the content itself and the ways it is bringing value to your audience.
Despite my vast experience as an instructor around the world, my first foray into the MOOC world was an enlightening one. When I started out, I expected to expand the size and reach of my audience; what I didn't expect was the degree to which I would learn something new and expand my own experiences.
Elliott Masie heads The MASIE Center, a New York think tank focused on how organizations can support learning and knowledge within the workforce. In May 2014, Masie created a corporate MOOC on Udemy to deliver content to his Learning CONSORTIUM, a coalition of 230 global organizations cooperating on the evolution of learning strategies.
The Microsoft Open Technologies, Inc. engineering team has been hard at work bringing Docker and Kubernetes support to Microsoft Azure, as we promised in July. Today we are announcing that Kubernetes can be used to manage your Docker containers on Microsoft Azure. In addition, the Azure team has released the Azure Kubernetes Visualizer project, which builds upon this work and makes it much easier to experiment with and learn Kubernetes on Azure.
Docker is an open-source engine that automates the deployment of any application as a portable, self-sufficient container that will run almost anywhere. Kubernetes is an open source cluster management tool, a declarative technology supporting orchestration and scheduling of Docker containers. With these latest contributions to the Kubernetes toolset, developers can transparently deploy and manage container clusters on Azure.
The key features we have implemented are documented in the Kubernetes project.
Whilst these features enable the deployment and management of complex application clusters, doing so requires an understanding of some key concepts introduced by Kubernetes.
Kubernetes uses a variety of terms to describe an application consisting of a cluster of containers on Azure.
Unfortunately, merely defining the words used in managing a container cluster is not enough to build an understanding of how Kubernetes works. To this end, we are pleased to report that the Microsoft Azure team has created some software that helps to visualize Kubernetes deployments on Azure.
The Azure team has built the Azure Kubernetes Visualizer, which is also being released today. This open source project provides a web application, written in node.js, that automatically creates a pod definition and replication controller file, which Kubernetes developers then use to deploy to an existing cluster of Virtual Machines running Docker.
The application's user interface provides helpful visual representations of what is happening on your cluster. Furthermore, users can edit the automatically generated files and watch as Kubernetes updates the cluster configuration.
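For readers who haven't seen one, a pod definition file is short; the example below is purely illustrative and uses present-day Kubernetes syntax, which differs from the API version that was current when this post was written:

```yaml
# Minimal illustrative pod definition (modern syntax, not the
# v1beta1-era format the visualizer generated at the time).
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
```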
The project is described in more detail on the Microsoft Azure blog and is a great example of how the Kubernetes project is enabling further experimentation and innovation. This visualizer work was primarily driven by Michael Blouin, an intern who brought a bright idea to a recent Microsoft hackathon. The subsequent collaboration between MS Open Tech and Microsoft Azure staff and interns has delivered software that validates our work on Kubernetes and is useful to those wanting to explore Kubernetes on Microsoft Azure.
Within MS Open Tech we are excited to see our ongoing work on Docker and Kubernetes helping to build a community that is driving interoperability and innovation in container cluster management. We hope it will benefit those of you who wish to leverage this improved interoperability across cloud platforms.
The post Understanding Docker Containers on Microsoft Azure with Kubernetes Visualizer appeared first on MS OpenTech.
This Redis version adds a variety of new features, bug fixes and other changes, as outlined in Salvatore Sanfilippo's release notes for the 2.8.12 version. We've also made some changes to the Windows codebase, such as reworking the COW (Copy On Write) join for Windows 8+, moving the heap memory mapped file into a Redis subfolder under the local app data folder, and adding a heapdir directive to allow for configuration of where the QFork memory mapped file is stored.
Interest in the Windows version of Redis continues to grow, with over 1500 followers now for the MS Open Tech Redis repo. We're excited to have this release out, and the team is already working on the next update. If you're interested in getting involved or contributing to the project, we'd love to hear from you.
Check out the new 2.8.12 release and let us know what you think!
Kirk Shoop, Software Design Engineer
Doug Mahugh, Technical Evangelist
Microsoft Open Technologies, Inc.
Join our panelists as we discuss ChefDK!
Chef Community Summit London - October 15 & 16 - FOODFIGHT saves 10%
DevOpsDays Chicago - October 7 & 8 - The Food Fight Show is a Media Sponsor; code FOODFIGHT10 will save you 10%!
FlowCon - September 3 & 4
PowerShell Summit EU - September 29 through October 1 in Amsterdam, The Netherlands
Use discount code FOODFIGHT to save 10% off upcoming Chef training that's being held in
There are also a number of classes being offered online.
Here's a brief outline of some of the things we discussed:
Commands to install ChefDK from the command line:
* Install latest released version:
curl -L https://www.getchef.com/chef/install.sh | sudo bash -s -- -P chefdk
* Install nightly build:
curl -L https://www.getchef.com/chef/install.sh | sudo bash -s -- -P chefdk -n
The show is sponsored, in part, by Chef.
As you may already know, we plan to integrate backups directly into Xen Orchestra: this feature is on our current road map.
The goal is to provide a simple interface to select, plan and automate all your VM backups. But until it's done, one of our clients quickly needed a way to snapshot all of their running VMs in one command. Remember this diagram?
Well, why not connect a kind of "client" to send orders directly to "xo-server" and automate all this? After all, we have a single entry point to every XenServer via the core of XO.
That's how "xo-backup" (as a plug-in) was born, in less than 2 days and 200 lines of code! Let's see the architecture diagram with this new client:
As you can see, "xo-backup" is a completely standalone client: it can be executed on any machine you like, not necessarily the box running "xo-server". For example, it can run on your backup server, dedicated to that kind of task.
Let's take a real, simple example. I will use "xo-backup" on my PC at home to start a backup on the existing Xen Orchestra demo instance (running in a data-center), itself connected to the small lab in our offices:
xo-backup (home) -> xo (datacenter) -> xenserver (office)
On one running VM, I've got no snapshot:
Here is a simple example of this client:
$ xo-backup --max-snapshots 2 --user email@example.com https://dev1.vates.fr/api/
[?] Password: ********
✔︎ vm1 snapshotted
✔︎ vm2 snapshotted
✔︎ vm3 snapshotted
✔︎ vm4 snapshotted
As you can see, we created a snapshot on each of these VMs. If I go to a VM screen in XO, I can now see:
Great! Now I can revert to this snapshot in two clicks, or remove or rename it if I want:
Now, let's try restarting "xo-backup" using exactly the same command. The result is:
Remember the "--max-snapshots 2" parameter in the command line? What if I choose to start a new "xo-backup" now?
$ xo-backup --max-snapshots 2 --user firstname.lastname@example.org https://dev1.vates.fr/api/
[?] Password: ********
✔︎ vm1 snapshotted
✔︎ vm1 old snapshot deleted auto-2014-07-27T09:34:49.434Z
The oldest one is automatically removed! That's why you can use this script directly in a cron job (like the example given here). This way, you get a snapshot rotation, allowing a rollback to D-1, and so on.
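A cron setup could look like the following hypothetical crontab entry; the path, schedule and the assumption that credentials can be supplied non-interactively are all illustrative, since the post doesn't specify them:

```
# Hypothetical crontab entry: nightly snapshot rotation at 01:00,
# keeping the two most recent automatic snapshots.
# Assumes xo-backup can read its password non-interactively.
0 1 * * * /usr/local/bin/xo-backup --max-snapshots 2 --user admin@example.org https://dev1.vates.fr/api/
```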
What about manually created snapshots? No problem! Our client automatically filters on the "auto-" prefix present in the snapshot name. If you manually create one named, for example, "snapbeforeupdate", it will NOT be removed by "xo-backup"! That's why you can continue to use snapshots manually in parallel with "xo-backup".
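The pruning rule just described (only "auto-"-prefixed snapshots are candidates, and the oldest go first once the limit is exceeded) can be sketched like this; it's an illustration, not the actual xo-backup code:

```python
# Illustrative sketch of the xo-backup rotation rule (not the real code):
# only snapshots whose names start with "auto-" are ever deleted, and only
# the oldest ones once --max-snapshots is exceeded.

def snapshots_to_delete(snapshot_names, max_snapshots):
    """Return the automatic snapshots to delete, oldest first.

    Automatic snapshot names embed an ISO-8601 timestamp
    (e.g. "auto-2014-07-27T09:34:49.434Z"), so lexicographic
    order matches chronological order.
    """
    auto = sorted(n for n in snapshot_names if n.startswith("auto-"))
    excess = len(auto) - max_snapshots
    return auto[:excess] if excess > 0 else []
```

Manual snapshots such as "snapbeforeupdate" never match the prefix, so they are never returned.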
If you're thinking that snapshots are not real backups, you're totally right! That's because "xo-backup" is just at its first version. Our next objective is to allow full exports of VMs as *.xva files, to your local disk or any accessible mount on your system.
This feature will come with parameters to target only the VMs you want (or all of them, or those matching a filter you give). We chose to start with snapshots because that's the first step toward a full export of a running VM: we'll export its fresh snapshot while the VM keeps running (you can't directly export a running VM).
This client was sponsored by OOWorx. Thanks to them for helping XO to be better!
Remember: you can do the same and sponsor a feature you need! Contact us for more details.
This is a quick post about how XO is built. A year ago, we chose to switch from PHP to NodeJS: what a great idea!
It's not completely done, but we've made recent progress and we'll finish soon. This way, updating and deploying XO packages and their respective dependencies will be painless. Picture is related ;)
A minor release for a minor fix ;)
Want to upgrade your XOA? As explained in the documentation, just type:
npm update --global xo-web xo-server
systemctl restart xo-server.service
And you're done! You should see "(xo-web 3.5.1)" in the About page.
This is also the occasion to tell you that we have officially exceeded 200 unique downloads since the 3.5 release!
We're back with a new blog and a new website :) Farewell Wordpress! Say hello to Ghost.
We decided to wipe our current stack and to use better tools.
We switched from WP to Ghost. It uses Markdown for writing, and that's waaaaaaay better than a WYSIWYG editor or plain HTML. It runs on NodeJS, reverse-proxied by Apache.
Old articles will be migrated soon, preserving their old URLs.
Written from scratch, using Jade, Bootstrap, Gulp. Running on top of NodeJS.
We'll publish an article about our new pricing table and why we chose to do it that way. We'll give you information about the project too. Stay tuned!
This coming feature in Xen Orchestra is really important, but not trivial to implement. Let's see what it's all about.
Our current architecture connects directly to the XAPI, listens to events, and sends commands. But statistics are not directly available in the XAPI. They were in previous versions of XenServer (4 and before), but for performance reasons they were moved into Round Robin Databases (RRDs). Thus, you can't fetch those metrics directly through the XAPI.
Let's hear from one of the main XAPI devs, Jon Ludlam:
RRDs are maintained for individual VMs (including dom0) and the host. Internally, the RRD is updated and maintained by a module similar to (but not actually) rrdtool. RRDs are resident on the host on which the VM is running, or the pool master when the VM is not running. For this reason, to obtain the data requires knowledge of where the VM is running.
You can read more about it here.
xo-server then exposes these metrics to any client requesting them.
And we're done for the backend.
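As a sketch of what that backend work involves, here is a minimal parser for the xport-style XML that a XenServer host's RRD HTTP handler returns. The exact endpoint and field layout are assumptions based on rrdtool's xport format, not taken from the XO code:

```python
# Illustrative sketch: parse xport-style RRD XML (as served by a
# XenServer host's rrd_updates HTTP handler) into named time series.
# The XML layout here is an assumption, not XO's actual implementation.
import xml.etree.ElementTree as ET

def parse_rrd_updates(xml_text):
    """Parse xport-style XML into {legend_entry: [values...]}.

    Legend entries look like "AVERAGE:vm:<uuid>:cpu0"; each <row>
    holds a timestamp <t> followed by one <v> per legend entry.
    """
    root = ET.fromstring(xml_text)
    legend = [entry.text for entry in root.find("meta").find("legend")]
    series = {name: [] for name in legend}
    for row in root.find("data"):
        values = row.findall("v")
        for name, v in zip(legend, values):
            series[name].append(float(v.text))
    return series
```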
Wait a minute! OK, we have the data, but now we need to display what we want in the user interface. We already know we'll use this kind of graph:
A first draft of where we'll put them:
In this mock-up, you'll have CPU, RAM, network and disk activity in the VM view. This is not the final design, but it gives a general idea. Thanks to our experience with a nice graph lib (d3js), we'll find the best way to show you what is really happening in your VM, host or any other object that has metrics.
Implementing graphs and metrics in XO is not that complicated, but it requires some time to do properly. We hope to tackle this as soon as we have time to work on it. If you have any question or suggestion, do not hesitate to comment on this post.
In this post, we'll see how to activate Wake-on-LAN (WOL) on XenServer 6.2.
Naturally, it depends on your hardware configuration. If you don't have any Server Manager Tool, like iLO (HP) or iDRAC (Dell), you can count on the WOL feature of your network card.
In our lab, we have basic hardware: ITX boards and Core i5 CPUs. Not the kind of hardware you'll find in a data-center, but it's WOL-compatible.
XenServer 6 is based on CentOS. By default, when you shut down the host, it doesn't leave the network card active. In this case, you can't wake the box, because the interface is completely off. So, we need to configure it:
ethtool -s eth0 wol g
But if you want to make this permanent, this command will do the trick:
echo '/usr/sbin/ethtool -s eth0 wol g' >> /etc/rc.d/rc.local
Now, when the system shuts down, it stays ready to listen for a WOL packet.
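For reference, the magic packet the host will now listen for is easy to build by hand. This standalone sketch is purely illustrative (XO relies on the XAPI to send it); the MAC address, broadcast address and port are conventional examples:

```python
# Illustrative Wake-on-LAN sender: the magic packet is 6 bytes of 0xFF
# followed by the target MAC address repeated 16 times, sent as a UDP
# broadcast. MAC/broadcast/port values are example assumptions.
import socket

def make_magic_packet(mac):
    """Build the WOL magic packet for a MAC like '01:23:45:67:89:ab'."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac, broadcast="255.255.255.255", port=9):
    """Broadcast the magic packet on the local network."""
    packet = make_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(packet, (broadcast, port))
```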
But that's not enough to start a server from Xen Orchestra or XenCenter: you also need to tell the XAPI how to wake it, because as we saw, there are different options to do that (WOL, iDRAC/iLO, even custom scripts).
Again, SSH into your host (or any host in the pool) and use this:
xe host-set-power-on-mode host=MyHost power-on-mode=wake-on-lan
For people using iDRAC or iLO, just replace wake-on-lan with DRAC or iLO respectively, but that's not all. You need to give the XAPI the IP and the credentials of your Server Manager Tool. The command will look like this:
xe host-set-power-on-mode host=MyHost power-on-mode=DRAC power-on-config=xx.xx.xx.xx,user,password
with xx.xx.xx.xx the IP of your DRAC controller. For iLO, just replace DRAC with iLO. That's it!
In the upcoming 3.5.2 release, you'll see a Start button in the host menu if your host is halted. Just click it and you can boot the host.
Now, you can imagine what's possible in XO with this kind of feature.