Monday, October 31, 2011

CTX123177 - How to Trigger a Memory Dump from a Windows Virtual Machine Running on XenServer - Citrix Knowledge Center


CTX131244 - Error Occurs when Attempting to Make a Request from the Logical Volume - Citrix Knowledge Center


CTX131279 - Partition Details are not Displayed after a Clean Install of XenServer 6.0 - Citrix Knowledge Center


CTX131237 - Driver Disk for HP CISS driver v4.6.28-12 - For XenServer 6.0 - Citrix Knowledge Center


CTX131238 - Driver Disk for HP Smart Array RAID Controller hpsa v3.0.0-2 - For XenServer 6.0 - Citrix Knowledge Center


CTX131258 - FAQ - XenServer 6.0 - Microsoft .NET 4 Requirements - Citrix Knowledge Center


Cisco Blog » Blog Archive » A Treat for U-verse Customers: Cisco and AT&T Launch Wireless TV Today


PVS Write Cache Sizing & Considerations – Follow Up | Citrix Blogs


Cloud Cover: how far does your cloud really extend? | Citrix Blogs


Dynamic and runtime configuration in NetScaler using HTTP callout and NITRO | Citrix Blogs


CTX123111 - Data Store Migration Strategies - Citrix Knowledge Center


Friday, October 28, 2011

CTX122442 - Lifecycle Announcement for Citrix XenApp - Citrix Knowledge Center


NDDI and Open Science, Scholarship and Services Exchange (OS3E)


Clouds, open source, and new network models: Part 3

http://news.cnet.com/8301-19413_3-20126245-240/clouds-open-source-and-new-network-models-part-3/?part=rss&tag=feed&subj=TheWisdomofClouds



The most common question I get from those I brief about OpenStack's new network service is "How does Quantum relate to software defined networking?"
Especially confusing to many is the difference between these kinds of cloud networking services and the increasingly discussed OpenFlow open-networking protocol.
In part 1 of this series, I described what is becoming an increasingly ubiquitous model for cloud computing networks, namely the use of simple abstractions delivered by network systems of varying sophistication. In part 2, I then described OpenStack's Quantum network service stack and how it reflected that model.
Software defined networking (SDN) is an increasingly popular--but extremely nascent--model for network control, based on the idea that network traffic flow can be made programmable at scale, thus enabling new dynamic models for traffic management. Because it can create "virtual" networks in the form of custom traffic flows, it can be confusing to see how SDN and cloud network abstractions like Quantum's are related.
To help clarify the difference, I'd like to use one particular example of "software defined networking," which was discussed in depth this week at the PacketPushers/TechFieldDay OpenFlow Symposium. Initially developed for computer science research (and--rumor has it--national intelligence networks), OpenFlow is an "open" protocol to control the traffic flows of multiple switches from a centralized controller.
The whitepaper (PDF) on OpenFlow's Web site provides the following description of the protocol:
"OpenFlow provides an open protocol to program the flow-table in different switches and routers. A network administrator can partition traffic into production and research flows. Researchers can control their own flows--by choosing the routes their packets follow and the processing they receive. In this way, researchers can try new routing protocols, security models, addressing schemes, and even alternatives to IP. On the same network, the production traffic is isolated and processed in the same way as today."
(Credit: OpenFlow.org)
That last sentence is interesting. If a switch (or router) supports OpenFlow, it doesn't mean that every packet sent through the device has to be processed through the protocol. Standard "built-in" algorithms can be used for the majority of traffic, and specific destination addresses (or other packet characteristics) can be used to determine the subset that should be handled via OpenFlow.
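The flow-table idea behind that description can be sketched in plain Python. This is not a real OpenFlow library; the field names and action strings below are illustrative, but the structure (match fields paired with actions, with a fall-through to the switch's built-in forwarding) mirrors how the protocol works:

```python
# Illustrative sketch of an OpenFlow-style flow table: each entry pairs
# a match on packet header fields with a list of actions. A packet that
# matches no entry falls through to standard built-in forwarding.

def lookup(flow_table, packet):
    """Return the actions of the first matching flow entry, or None."""
    for entry in flow_table:
        if all(packet.get(field) == value
               for field, value in entry["match"].items()):
            return entry["actions"]
    return None  # no match: process with the switch's normal pipeline

flow_table = [
    # Steer research traffic for one host out a chosen port.
    {"match": {"ip_dst": "10.0.0.5"}, "actions": ["output:2"]},
    # Drop everything arriving tagged with VLAN 99.
    {"match": {"vlan": 99}, "actions": ["drop"]},
]

print(lookup(flow_table, {"ip_dst": "10.0.0.5", "vlan": 1}))    # ['output:2']
print(lookup(flow_table, {"ip_dst": "192.168.1.1", "vlan": 1})) # None
```

The second lookup returning `None` is the point of that last sentence: production traffic that matches no research flow entry is processed exactly as it is today.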
What can OpenFlow be used for? Well, Ethan Banks of the Packet Pushers blog has a great post outlining the "state of the union" for OpenFlow. Today, it is very much a research project, but commercial networking companies (including both established vendors and startups) are interested in the possibility that it will allow innovation where innovation wasn't possible before.
But finding the "killer apps" for OpenFlow is proving elusive at this very early stage. Chris Hoff, security architect at Juniper, thinks it is information security. However, the quote of the day may have come from Peter Christy, co-founder of the Internet Research Group, in this article:
"I completely embrace the disruptive potential but I'm still puzzled on how it will impact the enterprise. The enterprise is the biggest part of the networking market. Building a reliable SDN controller is a challenging task. Customers don't value reproducing the same thing with a new technology. There have been few OF/SDN 'killer' apps so far. SDN is not ready for the enterprise market yet."
That said, how does OpenFlow relate to Quantum? It's simple, really. Quantum is an application-level abstraction of networking that relies on plug-in implementations to map the abstraction(s) to reality. OpenFlow-based networking systems are one possible mechanism to be used by a plug-in to deliver a Quantum abstraction.
OpenFlow itself does not provide a network abstraction; that takes software that implements the protocol. Quantum itself does not talk to switches directly; that takes additional software (in the form of a plug-in). Those software components may be one and the same, or a Quantum plug-in might talk to an OpenFlow-based controller software via an API (like the Open vSwitch API).
I know this gets a little complicated if you aren't well versed in networking concepts, but the point is that, unless you are a network engineer, you should never need to deal with this stuff. Quantum hides the complexity of the network from the application developer's perspective. The same is true of Amazon's VPC, as well as various emerging products from networking companies.
However, powerful new ways of viewing network management are being introduced to allow for an explosion of innovation in how those "simple" network representations are delivered. In the end, that will create a truly game-changing marketplace for cloud services in the years to come.
That, when all is said and done, may be the network's role in cloud computing.

Cloud, open source, and new network models: Part 2

http://news.cnet.com/8301-19413_3-20121638-240/cloud-open-source-and-new-network-models-part-2/?part=rss&tag=feed&subj=TheWisdomofClouds


OpenStack's Quantum network service project is an early attempt to define a common, simple abstraction of an OSI Layer 2 network segment. What does that abstraction look like, and how does Quantum allow the networking market to flourish and innovate under such a simple concept?
OpenStack itself is an open-source project that aims to deliver a massively scalable cloud operating system: the software that coordinates how infrastructure (such as servers, networks, and data storage) is delivered to the applications and services that consume it. Easily the largest open-source community in this space--others include Eucalyptus and CloudStack--OpenStack consists of three core projects:
  • Nova: a compute service that delivers virtual servers (or, theoretically, bare metal servers) on demand via an application programming interface, much like Amazon Web Services' EC2 compute service
  • Swift: an object storage service that operates much like Amazon's S3 service
  • Glance: a virtual machine image management service
Quantum is one of the new so-called incubation projects within OpenStack. The Quantum wiki page describes the project in the following terms:
Quantum is an incubated OpenStack project to provide "network connectivity as a service" between interface devices (e.g., vNICs) managed by other Openstack services (e.g., nova).
In other words, Quantum provides a way to manage links between the virtual network cards in your virtual machines, similar devices in network services (such as load balancers and firewalls), and other elements, such as gateways between network segments. It's a pretty straightforward service concept.
How does Quantum achieve this goal? Through a network abstraction, naturally. In part 1 of this series, I noted how the basic accepted model of the network in cloud computing is some simple network abstractions delivered by advanced physical networking infrastructure. Quantum addresses this model directly.
First, the abstraction itself. Quantum's abstraction, as pictured below, consists of a very simple combination of three basic components:
  • Network segments, which represent a connection space through which interfaces can communicate with each other.
  • Ports, which are simple abstractions of connection points to the network segment, and which have configurable traits that define what kinds of interfaces they support, who can connect to the port, and so on.
  • Virtual interfaces (or VIFs), which are the (typically virtual) network controllers that reside on a virtual machine, network service appliance, or anything else that wants to connect to a port on the network segment.
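How the three pieces compose can be shown with a minimal Python sketch. The class and attribute names here are mine for illustration, not Quantum's actual API: a network segment holds ports, and a VIF plugs into a port to join the segment.

```python
# Minimal data model for the Quantum-style L2 abstraction.
# Names are illustrative, not the real Quantum API.

class Network:
    """A network segment: a connection space holding ports."""
    def __init__(self, name):
        self.name = name
        self.ports = []

    def create_port(self):
        port = Port(self)
        self.ports.append(port)
        return port

class Port:
    """A connection point on a segment; a VIF may plug into it."""
    def __init__(self, network):
        self.network = network
        self.vif = None  # unattached until a VIF plugs in

    def plug(self, vif_id):
        self.vif = vif_id

# Two VMs' vNICs attached to the same segment can reach each other.
net = Network("web-tier")
p1, p2 = net.create_port(), net.create_port()
p1.plug("vm1-eth0")
p2.plug("vm2-eth0")
print([p.vif for p in net.ports])  # ['vm1-eth0', 'vm2-eth0']
```

Nothing in this model mentions a switch or a router; that mapping is exactly what the plug-in layer described next is for.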
The Quantum network abstraction
(Credit: James Urquhart)
Quantum itself is made up of two elements: the service itself, and a plug-in (typically vendor or technology specific).
Quantum's architecture
(Credit: James Urquhart)
The Quantum service manages network definitions and handles concerns such as verifying that users are authorized to perform a given function. It provides an API for the management of network segments, and an API for plug-ins.
A plug-in owns every action necessary to map the abstractions to the physical networking it is managing. Today, there are two plug-ins in the official Quantum release: one for Open vSwitch, and one for Cisco's Nexus switches via the 802.1Qbh standard. Other vendors are reportedly creating additional plug-ins to be released with the next OpenStack release.
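That service/plug-in split can be sketched as an interface contract: the service exposes one API and delegates every action to whichever backend it was configured with. The class and method names below are illustrative, not Quantum's actual plug-in interface:

```python
# Sketch of the separation of concerns: one service API, swappable
# technology-specific plug-ins. Names here are illustrative.

class QuantumPluginBase:
    """Contract every plug-in must fulfill."""
    def create_network(self, name):
        raise NotImplementedError

class OVSPlugin(QuantumPluginBase):
    """Would map the abstraction onto Open vSwitch bridges."""
    def create_network(self, name):
        return f"ovs: created bridge for {name}"

class NexusPlugin(QuantumPluginBase):
    """Would map the abstraction onto Nexus switch configuration."""
    def create_network(self, name):
        return f"nexus: provisioned segment for {name}"

class QuantumService:
    def __init__(self, plugin):
        self.plugin = plugin  # technology-specific backend

    def create_network(self, name):
        # The real service also handles authorization and bookkeeping
        # here before delegating to the plug-in.
        return self.plugin.create_network(name)

# The same API call works unchanged against either backend.
print(QuantumService(OVSPlugin()).create_network("web-tier"))
print(QuantumService(NexusPlugin()).create_network("web-tier"))
```

The caller never changes when the plug-in does, which is what lets an abstraction defined purely on core Quantum elements run on any Quantum deployment.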
It is important to note that this separation of concerns between abstraction management and abstraction implementation allows for any abstraction defined solely on core Quantum elements and APIs to be deployed on any Quantum instance, regardless of the plug-in and underlying networking technologies.
Of course, there are mechanisms to allow vendors and technologists to extend both the API and the abstractions themselves where innovation dictates the need. Quantum hopes to evolve its core API based in part on concepts identified through the success of various plug-in extensions. This feedback loop should allow for the relatively rapid evolution of the service and its APIs based on market needs.
Quantum isn't finished, though. Today's implementation is entirely focused on OSI Layer 2 mechanisms--the next version is going to focus on network service attachment (for things like load balancers, firewalls, and so on), as well as other critical Layer 3 concepts, such as subnets, addressing, and DNS.
You might be asking how Quantum relates to software-defined networking, the now hot trend in network architecture that separates control of the network from the devices that deliver packets to their destination. In part 3 of this series, I'll describe how technologies such as OpenFlow fit into the network virtualization picture.

Cloud, open source, and new network models: Part 1



What is the network's role in cloud computing? What are the best practices for defining and delivering network architectures to meet the needs of a variety of cloud workloads? Is there a standard model for networking in the cloud?
Last week's OpenStack developer summit in Boston was, by all accounts, a demonstration of the strength and determination of its still young open-source community. Nowhere was this more evident than in the standing-room-only sessions about the Quantum network services project.
I should be clear that though I worked on Quantum planning through Cisco's OpenStack program, I did not personally contribute code to the project. All opinions expressed here are mine, and not necessarily my employer's.
Why is Quantum important in the context of cloud networking? Because, I believe, it represents the model that makes the most sense in cloud infrastructure services today--a model that's increasingly become known as "virtual networking."
In this context, virtual networking refers to a new set of abstractions representing OSI Model Layer 2 concepts, like network segments and interfaces, and Layer 3 concepts, like gateways and subnets, while removing the user from any interaction with actual switches or routers. In fact, there are no direct representations of either switches or routers in most cloud networks.
The diagram below comes from my "Cloud and the Future of Networked Systems" talk, most recently given at Virtual Cloud Connect in late September:
(Credit: James Urquhart)
Here's what's interesting about the way cloud networking is shaking out:
  • From the perspective of application developers, the network is getting "big, flat, and dumb", with less complexity directly exposed to the application. The network provides connectivity, and--as far as the application knows--all services are explicitly delivered by virtual devices or online "cloud services."
    There may be multiple network segments connected together (as in a three-tier Web architecture), but in general the basic network segment abstractions are simply used for connectivity of servers, storage, and supporting services.
  • From the service provider's perspective, that abstraction is delivered on a physical infrastructure with varying degrees of intelligence and automation that greatly expands the deployment and operations options that the application owner has for that abstraction. Want cross-data-center networks? The real infrastructure can make that happen without the application having to "program" the network abstraction to do so.
Using an electric utility analogy (which I normally hate, but it works in this case), the L2 abstraction is like the standard voltage, current, and outlet specifications that all utilities must deliver to the home. It's a commodity mechanism, with no real differentiation within a given utility market.
The underlying physical systems capabilities (at the "real" L2 and L3), however, are much like the power generation and transmission market today. A highly competitive market, electric utility infrastructure differentiates on such traits as efficiency, cost, and "greenness." We all benefit from the rush to innovate in this market, despite the fact that the output is exactly the same from each option.
Is abstraction really becoming a standard model for cloud? Well, I would say there is still a lot of diversity in the specifics of implementation--both with respect to the abstraction and the underlying physical networking--but there is plenty of evidence that most clouds have embraced the overall concept. It's just hard to see most of the time, as network provisioning is usually implicit within some other service, like a compute service such as Amazon Web Services's EC2, or a platform service, such as Google App Engine.
Remember, public cloud services are multitenant, so they must find a way to share physical networking resources among many independent service consumers. The best way to address this is with some level of virtualization of network concepts--more than just VLANs (though VLANs are often used as a way to map abstractions to physical networks).
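One common way to realize that mapping is to hand each tenant-visible network segment its own backing VLAN ID from a shared pool. The toy allocator below is an assumed scheme for illustration, not any provider's actual implementation; real clouds use richer techniques, but the isolation idea is the same:

```python
# Toy multitenant mapping: each (tenant, network) pair gets a backing
# VLAN ID, so identically named networks of different tenants stay
# isolated on the shared physical infrastructure.

class VlanAllocator:
    def __init__(self, first=100, last=199):
        self.next_id = first
        self.last = last
        self.assigned = {}  # (tenant, network_name) -> VLAN ID

    def allocate(self, tenant, network_name):
        key = (tenant, network_name)
        if key not in self.assigned:
            if self.next_id > self.last:
                raise RuntimeError("VLAN pool exhausted")
            self.assigned[key] = self.next_id
            self.next_id += 1
        return self.assigned[key]

pool = VlanAllocator()
print(pool.allocate("acme", "web"))    # 100
print(pool.allocate("globex", "web"))  # 101 -- same name, isolated
print(pool.allocate("acme", "web"))    # 100 -- idempotent
```

The hard limit of 4,094 usable VLAN IDs is one reason large providers move beyond plain VLANs for this mapping.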
Most services give you a sense of controlling network ingress and egress to individual VMs or (in the case of platform services) applications. Amazon's Security Groups are an example of that. A few, such as GoGrid and Amazon's Virtual Private Cloud (pictured at the top of this post), give you a subnet-level abstraction.
In part 2 of this series, I'll explain how Quantum explicitly addresses this model, and the next steps that Quantum faces in expanding the applicability of its abstractions to real world scenarios. In the meantime, if you use cloud services, look closely at how networking is handled as a part of those services.
You'll see evidence of a new network model in the making.