Selecting the Correct Hypervisor
Boston Virtualization Deep Dive Day 2011
Tim Mackey, XenServer Evangelist
What to Expect Today ….
• Balanced representation of each hypervisor
• Where the sweet spots are for each vendor
• No discussion of performance
• No discussion of ROI and TCO
• What you should be thinking of with cloud
The Land Before Time …
• Virtualization meant mainframe/mini
• x86 was “real mode”
  • Until 1986, when the 80386DX changed the world
  • Now “protected mode” and rings of execution (typically ring 0 and ring 3; see the sketch below)
• Real mode OS vs. protected mode
  • x86 always boots to real mode (even today)
  • Kernel takes over at power-on and enables protection models
• Early kernels performed poorly in protected mode
• Focus was on application virtualization, not OS virtualization
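To make the ring model concrete, here is a minimal sketch (assuming x86 Linux and GCC inline assembly) that reads the current privilege level, carried in the low two bits of the CS selector; an ordinary user program prints ring 3:

```c
/* Minimal sketch, x86 Linux + GCC: the "ring" code runs in is the CPL,
 * stored in the low two bits of the CS segment register. User-space
 * programs report ring 3; only the kernel (or hypervisor) runs in ring 0. */
#include <stdio.h>

int main(void)
{
    unsigned short cs;
    __asm__ volatile ("mov %%cs, %0" : "=r" (cs));
    printf("Running in ring %d\n", cs & 3);
    return 0;
}
```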
VMware Creates Mainstream x86 Virtualization
• Early 2001: ESX released as the first type-1 hypervisor for x86
• ESX uses an emulation model known as “binary translation” to trap protected mode operations and execute them cleanly in the VMkernel (see the sketch below)
  • Heavily tuned over years of experience
  • Leverages 80386 protection rings and exception handlers
  • Can result in FASTER code execution
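As a toy illustration of the binary-translation idea only (the opcodes are invented; VMware's actual translator rewrites real x86 machine code and caches translated blocks), the monitor substitutes emulation calls for privileged instructions and leaves safe ones to run natively:

```c
/* Toy binary-translation loop: scan a "guest" instruction stream,
 * rewrite privileged operations as calls into the monitor, and let
 * safe instructions through. Opcodes are invented for the example. */
#include <stdio.h>

enum { OP_ADD, OP_LOAD, OP_CLI /* privileged: disable interrupts */ };

static void monitor_emulate_cli(void)
{
    /* The monitor updates the guest's *virtual* interrupt flag instead
     * of letting a real CLI execute with ring-0 effects. */
    printf("monitor: emulating CLI for the guest\n");
}

static void translate_and_run(const int *code, int len)
{
    for (int i = 0; i < len; i++) {
        switch (code[i]) {
        case OP_CLI:                 /* privileged -> call the monitor */
            monitor_emulate_cli();
            break;
        default:                     /* safe -> would run natively */
            printf("native: opcode %d\n", code[i]);
        }
    }
}

int main(void)
{
    int guest_code[] = { OP_ADD, OP_CLI, OP_LOAD };
    translate_and_run(guest_code, 3);
    return 0;
}
```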
Enter Hardware Assist
• 2005-2006: Intel and AMD introduce hardware assist (see the detection sketch below)
  • Idea was to take non-trappable privileged CPU op codes and isolate them
  • Introduced “user mode” and “kernel mode”
  • Introduced “Ring -1”
  • Binary translation could still be faster
• 2008-2009: Intel and AMD introduce memory assist
  • CPU op codes only addressed part of the problem
  • Memory paging seen as key to future performance
• Hardware + Moore’s Law > Software + Tuning
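A minimal sketch of detecting hardware assist from user space, assuming GCC on x86; the CPUID bits shown are the documented feature flags (VMX in leaf 1 for Intel VT-x, SVM in extended leaf 0x80000001 for AMD-V):

```c
/* Sketch: detect hardware virtualization assist via CPUID. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Intel VT-x: CPUID leaf 1, ECX bit 5 (VMX). */
    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
        printf("Intel VT-x (VMX) available\n");

    /* AMD-V: CPUID leaf 0x80000001, ECX bit 2 (SVM). */
    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
        printf("AMD-V (SVM) available\n");

    return 0;
}
```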
What About IO?
• Shared IO bottlenecks
  • VM density magnifies the problem
  • Throughput demands impact peer VMs
• Enter SR-IOV in 2010
  • Hardware is virtualized in hardware
  • Virtual Function presented to guest (see the sketch below)
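As a sketch of how Virtual Functions get created on Linux, the snippet below writes a VF count to the sriov_numvfs sysfs file; note this interface arrived in kernels newer than this 2011 talk, and the PCI address used here is a placeholder for a real SR-IOV capable NIC:

```c
/* Sketch: ask a physical function's driver to create SR-IOV VFs.
 * "0000:01:00.0" is a placeholder PCI address; run as root. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/bus/pci/devices/0000:01:00.0/sriov_numvfs";
    FILE *f = fopen(path, "w");

    if (!f) {
        perror("fopen");
        return 1;
    }
    fprintf(f, "4");   /* request 4 virtual functions from the PF driver */
    fclose(f);
    printf("Requested 4 VFs; new functions should appear in lspci\n");
    return 0;
}
```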
The Core Architectures
vSphere Hypervisor
• ESX
  • VMkernel provides the hypervisor
  • Service console is for management
  • IO is managed through emulated devices
• ESX is EOL; long live ESXi
  • Service console is gone
  • Management via API/CLI
  • VMkernel now includes management, agents and support consoles
  • Security vastly improved over ESX
XenServer
• Based on Open Source Xen • Requires hardware assist • Management through Linux control domain (dom0) • IO managed using split drivers
Hyper-V
• Requires hardware assist • Management through Windows 2008 “Parent partition” • VMs run as child partitions • Linux enabled using “Xenified” kernels • IO is managed through parent partition and enlightened drivers
KVM
• Requires hardware assist
• KVM modules are part of the Linux kernel
  • Converts Linux into a type-1 hypervisor
• Each VM is a process (see the sketch below)
  • Defined as “guest mode”
• IO managed via Linux and VirtIO
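A minimal sketch of the "each VM is a process" point, using the documented /dev/kvm ioctl interface; error handling is trimmed and no guest memory or code is set up:

```c
/* Sketch: under KVM, a VM is just file descriptors owned by an
 * ordinary Linux process, obtained from /dev/kvm via ioctl(). */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR);
    if (kvm < 0) { perror("open /dev/kvm"); return 1; }

    printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

    int vm = ioctl(kvm, KVM_CREATE_VM, 0);      /* the VM: an fd */
    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);   /* vCPU 0: another fd */

    printf("vm fd=%d, vcpu fd=%d; the guest enters \"guest mode\" via KVM_RUN\n",
           vm, vcpu);

    close(vcpu); close(vm); close(kvm);
    return 0;
}
```

With memory regions registered and a KVM_RUN loop added, this grows into a minimal VMM; that user-space role is what QEMU plays for KVM.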
Commercial Free Contenders for Your Budget
VMware vSphere Hypervisor (ESXi)
Manageability
• Single server management via vSphere client
Scalability
• 256 GB host RAM
• 2 physical cores
Key Features
• Thin provisioning
Guest Support
• Very broad OS support
Costs
• Edition and feature based licensing
• Support is a percentage of sale
Microsoft Hyper-V Server R2 SP1
Manageability
• Single server management via Remote Server Admin Tools
Scalability
• 1 TB host RAM
• 8 logical CPUs per host
Key Features
• Host clustering
• Live migration
Guest Support
• Windows Vista and Windows Server 2003 and higher
• RHEL 5.2 and SLES 10 and higher
Costs
• Edition and VM based pricing
• Support and SA extra
Red Hat Enterprise Virtualization (KVM)
Manageability
• Centralized multi-server management
• Resource pools
Scalability
• 1 TB host RAM; 256 GB guest RAM
• 96 logical CPUs per host; 16 vCPUs per guest
Key Features
• All RHEL 5 devices and storage types
• Memory overcommit (KSM)
Guest Support
• Windows XP and Windows Server 2003 and higher
• RHEL 3 and higher
Costs
• Annual support options priced per six sockets
Oracle VM
Manageability
• Centralized multi-server management
• Resource pools
Scalability
• 1 TB host RAM; 32 GB guest RAM
• 128 logical CPUs per host; 32 vCPUs per guest
Key Features
• Secure live migration using shared storage (NFS, OCFS2, iSCSI)
• Load balancing and cluster high availability
Guest Support
• Windows 2000 and higher
• Oracle Linux, RHEL
Costs
• Annual per host support options priced per socket
Citrix XenServer
Manageability
• Centralized multi-server management
• Resource pools
Scalability
• 512 GB host RAM; 128 GB guest RAM
• 64 logical CPUs per host; 16 vCPUs per guest
Key Features
• Live migration using shared storage (NFS, iSCSI, Fibre Channel)
• VM snapshot and revert
Guest Support
• Windows XP and higher
• CentOS, Debian, Oracle, SuSE, RHEL
Costs
• Edition based per host licensing
• Support is incident based
Hypervisor is now a commodity!!
Maximizing Your Budget
• Single hypervisor model is flawed
  • Wasted dollars, wasted performance
• Spend your resources where you need to
  • OS compatibility
  • VM density
  • IO performance
  • Application support models
  • Application availability
Deconstructing Key Functionality
Memory Over Commit
• Objective: Increase VM density and efficiently use host RAM
• Risks: Performance and security
• Options: Ballooning, page sharing, compression, swap

vSphere 4.1
• Ballooning method: Starts large; Windows and Linux
• Page sharing: 4k pages only with hash; latent coalesce with CoW
• Compression: Compression of memory during oversubscribe
• Performance/Security: Hash collisions; recovery from swap; compatible page scans

XenServer 5.6
• Ballooning method: Starts large; Windows and Linux
• Page sharing: None
• Compression: None
• Performance/Security: Doesn’t resize up

Hyper-V SP1
• Ballooning method: Starts small; Windows only
• Page sharing: None
• Compression: None

RHEV (KVM)
• Ballooning method: Linux only
• Page sharing: Kernel Samepage Merging; CoW (see the sketch below)
• Compression: None
• Performance/Security: Memory space growth; B-tree collisions; can use swap
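As a sketch of how RHEV's Kernel Samepage Merging is driven from user space, the snippet below marks a memory region as mergeable the way QEMU marks guest RAM; it assumes a Linux kernel built with CONFIG_KSM and the KSM thread switched on:

```c
/* Sketch: register memory with KSM. Identical pages in MERGEABLE
 * regions are coalesced copy-on-write by the kernel's KSM thread. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define GUEST_RAM (16 * 1024 * 1024)   /* 16 MB stand-in for guest RAM */

int main(void)
{
    void *ram = mmap(NULL, GUEST_RAM, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ram == MAP_FAILED) { perror("mmap"); return 1; }

    memset(ram, 0x42, GUEST_RAM);      /* identical content -> mergeable */

    if (madvise(ram, GUEST_RAM, MADV_MERGEABLE) != 0)
        perror("madvise(MADV_MERGEABLE)");   /* kernel without KSM? */
    else
        printf("Region registered with KSM; merging happens asynchronously\n");

    return 0;
}
```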
Load Balancing
• Objective: Ensure optimal performance of guests and hosts
• Risks: Performance and security
• Options: Input metrics, reporting, variable usage models (a toy placement sketch follows the comparison below)

vSphere 4.1
• Feature name: Dynamic Resource Scheduling
• Input metrics: CPU; memory
• Reporting: None
• Control points: Host affinity/anti-affinity; initial placement 100%

XenServer 5.6
• Feature name: Workload Balancing
• Input metrics: CPU; memory; disk IO R/W; network IO R/W
• Reporting: Pool/host; VM; audit
• Control points: Consolidation; schedulable; historical placement

Hyper-V R2
• Feature name: PRO (SCVMM)
• Input metrics: CPU; memory
• Reporting: SCVMM + SCOM
• Control points: Initial placement 100%

RHEV (KVM)
• Feature name: Load Balancing
• Input metrics: None
• Reporting: None
• Control points: N/A
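As a conceptual sketch only (the hosts, weights and scoring rule are invented, not how DRS or Workload Balancing actually decide), metrics-driven initial placement reduces to ranking hosts by measured headroom:

```c
/* Toy initial-placement scorer: pick the host with the most free CPU
 * and memory headroom. Real schedulers add IO, history and policy. */
#include <stdio.h>

struct host { const char *name; double cpu_free; double mem_free; };

int main(void)
{
    struct host hosts[] = {
        { "host-a", 0.20, 0.35 },
        { "host-b", 0.55, 0.40 },
        { "host-c", 0.30, 0.70 },
    };
    int best = 0;

    for (int i = 1; i < 3; i++) {
        /* Equal weighting of CPU and memory headroom. */
        double s  = 0.5 * hosts[i].cpu_free + 0.5 * hosts[i].mem_free;
        double sb = 0.5 * hosts[best].cpu_free + 0.5 * hosts[best].mem_free;
        if (s > sb)
            best = i;
    }
    printf("Place new VM on %s\n", hosts[best].name);
    return 0;
}
```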
Virtual Networking
• Objective: Support data center and cloud networking
• Risks: Data leakage and performance
• Requirement: Make server virtualization compatible with networking

vSphere 4.1
• Feature name: Virtual Distributed Switch
• Key features: Centralized management; full Cisco Nexus features
• Reporting: NetFlow v9
• Dependencies: Cisco Nexus 1000V

XenServer 5.6 FP1
• Feature name: Distributed Virtual Switch
• Key features: Centralized management; RSPAN; QoS; ACLs
• Reporting: NetFlow v5
• Dependencies: None

Hyper-V R2
• Feature name: Windows network stack
• Key features: N/A
• Reporting: N/A
• Dependencies: N/A

RHEV (KVM)
• Feature name: Linux bridge (see the sketch below)
• Key features: N/A
• Reporting: N/A
• Dependencies: N/A
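For the RHEV/KVM row, a minimal sketch of creating the Linux bridge with the classic bridge ioctls (the same calls brctl makes); "br0" and "eth0" are placeholders and root privileges are required:

```c
/* Sketch: create bridge br0 and enslave eth0, as brctl would. */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/sockios.h>

int main(void)
{
    int fd = socket(AF_LOCAL, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    if (ioctl(fd, SIOCBRADDBR, "br0") < 0)       /* brctl addbr br0 */
        perror("SIOCBRADDBR");

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "br0", IFNAMSIZ - 1);
    ifr.ifr_ifindex = if_nametoindex("eth0");    /* brctl addif br0 eth0 */
    if (ifr.ifr_ifindex == 0 || ioctl(fd, SIOCBRADDIF, &ifr) < 0)
        perror("SIOCBRADDIF");

    return 0;
}
```

A guest's tap device is then added to the bridge the same way, which is how KVM VMs share the host's physical network.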
The Sweet Spots
VMware vSphere 4.1
Key play: Legacy server virtualization
• Large operating system support
• Large eco-system => experienced talent readily available
Bonus opportunities
• Feature rich data center requirements
• Cloud consolidation through Cisco Nexus 1000V
Weaknesses
• Complex licensing model
• Reliance on SQL Server management database
Microsoft Hyper-V R2 SP1
Key play: Desktop virtualization
• VM density is key
• Memory over commit + deep understanding of Windows 7 => success
Bonus opportunities
• Microsoft server software
• Ease of management for System Center customers
Weaknesses
• Complex desktop virtualization licensing model
• Complex setup at scale
• “Patch Tuesday” reputation
Red Hat KVM
Key plays: Linux virtualization
• RHEL data centers
Weaknesses
• Limited enterprise level feature set
• Niche deployments and early adopter syndrome
• Support only model may limit feature set
Oracle VM
Key play: Hosted Oracle applications
• Oracle only supports its products on OVM
Bonus opportunities
• Server virtualization
• Applications requiring application level high availability
• Data centers requiring secure VM motion
Weaknesses
• Limited penetration outside of Oracle application suite
• Support only model may limit future development
Citrix XenServer 5.6 FP1
Key play: Cloud platforms
• Largest public cloud deployments
Bonus opportunities
• Citrix infrastructure
• Linux data centers
• General purpose virtualization
• Windows XP/Vista desktop virtualization
Weaknesses
• Application support statements
• HCL gaps
Beyond the Data Center and into the Cloud
Hybrid Cloud
Traditional Datacenter
• On premise
• High fixed cost
• Full control
• Known security

Hybrid Cloud
• On/off premise
• Low utility cost
• Self-service
• Fully elastic
• Trusted security
• Corporate control

Public Cloud
• Off premise
• Low utility cost
• Self-service
• Fully elastic
Transparency is a Key Requirement
Traditional Datacenter
• On premise
• High fixed cost
• Full control
• Known security

Hybrid Cloud Issues
• Disparate networks
• Disjoint user experience
• Unpredictable SLAs
• Different locations
• Corporate control

Public Cloud
• Off premise
• Low utility cost
• Self-service
• Fully elastic
Enabling Transparency Enables Hybrid Cloud
[Diagram: OpenCloud Bridge linking the traditional datacenter to a cloud provider]
• Network transparency for disparate networks
• Services transparency to make SLAs predictable
• Location transparency to allow anywhere access
• Latency transparency to preserve the same user experience
OpenCloud Bridge Use-Case
[Diagram: an on-premise datacenter hypervisor (IP 192.168.1.100, subnet 255.255.254.0, requiring DB, Web and LDAP services) is linked through private/public switches and vSwitches to a cloud hypervisor hosting the LDAP and DB servers (storage network 10.2.1.0, subnet 255.255.254.0); the bridge endpoints are NetScaler VPX instances. See the subnet sketch below.]
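A small sketch of the subnet math in the diagram: mask 255.255.254.0 is a /23, so 192.168.1.100 sits on network 192.168.0.0/23; the bridge's job is to stretch that network unchanged across to the cloud side so guests keep their addressing:

```c
/* Sketch: compute the network a host belongs to with (ip & mask),
 * using the diagram's premise address and /23 mask. */
#include <stdio.h>
#include <arpa/inet.h>

int main(void)
{
    struct in_addr ip, mask, net;
    inet_pton(AF_INET, "192.168.1.100", &ip);
    inet_pton(AF_INET, "255.255.254.0", &mask);

    net.s_addr = ip.s_addr & mask.s_addr;   /* -> 192.168.0.0 (/23) */
    printf("192.168.1.100/23 is on network %s\n", inet_ntoa(net));
    return 0;
}
```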
It’s Your Budget … Spend it Wisely
Single Vendor
• Vendor lock-in great for vendor
• Beware product lifecycles and tool set changes

ROI Can Be Manipulated
• ROI calculators always show vendor author as best
• Use your own numbers

Understand Support Model
• Over buying is costly; get what you need
• Support call priority with tiered models

Use Correct Tool
• Some projects have requirements best suited to a specific tool
• Understand deployment and licensing impact

Leverage Costly Features as Required
• Blanket purchases benefit only vendor
• Chargeback to project for feature requirements
Shameless XenServer Plug
• Social Media
  • Twitter: @XenServerArmy
  • Facebook: http://www.facebook.com/CitrixXenServer
  • LinkedIn: http://www.linkedin.com/groups?mostPopular=&gid=3231138
• Major Events
  • XenServer Master Class – next edition March 23rd
  • Citrix Synergy – San Francisco, May 25-27, 2011 (http://citrixsynergy.com)