
Microsoft Cloud OS
Public Cloud | Service Providers | Private Cloud
Azure Virtual Machines · Windows Azure Pack
Consistent Platform: DEVELOPMENT · MANAGEMENT · DATA · IDENTITY · VIRTUALIZATION
Microsoft: To provide our valued customers the best cloud whenever and wherever it makes business sense.
Azure feature releases (timeline graphic): Offline Operations, Remote Debug, Tag Expressions, Site to Site Virtual Network, Xamarin integration, Stop without Billing, Traffic Manager, Large Memory SKU, Hyper-V Recovery, SQL/SharePoint/BizTalk Images, Cloud Services, SDK 2.0, HDInsight, Mercurial Deployment, Windows Phone Support, Distributed Cache, Scheduler, Partitioned Queues/Topics, Per Minute Billing, Dynamic Remote Desktop, Log Streaming, AutoScale, Android Support, IaaS, HTML 5/CORS, Active Directory, IP and SNI SSL, Custom Mobile API, BizTalk Services 2013, Hyper-V Disaster Recovery Support, HTTP Logs to Storage, IP/DDOS Protection, Multi-Factor Auth, MSDN Dev/Test, Storage Analytics, Integration, Delete Disks, WebSockets, AMQP Support, iOS Notification Support, VIP ACLs, New VM Gallery, PowerBI, Read-Only Secondary Storage, Windows Server Backup, Queue Geo Replication, Mobile Services, Manage Azure in AD, New Relic, Notification Hubs, Windows 8, Git Source Control, B2B/EDI and EAI Adapters, AD Management Portal, CORS/JSON Storage Support, VOD Streaming + Encoding, AutoScale/Monitoring, Web Sites, Point to Site, Media Services, Message Pump Programming Model, Sync, Notification Support, Software VPN, VS Online, Import/Export Hard Drives.
Azure footprint:
• 16 regions worldwide in 2014
• >57% of the Fortune 500 using Azure
• >250k active websites
• >20 trillion storage objects
• >2 million requests/sec
• >300 million AD users
• >13 billion authentications/wk
• Greater than 1,000,000 SQL Databases in Azure
• >1 million developers registered with Visual Studio Online
Olympics: NBC Sports
• Live video encoding and streaming (Web + Mobile)
• Over 100 million viewers in 22 countries and 4 continents
• More than 100TB of storage; over 500 billion storage transactions
• The Sochi Olympics were powered worldwide by Azure & Hyper-V
• World record: 2.1 million concurrent HD viewers during the USA vs. Canada hockey match
European data protection authorities approved Microsoft's cloud commitments in a joint letter: Microsoft's enterprise cloud contracts, which incorporate the EU "model clauses," meet the standards of the Data Protection Directive, covering Microsoft Azure, Office 365, Microsoft Dynamics CRM, and Windows Intune.
http://blogs.technet.com/b/microsoft_blog/archive/2014/04/10/privacy-authorities-across-europeapprove-microsoft-s-cloud-commitments.aspx
Massive scalability for the most demanding workloads

Enterprise-class scale for key workloads:

Hosts
• Support for up to 320 logical processors & 4TB physical memory per host
• Support for up to 1,024 virtual machines per host

Clusters
• Support for up to 64 physical nodes & 8,000 virtual machines per cluster

Virtual Machines
• Support for up to 64 virtual processors and 1TB memory per VM
Confidently Virtualize Mission Critical Workloads with Hyper-V

Highest levels of performance for key Microsoft workloads:

SQL Server 2012
• 64 vCPU support drove a 6x performance increase over the previous version of Hyper-V
• 6.3% overhead compared with physical

Exchange 2013
• Virtualized 48,000 simulated users on a single Hyper-V host across 12 VMs, with low response times

SharePoint 2013
• Scaled to over 2 million heavy users at 1% concurrency, across 5 VMs on a single Hyper-V host

[Charts: SQL Server 2012 virtual CPU scalability on Windows Server 2012 with Hyper-V, OLTP workload (single VM, 64GB of RAM) plotting transactions/sec and average transaction response time (sec) against virtual processors; Exchange workload scalability (3 vCPU, 16GB RAM per VM, JetStress 2010) plotting Exchange 2013 mailboxes and DB read response time (ms) against VMs per host; SharePoint workload scalability (8 vCPU, 12GB RAM per WFE VM) plotting heavy users (1% concurrency) and average response time (sec) against web front ends.]
http://blogs.msdn.com/b/saponsqlserver/archive/2013/06/27/sap-hyper-v-benchmark-released.aspx
https://blogs.oracle.com/cloud/entry/oracle_and_microsoft_join_forces
| Resource | Windows Server 2012 R2 Hyper-V | vSphere Hypervisor | vSphere 5.5 Enterprise Plus |
|---|---|---|---|
| Host: Logical Processors | 320 | 320 | 320 |
| Host: Physical Memory | 4TB | 4TB | 4TB |
| Host: Virtual CPUs per Host | 2,048 | 4,096 | 4,096 |
| VM: Virtual CPUs per VM | 64 | 8 | 64¹ |
| VM: Memory per VM | 1TB | 1TB | 1TB |
| VM: Active VMs per Host | 1,024 | 512 | 512 |
| VM: Guest NUMA | Yes | Yes | Yes |
| Cluster: Maximum Nodes | 64 | N/A² | 32 |
| Cluster: Maximum VMs | 8,000 | N/A² | 4,000 |

1. vSphere 5.5 Enterprise Plus is the only vSphere edition that supports 64 vCPUs per VM. Enterprise edition supports 32 vCPUs per VM, with all other editions supporting 8 vCPUs per VM.
2. For clustering/high availability, customers must purchase vSphere.

vSphere Hypervisor / vSphere 5.x Ent+ information: http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf, http://www.vmware.com/products/vsphere-hypervisor/faq.html, http://www.vmware.com/files/pdf/vsphere/VMwarevSphere-Platform-Whats-New.pdf
Industry Leading Storage I/O Performance
• VM storage performance on par with native
• Performance scales linearly with an increase in virtual processors
• Windows Server 2012 R2 Hyper-V can virtualize even larger OLTP workloads with confidence
Inbox solution for Windows to manage storage

• Virtualize storage by grouping industry-standard disks into storage pools
• Pools are sliced into virtual disks, or Spaces. Spaces can be thin provisioned, and can be striped across all physical disks in a pool. Mirroring or parity are also supported.
• Windows then creates a volume on the Space, and allows data to be placed on the volume
• Spaces can use DAS only (local to the chassis, or via SAS)

[Diagram: DAS disks → storage pools → Spaces → volumes (F:\)]
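The disk → pool → Space → volume flow above maps directly onto the in-box Storage cmdlets. A minimal sketch (the pool and Space names, the sizes, and the assumption of blank poolable DAS disks are all illustrative):

```powershell
# Find blank disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Group them into a pool on the local Storage Spaces subsystem
New-StoragePool -FriendlyName "Pool1" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# Carve a thinly provisioned, mirrored Space out of the pool
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Space1" `
    -Size 500GB -ProvisioningType Thin -ResiliencySettingName Mirror

# Surface the Space as an NTFS volume
Get-VirtualDisk -FriendlyName "Space1" | Get-Disk |
    Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS
```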
Optimizing storage performance on Spaces

• Disk pool consists of both high-performance SSDs and higher-capacity HDDs
• Tiering: hot data is moved automatically to SSD and cold data to HDD using sub-file-level data movement
• Write-back cache: SSDs absorb the random writes that are typical in virtualized deployments
• Pinning: admins can pin hot files to SSDs manually to drive high performance
• New PowerShell cmdlets are available for the management of storage tiers

[Diagram: a Storage Space with an SSD tier (400GB eMLC SAS SSD) holding hot data and an HDD tier (4TB 7200RPM SAS) holding cold data]
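The tiering and pinning cmdlets the slide refers to look roughly like this (pool, tier, and file names plus the tier sizes are illustrative):

```powershell
# Define an SSD tier and an HDD tier within an existing pool
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

# Create a tiered, mirrored Space with a 1GB write-back cache
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredSpace" `
    -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 900GB `
    -ResiliencySettingName Mirror -WriteCacheSize 1GB

# Pin a hot VHDX to the SSD tier; the move happens on the next optimization pass
Set-FileStorageTier -FilePath "E:\VMs\Hot.vhdx" -DesiredStorageTierFriendlyName "SSDTier"
Optimize-Volume -DriveLetter E -TierOptimize
```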
Store Hyper-V VMs on SMB 3.0 File Shares

\\SOFSFileServerName\VMs

• Simplified provisioning & management; low OPEX and CAPEX
• Adding multiple NICs in file servers unlocks SMB Multichannel, enabling higher throughput and reliability. Requires NICs of the same type and speed.
• Using RDMA-capable NICs unlocks SMB Direct, offloading network I/O processing to the NIC
• SMB Direct provides high throughput and low latency, and can reach 40Gbps (RoCE) and 56Gbps (InfiniBand) speeds
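Provisioning a VM onto an SMB 3.0 share is just a matter of pointing the VM paths at the UNC path; the share, server, and account names below are illustrative:

```powershell
# On the file server: publish the share, granting the Hyper-V hosts'
# computer accounts full access
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\VMs" `
    -FullAccess "CONTOSO\HV01$", "CONTOSO\HV02$", "CONTOSO\Hyper-V Admins"

# On the Hyper-V host: create a VM whose configuration and disk live on the share
New-VM -Name "VM01" -MemoryStartupBytes 2GB -Path "\\SOFS\VMs" `
    -NewVHDPath "\\SOFS\VMs\VM01\VM01.vhdx" -NewVHDSizeBytes 60GB

# Check which NICs SMB Multichannel is actually spreading traffic across
Get-SmbMultichannelConnection
```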
Scale-Out File Server

Storage stack: physical disks → storage pools → storage spaces → scale-out file server

Provides continuous availability for key file shares
• Continuous availability during failures of cluster nodes
• Failover transparent to server applications, with zero downtime and only a small I/O delay
• Support for planned moves, load balancing, operating system restart, unplanned failures, and client redirection (scale-out only)
• Resilient for file and directory operations
• All servers involved should run Windows Server 2012 or later

[Diagram: Hyper-V compute nodes (scale-out compute) connect over Ethernet via SMB3 (fault tolerant, multi-channel, auto-scaling, encryption, RDMA optional) to a scale-out file server cluster of file servers (e.g. Dell R720) backed by block storage (e.g. Dell MD1220) for scale-out storage. Shares such as \\foo\share, \\foo1\share1, and \\foo2\share1 remain available across node failures.]
HIPAA Breach: Stolen Hard Drives

March 2012: a large medical provider in Tennessee paid $1.5 million to the US Dept. of Health & Human Services after the theft of 57 hard drives that contained electronic protected health information (ePHI) for over 1 million individuals.

The facility was secured by: security patrols, biometric scanners, keycard scanners, magnetic locks, and keyed locks.

"71% of health care organizations have suffered at least one data breach within the last year"
- Study by Veriphyr
In-box Disk Encryption to Protect Sensitive Data

Data protection, built in
• Supports Used Disk Space Only encryption
• Integrates with the TPM chip
• Network Unlock & AD integration

Multiple disk type support
• Direct Attached Storage (DAS)
• Traditional SAN LUN
• Cluster Shared Volumes
• Windows Server 2012 R2 File Server Share
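Encrypting a Cluster Shared Volume with the in-box BitLocker cmdlets might look like this (resource and account names are illustrative; the volume is placed in maintenance mode while encryption is enabled):

```powershell
# Put the CSV into maintenance mode
Suspend-ClusterResource -Name "Cluster Disk 1"

# Enable BitLocker with a recovery password, then add an AD protector so
# every node of the cluster (via its cluster account) can unlock the volume
Enable-BitLocker -MountPoint "C:\ClusterStorage\Volume1" -RecoveryPasswordProtector
Add-BitLockerKeyProtector -MountPoint "C:\ClusterStorage\Volume1" `
    -AdAccountOrGroupProtector -AdAccountOrGroup "CONTOSO\HVCluster$"

# Bring the CSV back online
Resume-ClusterResource -Name "Cluster Disk 1"
```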
| Capability | Windows Server 2012 R2 Hyper-V | vSphere Hypervisor | vSphere 5.5 Enterprise Plus |
|---|---|---|---|
| Host iSCSI/FC Support | Yes | Yes | Yes |
| 3rd Party Multipathing (MPIO) | Yes | No | Yes (VAMP)¹ |
| SAN Offload Capability | Yes (ODX) | No | Yes (VAAI)² |
| Storage Virtualization | Yes (Spaces) | No | Yes (vSAN) |
| Storage Tiering | Yes | No | Yes³ |
| Data Deduplication | Yes | No | No |
| Network File System Support | Yes (SMB 3.0) | Yes (NFS) | Yes (NFS) |
| Physical Disk Encryption | Yes | No | No |

1. vSphere API for Multipathing (VAMP) is only available in Enterprise & Enterprise Plus editions of vSphere 5.5.
2. vSphere API for Array Integration (VAAI) is only available in Enterprise & Enterprise Plus editions of vSphere 5.5.
3. vSphere Flash Read Cache has write-through caching only, so only reads are accelerated. vSAN also has SSD caching capabilities built in, acting as a read cache & write buffer.

vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf, http://www.vmware.com/files/pdf/vsphere/VMware-vSphere-Platform-Whats-New.pdf,
http://www.vmware.com/products/vsphere/compare.html, http://www.vmware.com/files/pdf/vSphere_55_Flash_Read_Cache_Whats_New_WP.pdf, http://www.vmware.com/products/virtual-san/features.html,
Access Fibre Channel SAN data from a virtual machine
• Unmediated access to a storage area network (SAN)
• Hardware-based I/O path to the virtual hard disk stack
• N_Port ID Virtualization (NPIV) support
• A single Hyper-V host connected to different SANs
• Up to four virtual Fibre Channel adapters per virtual machine
• Multipath I/O (MPIO) functionality
• Supports Live Migration
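A virtual SAN and the per-VM virtual Fibre Channel adapters are created with the Hyper-V module; the SAN name, VM name, and WWNs below are placeholders:

```powershell
# Bind a virtual SAN to the host HBA ports identified by these WWNs
New-VMSan -Name "Production" `
    -WorldWideNodeName "C003FF0000FFFF00" -WorldWidePortName "C003FF5778E50002"

# Attach a virtual Fibre Channel adapter (up to four per VM) and confirm it
Add-VMFibreChannelHba -VMName "SQLVM01" -SanName "Production"
Get-VMFibreChannelHba -VMName "SQLVM01"
```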
Complete Flexibility for Deploying App-Level HA
• Full support for running clustered workloads on a Hyper-V host cluster
• Guest clusters that require shared storage can utilize software iSCSI, virtual FC, or SMB-based storage
• Full support for Live Migration of guest cluster nodes
• Full support for Dynamic Memory of guest cluster nodes
• Restart Priority, Possible & Preferred Ownership, and AntiAffinityClassNames help ensure optimal operation
[Diagram: a guest cluster running on a physical Hyper-V cluster. On a host failure, the guest cluster node restarts on another host; guest cluster nodes are supported with Live Migration.]
| Capability | Windows Server 2012 R2 Hyper-V | vSphere Hypervisor | vSphere 5.5 Enterprise Plus |
|---|---|---|---|
| Max Guest Cluster Size (iSCSI) | 64 Nodes | 5¹ | 5¹ |
| Max Guest Cluster Size (Fibre) | 64 Nodes | 5¹ | 5¹ |
| Max Guest Cluster Size (File Based) | 64 Nodes | 5¹ | 5¹ |
| Guest Clustering with Live Migration | Yes | N/A | No² |
| Guest Clustering with Dynamic Memory | Yes | No³ | No³ |

1. Guest clusters can be created on vSphere 5.1 and 5.5 but are supported to a maximum of just 5 nodes in that guest cluster, regardless of storage.
2. VMware does not support the use of vMotion with a VM that is part of a guest cluster.
3. VMware does not support the use of memory overcommit with a VM that is part of a guest cluster.

vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf, http://pubs.vmware.com/vsphere55/index.jsp?topic=%2Fcom.vmware.vsphere.mscs.doc%2FGUID-6BD834AE-69BB-4D0E-B0B6-7E176907E0C7.html, http://kb.vmware.com/kb/1037959
Customers require/demand high availability for their applications

Infrastructure administrators are focused on uptime, agility, performance, and lowering cost. VMs and services are provisioned, but physical resources are opaque to users, and guest clustering traditionally requires opening a hole to present a LUN from the physical infrastructure.

[Diagram: tenant VMs/services running on cloud service provider infrastructure: compute, storage, networking]

Solving the Challenge of Guest Clustering

Shared VHDX virtual disks: virtual disks that can be shared without presenting real LUNs to tenants
• Enables guest clustering
• Eases operations and management
• Provides a business opportunity
Guest Clustering No Longer Bound to Storage Topology

• VHDX files can be presented to multiple VMs simultaneously, as shared storage
• The VM sees a shared virtual SAS disk
• An unrestricted number of VMs can connect to a shared VHDX file
• Utilizes SCSI persistent reservations
• The VHDX can reside on a Cluster Shared Volume on block storage, or on file-based storage
• Supports both dynamic and fixed VHDX

Flexible choices for placement of Shared VHDX
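Wiring a shared VHDX into two guest cluster nodes is a short job with the Hyper-V cmdlets; the path and VM names are illustrative:

```powershell
# Create a fixed-size VHDX on a Cluster Shared Volume
New-VHD -Path "C:\ClusterStorage\Volume1\Shared.vhdx" -Fixed -SizeBytes 100GB

# Attach it to both guest cluster nodes with persistent reservations
# enabled, which is what marks the disk as shared
Add-VMHardDiskDrive -VMName "Node1" -Path "C:\ClusterStorage\Volume1\Shared.vhdx" `
    -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "Node2" -Path "C:\ClusterStorage\Volume1\Shared.vhdx" `
    -SupportPersistentReservations
```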
| Capability | Windows Server 2012 R2 Hyper-V | vSphere Hypervisor | vSphere 5.5 Enterprise Plus |
|---|---|---|---|
| VM Virtual Fibre Channel Support | Yes | Yes | Yes |
| VM Virtual Fibre Channel MPIO Support | Yes | No | No |
| Advanced Format Drives (4K) Support | Yes | No | No |
| Maximum Virtual Hard Disk Size | 64TB | 62TB¹ | 62TB¹ |
| Maximum Pass-Through Disk Size | 256TB+² | 64TB | 64TB |
| Online Checkpoint/Snapshot Merge | Yes | Yes | Yes |
| Online Virtual Disk Resize | Yes, Grow & Shrink | Grow Only | Grow Only |
| Guest Clustering with Shared Virtual Disk for Production Use | Yes | No³ | No³ |

1. VMDK is limited to 62TB because VMFS is limited to 64TB, and additional room on top of the VMDK is required for snapshots, management, etc.
2. The maximum size of a physical disk attached to a virtual machine is determined by the guest operating system and the chosen file system within the guest. More recent Windows Server operating systems support disks in excess of 256TB.
3. VMware supports guest clusters using a shared virtual disk, but those guest cluster nodes must reside on the same physical host, defeating the object of clustering and resilience.
vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf, http://www.vmware.com/files/pdf/vsphere/VMware-vSphere-Platform-Whats-New.pdf,
http://www.vmware.com/products/vsphere/compare.html, http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004047, http://pubs.vmware.com/vsphere55/index.jsp?topic=%2Fcom.vmware.vsphere.vm_admin.doc%2FGUID-A42FA14C-7D67-44A7-823B-854AA9F5FD3E.html
Intelligently Accelerates Live Migration Transfer Speed

• Utilizes available CPU resources on the host to perform compression
• Compressed memory is sent across the network faster and decompressed on the target host
• Operates on networks with less than 10 gigabit of bandwidth available
• Enables a 2x improvement in Live Migration performance
• Enabled by default, but will only operate if there is spare CPU available to compress the VM memory

[Diagram: live migration setup; modified memory pages are compressed, then transferred; the storage handle is moved.]
Harness RDMA to Accelerate Live Migration Performance

• SMB Multichannel uses multiple NICs for increased throughput and resiliency
• Remote Direct Memory Access delivers a low-latency network, low CPU utilization, & higher bandwidth
• Supports speeds up to 56Gb/s
• Windows Server 2012 R2 supports RoCE, iWARP & InfiniBand RDMA solutions
• Delivers the highest performance for Live Migrations
• Cannot be used with Compression

[Diagram: live migration setup; modified memory pages are transferred at high speed; the storage handle is moved.]
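Both transfer paths are selected per host with Set-VMHost: Compression is the Windows Server 2012 R2 default, and SMB hands the transfer to SMB Direct/Multichannel when capable NICs are present. A sketch:

```powershell
# Allow the host to send and receive live migrations
Enable-VMMigration

# Use compression (the 2012 R2 default)...
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

# ...or hand the memory transfer to SMB, picking up RDMA and Multichannel
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB

# Raise the number of simultaneous live migrations if the network can take it
Set-VMHost -MaximumVirtualMachineMigrations 8
```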
| Capability | Windows Server 2012 R2 Hyper-V | vSphere Hypervisor | vSphere 5.5 Enterprise Plus |
|---|---|---|---|
| VM Live Migration | Yes | No¹ | Yes² |
| VM Live Migration with Compression | Yes | N/A | No |
| VM Live Migration using RDMA | Yes | N/A | No |
| 1GB Simultaneous Live Migrations | Unlimited³ | N/A | 4 |
| 10GB Simultaneous Live Migrations | Unlimited³ | N/A | 8 |
| Live Storage Migration | Yes | No⁴ | Yes⁵ |
| Shared Nothing Live Migration | Yes | No | Yes⁶ |
| Live Migration Upgrades | Yes | N/A | Yes |

1. Live Migration (vMotion) is unavailable in the vSphere Hypervisor; vSphere 5.5 is required.
2. Live Migration (vMotion) and Shared Nothing Live Migration (Enhanced vMotion) are available in Essentials Plus & higher editions of vSphere 5.5.
3. Within the technical capabilities of the networking hardware.
4. Live Storage Migration (Storage vMotion) is unavailable in the vSphere Hypervisor.
5. Live Storage Migration (Storage vMotion) is available in Standard, Enterprise & Enterprise Plus editions of vSphere 5.5.
6. Shared Nothing Live Migration is only accessible via the vSphere Web Client, not core vCenter.
vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf, http://www.vmware.com/products/vsphere/compare.html
Layer-2 Network Switch for Virtual Machine Connectivity

Extensible Switch
• Virtual Ethernet switch that runs in the management OS of the host
• Exists on Windows Server Hyper-V, and Windows Client Hyper-V
• Managed programmatically
• Extensible by partners and customers
• Virtual machines connect to the extensible switch with their virtual network adapter
• Can bind to a physical NIC or team
• Bypassed by SR-IOV
Layer-2 Network Switch for Virtual Machine Connectivity

Granular In-box Capabilities
• ARP/ND Poisoning (spoofing) protection
• DHCP Guard protection
• Virtual Port ACLs
• Trunk Mode to VMs
• Network Traffic Monitoring
• Isolated (Private) VLANs (PVLANs)
• PowerShell & WMI interfaces for extensibility
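The in-box protections listed above are all per-vNIC settings exposed through the Hyper-V module; a sketch with illustrative VM names and addresses:

```powershell
# Block rogue DHCP servers and rogue router advertisements from this VM
Set-VMNetworkAdapter -VMName "TenantVM" -DhcpGuard On -RouterGuard On

# Port ACL: drop all traffic between the VM and a remote subnet
Add-VMNetworkAdapterAcl -VMName "TenantVM" `
    -RemoteIPAddress "192.168.100.0/24" -Direction Both -Action Deny

# Place the VM in an isolated PVLAN
Set-VMNetworkAdapterVlan -VMName "TenantVM" `
    -Isolated -PrimaryVlanId 10 -SecondaryVlanId 200
```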
Build Extensions for Capturing, Filtering & Forwarding

2 Platforms for Extensions
• Network Device Interface Specification (NDIS) filter drivers
• Windows Filtering Platform (WFP) callout drivers

Extensions
• NDIS filter drivers
• WFP callout drivers
• Ingress filtering
• Destination lookup and forwarding
• Egress filtering

Many Key Features
• Extension monitoring & uniqueness
• Extensions that learn the VM life cycle
• Extensions that can veto state changes
• Multiple extensions on the same switch

Several Partner Solutions Available
• Cisco – Nexus 1000V & UCS-VMFEX
• NEC – ProgrammableFlow PF1000
• 5nine – Security Manager
• InMon – sFlow
| Capability | Windows Server 2012 R2 Hyper-V | vSphere Hypervisor | vSphere 5.5 Enterprise Plus |
|---|---|---|---|
| Extensible Network Switch | Yes | No | Replaceable |
| Confirmed Partner Solutions | 4 | N/A | 2 |
| Private Virtual LAN (PVLAN) | Yes | No | Yes¹ |
| ARP/ND Protection | Yes | No | vCloud/Partner² |
| DHCP Snooping Protection | Yes | No | vCloud/Partner² |
| Virtual Port ACLs | Yes | No | vCloud/Partner² |
| Trunk Mode to Virtual Machines | Yes | No | Yes³ |
| Port Monitoring | Yes | No | Yes³ |
| Port Mirroring | Yes | No | Yes³ |

1. The vSphere Distributed Switch (required for PVLAN capability) is available only in the Enterprise Plus edition of vSphere 5.5, and is replaceable rather than extensible.
2. ARP spoofing protection, DHCP snooping protection & virtual port ACLs require the vCloud Networking & Security package (part of the vCloud Suite) or a partner solution, all of which are additional purchases.
3. Trunking VLANs to individual vNICs, and port monitoring and mirroring at a granular level, require the vSphere Distributed Switch, which is available in vSphere 5.5 Enterprise Plus only.
vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/products/cisco-nexus-1000V/overview.html, http://www-03.ibm.com/systems/networking/switches/virtual/dvs5000v/, http://www.vmware.com/technicalresources/virtualization-topics/virtual-networking/distributed-virtual-switches.html, http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/data_sheet_c78-492971.html, http://www.vmware.com/products/vcloud-network-security
Integrated with NIC hardware for increased performance

• SR-IOV is a standard that allows PCI Express devices to be shared by multiple VMs
• More direct hardware path for I/O
• Reduces network latency and the CPU utilization needed to process traffic, and increases throughput
• SR-IOV-capable physical NICs contain virtual functions that are securely mapped to a VM
• This bypasses the Hyper-V Extensible Switch
• Full support for Live Migration

[Diagram: the VM network stack can use either a synthetic NIC through the Hyper-V Extensible Switch, or a virtual function mapped directly into the VM.]
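SR-IOV must be enabled when the virtual switch is created; it cannot be toggled on afterwards. A sketch with illustrative adapter and VM names:

```powershell
# Create an IOV-enabled external switch on an SR-IOV capable NIC
New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "10GbE-1" -EnableIov $true

# Give the VM's adapter an IOV weight so a virtual function is assigned to it
Set-VMNetworkAdapter -VMName "VM01" -IovWeight 100

# Verify whether a virtual function was actually allocated
Get-VMNetworkAdapter -VMName "VM01" | Format-List VMName, IovWeight, Status
```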
| Capability | Windows Server 2012 R2 Hyper-V | vSphere Hypervisor | vSphere 5.5 Enterprise Plus |
|---|---|---|---|
| Dynamic Virtual Machine Queue | Yes | NetQueue | NetQueue¹ |
| IPsec Task Offload | Yes | No | No |
| SR-IOV with Live Migration | Yes | No | No² |
| Virtual Receive Side Scaling | Yes | Yes (VMXNet3) | Yes (VMXNet3) |

1. VMware vSphere and the vSphere Hypervisor support VMq only (NetQueue).
2. VMware's SR-IOV implementation does not support vMotion, HA, or Fault Tolerance. SR-IOV also requires the vSphere Distributed Switch, meaning customers have to upgrade to the highest vSphere edition to take advantage of this capability. No such restrictions are imposed when using SR-IOV in Hyper-V, ensuring customers can combine the highest levels of performance with the flexibility they need for an agile infrastructure.
vSphere Hypervisor / vSphere 5.x Ent+ Information: http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.5.pdf
Network Isolation & Flexibility without VLAN Complexity
• Secure isolation for traffic segregation, without VLANs
• VM migration flexibility
• Seamless integration
Key Concepts

• Provider Address – unique IP addresses routable on the physical network
• VM Networks – the boundary of isolation between different sets of VMs
• Customer Address – VM guest OS IP addresses within the VM networks
• Policy Table – maintains the relationship between the different addresses & networks

| Network/VSID | Provider Address | Customer Address |
|---|---|---|
| Red (6001) | 192.168.2.13 | 10.10.10.10 |
| Red (6001) | 192.168.2.14 | 10.10.10.11 |
| Red (6001) | 192.168.2.12 | 10.10.10.12 |
Network Isolation & Flexibility without VLAN Complexity

• Network Virtualization using Generic Route Encapsulation (NVGRE) uses encapsulation & tunneling
• Standard proposed by Microsoft, Intel, Arista Networks, HP, Dell & Emulex
• VM traffic within the same VSID is routable over different physical subnets
• The VM's packet is encapsulated for transmission over the physical network
• Network Virtualization is part of the Hyper-V Extensible Switch

[Diagram: a packet from 10.10.10.10 to 10.10.10.11 is wrapped with an outer MAC/IP header and a GRE key carrying the virtual subnet ID.]
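Outside of System Center VMM, the policy table rows shown above correspond to NetVirtualization records configured on each host. A sketch for one row of the Red network (the interface index, MAC address, and VM name are placeholders, and a complete setup also needs customer routes):

```powershell
# Register the host's provider address on the physical NIC
New-NetVirtualizationProviderAddress -InterfaceIndex 12 `
    -ProviderAddress "192.168.2.13" -PrefixLength 24

# Map customer address 10.10.10.10 to that provider address in VSID 6001
New-NetVirtualizationLookupRecord -CustomerAddress "10.10.10.10" `
    -ProviderAddress "192.168.2.13" -VirtualSubnetID 6001 `
    -MACAddress "005056000001" -Rule "TranslationMethodEncap" -VMName "RedVM1"
```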
Bridge Between VM Networks & Physical Networks
• Multi-tenant VPN gateway built in to Windows Server 2012 R2
• Integral multitenant edge gateway for seamless connectivity
• Guest clustering for high availability
• BGP for dynamic route updates
• Encapsulates & de-encapsulates NVGRE packets
• Multitenant-aware NAT for Internet access
VMware network virtualization: http://searchsdn.techtarget.com/news/2240207383/NSX-available-but-price-ofVMware-network-virtualization-stays-secret
Easter Eggs
Industry firsts
• The first hypervisor with Shared Nothing Live Migration
• The first hypervisor with built-in replication (Hyper-V Replica)
• The first virtualization platform with built-in network virtualization
• The first virtualization platform with built-in storage virtualization
• The first virtualization platform with SMB3 support
• The first and only virtualization platform to include integrated storage encryption

Microsoft is leading
• The first and only virtualization platform to support 64 nodes/8,000 VMs in a cluster
• The only hypervisor with SR-IOV networking and Live Migration
• The only hypervisor with true virtual fibre channel within the VM
• The only hypervisor with production-ready Shared Virtual Disks
• The only hypervisor with Live Migration via SMB Direct and RDMA
• The only hypervisor with Generation 2 virtual machines
• Industry-leading performance
Microsoft Cloud OS Vision
Public Cloud | Service Providers | Private Cloud
Azure Virtual Machines · Windows Azure Pack
Consistent Platform: DEVELOPMENT · MANAGEMENT · DATA · IDENTITY · VIRTUALIZATION
Related sessions:
• Windows Server 2012 R2 and System Center 2012 R2 (DCIM-B349)
• Cloud Optimized Networking in Windows Server 2012 R2 (DCIM-B315)
• Using VMware? The Advantages of Microsoft Cloud Fundamentals with Virtualization (DCIM-B379)
• Windows PowerShell Unplugged with Jeffrey Snover (DCIM-B318)
• Introduction to Windows Azure Automation (DCIM-B347)
• Configuring Networking with Microsoft System Center 2012 R2 Virtual Machine Manager (DCIM-IL300)
• Introducing Windows Azure Pack (DCIM-IL304)
Evaluation resources:
• Windows Server 2012 R2: http://technet.microsoft.com/en-US/evalcenter/dn205286
• System Center 2012 R2: http://technet.microsoft.com/en-US/evalcenter/dn205295
• Azure Pack: http://www.microsoft.com/en-us/servercloud/products/windows-azure-pack
• Microsoft Azure: http://azure.microsoft.com/en-us/
Come Visit Us in the Microsoft Solutions Experience!
Look for Datacenter and Infrastructure Management
TechExpo Level 1 Hall CD
http://channel9.msdn.com/Events/TechEd
www.microsoft.com/learning
http://microsoft.com/technet
http://microsoft.com/msdn