http://smb3.info
• Describe the basics of the Hyper-V over SMB scenario, including the main reasons to implement it.
• Enumerate the most common performance bottlenecks in Hyper-V over SMB configurations.
• Outline a few Hyper-V over SMB configurations that can provide continuous availability, including details on networking and storage.
Hyper-V over SMB - Overview
Basic Configurations
Performance Considerations
Sample Configurations
What is it?
• Store Hyper-V files in shares over the SMB 3.0 protocol
(including VM configuration, VHD files, snapshots)
• Works with both standalone and clustered servers
(file storage used as cluster shared storage)
Highlights
• Increases flexibility
• Eases provisioning, management and migration
• Leverages converged network
• Reduces CapEx and OpEx
Supporting Features
• SMB Transparent Failover - Continuous availability
• SMB Scale-Out – Active/Active file server clusters
• SMB Direct (SMB over RDMA) - Low latency, low CPU use
• SMB Multichannel – Network throughput and failover
• SMB Encryption - Security
• VSS for SMB File Shares - Backup and restore
• SMB PowerShell - Manageability
SMB Transparent Failover
• Failover transparent to the server application
• Zero downtime - only a small IO delay during failover
• Supports planned and unplanned failovers:
  • Hardware/software maintenance
  • Hardware/software failures
  • Load rebalancing
• Resilient for both file and directory operations
Failover sequence (Hyper-V host connected to \\fs\share):
1. Normal operation
2. Failover of the share - connections and handles lost, temporary stall of IO
3. Connections and handles auto-recovered; application IO continues with no errors
Requires:
• File servers configured as a Windows Failover Cluster
• Windows Server 2012 on both the servers running the application and the file server cluster nodes
• Shares enabled for "continuous availability" (default configuration for clustered file shares)
• Works for both classic file server clusters (cluster disks) and scale-out file server clusters (CSV)
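The three-step recovery sequence above can be sketched as a client-side retry loop. This is an illustrative Python model, not the actual SMB client implementation; all names (`ShareUnavailable`, `write_with_transparent_failover`) are invented:

```python
import time

class ShareUnavailable(Exception):
    """Raised while the share's connections and handles are being moved."""

def write_with_transparent_failover(io_fn, timeout_s=60.0, retry_delay_s=0.5):
    """Retry an IO until the failed-over share is reachable again.

    Mirrors the sequence above: the application sees no error, only a
    temporary stall while connections and handles are auto-recovered.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        try:
            return io_fn()              # 1. normal operation
        except ShareUnavailable:        # 2. failover: handles lost, IO stalls
            if time.monotonic() >= deadline:
                raise                   # give up only after the timeout
            time.sleep(retry_delay_s)   # 3. wait for auto-recovery, then retry

# Simulated share that recovers on the third attempt
attempts = {"n": 0}
def flaky_write():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ShareUnavailable()
    return "ok"

result = write_with_transparent_failover(flaky_write, retry_delay_s=0.01)
```

The point of the sketch: the stall is bounded by a timeout, but within that window the caller never observes an error, which is what "zero downtime, small IO delay" means in practice.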
SMB Scale-Out
Targeted for server application storage
• Example: Hyper-V and SQL Server
• Increase available bandwidth by adding nodes
• Leverages Cluster Shared Volumes (CSV)
Key capabilities:
• Active/Active file shares
• Fault tolerance with zero downtime
• Fast failure recovery
• CHKDSK with zero downtime
• Support for app-consistent snapshots
• Support for RDMA-enabled networks
• Optimization for server apps
• Simple management
SMB Direct (SMB over RDMA)
Advantages
• Scalable, fast and efficient storage access
• High throughput with low latency
• Minimal CPU utilization for I/O processing
• Load balancing, automatic failover and bandwidth aggregation via SMB Multichannel
Scenarios
• High-performance remote file access for application servers like Hyper-V, SQL Server, IIS and HPC
• Used by File Server and Cluster Shared Volumes (CSV) for storage communications within a cluster
Required hardware: RDMA-capable network adapters
SMB Multichannel
Full Throughput
• Bandwidth aggregation with multiple NICs
• Multiple CPU cores engaged when the NIC offers Receive Side Scaling (RSS)
Automatic Failover
• SMB Multichannel implements end-to-end failure detection
• Leverages NIC teaming (LBFO) if present, but does not require it
Automatic Configuration
• SMB detects and uses multiple paths
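The aggregation-plus-failover behavior above can be sketched as a simple channel scheduler. This is a minimal Python model; the class and path names are invented and this is not the real protocol state machine:

```python
from itertools import cycle

class MultichannelSession:
    """Round-robin IO over several network paths: a sketch of SMB
    Multichannel's bandwidth aggregation and failover, not the real protocol."""

    def __init__(self, paths):
        self.paths = list(paths)        # e.g. ["10GbE-1", "10GbE-2"]

    def send(self, n_ios):
        """Spread n_ios IOs across all currently healthy paths."""
        counts = {p: 0 for p in self.paths}
        for _, p in zip(range(n_ios), cycle(self.paths)):
            counts[p] += 1
        return counts

    def path_failed(self, path):
        """End-to-end failure detection: stop using a dead path."""
        self.paths.remove(path)

session = MultichannelSession(["10GbE-1", "10GbE-2"])
both = session.send(10)        # IOs split evenly across both NICs
session.path_failed("10GbE-2")
one = session.send(10)         # all IOs continue on the surviving NIC
```

The design point this illustrates: because failure detection and rerouting happen inside the session, the application above it keeps issuing IO without reconfiguration, with or without NIC teaming underneath.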
SMB Encryption
End-to-end encryption of SMB data in flight
• Protects data from eavesdropping or snooping attacks on untrusted networks
Zero new deployment costs
• No need for IPSec, specialized hardware, or WAN accelerators
Configured per share or for the entire server
Can be turned on for a variety of scenarios where data traverses untrusted networks:
• Application workloads over unsecured networks
• Branch offices over WAN networks
VSS for SMB File Shares
• Application-consistent shadow copies for server application data stored on Windows Server 2012 file shares
• Backup and restore scenarios
• Full integration with the VSS infrastructure
[Diagram: the backup server's shadow-copy request is relayed through the application server to the file server, which creates a shadow copy (\\fs\foo@t1) of the \\fs\foo data volume; the backup then reads from the shadow copy share]
Sample Configurations
A. Single-node File Server
• Lowest cost for shared storage
• Shares not continuously available
B. Dual-node File Server
• Low cost for continuously available shared storage
• Limited scalability (up to a few hundred disks)
C. Multi-node File Server
• Highest scalability (up to thousands of disks)
• Higher cost, but still lower than connecting all Hyper-V hosts with FC
[Diagrams: each configuration hosts VM configuration files and VHDs on shares (Share1-Share4) backed by the file server's disks]
Network options (clients to file servers):
• 1GbE networks
• Mixed 1GbE/10GbE networks
• 10GbE or InfiniBand networks
• Full permissions on NTFS folder and SMB share for:
  • Hyper-V Administrator
  • Computer Account of Hyper-V hosts
  • If Hyper-V is clustered, the Hyper-V Cluster Account (CNO)
1. Create Folder
2. Create Share
3. Apply Share permissions to NTFS Folder permissions
• Hyper-V supports SMB version 3.0 only
• Virtual Machine Manager 2012 SP1 supports Hyper-V over SMB
• Remote Management
• Continuously Available shares are recommended
• Active Directory is required
• Loopback configurations are not supported
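Using the SMB PowerShell cmdlets mentioned earlier, the three provisioning steps above would look roughly like this. The fragment is illustrative, not from the deck: the path `X:\VMS1` and the `contoso\HVAdmin`, `HV1$`, `HV2$` and `HVC$` accounts are placeholders for the Hyper-V administrator, the host computer accounts, and the cluster account (CNO).

```powershell
# Illustrative fragment - all server, path, share and account names are placeholders
# 1. Create Folder
MD X:\VMS1
# 2. Create Share, granting full access to the Hyper-V administrator,
#    the Hyper-V host computer accounts and the cluster account (CNO)
New-SmbShare -Name VMS1 -Path X:\VMS1 `
    -FullAccess contoso\HVAdmin, contoso\HV1$, contoso\HV2$, contoso\HVC$
# 3. Apply Share permissions to NTFS Folder permissions
(Get-SmbShare -Name VMS1).PresetPathAcl | Set-Acl
```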
[Diagrams: Hyper-V host (VMs, vDisk, SMB 3.0 client) connected to a file server (SMB 3.0 server, file shares, Storage Spaces, dual SAS HBAs) with redundant SAS modules and shared disks]
[Diagram: end-to-end throughput example - the same Hyper-V host / file server stack delivering ~4.4 GB/sec over the network (2 x 10GbE x 2) and 8.8 GB/sec over storage (2 x 6Gb SAS x4 x 2)]
NIC Throughput:
• 1Gb Ethernet: ~0.1 GB/sec
• 10Gb Ethernet: ~1.1 GB/sec
• 40Gb Ethernet: ~4.5 GB/sec
• 32Gb InfiniBand (QDR): ~3.8 GB/sec
• 56Gb InfiniBand (FDR): ~6.5 GB/sec
HBA Throughput:
• 3Gb SAS x4: ~1.1 GB/sec
• 6Gb SAS x4: ~2.2 GB/sec
• 4Gb FC: ~0.4 GB/sec
• 8Gb FC: ~0.8 GB/sec
• 16Gb FC: ~1.5 GB/sec
Memory Throughput:
• DDR2-400 (PC2-3200): ~3.4 GB/sec
• DDR2-667 (PC2-5300): ~5.7 GB/sec
• DDR2-1066 (PC2-8500): ~9.1 GB/sec
• DDR3-800 (PC3-6400): ~6.8 GB/sec
• DDR3-1333 (PC3-10600): ~11.4 GB/sec
• DDR3-1600 (PC3-12800): ~13.7 GB/sec
• DDR3-2133 (PC3-17000): ~18.3 GB/sec
Bus Slot Throughput:
• PCIe Gen2 x4: ~1.7 GB/sec
• PCIe Gen2 x8: ~3.4 GB/sec
• PCIe Gen2 x16: ~6.8 GB/sec
• PCIe Gen3 x4: ~3.3 GB/sec
• PCIe Gen3 x8: ~6.7 GB/sec
• PCIe Gen3 x16: ~13.5 GB/sec
Intel QPI Throughput:
• 4.8 GT/s: ~9.8 GB/sec
• 5.86 GT/s: ~12.0 GB/sec
• 6.4 GT/s: ~13.0 GB/sec
• 7.2 GT/s: ~14.7 GB/sec
• 8.0 GT/s: ~16.4 GB/sec
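A quick way to read these tables: an end-to-end configuration is limited by its slowest stage. A small Python sketch using a few of the values above (the `path_limit` helper and the component selection are mine, not from the deck):

```python
# Approximate per-component throughputs from the tables above, in GB/sec
THROUGHPUT = {
    "10GbE NIC": 1.1,
    "6Gb SAS x4 HBA": 2.2,
    "PCIe Gen2 x8 slot": 3.4,
    "DDR3-1333 memory": 11.4,
}

def path_limit(components):
    """End-to-end limit of a data path: the minimum across its stages.
    A stage may have several units in parallel (e.g. four NICs)."""
    totals = {}
    for name, count in components:
        totals[name] = totals.get(name, 0) + count * THROUGHPUT[name]
    bottleneck = min(totals, key=totals.get)
    return bottleneck, round(totals[bottleneck], 1)

# Four 10GbE NICs against four 6Gb SAS x4 HBAs: the network is the
# bottleneck at ~4.4 GB/sec, matching the throughput diagram above
name, limit = path_limit([("10GbE NIC", 4), ("6Gb SAS x4 HBA", 4)])
```

This is why the sample configurations pair multiple NICs (or RDMA-capable ones) with multiple SAS paths: raising only one stage moves the bottleneck, it does not remove it.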
[Diagram: Hyper-V host (VMs, vDisk, SMB 3.0 client) connected to a file server (SMB 3.0 server, file shares, Storage Spaces) whose six SAS HBAs attach over SAS to six JBODs of SSDs]
Workload results (BW in GB/sec, IOPS in IOs/sec, %CPU is privileged time):
• Physical host, 512KB IOs, 100% read, 2t, 12o: ~16.8 GB/sec, ~32K IOPS, ~16% CPU
• Physical host, 32KB IOs, 100% read, 8t, 4o: ~10.9 GB/sec, ~334K IOPS, ~52% CPU
• 12 VMs, 4VP, 512KB IOs, 100% read, 2t, 16o: ~16.8 GB/sec, ~32K IOPS, ~12% CPU
• 12 VMs, 4VP, 32KB IOs, 100% read, 4t, 32o: ~10.7 GB/sec, ~328K IOPS, ~62% CPU
Test configuration (client / server):
• CPU: 2 sockets, 8 cores total, 2.26 GHz
• Memory: 24 GB RAM
• Network: 1 x 1GbE NIC (onboard)
• Storage adapter: N/A on the client; 1 FC adapter with 2 x 4Gbps links on the server
• Disks: N/A on the client; 24 x 10Krpm HDD on the server (20 used for data, 2 used for log)
See also: SDC 2012 presentation, white paper by ESG
VM scaling, local vs. remote IOPS:
• 1 VM: 900 local, 850 remote (94.4% of local)
• 2 VMs: 1,750 local, 1,700 remote (97.1%)
• 4 VMs: 3,500 local, 3,350 remote (95.7%)
• 6 VMs: 5,850 local, 5,600 remote (95.7%)
• 8 VMs: 7,000 local, 6,850 remote (97.9%)
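The Remote/Local percentage is simply the ratio of the two IOPS columns; recomputing it from the table confirms remote SMB storage stays within a few percent of local. A small Python check (the helper name is mine):

```python
# (VMs, local IOPS, remote IOPS) taken from the table above
ROWS = [(1, 900, 850), (2, 1750, 1700), (4, 3500, 3350),
        (6, 5850, 5600), (8, 7000, 6850)]

def remote_efficiency(local, remote):
    """Remote IOPS as a percentage of local IOPS, to one decimal place."""
    return round(100.0 * remote / local, 1)

ratios = {vms: remote_efficiency(local, remote) for vms, local, remote in ROWS}
# every configuration keeps remote within ~6% of local
```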
SMB 3.0 + RDMA (InfiniBand FDR)
Configurations (BW in MB/sec, IOPS in 512KB IOs/sec, %CPU is privileged time):
• Non-RDMA (Ethernet, 10Gbps): 1,129 MB/sec, 2,259 IOPS, ~9.8% CPU
• RDMA (InfiniBand QDR, 32Gbps): 3,754 MB/sec, 7,508 IOPS, ~3.5% CPU
• RDMA (InfiniBand FDR, 54Gbps): 5,792 MB/sec, 11,565 IOPS, ~4.8% CPU
• Local: 5,808 MB/sec, 11,616 IOPS, ~6.6% CPU
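As a sanity check on the table, the bandwidth and IOPS columns are consistent: at 512KB per IO, BW should be roughly IOPS x 0.5 MB/sec in every row. A small Python verification (names are mine):

```python
# (configuration, BW in MB/sec, 512KB IOPS) from the table above
ROWS = [
    ("Non-RDMA (Ethernet, 10Gbps)", 1129, 2259),
    ("RDMA (InfiniBand QDR, 32Gbps)", 3754, 7508),
    ("RDMA (InfiniBand FDR, 54Gbps)", 5792, 11565),
    ("Local", 5808, 11616),
]

IO_SIZE_MB = 0.5  # 512KB IOs

def implied_bw(iops):
    """Bandwidth implied by the IO rate: IOPS times IO size."""
    return iops * IO_SIZE_MB

# Reported bandwidth is within ~1% of IOPS x 512KB for every row,
# so the two columns describe the same measurement run
checks = [abs(implied_bw(iops) - bw) / bw < 0.01 for _, bw, iops in ROWS]
```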
[Diagram: file server with four RAID controllers, each attached over SAS to a JBOD of SSDs]
TechEd 2012
Results (BW in MB/sec, IOPS in 512KB IOs/sec, %CPU is privileged time, latency in milliseconds):
• 1 - Local: 10,090 MB/sec, 38,492 IOPS, ~2.5% CPU, ~3 ms
• 2 - Remote: 9,852 MB/sec, 37,584 IOPS, ~5.1% CPU, ~3 ms
• 3 - Remote VM: 10,367 MB/sec, 39,548 IOPS, ~4.6% CPU, ~3 ms
[Diagram: file server with a RAID controller and five SAS HBAs attached over SAS to six JBODs of SSDs]
TechNet Radio
Workload results (BW in MB/sec, IOPS in IOs/sec, %CPU is privileged time, latency in milliseconds):
• 512KB IOs, 100% read, 2t, 8o: 16,778 MB/sec, 32,002 IOPS, ~11% CPU, ~2 ms
• 8KB IOs, 100% read, 16t, 2o: 4,027 MB/sec, 491,665 IOPS, ~65% CPU, <1 ms
partner team blog post
TechEd 2012 demo
TechEd 2012
Windows Server 2012 Virtual Labs
[Diagram: Failover Cluster 1 and Failover Cluster 2 connected through 10GbE Switch 3a and 10GbE Switch 3b; configuration by Philip Moss]