Microsoft Software Defined Storage (SDS): breadth offering and unique opportunity
Customer choice on ONE consistent Microsoft platform, spanning service providers and:
• Private Cloud with Partner Storage: SAN and NAS storage
• Private Cloud with Microsoft SDS: Windows SMB3 Scale-Out File Server (SoFS) + Storage Spaces
• Hybrid Cloud Storage: StorSimple + Microsoft Azure Storage
• Public Cloud Storage: Microsoft Azure Storage
[Diagram: Hyper-V cluster connected over SMB to a Scale-Out File Server cluster built on Storage Spaces and shared JBOD storage]
What is it?
• New file storage solution for server applications
• Scenarios include Hyper-V virtual machines and SQL Server databases
Highlights
• Enterprise-grade storage: scalable, reliable, continuously available
• Easy provisioning and management, using familiar Microsoft tools
• Leverages the latest networking technologies (converged Ethernet, RDMA)
• Increased flexibility, including live migration and multi-cluster deployments
• Reduces capital and operational expenses
Supporting Features
• SMB Transparent Failover - Continuous availability if a node fails
• SMB Scale-Out – Active/Active file server clusters, automatically balanced
• SMB Direct (SMB over RDMA) - Low latency, high throughput, low CPU use
• SMB Multichannel – Increased network throughput and fault tolerance
• SMB Encryption – Secure data transmission without costly PKI infrastructure
• VSS for SMB File Shares - Backup and restore using existing VSS framework
• SMB PowerShell, VMM Support – Manageability and System Center support
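As an illustration of the SMB PowerShell manageability called out above, here is a minimal sketch of creating a continuously available share for Hyper-V and requiring SMB Encryption on it; the share name, path, and accounts are hypothetical.

# Continuously available share for Hyper-V VMs (SMB Transparent Failover);
# path and account names below are examples only
New-SmbShare -Name "VMS1" -Path "C:\ClusterStorage\Volume1\VMS" `
    -FullAccess 'CONTOSO\HV1$', 'CONTOSO\HV2$', 'CONTOSO\HyperVAdmins' `
    -ContinuouslyAvailable $true

# Optionally require SMB Encryption for this share
Set-SmbShare -Name "VMS1" -EncryptData $true -Force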
Full Throughput
• Bandwidth aggregation with multiple NICs
• Multiple CPU cores engaged when using Receive Side Scaling (RSS)
Automatic Failover
• SMB Multichannel implements end-to-end failure detection
• Leverages NIC teaming if present, but does not require it
Automatic Configuration
• SMB detects and uses multiple network paths
Sample Configurations
[Diagram: SMB client to SMB server paths using a single RSS-capable NIC, multiple NICs, a team of NICs, or multiple RDMA NICs, through 1GbE, 10GbE, or 10GbE/InfiniBand switches]
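SMB Multichannel is enabled by default and configures itself, as described above; the sketch below shows how one might verify which interfaces it selected and, purely for testing, turn it off and back on at the client.

# Show which network interfaces SMB Multichannel selected for active connections
Get-SmbMultichannelConnection

# Show client-side interfaces with their RSS and RDMA capabilities
Get-SmbClientNetworkInterface

# For testing only: disable, then re-enable, Multichannel on the SMB client
Set-SmbClientConfiguration -EnableMultiChannel $false -Force
Set-SmbClientConfiguration -EnableMultiChannel $true -Force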
[Diagram: single 10GbE NIC and dual 10GbE NIC configurations with RSS, showing CPU utilization per core (Core 1 through Core 4) on the SMB client and SMB server]
1 session, without Multichannel
• No automatic failover
• Can't use full bandwidth
• Only one NIC engaged
• Only one CPU core engaged
[Diagram: SMB Client 1 and SMB Client 2 each connected to SMB Server 1 and SMB Server 2 through dual 10GbE NICs and switches, with only one path in use]
1 session, without Multichannel vs. 1 session, with Multichannel
• Without Multichannel: no automatic failover, can't use full bandwidth, only one NIC engaged, only one CPU core engaged
[Diagram: with Multichannel, SMB Client 1 and SMB Client 2 use all 10GbE NICs and switches to SMB Server 1 and SMB Server 2, engaging multiple NICs and CPU cores]
Windows Server 2012 Developer Preview results using four 10GbE NICs
SMB Client Interface Scaling: throughput (MB/sec) by I/O size, for 1 to 4 x 10GbE NICs
• Linear bandwidth scaling: 1 NIC: 1150 MB/sec; 2 NICs: 2330 MB/sec; 3 NICs: 3320 MB/sec; 4 NICs: 4300 MB/sec
• Leverages NIC support for RSS (Receive Side Scaling)
• Bandwidth for small I/Os is bottlenecked on CPU
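To relate these scaling numbers to a specific deployment, it helps to confirm that RSS is actually available and enabled on the NICs; a short sketch using in-box cmdlets (the adapter names are examples):

# Check RSS support and state on the physical adapters (names are examples)
Get-NetAdapterRss -Name "Ethernet*" | Format-Table Name, Enabled

# Enable RSS on an adapter where it is turned off
Enable-NetAdapterRss -Name "Ethernet 2"

# The SMB client's view of each interface, including RSS capability
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RssCapable, RdmaCapable, Speed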
1 session, with NIC Teaming, no Multichannel
• Automatic NIC failover
• Can't use full bandwidth
• Only one NIC engaged
• Only one CPU core engaged
[Diagram: SMB clients and servers connected through NIC teams of dual 10GbE or dual 1GbE NICs and switches, with only one team member in use]
1 session, with NIC Teaming, no Multichannel vs. 1 session, with NIC Teaming and Multichannel
• With teaming alone: automatic NIC failover, but can't use full bandwidth, only one NIC engaged, only one CPU core engaged
[Diagram: adding Multichannel on top of the NIC team engages both team members in the 10GbE and 1GbE configurations]
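The NIC teaming shown in these configurations is the in-box Windows Server teaming, which can be set up with the NetLbfo cmdlets; a minimal sketch with hypothetical team and adapter names (SMB Multichannel then runs over the team, or over the individual NICs if no team is present):

# Create a switch-independent team from two 10GbE adapters (names are examples)
New-NetLbfoTeam -Name "Team10GbE" -TeamMembers "NIC1", "NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false

# Verify team and member state
Get-NetLbfoTeam
Get-NetLbfoTeamMember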
1 session, without Multichannel (RDMA NICs)
• No automatic failover
• Can't use full bandwidth
• Only one NIC engaged
• RDMA capability not used
[Diagram: SMB clients and servers with dual 54Gb InfiniBand or dual 10GbE RDMA NICs (R-NICs) and switches, with only one path in use]
1 session, without Multichannel vs. 1 session, with Multichannel (RDMA NICs)
• Without Multichannel: no automatic failover, can't use full bandwidth, only one NIC engaged, RDMA capability not used
[Diagram: with Multichannel, both R-NICs (54Gb InfiniBand or 10GbE) are engaged between SMB clients and SMB servers]
[Diagram: SMB Direct architecture: on both client and file server, the SMB client/server components sit above SMB Direct and NDKPI, which drive RDMA NICs connected over Ethernet or InfiniBand, transferring data directly between memory on both sides]
[Diagram: file client and file server stacks: application in user mode; SMB client, and on the server side SMB server, NTFS, and SCSI in kernel mode; R-NICs on a network with RDMA support; disk at the bottom]
How SMB Direct works:
1. The application (Hyper-V, SQL Server) does not need to change; it uses an unchanged API.
2. The SMB client makes the decision to use SMB Direct at run time.
3. NDKPI provides a much thinner layer than TCP/IP.
4. Remote Direct Memory Access is performed by the network interfaces.
[Diagram: client and file server stacks showing SMB client/server over both TCP/IP with a regular NIC and SMB Direct over NDKPI with an RDMA NIC, connected via Ethernet and/or InfiniBand, with memory-to-memory transfers]
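Because the SMB client decides at run time whether SMB Direct can be used, it is worth confirming that RDMA is enabled end to end; a short sketch (the adapter name is an example):

# Check whether RDMA is enabled on the network adapters
Get-NetAdapterRdma

# Enable RDMA on a specific adapter if it was disabled (name is an example)
Enable-NetAdapterRdma -Name "Ethernet 4"

# Confirm the SMB client and server each see RDMA-capable interfaces
Get-SmbClientNetworkInterface | Where-Object RdmaCapable
Get-SmbServerNetworkInterface | Where-Object RdmaCapable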
Type (Cards*) / Pros / Cons
• Non-RDMA Ethernet (wide variety of NICs)
  Pros: TCP/IP-based protocol; works with any Ethernet switch; wide variety of vendors and models; support for in-box NIC teaming
  Cons: high CPU utilization under load; high latency
• iWARP (Intel NE020, Chelsio T4/T5*)
  Pros: low CPU utilization under load; low latency; TCP/IP-based protocol; works with any Ethernet switch; RDMA traffic routable; offers up to 40Gbps per NIC port today*
  Cons: requires enabling firewall rules; dual 40GbE port performance limited by PCI slot*
• RoCE (Mellanox ConnectX-3, Mellanox ConnectX-3 Pro*)
  Pros: Ethernet-based protocol; works with Ethernet switches; offers up to 40Gbps per NIC port today*
  Cons: RDMA not routable via existing IP infrastructure (routable RoCE is in the works); requires DCB switch with Priority Flow Control (PFC); dual 40GbE port performance limited by PCI slot*
• InfiniBand (Mellanox ConnectX-3, Mellanox ConnectX-3 Pro*, Mellanox Connect-IB)
  Pros: switches typically less expensive per port*; switches offer high-speed Ethernet uplinks; commonly used in HPC environments; offers up to 54Gbps per NIC port today*
  Cons: not an Ethernet-based protocol; RDMA traffic not routable via IP infrastructure; requires InfiniBand switches; requires a subnet manager (typically on the switch)
* This is current as of May 2014. Information on this slide is subject to change as technologies evolve and new cards become available.
• InfiniBand and Ethernet solutions
• Up to 54Gb/s InfiniBand or 40Gb/s Ethernet per port
• Single or dual port cards, configured as InfiniBand or Ethernet (RoCE)
• Low latency, low CPU overhead, RDMA
• End-to-end solutions (adapters, switches, cables), including InfiniBand to Ethernet gateways for seamless operation
• Inbox drivers for Windows Server 2012 and Windows Server 2012 R2
For more information:
http://www.mellanox.com/content/pages.php?pg=file_server
http://www.chelsio.com/wpcontent/uploads/2011/07/ProductSelector-0312.pdf
Configuring SMB Direct is similar to configuring SMB for regular NICs:
1. Install hardware and drivers, then verify with Get-NetAdapter and Get-NetAdapterRdma
2. Configure IP addresses, then verify with Get-SmbServerNetworkInterface and Get-SmbClientNetworkInterface
3. Establish an SMB connection, then verify with Get-SmbConnection and Get-SmbMultichannelConnection
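Strung together, those three steps might look like the following sketch; the server and share names are hypothetical.

# 1. Hardware and drivers: list adapters and their RDMA state
Get-NetAdapter
Get-NetAdapterRdma

# 2. IP configuration: interfaces as seen by the SMB server and client
Get-SmbServerNetworkInterface
Get-SmbClientNetworkInterface

# 3. Establish an SMB connection (server and share are examples), then inspect it
dir \\FS01\VMS1
Get-SmbConnection
Get-SmbMultichannelConnection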
# Clear previous configurations
Remove-NetQosTrafficClass
Remove-NetQosPolicy -Confirm:$False
# Enable DCB
Install-WindowsFeature Data-Center-Bridging
# Disable the DCBx setting:
Set-NetQosDcbxSetting -Willing 0
# Create QoS policies and tag each type of traffic with the relevant priority
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
New-NetQosPolicy "DEFAULT" -Default -PriorityValue8021Action 3
New-NetQosPolicy "TCP" -IPProtocolMatchCondition TCP -PriorityValue8021Action 1
New-NetQosPolicy "UDP" -IPProtocolMatchCondition UDP -PriorityValue8021Action 1
# If VLANs are used, mark the egress traffic with the relevant VlanID:
Set-NetAdapterAdvancedProperty -Name <Adapter> -RegistryKeyword "VlanID" -RegistryValue <ID>
# Enable Priority Flow Control (PFC) on a specific priority. Disable for others
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl 0,1,2,4,5,6,7
# Enable QoS on the relevant interface
Enable-NetAdapterQos -InterfaceAlias "Ethernet 4"
# Optionally, limit the bandwidth used by the SMB traffic to 60%
New-NetQoSTrafficClass "SMB" -Priority 3 -Bandwidth 60 -Algorithm ETS
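After applying the settings above, the corresponding Get- cmdlets can confirm what took effect; a brief verification sketch (the interface alias is an example):

# Review the QoS policies, flow control priorities, and traffic classes created above
Get-NetQosPolicy
Get-NetQosFlowControl
Get-NetQosTrafficClass

# Confirm DCB/QoS is enabled on the RDMA-capable interface
Get-NetAdapterQos -Name "Ethernet 4"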
[Diagram: four test configurations: Local (SQLIO on a single server with Fusion-io cards), SMB 3.0 + 10GbE (non-RDMA), SMB 3.0 + RDMA over InfiniBand QDR, and SMB 3.0 + RDMA over InfiniBand FDR, each with the SMB client running SQLIO against an SMB server backed by Fusion-io storage]
Workload: 512KB IOs, 8 threads, 8 outstanding
Configuration | BW (MB/sec) | IOPS (512KB IOs/sec) | %CPU (Privileged)
Non-RDMA (Ethernet, 10Gbps) | 1,129 | 2,259 | ~9.8
RDMA (InfiniBand QDR, 32Gbps) | 3,754 | 7,508 | ~3.5
RDMA (InfiniBand FDR, 54Gbps) | 5,792 | 11,565 | ~4.8
Local | 5,808 | 11,616 | ~6.6
http://smb3.info
Workload: 8KB IOs, 16 threads, 16 outstanding
Configuration | BW (MB/sec) | IOPS (8KB IOs/sec) | %CPU (Privileged)
Non-RDMA (Ethernet, 10Gbps) | 571 | 73,160 | ~21.0
RDMA (InfiniBand QDR, 32Gbps) | 2,620 | 335,446 | ~85.9
RDMA (InfiniBand FDR, 54Gbps) | 2,683 | 343,388 | ~84.7
Local | 4,103 | 525,225 | ~90.4
*** Preliminary *** results from two Intel Romley machines with 2 sockets each, 8 cores/socket
Both client and server using a single port of a Mellanox network interface in a PCIe Gen3 x8 slot
Data goes all the way to persistent storage, using 4 FusionIO ioDrive 2 cards
[Diagram: three test configurations: (1) Local: SQLIO on a single server with multiple RAID controllers attached via SAS to SSD-filled JBODs; (2) Remote: a file client (SMB 3.0) running SQLIO connected through RDMA NICs to a file server (SMB 3.0) with the same RAID controllers and JBODs; (3) Remote VM: SQLIO in a VM on a Hyper-V (SMB 3.0) host connected through RDMA NICs to the file server]
http://smb3.info
Configuration | BW (MB/sec) | IOPS (512KB IOs/sec) | %CPU (Privileged) | Latency (milliseconds)
1 – Local | 10,090 | 38,492 | ~2.5% | ~3 ms
2 – Remote | 9,852 | 37,584 | ~5.1% | ~3 ms
3 – Remote VM | 10,367 | 39,548 | ~4.6% | ~3 ms
[Diagram: file client (SMB 3.0) running SQLIO connected through multiple RDMA NICs to a file server (SMB 3.0) using Storage Spaces over SAS HBAs attached to SSD-filled JBODs]
Workload | BW (MB/sec) | IOPS (IOs/sec) | %CPU (Privileged) | Latency (milliseconds)
512KB IOs, 100% read, 2t, 8o | 16,778 | 32,002 | ~11% | ~2 ms
8KB IOs, 100% read, 16t, 2o | 4,027 | 491,665 | ~65% | < 1 ms
[Diagram: file client (SMB 3.0) running SQLIO connected through RDMA NICs to a file server (SMB 3.0) using Storage Spaces over SAS HBAs and SSD-filled JBODs]
• 8KB random reads from a mirrored space (disk): ~600,000 IOPS
• 8KB random reads from cache (RAM): ~1,000,000 IOPS
• 32KB random reads from a mirrored space (disk): ~500,000 IOPS, ~16.5 GBytes/sec
Hyper-V Servers: two Dell R820 Sandy Bridge servers, each with two Mellanox 56G IB FDR RDMA cards
Mellanox 56G InfiniBand SX6036 switch
2-Node Scale-Out File Server Cluster: two Dell R820 Sandy Bridge servers, each with two Mellanox 56G IB FDR RDMA cards and four LSI 12G SAS HBA 9300-8e adapters
DataOn DNS-1660D JBOD with 60 sTec ZeusIOPS SAS SSDs
http://t.co/HBWoZ94QkT
IOPS Comparison in a Windows Server 2012 R2 Scale-Out File Server (Million IOPS)
• 8K Random Reads @ Queue Depth 64: 1.084 on the SMB server with local CSV vs. 1.078 from Hyper-V VMs on the SMB client (2 clients, 2 VMs per client)
• 8K Random Writes @ Queue Depth 64: 0.789 on the SMB server with local CSV vs. 0.719 from Hyper-V VMs on the SMB client (2 clients, 2 VMs per client)
[Diagram: end-to-end private cloud storage architecture: clients with client NICs reach Hyper-V hosts (VMs with vNICs, Hyper-V virtual switch, NIC teaming, NICs and R-NICs) through a router and switches; management, DHCP, DC/DNS, and file servers sit on a management network; the Hyper-V hosts connect over SMB 3.0 and R-NICs to a two-node file server cluster (File Server 1 and File Server 2) exposing file shares on Storage Spaces, with SAS HBAs connected through SAS modules to SAS JBOD 1, 2, and 3]
Design variables when sizing this architecture:
• Number of clients; speed of client NICs
• Hyper-V hosts; cores per Hyper-V host; RAM per Hyper-V host
• VMs per host; virtual processors per VM; RAM per VM
• NICs per Hyper-V host and their speed; R-NICs per Hyper-V host and their speed
• R-NICs per file server and their speed
• Number of Spaces; columns per space; CSV cache config; tiering config
• SAS HBAs per file server; SAS speed; SAS ports per module
• Disks per JBOD; disk type and speed
[Diagram: the same architecture as above, annotated with these variables]
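Several of these variables (number of spaces, columns per space, tiering, CSV cache) map directly to Storage Spaces and cluster settings; the following is only a minimal sketch with hypothetical pool name, tier sizes, and cache size, not a sizing recommendation.

# Create SSD and HDD tiers in an existing pool (pool name and sizes are examples)
$pool = "Pool1"
$ssd = New-StorageTier -StoragePoolFriendlyName $pool -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName $pool -FriendlyName "HDDTier" -MediaType HDD

# Create a tiered, mirrored space with a fixed column count
New-VirtualDisk -StoragePoolFriendlyName $pool -FriendlyName "Space1" `
    -ResiliencySettingName Mirror -NumberOfColumns 4 `
    -StorageTiers $ssd, $hdd -StorageTierSizes 200GB, 2TB

# Set the cluster-wide CSV block cache (in MB) on Windows Server 2012 R2
(Get-Cluster).BlockCacheSize = 512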
[Diagram: non-RDMA configuration: on the file server, two teamed NICs (NIC1F, NIC2F) feed TCP/IP and the SMB server; on the Hyper-V host, two teamed NICs (NIC1H, NIC2H) feed the Hyper-V virtual switch, with a vNIC for the host's TCP/IP and SMB client and a VM NIC for the VM; traffic crosses Switch 1 and Switch 2]
[Diagram: adding RDMA: the teamed-NIC path remains for TCP/IP traffic, while two RDMA NICs on the file server (NIC3F, NIC4F) and on the Hyper-V host (NIC3H, NIC4H) carry SMB Direct traffic through Switch 3 and Switch 4, bypassing the virtual switch]
Blog: Make sure your network interfaces are RSS-capable
Blog: Use multiple subnets when deploying SMB Multichannel in a cluster
Blog: Minimum version of Mellanox firmware required for running SMB Direct in Windows Server 2012
Blog: How much traffic needs to pass between the SMB Client and Server before Multichannel actually starts?
Blog: Can I use SMB3 storage without RDMA?
Blog: Is it possible to run SMB Direct from within a VM?
http://support.microsoft.com/kb/2883200
Related sessions (Type | Title | Date/Time):
Foundation | FDN06 Transform the Datacenter: Making the Promise of Connected Clouds a Reality | Mon 11:00 AM
Breakout | DCIM-B349 Software-Defined Storage in Windows Server 2012 R2 and System Center 2012 R2 | Wed 8:30 AM
Breakout | DCIM-B354 Failover Clustering: What's New in Windows Server 2012 R2 | Tue 1:30 PM
Breakout | DCIM-B337 File Server Networking for a Private Cloud Storage Infrastructure in WS 2012 R2 | Tue 3:15 PM
Breakout | DCIM-B333 Distributed File System Replication (DFSR) Scalability in Windows Server 2012 R2 | Wed 5:00 PM
Breakout | DCIM-B335 Microsoft Storage Solutions in Production Environments | Tue 1:30 PM
Breakout | DCIM-B364 Step-by-step to Deploying Microsoft SQL Server 2014 with Cluster Shared Volumes | Thu 8:30 AM
Breakout | DCIM-B310 The StorSimple Approach to Solving Issues Related to Growing Data Trends | Mon 3:00 PM
Breakout | DCIM-B357 StorSimple: Enabling Microsoft Azure Cloud Storage for Enterprise Workloads | Wed 1:30 PM
ILL | DCIM-IL200 Build Your Storage Infrastructure with Windows Server 2012 R2 | Wed 8:30 AM
ILL | DCIM-IL200-R Build Your Storage Infrastructure with Windows Server 2012 R2 | Wed 5:00 PM
ILL | DCIM-IL308 Windows Server 2012 R2: Introduction to Failover Clustering with Hyper-V | Mon 3:00 PM
ILL | DCIM-IL308-R Windows Server 2012 R2: Intro to Failover Clustering with Hyper-V | Tue 1:30 PM
HOL | DCIM-H205 Build Your Storage Infrastructure with Windows Server 2012 R2
HOL | DCIM-H321 Windows Server 2012 R2: Introduction to Failover Clustering with Hyper-V
http://blogs.technet.com/b/josebda http://smb3.info
http://blogs.technet.com/b/filecab
http://blogs.msdn.com/b/clustering/
http://blogs.technet.com/b/virtualization/
http://blogs.msdn.com/b/virtual_pc_guy/
For More Information
Windows Server 2012 R2
http://technet.microsoft.com/en-US/evalcenter/dn205286
System Center 2012 R2
http://technet.microsoft.com/en-US/evalcenter/dn205295
Azure Pack
http://www.microsoft.com/en-us/servercloud/products/windows-azure-pack
Microsoft Azure
http://azure.microsoft.com/en-us/
Come Visit Us in the Microsoft Solutions Experience!
Look for Datacenter and Infrastructure Management
TechExpo Level 1 Hall CD
http://channel9.msdn.com/Events/TechEd
www.microsoft.com/learning
http://microsoft.com/technet
http://microsoft.com/msdn