Windows Server 2012 NIC Teaming and Multichannel Solutions

Diagram: a team is built from team members (the physical network adapters) and is exposed to the host through one or more team interfaces, also called team NICs or tNICs.
Diagram: the two teaming modes, a switch-independent team and a switch-dependent team.

Teaming modes (Switch Independent, Switch Dependent) combined with load distribution modes (Address Hash, Hyper-V Port):

Address Hash, Switch Independent: Sends on all active members; receives on one member (the primary member).

Address Hash, Switch Dependent: Sends on all active members; receives on all active members. Inbound traffic may use a different NIC than outbound traffic for a given stream (inbound traffic is distributed by the switch).

Hyper-V Port, Switch Independent: Sends on all active members; receives on all active members. Traffic from the same port is always on the same NIC.

Hyper-V Port, Switch Dependent: All outbound traffic from a port will go on a single NIC. Inbound traffic may be distributed differently depending on what the switch does to distribute traffic.

Switch Independent with Address Hash, in more detail: Sends on all active members using the selected level of address hashing (defaults to 4-tuple hash). Because each IP address can only be associated with a single MAC address for routing purposes, this mode receives inbound traffic on only one member (the primary member).
Best used when:
a) native mode teaming where switch diversity is a concern;
b) Active/Standby mode; or
c) servers running workloads that are heavy outbound, light inbound (e.g., IIS).

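The 4-tuple hash is easiest to picture with a small sketch. The Python example below is a minimal illustration under stated assumptions, not the Windows implementation: pick_member is a hypothetical helper and the use of SHA-256 over the 4-tuple is purely for demonstration. It shows how every packet of a flow lands on the same team member while many flows spread across all members.

    # Minimal sketch, not the Windows implementation: spread outbound flows
    # across team members by hashing the TCP/UDP 4-tuple. pick_member and the
    # use of SHA-256 are illustrative assumptions.
    import hashlib

    def pick_member(src_ip, src_port, dst_ip, dst_port, members):
        # Every packet of a given flow hashes to the same member, so one flow
        # never exceeds a single NIC's bandwidth, while many flows spread out.
        key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
        index = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(members)
        return members[index]

    members = ["NIC1", "NIC2", "NIC3"]
    print(pick_member("192.0.2.10", 49152, "198.51.100.7", 445, members))
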
Switch Independent with Hyper-V Port, in more detail: Sends on all active members using the hashed Hyper-V switch port. Each Hyper-V port will be bandwidth-limited to not more than one team member's bandwidth. Because each VM (Hyper-V port) is associated with a single NIC, this mode receives inbound traffic for the VM on the same NIC it sends on, so all NICs receive inbound traffic. This also allows maximum use of VMQs for better performance overall.
Best used for teaming under the Hyper-V switch when:
- the number of VMs well exceeds the number of team members; and
- restricting a VM to one NIC's bandwidth is acceptable.

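A tiny sketch of the idea, illustrative only (assign_ports is a hypothetical helper, not a Windows API): each Hyper-V switch port, that is, each VM's vNIC, is mapped to exactly one team member, so that VM's outbound and inbound traffic stays on that NIC and is capped at its bandwidth.

    # Minimal sketch, illustrative only: Hyper-V Port mode pins each Hyper-V
    # switch port (a VM's vNIC) to one team member.
    import zlib

    def assign_ports(port_ids, members):
        # Map each Hyper-V switch port to exactly one team member.
        return {port: members[zlib.crc32(port.encode()) % len(members)]
                for port in port_ids}

    print(assign_ports(["VM-A", "VM-B", "VM-C", "VM-D"], ["NIC1", "NIC2"]))
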
Switch Dependent with Address Hash, in more detail: Sends on all active members using the selected level of address hashing (defaults to 4-tuple hash). Receives on all ports; inbound traffic is distributed by the switch, and there is no association between inbound and outbound traffic.
Best used for:
- native teaming where maximum performance is needed and switch diversity is not required; or
- teaming under the Hyper-V switch when an individual VM needs to be able to transmit at rates in excess of what one team member can deliver.

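To make the "no association between inbound and outbound traffic" point concrete, here is a minimal, purely illustrative sketch: the host and the switch each hash the flow independently, so the member chosen for outbound traffic need not match the link the switch uses for inbound traffic. Both hash functions below are stand-ins, not the real algorithms.

    # Minimal sketch, illustrative only: in a switch-dependent team with
    # Address Hash, the host hashes outbound flows while the switch hashes
    # inbound flows with its own algorithm, so the two choices are unrelated.
    import zlib

    members = ["NIC1", "NIC2", "NIC3", "NIC4"]
    flow = ("192.0.2.10", 49152, "198.51.100.7", 445)

    def host_pick(flow):    # host's choice for outbound traffic
        return members[zlib.crc32(repr(flow).encode()) % len(members)]

    def switch_pick(flow):  # switch's choice for inbound traffic on the LAG
        return members[zlib.adler32(repr(flow[::-1]).encode()) % len(members)]

    print("outbound on", host_pick(flow), "- inbound on", switch_pick(flow))
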
Switch Dependent with Hyper-V Port, in more detail: Sends on all active members using the hashed Hyper-V switch port. Each Hyper-V port will be bandwidth-limited to not more than one team member's bandwidth. Receives on all ports; inbound traffic is distributed by the switch, and there is no association between inbound and outbound traffic.
Best used when:
- Hyper-V teaming is used and the number of VMs on the switch well exceeds the number of team members; and
- policy calls for, e.g., LACP teams, and an individual VM does not need to transmit faster than one team member's bandwidth.

Feature compatibility with NIC Teaming:
- RSS: programmed directly by TCP/UDP when bound to TCP/UDP.
- VMQ: programmed directly by the Hyper-V switch when bound to the Hyper-V switch.
- IPsecTO, LSO, jumbo frames, and all checksum offloads (transmit): yes – advertised if all NICs in the team support it.
- RSC and all checksum offloads (receive): yes – advertised if any NICs in the team support it.
- DCB: yes – works independently of NIC Teaming.
- RDMA, TCP Chimney offload: no support through teaming.
- SR-IOV: teaming in the guest allows teaming of VFs.
- Network virtualization: yes.

SMB Multichannel: multiple connections per SMB session.
Full throughput:
- bandwidth aggregation with multiple NICs
- multiple CPU cores engaged when using Receive Side Scaling (RSS)
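
A rough sketch of what "multiple connections per SMB session" buys, conceptual only and not the SMB3 protocol: one session fans its outstanding reads across several TCP connections ("channels"), so the transfer can use more than one NIC and, with RSS, more than one CPU core. The Channel class and the round-robin striping policy are assumptions made for illustration.

    # Conceptual sketch only, not the SMB3 protocol: stripe outstanding reads
    # of one session across several connections to aggregate NIC bandwidth.
    from itertools import cycle

    class Channel:
        def __init__(self, nic):
            self.nic = nic
            self.queued = []            # requests queued on this connection

        def send(self, request):
            self.queued.append(request)

    channels = [Channel("NIC1"), Channel("NIC2")]   # one session, two connections
    rr = cycle(channels)
    for offset in range(0, 8 * 1024 * 1024, 1024 * 1024):   # eight 1 MB reads
        next(rr).send(("READ", offset, 1024 * 1024))

    for ch in channels:
        print(ch.nic, "carries", len(ch.queued), "requests")
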
Sample Configurations

Diagrams (SMB client to SMB server): a single RSS-capable 10GbE NIC; multiple 1GbE NICs; multiple 10GbE NICs in a NIC team; and multiple RDMA NICs (10GbE/IB), each NIC behind its own switch.

- SMB Multichannel implements end-to-end failure detection.
- It leverages NIC Teaming if present, but does not require it.
- Automatic configuration: SMB detects and uses multiple network paths.
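
The automatic-configuration idea can be sketched as a ranking over candidate interfaces. The ordering below (RDMA-capable first, then RSS-capable, then the rest, with faster links first) and the rank_interfaces helper are illustrative assumptions, not the actual SMB client logic.

    # Illustrative assumption, not the actual SMB client logic: rank candidate
    # interfaces by capability and speed before opening channels on them.
    def rank_interfaces(interfaces):
        # interfaces: dicts like {"name": ..., "speed_gbps": ..., "rdma": ..., "rss": ...}
        return sorted(interfaces,
                      key=lambda nic: (nic["rdma"], nic["rss"], nic["speed_gbps"]),
                      reverse=True)

    nics = [
        {"name": "1GbE",       "speed_gbps": 1,  "rdma": False, "rss": False},
        {"name": "10GbE-RSS",  "speed_gbps": 10, "rdma": False, "rss": True},
        {"name": "10GbE-RDMA", "speed_gbps": 10, "rdma": True,  "rss": False},
    ]
    print([n["name"] for n in rank_interfaces(nics)])
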
Diagrams: one SMB session, without Multichannel versus with Multichannel, between SMB clients and servers over RSS-capable 10GbE NICs and switches, with "CPU utilization per core" charts (Core 1 through Core 4) shown for the SMB client and SMB server.

Chart: SMB Client Interface Scaling - Throughput (MB/sec, 0 to 5000) versus I/O size, for 1 x 10GbE, 2 x 10GbE, 3 x 10GbE, and 4 x 10GbE interfaces. Preliminary results based on Windows Server "8" Developer Preview. http://go.microsoft.com/fwlink/p/?LinkId=227841

Diagrams: one SMB session with NIC Teaming but no Multichannel versus with NIC Teaming and Multichannel, between SMB Client 1/2 and SMB Server 1/2, each end using a NIC team of two 10GbE or two 1GbE NICs behind 10GbE or 1GbE switches.

Diagrams: one SMB session, without Multichannel versus with Multichannel, between SMB clients and servers using pairs of RDMA-capable NICs (R-NICs), 54Gb InfiniBand or 10GbE, each behind its own switch.

Diagrams: configurations where SMB Multichannel is not used: single NIC configurations where full bandwidth is already available without Multichannel, and configurations with different NIC types or speeds (for example 10GbE with 1GbE, a 10GbE R-NIC with a 32Gb InfiniBand R-NIC, or 1GbE with wireless).

Summary table: each configuration - Single NIC (no RSS), Single NIC (with RSS), Multiple NICs (no RSS), Multiple NICs (no RSS) + NIC Teaming, Multiple NICs (with RSS), Multiple NICs (with RSS) + NIC Teaming, Single NIC (with RDMA), and Multiple NICs (with RDMA) - rated (▲ or ▲▲) for Throughput, Fault Tolerance for SMB, Fault Tolerance for non-SMB, and Lower CPU utilization.