Architecture & Solutions Group
US Public Sector Advanced Services
Mark Stinnette, CCIE Data Center #39151
Date 28 August 2013 Version 1.2.2
© 2013 Cisco and/or its affiliates. All rights reserved.
This Quick Start Guide (QSG) is a cookbook-style guide to deploying data center technologies, with end-to-end configurations for several commonly deployed architectures.
This presentation provides end-to-end configurations mapped directly to commonly deployed data center architecture topologies. In this cookbook-style quick start guide, configurations are broken down in an animated, step-by-step process into a complete, clean end-to-end configuration based on Cisco best practices and strong recommendations. Each QSG contains scene-setting content, technology component definitions, recommended best practices, and, most importantly, different data center topology scenarios mapped directly to complete end-to-end configurations. This QSG is geared toward network engineers, network operators, and data center architects, allowing them to quickly and effectively deploy these technologies in their data center infrastructure based on proven, commonly deployed designs.
Benefits Overview
Geographically dispersed data centers provide added application resiliency and workload allocation flexibility. To this end, the network must provide Layer 2, Layer 3 and storage connectivity between data centers. Connectivity must be provided without compromising the autonomy of data centers or the stability of the overall network. OTV provides an operationally optimized solution for the extension of Layer 2 connectivity across any transport. OTV is therefore critical to the effective deployment of distributed data centers to support application availability and flexible workload mobility.
OTV is a "MAC address in IP" technique for supporting Layer 2 VPNs to extend LANs over any transport. The transport can be Layer 2 based, Layer 3 based, IP switched, label switched, or anything else as long as it can carry IP packets. By using the principles of MAC routing, OTV provides an overlay that enables Layer 2 connectivity between separate Layer 2 domains while keeping these domains independent and preserving the fault-isolation, resiliency, and load-balancing benefits of an IP based interconnection.
Overlay Transport Virtualization (OTV) provides the following benefits:

• Scalability
  - Extends Layer 2 LANs over any network that supports IP (transport agnostic)
  - Designed to scale across multiple data centers
• Simplicity
  - Supports transparent deployment over an existing network without redesign
  - Requires minimal configuration commands
• Resiliency
  - Preserves existing Layer 3 failure boundaries
  - Includes built-in loop prevention
  - Preserves failure boundaries and site independence (failover isolation between data centers)
• Efficiency
  - Optimizes available bandwidth by using equal-cost multipathing and optimal multicast replication
  - Multipoint connectivity
  - Fast failover
• Virtual Machine Mobility
Benefits Overview
Additional benefits of using OTV for Layer 2 extension:
• No need for Ethernet over Multiprotocol Label Switching (EoMPLS) or Virtual Private LAN Service (VPLS) deployment for Layer 2 extension
• Use of any network transport that supports IP
• Provision of Layer 2 and Layer 3 connectivity using the same dark fiber connections
• Native Spanning Tree Protocol (STP) isolation: no need to explicitly configure Bridge Protocol Data Unit (BPDU) filtering
• Native unknown unicast flooding isolation: unknown unicast traffic is not sent to the overlay
• Address Resolution Protocol (ARP) optimization with the OTV ARP cache
• Simplified provisioning of First Hop Redundancy Protocol (FHRP) isolation
• Simplified addition of sites
Commonly Deployed Designs :: Aggregation Layer

[Diagram legend: L3 & Join Interfaces; L2 Internal Interfaces]

• OTV on a Stick :: most commonly deployed; no network redesign or re-cabling; the join interface connects back through the VDC that has the SVIs on it; separate OTV VDC or appliance switch.
• Inline OTV :: dedicated uplink for DCI; the join interface has a dedicated link out to the DCI transport (Core or WAN edge); separate OTV VDC or appliance switch.
Terminology & Components
[Diagram: Peer-Link; SVI separation on the Aggregation VDC; OTV Overlay interface]
Terminology & Components
[Diagram: Site ID and Site VLAN are deployed on both OTV edge devices. WEST DC: Site ID 1, Site VLAN 99, HSRP active and standby pair, with HSRP filtering on each OTV edge device. EAST DC: Site ID 2, Site VLAN 99, same arrangement.]
Terminology & Components :: Layer 2 & Layer 3 Features
[Diagram: Layer 3 interfaces toward the routed core and toward the OTV join interfaces; OTV join interfaces; OTV internal interfaces (switchport mode trunk); aggregation internal interfaces.]

Aggregation Switch :: Enable PIM
feature pim
Additional Features, Terminology, & Components
Feature :: Overview

• Edge Device :: The OTV edge device performs OTV functions; multiple OTV edge devices can exist at each site. OTV requires the Transport Services (TRS) license. If you create the OTV edge device in a non-default VDC, it also requires the Advanced Services license.
• Internal Interfaces :: Internal interfaces are the site-facing interfaces of the edge device, carrying the VLANs extended through OTV. They are regular Layer 2 interfaces (switchport mode trunk), typically port channels in a vPC. No OTV configuration is required on these interfaces.
• Join Interfaces :: Join interfaces are uplinks of the edge device: Layer 3 point-to-point routed interfaces (physical interface, port channel, or subinterface) used to physically "join" the overlay network. No OTV-specific configuration is required.
• Overlay Interface :: The virtual interface where most of the OTV configuration happens: a logical, multi-access, multicast-capable interface that encapsulates Layer 2 frames in IP unicast or multicast.
• Authoritative Edge Device (AED) :: The AED is responsible for MAC address advertisement for its VLANs and for forwarding its VLANs' traffic inside and outside the site. In OTV multi-homing, the extended VLANs are split across the AEDs (even and odd).
• Site VLAN :: The OTV site VLAN is used to discover OTV neighbor edge devices in the same local site.
• Site Identifier :: Edge devices in the same site must use a common, unique Site ID. The Site ID is included in the control plane; an overlay will not come up until a Site ID is configured, and it should be configured on all local OTV edge devices.
• MTU :: Join interfaces and neighboring core interfaces need an MTU of ≥ 1542 (hard requirement). Best practice is to use the maximum MTU size supported by the transport.
• FHRP Isolation :: Filtering FHRP messages across the OTV overlay makes it possible to provide the same active default gateway in each data center site. Note: in future releases OTV will offer a simple command to enable these filtering capabilities.
• SVI Separation :: OTV currently enforces SVI separation for the VLANs being extended across the OTV link, meaning OTV usually runs in its own VDC for OTV functions while the SVIs live in another (aggregation) VDC.
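The 1542-byte MTU floor above follows from simple arithmetic: OTV encapsulation adds 42 bytes to each frame, so a standard 1500-byte site payload needs at least 1542 bytes on the join interface and transit links. A minimal sketch:

```python
# OTV encapsulation adds 42 bytes to each frame crossing the overlay, so
# the join interface and transit Layer 3 links need MTU >= site MTU + 42.
OTV_OVERHEAD = 42

def required_core_mtu(site_mtu: int) -> int:
    return site_mtu + OTV_OVERHEAD

assert required_core_mtu(1500) == 1542   # the hard minimum cited above
assert required_core_mtu(9174) == 9216   # jumbo MTU 9216 leaves headroom
```

This is also why the configuration slides later raise every Layer 3 interface in the path to MTU 9216 rather than sizing exactly to 1542.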
Additional Features, Terminology, & Components
Feature :: Overview

• OTV Requirements :: Nexus 7000 Series or ASR routers. LAN Advanced Services (VDC) license and Transport Services (OTV/LISP) license. An M-Series line card is required in the OTV VDC for OTV functions.
• Multicast Transport :: Multicast transport (OTV control plane) is ideal for connecting a higher number of sites. OTV neighbor relationships are built over a multicast-enabled core/transport infrastructure. All OTV edge devices can be configured to join a specific ASM (Any Source Multicast) group, where they simultaneously play the roles of receiver and source. Edge devices join a multicast group; adjacencies are maintained over that group, and a single update reaches all neighbors.
• Unicast Transport :: Supported since NX-OS release 5.2. Unicast-only transport (OTV control plane) is ideal for connecting a small number of sites. It requires an adjacency server. Each OTV device needs to create multiple copies of each control plane packet and unicast them to each remote OTV device that is part of the same logical overlay.
• Adjacency Server :: Used in OTV unicast mode; usually enabled on an OTV edge device; can have a primary and a secondary; all other OTV edge (client) devices are configured with the address of the adjacency server. To communicate with all remote OTV devices, each OTV node needs a list of neighbors to which to replicate control packets. Rather than statically configuring that list in each OTV node, a simple dynamic mechanism, the adjacency server, provides this information.
• OTV Extend VLAN :: Enables OTV advertisements for those VLANs. OTV will not forward Layer 2 packets for VLANs that are not in the extended VLAN range for the overlay interface. Assign a VLAN to only one overlay interface.
• OTV Authentication :: OTV supports authentication of Hello messages along with authentication of PDUs.
• Dual Homed OTV Edge Devices :: Leverage vPC or vPC+ for dual-homed OTV edge devices. The AED role, together with the site VLAN, allows multi-homing of OTV edge devices.
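The scalability difference between the multicast and unicast-only control planes described above can be expressed as a per-update copy count. This toy Python function assumes a full mesh of edge devices on one overlay:

```python
# With a multicast control group, one update reaches all neighbors; with
# unicast-only transport, each edge device must send a separate copy of
# every control plane packet to each of the other edge devices.
def control_copies_per_update(num_edges: int, multicast: bool) -> int:
    return 1 if multicast else num_edges - 1

assert control_copies_per_update(10, multicast=True) == 1
assert control_copies_per_update(10, multicast=False) == 9
```

The copy count grows linearly with site count in unicast mode, which is why the deck recommends multicast transport for a higher number of sites.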
Additional Features, Terminology, & Components
Feature :: Overview

• Selective Unicast Flooding :: In 6.2(2), some applications rely on unknown unicast frames, so selective unicast flooding can be enabled per MAC address, per VLAN, to accommodate silent or unidirectional hosts. The OTV default behavior is no unknown unicast forwarding. Command: otv flood mac [xxxx.yyyy.zzzz] vlan [#]
• Dedicated Data Broadcast Forwarding :: In 6.2(2), a dedicated broadcast group is a configurable option, useful for QoS purposes. A dedicated multicast group can be configured for all broadcast transmission in an OTV overlay that uses multicast transport on the underlying network. By default, broadcast and control traffic share the same multicast group address. The broadcast group needs to be configured on all OTV edge devices connected to the OTV overlay network.
• Source Interface with Loopback :: In 6.2(2), logical interfaces can be used as join interfaces; a loopback guarantees the interface is up/up. An OTV edge device can be configured to use a loopback interface as the join interface for an OTV overlay to increase availability. This feature requires the OTV edge device to participate in the core PIM multicast domain to support multiple paths. Prior to this feature, only single-homed Ethernet and port channel interface options were available.
• OTV VLAN Translation :: In 6.2(2), VLAN translation allows OTV to map a local VLAN (in DC 1) to a remote VLAN (in DC 2). In previous NX-OS releases, the extended VLANs had to be identical in each site (i.e., X to X). With the VLAN mapping feature, VLANs can be translated, so they can be different in each site (e.g., X to Y to Z), providing more flexible deployment options. Both multicast- and unicast-enabled IP core networks are supported. VLAN mappings have a one-to-one relationship.
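The one-to-one VLAN mapping described above can be sketched as a simple lookup. The VLAN numbers here are illustrative, not taken from this deck's topology:

```python
# Sketch of the 6.2(2) VLAN translation idea: a one-to-one map between a
# local VLAN ID and the VLAN ID used at the remote site.
vlan_map = {10: 110}  # hypothetical: local VLAN 10 appears as 110 remotely

def translate(local_vlan: int) -> int:
    # one-to-one relationship: unmapped VLANs keep their original ID,
    # which matches the pre-6.2(2) "X to X" behavior
    return vlan_map.get(local_vlan, local_vlan)

assert translate(10) == 110
assert translate(20) == 20
```

Because the relationship is one-to-one, a translated VLAN cannot also be mapped to a second remote VLAN on the same overlay.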
Supported Line Card Topologies :: NX-OS 6.1 and Prior Releases

• OTV VDC must use only M-Series ports for both internal and join interfaces [M1-48, M1-32, M1-08, M2-Series]
• OTV VDC types: M-only
• Aggregation VDC types: M-only, M1-F1, or F2/F2E
Supported Line Card Topologies :: NX-OS 6.2 and Later Releases

• OTV VDC join interfaces must use only M-Series ports [M1-48, M1-32, M1-08, M2-Series]
• OTV VDC internal interfaces can use M-Series, F1, and F2E ports (F1 and F2E must be in Layer 2 proxy mode)
• OTV VDC types: M-only, M1-F1, M1-F2E
• Aggregation VDC types: M-only, M1-F1, M1-F2E, F2, F2E, F2/F2E
Quick Start Guide Assumptions
Physical View – Connectivity Map

OTV Characteristics:
• 2-wide Nexus 7000 Aggregation VDC
• Multi-homed OTV VDC
• Multicast-enabled transport
• Extend VLAN 10
• Layer 3 routed point-to-point interfaces, using OSPF as the routing protocol
• Layer 2 interfaces; the Aggregation VDC connects through vPC to the OTV VDC
Perform Configuration Steps at Both DC Sites (East & West)
Create Aggregation & OTV VDCs
Step 1 :: install / validate licenses
Step 2 :: create the aggregation VDC
Step 3 :: create the OTV VDC (allocate the interfaces to the appropriate VDC role accordingly)

Verify the Nexus 7000 has the proper licenses to support OTV and VDCs: OTV requires the Transport Services license; VDCs require the Advanced Services license.

install license bootflash:///lan_advanced_services_pkg.lic
install license bootflash:///lan_transport_services_pkg.lic
show license usage

[Switch 1 :: Admin / Default VDC]
no vdc combined-hostname
vdc AGG-1
  limit-resource module-type m1 f1 m1xl m2xl
  cpu-share 5
  allocate interface Ethernet [….]
vdc OTV-1
  limit-resource module-type m1 m1xl m2xl
  cpu-share 5
  allocate interface Ethernet [….]

[Switch 2 :: Admin / Default VDC]
Same as Switch 1, with VDC names AGG-2 and OTV-2.
Perform Configuration Steps at Both DC Sites (East & West)
Configure Aggregation VDC :: Layer 2 vPC (Option)
See QSG :: vPC for more details …

[Switch 1]
feature lacp
feature vpc
vlan 10-20
spanning-tree pathcost method long
spanning-tree port type edge bpduguard default
spanning-tree port type edge bpdufilter default
no spanning-tree loopguard default
spanning-tree vlan 10-20 priority 0
spanning-tree pseudo-information
  vlan 10-20 root priority 4096
  vlan 1-10 designated priority 8192
  vlan 11-20 designated priority 16384
vpc domain 1
  role priority 1
  system-priority 4096
  peer-keepalive destination [….] source [….] vrf management
  peer-switch
  peer-gateway
  auto-recovery
  auto-recovery reload-delay
  delay restore 30
  ip arp synchronize
interface port-channel 2
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10-20
  spanning-tree port type network
  vpc peer-link
interface e3/1, e4/1
  channel-group 2 force mode active

[Switch 2]
Same as Switch 1, except: role priority 2; vlan 1-10 designated priority 16384; vlan 11-20 designated priority 8192.
Perform Configuration Steps at Both DC Sites (East & West)
Configure Aggregation VDC :: Layer 2 FabricPath vPC+ (Option)
See QSG :: FabricPath for more details …

[Default / Admin VDC only]
install feature-set fabricpath

[Switch 1]
feature lacp
feature vpc
feature-set fabricpath
fabricpath switch-id 10
vlan 10-20
  mode fabricpath
fabricpath domain default
  root-priority 255
spanning-tree pseudo-information
  vlan 10-20 root priority 0
vpc domain 1
  role priority 1
  system-priority 4096
  peer-keepalive destination [….] source [….] vrf management
  peer-gateway
  auto-recovery
  auto-recovery reload-delay
  delay restore 30
  ip arp synchronize
  fabricpath switch-id 1000
interface port-channel 2
  switchport mode fabricpath
  vpc peer-link
interface e3/1, e4/1
  channel-group 2 force mode active

[Switch 2]
Same as Switch 1, except: fabricpath switch-id 11; root-priority 254; role priority 2.
Perform Configuration Steps at Both DC Sites (East & West)
Configure Aggregation VDC :: Layer 3 Infrastructure
Allocate the following accordingly :: IP addressing, OSPF areas, SVIs & HSRP groups

[Switch 1]
feature ospf
feature interface-vlan
feature hsrp
vlan 10-20
interface loopback0
  ip address [….]/32
router ospf 1
  router-id [….]
  log-adjacency-changes detail
  auto-cost reference-bandwidth 100Gbps
interface e1/1
  ip address [….]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point
interface e1/10
  ip address [….]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point
interface vlan 10
  ip address 10.10.10.2/24
  no ip redirects
  ip router ospf 1 area 0.0.0.0
  ip ospf passive-interface
  hsrp 1
    preempt
    priority 110
    ip 10.10.10.1

[Switch 2]
Same as Switch 1, except interface vlan 10 uses ip address 10.10.10.3/24 and hsrp 1 omits priority 110 (so Switch 1, with the higher priority, is HSRP active).
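The HSRP behavior configured here (one switch at priority 110, its peer at the default of 100) comes down to a highest-priority election. A toy sketch of that rule, ignoring the real protocol's IP-address tie-breaker:

```python
# Toy HSRP election: the highest-priority router becomes active for the
# group; preemption lets a recovering higher-priority router reclaim the
# active role. The NX-OS default priority is 100.
def hsrp_active(routers: dict) -> str:
    """routers maps router name -> priority; returns the active router."""
    return max(routers, key=routers.get)

assert hsrp_active({"AGG-1": 110, "AGG-2": 100}) == "AGG-1"
assert hsrp_active({"AGG-2": 100}) == "AGG-2"  # AGG-1 down: AGG-2 takes over
```

With preempt configured on both peers, the priority-110 switch always reclaims the active role after it recovers, keeping the gateway placement deterministic.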
Perform Configuration Steps at Both DC Sites (East & West)
Configure OTV :: Layer 2 & Layer 3 Infrastructure @ Aggregation
Step 1 :: configure the L3 link towards the OTV join interface
Step 2 :: configure the L2 vPC towards the OTV internal interfaces

[Both aggregation switches]
feature ospf
feature lacp
feature vpc
interface e1/2
  ip address [….]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point
interface port-channel 10
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10
  vpc 10
interface port-channel 20
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10
  vpc 20
interface e5/1
  channel-group 10 force mode active
interface e6/1
  channel-group 20 force mode active
Perform Configuration Steps at Both DC Sites (East & West)
Configure OTV :: Layer 2 & Layer 3 Infrastructure @ OTV VDC
Step 1 :: configure the OTV join interfaces
Step 2 :: configure the OTV internal interfaces
Step 3 :: create the VLAN to extend

[Both OTV VDCs]
feature ospf
feature lacp
vlan 10
spanning-tree vlan 10 priority 32768
interface loopback0
  ip address [….]/32
router ospf 1
  router-id [….]
  log-adjacency-changes detail
  auto-cost reference-bandwidth 100Gbps
interface e1/9
  ip address [….]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point
interface port-channel 10
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10
interface e2/1, e2/2
  channel-group 10 force mode active
Perform Configuration Steps at Both DC Sites (East & West)
Configure OTV :: Enable Jumbo MTU
Step 1 :: increase MTU on the join interfaces
Step 2 :: increase MTU on all Layer 3 interfaces

[Aggregation VDC, both switches :: add to the interface configuration from the previous steps]
interface e1/1
  mtu 9216
interface e1/2
  mtu 9216
interface e1/10
  mtu 9216

[OTV VDC, both switches]
interface e1/9
  mtu 9216
Perform Configuration Steps at Both DC Sites (East & West)
Configure OTV :: Enable Required Multicast
Step 1 :: enable PIM
Step 2 :: configure PIM sparse mode [AGG VDC] (on all intra- and inter-data-center Layer 3 links)
Step 3 :: configure IGMP v3 [AGG & OTV VDC] (join interfaces only)
Step 4 :: configure the Rendezvous Point (RP)
Step 5 :: configure Source-Specific Multicast (SSM)

[Aggregation VDC, both switches]
feature pim
ip pim rp-address [x.x.x.x] group-list 224.0.0.0/4
ip pim ssm range 232.0.0.0/8
interface e1/1
  ip pim sparse-mode
interface e1/10
  ip pim sparse-mode
interface e1/2
  ip pim sparse-mode
  ip igmp version 3

[OTV VDC, both switches]
interface e1/9
  ip igmp version 3
OTV Characteristics :: 2-wide Nexus 7000 Aggregation VDC; multi-homed OTV VDC; multicast-enabled transport; extend VLAN 10
Finish OTV Configuration :: Overlay, Site-ID, Site-VLAN
Step 1 :: enable the OTV feature
Step 2 :: configure the site-vlan
Step 3 :: enable the site-vlan on the L2 trunks (make the VLAN active)
Step 4 :: configure the site-identifier
Step 5 :: configure the OTV Overlay interface

[West DC, both OTV edge devices :: Site ID 1, Site VLAN 99]
feature otv
vlan 10, 99
otv site-vlan 99
otv site-identifier 0000.0000.0001
interface Overlay 1
  otv join-interface ethernet 1/9
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/24
  otv extend-vlan 10
interface port-channel 10
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10, 99
interface e2/1, e2/2
  channel-group 10 force mode active

[East DC, both OTV edge devices :: Site ID 2, Site VLAN 99]
Same configuration, except otv site-identifier 0000.0000.0002.
NOTES
OTV Configuration
The Layer 2 links are known as internal interfaces and are used by the OTV edge device to learn the MAC addresses of the site and forward Layer 2 traffic across the sites for the extended VLANs.
The Layer 3 link is known as the join interface, which OTV uses to perform IP-based virtualization to send and receive overlay traffic between sites. The IP address of this interface is used to advertise reachability of the MAC addresses present in the site. There is one join interface per OTV overlay; however, if multiple Layer 3 interfaces are present on the OTV edge device, the unicast extended traffic can get routed over any of these links. OTV encapsulates packets into an IP header and sets the Don't Fragment (DF) bit for all OTV control and data packets crossing the transport network. The encapsulation adds 42 bytes to the original IP maximum transmission unit (MTU) size, so it is a best practice to configure the join interface and all Layer 3 interfaces that face the IP core between the OTV edge devices with the maximum MTU size supported by the transport.
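Because the DF bit is set, an encapsulated packet that exceeds a transit link's MTU is dropped rather than fragmented. A toy check of that condition, assuming the 42-byte overhead stated above:

```python
# With the Don't Fragment bit set, an OTV-encapsulated packet larger than
# the transit link MTU is dropped, not fragmented, so the overhead must be
# budgeted into every Layer 3 link along the path.
OTV_OVERHEAD = 42

def transits(frame_len: int, link_mtu: int) -> bool:
    """True if an encapsulated frame of frame_len bytes fits the link."""
    return frame_len + OTV_OVERHEAD <= link_mtu

assert transits(1500, 1542) is True    # fits exactly at the 1542 minimum
assert transits(1501, 1542) is False   # one byte over: dropped (DF set)
```

This is the failure mode behind the jumbo-MTU step earlier: a single under-sized core link silently drops the largest extended frames.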
OTV uses the site VLAN to allow multiple OTV edge devices within the site to talk to each other and determine the AED for the OTV-extended VLANs. It is a best practice to use a dedicated VLAN as the site VLAN. The site VLAN should not be extended and should be carried down to the aggregation layer across the vPC peer link.
The OTV edge device is also configured with the overlay interface, which is associated with the join interface to provide connectivity to the physical transport network. The overlay interface is used by OTV to send and receive Layer 2 frames encapsulated in IP packets. From the perspective of MAC-based forwarding on the site, the overlay interface is simply another bridged interface. However, no Spanning Tree Protocol packets or unknown unicast packets are forwarded over the overlay interface.

Note: the overlay interface does not come up until you configure a multicast group address and the site VLAN has at least one active port on the device.
A VLAN is not advertised on the overlay network; therefore, forwarding cannot occur over the overlay network unless the VLANs are explicitly extended. Once the VLAN is extended, the OTV edge device will begin advertising locally learned MAC addresses on the overlay network.
A key advantage of using multicast is that it allows optimal multicast traffic replication to multiple sites and avoids the head-end replication that leads to suboptimal bandwidth utilization.
When sites are multihomed with OTV edge devices, separation is achieved by electing an authoritative edge device (AED) for each VLAN in the same site (site ID), which is the only device that can forward the traffic for the extended VLAN inside and outside the data center. The extended VLANs are split into odd and even and automatically assigned to the site's edge devices.
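The even/odd split described above can be sketched as follows. This is a simplification: the real AED election also depends on device ordinals within the site, not just VLAN parity:

```python
# Sketch of OTV multi-homing VLAN distribution: extended VLANs are split
# between the two edge devices in a site, even VLAN IDs to one AED and
# odd IDs to the other, so each VLAN has exactly one forwarder.
def split_vlans(extended_vlans):
    even = [v for v in extended_vlans if v % 2 == 0]
    odd = [v for v in extended_vlans if v % 2 == 1]
    return even, odd

even_aed, odd_aed = split_vlans(range(10, 16))
assert even_aed == [10, 12, 14]
assert odd_aed == [11, 13, 15]
```

Because each VLAN has a single AED, a frame for an extended VLAN can never be forwarded onto the overlay by both edge devices at once, which is how OTV avoids loops without extending spanning tree.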
The multicast control group identifies the overlay; two different overlays must have two different multicast control groups. The control group is used for neighbor discovery and to exchange MAC address reachability. The data group, however, is an SSM (Source-Specific Multicast) group range, which is used to carry multicast data traffic generated by the sites. In the aggregation layer, Protocol Independent Multicast (PIM) is configured on all intra- and inter-data-center Layer 3 links to allow multicast state to be built in the core network.
Since PIM sparse mode requires a rendezvous point (RP) to build a multicast tree, one of the aggregation switches in each data center is used as an RP.
A local RP allows both local sources and receivers to join the local RP rather than having to go to a different data center to reach an RP in order to build a shared tree. For more information about the MSDP and Anycast RP features of multicast, visit: http://www.cisco.com/en/US/docs/ios/solutions_docs/ip_multicast/White_papers/anycast.html
Primary Adjacency Server :: Join Interface [x]
Secondary Adjacency Server :: Join Interface [y]
OTV Configuration :: Unicast-Only Mode
Step 1 :: enable OTV
Step 2 :: configure the site-vlan, site-identifier, and Overlay interface
Step 3 :: define the role of the primary adjacency server
Step 4 :: define the role of the secondary adjacency server
Step 5 :: define all other edge devices as clients
Assume :: the site-vlan is enabled on the L2 trunks (make the VLAN active):

interface port-channel 10
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 10, 99

[Primary Adjacency Server :: West DC, Site ID 1, Site VLAN 99, Join Interface [x]]
feature otv
vlan 10, 99
otv site-vlan 99
otv site-identifier 0000.0000.0001
interface Overlay 1
  otv join-interface ethernet 1/9
  otv adjacency-server unicast-only
  otv extend-vlan 10
interface e1/9
  mtu 9216
  ip address [x]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point

[Secondary Adjacency Server :: East DC, Site ID 2, Site VLAN 99, Join Interface [y]]
feature otv
vlan 10, 99
otv site-vlan 99
otv site-identifier 0000.0000.0002
interface Overlay 1
  otv join-interface ethernet 1/9
  otv adjacency-server unicast-only
  otv use-adjacency-server [x] unicast-only
  otv extend-vlan 10
interface e1/9
  mtu 9216
  ip address [y]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point

[Client edge device :: West DC, Site ID 1, Site VLAN 99]
feature otv
vlan 10, 99
otv site-vlan 99
otv site-identifier 0000.0000.0001
interface Overlay 1
  otv join-interface ethernet 1/9
  otv use-adjacency-server [x] [y] unicast-only
  otv extend-vlan 10
interface e1/9
  mtu 9216
  ip address [w]/30
  ip router ospf 1 area 0.0.0.0
  ip ospf network point-to-point

[Client edge device :: East DC, Site ID 2, Site VLAN 99]
Same as the West DC client, except otv site-identifier 0000.0000.0002 and ip address [z]/30.
NOTES

OTV Configuration :: Unicast-Only Mode

Primary Adjacency Server :: Join Interface [x]
Secondary Adjacency Server :: Join Interface [y]

Primary Adjacency Server

  feature otv
  vlan 10, 99
  otv site-vlan 99
  otv site-identifier 0000.0000.0001
  interface Overlay 1
    otv join-interface ethernet 1/9
    otv adjacency-server unicast-only
    otv extend-vlan 10
  interface e 1/9
    mtu 9216
    ip address [x]/30
    ip router ospf 1 area 0.0.0.0
    ip ospf network point-to-point
Secondary Adjacency Server

  feature otv
  vlan 10, 99
  otv site-vlan 99
  otv site-identifier 0000.0000.0002
  interface Overlay 1
    otv join-interface ethernet 1/9
    otv adjacency-server unicast-only
    otv use-adjacency-server [x] unicast-only
    otv extend-vlan 10
  interface e 1/9
    mtu 9216
    ip address [y]/30
    ip router ospf 1 area 0.0.0.0
    ip ospf network point-to-point

Client Edge Device

  feature otv
  vlan 10, 99
  otv site-vlan 99
  otv site-identifier 0000.0000.0001
  interface Overlay 1
    otv join-interface ethernet 1/9
    otv use-adjacency-server [x] [y] unicast-only
    otv extend-vlan 10
  interface e 1/9
    mtu 9216
    ip address [w]/30
    ip router ospf 1 area 0.0.0.0
    ip ospf network point-to-point

Two pieces of configuration are required to deploy OTV across a unicast-only transport infrastructure: the first defines the role of Adjacency Server, and the second is applied to each OTV edge device not acting as an Adjacency Server (i.e., acting as a client). All client OTV edge devices are configured with the address of the Adjacency Server; all other adjacency addresses are discovered dynamically. As a result, when a new site is added, only the OTV edge devices for the new site need to be configured with the Adjacency Server addresses; no other site needs additional configuration. The usual recommendation is to deploy a redundant pair of Adjacency Servers in separate DC sites.
The configuration on the Primary Adjacency Server is very simple and limited to enabling AS functionality (the otv adjacency-server command). The same command is also required on the Secondary Adjacency Server device, which must additionally point to the Primary AS (leveraging the otv use-adjacency-server command). Finally, each generic OTV edge device must be configured to use both the Primary and Secondary Adjacency Servers. The sequence of Adjacency Server addresses in the configuration determines the primary or secondary Adjacency Server role. This order is relevant because an OTV edge device will always use the OTV neighbor-list (oNL) provided by the Primary Adjacency Server, unless it detects that that device is no longer available (control-plane Hellos are always exchanged as keepalives between each OTV device and the Adjacency Servers).
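The primary/secondary selection described above can be sketched as a few lines of code. This is a simplified model, not Cisco code: the keepalive mechanism is reduced to a boolean, and the [x]/[y] join addresses are stand-ins carried over from the configuration examples.

```python
# Sketch: how an OTV edge device in unicast-only mode chooses which
# Adjacency Server's neighbor list (oNL) to use. Simplified model; the
# "alive" flag stands in for the control-plane Hello keepalives.

from dataclasses import dataclass, field

@dataclass
class AdjacencyServer:
    join_ip: str                                        # e.g. the [x] or [y] address
    neighbor_list: list = field(default_factory=list)   # oNL it advertises
    alive: bool = True                                  # tracked via Hellos

def active_neighbor_list(primary: AdjacencyServer, secondary: AdjacencyServer):
    """Clients always prefer the Primary AS's oNL; the Secondary's list
    is used only while the Primary is unreachable."""
    if primary.alive:
        return primary.neighbor_list
    return secondary.neighbor_list

primary = AdjacencyServer("x.x.x.x", ["w.w.w.w", "z.z.z.z", "y.y.y.y"])
secondary = AdjacencyServer("y.y.y.y", ["w.w.w.w", "z.z.z.z", "x.x.x.x"])

print(active_neighbor_list(primary, secondary))   # primary's list is used
primary.alive = False                             # keepalives time out
print(active_neighbor_list(primary, secondary))   # falls back to secondary's list
```

This mirrors why the address order in otv use-adjacency-server [x] [y] matters: the first address is the one whose oNL is trusted while it is reachable.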
OTV Configuration :: HSRP Filtering

Step 1 :: create OTV HSRP access-lists (VACL)
Step 2 :: create OTV HSRP localization filters — filter out HSRP v1 and v2; filter out Gratuitous ARP
Step 3 :: create route-map to prevent advertisements of HSRP vMACs

Step 1 — access-lists and VACL:

  ip access-list ALL_IPs
    10 permit ip any any
  mac access-list ALL_MACs
    10 permit any any
  ip access-list HSRP_IP
    10 permit udp any 224.0.0.2/32 eq 1985
    20 permit udp any 224.0.0.102/32 eq 1985
  mac access-list HSRP_VMAC
    10 permit 0000.0c07.ac00 0000.0000.00ff any
    20 permit 0000.0c9f.f000 0000.0000.0fff any
  vlan access-map HSRP_Localization 10
    match mac address HSRP_VMAC
    match ip address HSRP_IP
    action drop
  vlan access-map HSRP_Localization 20
    match mac address ALL_MACs
    match ip address ALL_IPs
    action forward
  vlan filter HSRP_Localization vlan-list 10

Step 2 — Gratuitous ARP filter:

  feature dhcp
  arp access-list HSRP_VMAC_ARP
    10 deny ip any mac 0000.0c07.ac00 ffff.ffff.ff00
    20 deny ip any mac 0000.0c9f.f000 ffff.ffff.f000
    30 permit ip any mac any
  ip arp inspection filter HSRP_VMAC_ARP vlan 10

Step 3 — mac-list and route-map:

  mac-list OTV_HSRP_VMAC_deny seq 10 deny 0000.0c07.ac00 ffff.ffff.ff00
  mac-list OTV_HSRP_VMAC_deny seq 11 deny 0000.0c9f.f000 ffff.ffff.f000
  mac-list OTV_HSRP_VMAC_deny seq 20 permit 0000.0000.0000 0000.0000.0000
  route-map OTV_HSRP_filter permit 10
    match mac-list OTV_HSRP_VMAC_deny

Base OTV configuration on the edge device:

  feature otv
  vlan 10, 99
  otv site-vlan 99
  otv site-identifier 0000.0000.0001
  interface Overlay 1
    otv join-interface ethernet 1/9
    otv control-group 239.1.1.1
    otv data-group 232.1.1.0/24
    otv extend-vlan 10
The filtering of FHRP messages across the overlay is a critical function to enable, because it allows the same FHRP configuration to be applied in different sites.
The end result is that the same default gateway (characterized by the same virtual IP and virtual MAC addresses) is available in each data center. This capability optimizes outbound traffic flows (server-to-client direction), but it does not control or improve ingress traffic (client-to-server direction), which will continue to arrive via the original DC; solutions to this challenge include DNS-based redirection, route injection, and LISP.
Apply the route-map to the overlay:

  otv-isis default
    vpn Overlay1
      redistribute filter route-map OTV_HSRP_filter

The VLAN ACL is required to identify the traffic that needs to be filtered; this configuration applies to both the HSRP v1 and v2 protocols. After applying the configuration on the OTV VDC to the set of VLANs that are trunked from the Agg VDC to the OTV VDC, all HSRP messages will be dropped once received by the OTV VDC. A specific filter must also be applied to suppress the Gratuitous ARP (GARP) messages that may be received across the OTV overlay from the remote sites.

Even though HSRP traffic is filtered via the VACL, the vMAC used to source the HSRP packets is still learned by the OTV VDC. OTV therefore advertises this MAC address information to the other sites via an IS-IS update. While this in itself causes no harm, it would cause the remote OTV edge devices to see constant MAC moves for the vMAC (from the internal interface to the overlay interface and vice versa).

Step 1 :: IP ACLs to drop HSRP Hellos and forward other traffic; MAC ACLs to drop non-IP HSRP traffic and forward other traffic; create the VACL and apply it to each extended VLAN
Step 2 :: feature dhcp is required for ARP inspection; create the ARP access-list to deny traffic from the virtual MAC and apply the ARP ACL to each extended VLAN
Step 3 :: mac-list to deny advertising the virtual MAC; create the route-map and apply the route-map to each overlay
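The drop/forward decision the HSRP_Localization VACL makes can be modeled in a few lines. This is a hypothetical helper, not NX-OS code; the wildcard-mask semantics follow the mac access-list entries above (a MAC matches when the bits outside the wildcard equal the ACL prefix).

```python
# Sketch of the HSRP_Localization VACL logic: sequence 10 drops HSRP v1/v2
# hellos sourced from the HSRP virtual MAC ranges; sequence 20 forwards
# everything else.

HSRP_V1_V2_GROUPS = {"224.0.0.2", "224.0.0.102"}   # HSRP v1 and v2 groups
HSRP_UDP_PORT = 1985

def mac_to_int(mac: str) -> int:
    return int(mac.replace(".", ""), 16)

def matches_hsrp_vmac(src_mac: str) -> bool:
    mac = mac_to_int(src_mac)
    # 0000.0c07.ac00 / wildcard 0000.0000.00ff -> HSRP v1 vMAC range
    # 0000.0c9f.f000 / wildcard 0000.0000.0fff -> HSRP v2 vMAC range
    for prefix, wildcard in ((0x00000C07AC00, 0xFF), (0x00000C9FF000, 0xFFF)):
        if mac & ~wildcard & 0xFFFFFFFFFFFF == prefix:
            return True
    return False

def vacl_action(src_mac: str, dst_ip: str, udp_dport: int) -> str:
    """Sequence 10 (drop) matches both the vMAC ACL and the HSRP_IP ACL;
    sequence 20 (forward) matches everything else."""
    if (matches_hsrp_vmac(src_mac) and dst_ip in HSRP_V1_V2_GROUPS
            and udp_dport == HSRP_UDP_PORT):
        return "drop"
    return "forward"

print(vacl_action("0000.0c07.ac0a", "224.0.0.2", 1985))    # drop (HSRP v1 hello)
print(vacl_action("0000.0c9f.f123", "224.0.0.102", 1985))  # drop (HSRP v2 hello)
print(vacl_action("0010.2233.4455", "10.1.1.10", 5000))    # forward
```

Note that sequence 10 requires both the MAC and IP matches, exactly as the access-map pairs HSRP_VMAC with HSRP_IP.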
Perform Configuration Steps at Both DC Sites (East & West)

OTV Configuration :: Authentication

OTV supports authentication of Hello messages along with authentication of Protocol Data Units (PDUs).

Step 1 :: configure OTV key chain
Step 2 :: apply MD5 authentication to OTV Hellos
Step 3 :: apply MD5 authentication to OTV PDUs

  feature otv
  vlan 10, 99
  key chain OTV-Key
    key 1
      key-string 0 Cisc0!
  otv site-vlan 99
  otv site-identifier 0000.0000.0001
  interface Overlay 1
    otv join-interface ethernet 1/9
    otv control-group 239.1.1.1
    otv data-group 232.1.1.0/24
    otv extend-vlan 10
    otv isis authentication-type md5
    otv isis authentication key-chain OTV-Key
  otv-isis default
    vpn Overlay1
      authentication-check
      authentication-type md5
      authentication key-chain OTV-Key
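The idea behind keyed-MD5 authentication of the OTV IS-IS Hellos and PDUs can be illustrated with a short sketch. This is not the exact IS-IS wire procedure, only the concept: both edge devices hold the same key-string ("Cisc0!" from the key chain above), the sender attaches a digest computed over the PDU plus the key, and the receiver recomputes it and discards the PDU on mismatch.

```python
# Conceptual sketch of keyed-MD5 PDU authentication (simplified; the real
# IS-IS procedure embeds the digest in an authentication TLV).

import hashlib

KEY_STRING = b"Cisc0!"   # from: key chain OTV-Key / key 1 / key-string 0 Cisc0!

def md5_digest(pdu: bytes, key: bytes) -> bytes:
    # Simplified keyed hash over the PDU contents plus the shared secret
    return hashlib.md5(pdu + key).digest()

def send(pdu: bytes) -> tuple:
    return pdu, md5_digest(pdu, KEY_STRING)

def accept(pdu: bytes, digest: bytes, local_key: bytes) -> bool:
    # Receiver recomputes with its own key chain; mismatch -> PDU discarded
    return md5_digest(pdu, local_key) == digest

pdu, digest = send(b"otv-isis hello")
print(accept(pdu, digest, b"Cisc0!"))    # True  - keys match, PDU accepted
print(accept(pdu, digest, b"WrongKey"))  # False - PDU would be discarded
```

This is why the same key chain must be configured at both DC sites: a mismatched key-string silently breaks adjacency formation.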
Perform Configuration Steps at Both DC Sites (East & West)
OTV Configuration :: Logical Join Interface

Step 1 :: enable PIM
Step 2 :: configure Rendezvous Point (RP)
Step 3 :: configure Source-Specific Multicast (SSM)
Step 4 :: configure a loopback interface, enable PIM sparse-mode on it, and set it as the OTV join interface
Step 5 :: enable PIM sparse-mode on all L3 links

  feature otv
  feature ospf
  feature pim
  vlan 10, 99
  router ospf 1
  ip pim rp-address [x.x.x.x] group-list 224.0.0.0/4
  ip pim ssm range 232.0.0.0/8
  interface loopback 10
    ip address [….]/32
    ip router ospf 1 area 0
    ip pim sparse-mode
  otv site-vlan 99
  otv site-identifier 0000.0000.0001
  interface Overlay 1
    otv join-interface loopback 10
    otv control-group 239.1.1.1
    otv data-group 232.1.1.0/24
    otv extend-vlan 10
  interface e 1/9
    mtu 9216
    ip address [….]/30
    ip router ospf 1 area 0.0.0.0
    ip ospf network point-to-point
    ip pim sparse-mode
  interface e 2/9
    mtu 9216
    ip address [….]/30
    ip router ospf 1 area 0.0.0.0
    ip ospf network point-to-point
    ip pim sparse-mode

The Overlay can now use the otv source-interface command to source OTV traffic from the edge device and to source joins to receive multicast traffic from the core. A loopback is used to guarantee the interface is up/up. Multiple L3 uplinks can now connect to the OTV edge device, each running PIM and an IGP with the IPv4 core. The OTV edge device can now act as a first-hop/last-hop multicast PIM router instead of an IGMP client; enable "ip pim sparse-mode" on all L3 interfaces.

Old Configuration (OTV Join Interface)

  interface e 1/9
    mtu 9216
    ip address [….]/30
    ip router ospf 1 area 0.0.0.0
    ip ospf network point-to-point
    ip igmp version 3

Note: OTV tunnel depolarization will be introduced in a future release. The use of a secondary IP address on the join interface, not the OTV overlay interface, will help improve ECMP load balancing for OTV traffic toward the core.

Existing join interface limitations:
- Bandwidth to the core/aggregation is limited to one physical link or port-channel
- Changes to the join-interface churn all OTV overlay state, since the overlay encapsulation for all routes must be updated
- PIM cannot be enabled on the join-interface, since the OTV solution assumes it is an IGMP host interface
- The redundancy of multiple uplinks, and the flexibility of dynamic unicast routing convergence on uplink failures, cannot be utilized
- If the join-interface goes down, connectivity to the core is broken; user intervention is needed to provide alternate core connectivity
OTV VLAN Translation :: Translation Through Transit VLAN

Step 1 :: configure vlan mapping

West Site :: Site ID 1 (VLAN 10 mapped to transit VLAN 100)

  feature otv
  vlan 10, 99, 100
  otv site-vlan 99
  otv site-identifier 0000.0000.0001
  interface Overlay 1
    otv join-interface ethernet 1/9
    otv control-group 239.1.1.1
    otv data-group 232.1.1.0/24
    otv extend-vlan 10
    otv vlan mapping 10 to 100

Notes:
- Used when a different VLAN is in use at each site
- A mapped VLAN cannot also be extended on another site
- VLAN mappings have a one-to-one relationship
- VLAN mappings can be added or removed without impacting the other mappings on the overlay interface

East Site :: Site ID 2 (VLAN 20 mapped to transit VLAN 100)

  feature otv
  vlan 20, 99, 100
  otv site-vlan 99
  otv site-identifier 0000.0000.0002
  interface Overlay 1
    otv join-interface ethernet 1/9
    otv control-group 239.1.1.1
    otv data-group 232.1.1.0/24
    otv extend-vlan 20
    otv vlan mapping 20 to 100

VLAN 10 at the West site and VLAN 20 at the East site are both carried as transit VLAN 100 across the overlay.
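The one-to-one constraints on "otv vlan mapping" can be expressed as a small validation routine. This is a hypothetical helper for illustration, not device code; the two checks it performs (no transit VLAN reused, no VLAN acting as both site and transit VLAN on the same device) reflect the one-to-one rule stated above.

```python
# Sketch: validating per-edge-device OTV VLAN mappings as a one-to-one relation.

def validate_vlan_mappings(mappings: dict) -> None:
    """mappings: {site_vlan: transit_vlan}, e.g. {10: 100} at the West site
    and {20: 100} at the East site. Raises ValueError on an invalid set."""
    transit = list(mappings.values())
    if len(transit) != len(set(transit)):
        raise ValueError("a transit VLAN may appear in only one mapping")
    overlap = set(mappings) & set(transit)
    if overlap:
        raise ValueError(f"VLAN(s) {overlap} used as both site and transit VLAN")

validate_vlan_mappings({10: 100})   # West site mapping - valid
validate_vlan_mappings({20: 100})   # East site mapping - valid
try:
    validate_vlan_mappings({10: 100, 30: 100})   # transit VLAN 100 reused
except ValueError as err:
    print(err)
```

Note that the transit VLAN (100) may legitimately appear at both sites; the one-to-one rule applies per edge device.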
OTV Configuration :: Dedicated Broadcast Group

- The "otv broadcast-group" configuration line goes under the Overlay interface
- Optional command; useful for QoS purposes
- The broadcast group needs to be configured on all OTV edge devices connected to the OTV overlay network

Perform Configuration Steps at Both DC Sites (East & West)

Step 1 :: configure broadcast group

  feature otv
  vlan 10, 99
  interface loopback 10
    ip address [….]/32
  otv site-vlan 99
  otv site-identifier 0000.0000.0001
  interface Overlay 1
    otv join-interface loopback 10
    otv control-group 239.1.1.1
    otv data-group 232.1.1.0/24
    otv broadcast-group 224.2.2.0
    otv extend-vlan 10

The same configuration, with the appropriate site identifier, is applied at the second site.
OTV Configuration :: Selective Unicast Flooding

Step 1 :: configure static OTV flood [enabled per MAC address]

West Site :: Site ID 1

  feature otv
  vlan 10, 99
  otv site-vlan 99
  otv site-identifier 0000.0000.0001
  otv flood mac 1111.2222.0101 vlan 10
  interface Overlay 1
    otv join-interface ethernet 1/9
    otv control-group 239.1.1.1
    otv data-group 232.1.1.0/24
    otv extend-vlan 10

East Site :: Site ID 2

  feature otv
  vlan 10, 99
  otv site-vlan 99
  otv site-identifier 0000.0000.0002
  interface Overlay 1
    otv join-interface ethernet 1/9
    otv control-group 239.1.1.1
    otv data-group 232.1.1.0/24
    otv extend-vlan 10
Normally, unknown unicast Layer 2 frames are not flooded between OTV sites, and MAC addresses are not learned across the overlay interface. Any unknown unicast frames that reach the OTV edge device are blocked from crossing the logical overlay, allowing OTV to prevent Layer 2 faults from spreading to remote sites.
The end points connected to the network are assumed not to be silent or unidirectional. However, some data center applications require unknown unicast traffic to be flooded over the overlay to all the data centers, where end points may be silent. Beginning with Cisco NX-OS Release 6.2(2), you can configure selective unicast flooding so that unknown unicast traffic destined to a specified MAC address is flooded to all other edge devices in the OTV overlay network.
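The forwarding decision above can be summarized in a short sketch. This is illustrative pseudologic, not device code; the table contents and interface names are made up, and the flood exception corresponds to the "otv flood mac 1111.2222.0101 vlan 10" line in the configuration.

```python
# Sketch: what an OTV edge device does with a frame whose destination MAC
# is not in the MAC table. Default behavior: do NOT flood across the overlay;
# "otv flood mac <mac> vlan <vlan>" adds an explicit per-MAC exception.

mac_table = {(10, "0000.aaaa.0001"): "Eth1/1"}   # locally learned MACs (example)
flood_exceptions = {(10, "1111.2222.0101")}      # from: otv flood mac ... vlan 10

def forward_unknown_unicast(vlan: int, dst_mac: str) -> str:
    if (vlan, dst_mac) in mac_table:
        return f"forward via {mac_table[(vlan, dst_mac)]}"
    if (vlan, dst_mac) in flood_exceptions:
        return "flood across overlay to all edge devices"
    return "drop (unknown unicast is not flooded over OTV)"

print(forward_unknown_unicast(10, "0000.aaaa.0001"))  # known MAC: normal forwarding
print(forward_unknown_unicast(10, "1111.2222.0101"))  # configured exception: flooded
print(forward_unknown_unicast(10, "dead.beef.0000"))  # default: dropped at the edge
```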
Strong Recommendations and Key Notes
OTV encapsulation is done on M-series modules. Note: the control-plane protocol used by OTV is IS-IS; however, IS-IS does not need to be explicitly configured. It runs in the background once OTV is enabled.
In a multi-tenancy environment, the same OTV VDC can be configured with multiple overlays to provide a segmented Layer 2 extension for different tenants or applications. When multiple data center sites are interconnected, OTV operations can benefit from the presence of multicast in the core; multicast is not mandatory in most OTV topologies, however, since unicast-only mode can be used as well. The same OTV VDCs can be used by multiple VDCs deployed at the aggregation tier, as well as by other Layer 2 switches connected to the OTV VDCs, by configuring multiple OTV overlays. It is important to note that the extended VLANs within these multiple overlays must not overlap.
A separate Layer 3 link between the two aggregation VDCs should be configured, as per best practices, to carry any Layer 3 traffic between them. The overlay interface will not come up until you configure a multicast group address and the site-VLAN has at least one active port on the OTV edge device.
Support for loopback interfaces as OTV join interfaces is planned for 6.2(2) and later code releases.
Strong Recommendations and Key Notes
FHRP Filtering Note: It is important to stress that this outbound-path (server-to-client) optimization should be deployed in conjunction with an equivalent mechanism optimizing inbound (client-to-server) flows, to avoid asymmetric traffic behavior (highly undesirable, especially in deployments leveraging stateful services across data centers).
White paper discussing inbound traffic optimization solutions :: http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/DCI/4.0/EMC/EMC.pdf
It is important to note that OTV support requires the use of the Transport Services (TRS) license. Depending on the specifics of the OTV deployment, the Advanced license may also be required to provide Virtual Device Context (VDC) support.
Before configuring OTV, you should review and implement Cisco recommended STP best practices at each site. OTV is independent of STP, but it greatly benefits from a stable and robust Layer 2 topology.
If the data centers are OTV multi-homed, it is a recommended best practice to bring the overlay up in a single-homed configuration first, by enabling OTV on a single edge device at each site. After the OTV connection has been tested single-homed, enable the functionality on the other edge devices of each site.
OTV currently enforces switch virtual interface (SVI) separation for the VLANs being extended across the OTV link, meaning that OTV is usually placed in its own VDC. With the VDC license on the Cisco Nexus 7000, you have the flexibility to keep SVIs in other VDCs and dedicate a VDC to OTV functions.
Configure the join interface, and all Layer 3 interfaces that face the IP core between the OTV edge devices, with the highest maximum transmission unit (MTU) size supported by the IP core. OTV sets the Don't Fragment (DF) bit in the IP header for all OTV control and data packets, so the core cannot fragment these packets.
Strong Recommendations and Key Notes
With NX-OS 6.1 and earlier, only one join interface can be specified per overlay; two methods are available:
- Configure a single join-interface, which is shared across multiple overlays
- Configure a different join interface for each overlay, which increases OTV reliability

For higher resiliency you can use a port-channel, but it is not mandatory. There are no requirements for 1 Gigabit Ethernet versus 10 Gigabit Ethernet, or dedicated versus shared mode.
The transport network must support PIM sparse mode (ASM) or PIM-Bidir multicast traffic. OTV is compatible with a transport network configured only for IPv4; IPv6 is not supported. Do not enable PIM on the join-interface. Do not configure OTV on an F-series module.
Ensure the site identifier is configured and is the same for all edge devices in a site. OTV brings down all overlays and generates a system message when a mismatched site identifier is detected from a neighbor edge device.
Mixing Nexus 7000 and ASR 1000 devices for OTV within the same site is not supported at this time. However, using Cisco Nexus 7000s in one site and Cisco ASR 1000s in another site is fully supported; for this scenario, keep the separate scalability numbers of the two platforms in mind, because you will have to account for the lowest common denominator.
Starting in NX-OS 5.2, the site-id command was introduced as a way to harden multihoming for OTV. It is a configurable option that must be the same for devices within the same data center and different between devices in different data centers. It specifies which site a particular OTV device is in, so that two OTV devices in different sites cannot join each other as a multihomed site. This command is now mandatory.
Strong Recommendations and Key Notes
Using virtual port channels (vPCs) and OTV together provides an extra layer of resiliency and is thus recommended as a best practice.
OTV & FabricPath: Because OTV encapsulation is done on M-series modules, OTV cannot read FabricPath packets. Because of this restriction, FabricPath must be terminated and reverted to Classical Ethernet where the OTV VDC resides. In addition, when running FabricPath in your network, it is highly recommended that you use the spanning-tree domain command.
OTV Encapsulation :: MAC in IP

• OTV adds 42 bytes of overhead to the packet IP MTU size: Outer IP Header + OTV Shim + Original L2 Header (without the 802.1Q header)
• The 802.1Q header is removed and its VLAN field is copied into the OTV shim header
• The OTV shim header contains the VLAN and the overlay ID number; an external IP header is then prepended
• Consider jumbo MTU sizing along the path between the source and destination endpoints to account for the extra 42 bytes

Encapsulated frame layout (802.1Q header removed from the original L2 frame):

  Outer IP Header (20B) | OTV Shim (8B) | L2 Header (14B*) | Payload | CRC (4B)

  Original L2 frame :: DMAC (6B) | SMAC (6B) | 802.1Q | Ether Type (2B) | Payload | CRC (4B)

  * The 4 bytes of the 802.1Q header have already been removed

  20B + 8B + 14B* = 42 bytes of total overhead
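The arithmetic above translates directly into an MTU-sizing check. The helper below is illustrative (the interface MTU values are examples, not mandated numbers); it shows why a core carrying 1500-byte server frames must pass at least 1542-byte IP packets, since OTV sets the DF bit and the core cannot fragment.

```python
# Quick arithmetic behind the 42-byte OTV overhead and the jumbo-MTU
# recommendation for the transport path.

OUTER_IP_HEADER = 20   # bytes
OTV_SHIM = 8           # bytes (carries VLAN and overlay ID)
INNER_L2_HEADER = 14   # bytes; the 4-byte 802.1Q tag was already stripped

OTV_OVERHEAD = OUTER_IP_HEADER + OTV_SHIM + INNER_L2_HEADER
print(OTV_OVERHEAD)    # 42

def required_core_mtu(server_mtu: int) -> int:
    """Minimum IP MTU the transport must carry without fragmentation
    (OTV sets the DF bit, so the core cannot fragment OTV packets)."""
    return server_mtu + OTV_OVERHEAD

print(required_core_mtu(1500))   # 1542
print(required_core_mtu(9000))   # 9042
```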
How OTV Works :: Inter-Site Packet Flow (OTV Data Plane)

Diagram :: VLAN 10 MAC tables — West DC: MAC 1 → Eth 1, MAC 2 → IP B, MAC 3 → Eth 3; East DC: MAC 1 → IP A, MAC 2 → Eth 2, MAC 3 → IP A. The frame from MAC 1 to MAC 2 is encapsulated at the West DC edge device (outer header IP A → IP B) and decapsulated at the East DC edge device.

Assumption :: New MACs were learned on the internal interfaces in the VLANs that are OTV extended; an OTV update message was sent, replicated across the transport, and delivered to all remote OTV edge devices; the MACs learned through OTV were then imported into the MAC address tables of the OTV edge devices.

Step 1 :: The Layer 2 frame is received at the aggregation layer or OTV edge device. A traditional Layer 2 lookup is performed, but the MAC table entry for Host B does not point to a local Ethernet interface (as it would for intra-site communication); it points to the IP address of the remote OTV edge device that advertised that MAC's reachability information.

Step 2 :: The OTV edge device encapsulates the original Layer 2 frame; the source IP of the outer header is the address of the local join interface, and the destination IP is the address of the remote edge device's join interface.

Step 3 :: The OTV-encapsulated frame (a regular unicast IP packet) is carried across the transport infrastructure and delivered to the remote OTV edge device.

Step 4 :: The remote OTV edge device decapsulates the frame, exposing the original Layer 2 packet.

Step 5 :: The OTV edge device performs another Layer 2 lookup on the original Ethernet frame and discovers that the destination is reachable through a physical interface, which means the MAC address is local to the site.

Step 6 :: The frame is then delivered to the MAC destination, Host B.
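Steps 1 through 6 can be sketched end to end in a few lines. This is a simplified model, not device code: the MAC tables and names (IP A, IP B, Eth 2) mirror the diagram above, and encapsulation is reduced to wrapping the frame in an outer-header dict.

```python
# Sketch of the OTV data-plane flow: West DC edge device looks up Host B's MAC,
# finds it points at the remote join IP (not a local interface), encapsulates,
# and the East DC edge device decapsulates and delivers locally.

west_mac_table = {("VLAN10", "MAC2"): "IP B"}    # remote MAC, learned via OTV update
east_mac_table = {("VLAN10", "MAC2"): "Eth 2"}   # same MAC, local to the East DC

def west_forward(vlan, dst_mac, frame):
    nexthop = west_mac_table[(vlan, dst_mac)]            # Step 1: L2 lookup
    if nexthop.startswith("IP"):                         # remote -> encapsulate
        packet = {"src_ip": "IP A", "dst_ip": nexthop,   # Step 2: outer IP header
                  "inner": frame}
        return east_decap(packet)                        # Step 3: unicast transport
    return f"local delivery via {nexthop}"

def east_decap(packet):
    frame = packet["inner"]                              # Step 4: strip outer header
    out_if = east_mac_table[(frame["vlan"], frame["dst"])]   # Step 5: second lookup
    return f"delivered to {frame['dst']} via {out_if}"       # Step 6: local delivery

frame = {"vlan": "VLAN10", "src": "MAC1", "dst": "MAC2", "payload": "..."}
print(west_forward("VLAN10", "MAC2", frame))
# delivered to MAC2 via Eth 2
```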
Additional Resources & Further Reading

External (public)

OTV Best Practices Guide
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/guide_c07-728315.pdf

Great External Resources

OTV Technology Introduction and Deployment Considerations
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/DCI/whitepaper/DCI_1.html

Using OTV to Extend Layer 2 between Two Data Centers
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/white_paper_c11-644634.html

Nexus 7000 NX-OS OTV Configuration Guides
http://www.cisco.com/en/US/docs/switches/datacenter/sw/nx-os/OTV/config_guide/b_Cisco_Nexus_7000_Series_NX-OS_OTV_Configuration_Guide.html

Cisco Nexus 7000 NX-OS Verified Scalability Guide (OTV Limits)
http://www.cisco.com/en/US/docs/switches/datacenter/sw/verified_scalability/b_Cisco_Nexus_7000_Series_NX-OS_Verified_Scalability_Guide.html#reference_18192F87114B45D9A40A41A0DEF3F74D

Cisco Live 365 (sign up & search the session catalog for OTV)
https://ciscolive365.com/
BRKDCT-3103 :: Advanced OTV – Configure, Verify and Troubleshoot OTV in Your Network; Andy Gossett (CSE)
Additional Resources & Further Reading

Quick Start Guide :: Virtual Port Channel (vPC)
https://communities.cisco.com/docs/DOC-35728

Quick Start Guide :: FabricPath
https://communities.cisco.com/docs/DOC-35725