Transcript - GENI Wiki
The ShadowNet ProtoGENI Measurement Infrastructure
Jim Griffioen, Lab for Advanced Networking, University of Kentucky, Lexington, KY
Kobus Van der Merwe, AT&T Labs - Research, Florham Park, NJ
Other project members: Zongming Fei (Kentucky), Eric Boyd (Internet2)
GEC7, March 17, 2010

Outline
• ProtoGENI ShadowNet
• Leveraging AT&T ShadowNet

ProtoGENI ShadowNet

Project Overview
• Problem: ProtoGENI backbone router resources are limited and can be challenging to use.
• Idea: Leverage the logical router features of Juniper routers to dynamically create virtual routers (slivers) in the backbone that provide carrier-grade performance and services.
• Challenge 1: Create the control software needed to virtualize the Juniper M7i and integrate it with the ProtoGENI network.
• Challenge 2: Make it easy for users to "see" what is happening on their backbone router slivers.

Project Goals
1. Deploy "virtualizable" commercial routers (Juniper M7i) in the ProtoGENI backbone that support commercial OS/software.
2. Add software support to these virtual routers to enable per-slice monitoring and measurement.
3. Develop tools and interfaces that allow slice users to use the measurement infrastructure in simple and easy ways.

ProtoGENI Network
[Map of the ProtoGENI backbone. Source: http://groups.geni.net/geni/attachment/wiki/presentations/protogeni_Ricci_gec3.pdf]

ProtoGENI ShadowNet Sites
[Map of Year 1 and Year 2 ShadowNet deployment sites. Source: http://groups.geni.net/geni/attachment/wiki/presentations/protogeni_Ricci_gec3.pdf]

ProtoGENI Backbone Node Architecture
[Diagram, built up over three slides: an Internet2-connected Gigabit Ethernet switch ties together a sliced PC hosting general-purpose slivers (Sliver 1 ... Sliver n), NetFPGA cards, and a non-sliced PC. The ShadowBox router, a Juniper M7i, is partitioned into Logical Router 1 ... Logical Router n and is driven by a Juniper Component Manager running on a virtual server on the ShadowBox controller. Per-slice measurement slivers (perfSONAR 1 ... perfSONAR n) run alongside the general-purpose slivers.]
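The slides do not show the configuration that the Juniper Component Manager actually pushes to the M7i when it creates a sliver. As a rough, hypothetical sketch only, a per-slice logical router carved out of the M7i would use Junos logical-system configuration along these lines (the sliver name, interface, VLAN ID, and addresses below are invented; older Junos releases call this hierarchy logical-routers rather than logical-systems):

    # Hypothetical sketch of one per-slice logical router on the Juniper M7i.
    # Names, interfaces, VLAN IDs, and addresses are illustrative only.
    interfaces {
        ge-0/1/0 {
            vlan-tagging;                  # each sliver gets its own VLAN on the shared port
        }
    }
    logical-systems {
        slice-1234 {                       # one logical router per ProtoGENI sliver
            interfaces {
                ge-0/1/0 {
                    unit 600 {
                        vlan-id 600;
                        family inet {
                            address 10.60.0.1/30;
                        }
                    }
                }
            }
            protocols {
                ospf {                     # the experimenter's own routing runs inside the sliver
                    area 0.0.0.0 {
                        interface ge-0/1/0.600;
                    }
                }
            }
        }
    }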
Leveraging AT&T ShadowNet

Why ShadowNet?
• ShadowNet addresses roughly the same problem as GENI, but it is less clean slate and focuses on services and network management.
• Pull: we need the ability to evolve the way we run our network, and the services we offer in it, more rapidly. This is inherently difficult:
  – Potential impact on existing services: networks are shared, and a new service or feature might interact badly with existing services. This gets worse with time because networks are "cumulative" (hardly anything ever gets switched off), and test cycles are very long.
  – Need for support systems: configuration management, network management, service monitoring, provisioning, customer interfaces, billing, fault management.
  – Legacy lock-in: existing (complicated) systems must be modified to support new services, leading to extremely long development times.
• Push: new vendor technologies. Programmability and virtualization are now available from major vendors, allowing non-vendor code to execute on routers and loosening the tight coupling between physical boxes and logical functions.
• Together these let us rethink the way we deploy services and operate our network.

ShadowNet as (part of) a solution
• A "national footprint" network/platform/testbed for research and service trials.
• Connected to, but separate from, the production network: it limits impact on the operational network and looks like a customer to the AT&T network.
• In between lab and production: stable enough for service trials, yet open and flexible enough for research experiments.
• A "general purpose", shareable testbed facility: we would like to make this a widely available and useful facility, akin to general-purpose computing facilities.

The role of ShadowNet
• An operational (but non-production) environment that enables:
  – Evaluation of new technologies and vendor capabilities, with no impact on the existing network or services.
  – Service testing and trials in a realistic environment (including customer trials), using virtualization and partitioning capabilities to limit interaction and reduce risk.
  – Evolution of network support systems, free from legacy lock-in.
  – Research in an operational setting, covering both networking and "Internet services".
• A safe playground for network evolution; this model might become the way we want to build our network.

ShadowNet node architecture
[Diagram of a ShadowNet rack: Sun Fire X4150 servers, a Cisco Catalyst 3560G-48TS GigE switch, and Juniper M7i routers.]
• A set of building blocks that can be flexibly combined into an operational network (or networks).
• Operational nodes: Richardson, TX; Pleasanton, CA; Chicago, IL. Waiting for network connectivity: Middletown, NJ.
• Each node contains:
  – A "gateway" router (Juniper M7i) in AS 5105, carrying a full BGP table, with 2 x GigE connectivity to the AT&T network and four /24 prefixes (advertise up to /32).
  – 7 x Sun Fire X4150 servers.
  – 2 x "multiservice" routers (Juniper M7i).
  – A Cisco GigE switch (Catalyst 3560).
  – Out-of-band (OOB) access.

ShadowNet
• Sharable and composable infrastructure.
• Strong separation between physical and logical devices:
  – Physical machines -> virtual machines.
  – Physical routers -> logical routers.
  – Physical links -> logical GigE links: pseudowires, tunnels, VLANs, etc. (see the sketch below).
• ShadowNet slices consist of logical devices that have been plumbed together.
• However, physical devices can also be allocated to a slice.
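The talk does not say which mechanism backs any particular logical link. As one hedged illustration of the tunnel option listed above, a GRE tunnel realizing a logical link between two ShadowNet nodes could be configured on a Juniper M7i roughly as follows (the addresses are examples, and the gr- interface assumes tunnel services are available on the router):

    # Hypothetical GRE tunnel realizing one logical GigE link between two ShadowNet nodes.
    # Endpoint and link addresses are illustrative only.
    interfaces {
        gr-0/0/0 {
            unit 0 {
                tunnel {
                    source 192.0.2.1;          # this node's gateway-facing address (example)
                    destination 198.51.100.1;  # remote node's gateway-facing address (example)
                }
                family inet {
                    address 10.20.0.1/30;      # the logical link as seen inside the slice
                }
            }
        }
    }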
Life cycle of ShadowNet devices
[Diagram.]

Using ShadowNet

"The interesting thing about cloud computing is that we've redefined cloud computing to include everything that we already do. I can't think of anything that isn't cloud computing with all of these announcements."
— Larry Ellison, CEO, Oracle (Wall Street Journal, September 26, 2008)

CloudNet experimentation
• Combining cloud computing with VPNs.
• A fairly elaborate setup involving many components:
  – Create a VPLS VPN between three sites.
  – Prototype dynamic VPN connectivity.
  – Experiment with (live) virtual machine and storage migration.
  – Mechanisms for optimizing WAN migration.
• In the works:
  – Cloud control architecture.
  – A slice with a bunch of VMs for "architectural support for network debugging".
  – A declarative approach to network management.
  – Extending the system to provide mobility functionality.

Enterprise Cloud Challenges
• Existing cloud platforms do not meet the needs of enterprise customers.
• Insufficient security controls: isolation is needed at both the server and network level.
• Deployment is difficult because of a lack of transparency: cloud resources are completely separate from local ones, and VMs cannot be made to look like part of the existing enterprise network.
• Limited control over network resources: customers cannot specify network topology or IP addresses, and cannot reserve bandwidth or request QoS guarantees for network links.

CloudNet: Enterprise-Ready Virtual Private Clouds
• Use VPNs to separate customer resources; a customer's cloud resources are reachable only from other VPN end points.
• More flexible control over how IP addresses are assigned.
• The physical network is transparent to the customer.
• Assumes a virtual machine abstraction.
• VPNs provide both network resource isolation and strong security.

Virtual Private Clouds
[Diagram: Cloud Sites X and Y host servers in VPC A and VPC B, connected through PE routers on the AT&T backbone to customer sites on VPN A and VPN B.]
• A Virtual Private Cloud is a collection of cloud resources presented to the customer as a private set of cloud resources, transparently and securely connected to the customer VPN.
• Network resources are managed in the same dynamic manner as cloud resources.

System/Architecture Components
[Diagram: a CloudNet Portal sits above a cloud domain (Cloud Platform with servers and a Cloud Manager) and a network domain (PE and CE routers on the AT&T backbone with a Network Manager), serving customers on VPN A and VPN B.]
• High-level abstraction (the portal): create compute resources, map them into a VPN, cross-domain interaction.
• Cloud Manager: creates compute resources and maps them into the VPN on the cloud side.
• Network Manager (IRSCP): VPN management on the network side.

CloudNet in ShadowNet: physical nodes involved
[Diagram: the CloudNet slice spans the ShadowNet racks at CHCG, PLTN, and RCSN (each with Sun Fire X4150 servers, a Cisco Catalyst 3560G-48TS switch, and Juniper M7i routers), interconnected by GRE tunnels across the AT&T backbone (7132).]

CloudNet in ShadowNet: a VPLS MPLS VPN in a slice
[Diagram: PE1 (PLTN), PE2 (RCSN), and PE3 (CHCG) logical routers, interconnected through P1, P2, P3, and a route reflector / IRSCP, carry the VPLS VPN to the servers PLTN5, CHCG6, and RCSN6. Logical links are built from VLAN circuit cross-connects, logical tunnels, and physical Ethernet.]
• Logical link example, a VLAN cross-connect between P3 and P1: Cisco switch VLAN -> Juniper router VLAN-CCC -> Juniper router VLAN -> Cisco switch VLAN (sketched below).
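The slides only show this cross-connect pictorially. On the Juniper side, a VLAN-CCC interface switch of this kind could be sketched roughly as follows (the interface names, connection name, and VLAN ID are hypothetical; VLAN-CCC typically requires VLAN IDs of 512 or higher):

    # Hypothetical VLAN-CCC cross-connect stitching two VLAN segments through a Juniper router.
    # Interface names and VLAN IDs are illustrative only.
    interfaces {
        ge-0/0/1 {
            vlan-tagging;
            encapsulation vlan-ccc;
            unit 600 {
                encapsulation vlan-ccc;
                vlan-id 600;               # VLAN arriving from one Cisco switch
            }
        }
        ge-0/0/2 {
            vlan-tagging;
            encapsulation vlan-ccc;
            unit 600 {
                encapsulation vlan-ccc;
                vlan-id 600;               # VLAN continuing toward the far-side switch
            }
        }
    }
    protocols {
        connections {
            interface-switch p3-to-p1 {    # connection name is made up
                interface ge-0/0/1.600;
                interface ge-0/0/2.600;
            }
        }
    }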
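Similarly, the VPLS VPN tying PLTN5, CHCG6, and RCSN6 together is only named in the slides. A BGP-signaled VPLS routing instance on one of the PE logical routers might look roughly like the sketch below (the instance name, route distinguisher, VRF target, and site identifiers are assumptions; the BGP l2vpn signaling and the MPLS LSPs between the PEs are omitted):

    # Hypothetical BGP-signaled VPLS instance on PE1 (PLTN); values are illustrative only.
    # Assumes MPLS LSPs and BGP family l2vpn signaling already exist between the PEs.
    routing-instances {
        cloudnet-vpls {
            instance-type vpls;
            interface ge-0/0/3.200;        # customer-facing unit with encapsulation vlan-vpls
            route-distinguisher 65000:100;
            vrf-target target:65000:100;
            protocols {
                vpls {
                    site-range 10;
                    site PLTN5 {
                        site-identifier 1; # CHCG6 and RCSN6 would use other identifiers
                    }
                }
            }
        }
    }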
VM migration across WAN
[Diagram: a game client on a laptop reaches the VPN over IPsec (via VpnRemap); the game server runs in VM0, with its storage volume r0 replicated by drbd, and is live-migrated between the PLTN5 and CHCG6 servers across the VPLS VPN spanning PE1 (PLTN), PE2 (RCSN), and PE3 (CHCG).]
• An IPsec client on the laptop provides remote access to the VPN.
• The game server runs on a VM; the game client runs on the laptop.
• The game server moves with the VM.
• The application is very sensitive to network impairments, yet the client-side monitor typically shows the game detecting only minor changes.
• VM migration across the WAN "just works" using VPLS VPNs.
• Optimizing for WAN conditions:
  – Storage: moving between asynchronous and synchronous replication.
  – VM: optimizing the migration logic and applying redundancy elimination.

Thank You! Questions?

This material is based upon work supported in part by the National Science Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of GPO Technologies, Corp, the GENI Project Office, or the National Science Foundation.