Fast Path Data Center Relocation
Presented by:
AFCOM & Independence Blue Cross
April 8, 2011
Agenda
• Some Background to Understand Why
• Initial Planning (After Fear Subsides)
• Critical Path Components
• Dark Fiber DWDM Network Issue Identified
• Summary of Project Results
• Keys to Success
• Overview of DR Network Design
Background Events
• Completed Relocation of Primary Data Center in September 2009
• Completed Implementation & Data Center Infrastructure Build-out of
Insourced DR Capability in January 2010 at Secondary Site (Including
Cancellation of Outsource Contract)
• In Q1 2010: Learned the Location Supporting the DR Data Center (580)
Could be Phased Out in 2011
• Completed Relocation of Entire Test, QA, Development Server
Infrastructure to Primary Data Center in Q2 2010
• Q2 2010: Started a Detailed Data Center Site Alternative Study to Evaluate
Options
• By July 2010, Two Viable Options Had Been Identified:
– Build-out within an existing IBC real estate footprint
– Build-out within the Directlink Technologies facility
• By August 2010: Learned the 580 Phase Out Would Need to be a Fast Path
with a Target Decommission in March 2011
Realization and Fear
• Realization: We Have Less than 7 Months to:
– Select a Site and Either Complete Engineering Design or
Negotiate a Deal
– Build Out the Data Center
– Re-route the Dark Fiber Network
– Plan and Execute the Relocation of the DR Data Center
– Relocate the Operations Command Center which was Also
Located in the DR Facility
• Initial Reaction:
Feasibility Assessment – Fear 101
• Document Key Factors
– Long Lead Purchases
– Contract Negotiations
– Fiber Path Build-out
– Data Center Build-out
– New Command Center Build-outs (2)
– Relocation Events Integrated into Master Release Calendar
– Cycle Time for Purchase Order Approvals
• Obtain Executive Support
– Agree to a Fast Path Approval Process
– Agree to Master Calendar Adjustments
• Publish High Level Timeline
• Parallel Sub-Projects for Critical Path Activities
High-Level Timeline
580 Relocation Timeline - Optimal
• 8/2/2010: Start Site Selection Negotiations
• 8/2/2010 - 8/31/2010: Detail Planning
• 8/2/2010 - 8/31/2010: Long-Lead Items
• 8/24/2010: Submit Equipment Orders for Approval
• 8/27/2010: Equipment Orders Approved; Order Equipment
• 9/1/2010: Site Selected, Deal Sealed
• 10/31/2010: Equipment Delivered; Start Site Network Build-out
• 11/15/2010: Contract Complete; Go Decision
• 12/15/2010: Network Active at New Site
• 1/8/2011 - 2/26/2011: 580 Relocations (Move 1: 1/7/2011, Move 2: 1/28/2011, Move 3: 2/25/2011)
• 3/31/2011: Decommission 580 Site
Site Selected: Directlink Technologies Building 2

Key Decision Factors:
• Facility infrastructure design that could serve as our production data center
• Our dark fiber vendor was already in the facility, and could be added as a node on our ring with minimal physical build-out
• Negotiated a deal based on our requirements, not a “one model fits all” approach
• Enabled IBC to meet the aggressive timeline before the contract was officially signed (i.e., work proceeded through the attorney “dance” time)
Critical Path – Parallel Activities
1. Facility build-out including space design and configuration changes: installation of power distribution, cooling capacity, overhead cable tray, and NOC area
2. Fiber network build-out to support the migration plan to transition to the new data center location without an outage
3. Acquisition and implementation of additional optical equipment to support the new fiber network node
4. Acquisition and implementation of “seed” network infrastructure equipment to allow the new data center to be an active node prior to relocation events
5. Installation of “seed” server cabinets and structured cabling within the new data center space
6. Relocation strategy development and event detail planning
7. Relocation of the primary Operations Command Center to another location (1901 Market St.) prior to completing the data center relocation
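
Because these streams ran in parallel with cross-dependencies, the effective schedule was governed by the longest dependent chain. As a rough illustration, the critical path can be computed as a longest-path calculation; the activities, dependencies, and durations (in weeks) below are hypothetical, not the project's actual plan:

    from functools import cache

    # Hypothetical work streams, loosely modeled on the seven above:
    # each entry maps a task to (prerequisites, duration in weeks).
    deps = {
        "site_deal":     ([], 4),
        "facility":      (["site_deal"], 10),
        "fiber":         (["site_deal"], 8),
        "optical_gear":  (["fiber"], 6),
        "seed_network":  (["optical_gear", "facility"], 4),
        "move_planning": ([], 12),
        "noc_move":      (["facility"], 6),
    }

    @cache
    def finish(task: str) -> int:
        """Earliest finish: own duration after the slowest prerequisite."""
        prereqs, weeks = deps[task]
        return weeks + max((finish(p) for p in prereqs), default=0)

    last = max(deps, key=finish)
    print(f"Critical path ends at {last!r} after {finish(last)} weeks")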
Fiber Network Issue Identified
• One Inter-Data Center Fiber Path Much Longer
– Invalidated the Previous Inter-Data Center Fibre
Channel Configuration
– Required Conversion from Native FC to FC/IP for
Storage Replication and LSAN Capabilities
• Resulting in (just what we needed)
– Another Critical Path, Parallel Activity
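
To see why the longer path mattered: native Fibre Channel flow control uses buffer-to-buffer credits, and the number of credits needed to keep a link at line rate grows linearly with distance and speed. When that requirement exceeds what the optics or switch ports can supply, a routed FC/IP tunnel becomes the practical alternative. A back-of-envelope sketch, using hypothetical distances and an assumed 4 Gb/s link speed (the presentation gives no path lengths):

    import math

    # Rule of thumb: a full-size (~2 KB) Fibre Channel frame occupies
    # roughly 1 km of fiber at 2 Gb/s, so keeping a link streaming
    # needs about (distance * speed / 2) buffer-to-buffer credits.
    def bb_credits_needed(distance_km: float, speed_gbps: float) -> int:
        return math.ceil(distance_km * speed_gbps / 2)

    # Hypothetical path lengths for illustration only.
    for km in (15, 60):
        print(f"{km} km @ 4 Gb/s -> ~{bb_credits_needed(km, 4.0)} credits")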
Project Results Summary
• Command Center Reconfiguration
– Primary Command Center Relocated from 580 to
IBC Headquarters at 1901 Market Street
– Active Secondary Command Center Built-Out in
the Directlink Facility
Project Results Summary
• Data Center Preparation
– Approximately two miles of fiber cable
– Approximately one mile of copper cable (reduced from over 20 miles of
copper installed at 580 site)
– 1,200’ of ceiling mounted cable trays
– 1,850 fiber terminations
– 500 Ethernet connections
– Temporary routings of backbone fiber to support the cutover to Reading
– Installation of more than 40 new network devices supporting the fiber
backbone and core network
– Rerouting telecommunication circuits supporting redundant connections for
the MPLS WAN and Extranet, Internet ISP, Bluesnet, IBC Sonet, ASP Services
and Remote Access
– Installation of 100 equipment cabinets (some new, some reused as they were emptied at the 580 site)
– Custom configuration of dual power utility feeds to 2 separate 750 kVA UPS
systems
– Installation of four 300 kVA PDUs
– Installation of new 30-ton CRAH units
– Installation of 185+ electrical circuits to provide dual A/B power to all IT
equipment
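
As a quick sanity check on how those electrical and cooling figures relate (the kVA and tonnage values come from the list above; the 0.9 power factor is an assumption, not a figure from the presentation):

    POWER_FACTOR = 0.9      # assumed; not stated in the presentation
    TON_TO_KW = 3.517       # 1 refrigeration ton of cooling ~ 3.517 kW

    ups_kw = 750 * POWER_FACTOR     # real power per 750 kVA UPS
    pdu_kw = 300 * POWER_FACTOR     # real power per 300 kVA PDU
    crah_kw = 30 * TON_TO_KW        # heat removal per 30-ton CRAH unit

    print(f"Per UPS:  ~{ups_kw:.0f} kW")    # ~675 kW
    print(f"Per PDU:  ~{pdu_kw:.0f} kW")    # ~270 kW
    print(f"Per CRAH: ~{crah_kw:.0f} kW")   # ~106 kW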
Data Center Gallery
[Photo gallery: before/after views of the data center build-out (September - December 2010), the Directlink NOC (September 2010 - February 2011), and the 1901 Market Street NOC (September 2010 and February 2011)]
Project Results Summary
• Equipment Relocation
– 111 physical servers that support redundant production, redundant QA and
standby DR
– 126 virtual servers supporting redundant production, redundant QA and
standby DR
– 39 Information Security appliances and network infrastructure devices all
redundant to Bretz configurations
– z10 Disaster Recovery Mainframe
– Mainframe disk configurations hosting replication of data from primary site
– Mainframe virtual tape system and back-end ATL
– Affiliate AS/400 Disaster Recovery configuration including mirrored storage
and tape subsystem
– Informatics-Teradata configuration supporting QA-Test-Dev and DR for
production environment
– A 16-cabinet AS/400 configuration, including mirrored storage and ATL
supporting QA-Test-Dev and DR for production environment
– Distributed Systems SAN storage configuration supporting both active HA
systems as well as production data replication
– Distributed SAN Infrastructure and ATL
– Fully engaged application test teams to verify system status post-move
Keys to Success
• Partner Type Relationship with Directlink
– Handshake deal to start facility construction tasks
– Willingness to work through configuration changes w/o red tape
• Confidence in Network Strategy
– Strategy Successfully Used for Production DC Relocation
– Experience with Target Design and Foundation Technology
• Experienced Team to Design & Build Data Center Layout and Supporting
Cable Plant
– Layout to Optimize Active Cable Plant and Facilitate Future Expansion
– Diligence in Documenting, Labeling, and Pre-Testing All Connections
• Leverage Inventory Configuration Management and Relocation Experience
– Use of Existing Inventory Management Repository Containing Key Equipment
Specifications Including Power and Connection Details
– Development of Detailed Move Event Scripts and Supporting Staffing Plans (see the sketch after this list)
• Strong Project Management Oversight Managing All Critical Path Activities
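
As a minimal sketch of what a repository-driven move event script can look like (the field names, cabinet labels, and cable IDs below are hypothetical, not IBC's actual schema):

    from dataclasses import dataclass

    @dataclass
    class Asset:
        name: str
        source_cabinet: str
        target_cabinet: str
        power_feeds: int          # A/B circuits required
        connections: list[str]    # pre-labeled cable IDs

    def move_script(assets: list[Asset]) -> list[str]:
        """Expand inventory records into ordered move-event steps."""
        steps = []
        for a in assets:
            steps += [
                f"Power down {a.name} in {a.source_cabinet}",
                f"Disconnect and bag labeled cables: {', '.join(a.connections)}",
                f"Transport and re-rack {a.name} in {a.target_cabinet}",
                f"Reconnect {a.power_feeds} power feeds; verify A/B diversity",
                f"Power up {a.name}; hand off to application test team",
            ]
        return steps

    # Hypothetical sample record.
    for step in move_script([Asset("dr-db01", "580-R12", "DLT-A04", 2,
                                   ["F-0112", "E-0347"])]):
        print(step)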
Disaster Recovery Network
• Configuration Flexibility To Provide
– The Ability to Safely Test DR Capabilities in an
Isolated DR Network
– Fast, Seamless Integration Into the Production
Network in the Event of a Primary Site Failure
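
One conceptual way to picture that flexibility (a sketch under assumed addressing, not the actual design): the DR site's routes stay local during isolated testing and are only advertised into the production core once a failover is declared.

    # Conceptual sketch only: addressing and mode logic are hypothetical.
    DR_SUBNETS = ["10.20.0.0/16"]

    def advertised_routes(failover_declared: bool) -> list[str]:
        """Routes the DR site announces to the production core."""
        return DR_SUBNETS if failover_declared else []

    print("Isolated test mode:", advertised_routes(False))   # []
    print("Primary site failure:", advertised_routes(True))  # DR subnets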