
Green Data Center Program
Alan Crosswell
05/15/09
Agenda
• The Opportunities
• The CUIT Data Center
• Other Data Centers around Columbia
• Current and Developing Data Center Best Practices
• Future State Goals
• Our NYSERDA Advanced Concepts Datacenter project
• Next Steps
The Opportunities
• Data centers consume 3% of all electricity in New York State (1.5%
nationally as of 2007).
• Use of IT systems for administrative and especially research
applications continues to grow.
• We need space for academic purposes such as labs.
• Columbia’s commitment to PlaNYC carbon footprint reduction of 30%
by 2017.
• Gov. Paterson’s 15x15 goal: 15% electricity demand reduction by 2015.
CUIT Data Center
• Architectural
– Built in 1963, updated somewhat in the 1980s.
– 4,400 sf raised-floor machine room space.
– 1,750 sf raised-floor space, now offices for GSB IT.
– 12” raised floor.
– Adequate support spaces nearby: staff, staging, storage, mechanical & fire suppression, and a (future) UPS room.
(Photos: 1968 and 2009)
CUIT Data Center
• Electrical
– Supply: 3-phase 208V from automatic transfer
switch.
– Distribution: 208V to wall-mounted panels; 120V to
most servers.
– No central UPS; lots of rack-mounted units.
– Generator: 1750 kW shared with other users & at
capacity.
– No metering. (Spot readings every decade or so. :-)
– IT demand load tripled from 2001 to 2008.
CUIT Data Center
(Chart: Historical and Projected IT Demand Load, kW by year, 2001–2013 — historical load grew from 96 kW in 2001 to roughly 335 kW by 2008; low and high projections reach about 438–537 kW by 2013.)
CUIT Data Center
• Mechanical
– On-floor CRAC units served by campus chilled water.
– Also served by backup glycol dry coolers.
– Supplements a central overhead air system.
– Heat load is shared between the overhead system and the CRAC units.
– No hot/cold aisles
– Rows are in various orientations
– Due to tripling of demand load, the backup
(generator-powered) CRAC units lack sufficient
capacity.
CUIT Data Center
• IT systems
– A mix of mostly administrative (non-research) systems.
– Most servers dual-corded 120V power input.
– Many ancient servers.
– Due to the lack of a room UPS, each rack has its own UPSes, taking up 30-40% of the rack space.
– Lots of spaghetti in the racks and under the
floor.
CUIT Data Center
Data Center Power Allocation (kW)
• Servers: 335 kW (37%)
• HVAC cooling: 462 kW (51%)
• HVAC fans: 111 kW (12%)
Power Use Efficiency (PUE) = 908 kW total / 335 kW IT ≈ 2.7
This is a SWAG!
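As a quick cross-check of the allocation above, here is a minimal sketch of the PUE arithmetic (Python; the loads are the slide's own SWAG estimates, not metered values):

    # PUE = total facility power / IT equipment power
    it_load_kw = 335.0          # servers (IT equipment)
    hvac_cooling_kw = 462.0     # chilled-water cooling
    hvac_fans_kw = 111.0        # CRAC / air-handler fans

    total_facility_kw = it_load_kw + hvac_cooling_kw + hvac_fans_kw
    pue = total_facility_kw / it_load_kw
    print(f"total {total_facility_kw:.0f} kW, PUE = {pue:.2f}")   # total 908 kW, PUE = 2.71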
LBNL Average PUE for 12 Data Centers
Power Use Efficiency (PUE) = 2.17
Other Data Centers around Columbia
• Many school, departmental & research server
rooms all over the place.
– Range from huge (7,000 sf C2B2)
… to tiny (2-3 servers in a closet)
– Several mid-sized
• Most lack electrical or HVAC backup.
• Many could be better used as academic space
(labs, offices, classrooms).
• Growth in research HPC is putting increasing pressure on these server rooms.
• Lots of money is wasted building new server rooms for HPC clusters that are part of faculty startup packages, etc.
Making the server slice bigger, the pie smaller, and greener.
• Reduce the PUE ratio by improving electrical & mechanical
efficiency.
– Google claims a PUE of 1.2
• Consolidate data centers (server rooms)
– Claimed more efficient when larger (prove it!)
– Free up valuable space for wet labs, offices, classrooms.
• Reduce the overall IT load through
– Server efficiency (newer, more efficient hardware)
– Server consolidation & sharing
• Virtualization
• Shared research clusters
• Move servers to a zero-carbon data center
Data center electrical best practices
• 95% efficient 480V room UPS
– Basement UPS room
– vs. wasting 40% of rack space
• 480V distribution to PDUs at ends of rack rows
– Transformed to 208/120V at PDU
– Reduces copper needed, transmission losses
• 208V power to servers vs. 120V
– More efficient (how much?)
• Variable Frequency Drives (VFDs) for cooling fans and pumps
– Motor power consumption increases as the cube of the speed (see the sketch after this list).
• Generator backup
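A minimal illustration of the fan/pump affinity law behind the VFD savings claim (Python; the full-speed power and speed fractions are illustrative examples, not measured values):

    # Affinity law: shaft power scales with the cube of speed, P2 = P1 * (N2/N1)**3
    full_speed_power_kw = 10.0   # hypothetical full-speed fan power

    for speed_fraction in (1.0, 0.8, 0.5):
        power_kw = full_speed_power_kw * speed_fraction ** 3
        print(f"{speed_fraction:.0%} speed -> {power_kw:4.1f} kW "
              f"({power_kw / full_speed_power_kw:.1%} of full power)")
    # 80% speed draws ~51% of full power; 50% speed draws ~12.5%.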
Data center mechanical best practices
• Air flow – reduce mixing, increase delta-T (see the sketch after this list)
– Hot/cold or double hot-aisle separation
– 24-36” underfloor plenum
– Plug leaks in the floor
– Increase temperature setpoints
– Perform CFD modeling
• Alternative cooling technique: In-row or in-rack cooling
– Reduces or eliminates hot/cold air mixing
– More efficient transfer of heat (how much?)
– Supports much higher power density
– Water-cooled servers are making a comeback
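A minimal sketch of why a higher delta-T matters: the sensible heat an airstream removes is proportional to airflow times the hot-to-cold temperature difference (Python; air properties are textbook approximations and the flow/delta-T values are illustrative only):

    # Sensible cooling: Q [kW] = volumetric flow * air density * c_p * delta_T
    AIR_DENSITY_KG_M3 = 1.2   # approximate, at room conditions
    AIR_CP_KJ_KG_K = 1.005    # specific heat of air

    def cooling_kw(airflow_m3_s: float, delta_t_c: float) -> float:
        return airflow_m3_s * AIR_DENSITY_KG_M3 * AIR_CP_KJ_KG_K * delta_t_c

    print(f"{cooling_kw(10.0, 8.0):.0f} kW at 8 C delta-T")    # ~96 kW
    print(f"{cooling_kw(10.0, 16.0):.0f} kW at 16 C delta-T")  # ~193 kW
    # Doubling delta-T doubles heat removal for the same airflow, or lets the
    # same heat be removed with half the airflow (and far less fan power).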
Data center green power best practices
• Locate data center near a renewable source
– Hydroelectric power upstate or in Canada
– Wind power – but most wind farms lack transmission capacity.
• 40% of power is lost in transmission. So bring the servers to the
power.
• Leverages our international high speed data networks
• Use “free cooling” (outside air)
– A Stanford proposal would use free cooling almost year-round
• Implement “follow the Sun” data centers
– Move the compute load to wherever the greenest power is
currently available.
Energy Saving Best Practices
• Efficient lighting, HVAC, windows, appliances, etc.
– 1W standby power proposals from LBNL and other countries
• Behavior modification
– Turn off the lights!
– Enable power-saving options on computers
– Social experiment in Watt Hall
• Co-generation
– Waste heat from electricity generation is recovered and reused
– Planned for Manhattanville campus
– Possibly for Morningside campus
• Columbia participation in PlaNYC
SiCortex Green Index results: http://sicortex.com/green_index/results (04/27/09)
Barriers to implementing best practices
• Capital costs
• Grant funding restrictions
• Short-term and parochial thinking
• Saving electricity is not well incentivized, since nobody is billed for their electrical use close enough to where they use it.
• Distance
– Synchronous writes for data replication are limited to about 30
miles
– Bandwidth*delay product impact on transmission of large
amounts of data
– Reliability concerns
– Server hugging
– Staffing needs
Key Recommendations from Bruns-Pak
2008 Study
• Allocate currently unused spaces for storage, UPS, etc.
• Consolidate racks to recapture floor space
• Generally improve redundancy of electrical & HVAC
• Upgrade electrical systems
– 750 kVA UPS module
– New 480V 1500 kVA service
– Generator improvements
• Upgrade HVAC systems
– 200-ton cooling plant
– VFD pumps & fans
– Advanced control system
Future State Goals – Next 5 years
• Begin phased upgrades of the Data Center to improve power and space efficiency. Overall cost ~ $25M.
• Consolidate and replace pizza box servers with blades (&
virtualization).
• Consolidate and simplify storage systems.
• Accommodate growing demand for HPC research clusters
– Share clusters among researchers to be more efficient
• Accommodate server needs of new Interdisciplinary Science Building.
• Develop internal cloud services.
• Explore external cloud services.
– Stanford is giving Amazon EC2 credits as part of faculty startup packages
Future State Goals – Next 5-10 years
• Build a new data center of 10,000-15,000 sf
– Perhaps cooperatively with others
– Possibly in Manhattanville
– Not necessarily in NYC
• Consolidate many small server rooms.
• Significant use of green-energy cloud computing
resources.
(Image from www.jiminypeak.com)
Our NYSERDA Project
• New York State Energy Research & Development Authority Program
Opportunity Notice 1206
• ~$1.2M total ($447K from NYSERDA, awarded pending contract)
• Improve space & power efficiency of primarily administrative servers.
• Contribute to Columbia's PlaNYC carbon footprint reduction goal.
• Make room for shared research computing in the existing data center.
• Measure and test vendor claims of energy efficiency improvements.
Our NYSERDA Project – Specific Tasks
• Identify 30 old servers to replace.
• Instrument server power consumption and data center heat load in “real time” with SNMP.
• Establish PUE profile (use DoE DC Pro survey tool)
• Implement 9 racks of high-density cooling (in-row/rack).
• Implement proper UPS and high-voltage distribution.
• Compare old & new research clusters' power consumption for the same workload (see the sketch after this list).
• Implement advanced server power management and measure
improvements.
• Review with internal, external and research faculty advisory groups.
• Communicate results.
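A minimal sketch of the old-vs-new cluster comparison: energy for the same benchmark workload is average power times runtime (Python; the wattages and runtimes are placeholders, not measured results):

    # Energy used to complete the same workload on two clusters.
    def energy_kwh(avg_power_w: float, runtime_hours: float) -> float:
        return avg_power_w * runtime_hours / 1000.0

    old = energy_kwh(avg_power_w=12_000, runtime_hours=10.0)  # hypothetical old cluster
    new = energy_kwh(avg_power_w=8_000, runtime_hours=6.0)    # hypothetical new cluster
    print(f"old {old:.0f} kWh, new {new:.0f} kWh, "
          f"savings {100 * (1 - new / old):.0f}%")            # old 120 kWh, new 48 kWh, savings 60%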
Measuring Power Consumption
• Use SNMP, which enables comparison with other metrics.
Measuring Power Consumption
• Can measure power use with SNMP at (see the polling sketch below):
– Main electrical feeder, panels, subpanels, circuits.
– UPS
– Power strips
– Some servers
– HP c7000 blade chassis SNMP MIB, which includes power supply load, cooling fan airflow, etc.
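A minimal sketch of periodic SNMP power polling (Python with the pysnmp library; the host name, community string, and OID are placeholders — a real deployment would use the device's actual power MIB objects, e.g. from the HP c7000 MIB):

    # Poll one SNMP object (e.g., a PDU or blade-chassis power reading) once a minute.
    # Requires: pip install pysnmp
    import time
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    HOST = "pdu.example.edu"                # placeholder device name
    COMMUNITY = "public"                    # placeholder read-only community
    POWER_OID = "1.3.6.1.4.1.99999.1.1.0"   # placeholder OID for a power reading

    def read_power_watts() -> int:
        error_indication, error_status, _, var_binds = next(
            getCmd(SnmpEngine(),
                   CommunityData(COMMUNITY, mpModel=1),   # SNMP v2c
                   UdpTransportTarget((HOST, 161)),
                   ContextData(),
                   ObjectType(ObjectIdentity(POWER_OID))))
        if error_indication or error_status:
            raise RuntimeError(f"SNMP error: {error_indication or error_status}")
        return int(var_binds[0][1])

    while True:
        print(time.strftime("%Y-%m-%d %H:%M:%S"), read_power_watts(), "W")
        time.sleep(60)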
Measuring Heat Rejection
• Data Center chilled water goes through a plate heat exchanger to the campus chilled water loop.
• Measure the amount of heat rejected to the campus loop with SNMP-instrumented:
– BTU meter &
– Flow meter
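A minimal sketch of the heat-rejection arithmetic behind the BTU and flow metering (Python; the 500 factor is the standard approximation for water in US units, and the flow and temperature values are illustrative only):

    # Heat rejected to the campus loop: Q [BTU/hr] ~= 500 * flow [gpm] * delta_T [degF]
    def heat_rejected_kw(flow_gpm: float, delta_t_f: float) -> float:
        btu_per_hr = 500.0 * flow_gpm * delta_t_f
        return btu_per_hr / 3412.0   # 1 kW ~= 3412 BTU/hr

    # Example: 200 gpm with a 10 degF rise across the plate heat exchanger.
    print(f"{heat_rejected_kw(200, 10):.0f} kW rejected")   # ~293 kW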
FIN
This work is supported in part by the New York State Energy Research and Development Authority.