
23 Sept. 2010
Jonghu Lee (李鍾厚)
KISTI GSDC
jlee206ATkisti.re.kr
Contents
1. Introduction to KISTI and GSDC
2. Current Status of SACC
3. Future Plan of SACC
1. Introduction to KISTI and GSDC

- Jan. 1962: Organized KORSTIC (Korea Scientific & Technology Information Center)
- Jan. 1980: Reorganized to KIET (Korea Institute for Industrial Economics and Technology), merged with KIEI (Korea International Economics Institute)
- Jan. 1991: KINITI (Korea Institute of Industry and Technology Information) spun off from KIET
- Feb. 1991: Established SERI (System Engineering Research Institute) as a division under KINITI
- Apr. 1993: Founded KORDIC (KORea Research and Development Information Center)
- Sep. 1999: Acquired the Supercomputing Center from ETRI (Electronics and Telecommunications Research Institute)
- Jan. 2001: Established KISTI (Korea Institute of Science and Technology Information)
Three Main Functions of KISTI

Super Computing Center: Supercomputing Management and Operation
- Supercomputing
- High-speed research network
- National Grid infrastructure

Knowledge Information Center
- Developing national portal systems for information resources
- Developing next-generation technology in information services

Information Analysis Center
- Core technologies analysis
- Core technologies feasibility study
- Foreign information trend analysis
- Information analysis system development
Organization of KISTI
Location

Located in the heart of Science Valley, "Daedeok" Innopolis, in Daejeon, about 160 km south of Seoul.

The DAEDEOK INNOPOLIS complex consists of a cluster of firms that represents a cross-section of Korea's cutting-edge industries, including information technology, biotechnology and nanotechnology. It hosts:
- 6 universities
- 20 government research institutes
- 10 government-invested institutes
- 33 private R&D labs
- 824 high-tech companies
Super Computing Center at KISTI
1988:
- The 1st Super Computer in Korea
- Cray-2S (2GFlops)
1993:
- Cray C90 (16GFlops)
Rank
1
2
3
4
5
15
16
2001:
- IBM p690(655.6GFlops)
- NEC SX-6(160GFlope)
- IBM p690+ (2,655GFlops)
2009:
- IBM p595 (5.9TFlops)
- SUN B6048 (24TFlops)
Site
Computer/Year Vendor
Oak Ridge National Laboratory
Jaguar - Cray XT5-HE Opteron Six Core 2.6 GHz / 2009
United States
Cray Inc.
National Supercomputing Centre in Shen Nebulae - Dawning TC3600 Blade, Intel X5650, NVidia Tesla
zhen (NSCS)
C2050 GPU / 2010
China
Dawning
Roadrunner - BladeCenter QS22/LS21 Cluster, PowerXCell 8i
DOE/NNSA/LANL
3.2 Ghz / Opteron DC 1.8 GHz, Voltaire Infiniband / 2009
United States
IBM
National Institute for Computational Scie
Kraken XT5 - Cray XT5-HE Opteron Six Core 2.6 GHz / 2009
nces/University of Tennessee
Cray Inc.
United States
Forschungszentrum Juelich (FZJ)
JUGENE - Blue Gene/P Solution / 2009
Germany
IBM
TachyonII - Sun Blade x6048, X6275, IB QDR M9 switch,
KISTI Supercomputing Center
Sun HPC stack Linux edition / 2009
Korea, South
Sun Microsystems
University of Edinburgh
HECToR - Cray XT6m 12-Core 2.1 GHz / 2010
United Kingdom
Cray Inc.
Cores
Rmax
Rpeak
Power
224162
1759.00
2331.00
6950.60
120640
1271.00
2984.30
122400
1042.00
1375.78
98928
831.70
1028.85
294912
825.50
1002.70
2268.00
26232
274.80
307.44
1275.96
43660
274.70
366.74
2345.50
7
KISTI Super Computing Center Mission

- Resource Center: keep securing and providing world-class supercomputing systems
- Technology Center: help Korean research communities become equipped with proper knowledge of supercomputing
- Test-bed Center: validate newly emerging concepts, ideas, tools and systems
- Value-adding Center: make the best use of what the center has, to create new values
Global Research Networks

■ GLORIAD
- GLObal RIng Network for Advanced Applications Development, with 10/40Gbps optical lambda networking
- Consortium of 11 nations: Korea, USA, China, Russia, Canada, the Netherlands and 5 Nordic countries
- Supporting advanced application developments such as HEP, Astronomy, Earth System, Bio-Medical, HDTV, etc.
- Funded by MEST (Ministry of Education, Science and Technology) of Korea

■ KREONET
- Korea Research Environment Open NETwork
- National science and research network of Korea, funded by the government since 1998
- 20Gbps backbone, 1~20Gbps access networks
MISSION

- Physics (2009 ~): ALICE, STAR, Belle, CDF
- Various fields (next ~): Bio-informatics, Earth Science, Astrophysics
Current Status

[Diagram: funded by the Korean government, GSDC provides computing and storage infrastructure plus technology development (applying Grid technology to legacy application support). Services: ALICE Tier-1 test-bed (receiving RAW data), ALICE Tier-2, and KIAF (KIsti Analysis Farm).]
Activities

ALICE Tier-1 Test-bed / ALICE Tier-2
- Completed set-up of the ALICE Tier-1 test-bed this year
- Will provide official service in a few years
- Site availability: 98% since Feb. 2009

Belle
- Providing computing resources for Belle MC production (Grid)
- Belle to provide their data to KISTI GSDC

CDF
- Providing computing resources under NAMCAF
- Supporting CDFSoft development

LIGO
- Set up a LIGO cluster test-bed

GBrain
- Planning to cooperate with the global brain research project (mainly from McGill Univ., Canada)
Members

No | Name | Role
1 | Dr. Haengjin Jang | Head of GSDC
2 | Dr. Hyungwoo Park | Project Management
3 | Mr. Jinseung Yu | Technical Staff (Network)
4 | Mr. Heejun Yoon | Technical Staff (DBA)
5 | Dr. Beokyun Kim | Technical Staff (Grid)
6 | Dr. Christophe Bonnaud | Technical Staff (Admin)
7 | Dr. Seokmyun Kwon | Technical Staff
8 | Dr. Seo-Young Noh | Technical Staff
9 | Dr. Jonghu Lee | Technical Staff (STAR contact)
10 | Dr. Seungyun Yu | Planning
11 | Mr. Kyungyun Kim | Technical Staff
12 | Mr. Seunghee Lee | Technical Staff
13 | Ms. Tajia Han | Technical Staff
Current Computing Resources: Cluster Servers

Cluster | Spec | Mem. | Node | Core | kSI2K
Ce01 | Dell, Intel Xeon E5405 2.0 GHz Quad, 2 CPU | 16GB | 6 | 48 | 48
Ce02 | HP, Intel Xeon E5420 2.5 GHz Quad, 2 CPU | 16GB | 16 | 128 | 219
- | IBM, Intel Xeon E5450 3.0 GHz Quad, 2 CPU | 16GB | 38 | 304 | 650
Ce-alice (this year) | IBM, Intel Xeon X5650 2.66 GHz 6-core, 2 CPU | 24GB | 36 | 432 | 1,2
Current Computing Resources: Storage

Year | Model | Disk/Tape | Physical Size | Usable Size
2008 | NetApp FAS2050 | Disk (SAN) | 48TB | 30TB
2009 | NetApp FAS6080 | Disk (SAN, NAS) | 334TB | 200TB
This year | Hitachi USP-V | Disk (SAN) | 960TB | 600TB
This year | - | Tape | 100TB | 100TB
Total | | | 1442TB | 930TB
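The totals row can be cross-checked by summing the per-system capacities; a minimal sketch, with model names and sizes copied from the table above:

```python
# (model, physical TB, usable TB), as listed in the storage table
systems = [
    ("NetApp FAS2050", 48, 30),
    ("NetApp FAS6080", 334, 200),
    ("Hitachi USP-V", 960, 600),
    ("Tape", 100, 100),
]

physical_total = sum(p for _, p, _ in systems)
usable_total = sum(u for _, _, u in systems)
print(physical_total, usable_total)  # 1442 930
```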
Some Pictures

[Photo: newly delivered storage system in June]
Network Architecture

[Diagram: GLORIAD uplink to a public 10GbE switch serving 1G/10G Grid servers, and a private 10GbE switch serving WN (worker node) servers and PFS servers running the IBRIX Fusion cluster file system. An 8Gb FC SAN switch and a 1G/10GbE switch connect the storage: NAS (controllers 1-2, usable 40TB, 2009) and SAN arrays (usable 160TB, 2009; usable 600TB via FC controllers #1 and #2, 2010). Link legend: 8Gb FC, 1Gb Ethernet, 10Gb Ethernet.]
2. Current Status of SACC

[Diagram: KISTI GSDC provides service and computing resources to STAR Asian users (China, India, etc.) and collaborates with them.]
Network Architecture (STAR project to BNL)

[Diagram: a 10G trunk (VLAN 123, VLAN 124) runs from the switch in the 1F new computer room #2 via FDF(3,4) and the DJ-F10 to the Daejeon OME6500, then over a 10G GLORIAD-KR link to the Seattle OME6500, with a 1G lightpath on VLAN 124. The KISTI 2F Mass Data Team LAB connects through a Foundry SuperX (G3/18) and a Foundry 2402CF (G0/26) to the DJ_7609 (Cisco 7609; ports G8/21, G2/11, G2/5); the Seattle side terminates on an OSR and an F-10. Addressing: VLAN 123 - 134.75.123.1/24, VLAN 124 - 134.75.238.1/26, VLAN 233 - 150.183.233.1/24.]
Daejeon – Seattle Performance Test (UDP)

Server (Seattle):

[root@seattle ~]# iperf -s -u -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size:  107 KByte (default)
------------------------------------------------------------
[ 3] local 134.75.205.21 port 5001 connected with 134.75.204.20 port 41490
[ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
[ 3]  0.0- 1.0 sec   233 MBytes  1.96 Gbits/sec   0.006 ms    0/166343 (0%)
[ 3]  1.0- 2.0 sec   233 MBytes  1.96 Gbits/sec   0.005 ms    0/166533 (0%)
[ 3]  2.0- 3.0 sec   234 MBytes  1.96 Gbits/sec   0.005 ms    0/166643 (0%)
[ 3]  3.0- 4.0 sec   234 MBytes  1.96 Gbits/sec   0.005 ms    0/166659 (0%)
[ 3]  4.0- 5.0 sec   234 MBytes  1.96 Gbits/sec   0.007 ms    0/166652 (0%)
[ 3]  5.0- 6.0 sec   234 MBytes  1.96 Gbits/sec   0.005 ms    0/166664 (0%)
[ 3]  6.0- 7.0 sec   233 MBytes  1.96 Gbits/sec   0.006 ms    0/166363 (0%)
[ 3]  7.0- 8.0 sec   234 MBytes  1.96 Gbits/sec   0.005 ms    0/166644 (0%)
[ 3]  8.0- 9.0 sec   234 MBytes  1.96 Gbits/sec   0.006 ms    0/166663 (0%)
[ 3]  0.0-10.0 sec  2.28 GBytes  1.96 Gbits/sec   0.008 ms   0/1665755 (0%)

Client (Daejeon):

[root@localhost ~]# iperf -c 134.75.205.21 -u -b 1.8g -i 1 -t 10
------------------------------------------------------------
Client connecting to 134.75.205.21, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 64.0 KByte (default)
------------------------------------------------------------
[ 3] local 134.75.204.20 port 41490 connected with 134.75.205.21 port 5001
[ID] Interval       Transfer     Bandwidth
[ 3]  0.0- 1.0 sec   233 MBytes  1.96 Gbits/sec
[ 3]  1.0- 2.0 sec   233 MBytes  1.96 Gbits/sec
[ 3]  2.0- 3.0 sec   234 MBytes  1.96 Gbits/sec
[ 3]  3.0- 4.0 sec   234 MBytes  1.96 Gbits/sec
[ 3]  4.0- 5.0 sec   234 MBytes  1.96 Gbits/sec
[ 3]  5.0- 6.0 sec   234 MBytes  1.96 Gbits/sec
[ 3]  6.0- 7.0 sec   233 MBytes  1.96 Gbits/sec
[ 3]  7.0- 8.0 sec   234 MBytes  1.96 Gbits/sec
[ 3]  8.0- 9.0 sec   234 MBytes  1.96 Gbits/sec
[ 3]  0.0-10.0 sec  2.28 GBytes  1.96 Gbits/sec
[ 3] Sent 1665755 datagrams
[ 3] Server Report:
[ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
[ 3]  0.0-10.0 sec  2.28 GBytes  1.96 Gbits/sec  0.008 ms   0/1665755 (0%)
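The per-second datagram counters in the server report are consistent with the measured bandwidth; a quick sanity check, with the sizes and rates taken from the output above:

```python
# iperf UDP mode sends fixed-size datagrams, so the per-second counter
# should be close to bandwidth / (datagram size in bits).
datagram_bytes = 1470        # "Sending 1470 byte datagrams"
bandwidth_bps = 1.96e9       # 1.96 Gbits/sec, as measured

datagrams_per_sec = bandwidth_bps / (datagram_bytes * 8)
print(round(datagrams_per_sec))  # 166667, matching the ~0/1666xx counters
```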
Daejeon – Seattle Performance Test (TCP)

[root@seattle ~]# iperf -c 134.75.204.20 -i 1 -w 30m -t 60
------------------------------------------------------------
Client connecting to 134.75.204.20, TCP port 5001
TCP window size: 60.0 MByte (WARNING: requested 30.0 MByte)
------------------------------------------------------------
[ 3] local 134.75.205.21 port 46831 connected with 134.75.204.20 port 5001
[ 3]  0.0- 1.0 sec  52.3 MBytes   439 Mbits/sec
[ 3]  1.0- 2.0 sec  0.00 Bytes   0.00 bits/sec
[ 3]  2.0- 3.0 sec  0.00 Bytes   0.00 bits/sec
[ 3]  3.0- 4.0 sec  0.00 Bytes   0.00 bits/sec
[ 3]  4.0- 5.0 sec  0.00 Bytes   0.00 bits/sec
[ 3]  5.0- 6.0 sec  20.9 MBytes   176 Mbits/sec
[ 3] 54.0-55.0 sec   267 MBytes  2.24 Gbits/sec
[ 3] 55.0-56.0 sec   238 MBytes  1.99 Gbits/sec
[ 3] 56.0-57.0 sec   270 MBytes  2.26 Gbits/sec
[ 3] 57.0-58.0 sec   242 MBytes  2.03 Gbits/sec
[ 3] 58.0-59.0 sec   239 MBytes  2.01 Gbits/sec
[ 3] 59.0-60.0 sec   270 MBytes  2.27 Gbits/sec
[ 3]  0.0-60.1 sec  8.84 GBytes  1.26 Gbits/sec
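The 30 MByte window requested with -w is roughly the bandwidth-delay product of this path. A minimal sketch, assuming a round-trip time of about 120 ms between Daejeon and Seattle (an assumed figure; the RTT is not shown in the slides):

```python
# To keep a long fat pipe full, the TCP window must cover bandwidth * RTT.
bandwidth_bps = 2e9    # ~2 Gbits/sec, the steady rate reached late in the test
rtt_s = 0.120          # ASSUMPTION: ~120 ms Daejeon-Seattle round trip

bdp_mbytes = bandwidth_bps * rtt_s / 8 / 2**20
print(round(bdp_mbytes, 1))  # 28.6, so a 30 MByte window is a reasonable request
```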
Current Resources

CPUs
- Intel Xeon E5450, 300 cores ≈ 960 kSI2K (1 core = 3.2 kSI2K)
- Shared queue

Storage
- Disk: 100TB
- Tape: 100TB

Network
- 10G GLORIAD: active

Software
- SL10i on Scientific Linux 5.5, 64-bit
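The 960 kSI2K figure follows directly from the per-core rating quoted above; a one-line check:

```python
cores = 300            # Intel Xeon E5450 cores in the shared queue
ksi2k_per_core = 3.2   # rating stated on the slide (1 core = 3.2 kSI2K)

total_ksi2k = cores * ksi2k_per_core
print(total_ksi2k)  # 960.0
```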
100TB Disks
Resource Plan

 | 2010 | 2011 | 2012
CPU (kSI2K) | 960 | 2,080 | 3,100
Storage: Disk (TB) | 100 | 200 | 300
Storage: Tape (TB) | 100 | 200 | 300
Network | ~10G (GLORIAD) | ~10G (GLORIAD) | ~10G (GLORIAD)
3. Future Plan of SACC

By the end of 2010
- Installation of additional CPUs (+430 cores) and tests
- MOU between KISTI GSDC and the STAR Collaboration
- Start of user service for Asian users
- Collaborations between KISTI GSDC and Asian institutes

Jan. 2011
- RUN11: online data transfer from BNL to KISTI GSDC

Summer 2011
- Installation of additional storage (100TB disk + 100TB tape)
Our Vision

Do You Have Any Questions?