Introduction to PRAGMA Grid
Cindy Zheng, for the PRAGMA Grid Team
Pacific Rim Application and Grid Middleware Assembly
http://www.pragma-grid.net
http://goc.pragma-grid.net
MURPA Seminar, 4/8/2009
Overview
• PRAGMA
• PRAGMA Grid
  – People
  – Hardware
  – Software
  – Operations
• Education Grid
• Grid Applications
• Grid Middleware
  – Application middleware
  – Infrastructure middleware
• Collaborations/Integrations
• Grid Interoperations
PRAGMA – A Practical Collaborative Framework
http://www.pragma-grid.net
• Strengthen existing and establish new collaborations
• Work with science teams to advance grid technologies and improve the underlying infrastructure
• In the Pacific Rim and globally
[Slide graphic: talk themes – heterogeneity, people, collaborations, integrations, lessons learned – and member sites such as IOIT-VN]
Working Groups: Organize Activities
[Slide diagram, Feb 2009: how the working groups (Resources, Biosciences, GEO, Telescience) organize activities around shared infrastructure]
• Biosciences examples: H5N1-related glycan conformation analysis using M*Grid and Glyco-M*Grid; Relaxed Complex Method molecular dynamics simulation data sets and database; virtual screening data sets and database
• HPC resources: clusters at NBCR, TeraGrid, MHPCC
• Shared infrastructure: PRAGMA portal (My WorkSphere), CSF4 server, NCIDS, and the Gfarm file system with a virtual directory tree (/gfs/$USER) holding databases (e.g. ZINC) and applications (NAMD, AutoDock, MrBayes)
PRAGMA Working Groups
• Bioscience
• Telescience
• Geo-science
• Resources and data
  – Grid middleware interoperability
  – Global grid usability and productivity
The PRAGMA Grid effort is led by the Resources and Data working group, but relies on collaborations and contributions among all working groups.
PRAGMA Grid
29 institutions in 15 countries/regions, 25 compute sites (+ 10 in preparation)
[Slide map of sites:] UZH (Switzerland); CNIC, GUCAS, LZU, JLU (China); UoHyd (India); CUHK, HKU (Hong Kong); ASGC, NCHC (Taiwan); UUtah, NCSA, BU, SDSC (USA); HCMUT, HUT, IOIT-HCM (Vietnam); SKU, UI (Indonesia); IHPC/NGO, NTU (Singapore); MU, APAC, QUT (Australia); UPRM (Puerto Rico); CICESE, UNAM (Mexico); CeNAT-ITCR (Costa Rica); BESTGrid (New Zealand); UChile (Chile); ASTI (Philippines); NECTEC, ThaiGrid (Thailand); MIMOS, USM (Malaysia); KISTI (Korea); AIST, OsakaU, UTsukuba (Japan)
PRAGMA Grid Members and Team
http://goc.pragma-grid.net/wiki/index.php/Site_status_and_tasks
• Sites
  – 22 sites from PRAGMA member institutions
  – 7 sites from non-PRAGMA member institutions
  – 25 sites contributed compute clusters
• Team members
  – 231 and growing
  – One management contact per site
  – 1–3 technical support contacts per site
  – 1–4 application drivers per application
  – 1–5 members per middleware development team
PRAGMA Grid Compute Resources
http://goc.pragma-grid.net/pragma-doc/computegrid.html
Characteristics of PRAGMA Grid
• Grass-roots
  – Voluntary contributions
  – Open (PRAGMA member or not, Pacific Rim or not)
  – Long-term collaborative working experiment
• Heterogeneous
  – Funding
  – No uniform infrastructure management
  – Variety of sciences and applications
  – Site policies, system and network environments
• Realistically tough
  – Good for development, collaborations, integrations and testing
PRAGMA Grid Software Layers
http://goc.pragma-grid.net/pragma-doc/userguide/join.html
• Applications: FMO, Savannah, MM5, CSTFT, Siesta, AMBER, phylogenetics, …
• Application middleware: Ninf-G, Nimrod/G, MPICH-GX, …
• Infrastructure middleware: Gfarm, SCMSWeb, CSF, MOGAS, …; Globus (required)
• Local job scheduler (one required): SGE, PBS, LSF, SQMS, …
PRAGMA Grid Operations
Grid Operation
http://goc.pragma-grid.net, http://wiki.pragma-grid.net
• Develop and maintain mutually beneficial and happy relationships among all people involved
  – Geographies, time zones, languages
  – Funding, chains of command, priorities
  – Mutual benefit, consensus, active leadership
  – Coordinator, site contacts
• Collaboration tools
  – Mailing lists, VTCs, Skype, semi-annual workshops
  – Grid Operation Center (GOC)
  – Wiki, where all site, application and middleware teams collaborate
• Heterogeneity
  – Tolerate it, overcome it, and take advantage of it
  – Software inventory instead of a software stack
  – Many sub-grids for applications
  – Recommendations instead of requirements
  – Software licenses (grid-wide AMBER license)
Create New Ways To Operate
http://goc.pragma-grid.net, http://wiki.pragma-grid.net
• Lacks precedent
• Everyone contributes ideas and suggestions
• Evolving and improving over time
• Everyone documents and updates (wiki)
  – Create new procedures
    • New site setup to join PRAGMA Grid
      http://goc.pragma-grid.net/pragma-doc/userguide/join.html
    • New user/application to run in PRAGMA Grid
      http://goc.pragma-grid.net/pragma-doc/userguide/pragma_user_guide.html
  – Tabulate information
    • Application pages, site pages, resources tables, status pages
  – Publish instructions
    • Software deployment procedures, tools
Education Grid
PRIME – Pacific Rim Undergraduate Experiences, providing UCSD undergraduate students international, interdisciplinary research internships and cultural experiences, in collaboration with PRAGMA since 2004. http://prime.ucsd.edu
PRIUS – Pacific Rim International UniverSity, providing Osaka University students expert lectures and internships abroad, in collaboration with PRAGMA since 2005. http://prius.ics.es.osaka-u.ac.jp/en/index.html
MURPA – Monash Undergraduate Research Projects Abroad, providing Monash undergraduate students an 8-week summer international research opportunity at the University of California, San Diego (UCSD). http://www.infotech.monash.edu.au/about/events/2008/murpa.html
MURPA
Sample middleware projects:
– MOGAS
– Grid security analysis
– Virtualization
Sample applications run in PRAGMA Grid:
– Climate modeling
– Multi-walled carbon nanotube and polyethylene oxide composite computer visualization model
– Metabolic regulation of ionic currents and pumps in a rabbit ventricular myocyte model
– Improving binding energy using quantum mechanics
– Cardiac mechanics modeling
– H5N1 simulation
– Shp2 protein tyrosine phosphatase inhibitor simulation for cancer research
PRIME Host and Mentor Sites
Research Apprenticeship; Cultural Experience
[Slide map of sites:] U Zurich (Switzerland); Doshisha U, NIICT, Osaka U (Japan); CNIC (China); UCSD, UWisc (USA); AST, NCHC, NCREE, NMMBA (Taiwan); UoHyd (India); USM (Malaysia); Monash U (Australia); U Auckland, U Waikato (New Zealand)
– 2004–2007: five host sites: Osaka, NCHC, Monash, CNIC, NCREE
– New in 2008: USM, U Auckland, U Waikato; new in 2009: U Hyderabad, National Institute for Information and Communication Technology, Doshisha University, Academia Sinica, NMMBA
PRIUS and MURPA: Based on PRIME
New Models for Building Research Capacity and Cultural Awareness
[Slide diagram: timeline of PRIME-style programs – USM (Penang, Malaysia; 12 Oct 05); U Hyderabad (Hyderabad, India); U Auckland and U Waikato (New Zealand; 31 Jul 08); PRIUS at Osaka University; MURPA at Monash U]
Application Driven
Applications
http://goc.pragma-grid.net/wiki/index.php/Applications
• Climate simulation
  – Savannah/Nimrod (MU, Australia)
• Quantum mechanics
  – TDDFT, QM/MD, FMO/Ninf-G (AIST, Japan)
• Structural biology
  – Phaser/Nimrod (MU, Australia)
• Drug analysis
  – EMSAM (PRIME)
• Biomedical research
  – Cardiac output (PRIME)
  – Cardiac mechanics (PRIME)
  – Ventricular myocyte model (PRIME)
• Genomics and metagenomics
  – Avian Flu Grid/CSF (SDSC/CNIC/JLU/UTsukuba/…)
• Computational fluid dynamics
  – e-AIRS (KISTI, Korea)
• Environmental science
  – CSTFT/Ninf-G (UPRM, Puerto Rico)
  – Brazilian regional atmospheric modeling (UChile, Chile)
• Organic chemistry
  – Enediynes (PRIME)
  – Virtual screening of SHP-1 and SHP-2 (PRIME)
  – Virtual screening of SSH-2 (PRIME)
• Computer science
  – Image analysis (UMelb, Australia)
  – Image rendering (AIST, Japan)
  – Grid debugging (Monash U, Australia)
• Nanotechnology
  – Nanoengineering simulation (PRIME)
Lessons Learned From Running Applications
• PRAGMA Grid resources enable large-scale computation
• Its heterogeneous environment is great for
  – Testing
  – Collaborating
  – Integrating
  – Sharing
• Not easy – more needs to be done
  – Middleware needs improvements
    • Working in heterogeneous environments
    • Fault tolerance
  – Need user-friendly portals and services
  – Automate and integrate
    • Information collection (grid monitoring, workflow)
    • Decisions and executions (scheduling)
    • Domain-specific, easy user interfaces (portals, CE tools)
  – …
Grid Middleware
Application Middleware
Ninf-G
http://ninf.apgrid.org
• Developed by AIST, Japan
• Based on the GridRPC model
• Supports parallel computing
• Integrated into NMI Release 8 (first non-US software in NMI)
• Integrated with Rocks
• OGF standard
• 4 applications ran in PRAGMA Grid; 2 ran across multiple grids
  – TDDFT (quantum chemistry)
  – QM/MD (quantum mechanics)
  – FMO (molecular dynamics)
  – CSTFT (sensor data analysis)
• Achieved long runs (50 days)
• Improved fault tolerance
• Simplified deployment procedures
• Sped up development cycles
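Since the slide names only the GridRPC model, a minimal client sketch in C against the OGF GridRPC API (which Ninf-G implements) may help; the configuration file, server name and remote function name below are hypothetical placeholders, not taken from the talk.

```c
/* Minimal GridRPC client sketch (OGF GridRPC API, as implemented by Ninf-G).
 * "client.conf", the server name, and "sample/compute" are hypothetical. */
#include <stdio.h>
#include "grpc.h"

int main(void)
{
    grpc_function_handle_t handle;
    double input = 42.0, result = 0.0;

    /* Read the client configuration (server list, protocol options). */
    if (grpc_initialize("client.conf") != GRPC_NO_ERROR) {
        fprintf(stderr, "grpc_initialize failed\n");
        return 1;
    }

    /* Bind a handle to a function registered on a remote grid server. */
    grpc_function_handle_init(&handle, "cluster.example.org", "sample/compute");

    /* Synchronous remote call; the argument list follows the remote IDL. */
    if (grpc_call(&handle, input, &result) == GRPC_NO_ERROR)
        printf("result = %f\n", result);

    grpc_function_handle_destruct(&handle);
    grpc_finalize();
    return 0;
}
```

A real Ninf-G deployment registers the remote function on the server side via its IDL; the client only needs the handle-init/call/destruct sequence shown here.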
Nimrod/G
http://www.csse.monash.edu.au/~davida/nimrod
[Slide graphic: a plan file describing the parameters expands into many jobs (Job 1 … Job 18) distributed across grid resources]
• Developed by Monash University (MU), Australia
• Supports large-scale parameter sweeps on grid infrastructure
• Easy user interface – Nimrod portals
  – MU, Australia
  – UZurich, Switzerland
  – UCSD, USA
• 4 applications ran in PRAGMA Grid and 1 runs in multiple grids
  – Savannah climate simulation (MU, Australia)
  – GAMESS/APBS (UZurich, Switzerland)
  – Siesta (UZurich, Switzerland)
  – Structural biology (MU, Australia)
• CeNAT-ITCR, Costa Rica and UNAM, Mexico are starting applications using Nimrod in PRAGMA Grid
• Developed an interface to Unicore
• Achieved long runs (90 different scenarios of 6 weeks each)
• Improved fault tolerance (innovative time_step)
• Enhancements in data and storage handling
MPICH-GX
http://www.moredream.org/mpich.htm
• MPICH-GX
  – Korea Institute of Science and Technology Information (KISTI), Korea
  – Based on MPICH-G2
  – Grid-enabled MPI, supporting
    • Private IP
    • Fault tolerance
• MM5 and WRF
  – CICESE, Mexico
  – Medium-scale atmospheric simulation models
• Experiment
  – KGrid
  – WRF works well with MPICH-GX
  – MM5 experienced scaling problems with MPICH-GX when using more than 24 processors in a cluster
  – Functionality of the private-IP support is usable
  – Performance of the private-IP support is reasonable
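Because MPICH-GX is a grid-enabled MPI built on MPICH-G2, application code stays ordinary MPI; the library and the job-startup machinery supply the cross-site wiring. A minimal sketch (not from the MM5/WRF experiment) of the kind of program that runs unchanged:

```c
/* Plain MPI program: under a grid-enabled MPI such as MPICH-GX the same
 * source runs unchanged, with cross-site transport (private-IP relay,
 * fault handling) supplied by the library, not the application. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    double local, global;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes one value; ranks may live on different
     * clusters, possibly behind NAT when private-IP support is used. */
    local = (double)rank;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %f\n", size, global);

    MPI_Finalize();
    return 0;
}
```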
MM5-WRF/MPICH-GX Experiment
[Slide diagram: Hurricane Marty and Santana Winds simulations run with MPICH-GX (private-IP and fault-tolerance support) across KGrid, SDSC (USA) and the eolo and pluto clusters at CICESE Ensenada (México), with output gathered centrally]
Infrastructure Middleware
SCMSWeb
http://www.opensce.org/components/SCMSWeb
• Developed by Kasetsart University and ThaiGrid
• Web-based real-time grid monitoring system
  – System usage, job/queue status
  – Probes – Globus authentication, job submission, GridFTP, Gfarm access, …
  – Network bandwidth measurements with Iperf
  – PRAGMA Grid geo map
• Supports Linux and Solaris; good meta-view, easy user interface, excellent user support
• Developed and tested in PRAGMA Grid
  – Deployed at 27 sites; improved scalability and performance
  – Sites helped with porting to ia64 and Solaris
  – Demands push fast expansion of functionality
• More regional/national grids have learned about and adopted it
SCMSWeb New Features
• Automatic email alerts deployed and operational
  – Based on probe failure status
  – 3 times a week
  – Emailed to site admins and Cindy Zheng
• Bi-directional bandwidth measurements using Iperf
  – Deployed on 11 systems
  – Need to investigate problems some sites show in one direction
  – Need to deploy to all systems
• Software catalog
  – Implemented for 7 software packages so far
    • Amber, APBS, AutoDock, NAMD
    • Ninf-G
    • Intel C, Intel Fortran
  – Deployed on 11 systems
  – Need to deploy to all systems
  – Add more software as needed
MOGAS
http://ntu-cg.ntu.edu.sg/pragma/index.jsp
• Multi-Organization Grid Accounting System (MOGAS)
  – Led by Nanyang Technological University, funded by the National Grid Office in Singapore
  – Built on the Globus core (GridFTP, GRAM, GSI)
  – Supports GT2, 3, 4; SGE, PBS
  – Job/user/cluster/OU/grid-level usage; job logs; metering and charging tools
• Developed and tested in PRAGMA Grid
  – Deployed at 14 sites: different GT versions, job schedulers, GRAM scripts, security policies
  – Feedback, improvements, automated deployment procedure
• MOGAS-2 with improved database performance
• Collaborations and integrations with application and other middleware teams push the development of an easy database interface
Gfarm Grid File System
http://datafarm.apgrid.org
• AIST and UTsukuba; open-source development at SourceForge.net
• Grid file system that federates the storage of each site
• Meta-server keeps track of file copies and locations
• Can be mounted from cluster nodes and clients (GfarmFS-FUSE)
• Parallel I/O, near-site copies for scalable performance
• Replication for fault tolerance
• Uses GSI authentication
• Easy application deployment and file sharing
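Because GfarmFS-FUSE exposes the federated file system through a normal mount point, applications use plain POSIX I/O with no Gfarm-specific API. A minimal sketch, assuming a hypothetical mount at /gfs (the talk's virtual directory tree is /gfs/$USER); the file path is an invented example:

```c
/* Sketch: once GfarmFS-FUSE is mounted (assumed here at /gfs), grid-wide
 * files are read with ordinary POSIX calls. The path is hypothetical. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char line[256];
    FILE *fp = fopen("/gfs/alice/databases/targets.txt", "r");
    if (fp == NULL) {
        perror("fopen");          /* e.g. mount missing or GSI auth failed */
        return EXIT_FAILURE;
    }
    while (fgets(line, sizeof line, fp) != NULL)
        fputs(line, stdout);      /* data may come from a near-site replica */
    fclose(fp);
    return EXIT_SUCCESS;
}
```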
Gfarm Version 2 Update
• Version 2.0.0, released 28 Nov 2007
  – First release
• Version 2.1.0, released 27 May 2008
  – GSI support
  – File replica management
  – Group management, disk usage reporting
• Version 2.1.1, released 27 Sep 2008
  – Hardlink support
  – On-demand replication
• Version 2.2.0, to be released soon
  – Symbolic link support
  – Support for hundreds of clients
  – Directory listing speedup
Develop and Test GfarmFS-FUSE in PRAGMA Grid
http://goc.pragma-grid.net/wiki/index.php/Resources_and_Data
Testing with applications
• iGAP and Avian Flu Grid (UTsukuba, Japan; UCSD, USA; JLU, China)
  – Huge number of small files
  – High metadata-access overhead
  – Metadata cache server
  – Dramatic improvements (44 sec -> 3.54 sec)
• AMBER (USM, Malaysia; UTsukuba, Japan)
  – Remote Gfarm meta-server
  – Meta-server is the bottleneck
  – File-sharing permissions, security
  – Version 2.0 improved performance
  – Used as shared storage only
Version 1.x works well in a local or regional grid
• GeoGrid, Japan
• CLGrid, Chile
Integration
• SCMSWeb (ThaiGrid, Thailand)
• Rocks (SDSC, USA; UZH, Switzerland)
CSF4
http://goc.pragma-grid.net/wiki/index.php/CSF_server_and_portal
• Community Scheduler Framework, v4 – a meta-scheduler
  – Developed by Jilin University, China
  – Grid services hosted in GT4; WSRF-compliant execution component in Globus Toolkit 4
  – Open source, http://sourceforge.net/projects/gcsf
  – Supports GT2 & 4; LSF, PBS, SGE, Condor
  – Easy user interface – portal
• Testing and collaborating in PRAGMA
  – Testing with the application iGAP (UCSD, AIST, KISTI, …)
  – Collaborate and integrate with Gfarm on data staging (AIST, Japan)
  – Set up a CSF server and portal (SDSC, USA)
  – Collaborate/integrate with SCMSWeb for resource information (ThaiGrid, Thailand)
  – Leverage resources and a global grid testing environment
DataTurbine
• DataTurbine is an open-source, Java-based network ring buffer for all sorts of data. You can use memory + disk for the ring, and it runs on almost any JVM.
• Sources can have multiple channels with varied types – numeric (e.g. sensors), video, audio, text, binary blobs.
• Interfaces
  – Data acquisition (DAQ)
    • National Instruments (NI-DAQ, DAQmx, Compact RIO) via Java proxy
    • Campbell Scientific: file-based, via LoggerNet, up to 1 Hz tested
    • Dataq Instruments (serial connect via C & DaqToRbnb)
    • PUCK (Programmable Underwater Connector with Knowledge)
    • Seabird/Seacat
    • Vaisala weather station
  – Video and still cameras
    • Anything with motion JPEG via URL: Axis, Panasonic, etc.
    • Still images via WebDAV, HttpMonitor
  – Accelerometers
    • ADXL202 and Apple laptop
• Access
  – Primary interface is the Java-based API
  – Other avenues
    • If you're on Windows, there's ActiveX
    • TCP/UDP proxy (some code required)
    • WebDAV/HTTP (not as fast, but cross-platform and very flexible)
    • Java-based proxy on TCP/IP – we use this a lot
    • MATLAB API provided, including performance metrics and test suite
• Geotagging and Google Earth … works now!
  – Can also use plugins for tightly-coupled computations such as image processing
• Scalable
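To make the "network ring buffer" idea concrete, here is a minimal single-process ring buffer in C. This is a concept sketch only: DataTurbine itself is Java, networked, multi-channel, and can back the ring with disk, none of which is shown.

```c
/* Concept sketch of a fixed-size ring buffer: keeps the newest samples,
 * overwrites the oldest -- the core idea behind DataTurbine's data ring. */
#include <stdio.h>

#define RING_CAPACITY 8

typedef struct {
    double data[RING_CAPACITY];
    int head;     /* index of the next write slot */
    int count;    /* number of valid samples, capped at capacity */
} ring_buffer;

static void ring_put(ring_buffer *rb, double sample)
{
    rb->data[rb->head] = sample;             /* overwrite oldest when full */
    rb->head = (rb->head + 1) % RING_CAPACITY;
    if (rb->count < RING_CAPACITY)
        rb->count++;
}

static void ring_dump(const ring_buffer *rb)
{
    /* The oldest sample sits count slots behind head once the ring wraps. */
    int start = (rb->head - rb->count + RING_CAPACITY) % RING_CAPACITY;
    for (int i = 0; i < rb->count; i++)
        printf("%g ", rb->data[(start + i) % RING_CAPACITY]);
    printf("\n");
}

int main(void)
{
    ring_buffer rb = { .head = 0, .count = 0 };
    for (int i = 0; i < 12; i++)             /* 12 samples, capacity 8 */
        ring_put(&rb, (double)i);
    ring_dump(&rb);                          /* prints: 4 5 6 7 8 9 10 11 */
    return 0;
}
```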
Example – NCHC Kenting (Taiwan)
• Kenting National Park and Yuan-Yang Lake; pictures from Fang-Pang Lin and Ebbe Strandell
Rocks-based Virtual Machine
• Developed by the Rocks group at SDSC
• Based on Xen
• Decouples OS/application from hardware
  – Increases utilization
  – Increases security
• Key advantages
  – Solves application-requirement conflicts
  – Allows users system-level access
  – Grow and shrink virtual clusters
• Requirements
  – A lot of memory
• Will test in PRAGMA Grid
  – Can create virtual machines
  – Can create virtual clusters
Science, Technologies, Collaborations, Integrations
Collaborations With Science and Technology Teams
• Grid security
  – NAREGI (Japan), APGrid, GAMA (SDSC, USA)
• Grid infrastructure
  – Monitoring – SCMSWeb (ThaiGrid, Thailand)
  – Accounting – MOGAS (NTU, Singapore)
  – Meta-scheduling – Community Scheduler Framework (JLU, China)
  – Cyber-environment – CSE-Online (UUtah, USA)
  – Rocks, middleware and VM (SDSC, USA; …)
    • Ninf-G, SCE, Gfarm, Bio, K*Rocks, Condor, …
    • Virtual machine
• Science, data grid, sensor, network
  – Biosciences – Avian Flu, portal, …
  – Gfarm-FUSE (AIST, Japan)
  – GEON data network
  – GLEON sensor network
  – OptIPuter
    • High-performance networked TDW
  – Telescience
Grid Security
• Trust in PRAGMA Grid, http://goc.pragma-grid.net/pragma-doc/certificates.html
  – IGTF distribution
  – Non-IGTF distribution (trusts all PRAGMA Grid sites)
• APGrid PMA
  – One of three IGTF founding PMAs
  – Many PRAGMA Grid sites are members
  – PRAGMA CA
• NAREGI-CA
  – AIST, UCSD, UChile, UoHyd, UPRM
• PRAGMA CA (experimental and production)
  – Based on NAREGI-CA
  – Catch-all CA for PRAGMA
  – Production CA is IGTF-compliant
• MyProxy and VOMS services
  – APAC
• Work with GAMA
  – Integrate with NAREGI-CA (NAREGI, UCSD)
  – Integration with VOMS (AIST)
  – Add a servlet for account management (UChile)
Lessons learned
• Leverage resources, setups and expertise
• Balance and consider both security and ease of access and use
• Get more user communities involved with grid security
PRAGMA-CA and VOMS
• PRAGMA-UCSD CA (https://goc.pragma-grid.net/ca)
  – Built according to the most current APGrid recommendations and guidelines
  – NAREGI-CA developed a new version
  – Set up at SDSC
  – Accredited by APGrid PMA in April 2008
  – Included in the IGTF distribution in July 2008
• VOMS (http://goc.pragma-grid.net/wiki/index.php/VOMRS)
  – Set up a VOMRS server at SDSC
  – Focused on group mapping to local accounts
  – VOMS client side
    • GUMS – VO group to Unix account mapping
    • PRIMA – interfaces Globus to GUMS
    • Auth-tools – enable individual user account mapping
    • VOMS client – lets a user choose a mapping
    • GSISSH – enables user access with a Globus certificate
SCMSWeb Collaborations and Integrations
• Grid Interoperation Now (GIN, OGF)
  http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps
  – Worked with PRAGMA Grid, TeraGrid, OSG, NorduGrid and EGEE on GIN testbed monitoring, http://goc.pragma-grid.net/cgi-bin/scmsweb/probe.cgi; added probes to handle various grid service configurations/tests
  – Worked with CERN and implemented an XML -> LDIF translator for the GIN geo map, http://maps.google.com/maps?q=http://lfield.home.cern.ch/lfield/gin.kml
  – Worked with many grid-monitoring software developers on a common schema for cross-grid monitoring, http://wiki.pragma-grid.net/index.php?title=GIN_%28Grid_Inter-operation_Now%29_Monitoring
• Software integration and interoperations
  – Rocks – SCE roll
  – MOGAS – grid accounting
  – Condor, CSF, … – provide resource info
• In progress and planned
  – Data federator for grid applications
    • Provide site software information
    • Standardize data extraction and formats
    • Improve data storage with an RDBMS
  – Interoperate with other monitoring software
    • Ganglia support
SCMSWeb-Condor Interface
http://goc.pragma-grid.net/wiki/index.php/Condor-PRAGMA_Interoperation
• SCMSWeb interfaces with Condor-G
  – SCMSWeb provides system info; Condor-G dispatches jobs accordingly
• Collaboration between PRAGMA and Condor
  – ThaiGrid
    • Project lead
    • Interface development work
  – SDSC
    • Coordination
    • Resource support
  – KISTI
    • Application testing
  – Condor
    • Interface development support
• Current status
  – Running on rocks-153.sdsc.edu
  – Successfully tested with an application
  – Working on improving performance and fault tolerance
• Submitted a paper to an IEEE conference
Collaborations with OptIPuter, GLIF and CAMERA
http://www.optiputer.net
• OptIPuter (Optical networking, Internet Protocol, computer storage, processing and visualization technologies)
  – Infrastructure that tightly couples computational resources over parallel optical networks using the IP communication mechanism
  – Its central architectural element is optical networking, not computers
  – Enables scientists who are generating terabytes and petabytes of data to interactively visualize, analyze, and correlate their data from multiple storage sites connected to optical networks
• Rocks/SAGE VIS-roll (SDSC)
• Networked Tile Display Walls (TDW)
  – Low cost
  – For research collaboration
  – For remote education and conferencing
  – Deployed at many PRAGMA Grid sites
Build a Rocks/SAGE OptIPortal
[Slide photos: OptIPortal installations at AIST, KISTI, UZurich, CNIC, NCHC, Osaka U, Calit2@UCI, Calit2@UCSD, NCSA & TRECC, NCMIR@UCSD, UIC, SIO@UCSD]
Global Lambda Integrated Facility (GLIF)
http://www.glif.is
[Slide visualization courtesy of Bob Patterson, NCSA]
– GLIF maps to many PRAGMA Grid sites
– PRAGMA Grid uses GLIF to solve grid applications' bandwidth problems
Integrate CAMERA and PRAGMA Grid
• Microbial metagenomics user base
• Over 1,300 registered users from 48 countries
Grid Interoperation
Grid Interoperation Now (GIN)
http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps
• Open Grid Forum and GIN
• GIN-OPS (led by PRAGMA)
• GIN testbed (February 2006 – ongoing)
  – One or more clusters from each grid
  – Still part of each production grid
  – Running real science applications
  – Explore interoperation issues
  – Develop solutions
  – Provide insight to standardization efforts
• Application driven
  – TDDFT/Ninf-G (PRAGMA – AIST, Japan)
    • PRAGMA, TeraGrid, OSG, NorduGrid, EGEE
  – Savannah fire simulation (PRAGMA – Monash University, Australia)
    • PRAGMA, TeraGrid, OSG
Grid Interoperation Now (GIN)
http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps
• Software interface and integration
  – Ninf-G (AIST/PRAGMA) – NorduGrid
  – Nimrod/G (MU-PRIME/PRAGMA) – Unicore
  – SCMSWeb (ThaiGrid/PRAGMA) – Condor (UWisc/OSG)
  – SCMSWeb (ThaiGrid/PRAGMA) – BDII (CERN)
  – VDT (OSG) and Rocks (SDSC/PRAGMA) integration
• Multi-grid monitoring
  – Led by ThaiGrid/PRAGMA
  – SCMSWeb probe matrix (PRAGMA – ThaiGrid, Thailand)
  – Common schema (http://goc.pragma-grid.net/wiki/index.php?title=GIN_%28Grid_Inter-operation_Now%29_Monitoring)
    • PRAGMA – SCMSWeb, MOGAS
    • TeraGrid – Globus GT4.0.1, Ganglia, NAGIOS
    • EGEE – MonALISA
    • NorduGrid/ARC – NorduGrid/MDS2, NorduGrid Grid Monitor
Peer-Grid Interoperation Experiments
http://goc.pragma-grid.net/wiki/index.php/Main_Page#Grid_Inter-operations
• Different from the GIN testbed
  – More resources and support from each grid
  – Either uni-directional or bi-directional application runs
  – Long runs to achieve scientific results
• OSG <-> PRAGMA (January 2007 – ongoing)
  – How
    • Each grid identifies management, application drivers, resource supporters
    • All participants document application requirements, meetings, issues, solutions, status, results, … at wiki.pragma-grid.net
  – Resources
    • OSG – FermilabGrid; will add UWisc
    • PRAGMA Grid – any sites the application drivers choose to use
  – Applications
    • OSG – GISolve, spatial interpolation (UIowa, USA)
    • PRAGMA
      – FMO/Ninf-G, quantum chemistry (AIST, Japan) – completed
      – Structural biology (MU, Australia) – starting soon
OSG-PRAGMA Grid Interoperation Experiments
http://goc.pragma-grid.net/wiki/index.php/Main_Page#Grid_Inter-operations
• More resources and support from each grid, but no special arrangements
• Application long run
  – GridFMO/Ninf-G – large-scale quantum chemistry (Tsutomu Ikegami, AIST, Japan)
  – 240 CPUs from OSG and PRAGMA Grid; 10 days x 7 calculations
  – Fault tolerance enabled the long run
  – Meaningful and usable scientific results
Grid Interoperation Experiments
http://goc.pragma-grid.net/wiki/index.php/Run_Phaser_on_PRAGMA_grid_and_OSG
• Phaser/Nimrod-G – structural biology
• Large run, July–September
• 71,000 jobs, each with
  – Wall-clock time: a few hours to a few days
  – Memory: 0.2–2 GB
• Total CPU time: 511,000 hours
• Total CPUs: 1,200, across 20 discrete resources
  – Australian enterpriseGrid, Monash, APAC Grid
  – PRAGMA Grid
  – OSG (FermilabGrid, RENCI <VO>)
• Submitted a paper to an IEEE conference
Lessons Learned From Grid Interoperation
http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps
• Grid interoperation makes large-scale calculations possible
• Differences among grids provide learning, collaboration and integration opportunities
  – IGTF, VOMS (GIN)
  – Common Software Area (TeraGrid)
  – Ninf-G – NorduGrid
  – Nimrod/G – Unicore
  – SCMSWeb – Condor
  – SCMSWeb – BDII
  – SCMSWeb probe matrix for GIN testbed monitoring
  – Common schema among many grid-monitoring systems
  – VDT (OSG) and Rocks (SDSC/PRAGMA) integration
• Differences in grid environments are a source of difficulties for users and applications
  – Different user-access setup procedures – take extra effort
  – Different job submission protocols
    • GRAM, Sandbox, GridFTP, modified GRAM, …
    • One-to-one interfaces – are they scalable? Possible standards?
• Middleware fault tolerance and flexible resource management are important
  – Cope with unfamiliar fault conditions, lack of parallel computation support, …
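The last lesson, middleware fault tolerance, comes down to a simple client-side pattern: treat every remote call as fallible and fail over to an alternate resource. Below is a generic C sketch of that pattern; submit_job and the site/job names are hypothetical stand-ins, not any PRAGMA middleware's actual API.

```c
/* Generic retry-with-failover sketch of the fault-tolerance pattern.
 * submit_job() is a hypothetical stand-in for a real middleware call
 * (e.g. a GridRPC call or a GRAM submission). */
#include <stdio.h>

/* Simulated submission: returns 0 on success, nonzero on a fault.
 * Here only the third site "accepts" the job. */
static int submit_job(const char *site, const char *job)
{
    static int calls = 0;
    printf("submitting %s to %s\n", job, site);
    return (++calls < 3) ? -1 : 0;
}

int main(void)
{
    /* Candidate resources; a real scheduler would pull these from
     * monitoring data (e.g. SCMSWeb) instead of a fixed list. */
    const char *sites[] = { "siteA", "siteB", "siteC" };
    const int nsites = sizeof sites / sizeof sites[0];

    for (int i = 0; i < nsites; i++) {
        if (submit_job(sites[i], "example-job") == 0) {
            printf("job accepted by %s\n", sites[i]);
            return 0;
        }
        fprintf(stderr, "fault at %s, failing over\n", sites[i]);
    }
    fprintf(stderr, "all sites failed\n");
    return 1;
}
```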
Collaborate in Publishing Research Results
Some published papers in 2007:
• Amaro RE, Minh DDL, Cheng LS, Lindstrom WM Jr, Olson AJ, Lin JH, Li WW, and McCammon JA. "Remarkable Loop Flexibility in Avian Influenza N1 and Its Implications for Antiviral Drug Design." J. Am. Chem. Soc. 2007, 129, 7764-7765. (PRIME)
• Choi Y, Jung S, Kim D, Lee J, Jeong K, Lim SB, Heo D, Hwang S, and Byeon OH. "Glyco-MGrid: A Collaborative Molecular Simulation Grid for e-Glycomics," in 3rd IEEE International Conference on e-Science and Grid Computing, Bangalore, India, 2007. Accepted.
• Ding Z, Wei W, Luo Y, Ma D, Arzberger PW, and Li WW. "Customized Plug-in Modules in Metascheduler CSF4 for Life Sciences Applications," New Generation Computing, in press, 2007.
• Ding Z, Wei S, Ma D, and Li WW. "VJM – A Deadlock Free Resource Co-allocation Model for Cross Domain Parallel Jobs," in HPC Asia 2007, Seoul, Korea, 2007, in press.
• Görgen K, Lynch H, Abramson D, Beringer J, and Uotila P. "Savanna fires increase monsoon rainfall as simulated using a distributed computing environment," to appear, Geophysical Research Letters.
• Ichikawa K, Date S, Krishnan S, Li W, Nakata K, Yonezawa Y, Nakamura H, and Shimojo S. "Opal OP: An extensible Grid-enabling wrapping approach for legacy applications," GCA2007 – Proceedings of the 3rd Workshop on Grid Computing & Applications, pp. 117-127, Singapore, June 2007. (PRIUS)
• Ichikawa K, Date S, and Shimojo S. "A Framework for Meta-Scheduling WSRF-Based Services," Proceedings of the 2007 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 2007), Victoria, Canada, pp. 481-484, Aug. 2007. (PRIUS)
• Kuwabara S, Ichikawa K, Date S, and Shimojo S. "A Built-in Application Control Module for SAGE," Proceedings of the 2007 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 2007), Victoria, Canada, pp. 117-120, Aug. 2007. (PRIUS)
• Takeda S, Date S, Zhang J, Lee BU, and Shimojo S. "Security Monitoring Extension for MOGAS," GCA2007 – Proceedings of the 3rd Workshop on Grid Computing & Applications, pp. 128-137, Singapore, June 2007. (PRIUS)
• Tilak S, Hubbard P, Miller M, and Fountain T. "The Ring Buffer Network Bus (RBNB) DataTurbine Streaming Data Middleware for Environmental Observing Systems," to appear in the Proceedings of e-Science 2007.
• Zheng C, Katz M, Papadopoulos P, Abramson D, Ayyub S, Enticott C, Garic S, Goscinski W, Arzberger P, Lee BS, Phatanapherom S, Sriprayoonsakul S, Uthayopas P, Tanaka Y, Tanimura Y, and Tatebe O. "Lessons Learned Through Driving Science Applications in the PRAGMA Grid," Int. J. Web and Grid Services, Vol. 3, No. 3, pp. 287-312, 2007. …
Summary
• PRAGMA Grid
  – A shared vision lowers resistance to using others' software and testing on others' resources
  – Formed new development collaborations
  – Its size and heterogeneity let us explore issues any functional grid must resolve
    • Management, resources and software coordination
    • Identity and fault management
    • Scalability and performance
    • Feedback between applications and middleware helps improve software and promotes software integration
• Heterogeneous global grid
  – Is realistic and challenging
  – Can be good for middleware development and testing
  – Can be useful for real science
• Impact
  – Software dissemination (Rocks, Ninf-G, Nimrod, SCMSWeb, NAREGI-CA, …)
  – Helps new national/regional grids (Chile, Vietnam, Hong Kong, …)
• The key is people and collaboration
A Grass Roots Effort
"One of the most important lessons of the Internet is that it grows most successfully where grass roots initiatives are encouraged and enabled. The Internet has historically grown from the bottom up, and this aspect continues to fuel its continued growth in the academic and commercial sectors."
– Vint Cerf, UN Economic and Social Council, 2000
• PRAGMA is supported by the National Science Foundation (Grant No. INT-0216895, INT-0314015, OCI-0627026) and by member institutions
• PRIME is supported by the National Science Foundation under NSF INT 04007508
• PRAGMA Grid is the result of contributions and support from all PRAGMA Grid team members
Thank You
http://www.pragma-grid.net
http://goc.pragma-grid.net
http://wiki.pragma-grid.net
MURPA Seminar, 4/8/2009