Grids for DoD and Real Time Simulations IEEE DS-RT 2005 Montreal Canada Oct.


Grids for DoD and Real Time Simulations

IEEE DS-RT 2005 Montreal Canada Oct. 11 2005 Geoffrey Fox Computer Science, Informatics, Physics Pervasive Technology Laboratories Indiana University Bloomington IN 47401 [email protected]

http://www.infomall.org

1

Why are Grids Important

Grids are important for DoD because they more or less directly address DoD’s problem, and they have made major progress on the core infrastructure that DoD has identified only rather qualitatively

Grids are important to distributed simulation because they address all the distributed-systems issues except simulation itself; in any sophisticated distributed simulation package, most of the software is concerned not with simulation but with the issues Grids address

DoD and Distributed Simulation communities are too small to go it alone – they need to use technology that industry will support and enhance

2


Internet Scale Distributed Services

Grids use Internet technology and are distinguished by managing or organizing sets of network connected resources

Classic Web allows independent one-to-one access to individual resources

Grids integrate together and manage multiple Internet-connected resources: people, sensors, computers, data systems

Organization can be explicit, as in:

TeraGrid, which federates many supercomputers;

Deep Web Technologies IR Grid, which federates multiple data resources;

CrisisGrid, which federates first responders, commanders, sensors, GIS, (Tsunami) simulations, and science/public data

Organization can be implicit, as in Internet resources such as curated databases and simulation resources that “harmonize a community”

3


Different Visions of the Grid

Grid just refers to the technologies

Or Grids represent the full system/applications:

DoD’s vision of Network Centric Computing can be considered a Grid (linking sensors, warfighters, commanders, backend resources), and they are building the GiG (Global Information Grid)

Utility Computing or X-on-demand (X = data, computer ..) is the major computer industry interest in Grids and a key part of enterprise or campus Grids

e-Science or Cyberinfrastructure are virtual organization Grids supporting global distributed science (note sensors, instruments and people are all distributed)

The Skype (Kazaa) VOIP system is a peer-to-peer Grid (and VRVS/GlobalMMCS-style Internet A/V conferencing systems are Collaboration Grids)

Commercial 3G cell phones are forming mobile Grids, as is the DoD ad-hoc network initiative

4


Types of Computing Grids

Running “Pleasingly Parallel Jobs” as in United Devices, Entropia (Desktop Grid) “cycle stealing systems”; can be managed (“inside” the enterprise, as in Condor) or informal (as in SETI@Home)

Or Computing-on-demand in industry, where the jobs spawned are perhaps very large (SAP, Oracle …)

Support distributed file systems as in Legion (Avaki), Globus with a (web-enhanced) UNIX programming paradigm

Particle Physics will run some 30,000 simultaneous jobs

Distributed Simulation HLA/RTI style Grids

Linking supercomputers as in TeraGrid

Pipelined applications linking data/instruments, compute, visualization

Seamless Access, where Grid portals allow one to choose one of multiple resources with a common interface

Parallel Computing typically NOT suited for a Grid (latency)

5

Old Style Metacomputing Grid

[Figure: spread a single large problem over multiple supercomputers – data acquisition, imaging instruments, advanced visualization, analysis, large disks, large-scale databases, computational resources and large-scale parallel computers]

6


Utility and Service Computing

An important business application of Grids is believed to be utility computing: namely, supporting a pool of computers to be assigned as needed to take up extra demand

The pool is shared between multiple applications. The natural architecture is not a cluster of computers connected to each other but rather a “Farm of Grid Services” connected to the Internet and supporting services such as:

Web Servers

Financial Modeling

Run SAP

Data-mining

Simulation response to a crisis like a forest fire or earthquake

Media Servers for Video-over-IP

Note classic supercomputer use is to allow full access to do “anything” via ssh etc.

In the service model, one pre-configures services for all programs and accesses a portal to run a job, with fewer security issues

7


Simulation and the Grid

Simulation on the Grid is distributed, but it is rarely classical distributed simulation

It is either managing multiple jobs that are identical except for the parameters controlling the simulation – the SETI@Home style of “desktop grid”

Or workflow, which roughly corresponds to federation; the workflow is designed to support the integration of distributed entities:

Simulations (maybe parallel) and filters, for example GCF, the General Coupling Framework from Manchester

Databases and Sensors

Visualization and user interfaces

RTI should be built on workflow and WS-*/GS-*, with NCOW CES built on the same
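The first mode above – many identical jobs differing only in input parameters – can be sketched in a few lines. This is a hypothetical illustration: the simulation kernel and parameter sweep are made up, and a local process pool stands in for harvested desktop cycles.

```python
from multiprocessing import Pool

def run_simulation(params):
    """Hypothetical simulation kernel: identical code for every job,
    only the input parameters differ (SETI@Home-style desktop grid)."""
    dt, steps = params
    x = 0.0
    for _ in range(steps):
        x += dt * (1.0 - x)   # toy relaxation dynamics
    return (params, x)

if __name__ == "__main__":
    # The "grid" here is a local process pool standing in for
    # cycle-stealing workers (Condor, United Devices, ...).
    sweep = [(dt, 100) for dt in (0.001, 0.01, 0.1)]
    with Pool(3) as pool:
        for params, result in pool.map(run_simulation, sweep):
            print(params, round(result, 4))
```

Each worker is independent, which is exactly why this style of Grid computing tolerates Internet latencies that defeat tightly coupled parallel computing.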

8

Two-level Programming I

• The Web Service (Grid) paradigm implicitly assumes a two-level Programming Model
• We make a Service (the same as a “distributed object” or “computer program” running on a remote computer) using conventional technologies
– a C++, Java or Fortran Monte Carlo module
– Data streaming from a sensor or satellite
– Specialized (JDBC) database access
• Such services accept and produce data from users, files and databases

[Figure: a Service exchanging Data with users, files and databases]

• The Grid is built by coordinating such services, assuming we have solved the problem of programming the service 9

Two-level Programming II

Grid programming concerns the composition of distributed services, with runtime interfaces to the Grid rather than UNIX pipes/data streams

[Figure: Service1, Service2, Service3 and Service4 composed into a distributed application]

Familiar from use of UNIX Shell, PERL or Python scripts to produce real applications from core programs

Such interpretative environments are the single processor analog of Grid Programming

Some projects like GrADS from Rice University are looking at integration between service and composition levels but dominant effort looks at each level separately
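The shell-script analogy above can be made concrete: a script composes opaque “services” exactly as a pipeline composes programs. A minimal sketch, in which the three service functions are hypothetical stand-ins for remote services:

```python
# Each "service" is an opaque unit that consumes and produces messages;
# the script below plays the role of the workflow/composition level.
def extract_service(text):          # stand-in for a data-source service
    return text.split()

def filter_service(words):          # stand-in for a filter service
    return [w for w in words if len(w) > 3]

def count_service(words):           # stand-in for an analysis service
    return len(words)

def compose(*services):
    """UNIX-pipe style composition: each service's output feeds the next."""
    def pipeline(data):
        for service in services:
            data = service(data)
        return data
    return pipeline

workflow = compose(extract_service, filter_service, count_service)
print(workflow("the grid composes services like a shell composes programs"))  # → 7
```

The composition level never looks inside a service, mirroring the two-level model: services are programmed conventionally, the Grid only coordinates them.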

10

Consequences of Rule of the Millisecond


Useful to remember the critical time scales:

1) 0.000001 ms – CPU does a calculation: classic programming
2a) 0.001 to 0.01 ms – Parallel Computing MPI latency
2b) 0.001 to 0.01 ms – Overhead of a method call
3) 1 ms – wake up a thread or process
4) 10 to 1000 ms – Internet delay: workflow

2a) vs 4) implies geographically distributed metacomputing can’t in general compete with parallel systems

3) << 4) implies a software overlay network is possible without significant overhead

We need to explain why it adds value of course!

2b) versus 3) and 4) describes the regions where method based and message based programming paradigms are important

11

Towards an International Grid Infrastructure

[Figure: the UK NGS (Leeds, Manchester, Oxford, RAL, UCL, linked by UKLight) and the US TeraGrid (SDSC, NCSA, PSC) connected via Starlight (Chicago) and Netherlight (Amsterdam); all sites connected by a production network (not all shown); at SC05, local laptops in Seattle and the UK act as steering clients, with computation, network PoPs and a Service Registry] 12


Information/Knowledge Grids

Distributed (10’s to 1000’s of) data sources (instruments, file systems, curated databases …)

Data Deluge: 1 (now) to 100’s of petabytes/year (2012) – a Moore’s law for sensors; the Data Deluge comes from the pixels/year available

Possible filters assigned dynamically (on-demand):

Run an image processing algorithm on a telescope image

Run a gene sequencing algorithm on compiled data

Needs a decision support front end with “what-if” simulations

Metadata (provenance) critical to annotate data

Integrate astronomy across experiments as in multi-wavelength astronomy

13


Data Deluged Science

Now particle physics will get 100 petabytes from CERN, using around 30,000 CPUs simultaneously 24x7

Exponential growth in data; compare:

The Bible = 5 Megabytes

Annual refereed papers = 1 Terabyte

Library of Congress = 20 Terabytes

Internet Archive (1996 – 2002) = 100 Terabytes

Weather, climate, solid earth (EarthScope)

Bioinformatics curated databases (Biocomplexity has only 1000’s of data points at present)

Virtual Observatory and SkyServer in Astronomy

Environmental Sensor nets

In the past, the HPCC community worried about data in the form of parallel I/O or MPI-IO, but we didn’t consider it as an enabler of new science and new ways of computing

Data assimilation was not central to HPCC; DoE ASCI was set up because one didn’t want test data!

14

Virtual Observatory Astronomy Grid Integrate Experiments

[Figure: the same sky region in Radio, Far-Infrared, Visible and Visible + X-ray, with a Dust Map and a Galaxy Density Map] 15

International Virtual Observatory Alliance

• Reached international agreements on Astronomical Data Query Language, VOTable 1.1, UCD 1+, Resource Metadata Schema
• Image Access Protocol, Spectral Access Protocol and Spectral Data Model, Space-Time Coordinates definitions and schema
• Interoperable registries by Jan 2005 (NVO, AstroGrid, AVO, JVO) using OAI publishing and harvesting
• So each Community of Interest builds data AND service standards that build on GS-* and WS-* 16

myGrid Project

• Imminent ‘deluge’ of data
• Highly heterogeneous
• Highly complex and inter-related
• Convergence of data and literature archives

17

The Williams Workflows

A: Identification of overlapping sequence

B: Characterisation of nucleotide sequence

C: Characterisation of protein sequence

18

Grid of Grids: Research Grid and Education Grid

[Figure: SERVOGrid as a Grid of Grids – field trip data, repositories and federated databases (Database Grid) plus sensors and streaming data (Sensor Grid) feed data filter services, discovery services and a compute Grid running research simulations; a GIS Grid with analysis, visualization and a portal, together with customization services, carries results from research to an Education Grid backed by a computer farm]

19


SERVOGrid Requirements

Seamless access to data repositories and large scale computers

Integration of multiple data sources, including sensors, databases and file systems, with the analysis system, including filtered OGSA-DAI (Grid database access)

Rich meta-data generation and access, with SERVOGrid-specific Schema extending openGIS (Geography as a Web service) standards and using the Semantic Grid

Portals with a component model for user interfaces and web control of all capabilities

Collaboration to support world-wide work

Basic Grid tools: workflow and notification

NOT metacomputing

20

SERVOGrid Portal Screen Shots

21

Portal Architecture

[Figure: portlet classes (WebForm, SERVOGrid (IU), GridPort, Other Portlets etc.) act as remote or proxy portlets for Web/Grid services for computing, data stores and instruments; clients reach the portal, which hosts portal-internal services, Java portlets, the COG Kit, local portlets and libraries in a hierarchical arrangement of services]

Each Service has its own portlet, with an individual portlet for the Proxy Manager; use tabs or choose different portlets to navigate through the interfaces to different services 22

OGCE Consortium


GIS Grids and Sensor Grids

OGC has defined a suite of data structures and services to support Geographical Information Systems and sensors

GML, the Geography Markup Language, defines the specification of geo-referenced data

SensorML and O&M (Observations and Measurements) define meta-data and data structures for sensors

Services like the Web Map Service, Web Feature Service and Sensor Collection Service define service interfaces to access GIS and sensor information

Grid workflow links services that are designed to support streaming input and output messages

We are building Grid (Web) service implementations of these specifications for NASA’s SERVOGrid
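As a small illustration of the service interfaces above, a client might form an OGC WFS GetFeature request as a key-value-pair URL. The endpoint host and feature type name below are hypothetical:

```python
from urllib.parse import urlencode

def wfs_getfeature_url(base_url, type_name, bbox=None, version="1.0.0"):
    """Build an OGC WFS GetFeature request URL (KVP encoding)."""
    params = {
        "service": "WFS",
        "version": version,
        "request": "GetFeature",
        "typeName": type_name,
    }
    if bbox is not None:
        # Bounding box as minx,miny,maxx,maxy
        params["bbox"] = ",".join(str(c) for c in bbox)
    return base_url + "?" + urlencode(params)

# Hypothetical SERVOGrid-style endpoint and fault-feature layer.
url = wfs_getfeature_url("http://example.org/wfs", "faults",
                         bbox=(-119.0, 34.0, -118.0, 35.0))
print(url)
```

The server’s response would be a GML feature collection, which downstream workflow services can consume as a streaming message.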

24

A Screen Shot From the WMS Client

25

WMS uses WFS that uses data sources

[Figure: the client sends GetFeature requests to the WFS Server, which issues SQL queries (Railroads [a b], Highway [12-18]) against the data sources and returns Feature Collections to the WMS; example layers include Railroads, Interstate Highways, Rivers and Bridges]

[Example GML feature returned by the WFS: a fault named Northridge2 (after Wald D. J.) with coordinates -118.72,34.243 118.591,34.176]

26

Interdependent Critical Infrastructure Simulations: Electric Power and Natural Gas data from LANL

[Figure: map client with Zoom-in, Zoom-out, FeatureInfo mode, Measure distance mode, Clear Distance, Drag and Drop mode, and Refresh to initial map controls]

Typical use of Grid Messaging in NASA

[Figure: a Sensor Grid feeds a GIS Grid and a Datamining Grid through Grid Eventing]

28

Typical use of Grid Messaging

[Figure: a Sensor Grid posts data to NaradaBrokering both before and after processing by a filter or datamining service; subscribers are notified of new messages; a Grid Database archives the streams; WS-Context stores dynamic data; HPSearch manages the system; a Web Feature Service (WFS) serves GIS data within the GIS Grid (Geographical Information System)]

29

Real Time GPS and Google Maps

Subscribe to live GPS station. Position data from SOPAC is combined with Google map clients.

Select and zoom to GPS station location, click icons for more information.

30

Integrating Archived Web Feature Services and Google Maps

Google maps can be integrated with Web Feature Service Archives to filter and browse seismic records.

31

Google Maps as Service accessed from our WMS Client

32


What is Happening?

Grid ideas are being developed in (at least) four communities

Web Service – W3C, OASIS, (DMTF)

Grid Forum (High Performance Computing, e-Science)

Enterprise Grid Alliance (a commercial “Grid Forum” with a near term focus)

Service Standards are being debated

Grid Operational Infrastructure is being deployed

Grid Architecture and core software are being developed; Apache has several important projects, as do academia and large and small companies

Particular System Services are being developed “centrally” – the OGSA or GS-* framework for this is in GGF; WS-* in OASIS/W3C/Microsoft-IBM

Lots of fields are setting domain specific standards and building domain specific services

The USA started all this, but Europe is probably now in the lead and Asia will soon catch the USA if current momentum (roughly zero for the USA) continues

33

The Grid and Web Service Institutional Hierarchy

4: Application or Community of Interest Specific Services

such as “Run BLAST” or “Look at Houses for sale”

3: Generally Useful Services and Features

Such as “Access a Database” or “Submit a Job” or “Manage Cluster” or “Support a Portal” or “Collaborative Visualization”

OGSA GS-* and some WS-* GGF/W3C/….

2: System Services and Features

Handlers like WS-RM, Security, Programming Models like BPEL or Registries like UDDI

WS-* from OASIS/W3C/Industry

1: Container and Run Time (Hosting) Environment

Apache Axis, .NET etc.

Must set standards to get interoperability

34


Philosophy of Web Service Grids

Much of Distributed Computing was built by natural extensions of computing models developed for sequential machines

This leads to the distributed object (DO) model represented by Java and CORBA – RPC (Remote Procedure Call) or, for Java, RMI (Remote Method Invocation)

Key people think this is not a good idea, as it scales badly and ties distributed entities together too tightly

Distributed Objects Replaced by Services

Note CORBA was considered too complicated in both organization and proposed infrastructure, and Java was considered “tightly coupled to Sun”

So there were other reasons to discard them; thus replace distributed objects by services connected by “one-way” messages and not by request-response messages

35

The Ten areas covered by the 60 core WS-* Specifications

WS-* Specification Area – Examples
1: Core Service Model – XML, WSDL, SOAP
2: Service Internet – WS-Addressing, WS-MessageDelivery; Reliable Messaging WSRM; Efficient Messaging MOTM
3: Notification – WS-Notification, WS-Eventing (Publish-Subscribe)
4: Workflow and Transactions – BPEL, WS-Choreography, WS-Coordination
5: Security – WS-Security, WS-Trust, WS-Federation, SAML, WS-SecureConversation
6: Service Discovery – UDDI, WS-Discovery
7: System Metadata and State – WSRF, WS-MetadataExchange, WS-Context
8: Management – WSDM, WS-Management, WS-Transfer
9: Policy and Agreements – WS-Policy, WS-Agreement
10: Portals and User Interfaces – WSRP (Remote Portlets)

RTI and NCOW need all of these?

36

Stateful Interactions

There are (at least) four approaches to specifying state

OGSI uses factories to generate separate services for each session, in standard distributed object fashion

Globus GT-4 and WSRF use the metadata of a resource to identify the state associated with a particular session

WS-GAF uses WS-Context to provide an abstract context defining state; it has the strength (and weakness) that it reveals less about the nature of the session

WS-I+ “Pure Web Service” leaves state specification to the application – e.g. put a context in the SOAP body

I think we should smile and write a great metadata service hiding all these different models for state and metadata
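In the simplest reading, such a metadata service would present one get/put interface keyed by session id, whatever state model the session actually uses underneath. Everything here is a hypothetical sketch, not any standard’s API:

```python
class SessionStateService:
    """Uniform facade over different WS state models: callers see only
    get/put keyed by a session id, regardless of whether the state lives
    in a WSRF resource, a WS-Context store, or the SOAP body itself."""

    def __init__(self):
        self._store = {}   # session id -> {property: value}

    def put(self, session_id, key, value):
        self._store.setdefault(session_id, {})[key] = value

    def get(self, session_id, key, default=None):
        return self._store.get(session_id, {}).get(key, default)

svc = SessionStateService()
svc.put("session-42", "last-checkpoint", "step-7")
print(svc.get("session-42", "last-checkpoint"))   # → step-7
```

A real implementation would back this with one of the four mechanisms above; the point is that clients need not know which.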

37


WS-* implies the Service Internet

We have the classic (CISCO, Juniper …) Internet routing the flood of ordinary packets in the OSI stack architecture

Web Services build the “Service Internet” or IOI (Internet on Internet) with:

Routing via WS-Addressing, not the IP header

Fault Tolerance (WS-RM, not TCP)

Security (WS-Security/SecureConversation, not IPSec/SSL)

Data Transmission by WS-Transfer, not HTTP

Information Services (UDDI/WS-Context, not DNS/configuration files)

At the message/web service level and not the packet/IP address level

A software-based Service Internet is possible as computers are “fast”

Familiar from peer-to-peer networks, and built as a software overlay network defining the Grid (the analogy is a VPN)

The SOAP Header contains all information needed for the “Service Internet” (Grid Operating System), with the SOAP Body containing information for the Grid application service
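Routing on message headers rather than IP headers can be sketched as an overlay of brokers forwarding on a WS-Addressing-style “To” field. The broker names, message shape and echo service are all hypothetical:

```python
# A toy software overlay: brokers forward a message on its logical
# "To" header (cf. WS-Addressing), not on any IP-level address.
class Broker:
    def __init__(self, name):
        self.name = name
        self.routes = {}      # destination service name -> next Broker
        self.services = {}    # locally hosted services

    def deliver(self, message, hops=None):
        hops = (hops or []) + [self.name]
        to = message["headers"]["To"]
        if to in self.services:                  # terminal broker: invoke
            return self.services[to](message["body"]), hops
        return self.routes[to].deliver(message, hops)   # forward one hop

a, b, c = Broker("A"), Broker("B"), Broker("C")
a.routes["echo"] = b
b.routes["echo"] = c
c.services["echo"] = lambda body: body.upper()

result, path = a.deliver({"headers": {"To": "echo"}, "body": "hello grid"})
print(result, path)   # → HELLO GRID ['A', 'B', 'C']
```

Only the header is inspected en route; the body is untouched until the destination service, mirroring the Header/Body split described above.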


WS-I Interoperability

A critical underpinning of Grids and Web Services is the gradually growing set of specifications in the Web Services Interoperability (WS-I) profiles: “Interoperability Profile 1.0a” (http://www.ws-i.org) gives us XSD, WSDL 1.1, SOAP 1.1 and UDDI in the basic profile, and parts of WS-Security in the first security profile.

We imagine the “60 Specifications” being checked out and evolved in the cauldron of the real world, with best practice occasionally identifying a new specification to be added to WS-I, which gradually increases in scope

Note only 4.5 out of the 60 specifications have “made it” by this definition

39

Activities in Global Grid Forum Working Groups

GGF Area – GS-* and OGSA Standards Activities
1: Architecture – High Level Resource/Service Naming (level 2 of fig. 1), Integrated Grid Architecture
2: Applications – Software Interfaces to Grid, Grid Remote Procedure Call, Checkpointing and Recovery, Interoperability to Job Submittal services, Information Retrieval
3: Compute – Job Submission, Basic Execution Services, Service Level Agreements for Resource use and reservation, Distributed Scheduling
4: Data – Database and File Grid access, GridFTP, Storage Management, Data replication, Binary data specification and interface, High-level publish/subscribe, Transaction management
5: Infrastructure – Network measurements, Data transport, Role of IPv6 and high performance networking
6: Management – Resource/Service configuration, deployment and lifetime, Usage records and access, Grid economy model
7: Security – Authorization, P2P and Firewall Issues, Trusted Computing

RTI and NCOW need all of these?

40


SOAP Message Structure I

SOAP Message consists of headers and a body

Headers could be for Addressing, WSRM, Security, Eventing etc.

Headers are processed by handlers or filters, controlled by the container, as a message enters or leaves a service; the body is processed by the Service itself

The header processing defines the “Web Service Distributed Operating System”

Containers queue messages, control processing of headers and offer convenient (for particular languages) service interfaces

Handlers are really the core Operating System services; WS-* standards specify the handler structure; handlers receive and give back messages like services – they just process and perhaps modify different elements of the SOAP Message

[Figure: a SOAP message with headers H1–H4 and a Body passing through container handlers F1–F4 between the Workflow and the Service in a WS Container]
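The container/handler split can be sketched as a chain of header processors wrapped around a body-only service. The handler names and message shape below are hypothetical illustrations, not any container’s real API:

```python
# Handlers process (and may modify) individual headers as the message
# passes through the container; only the body ever reaches the service.
def addressing_handler(msg):
    # e.g. record correlation information from a WS-Addressing-style header
    msg["headers"]["RelatesTo"] = msg["headers"].get("MessageID")
    return msg

def security_handler(msg):
    # e.g. a WS-Security-style check, rejecting the message in the container
    if msg["headers"].get("Token") != "valid":
        raise PermissionError("security check failed")
    return msg

def container_dispatch(msg, handlers, service):
    for handler in handlers:          # header pipeline: the "Grid OS"
        msg = handler(msg)
    return service(msg["body"])       # the service sees only the body

service = lambda body: body[::-1]     # toy body-only service
msg = {"headers": {"MessageID": "m-1", "Token": "valid"}, "body": "ping"}
print(container_dispatch(msg, [addressing_handler, security_handler], service))  # → gnip
```

Because each handler touches only its own header element, handlers compose like services themselves, which is exactly the claim the slide makes.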

41

SOAP Message Structure II


Content of individual headers and the body is defined by the XML Schema associated with the WS-* headers and the service WSDL

The SOAP Infoset captures the header and body structure; XML Infosets for individual headers and the body capture the details of each message part

Web Service Architecture requires that we capture the Infoset structure, but does not require that we represent the XML in angle bracket value notation

[Figure: headers H1–H4 with parts hp1–hp5 and a Body with parts bp1–bp3; the Infoset represents the semantic structure of the message and its parts]

42

High Performance Streams

Optimize stream representation and transport protocol

[Figure: choose an invertible filter preserving the Infoset, and choose a protocol – StdSOAP → Filter1 → SOAP’ → Filter2 → SOAP’’ on the source side, inverted (Filter2⁻¹, Filter1⁻¹) back to StdSOAP on the sink side; a database (WS-Context) holds the Conversation Context coordinating between source, sink and SOAP intermediaries, and between different messages in a stream]

43


High Performance XML

Filters controlled by the Conversation Context convert messages between representations, using a permanent context (metadata) catalog to hold the conversation context

Different handlers and services within one end point can have different message views; the Conversation Context for each end point (or even for individual messages) is a fast dynamic metadata service to enable conversions

NaradaBrokering will implement Fr and Ft using its support of multiple transports, fast filters and message queuing

[Figure: headers H1–H4 and Body with Conversation Context (URI-S, URI-R, URI-T); the replicated message header and transported message pass through Fr and Ft, with handler and service message views and container handlers F3, F4]
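An “invertible filter preserving the Infoset” can be illustrated by any lossless recoding of the message representation; here ordinary zlib compression stands in for a fast binary-XML filter (the envelope text is a made-up example):

```python
import zlib

# Filter1 / Filter1^-1: a lossless change of wire representation.
# The Infoset (here, the literal message text) survives the round
# trip exactly, so the sink sees the same message the source sent.
def filter1(soap_text):
    return zlib.compress(soap_text.encode("utf-8"))

def filter1_inverse(wire_bytes):
    return zlib.decompress(wire_bytes).decode("utf-8")

message = "<Envelope><Body>stream element 17</Body></Envelope>"
on_the_wire = filter1(message)
assert filter1_inverse(on_the_wire) == message   # invertibility check
print(len(message), len(on_the_wire))
```

In the scheme above, the Conversation Context would record which filter pair is in force so source, sink and intermediaries agree on the representation for a whole stream rather than per message.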

44

The Global Information Grid Core Enterprise Services

Core Enterprise Service – Service Functionality
CES1: Enterprise Services Management (ESM) – including life-cycle management
CES2: Information Assurance (IA)/Security – Supports confidentiality, integrity and availability; implies reliability and autonomic features
CES3: Messaging – Synchronous or asynchronous cases
CES4: Discovery – Searching data and services
CES5: Mediation – Includes translation, aggregation, integration, correlation, fusion, brokering, publication, and other transformations for services and data; possibly agents
CES6: Collaboration – Provision and control of sharing with emphasis on synchronous real-time services
CES7: User Assistance – Includes automated and manual methods of optimizing the user GiG experience (user agent)
CES8: Storage – Retention, organization and disposition of all forms of data
CES9: Application – Provisioning, operations and maintenance of applications

45


Major Conclusions I

One can map 7.5 out of 9 NCOW and GiG core capabilities into Web Service (WS-*) and Grid (GS-*) architecture and core services

The analysis of Grids in the NCOW document is inaccurate (it confuses Grids with Globus and only considers early activities)

Some “mismatches” on both the NCOW and Grid sides: GS-*/WS-* do not have collaboration and miss some messaging; NCOW does not have system metadata or resource/service scheduling and matching at the core level

Higher level services of importance include GIS (Geographical Information Systems), Sensors and data-mining

46

Major Conclusions II

Criticisms of Web services in a recent paper by Birman seem to be addressed by Grids or reflect immaturity of initial technology implementations

NCOW does not seem to have any analysis of how to build their systems on WS-*/GS-* technologies in a layered fashion; they do have a layered service architecture so this can be done


They agree with service oriented architecture They seem to have no process for agreeing to WS-* GS-* or setting other standards for CES

Grid of Grids allows modular architectures and natural treatment of legacy systems

47

DoD Core Services and WS-* plus GS-* I

NCOW Service or Feature – WS-* Service area – GGF – Others

A: General Principles
Use Service Oriented Architecture – WS-* Core (#1) Service Model – – Industry Best Practice (IBM, Microsoft …)
Build Grids on Web Services; Grid of Grids Composition – – – Strategy for legacy subsystems and modular architecture

B: NCOW Core Services (to be continued)
CES 1: Enterprise Services Management – WS-* #8 Management – GGF #6: Management – CIM
CES 2: Information Assurance (IA)/Security – WS-* #5 WS-Security – GGF #7 – Grid-Shib, Permis, Liberty Alliance etc.
CES 3: Messaging – WS-* #2, #3 – – JMS, MQSeries, Streaming/Sensor Technologies
CES 4: Discovery – WS-* #6 – –
CES 5: Mediation – WS-* #4 workflow – – Transformations; treatment of legacy data systems (Web)
CES 6: Collaboration – – GGF VO – XGSP, Shared Web Service ports
CES 7: User assistance – WS-* #10 Portlets, JSR168 – – NCOW Interfaces

DoD Core Services: WS-* and GS-* II

NCOW Service or Feature – WS-* Service area – GGF – Others

B: NCOW Core Services Continued
CES 8: Storage (not real-time streams) – – GGF #4 Data – NCOW Data Strategy
CES 9: Application – – GGF #2 – Best Practice in building Grid/Web services
Environmental Services ECS; Control Resource Infrastructure – WS-* #9 – GGF #5 – the GiG itself; ad-hoc networks important

C: Key NCOW Capabilities not directly in CES
Meta-data – WS-* #7 – Semantic Grid – Globus MDS; Semantic Web annotation
Resource/Service Matching/Scheduling – – Distributed Scheduling SLA’s (GGF #3) – GGF scheduling work extended to networks; extend computer scheduling to networks and data flow
Sensors (real-time data) – – – OGC Sensor standards
GIS – – – OGC GIS standards

49


Grids and HLA/RTI I

HLA through IEEE 1516 has specified the interfaces for its key services, which are supported by the RTI (Run Time Infrastructure); HLA does not specify message semantics or core system services

RTI implementations are NOT interoperable, although each one should support any HLA federation

RTI implementations become a full distributed system environment, as they need metadata, reliable messaging etc., with simulation support only a small part

Grids can be used with an “unchanged” HLA via:

Dynamic assignment of compute resources to support federates

Building web service interfaces to federates (XMSF)

Or use Grids as infrastructure to build a new generation of RTI that will use Web system services and just add simulation support

50

Grids and HLA/RTI II

HLA specifies

Declaration management – achieved through the use of publish/subscribe Grid Messaging (NaradaBrokering)

Data Distribution management – corresponds to a geometry-sensitive publish and subscribe model (add as allowed in WS-Eventing)

Time management – corresponds to the simulation framework (use the best event driven and time stepped models); as the infrastructure is generic, one can support a broad range of simulations including classic parallel computing and agent-based simulations

Object management – very specific to HLA and should be built as per IEEE 1516

Ownership management – might be generalizable; could use Grid virtualization and metadata catalogs to handle properties

Federation management – could use workflow and generalize to the support of general simulation models (federates and federations are a general concept)
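Declaration management via publish/subscribe, as above, reduces to a topic-based broker. A minimal sketch of the pattern (the topic names and events are hypothetical; this is not NaradaBrokering’s API):

```python
from collections import defaultdict

class TopicBroker:
    """Minimal topic-based publish/subscribe: federates declare interest
    by subscribing to topics and publish attribute updates without
    knowing which other federates (if any) receive them."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        for callback in self.subscribers[topic]:
            callback(event)

broker = TopicBroker()
received = []
broker.subscribe("tank/position", received.append)   # a federate declares interest
broker.publish("tank/position", {"x": 10, "y": 4})   # another federate updates
broker.publish("tank/fuel", {"level": 0.5})          # no subscribers: dropped
print(received)   # → [{'x': 10, 'y': 4}]
```

Geometry-sensitive data distribution would refine this with filters on the subscription (e.g. only events inside a spatial region), in the spirit of the WS-Eventing remark above.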

51

GlobalMMCS Web Service Architecture

Use multiple media servers to scale to many codecs and many versions of audio/video mixing

[Figure: a Session Server provides XGSP-based Control Web Services; Media Servers and Filters are linked by NaradaBrokering, which scales as it is distributed; NaradaBrokering carries all messaging, both high performance (RTP) and XML/SOAP etc.; SIP, H323 and Access Grid gateways convert to uniform XGSP messaging alongside native XGSP] 52

GlobalMMCS Architecture

Non-WS collaboration protocols are “gatewayed” to XGSP control

NaradaBrokering supports TCP (chat, control, shared display, PowerPoint etc.) and UDP (Audio-Video conferencing)

[Figure: Audio Video, Instant Messaging, Shared Display and other shared Web Services built on the XGSP Conference Control Service and the Event Messaging Service (NaradaBrokering)]

53

XGSP Example: New Session

[Example XGSP session description: a GameRoom session “chess” with instance chess-0 and participants John, Bob (black) and Jack (white)]

54

XGSP AV Signaling Protocol with H.323

[Sequence diagram: an H323 Terminal signals through the XGSP H323 Gateway to the Session server.
H.225 Call Setup: H225.Setup → JoinAVSession; H225.Connect ← JoinAVSession OK with the RTPLinks.
H.245 Capability Exchange: Terminal Capability Set & capability description, with ACKs.
Open Audio & Video Logic Channels: OpenLogicChannel (Video) / ACK → JoinAVSession (Video); OpenLogicChannel (Audio) / ACK, ACK with Audio RTPLink → JoinAVSession (Audio)]

55


NaradaBrokering 2003-2006

Messaging infrastructure for collaboration, peer-to-peer and Grids

Implements JMS and native high-performance protocols (message transit time of 1 to 2 ms per hop)

Order-preserving message transport with QoS and security profiles

Support for different underlying transports such as TCP, UDP, Multicast and RTP; SOAP message support

WS-Notification and WS-Eventing; WS-RM and WS-Reliability when the specifications are agreed

Active replay support: pause and replay live streams

Stream linkage: can permanently link multiple streams – used in the annotation of real-time video streams

Replicated storage support for fault tolerance and resiliency to storage failures

Management: HPSearch scripting interface to streams and brokers (uses WS-Management)

Discovery: locate appropriate brokers, topics and messages

Integration with the Axis2 Web Service Container (?)

High Performance Transport supporting the SOAP Infoset

56

Average Video Delays for one broker – performance scales proportional to the number of brokers

[Figure: latency (ms) versus number of receivers, for one session and multiple sessions at 30 frames/sec]

57

Collaboration Grid

[Figure: Collaboration Grid – WS-Context, Narada Brokers, HPSearch, UDDI, WS-Security and gateways link Shared Web Services, the XGSP Media Service (audio mixer, video mixer, transcoder, thumbnail, replay, record, annotate), shared display and whiteboard]

58

GlobalMMCS SWT Client

[Figure: client showing GIS, TV, chat, video mixer and webcam views]

59

e-Annotation

[Figure: an archived stream is annotated through the e-Annotation whiteboard and annotation player; real-time streams and stream lists feed the stream list player]

60


Some ideas to Remember

Grids replace previous sophisticated distributed object technologies because industry won’t support DO’s but will support services

Grids are managed Services exchanging Messages

Grid Services GS-* extend the WS-* Web Service Specifications

Web Service container replaces computer; Service replaces process

A stream is an ordered set of messages

Service Internet replaces Internet: messages replace packets

(Sub) Grids replace Libraries

7.5 out of 9 NCOW Core Enterprise Services (CES) come directly from Grid Services; metadata, (part of) messaging, collaboration, and sensors need special attention

RTI should be enhanced with agreed service interfaces as an interoperable RTI-SOA

61


Location of software for Grid Projects in Community Grids Laboratory

http://www.naradabrokering.org provides Web service (and JMS) compliant distributed publish-subscribe messaging (a software overlay network)

http://www.globalmmcs.org is a service oriented (Grid) collaboration environment (audio-video conferencing)

http://www.crisisgrid.org is an OGC (Open Geospatial Consortium) compliant GIS and Sensor Grid (with the POLIS center)

http://www.opengrids.org has WS-Context, extended UDDI etc.

The work is still in progress, but NaradaBrokering is quite mature

All software is open source and freely available

62