Robust Distributed Computing


Client-Server concept
• Server program is shared by many clients
• RPC protocol typically used to issue requests
• Server may manage special data, run on an
especially fast platform, or have an especially
large disk
• Client systems handle “front-end” processing and
interaction with the human user
Robust Distributed Computing
1
(c) Kenneth P. Birman; 1996
Server and its clients
Examples of servers
• Network file server
• Database server
• Network information server
• Domain name service
• Microsoft Exchange
• Kerberos authentication server
Summary of typical split
• Server deals with bulk data storage, high perf.
computation, collecting huge amounts of
background data that may be useful to any of
several clients
• Client deals with the “attractive” display, quick
interaction times
• Use of caching to speed response time
Statefulness issues
• Client-server system is stateless if:
Client is independently responsible for its actions,
server doesn’t track set of clients or ensure that
cached data stays up to date
• Client-server system is stateful if:
Server tracks its clients, takes actions to keep their
cached states “current”. Client can trust its
cached data.
Best known examples?
• The UNIX NFS file system is stateless.
• Database systems are usually stateful:
Client reads database of available seats on plane,
information stays valid during transaction
Typical issues in design
• Client is generally simpler than server: may be
single-threaded, can wait for reply to RPC’s
• Server is generally multithreaded, designed to
achieve extremely high concurrency and
throughput. Much harder to develop
• Reliability issue: if server goes down, all its
clients may be “stuck”. Usually addressed with
some form of backup or replication.
Use of caching
• In stateless architectures, cache is responsibility
of the client. Client decides to remember results
of queries and reuse them. Example: caching
Web proxies, the NFS client-side cache.
• In stateful architectures, cache is owned by
server. Server uses “callbacks” to its clients to
inform them if cached data changes, becomes
invalid. Cache is “shared state” between them.
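The stateless (client-owned) case can be sketched as a small cache class. All names here are illustrative, and the TTL-based staleness rule is an assumption: since a stateless server issues no invalidation callbacks, the client must decide on its own when a cached answer is too old to trust.

```cpp
#include <map>
#include <string>
#include <optional>

// Minimal sketch of a client-side cache in a stateless architecture.
// The client alone decides what to remember and when to re-query the server.
class ClientCache {
    struct Entry { std::string value; long fetched_at; };
    std::map<std::string, Entry> entries_;
    long ttl_;  // seconds a cached answer is trusted (client's own policy)
public:
    explicit ClientCache(long ttl) : ttl_(ttl) {}
    void put(const std::string& key, const std::string& value, long now) {
        entries_[key] = Entry{value, now};
    }
    // Returns the cached value only if it is still within the client's TTL.
    std::optional<std::string> get(const std::string& key, long now) {
        auto it = entries_.find(key);
        if (it == entries_.end() || now - it->second.fetched_at > ttl_)
            return std::nullopt;  // stale or missing: caller must re-query
        return it->second.value;
    }
};
```

In a stateful architecture the server would instead push invalidations, so the TTL test would be replaced by a callback that erases or refreshes the entry.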
Example of stateless approach
• NFS is stateless: clients obtain “vnodes” when
opening files; server hands out vnodes but treats
each operation as a separate event
• NFS trusts: vnode information, user’s claimed
machine id, user’s claimed uid
• Client uses write-through caching policy
Example of stateful approach
• Transactional software structure:
– Data manager holds database
– Transaction manager does begin op1 op2 ... opn commit
– Transaction can also abort; abort is default on failure
• Transaction on database system:
– Atomic: all or nothing effects
– Concurrent: can run many transactions at same time
– Independent: concurrent transactions don’t interfere
– Durable: once committed, results are persistent
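The begin / op1 … opn / commit structure above can be sketched over a toy in-memory “database”. All names are illustrative; a real data manager would add locking (for the independence guarantee) and durable logging (for the durability guarantee).

```cpp
#include <map>
#include <string>

// Toy sketch of a transaction over an in-memory key-value store.
// Updates are buffered and become visible all at once on commit (atomicity);
// abort discards them without ever touching the shared database.
class Transaction {
    std::map<std::string, long>& db_;
    std::map<std::string, long> writes_;  // buffered updates, invisible until commit
    bool active_ = true;
public:
    explicit Transaction(std::map<std::string, long>& db) : db_(db) {}  // "begin"
    void write(const std::string& k, long v) { if (active_) writes_[k] = v; }
    long read(const std::string& k) {
        auto it = writes_.find(k);          // read your own buffered writes first
        return it != writes_.end() ? it->second : db_[k];
    }
    void commit() {                         // all effects appear at once
        if (!active_) return;
        for (auto& kv : writes_) db_[kv.first] = kv.second;
        active_ = false;
    }
    void abort() { writes_.clear(); active_ = false; }  // default on failure
};
```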
Why are transactions stateful?
• Client knows what updates it has done, what
locks it holds. Database knows this too
• Client and database share the guarantees of the
model. See consistent states
• Approach is free of the inconsistencies and
potential errors observed in NFS
Current issues in client-server
systems
• Research is largely at a halt: we know how to
build these systems
• Challenges are in the applications themselves, or
in the design of the clients and servers for a
specific setting
• Biggest single problem is that client systems
know the nature of the application, but servers
have all the data
Typical debate topic?
• Ship code to the data (e.g. program from client to
server)?
• ... or ship data to the code? (e.g. client fetches the
data needed)
• Will see that Java, Tacoma and Telescript offer
ways of trading off between efficient use of
channels and maximum flexibility
Message Oriented Middleware
• Emerging extension to client-server architectures
• Concept is to weaken the link between the client
and server but to preserve most aspects of the
“model”
• Client sees an asynchronous interface: request is
sent independent of reply. Reply must be
dequeued from a reply queue later
MOMS: How they work
• MOM system implements a queue in between
clients and servers
• Each sends to other by enqueuing messages on
the queue or queues for this type of request/reply
• Queues can have names, for “subject” of the
queue
• Client and server don’t need to be running at the
same time.
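A minimal sketch of the queueing idea, with illustrative names (real MOM products add persistent queues, timeouts, and priorities). Because messages wait in the named queue, the client and server never need to be running at the same time.

```cpp
#include <deque>
#include <map>
#include <string>
#include <optional>

// Sketch of a MOM broker: named queues ("subjects") sit between clients
// and servers, decoupling them in time.
class MomBroker {
    std::map<std::string, std::deque<std::string>> queues_;  // subject -> messages
public:
    void enqueue(const std::string& subject, const std::string& msg) {
        queues_[subject].push_back(msg);   // sender returns immediately, no reply
    }
    // Receiver may run much later; an empty queue is not an error.
    std::optional<std::string> dequeue(const std::string& subject) {
        auto& q = queues_[subject];
        if (q.empty()) return std::nullopt;
        std::string m = q.front();
        q.pop_front();
        return m;
    }
};
```

A reply would travel the same way, placed by the server on a “reply” queue that the client drains later.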
MOMS: How they work
[Diagram: client enqueues into MOMS] Client places message into a “queue” without waiting for a reply. MOMS is the “server”.
MOMS: How they work
[Diagram: server dequeues from MOMS] Server removes message from the queue and processes it.
MOMS: How they work
[Diagram: server enqueues into MOMS reply queue] Server places any response in a “reply” queue for eventual delivery to the client. May have a timeout attached (“delete after xxx seconds”).
MOMS: How they work
[Diagram: client dequeues from MOMS] Client retrieves response and resumes its computation.
Pros and Cons of MOMS
• Decoupling of sender, destination is a plus: can
design the server and client without each
knowing much about the other, can extend easily
• Performance is poor, a (big) minus: overhead of
passing through an intermediary
• Priority, scheduling, recoverability are pluses
.... use this approach if you can afford the performance hit, a factor of 10-100 compared to RPC
Remote Procedure Call
• Basic concepts
• Implementation issues, usual optimizations
• Where are the costs?
• Firefly RPC, Lightweight RPC
• Reliability and consistency
• Multithreading debate
A brief history of RPC
• Introduced by Birrell and Nelson in 1984
• Idea: mask distributed computing system using a
“transparent” abstraction
– Looks like normal procedure call
– Hides all aspects of distributed interaction
– Supports an easy programming model
• Today, RPC is the core of many distributed
systems
More history
• Early focus was on RPC “environments”
• Culminated in DCE (Distributed Computing
Environment), standardizes many aspects of RPC
• Then emphasis shifted to performance, many
systems improved by a factor of 10 to 20
• Today, RPC often used from object-oriented
“CORBA” systems. Reliability issues are more
evident than in the past.
The basic RPC protocol
[Diagram: client and server timelines]
client: “binds” to server; prepares, sends request; unpacks reply
server: registers with name service; receives request; invokes handler; sends reply
Compilation stage
• Server defines and “exports” a header file giving
interfaces it supports and arguments expected.
Uses “interface definition language” (IDL)
• Client includes this information
• Client invokes server procedures through “stubs”
– provides identical interface as server does
– responsible for building the messages and
interpreting the reply messages
Binding stage
• Occurs when client and server program first start
execution
• Server registers its network address with name
directory, perhaps with other information
• Client scans directory to find appropriate server
• Depending on how RPC protocol is
implemented, may make a “connection” to the
server, but this is not mandatory
Request marshalling
• Client builds a message containing arguments,
indicates what procedure to invoke
• Data representation a potentially costly issue!
• Performs a send operation to send the message
• Performs a receive operation to accept the reply
• Unpacks the reply from the reply message
• Returns result to the client program
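The marshalling step can be sketched as packing a procedure id and two integer arguments into a flat byte buffer. The layout and names are illustrative; a real stub would also handle data representation (byte order, alignment, variable-length types), which is exactly where the cost noted above comes from.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Sketch of request marshalling: a fixed 12-byte layout holding the
// procedure id followed by two 32-bit arguments.
std::vector<uint8_t> marshal(uint32_t proc_id, int32_t a, int32_t b) {
    std::vector<uint8_t> msg(12);
    std::memcpy(&msg[0], &proc_id, 4);  // which procedure to invoke
    std::memcpy(&msg[4], &a, 4);        // first argument
    std::memcpy(&msg[8], &b, 4);        // second argument
    return msg;
}

// Server-side unmarshalling: recover the fields from the same layout.
void unmarshal(const std::vector<uint8_t>& msg,
               uint32_t& proc_id, int32_t& a, int32_t& b) {
    std::memcpy(&proc_id, &msg[0], 4);
    std::memcpy(&a, &msg[4], 4);
    std::memcpy(&b, &msg[8], 4);
}
```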
Costs in basic protocol?
• Allocation and marshalling data into message
(costs more for a more general solution)
• Two system calls, one to send, one to receive,
hence context switching
• Much copying all through the O/S: application to
UDP, UDP to IP, IP to ethernet interface, and
back up to application
Typical optimizations?
• Compile the stub “inline” to put arguments
directly into message
• If sender and dest. have same data representations, skip host-independent formatting
• Use a special “send, then receive” system call
• Optimize the O/S path itself to eliminate copying
Fancy argument passing
• RPC is transparent for simple calls with a small
amount of data passed
• What about complex structures, pointers, big
arrays? These will be very costly, and perhaps
impractical to pass as arguments
• Most implementations limit size, types of RPC
arguments. Very general systems less limited but
much more costly.
Overcoming lost packets
[Diagram: client sends request, retransmitting until the server acks it; server then sends the reply]
Overcoming lost packets
client
server
sends request
retransmit
ack for request
reply
ack for reply
Costs in fault-tolerant version?
• Acks are expensive. Try to avoid them, e.g. if
the reply will be sent quickly, suppress the initial
ack
• Retransmission is costly. Try to tune the delay
to be “optimal”
• For big messages, send packets in bursts and ack
a burst at a time, not one by one
Big packets
[Diagram: client sends the request as a burst of packets; server acks the entire burst, then replies; client acks the reply]
RPC “semantics”
• At most once: request is processed 0 or 1 times
• Exactly once: request is always processed 1 time
• At least once: request processed 1 or more times
... exactly once is impossible because we can’t
distinguish packet loss from true failures! In
both cases, RPC protocol simply times out.
Implementing at most/least once
• Use a timer (clock) value and a unique id, plus
sender address
• Server remembers recent id’s and replies with
same data if a request is repeated
• Also uses id to identify duplicates and reject
them
• Very old requests detected and ignored via time.
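The duplicate-suppression table described above can be sketched as follows. Names are illustrative; a real server would also expire old entries using the timer value, as the last bullet notes.

```cpp
#include <map>
#include <string>
#include <utility>

// Sketch of the server-side filter behind "at most once": remember recent
// (sender, id) pairs with their replies, so a retransmitted request is
// answered from the table instead of being executed twice.
class DuplicateFilter {
    std::map<std::pair<std::string, long>, std::string> replies_;
public:
    // Execute the request only the first time; replay the saved reply after.
    template <typename Handler>
    std::string handle(const std::string& sender, long id, Handler execute) {
        auto key = std::make_pair(sender, id);
        auto it = replies_.find(key);
        if (it != replies_.end()) return it->second;  // duplicate: don't re-execute
        std::string reply = execute();
        replies_[key] = reply;   // a real server ages these out using the clock
        return reply;
    }
};
```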
RPC versus local procedure call
• Restrictions on argument sizes and types
• New error cases:
– Bind operation failed
– Request timed out
– Argument “too large” can occur if, e.g., a table grows
• Costs may be very high
... so RPC is actually not very transparent!
RPC costs in case of local dest
• Caller builds message
• Issues send system call, blocks, context switch
• Message copied into kernel, then out to dest.
• Dest is blocked... wake it up, context switch
• Dest computes result
• Entire sequence repeated in reverse direction
• If scheduler is a process, context switch 6 times!
RPC example
[Diagram: source process, O/S, and destination on the same site; source does xyz(a, b, c)]
RPC in normal case
[Diagram: source does xyz(a, b, c); destination and O/S are blocked]
RPC in normal case
[Diagram: source and dest both block; O/S runs its scheduler and copies the message from the source out-queue to the dest in-queue]
RPC in normal case
[Diagram: dest runs and copies in the message. The same sequence is needed to return results]
Important optimizations: LRPC
• Lightweight RPC (LRPC): for the case of sender and
dest on the same machine (Bershad et al.)
• Uses memory mapping to pass data
• Reuses same kernel thread to reduce context
switching costs (user suspends and server wakes
up on same kernel thread or “stack”)
• Single system call: send_rcv or rcv_send
LRPC
[Diagram: source does xyz(a, b, c); O/S and dest are initially idle]
LRPC
[Diagram: control passes directly to dest; arguments are directly visible through remapped memory]
LRPC performance impact
• On same platform, offers about a 10-fold
improvement over a hand-optimized RPC
implementation
• Does two memory remappings, no context switch
• Runs about 50 times faster than standard RPC by
same vendor (at the time of the research)
• Semantics stronger: easy to ensure exactly once
Broad comments on RPC
• RPC is not very transparent
• Failure handling is not evident at all: if an RPC
times out, what should the developer do?
• Performance work is producing enormous gains:
from the old 75ms RPC to RPC over U/Net with
a 75usec round-trip time: a factor of 1000!
Contents of an RPC environment
• Standards for data representation
• Stub compilers, IDL databases
• Services to manage server directory, clock
synchronization
• Tools for visualizing system state and managing
servers and applications
Examples of RPC environments
• DCE: From OSF, developed in 1987-1989.
Widely accepted, runs on many platforms
• ONC: Proposed by Sun Microsystems, used in
the NFS architecture and in many UNIX services
• OLE, CORBA: next-generation “object-oriented”
environments.
Multithreading debate
• Three major options:
– Single-threaded server: only does one thing at a time,
uses send/recv system calls and blocks while waiting
– Multi-threaded server: internally concurrent, each
request spawns a new thread to handle it
– Upcalls: event dispatch loop does a procedure call for
each incoming event, as in X11 or PCs running
Windows.
Single threading: drawbacks
• Applications can deadlock if a request cycle
forms
• Much of system may be idle waiting for replies
to pending requests
• Harder to implement RPC protocol itself (need to
use a timer interrupt to trigger acks,
retransmission, which is awkward)
Multithreading
• Idea is to support internal concurrency, as if each
process was really multiple processes that share
one address space
• Thread scheduler uses timer interrupts and
context switching to mimic a physical
multiprocessor using the smaller number of
CPU’s actually available
Multithreaded RPC
• Each incoming request is handled by spawning a
new thread
• Designer must implement appropriate mutual
exclusion to guard against “race conditions” and
other concurrency problems
• Ideally, server is more active because it can
process new requests while waiting for its own
RPC’s to complete on other pending requests
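A sketch of thread-per-request dispatch, with the mutual exclusion the designer must supply. All names are illustrative; the shared counter stands in for whatever server state the handlers touch.

```cpp
#include <mutex>
#include <thread>
#include <vector>

// Sketch of a multithreaded server: each request runs in its own thread,
// and a mutex guards the shared state against race conditions.
class Server {
    long handled_ = 0;
    std::mutex lock_;   // designer-supplied mutual exclusion
public:
    void handle_request() {
        std::lock_guard<std::mutex> g(lock_);
        ++handled_;     // critical section: shared state touched under the lock
    }
    long handled() {
        std::lock_guard<std::mutex> g(lock_);
        return handled_;
    }
};

// Spawn one thread per incoming request, then wait for them all.
long serve(Server& s, int n_requests) {
    std::vector<std::thread> workers;
    for (int i = 0; i < n_requests; i++)
        workers.emplace_back([&s] { s.handle_request(); });
    for (auto& t : workers) t.join();
    return s.handled();
}
```

Without the mutex, concurrent increments of the counter could be lost, which is exactly the kind of non-reproducible bug the next slide warns about.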
Negatives to multithreading
• Users may have little experience with
concurrency and will then make mistakes
• Concurrency bugs are very hard to find due to
non-reproducible scheduling orders
• Reentrancy can come as an undesired surprise
• Threads need stacks, hence memory consumption
can be very high
• Stacks for threads must be finite and can
overflow, corrupting the address space
Recent RPC history
• RPC was once touted as the transparent answer
to distributed computing
• Today the protocol is very widely used
... but it isn’t very transparent, and reliability issues
can be a major problem
• Emerging interest is in Corba, which uses RPC
as the mechanism to implement object invocation
CORBA: The Common Object
Request Broker Architecture
• Role of an architecture for distributed computing:
standardize system components so that
developers will know what they can count on
• CORBA also standardizes the way that programs
interact with one another and are “managed”
• Model used is object oriented
Brief history of architectures
• Interest goes back quite far
• ANSA project often identified as first to look
seriously at this issue: “Advanced Networked
Systems Architecture;” started around 1985
• Today, DCE and CORBA and Microsoft’s OLE2
are most well known
Roles of an architecture
• Descriptive: a good way to “write down” the
structure of a distributed system
• Interoperability: two applications that use the
same architecture can potentially be linked to
each other
• Ease of development: idea is that architecture
enables development by stepwise refinement
• Reliability: modularity encourages fault-isolation
Kinds of architectures
• Enterprise architecture: describes how people
interact and use information in a corporation
• Network information architecture: describes
information within the network and relationships
between information sources, uses
• Distributed application architecture: the
application itself, perhaps at several levels of
refinement
Architecture can “hide” details
• Architecture may talk about the “distributed
name server” as a single abstraction at one level,
but later explain that in fact, this is implemented
using a set of servers that cooperate.
• This perspective leads us to an object oriented
perspective on distributed computing
Distributed objects
• An object could be a program or a data object
• A program object can invoke an operation on
some other kind of object if it knows its type and
has a handle on an instance of it.
• Each object thus has: type, interface, “state”,
location, unique handle or identifier, and perhaps
other attributes: owner, age, size, etc.
Distributed objects
[Diagram: objects on host a and host b, with an object storage server]
Building an object
• Develop the code
• Define an interface
• Register the interface in
“interface repository”
• Register each instance in
“object directory”
[Diagram: the code for the object sits behind its INTERFACE]
Using an object
• Import its interface when developing your
program object
• Your code can now do object invocations
• At runtime, your program object must locate the
instance(s) of the object class that it will act upon
• Binding operation associates actor to target
• Object request broker mediates invocation
Invocation occurs through “proxy”
[Diagram: client calls obj.xyz(1, 2, 3); the call passes through a Proxy on the client side to a Stub on the server side]
Location transparency
• Target object can be in same address space as
client or remote: invocation “looks” the same
(but error conditions may change!)
• Objects can migrate without informing users
• Garbage collection problem: delete (passive)
objects for which there are no longer any
references
Dynamic versus static invocation
• Dynamic occurs when program picks the object
it will invoke at runtime. More common, but
more complex
• Static invocation occurs when the program and
the objects on which it acts are known at compile
time. Avoids some of the overhead of dynamic
case but is less flexible
Orbix IDL for a grid object
// grid server example for Orbix
// IDL in file grid.idl
interface grid {
    readonly attribute short height;
    readonly attribute short width;
    void set(in short n, in short m, in long value);
    long get(in short n, in short m);
};
Grid “implementation class”
// C++ code fragment for grid implementation class
#include "grid.hh"  // generated file produced from IDL

class grid_i : public gridBOAImpl {  // This is a "Basic Object Adapter"
    short m_height, m_width;
    long **m_a;
public:
    grid_i(short h, short w);  // Constructor
    virtual ~grid_i();         // Destructor
    virtual short width(CORBA::Environment &);
    virtual short height(CORBA::Environment &);
    virtual void set(short n, short m, long value, CORBA::Environment &);
    virtual long get(short n, short m, CORBA::Environment &);
};
Enclosing program for server
#include "grid_i.h"
#include <iostream.h>

void main() {
    grid_i myGrid(100, 100);
    // Orbix objects can be named but this example is not
    CORBA::Orbix.impl_is_ready();
    cout << "server terminating" << endl;
}
Server code to implement class
#include "grid_i.h"

grid_i::grid_i(short h, short w) {
    m_height = h;
    m_width = w;
    m_a = new long* [h];
    for (int i = 0; i < h; i++)
        m_a[i] = new long [w];
}
Server code to implement class
grid_i::~grid_i() {
    for (int i = 0; i < m_height; i++)
        delete[] m_a[i];
    delete[] m_a;
}
Server code to implement class
short grid_i::width(CORBA::Environment &) {
    return m_width;
}
short grid_i::height(CORBA::Environment &) {
    return m_height;
}
void grid_i::set(short n, short m, long value, CORBA::Environment &) {
    m_a[n][m] = value;
}
long grid_i::get(short n, short m, CORBA::Environment &) {
    return m_a[n][m];
}
Client program
#include "grid.hh"
#include <iostream.h>

void main() {
    grid *p = grid::_bind(":gridSrv");  // Assumes registered under this name
    cout << "Grid height is " << p->height() << endl;
    cout << "Grid width is " << p->width() << endl;
    p->set(2, 4, 123);  // set cell (2,4) to value 123
    cout << "Grid(2,4) is " << p->get(2, 4) << endl;
    p->release();
}
Our example is unreliable!
• Doesn’t check for binding failure
• Doesn’t catch errors in remote invocation
... also illustrates potential problems: neglecting to
delete resources properly, or to release
references, can cause system to “leak” resources
and eventually fail
Notions of reliability in Corba
• Security/authentication
• Catch error and “handle it”
• Transactional subsystem (for database
applications; will see this in future lectures)
• Replication for high availability (also will revisit)
Despite these options, reliability
is a serious problem with Corba
• Hard to use error catching mechanisms
• Easy to leak resources
• Transactional mechanisms: costly, mostly useful
with databases, can be complex to use
• Replication mechanisms: more transparent but
also expensive unless used with sophistication
Error handling example
TRY {
    p = grid::_bind(":gridSrv");
} CATCHANY {
    cout << "Binding to gridSrv object failed" << endl;
    cout << "Fatal exception " << IT_X << endl;
}
TRY {
    cout << "Height is " << p->height() << endl;
} CATCHANY {
    cout << "Call to get height failed" << endl;
    cout << "Fatal exception " << IT_X << endl;
}
.... etc ....
Sets of objects
• Can notify Orbix that a service can be accepted
from any of a set of servers
• Orbix will find one and bind to it
• But later, what if that server fails? Orbix can
assist in rebinding to another, but how will the
application get back to the “right state” if the
new server might not hold identical data?
Example to think about
• Bond pricing server in a trading setting
• Clients download information on bond portfolio
• Server provides callbacks as prices change,
allows clients to evaluate hypothetical trades
• Switching from server to server may be very hard
due to “state” built up during execution of the
system prior to a failure
Roles of the Corba ORB
• Glues invoking object to target objects
• ORB sees object invocation
• Looks at object reference. If in same address
space, uses procedure call for invocation
• Else communicates to object RPC-style, location
transparent
• Reports errors if invocation fails
Invocation occurs through “proxy”
[Diagram: client calls obj.xyz(1, 2, 3); Proxy and Stub communicate through the Object Request Broker (ORB)]
ORB-to-ORB protocol: IOP
• Allows invocation to be passed from one ORB to
another, or one language to another
• Implication: Corba application running in Orbix
from Iona can invoke DSOM server built by user
of an IBM system!
• Runs over TCP connection, performance costs
not yet known but likely to be slow at first
ORB can also find objects on
disk
• Life-cycle service knows how to instantiate a stored
object (or even to create an object as needed)
• User issues invocation, ORB notices that object is
non-resident, life-cycle-service brings it in,
invocation occurs, then object becomes passive
again
• Raises issues of persistence, unexpected costs!
Some other Corba services
• Clock service maintains synchronized time
• Authentication service validates user id’s
• Object storage service: a file system for objects
• Life cycle service: oversees activation,
deactivation, storage, checkpointing, etc.
• Transactions and replication services
Event notification service
• Alternative to normal binding and invocation
• Application registers interest in “events”
• Data producers publish events
• ENS matches events to subscribers
• Idea is to “decouple” the production of data from
the consumption of data. System is more
extensible: can always add new subscribers
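The register/publish/match cycle can be sketched as a tiny in-process event service. Names are illustrative, and a real ENS would deliver events asynchronously across machines; the point is only that the producer never names its consumers.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Sketch of ENS matching: subscribers register interest in an event class
// by name; publishing an event of that class calls back every subscriber.
class EventService {
    std::map<std::string,
             std::vector<std::function<void(const std::string&)>>> subs_;
public:
    void subscribe(const std::string& event_class,
                   std::function<void(const std::string&)> callback) {
        subs_[event_class].push_back(std::move(callback));
    }
    // Returns how many subscribers the event was delivered to.
    int publish(const std::string& event_class, const std::string& data) {
        int delivered = 0;
        for (auto& cb : subs_[event_class]) { cb(data); ++delivered; }
        return delivered;   // producer is fully decoupled from consumers
    }
};
```

Adding a new subscriber never changes the producer, which is the extensibility argument made above.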
ENS model
[Diagram: one producer publishes IBM quotes and another publishes DEC quotes into the pool of events managed by the ENS; one consumer subscribes to IBM quotes, another to both DEC and IBM quotes]
ENS example
• Events could be quotes on a stock
• Each stock would have its own class of events
• Brokerage application would monitor stocks by
registering interest in corresponding event classes
• Notice that each application can monitor many
types of events!
... decoupling of data producer from consumer seen
as valuable form of design flexibility
Corba challenges and issues
• Much easier to “specify” services than to
implement them. Some specifications may prove
to be poor as implementations finally emerge
• Reliability a broad problem with architecture
• Hard to use Corba half-way: frequently need to
employ technology everywhere or not at all
• Hidden costs: a simple operation may invoke a
massive mechanism. Programmer must be careful!