Java Grande Update
http://www.javagrande.org
December 19 2003
PTLIU Laboratory for Community Grids
Bryan Carpenter, Geoffrey Fox
Computer Science, Informatics, Physics
Indiana University, Bloomington IN 47404
(Technology Officer, Anabas Corporation, San Jose)
http://grids.ucs.indiana.edu/ptliupages
[email protected]
11/6/2015
uri="http://ptlportal.ucs.indiana.edu/"
email="[email protected]"
1
Java Grande in a Nutshell
• Concept started in December 1996 with the first meeting on Java for Science and Engineering
• Forum established in February 1998
• Multiple forum activities in numerics, message-passing, and parallel/distributed systems
• Ongoing set of workshops sponsored by ACM
  • Bill Joy spoke in 2000, Guy Steele in 2001
• Multiple useful web sites and papers/presentations
• JSR activities, probably with insufficient momentum
• No institutional contact with Sun for three years
• No impressive support for Java on HPC machines; the relevant research compilers were not productized
Java Grande Concept
• Use of Java for “Performance” and “Usability” in:
  • High Performance Network Computing
  • Scientific and Engineering Computation
  • (Distributed) Modeling and Simulation
  • Parallel and Distributed Computing
  • Data Intensive Computing
  • HPCC
  • Computational Grids
• The above is the classic “small” technical computing area. There is a much larger Grande problem:
  • Communication- and computing-intensive commercial applications
  • Large-scale enterprise software (iPlanet, J2EE, etc.)
Java Grande Motivation I: Users
• We have rather different drivers from HPCC (parallel computing) and from enterprise systems
• In enterprise software Java is well established, but the architectures are new (J2EE and messaging), so there are new performance and scaling issues (enterprise systems are large, as in Grids/autonomic computing)
• In HPCC we failed to produce good computing environments in the HPCC Initiative, and there is a possibly serious gap between the field (using Fortran/C/C++) and the next generation of potential science and engineering users (Java, C#, Python, …)
  • Opportunity to deliver on high-productivity HPCC environments
Java Motivation II: Language
• The Java language has several good design features
  • Secure, safe (with respect to bugs), object-oriented, and familiar (to C, C++, and even Fortran programmers)
• Java has a very good set of libraries covering everything from commerce, multimedia, and images to math functions (under development at http://math.nist.gov/javanumerics)
• Java has the best available electronic and paper training resources
• Java has excellent integrated program development environments
• Java is naturally integrated with the network, and its universal virtual machine supports a potentially powerful “write once, run anywhere” model
• There is a large and growing trained labor force
Java Grande Forum
• The group meets (through 2002) either at the annual meeting or separately
• Forum coordinated by Fox
• Numerics group led by Boisvert and Pozo
• Concurrency and Applications (Benchmarks) group led by Caromel and Gannon
  • MPI subgroup led by Getov
• Annual ACM-sponsored workshops were held in the Bay Area just before JavaOne up to 2001
  • In 1999 the workshop merged with ISCOPE (object methods in scientific computing, e.g. C++), but JG dominates
  • The 2002 workshop was held just before OOPSLA, with 90 attendees and good-quality papers (peak attendance was some 220)
  • No meeting planned for 2003
JG Workshop 2002 I
• KEYNOTE: Pratap Pattnaik, IBM, Autonomic Computing
• Session II: Grid and Parallel Computing
  • The Ninf Portal: An Automatic Generation Tool for Computing Portals
  • JavaSymphony: New Directives to Control and Synchronize Locality, Parallelism, and Load Balancing for Cluster and GRID-Computing
  • Ibis: an Efficient Java-based Grid Programming Environment
  • Efficient, Flexible and Typed Group Communications for Java
  • JOPI: A Java Object-Passing Interface
• Session III: Grid and Peer-to-Peer Computing
  • Abstracting Remote Object Interaction in a Peer-2-Peer Environment
  • Advanced Eager Scheduling for Java-Based Adaptively Parallel Computing
  • A Scaleable Event Infrastructure for Peer to Peer Grids
• Session IV: Java Compilation
  • Elimination of Java Array Bounds Checks in the Presence of Indirection
  • Simple and Effective Array Prefetching in Java
  • Fast Subtype Checking in the HotSpot JVM
  • Almost-whole-program Compilation
JG Workshop 2002 II
• Session V: Object-based Computing
  • KEYNOTE: Alexander Stepanov, The Future of Abstraction
  • Generic Programming for High Performance Scientific Applications
• Session VI: Object-based Computing and Applications
  • Higher-Order Functions and Partial Applications for a C++ Skeleton Library
  • Ravenscar-Java: A High Integrity Profile for Real-Time Java
  • Parsek: Object Oriented Particle in Cell. Implementation and Performance Issues
  • inAspect - Interfacing Java and VSIPL
• Session VII: Node Java I
  • Open Runtime Platform: Flexibility with Performance Using Interfaces
  • Aggressive Object Combining
  • Run-time Evaluation of Opportunities for Object Inlining in Java
• Session VIII: Node Java II
  • Jeeg: A Programming Language for Concurrent Objects Synchronization
  • Specifying Java Thread Semantics Using a Uniform Memory Model
  • Immutability Specification and its Applications
  • Adding Tuples to Java: a Study in Lightweight Data Structures
Disappointing Comment
• I have not seen strong interest in Java from HPCC users and HPCC purchasers
  • Possibly a chicken-and-egg situation…
• Two years ago, Sun offered poor Java support on HPC
  • Not certain of the current situation
• IBM Research produced several interesting HPC compilers supporting, for example, high-performance arrays
  • These were not, I think, offered on IBM HPC machines
• However, the people voting on this are not from the Internet generation, and the “alternatives” are not good!
• However, one of the largest pure-Java science applications is from Los Alamos: CartaBlanca, for heat transfer and multiphase fluid flow
Types of Activity
• Java on the node
  • Compiler and language issues
• Parallel computing
  • Thread and message-passing models
  • Very little academic work for any language!
• Distributed computing
  • RMI
  • Jini, JXTA
  • Grid and Web Services
• High-performance enterprise Java
  • Combines node and distributed computing issues
Java on the Node
• Numerics subgroup of the Java Grande Forum focused on “node issues”
  • Floating point
  • Java math libraries
  • Arrays: efficiency of >1D arrays and support of Fortran 90-style array functions
  • Convenience and natural syntax: complex arithmetic notation and multi-type libraries
• Very good SciMark node kernel benchmark at http://math.nist.gov/javanumerics/
• Broader range of benchmarks at http://www.epcc.ed.ac.uk/javagrande/
• Typical compiler work (from IBM): http://www.research.ibm.com/ninja/
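The multidimensional-array issue above is concrete: Java has no true rectangular 2D arrays. A double[][] is an array of row references, each row a separate heap object that may even have a different length, so every a[i][j] costs two dereferences plus two bounds checks. A minimal demonstration in plain Java (not from any Java Grande library):

```java
// Java's double[][] is an array of row references ("ragged"),
// not a rectangular block -- the node-level efficiency problem
// the numerics subgroup wanted true multiarrays to solve.
public class Ragged {
    public static void main(String[] args) {
        double[][] a = new double[3][];
        a[0] = new double[1];
        a[1] = new double[5];    // rows need not have equal length
        a[2] = a[1];             // rows can even alias each other
        a[2][4] = 9.0;
        System.out.println(a[1][4]);                          // 9.0, via the alias
        System.out.println(a[0].length + " " + a[1].length);  // 1 5
    }
}
```

Because rows are independent objects, a compiler cannot assume rectangularity, which blocks the loop optimizations that Fortran-style arrays permit.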
Scimark
• http://math.nist.gov/scimark2/
• Kernels:
  • FFT
  • SOR
  • Monte Carlo
  • Sparse matrix multiply
  • Dense LU
• Available as a downloadable applet
• Today's peak performance is the Sun 1.4.2 VM on a Pentium, averaging 555 megaflops
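One of the kernels listed, SOR, is essentially a relaxation sweep over a 2D grid. A simplified sweep in the spirit of the SciMark kernel (not the actual benchmark code, and without SciMark's timing harness):

```java
// A simplified successive over-relaxation (SOR) sweep, in the spirit
// of the SciMark kernel. Boundary cells are held fixed; interior
// cells are relaxed in place toward the average of their neighbors.
public class Sor {
    static void sweep(double omega, double[][] g) {
        for (int i = 1; i < g.length - 1; i++)
            for (int j = 1; j < g[i].length - 1; j++)
                g[i][j] = omega * 0.25 * (g[i - 1][j] + g[i + 1][j]
                                        + g[i][j - 1] + g[i][j + 1])
                        + (1.0 - omega) * g[i][j];
    }

    public static void main(String[] args) {
        double[][] g = new double[32][32];
        for (double[] row : g) java.util.Arrays.fill(row, 1.0);
        g[16][16] = 0.0;                        // perturb one interior cell
        for (int k = 0; k < 100; k++) sweep(1.25, g);
        System.out.println(g[16][16]);          // relaxes back toward 1.0
    }
}
```

SciMark reports the sustained flop rate of repeated sweeps like this; the kernel itself is only the doubly nested loop.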
Edinburgh Benchmark Set I
• http://www.epcc.ed.ac.uk/javagrande/index_1.html
• Sequential, multi-threaded, and mpiJava versions; C versus Java
• Low-level:
  • Arith: execution of arithmetic operations
  • Assign: variable assignment
  • Cast: casting
  • Create: creating objects and arrays
  • Loop: loop overheads
  • Math: execution of math library operations
  • Method: method invocation
  • Serial: serialisation
  • Exception: exception handling
Edinburgh Benchmark Set II
• Medium size:
  • Series: Fourier coefficient analysis
  • LUFact: LU factorisation
  • SOR: successive over-relaxation
  • HeapSort: integer sorting
  • Crypt: IDEA encryption
  • FFT: FFT
  • Sparse: sparse matrix multiplication
• “Real” applications:
  • Search: alpha-beta pruned search
  • Euler: computational fluid dynamics
  • MD: molecular dynamics simulation
  • MC: Monte Carlo simulation
  • Ray Tracer: 3D ray tracer
Java Performance I
[Benchmark chart not reproduced in this transcript]

Java Performance II
[Benchmark chart not reproduced in this transcript]
Numerics I
• Initially focused on the “Java floating rules” that guaranteed the same (bad) result on all processors
• strictfp: this has been part of Java for some time now. It is a keyword specifying that the original strict (slow) semantics for Java floating point should be followed.
  • The new default allows 15-bit exponents for anonymous (temporary) variables. This tiny change allows Java implementations on the x86 family of processors to run at (nearly) full speed.
  • Also in default mode, the specification of the elementary functions is relaxed to allow any result within one unit in the last place of the correctly rounded exact result. This allows more efficient algorithms to be used (including hardware sin/cos).
  • There is a separate java.lang.StrictMath library with a specific implementation of the functions that produces exactly the same results on all machines. One must call the strict version explicitly to get the slower but certain result.
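The default-versus-strict split can be seen directly in source code; a minimal sketch contrasting java.lang.Math (allowed to differ by about one ulp, and free to use faster platform algorithms) with java.lang.StrictMath (bit-identical everywhere):

```java
// strictfp and StrictMath: opting back into the original strict
// floating-point semantics described above.
public class StrictDemo {
    // strictfp forces strict evaluation (no extended-exponent
    // temporaries) for every expression in this method.
    static strictfp double axpy(double a, double x, double y) {
        return a * x + y;
    }

    public static void main(String[] args) {
        // Math.sin may use a relaxed, faster implementation;
        // StrictMath.sin must reproduce the same result exactly
        // on every machine. The two agree to within about 1 ulp.
        System.out.println(Math.sin(0.5));
        System.out.println(StrictMath.sin(0.5));
        System.out.println(axpy(2.0, 3.0, 4.0));   // 10.0
    }
}
```

(On many JVMs Math.sin simply delegates to StrictMath.sin, so the first two lines often print the same value; the specification merely permits them to differ.)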
Some more on Numerics
• fastfp modifier: there was a JSR for this, but it was withdrawn. Obvious goals include support for fused multiply-add.
  • Marc Snir was the lead, and IBM could not find a replacement, so this is not being pursued.
  • At some point we'd like to resubmit. We are hoping that Joe Darcy would be the lead.
  • Info on fastfp: http://math.nist.gov/javanumerics/reports/jgfnwg-minutes-600.html
• Joe Darcy: Joe has the title of Java Floating-Point Czar (it actually says this on his business card).
  • Joe is working on floating-point issues within Sun and serves as our main technical contact now.
  • He has proposed the inclusion of additional methods in java.lang.Math, with the goal of putting this library on a par with the C math library (libm).
Numerics III
• November 2, 2002 update: http://math.nist.gov/javanumerics/reports/jgfnwg-minutes-1102.html
• True multidimensional arrays indexed using specialized notation (this is JSR 83)
• Operator overloading to support the easy expression of alternate arithmetics
• Complex numbers that are as efficient as primitive types
• A new floating-point mode (i.e., fastfp) that admits the use of fused multiply-add operations in Java, and possibly additional compiler optimizations such as the use of associativity
• Expect to meet every six months
• Note the JMSL Java math library: http://www.vni.com/jmsl/
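The operator-overloading and complex-number wish-list items go together: without overloading, complex arithmetic in Java must be spelled as method calls on heap objects, which is verbose and hard to make as fast as primitives. A small illustration (this Complex class is hypothetical, not from any proposed JSR API):

```java
// A hypothetical immutable Complex class, showing the notational
// cost the numerics working group wanted to remove: z = a*b + c
// becomes a chain of method calls, each allocating a new object.
public final class Complex {
    public final double re, im;

    public Complex(double re, double im) { this.re = re; this.im = im; }

    public Complex add(Complex o) { return new Complex(re + o.re, im + o.im); }

    public Complex mul(Complex o) {
        return new Complex(re * o.re - im * o.im,
                           re * o.im + im * o.re);
    }

    public static void main(String[] args) {
        Complex a = new Complex(1, 2), b = new Complex(3, -1), c = new Complex(0, 1);
        Complex z = a.mul(b).add(c);   // with overloading: z = a * b + c
        System.out.println(z.re + " + " + z.im + "i");   // 5.0 + 6.0i
    }
}
```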
Java and Parallelism
• Message-passing systems
  • mpiJava from the Community Grids Lab: an “oldie but goodie”
  • A pure Java version, MPJ, was planned but not (well) implemented
• OpenMP in Java
  • JOMP from Edinburgh has its own version of the JavaGrande benchmarks
  • http://www.epcc.ed.ac.uk/computing/research_activities/JOMP/index_1.html
• Thread- and RMI-based libraries
  • JavaParty (http://www.ipd.uka.de/JavaParty/) from Michael Philippsen (active in the JG Forum)
  • JavaParty also has an optimized RMI (KaRMI)
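Without OpenMP-style directives, JOMP-style loop parallelism in plain Java means partitioning the iteration space across threads by hand and joining at the end. A minimal sketch of the pattern a "parallel for" plus reduction directive would generate (written in modern Java syntax for brevity):

```java
// Hand-rolled parallel loop with reduction: each thread sums a
// contiguous block of [0, n), then the partial sums are combined.
public class ParLoop {
    static double parSum(final int n, final int nThreads) throws InterruptedException {
        final double[] partial = new double[nThreads];  // one slot per thread: no sharing
        Thread[] workers = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            final int id = t;
            workers[t] = new Thread(() -> {
                int lo = id * n / nThreads, hi = (id + 1) * n / nThreads;
                double s = 0.0;
                for (int i = lo; i < hi; i++) s += i;
                partial[id] = s;
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();    // the implicit barrier of a parallel region
        double sum = 0.0;
        for (double p : partial) sum += p;    // sequential reduction of the partials
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(parSum(1_000_000, 4));   // 0 + 1 + ... + 999999
    }
}
```

A directive-based system like JOMP generates essentially this boilerplate from a single annotated loop, which is the productivity argument for it.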
HPJava
• Conceived as a language for parallel programming, especially suitable for massively parallel, distributed-memory computers.
• Takes various ideas (hopefully the good ones) from High Performance Fortran: the distributed array model and parallel constructs.
• But in many respects HPJava is a lower-level parallel programming language than HPF (it takes the best of the MPI and HPF styles of programming):
  • Explicitly SPMD, requiring the parallel programmer to insert calls to collective communication libraries like MPI or Adlib (a library originally developed to support general distributed-memory parallel compilers)
  • More or less as a by-product, HPJava also has a useful “sequential” subset that just adds scientific multidimensional arrays (à la Fortran 90) to Java (cf. Java Grande numerics)
• http://www.hpjava.org
HPspmd Model
• HPJava was originally intended as a first demonstration of a parallel programming model we call the HPspmd (Single Program Multiple Data) model.
  • Java was chosen as the base language for this demo (instead of Fortran 90 or C++) partly because of the JavaGrande philosophy: we expected Java to be a more productive high-performance computing environment.
• It actually took so long to finish the HPJava preprocessor that in the meantime Java has become comparable in speed to those languages.
  • Because HPJava uses standard JVMs, it leverages all the progress with Java.
An HPF-like Program in HPJava

Procs p = new Procs2(P, P) ;                        // Declare 2d group of processes
on(p) {                                             // Enclosed code executed by that group
  Range x = new ExtBlockRange(N, p.dim(0), 1, 1) ;  // Distributed index ranges, in this
  Range y = new ExtBlockRange(N, p.dim(1), 1, 1) ;  // case extended with ghost regions

  float [[-,-]] u = new float [[x, y]] ;            // A distributed array

  for(int iter = 0 ; iter < NITER ; iter++) {
    Adlib.writeHalo(u) ;                            // Communication – edge exchange

    overall(i = x for 1 : N - 2)                    // Distributed, parallel looping construct
      overall(j = y for 1 + (i` + iter) % 2 : N - 2 : 2)
        u [i, j] = 0.25 * (u [i - 1, j] + u [i + 1, j] + u [i, j - 1] + u [i, j + 1]) ;
  }
}
HPJava vs HPF
• This HPJava program looks like HPF, but the programming model is one of multiple, interacting processes or threads
  • “Loosely synchronous”, not HPF's single-threaded semantics
• We invoke the communication library explicitly to update the ghost regions of the array
  • But because we have high-level collective libraries, this isn't particularly onerous
• Can “break out” of the collective mindset at any time and resort to low-level MIMD node processing and/or message exchange if the algorithm demands it (which is quite common)
Benefits of HPspmd Model
• Translators are much easier to implement than full parallel compilers: no compiler magic is needed, and we immediately inherit the features of the best “standard compilers”.
  • The current HPJava compiler is just a preprocessor converting to standard Java, using a simple translation scheme with essentially no optimization. But performance is not embarrassing (see later).
  • Of course, later we can add optimizations and (hand-coding suggests) improve performance significantly.
• A good (object-oriented) framework for developing specialized parallel libraries.
• HPspmd is designed to have the “ease of writing” of HPF but to allow clearer control of the parallel implementation for somebody who understands the parallel algorithm.
  • HPF was criticized by some as too automatic.
HPJava Architecture
• Translator: full HPJava (Group, Range, on, overall, …) is converted by Java source-to-source translation into sequential Java with multiarrays (int[[*, *]])
• Libraries: Adlib, OOMPH¹, and MPJ¹, layered on the mpjdev device, which runs over native MPI or Jini¹
(¹Not yet implemented)
HPJava Preprocessor Features
• The input language is a strict extension of Java 2.
• Multiarrays are translated into conventional Java 1D arrays.
• The front end implements all compile-time checks required by the Java Language Specification (currently testing against the Jacks suite).
  • Goal: if the preprocessor accepts the source, it never outputs a program the javac back end will reject.
• Carefully preserves line numbering, so run-time exception messages usually point accurately back into the original HPJava source code, making debugging HPJava “easy”.
• Version 1.0 released April 2003 at http://www.hpjava.org.
  • Full source of the preprocessor and libraries is in the public domain.
  • A good framework for other experiments with Java language extensions…
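The multiarray-to-1D translation mentioned above can be sketched concretely: represent a multiarray as a flat Java array plus base and stride integers, so that an element reference becomes an offset computation and array sections can share the same storage with different bookkeeping. An illustration of the general scheme (the real HPJava-generated code differs in detail):

```java
// Sketch of the translation scheme: a 2D multiarray such as
// int[[*, *]] becomes a 1D Java array plus base/stride bookkeeping,
// and a[i, j] turns into data[base + i*stride0 + j*stride1].
// Illustration only -- not the actual HPJava-generated code.
public class Multiarray2D {
    final int[] data;
    final int base, stride0, stride1;

    Multiarray2D(int n0, int n1) {
        data = new int[n0 * n1];
        base = 0; stride0 = n1; stride1 = 1;   // row-major layout
    }

    private Multiarray2D(int[] data, int base, int s0, int s1) {
        this.data = data; this.base = base; this.stride0 = s0; this.stride1 = s1;
    }

    int get(int i, int j)         { return data[base + i * stride0 + j * stride1]; }
    void set(int i, int j, int v) { data[base + i * stride0 + j * stride1] = v; }

    // A transposed "section" shares the same data; only the
    // bookkeeping changes -- the property the translation relies on.
    Multiarray2D transpose() { return new Multiarray2D(data, base, stride1, stride0); }

    public static void main(String[] args) {
        Multiarray2D a = new Multiarray2D(4, 5);
        a.set(3, 2, 42);
        System.out.println(a.get(3, 2));             // 42
        System.out.println(a.transpose().get(2, 3)); // 42 -- same storage
    }
}
```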
Libraries
• Adlib is a comprehensive library for collective operations on distributed arrays, implementing operations like reductions, shifts, and edge exchange for stencil updates.
  • Invoked like MPI, but higher level.
  • Originally implemented to support HPF translation (the shpf and PCRC projects).
  • Originally C++, now Java, implemented on the mpjdev portability layer.
• MPJ is a proposed Java binding of standard MPI.
• OOMPH is an envisaged HPJava binding of MPI-level operations, taking advantage of multiarrays to simplify the API.
Low-Level Messaging for HPC
• mpiJava is our own binding of MPI for Java, implemented as native-method wrappers for “real” MPI (MPICH, Sun HPC, and IBM's MPI).
  • Several other groups developed similar APIs, but mpiJava is probably the most widely used today.
  • Pugh claims Java's new I/O is very fast; he prototyped an NIO-based MPJava but is not actively developing it.
  • Distinguish from “MPI on the Grid” (MPICH-G2).
• MPJ was put forward as a unified “standard” by a small group (including Vladimir Getov, Tony Skjellum, and Carpenter from Indiana), but activity appears to be dormant.
  • The API is quite large and inherits some ugly features from MPI and mpiJava. A smaller, more focused OOMPH API might be more attractive.
  • Note that MPI datatypes are very unsuitable for object-based languages.
  • Sun HPC was interested in MPJ for a while.
Level of Interest in mpiJava?
• Uptake of our mpiJava software over ~five years
• Being used in university courses
[Bar chart: mpiJava downloads per year, 1998–2003, on a scale of 0–900]
Example HPJava Benchmarks on IBM SP3
• Absolute speed, not scaled speed-up
[Benchmark charts not reproduced in this transcript]
Distributed Computing
• Much of the work of the Forum was in distributed computing
• It included several pure-Java frameworks, such as those from European groups (Bal, Caromel, Philippsen)
• The Forum initially focused on fast RMI and a “Java Framework for Computing”, or Java computing portals
• Most workers in this field now position their research in a Grid context
• RMI becomes GridRPC
• Portals become Grid Computing Environments (co-chaired by Fox and Gannon, reflecting the JG heritage)
Background on Indiana Community Grids Laboratory Research
Geoffrey Fox, Director
http://grids.ucs.indiana.edu/ptliupages/
6 Activity Areas in CG Laboratory I
• HPJava: parallel programming with Java
  • MPI- and HPF-style programming in Java (multiarrays)
  • http://www.hpjava.org
  • Build on this for “HPSearch”, with Java bound to “Grid/Web/XPATH/Google” handles
  • Available December 2002; mpiJava available for 3 years
• NaradaBrokering publish/subscribe distributed event/message system
  • http://www.naradabrokering.org
  • “MQSeries/JMS”-style pub/sub applied to collaboration, Grid, and P2P (JXTA)
  • Supports UDP, TCP/IP, firewalls (the actual transport is decoupled from the user call)
  • Used in other projects: collaboration, portal, and handheld
6 Activity Areas in CG Laboratory II
• Online Knowledge Center: DoD HPCC support portal, http://ptlportal.communitygrids.iu.edu/portal/
  • Portal, database, and XML metadata tools
  • Jetspeed and portlet architecture
  • http://www.xmlnuggets.org is an “email group” interface for browsing multiple instances of a schema (also XML-based news groups)
  • A schema wizard gives general user interfaces for each schema
• Gateway Computing Portal
  • DoD HPCMO, geoscience, (bioinformatics, particle physics) applications
  • Web Service based (originally CORBA)
  • Kerberos, SAML security, GCE shell with 70 functions
  • Integrates data and compute Grids
  • http://www.gatewayportal.org/ and http://www.servogrid.org/
  • Portlets in the NCSA Alliance Portal
6 Activity Areas in CG Laboratory III
Components of an Education Grid
• Anabas provides the base JMS-based collaborative e-learning service (Fox co-founded it 2 years ago)
• Collaboration as a Web Service
  • General XGSP specification of a collaborative session, capturing H.323, SIP, JXTA
  • Audio-video conferencing as a web service: Admire (Beihang), Access Grid, VOIP, Polycom, desktop USB
  • Move all tools and shared applications (Word, PowerPoint)
  • A general scheme to make Web Services collaborative using NaradaBrokering
• Carousel handheld collaborative environments
  • iPAQ running the SavaJe Java OS linked to PCs; adding cellphone/PDA tandems
  • SVG as a Web Service demonstrated
  • Universal access
  • http://grids.ucs.indiana.edu/ptliupages/projects/carousel/
HPJava: December 2003
• The HPJava project has been in hibernation (maintenance only) since the release of the HPJava Development Kit, version 1.0, in April this year.
• The release wasn't very aggressively advertised, partly because the project ran out of funding.
  • The rate of downloads is similar to the early days of mpiJava, which was quite aggressively advertised.
• Noticeable growth in downloads of mpiJava in the past few months. Not sure why.
HPJava Software Status
• The 1.0 release was functionally complete and self-contained, with good documentation and a reasonably comprehensive test suite. Reliable, we think.
  • Highly compliant with standard Java: a strict extension of JLS 2nd-edition syntax. Produces standard class files. Uses standard JVMs to execute. Any standard Java code can be invoked.
• Performance suffers from a naïve translation scheme. It would be easy to improve the node code dramatically with standard optimizations. Anybody have compiler experience?
• The HPJava language specification is quite stable and well-defined. Probably some semantic details should be changed to make optimization easier.
  • E.g., make the usage rules for multiarrays more explicitly Fortran-like.
  • Sequential multiarrays deserve special attention: they were an afterthought when we originally defined the language.
Java MPI Status
• mpiJava is popular in its field, and its popularity is apparently still increasing.
  • It is the foundation for communication in the HPJava language, but mostly used standalone at the moment.
• It is old, and the implementation needs a complete rewrite.
  • It is a relatively complex API in which we tried to make every call a JNI wrapper. It is now obvious that this is the wrong strategy.
• It should be rebuilt on top of a layer like mpjdev, which could be implemented using native MPI (or another native HPC platform) or on top of Java NIO (plus Jini?). This should make it much more maintainable.
• The API could also be improved.
  • More NIO-compatible?
Related Systems I: UPC
• UPC is a new parallel version of the C language, supported by various groups and companies.
• Its programming model is distributed shared memory: somewhere in between HPF and OpenMP.
  • Cf. HPJava, which is somewhere in between HPF and MPI.
• Adds shared type signatures for arrays and pointers.
• Block-cyclic distribution formats like HPF, but one-dimensional distribution only.
  • Multidimensional arrays are effectively flattened before mapping.
  • HPJava, like HPF, has true multidimensional distributions.
• Primitives in the language for barriers and locks.
  • HPJava doesn't have these: communication is message- and library-based.
• HPJava and UPC are both SPMD. After that, the approaches and motivations are very different.
Related Systems II: Titanium
• A parallel language from Berkeley, based on Java.
• Titanium syntax = Java 1 + many new features − Java threads.
• The parallel programming model is conceptually similar to UPC, but seems more complicated.
  • Variables can be independently global or local, shared or nonshared (or polyshared).
  • A discipline of single variables and methods, with compile-time checks for valid barrier sequences.
  • A sophisticated model of multiarrays, based on Domain and Point concepts.
  • Many other language-level features…
• Not very compatible with Java: compiles to C++; doesn't use a JVM; thread-centric Java packages like AWT are unavailable.