eVLBI Progress


High-Bandwidth Radio Astronomy Data Transport
Background
• Radio Astronomy and Radio Interferometers
• Very Long Baseline Interferometry
• Current VLBI Technology
EVN-NREN Project
• Objectives
• Current status and results
Future Possibilities
• Wavelength switched networks
• VLBI as a GRID application
• Distributed correlation
• Post-correlator processing
• Data distribution
Questions?
Title & Outline
TERENA Network Conference
June 2004
Rhodes
Steve Parsley
JIVE
• Supermassive black hole
• Mass: 10^9 Suns
• Brightness: 10^12 Suns
• 1 light-day across
VLBI can measure the velocity of the telescopes to better than 1 mm/year, which is used to evaluate the rotation, orientation and shape of the Earth.
http://www.cv.nrao.edu/~abridle/dragnparts.htm
[Images: VLA, WSRT, MERLIN]
Very Long Baseline Interferometry
VLBI: Summary
• Scientists submit proposals to a review committee.
• About twenty experiments are selected.
• Three sessions per year, each with two weeks of continuous observations.
• Up to 20 telescopes per experiment.
• All telescopes observe the object at the same time.
• Telescope signals are correlated (= multiply/accumulate) on all baselines, at many different separations and orientations (sketched below).
• A Fourier transform yields a high-resolution image of the radio source, better than any other instrument (~tens of micro-arcseconds).
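To make the multiply/accumulate step concrete, here is a minimal lag-correlation sketch in Python (an illustration only, not the correlator's actual processing chain): two hypothetical sample streams x and y from a pair of telescopes are multiplied and accumulated over a range of delays, which is the per-baseline operation the correlator hardware performs for every pair of telescopes.

import numpy as np

def correlate_baseline(x, y, n_lags=32):
    """Multiply/accumulate two telescope signals at a range of delays (lags)."""
    n = min(len(x), len(y))
    lags = np.arange(-n_lags, n_lags + 1)
    vis = np.empty(len(lags), dtype=complex)
    for i, lag in enumerate(lags):
        if lag >= 0:
            vis[i] = np.sum(x[lag:n] * np.conj(y[:n - lag])) / n   # multiply/accumulate
        else:
            vis[i] = np.sum(x[:n + lag] * np.conj(y[-lag:n])) / n
    return lags, vis

# Toy usage: a common signal buried in independent noise at each telescope.
rng = np.random.default_rng(0)
common = rng.normal(size=4096)
x = common + rng.normal(size=4096)
y = common + rng.normal(size=4096)
lags, vis = correlate_baseline(x, y)
print(lags[np.argmax(np.abs(vis))])   # peaks at lag 0 for a perfectly aligned baseline

Accumulating such products on many baselines, at many separations and orientations, and Fourier-transforming the result is what produces the image described in the last bullet above.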
Correlator
• Highly dedicated super-computer
• 16 thousand billion operations per second
• 1024 correlator chips working in parallel
• Capable of processing 16 × 1 Gbit/s data streams
Tape recorders
• Record/replay at 1 Gb/s with tape spinning at 8 m/s
• ~1 TByte per tape
• 1980s technology
• Expensive: recorder/player $100k, tapes $1.2k, ferrite heads $9k
• No further increase in performance feasible.
Enter eVLBI: VLBI over international networks
Connecting to the telescopes is not easy. They are deliberately built far from cities, so it's expensive. The principle must be demonstrated first…
…in a two-year Proof of Concept (PoC) project we plan to connect up to six telescopes to JIVE in real time.
PoC Targets
• For the EVN and JIVE:
  • Feasibility of eVLBI: costs, timescales, logistics.
  • Standards: protocols, parameter tuning, procedures at telescope and correlator.
  • New capabilities: higher data rates, improved reliability, quicker response.
• For GÉANT and the NRENs:
  • To see significant network usage, with multiple Gbit streams converging on JIVE.
  • The minimum is three telescopes (not including Westerbork).
  • Must be seen to enable new science, not just solve an existing data-transport problem.
  • Real-time operation is seen as the ultimate aim; buffered operation is accepted as a development stage.
eVLBI Proof-of-Concept Project
• DANTE/GÉANT: Pan-European Network
• SURFnet: Dutch NREN
• GARR: Italian NREN
• UKERNA: UK NREN
• PSNC: Polish NREN
• DFN: German NREN
• KTHNOC/NORDUnet: Nordic NREN
• Manchester University: Network application software
• JIVE: EVN Correlator
• Westerbork telescope: Netherlands
• Onsala Space Observatory: Sweden
• MRO: Finland
• MPIfR: Germany
• Jodrell Bank: UK
• TCfA: Poland
• CNR IRA: Italy
Correlator Interface
[Diagram: fibre pair from Amsterdam, Cisco ONS 15252 optical splitters, LX optics converters, GE lines to the correlator]
eVLBI Milestones
• September 2002: First international optical-fibre eVLBI fringes between UK and the Netherlands at the iGRID conference.
• May 2003: First use of FTP for VLBI session fringe checks.
• September 2003: eVLBI data transfer between Bologna and JIVE at 300 Mb/s.
• November 2003: Cambridge – Westerbork fringes detected only 15 minutes after observations were made.
  • 64 Mb/s, with disk buffering at JIVE only.
• November 2003: Onsala Space Observatory (Sweden) connected at 1 Gb/s.
• January 2004: Disk-buffered eVLBI session:
  • Three telescopes at 128 Mb/s for the first eVLBI image.
  • On – Wb fringes at 256 Mb/s.
• March 2004: First intercontinental real-time eVLBI experiment, between Onsala, Sweden and Westford, Massachusetts, USA.
• April 2004: Three-telescope, real-time eVLBI session:
  • Fringes at 64 Mb/s.
  • First real-time EVN image at 32 Mb/s.
Network Topology
[Topology diagram: telescope links converging on the correlator at Dwingeloo]
• Onsala, Sweden: Gbit link via Chalmers University of Technology, Gothenburg
• Jodrell Bank, UK: 150 Mbit link via Network North-West
• Cambridge, UK: MERLIN microwave link
• Westerbork, Netherlands: Gbit link
• Medicina, Italy: data hand-carried to the Bologna POP
• Dwingeloo (correlator): DWDM link
The lines to Amsterdam also connect us to global networks.
The future
• Now: 1 Gb/s per telescope.
• Target for 2010: 30 Gb/s per telescope.
• Vital to complement the capabilities of other instruments and techniques.
• Requires new data collection, transport and processing solutions.
• A single 16-telescope experiment generates 2.5 PetaBytes of raw data (arithmetic sketched below).
• Data from all telescopes must be presented to the data processor simultaneously and with very precise synchronization.
• To avoid a huge data storage/buffering problem we will therefore need stable, high-bandwidth, point-to-point data paths for several weeks at a time.
• The eVLBI application is ripe for the concept of connection-oriented networks based on optically switched infrastructures.
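As a rough, back-of-the-envelope check of the 2.5 PB figure, here is a sketch assuming the 2010 target rate of 30 Gb/s per telescope, 16 telescopes, and a typical 12-hour experiment (the experiment length quoted later in this talk); the combination of these figures is my assumption, not stated on this slide.

# Rough check of the "2.5 PB per experiment" figure.
telescopes = 16
rate_bits_per_s = 30e9         # 2010 target: 30 Gb/s per telescope (assumption)
seconds = 12 * 3600            # a typical 12-hour experiment (assumption)

total_bytes = telescopes * rate_bits_per_s * seconds / 8
print(f"{total_bytes / 1e15:.1f} PB")   # ~2.6 PB, consistent with the ~2.5 PB quoted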
Is VLBI a suitable GRID Application?
1. Data Transport

  Telescopes:   16    4
  Baselines:   120    6
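The table follows from the fact that N telescopes form N*(N-1)/2 baselines, so the correlated output grows roughly quadratically while the input is only N telescope streams; a one-line check:

# Baselines (telescope pairs) for N telescopes: N*(N-1)/2
for n in (4, 16):
    print(n, "telescopes ->", n * (n - 1) // 2, "baselines")   # 4 -> 6, 16 -> 120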
Is VLBI a suitable GRID Application?
2. Processing Hardware
• Custom LSI chip
• 4 × 10^9 complex MACs per second
• Correlator has 1024 chips working in parallel
• Technologically contemporary with the 486/66 processor
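Putting this together with the correlator slide earlier: 1024 chips at 4 × 10^9 complex MACs per second give about 4 × 10^12 complex MACs per second in aggregate, consistent with the "16 thousand billion operations per second" quoted earlier if a complex MAC is counted as a few real operations. The factor of four below is my assumption, used only to show that the two figures agree:

# Aggregate correlator throughput from the per-chip figure.
chips = 1024
macs_per_chip = 4e9                    # complex multiply/accumulates per second per chip
total_macs = chips * macs_per_chip     # ~4.1e12 complex MACs/s

# Assumption: count one complex MAC as ~4 real operations (indicative only).
real_ops = total_macs * 4
print(f"{real_ops:.1e} ops/s")         # ~1.6e13, i.e. "16 thousand billion" ops/s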
So, arguments against GRID correlation:
• A central, dedicated processor will always be more efficient than an array of general-purpose processors.
• If data are centralised, all baselines can be evaluated in parallel. Distributed processing implies duplication of data to multiple processing nodes.

But, arguments for GRID correlation:
• Dedicated processors are VERY expensive.
• General-purpose processors are VERY cheap (especially if they belong to someone else!).
• The middleware needed to orchestrate the process is being developed for the GRID.
• If the data are time-sliced, the total amount of data to be moved is the same as in the central-processor case, but more evenly distributed over the network (see the sketch below).
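A minimal sketch of the time-slicing argument, with hypothetical numbers (8 nodes, 1 Gb/s per telescope, a 12-hour experiment): each node receives one time slice from every telescope and correlates all baselines for that slice, so every byte of telescope data still crosses the network exactly once.

# Sketch: distribute correlation by time-slicing the telescope streams.
telescopes = 16
nodes = 8                                        # hypothetical number of GRID nodes
experiment_seconds = 12 * 3600
slice_seconds = experiment_seconds // nodes      # one contiguous time slice per node
bytes_per_sec_per_telescope = 1e9 / 8            # 1 Gb/s per telescope

# A node correlates ALL baselines for its own slice, so it needs that slice
# from every telescope, but no telescope data is sent twice.
per_node_bytes = telescopes * slice_seconds * bytes_per_sec_per_telescope
total_moved = nodes * per_node_bytes

central_case = telescopes * experiment_seconds * bytes_per_sec_per_telescope
assert total_moved == central_case               # same total traffic, spread over nodes
print(f"per node: {per_node_bytes/1e12:.1f} TB, total: {total_moved/1e12:.1f} TB")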
VLBI Correlation: GRID Computation task
[Diagram: a controller/data concentrator feeding distributed processing nodes]
Post-correlation processing
• Shorter integration times are required for wide-field imaging.
• Current maximum correlator output: 10 MByte/s.
• 12 hours to image/Fourier-transform a 12-hour experiment.
• Upgrade to 160 MByte/s almost complete: 7 TByte for a typical 12-hour experiment (see the arithmetic sketch below).
• Months to process a 12-hour experiment using existing resources.
• Interactive deconvolution requires much faster than real-time imaging/FT.
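A quick check of the 7 TByte figure, assuming the upgraded 160 MByte/s output runs for the full 12-hour experiment:

# Post-correlation data volume after the output upgrade.
output_rate = 160e6             # bytes/s, upgraded correlator output (160 MByte/s)
experiment_seconds = 12 * 3600  # a typical 12-hour experiment

volume = output_rate * experiment_seconds
print(f"{volume / 1e12:.1f} TB")   # ~6.9 TB, i.e. the ~7 TByte quoted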
Post-correlator processing
• 4 × compute node: 1.8 GHz Opteron, 2 GB RAM
• 6 × storage node: 1.8 GHz Opteron, 4 GB RAM, 4 TByte RAID
• InfiniBand interconnect, 24 ports:
  • Low latency: 6 µs
  • High bandwidth: 10 Gb/s
[Photo: the tape recorders]