The Architecture of the ZEUS Micro Vertex Detector DAQ
and Second Level Global Track Trigger

Alessandro Polini
DESY/Bonn

ZEUS MVD and GTT Group: ANL, Bonn Univ., DESY Hamburg and Zeuthen,
Hamburg Univ., KEK Japan, NIKHEF, Oxford Univ., Bologna, Firenze,
Padova, Torino Univ. and INFN, UCL, Yale, York.
Computing Seminar, DESY, 23 June 2003
Outline

• The Micro Vertex Detector Layout
• The ZEUS Experiment Environment & Requirements
• The MVD DAQ and the Global Track Trigger
  – Design, Hardware & Software Solutions
  – The GTT Algorithm
  – Performance and first experience with real data
• Slow Control + Control and Monitoring Software
• Summary and Outlook
Detector Layout
[Figure: MVD layout, forward section (410 mm) and barrel section (622 mm);
e± beam 27.5 GeV, proton beam 920 GeV]

The forward section consists of 4 wheels with 28 wedged silicon sensors
per layer providing r-φ information.

The barrel section provides 3 layers of support frames (ladders) which
hold 5 full modules each, 600 square sensors in total, providing r-φ and
r-z space points.
Barrel MVD: Module and Ladder Structure
[Figure: barrel module and ladder assembly; 125 mm]

Two single sensors are glued and electrically connected by gold-plated
Upilex foils. Two half modules are then glued together to form a full
module. The Upilex connection foils can be bent and glued to the ladder
profile. Five full modules are mounted on a carbon-fiber support.
Forward MVD Layout
Forward wheels:
• 112 Si planes with wedge shape (480 readout strips);
• r- measurement;
• 1 wheel made of 14x2 detectors;
• 4 wheels placed @ z = 311, 441, 571 and 721 mm from IP.
Silicon Microstrip Detectors
• n-doped silicon wafers (300 μm thickness) with p+ implantations
  (12 or 14 μm wide), HAMAMATSU PH. K.K.
• 512 (480 for forward sensors) readout channels.
• Using the capacitive charge sharing, the analogue readout of one strip
  every 6 allows a good resolution (<20 μm) despite the readout pitch of
  120 μm.
• Highest coupling to the front-end electronics if Cc >> Cint > Cb.
• The charge sharing is a non-linear function η of the interstrip
  coordinate x (see the sketch below):

      η = Qright / (Qleft + Qright)

[Figure: strip cross section (120 μm readout pitch, 20 μm strip detail),
showing the p+ implantations of the readout and intermediate strips, the
capacitances Cc, Cint and Cb, a traversing particle, and the charges
QLEFT and QRIGHT as a function of the interstrip coordinate x]
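As a minimal sketch (not from the slides) of how this charge-sharing readout
yields a hit position, the C fragment below computes η = Qright/(Qleft+Qright)
for two neighbouring readout strips and interpolates linearly between them;
the linear η-to-position mapping and the variable names are simplifying
assumptions, since η is in reality a non-linear function of x.

    #include <stdio.h>

    /* Interpolate a hit position between two readout strips from the
     * charge-sharing variable eta = Qright / (Qleft + Qright).
     * A linear eta -> position mapping is assumed for illustration. */
    #define READOUT_PITCH_UM 120.0

    static double hit_position_um(double x_left_um, double q_left, double q_right)
    {
        double qsum = q_left + q_right;
        if (qsum <= 0.0)
            return x_left_um;           /* no signal: fall back to strip centre */
        double eta = q_right / qsum;    /* 0 -> left strip, 1 -> right strip    */
        return x_left_um + eta * READOUT_PITCH_UM;
    }

    int main(void)
    {
        /* example: left readout strip at 0 um, charges in ADC counts */
        printf("hit at %.1f um\n", hit_position_um(0.0, 30.0, 10.0));
        return 0;
    }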
Detector Front-end
Front-end chip HELIX 3.0*
– 128-channel analog pipelined readout system specifically designed for
  the HERA environment (up to 40 MHz).
– Highly programmable for wide and flexible usage.
– ENC[e] ≈ 400 + 40·C[pF] (no radiation damage included, S/N ~ 13).
– Data read out and multiplexed (96 ns) over the analog output.
– Internal test pulse and failsafe token ring (8 chips) capability.

[Photo: front-end hybrid and silicon sensors (125 mm)]
* Uni. Heidelberg Nim A447, 89 (2000)
The ZEUS Detector
[Diagram: the ZEUS three-level trigger system. e± (27.5 GeV) and protons
(920 GeV) collide with a bunch crossing interval of 96 ns (10^7 Hz). The
component front-ends (CTD ~0.7 ms, CAL ~10 ms, others) store data in
~5 μs pipelines while the component first level triggers feed the Global
First Level Trigger (output ~500 Hz). Accepted events move to event
buffers; the component second level triggers (CAL, CTD, others) feed the
Global Second Level Trigger, whose accept/reject decision reduces the
rate to ~40 Hz. The Event Builder assembles accepted events for the
Third Level Trigger CPU farm, which writes ~5 Hz to offline tape.
Overall rate reduction: 500 Hz → 40 Hz → 5 Hz.]
MVD DAQ and Trigger Design
• ZEUS experiment designed at the end of the '80s
  – first high-rate (96 ns) pipelined system
  – with a flexible 3-level trigger
  – main building blocks were Transputers (20 MHz, 20 Mbit/s)
• 10 years later the MVD:
  – 208,000 analog channels (more than the whole of ZEUS)
  – MVD available for triggering from the 2nd level trigger on
• DAQ design choice:
  – use off-the-shelf products whenever possible
  – VME embedded systems for readout
  – commercial Fast/Gigabit Ethernet network
  – Linux PCs for data processing
The Global Track Trigger
Conceptual development path
– MVD participation in the GFLT not feasible: readout latency too large.
– Participation at the GSLT possible:
  • tests pushing ADC/VME data over Fast Ethernet gave acceptable
    rate/latency performance;
  • but track and vertex information poor due to the low number of planes.
– Expand scope to interface data from other tracking detectors:
  • initially the Central Tracking Detector (CTD), which overlaps with
    the MVD barrel detectors;
  • later the Straw Tube Tracker (STT), which overlaps with the MVD wheels.
– Implement the GTT as a PC farm with TCP data and control paths.

Trigger aims
– Higher-quality track reconstruction and rate reduction at the GSLT.
– Primary Z-vertex resolution: 9 cm (CTD only) → 400 μm (+MVD).
– Decision required within the existing SLT budget (<15 ms).
– Eventually sensitive to heavy-quark secondary vertices.

[Figure: dijet MC event display]
The MVD Readout
Custom-made ADC modules*
• 9U VME board + private bus extensions
• Eight 20 MHz ADCs per board reading the detector modules
  (~8000 channels per board); ~30 boards in 3 crates
• 10-bit resolution
• Common mode, pedestal and noise subtraction
• Strip clustering (see the sketch below)
• 2 separate data buffers:
  – cluster data (for trigger purposes)
  – raw/strip data for accepted events
• Design event data sizes:
  – max. raw data size: 1.5 MB/event (~208,000 channels)
  – strip data (3 sigma noise threshold): ~15 kB
  – cluster data: ~8 kB
* Kek Tokyo Nim A436,281 (1999)
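The per-strip processing listed above can be illustrated with a short C
sketch (an illustration only, not the ADC module firmware): pedestal
subtraction, a crude per-chip common-mode estimate, a 3 sigma strip
threshold and grouping of adjacent strips above threshold into clusters.
The array sizes and helper names are hypothetical.

    #include <stdio.h>
    #include <stddef.h>

    #define NSTRIPS 128   /* strips per Helix chip (illustrative) */

    /* Crude common-mode estimate: mean pedestal-subtracted signal of the chip. */
    static double common_mode(const double *raw, const double *ped, size_t n)
    {
        double sum = 0.0;
        for (size_t i = 0; i < n; i++)
            sum += raw[i] - ped[i];
        return sum / (double)n;
    }

    /* Threshold clustering: fills cluster start indices and sizes,
     * returns the number of clusters found. */
    static size_t find_clusters(const double *raw, const double *ped,
                                const double *noise, size_t n,
                                size_t *cl_start, size_t *cl_size, size_t max_cl)
    {
        double cm = common_mode(raw, ped, n);
        size_t ncl = 0, i = 0;
        while (i < n && ncl < max_cl) {
            if (raw[i] - ped[i] - cm > 3.0 * noise[i]) {   /* 3 sigma threshold */
                size_t start = i;
                while (i < n && raw[i] - ped[i] - cm > 3.0 * noise[i])
                    i++;                                   /* extend over neighbours */
                cl_start[ncl] = start;
                cl_size[ncl]  = i - start;
                ncl++;
            } else {
                i++;
            }
        }
        return ncl;
    }

    int main(void)
    {
        double raw[NSTRIPS], ped[NSTRIPS], noise[NSTRIPS];
        size_t start[16], size[16];
        for (size_t i = 0; i < NSTRIPS; i++) { raw[i] = 100.0; ped[i] = 100.0; noise[i] = 2.0; }
        raw[40] = 130.0; raw[41] = 115.0;              /* a fake two-strip hit */
        size_t ncl = find_clusters(raw, ped, noise, NSTRIPS, start, size, 16);
        printf("%zu cluster(s), first at strip %zu, width %zu\n", ncl, start[0], size[0]);
        return 0;
    }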
First Level Trigger Synchronization
Clock and Control Master board*
• For synchronization to the ZEUS GFLT
• Standalone operation provided
• Handles:
  – Clock, Accept/Abort, FLT + bunch crossing numbers
  – Trigger type, readout type, test triggers
  – Busy, Error, Fatal-Error

Slave board
• One Slave board per ADC crate
• Initiates Helix readout of the ADCs on GFLT accept
• Non-accepted triggers roll off the Helix pipeline

Helix Interface fan-out and driver boards
• Helix front-end programming
• Pipeline synchronization and readout

[Diagram: Masterbox connected to Run Control and to the Slave boards and
Helix Interface]
* ZEUS UCL http://www.hep.ucl.ac.uk/zeus/mvd/candc.html
Interfaces to the ZEUS Existing Environment
The ZEUS experiment is based on a Transputer network.

GTT interfaces for data gathering from other detectors (CTD, STT) and the
connection to the existing Global Second Level Trigger are done using
ZEUS 2TP* modules.

CTD, STT and GSLT had to expand their TP networks to communicate with
the MVD.

All newer component interfaces use Fast/Gigabit Ethernet.

The VME TP connections are planned to be upgraded to Linux PCs with a
PCI-TP interface.
* Nikhef NIM A332, 263 (1993)
The MVD Data Acquisition System and GTT
[Diagram: overview of the MVD DAQ and GTT. The MVD HELIX front-end and
patch boxes are driven by a VME HELIX driver crate (LynxOS CPU, HELIX
driver front-end, clock + control). Three MVD VME readout crates
(crate 0 MVD top with the C+C master, crate 1 MVD bottom and crate 2 MVD
forward with C+C slaves) each contain a LynxOS CPU, ADCM modules and
NIM + latency modules, and receive the analog links. The Global First
Level Trigger provides the trigger, busy and error signals. CTD and STT
data arrive over VME TP connections through CTD 2TP and STT 2TP modules,
and GSLT 2TP modules provide the TP connection to the Global Second
Level Trigger. All readout and interface CPUs are connected over a Fast
Ethernet/Gigabit network to the Global Tracking Trigger processors
(GFLT rate ~500 Hz) and to the GTT control + fan-out, which returns the
Global Second Level Trigger decision. Slow control + latency clock
modules, the VME CPU boot server and control, and the main MVD DAQ
server (run control, Event Builder interface) complete the system, with
a network connection to the ZEUS Event Builder (~50 Hz) and to the ZEUS
Run Control and Online Monitoring environment.]
GTT Hardware
Implementation
– VME systems:
  • NIKHEF-2TP VME-Transputer modules (CTD/STT and GSLT/EVB trigger
    result interfaces)
  • Motorola MVME2400 450 MHz CPUs (MVD readout; 3 for the CTD/STT
    interfaces)
  • Motorola MVME2700 367 MHz (GSLT/EVB trigger result interface)
– PC farm and switches:
  • 12 DELL PowerEdge 4400 dual 1 GHz (GTT processor farm)
  • DELL PowerEdge 4400 dual 1 GHz and DELL PowerEdge 6450 quad 700 MHz
    (GTT/GSLT and EVB/GSLT result interfaces)
  • 3 Intel Express 480T Fast/Gigabit 16-port network switches

Thanks to Intel Corp., who provided the high-performance switch and
PowerEdge hardware via a Yale grant.
VME Data Readout and Transfer
• Data readout and transfer using the LynxOS 3.01* real-time OS on
  network-booted Motorola MVME2400/MVME2700 PPC VME computers
  – VME functionality via the purpose-developed VME driver/library
    uvmelib*: multi-user VME access, contiguous memory mapping, DMA
    transfers, VME interrupt handling and process synchronization
• System interrupt driven (data transfer on last ADC data ready)
  – read out cluster data and send it (TCP/IP) to a "free" GTT
    processing node
• On receiving the GSLT decision (TCP/IP)
  – read out strip data on "accept" and send them (TCP/IP) to the
    Event Builder
• Priority scheduling of the interrupt, readout and send task pipeline
  required (see the sketch below)
  – real-time kernel → LynxOS
  – otherwise the readout-transfer latency is unstable
• 1 CPU per ADC crate
* http://www.lynuxworks.com
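As a hedged illustration of the priority scheduling mentioned above (not the
actual MVD DAQ code), the sketch below starts the readout and network-send
tasks as POSIX threads in the SCHED_FIFO real-time class, with the readout
thread at higher priority so that the interrupt-to-readout path is served
first; the task bodies are empty placeholders and the priority values are
arbitrary.

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Placeholder tasks standing in for the real pipeline: wait for the
     * "last ADC data ready" interrupt and read out over VME; ship the
     * data via TCP to a free GTT node or to the Event Builder. */
    static void *readout_task(void *arg) { (void)arg; for (;;) sleep(1); return NULL; }
    static void *sender_task(void *arg)  { (void)arg; for (;;) sleep(1); return NULL; }

    static int spawn_rt_thread(pthread_t *t, void *(*fn)(void *), int prio)
    {
        pthread_attr_t attr;
        struct sched_param sp;
        sp.sched_priority = prio;

        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);  /* fixed-priority RT class */
        pthread_attr_setschedparam(&attr, &sp);
        return pthread_create(t, &attr, fn, NULL);
    }

    int main(void)
    {
        pthread_t rd, tx;
        /* readout above sender: the IRQ -> readout latency matters most */
        if (spawn_rt_thread(&rd, readout_task, 20) ||
            spawn_rt_thread(&tx, sender_task, 10))
            perror("pthread_create (real-time privileges usually required)");
        pthread_join(rd, NULL);
        pthread_join(tx, NULL);
        return 0;
    }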
UVMElib* Software Package
• Exploits the Tundra Universe II VME bridge features
  – 8 independent windows for R/W access to the VME
  – flexible interrupt and DMA capabilities
• Library layered on an enhanced uvmedriver
  – mapping of VME addresses AND contiguous PCI DRAM segments
  – each window (8 VME, N PCI) is addressed by a kernel uvme_smem_t
    structure (a usage sketch follows below):

    typedef struct {
        int      id;
        unsigned mode;       /* addressing mode: A32D32, A24D32, A16D16..., SYS, USR... */
        int      size;
        unsigned physical;   /* physical address, for DMA transfers */
        unsigned virtual;    /* virtual address, for R/W operations */
        char     name[20];   /* symbolic mapping                    */
    } uvme_smem_t;

• Performance
  – 18 MB/s DMA transfer on the standard VMEbus
  – less than 50 μs response on a VME IRQ
• Additional features
  – flexible interrupt usage via global system semaphores
  – additional semaphores for process synchronization
  – DMA PCI↔VME transfer queuing
  – easy system monitoring via semaphore status and counters

* http://mvddaq.desy.de/
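A hypothetical usage sketch of these fields (the lookup and DMA helpers below
are invented stand-ins, not the real uvmelib calls, and a 32-bit VME CPU is
assumed since addresses are stored as unsigned): a process resolves a shared
window by its symbolic name, uses the virtual address for programmed R/W and
the physical address for a DMA transfer.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef struct {
        int      id;
        unsigned mode;
        int      size;
        unsigned physical;   /* for DMA transfers  */
        unsigned virtual;    /* for R/W operations */
        char     name[20];   /* symbolic mapping   */
    } uvme_smem_t;

    /* Invented stubs standing in for the library; for the sketch the
     * "VME window" is just a local buffer. */
    static uint32_t fake_window[1024];
    static uvme_smem_t demo = { 0, 0, sizeof fake_window, 0, 0, "adc_cluster_buf" };

    static uvme_smem_t *uvme_lookup(const char *name)
    {
        demo.virtual = (unsigned)(uintptr_t)fake_window;   /* 32-bit assumption */
        return strcmp(name, demo.name) == 0 ? &demo : NULL;
    }

    static int uvme_dma_read(unsigned vme_addr, unsigned pci_phys, int nbytes)
    {
        (void)vme_addr; (void)pci_phys;
        return nbytes;                                     /* pretend success */
    }

    int main(void)
    {
        uvme_smem_t *win = uvme_lookup("adc_cluster_buf"); /* hypothetical window name */
        if (!win)
            return 1;

        /* programmed read through the mapped virtual address */
        volatile uint32_t *status = (volatile uint32_t *)(uintptr_t)win->virtual;
        printf("status word: 0x%08x\n", (unsigned)*status);

        /* bulk transfer into contiguous PCI DRAM via DMA (example VME address) */
        return uvme_dma_read(0x08000000u, win->physical, win->size) < 0;
    }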
VME Software and Monitoring
• All VME DAQ software is synchronized via UVMElib system semaphores (+IRQ)
• Custom-designed VME "all purpose latency clock + interrupt board"
  – full DAQ-wide latency measurement system (16 μs)
• Data transfer over the Fast Ethernet/Gigabit network using TCP/IP
  connections
  – data transferred as a binary stream with an XDR header
  – data file playback capability (Monte Carlo or dumped data)

[Diagram: MVD VME readout crates, GSLT VME interface and CPU boot
server/control, each with a LynxOS CPU, clock + control, ADCM modules and
NIM + latency clock modules, connected over the front-end network together
with the slow control + latency clock modules]
GTT Network Connections and Message Transfer Definitions
Message types exchanged over the GTT network:

SETUP transition:
1. Credit allocation
2. Credit-socket resolution
3. Credit list

ACTIVE state:
4. Free credit
5. Data to algorithm
6. Algorithm result
7. Algorithm finished, free credit
8. GSLT result
9. Algorithm result banks and MVD cluster data
10. MVD strip data
11. Standalone run and latency measurements
12. Histograms and pedestals

[Diagram: connection topology between the MVD, CTD and STT readout, the
GTT processors, the FROMGSLT/TOGSLT interfaces and the TOEVB/Event Builder
interface, driven by the GFLT accept trigger decision; arrows are labelled
with the message numbers above, servers bind+accept while clients connect,
and the GSLT decision and GTT decision paths are shown.]
Sizing the GTT
Naïve estimate of the GTT node multiplicity:
– ignore network transit times
– assume a higher rate than expected
– the GTT latency at the GSLT must not be worse than the existing CTD
  component
– control (credit) based identification of the next free GTT node
  (not round-robin) preferred

Simulating the mean and maximum waiting time for a free node gives
~10 nodes needed (a toy version of such a simulation is sketched below).

[Plot: GTT credit distribution from a typical luminosity run (12 nodes)]
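A minimal sketch of that kind of sizing study, with assumed numbers (500 Hz
GFLT input rate, ~20 ms node occupancy per event) rather than the original
simulation: events arrive as a Poisson process, each occupies one GTT node,
and the program prints the mean and maximum wait for a free node as the node
count varies.

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define RATE_HZ    500.0     /* assumed GFLT input rate          */
    #define PROC_MS     20.0     /* assumed node occupancy per event */
    #define NEVENTS   200000

    /* Exponential deviate with the given mean (u in (0,1]). */
    static double expo(double mean)
    {
        double u = (rand() + 1.0) / ((double)RAND_MAX + 1.0);
        return -mean * log(u);
    }

    int main(void)
    {
        for (int nodes = 8; nodes <= 14; nodes += 2) {
            double *free_at = calloc((size_t)nodes, sizeof *free_at);
            double t = 0.0, wait_sum = 0.0, wait_max = 0.0;

            for (int ev = 0; ev < NEVENTS; ev++) {
                t += expo(1000.0 / RATE_HZ);           /* next arrival, ms  */
                int best = 0;                          /* node free earliest */
                for (int i = 1; i < nodes; i++)
                    if (free_at[i] < free_at[best]) best = i;
                double start = free_at[best] > t ? free_at[best] : t;
                double wait  = start - t;
                wait_sum += wait;
                if (wait > wait_max) wait_max = wait;
                free_at[best] = start + PROC_MS;       /* node busy until then */
            }
            printf("%2d nodes: mean wait %6.2f ms, max wait %7.2f ms\n",
                   nodes, wait_sum / NEVENTS, wait_max);
            free(free_at);
        }
        return 0;
    }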
GTT Algorithm Description
Modular algorithm design
– Two concurrent algorithms foreseen:
  • barrel (MVD+CTD)
  • forward (MVD+STT)
– Process one event per host
– Multithreaded event processing (see the sketch below):
  • data receiving and unpacking
  • concurrent algorithms
  • time-out
  • data transfer for accepted events

At present only a barrel algorithm is implemented; a forward algorithm is
in development.
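A minimal pthread sketch of the per-event threading with time-out described
above (illustrative only, not the GTT code): the main thread stands in for
data receiving/unpacking, launches the algorithm in a worker thread, waits
for it with a deadline, and falls back to a default answer if the algorithm
does not finish in time. The 1 s time-out and the result convention are
assumptions.

    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <time.h>

    static pthread_mutex_t lock    = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  done_cv = PTHREAD_COND_INITIALIZER;
    static int done = 0, result = 0;

    /* Placeholder for the barrel track-finding algorithm. */
    static void *barrel_algorithm(void *event)
    {
        (void)event;
        pthread_mutex_lock(&lock);
        result = 1;                    /* e.g. "accept" */
        done = 1;
        pthread_cond_signal(&done_cv);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t worker;
        struct timespec deadline;

        clock_gettime(CLOCK_REALTIME, &deadline);
        deadline.tv_sec += 1;          /* assumed 1 s time-out */

        pthread_create(&worker, NULL, barrel_algorithm, NULL /* event data */);

        pthread_mutex_lock(&lock);
        while (!done) {
            if (pthread_cond_timedwait(&done_cv, &lock, &deadline) == ETIMEDOUT)
                break;                 /* too slow: use a default answer */
        }
        int answer = done ? result : 0;
        pthread_mutex_unlock(&lock);

        pthread_join(worker, NULL);
        printf("trigger answer: %d\n", answer);
        return 0;
    }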
Barrel Algorithm Description
Find tracks in the CTD and extrapolate them into the MVD to resolve the
pattern recognition ambiguity:
– find segments in the axial and stereo layers of the CTD
– match axial segments to get r-φ tracks
– match MVD r-φ hits (see the sketch below)
– refit the r-φ track including the MVD r-φ hits

After finding 2-D tracks in r-φ, look for 3-D tracks in z vs. axial track
length s:
– match stereo segments to the r-φ track to get positions for the z-s fit
– extrapolate to the inner CTD layers
– if available, use the coarse MVD wafer position to guide the
  extrapolation
– match MVD z hits
– refit the z-s track including the z hits

Constrained or unconstrained fit:
– pattern recognition is better with constrained tracks
– secondary vertices require unconstrained tracks
– unconstrained track refit after the MVD hits have been matched
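To illustrate the "match MVD r-φ hits" step, here is a minimal sketch (not
the GTT code): an r-φ track through the beam line is parametrised by its
azimuth phi0 and curvature c (so that its azimuth at radius r is
phi0 + asin(c*r)), it is extrapolated to the MVD hit radii, and hits inside
a fixed φ road are accepted. The road width, track parameters and hit values
are made-up examples.

    #include <math.h>
    #include <stdio.h>

    #define PHI_ROAD 0.01   /* assumed road half-width, radians */

    /* Azimuth of a circular track through the beam line at radius r,
     * with curvature c = 1/(2R): phi(r) = phi0 + asin(c * r). */
    static double track_phi_at(double phi0, double curvature, double r)
    {
        return phi0 + asin(curvature * r);
    }

    int main(void)
    {
        double phi0 = 0.50, curvature = 1.0e-3;      /* example track, 1/mm   */
        double hit_r[]   = { 50.0, 75.0, 100.0 };    /* example hit radii, mm */
        double hit_phi[] = { 0.551, 0.573, 0.612 };  /* example hit azimuths  */

        for (int i = 0; i < 3; i++) {
            double dphi = hit_phi[i] - track_phi_at(phi0, curvature, hit_r[i]);
            printf("hit %d %s (dphi = %+.4f rad)\n",
                   i, fabs(dphi) < PHI_ROAD ? "matched" : "rejected", dphi);
        }
        return 0;
    }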
First Tracking Results
[Figure: GTT event display, run 42314, event 938]
[Plot: physics data vertex distribution, run 44569, showing the nominal
vertex and backscattering from collimator C5 (mm)]
[Plot: dijet Monte Carlo vertex resolution]

Resolution including the MVD from MC: ~400 μm
First Tracking Results (cont.)

[Figure: GTT event display of another background event, shown with the
same physics-data vertex distribution and dijet Monte Carlo vertex
resolution plots as on the previous slide]
First GTT Latency Results
[Plots: MVD VME SLT readout latency (with respect to the ADC IRQ), CTD VME
readout latency with respect to the MVD, GTT latency after complete trigger
processing, MVD-GTT latency as measured by the GSLT, and mean GTT latency
vs. GFLT rate per run (ms, Hz)]

• 2002 HERA running, after the luminosity upgrade, was compromised by high
  background rates
  – mean data sizes from the CTD and MVD much larger than the design values
• Low data-occupancy rate tests (Monte Carlo and HERA)
• Sept 2002 runs were used to tune the data-size cuts
  – allowed the GTT to run with acceptable mean latency and tails at the
    GSLT
  – the design rate of 500 Hz appears possible
MVD Slow Control
CANbus is the principal fieldbus used:
2 ESD CAN-PCI/331* dual CANbus adapters in 2 Linux PCs.

Each SC sub-system uses a dedicated CANbus:
– Silicon detector/radiation monitor bias voltage:
  30 ISEG EHQ F0025p 16-channel supply boards** +
  4 ISEG ECH 238L UPS 6U EURO crates
– Front-end hybrid low voltage:
  custom implementation based on the ZEUS LPS detector supplies (INFN TO)
– Cooling and temperature monitor:
  custom NIKHEF SPICan box ***
– Interlock system:
  Frenzel+Berg EASY-30 CAN/SPS ****

MVD slow control operation:
– channel monitoring performed typically every 30 s
– CAN emergency messages are implemented
– a wrong SC state disables the experiment's trigger
*http://www.esd.electronic **http://www.iseg-hv.com ***http://www.nikhef.nl/user/n48/zeus_doc.html
****http://www.frenzel-berg.de/produkte/easy.html
Software Issues
• Standard C software used throughout, but:
  – rewrote the LynxOS VME driver to exploit the TUNDRA chipset
  – ROOT used for GUIs (Run Control, Slow Control, histogram display, etc.)
  – multi-threaded when needed
  – web browser access to configuration, status and summary information*
• Hosts
  – VME LynxOS systems network booted (from a disk-based LynxOS MVME2700)
  – Linux PCs (SuSE 8.1) disk booted
  – executables, etc. mounted via NFS
  – a boot-started daemon process starts/stops advertised processes
• Process communications (a header sketch follows below)
  – single XDR definition file used
  – single TCP HUB process:
    • allocates a unique name on client request
    • permanent or temporary message forwarding based on name, XDR message
      type and MD5 hash
    • most recent message (name, type, hash) stored
* http://mvddaq.desy.de
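As a hedged sketch of such a header (the field layout below is an assumption
for illustration, not the actual MVD XDR definition file), this fragment uses
the standard Sun RPC XDR routines to serialise a client name, a message type
and an MD5 digest into a buffer that a hub could inspect for forwarding.

    #include <rpc/xdr.h>    /* Sun RPC XDR routines (libtirpc on newer systems) */
    #include <stdio.h>
    #include <string.h>

    /* Assumed header layout for illustration only. */
    struct msg_header {
        char     *name;      /* unique client name assigned by the hub */
        unsigned  type;      /* XDR message type                       */
        char      md5[16];   /* MD5 hash identifying the definition    */
    };

    static int encode_header(char *buf, unsigned buflen, struct msg_header *h)
    {
        XDR xdrs;
        xdrmem_create(&xdrs, buf, buflen, XDR_ENCODE);
        if (!xdr_string(&xdrs, &h->name, 64) ||
            !xdr_u_int(&xdrs, &h->type) ||
            !xdr_opaque(&xdrs, h->md5, sizeof h->md5))
            return -1;
        return (int)xdr_getpos(&xdrs);          /* bytes written */
    }

    int main(void)
    {
        char buf[256];
        struct msg_header h = { "gtt_node_03", 42, { 0 } };  /* made-up values */
        memset(h.md5, 0xab, sizeof h.md5);                   /* placeholder digest */

        int n = encode_header(buf, sizeof buf, &h);
        printf("encoded header: %d bytes\n", n);
        return n < 0;
    }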
Run Control
• Operation modes:
  – standalone: use the ROOT GUI
  – in ZEUS: use a gateway process and a GUI viewer
• Configuration:
  – a run is defined by
    • run type (PHYSICS, DAQTEST)
    • trigger type (STD-PHYSICS, RATE-TEST)
    • run number
  – CPP used to sequence the list of processes + input variables
• Process execution:
  – process control starts/stops processes via daemons
• Pedestal, time scan and charge injection runs are done by the shift crew

[Screenshots: Run Control GUI, Histogram Display GUI]
Slow Control Software
• Slow Control controller process
  – simplified process control
  – sequences transitions to ON, OFF and STANDBY
  – by sending a configuration to each sub-system
  – interfaces to the BBL3 experiment interlock, which disables the
    trigger if in a wrong state
• Each sub-system
  – handles CANbus traffic:
    • control commands
    • monitoring
    • emergency messages
  – performs the required transitions
  – produces monitoring histograms, etc.
  – flags errors

[Screenshots: Slow Control GUI, Cooling Control GUI]
Calibration and DQM
• Calibration runs (noise and pedestal runs)
• Online and offline DQM tasks
  – noise and pedestals, strip maps, bad channels, timing, multiplicity
    studies and ADC signal profiles
• Detector performance stored in an offline MySQL-accessible database
  – needed for the offline data analysis
  – useful for tracking the detector performance

[Plots: test pulse/gain; test pulse time scan (radiation damage)]
Summary and Outlook
• The MVD and GTT systems have been successfully integrated into the ZEUS
  experiment.
• 267 runs with 3.1 million events were recorded between 31/10/02 and
  18/02/03 with the MVD on and DQM (~700 nb⁻¹).
• The MVD DAQ and GTT architecture, built as a synthesis of custom
  solutions and commercial off-the-shelf equipment (real-time OS + Linux
  PCs + Gigabit network), works reliably.
• The MVD DAQ and GTT performance (latency, throughput and stability) is
  satisfactory.
• Next steps:
  – enable utilization of the GTT barrel algorithm result at the GSLT
  – finalize development and integration of the forward algorithm

So far very encouraging results. Looking forward to routine
high-luminosity data taking.
HELIX 3.0 Schematics
[Backup slide: schematic of the HELIX 3.0 front-end chip]
VME Systems: DAQ Monitor Consoles
[Screenshots of the DAQ monitor consoles: internal data buffer (SLT/EVB),
latency measurements, list of UVME shared memories, UVME system semaphores
(process synchronization), CTD TP interface]