LHCb Trigger and Data Acquisition System
Beat Jost
CERN / EP
Presentation given at the
11th IEEE NPSS Real Time Conference
June 14-18, 1999
Santa Fe, NM
Outline
Introduction
General Trigger/DAQ Architecture
Trigger/DAQ functional components
Selected Topics
Level-1 Trigger
Event-Building Network Simulation
Summary
RT 99, Santa Fe, 15 June 1999
Beat Jost, Cern
Slide 2
Introduction to LHCb
Special-purpose experiment to measure CP-violation parameters precisely in the B–B̄ system
Detector is a single-arm spectrometer with one dipole
Total b-quark production rate is ~75 kHz
Expected rate from inelastic p-p collisions is ~15 MHz
Branching ratios of interesting channels range between 10⁻⁵ and 10⁻⁴, giving an interesting-physics rate of ~5 Hz
LHCb in Numbers
Number of channels: 900,000
Bunch-crossing rate: 40 MHz
Level-0 accept rate: 1 MHz
Level-1 accept rate: 40 kHz
Readout rate: 40 kHz
Event size: 100 kB
Event-building bandwidth: 4 GB/s
Level-2 accept rate: ~5 kHz
Level-3 accept rate: ~200 Hz
Level-2/3 CPU power: 2·10⁶ MIPS
Data rate to storage: 20 MB/s
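As a quick sanity check, the event-building and storage bandwidths follow directly from the accept rates and the event size (a minimal sketch; variable names are ours):

```python
# Cross-check of the headline LHCb dataflow numbers (values from the list above).
readout_rate_hz = 40e3        # Level-1 accept rate feeding event building
event_size_b = 100e3          # 100 kB average event size
storage_rate_hz = 200.0       # Level-3 accept rate

eb_bandwidth = readout_rate_hz * event_size_b        # event-building bandwidth
storage_bandwidth = storage_rate_hz * event_size_b   # data rate to storage

print(f"{eb_bandwidth/1e9:.0f} GB/s, {storage_bandwidth/1e6:.0f} MB/s")
# → 4 GB/s, 20 MB/s
```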
Slide 3
LHCb Detector
Slide 4
Typical Interesting Event
Slide 5
Trigger/DAQ Architecture
[Figure: dataflow of the LHC-B detector (VDET, TRACK, RICH, ECAL, HCAL, MUON) through the trigger/DAQ chain. The front-end electronics run at the 40 MHz bunch-crossing rate (40 TB/s); the Level-0 trigger, with a fixed latency of 4.0 µs, reduces this to 1 MHz (1 TB/s); the Level-1 trigger, with a variable latency <1 ms, reduces it to 40 kHz. Front-End Multiplexers (FEM) drive front-end links into Readout Units (RU), which inject 4 GB/s into the Read-out Network (RN). Sub-Farm Controllers (SFC, 2-4 GB/s) distribute events over a LAN to the CPU farm running the Level-2 (~10 ms) and Level-3 (~200 ms) event-filter triggers; 20 MB/s goes to storage. Timing & Fast Control (L0, L1, throttle) and Control & Monitoring oversee the whole chain.]
Slide 6
Front-End Electronics
Data Buffering for Level-0 latency
Data Buffering for Level-1 latency
Digitization and Zero Suppression
Front-end multiplexing onto front-end links
[Figure: front-end electronics block diagram (fe_figure2.eps).]
Slide 7
Timing and Fast Control
[Figure: Timing and Fast Control distribution tree. The LHC clock and the Level-0/Level-1 decisions (global gL0/gL1, formed from the LHCb trigger and subdetector triggers, with L0 and L1 trigger data as inputs) enter the Readout Supervisors; RS switches and TTCtx transmitters drive the TTC optical fan-out to TTCrx receivers in the front-end electronics (FEE: front-end chips, L1 buffers, ADCs, control logic) and in the off-detector electronics (ODE: DSPs) feeding the DAQ system. A throttle line runs back from the DAQ; the control system configures the tree. Legend: slow-control flow, clock/trigger flow, data flow.]
Slide 8
Provide a common, synchronous clock to all components needing it
Provide the Level-0 and Level-1 trigger decisions
Provide commands synchronous in all components (resets)
Provide trigger hold-off capabilities in case buffers are getting full
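The trigger hold-off could be sketched as follows; the class names and the watermark value are our assumptions, not part of the LHCb design:

```python
# Illustrative sketch of trigger hold-off ("throttle"): the readout supervisor
# stops issuing accepts while any downstream buffer is close to full.
# Threshold and class names are hypothetical, not from the LHCb design.

class Buffer:
    def __init__(self, capacity, high_watermark=0.9):
        self.capacity = capacity
        self.high_watermark = high_watermark
        self.occupancy = 0

    def nearly_full(self):
        return self.occupancy >= self.high_watermark * self.capacity

class ReadoutSupervisor:
    def __init__(self, buffers):
        self.buffers = buffers

    def accept_allowed(self):
        # Throttle: hold off further triggers while any buffer is nearly full.
        return not any(b.nearly_full() for b in self.buffers)

bufs = [Buffer(capacity=16) for _ in range(4)]
rs = ReadoutSupervisor(bufs)
assert rs.accept_allowed()
bufs[2].occupancy = 15          # one buffer close to full
assert not rs.accept_allowed()  # triggers held off until it drains
```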
Level-0 Trigger
Large transverse-energy (calorimeter) trigger
Large transverse-momentum muon trigger
Pile-up veto
Implemented in FPGAs/DSPs, basically hard-wired
Input rate: 40 MHz
Output rate: 1 MHz
Latency: 4.0 µs (fixed)
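The decision logic amounts to an OR of the trigger sources, gated by the pile-up veto; a toy sketch (thresholds are invented placeholders, not LHCb values):

```python
# Toy Level-0 decision: accept if any trigger source fires, unless the
# pile-up veto flags the crossing. Thresholds are placeholders only.
ET_CALO_THRESHOLD = 3.5   # GeV, hypothetical
PT_MUON_THRESHOLD = 1.5   # GeV/c, hypothetical

def level0_accept(max_calo_et, max_muon_pt, pileup_veto):
    """Return True if the bunch crossing passes Level-0."""
    if pileup_veto:                      # multiple interactions in crossing
        return False
    return (max_calo_et > ET_CALO_THRESHOLD or
            max_muon_pt > PT_MUON_THRESHOLD)

assert level0_accept(5.0, 0.2, pileup_veto=False)      # calorimeter fires
assert level0_accept(1.0, 2.0, pileup_veto=False)      # muon fires
assert not level0_accept(5.0, 2.0, pileup_veto=True)   # vetoed
```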
Slide 9
DAQ Functional Components
Readout Units (RUs)
Multiplex front-end links onto Readout Network links
Merge input fragments into one output fragment
Readout Network (RN)
Provides connectivity between RUs and SFCs for event building
Provides the necessary bandwidth (4 GB/s sustained)
Sub-Farm Controllers (SFCs)
Assemble event fragments arriving from RUs into complete events and send them to one of the connected CPUs
Load balancing among the connected CPUs
CPU farm
Executes the high-level trigger algorithms
Level-2 (input rate: 40 kHz, output rate: 5 kHz)
Level-3 (input rate: 5 kHz, output rate: ~100 Hz)
~2000 processors (~1000 MIPS each)
Note: there is no central event manager.
[Figure: front-end links → RUs → Read-out Network (4 GB/s) → SFCs → LAN → CPU farm (Trigger Level 2 & 3 event filter), with Timing & Fast Control throttle and Control & Monitoring; 20 MB/s to storage.]
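The load balancing performed by an SFC might, for illustration, pick the least-loaded connected CPU for each assembled event (the slide does not specify the actual policy; this is a hypothetical sketch):

```python
# Sketch of an SFC dispatching complete events to the least-loaded of its
# farm CPUs. Purely illustrative; the real SFC scheduling policy is not
# specified here beyond "load balancing".
import heapq

class SubFarmController:
    def __init__(self, n_cpus):
        # (queued_events, cpu_id) min-heap: pop yields the least-loaded CPU.
        self.load = [(0, cpu) for cpu in range(n_cpus)]
        heapq.heapify(self.load)
        self.assignment = {}

    def dispatch(self, event_id):
        queued, cpu = heapq.heappop(self.load)
        heapq.heappush(self.load, (queued + 1, cpu))
        self.assignment[event_id] = cpu
        return cpu

sfc = SubFarmController(n_cpus=4)
cpus = [sfc.dispatch(e) for e in range(8)]
assert sorted(cpus) == [0, 0, 1, 1, 2, 2, 3, 3]  # events spread evenly
```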
Slide 10
Control System
Common integrated controls system
Detector controls (classical 'slow control'): high voltage, low voltage, crates, temperatures, alarm generation and handling, etc.
DAQ controls: classical run control; setup and configuration of all components (FE, trigger, DAQ); monitoring
Same system for both functions
[Figure: controls architecture. A LAN connects readout controllers (ROC) of the readout system, I/O servers (IOS) with PLCs for the LHC experiment sub-detectors and experimental equipment, storage (configuration DB, archives, logfiles, etc.), and a WAN link to other systems (LHC, safety, ...).]
Slide 11
Level-1 Trigger
Purpose
Select events with detached secondary vertices
Algorithm
Based on the special geometry of the vertex detector (r-stations, φ-stations)
Several steps:
track reconstruction in 2 dimensions (r-z)
determination of the primary vertex
search for tracks with a large impact parameter relative to the primary vertex
full 3-dimensional reconstruction of those tracks
Expect a rate reduction by a factor of 25
Technical problem: 1 MHz input rate, 3 GB/s data rate, small event fragments, latency
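The steps above can be sketched in a 2-D toy model (geometry, track parametrization, and cuts are all illustrative, not the actual LHCb algorithm):

```python
# Toy 2-D (r-z) version of the Level-1 vertex-trigger steps: extrapolate
# tracks to the beamline, estimate the primary vertex, and flag tracks
# with a large impact parameter. All numbers are illustrative only.

def z_at_beamline(track):
    """Extrapolate a track (z0, r0, slope = dr/dz) to r = 0."""
    z0, r0, slope = track
    return z0 - r0 / slope

def primary_vertex_z(tracks):
    """Crude primary-vertex estimate: median of the beamline crossings."""
    zs = sorted(z_at_beamline(t) for t in tracks)
    return zs[len(zs) // 2]

def impact_parameter(track, pv_z):
    """r-distance of the track from the primary vertex (small-angle approx.)."""
    z0, r0, slope = track
    return abs(r0 + slope * (pv_z - z0))

def detached_tracks(tracks, ip_cut=0.1):
    """Tracks with a large impact parameter relative to the primary vertex."""
    pv_z = primary_vertex_z(tracks)
    return [t for t in tracks if impact_parameter(t, pv_z) > ip_cut]

# Three prompt tracks from z = 0 and one detached track (offset 0.3 in r):
tracks = [(1.0, 0.3, 0.3), (1.0, -0.2, -0.2), (1.0, 0.5, 0.5), (1.0, 0.8, 0.5)]
assert detached_tracks(tracks) == [(1.0, 0.8, 0.5)]
```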
Slide 12
Level-1 Trigger (2)
Implementation
~32 sources to the switching network
Algorithm running in processors (~200 CPUs)
The basic idea is to have a switching network between the data sources and the processors
In principle very similar to the DAQ; however, the input rate of 1 MHz poses special problems
[Figure: ODE boards feed MUX stages into a network switch; SFCs distribute to their CPUs; a global trigger unit collects the decisions (LTrg) and passes the result on to the DAQ.]
Slide 13
Event-Building Network
Requirements
4 GB/s sustained bandwidth
scalable
expandable
~100 inputs (RUs)
~100 outputs (SFCs)
affordable and if possible commercial (COTS, Commodity?)
Readout Protocol
Pure push-through protocol of complete events to one CPU of the farm
Simple hardware and software
No central control → perfect scalability
Full flexibility for high-level trigger algorithms
Large bandwidth needed
Buffer overflows avoided via a 'throttle' to the trigger
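A minimal sketch of the push-through idea: every RU applies the same static destination rule, so complete events converge on one SFC without any central event manager (the modulo rule and all names are our illustration):

```python
# Sketch of a pure push-through readout protocol: every Readout Unit
# independently forwards its fragment of event n to SFC number n mod N_SFC,
# so complete events converge on one SFC with no central event manager.
# The static modulo rule is illustrative, not the actual LHCb protocol.
from collections import defaultdict

N_SFC = 4
N_RU = 6

def destination_sfc(event_id):
    # Static rule known to all RUs: no communication needed to agree.
    return event_id % N_SFC

sfc_buffers = defaultdict(dict)   # sfc -> {event_id: {ru: fragment}}
for event_id in range(8):
    for ru in range(N_RU):
        sfc = destination_sfc(event_id)
        sfc_buffers[sfc].setdefault(event_id, {})[ru] = f"frag-{event_id}-{ru}"

# Each SFC can assemble an event as soon as all N_RU fragments have arrived.
complete = {e for buf in sfc_buffers.values()
            for e, frags in buf.items() if len(frags) == N_RU}
assert complete == set(range(8))
assert set(sfc_buffers[1]) == {1, 5}   # SFC 1 received events 1 and 5
```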
Slide 14
Event-Building Network Simulation
Simulated technology: Myrinet
Nominal 1.28 Gb/s
Xon/Xoff flow control
Switches:
ideal cross-bar
8x8 maximum size (currently)
wormhole routing
source routing
No buffering inside switches
[Figure: simulation model. Each RU contains a data generator and buffers feeding a NIC (LANai); the NICs connect through the composite switching network to SFC-side NICs, buffers, and fragment assemblers. A trigger block drives the RUs with the trigger signal and receives the throttle.]
Software used: Ptolemy discrete
event framework
Realistic traffic patterns
variable event sizes
event building traffic
Slide 15
Network Simulation Results
Efficiency relative to installed bandwidth
Results do not depend strongly on the specific technology (Myrinet), but rather on its characteristics (flow control, etc.)

Switch Size   FIFO Size   Switching Levels   Efficiency
8x8           n/a         1                  52.5%
32x32         0           2                  37.3%
32x32         256 kB      2                  51.8%
64x64         0           2                  38.5%
64x64         256 kB      2                  51.4%
96x96         0           3                  27.6%
96x96         256 kB      3                  50.7%
128x128       0           3                  27.5%
128x128       256 kB      3                  51.5%

[Figure: efficiency vs. switch size (8 to 128), with 256 kB FIFOs and without FIFOs.]
FIFO buffers between switching levels allow the scalability to be recovered
~50% efficiency appears to be a "law of nature"
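A much cruder model than the Ptolemy simulation also shows why the efficiency saturates well below the installed bandwidth: in an unbuffered crossbar with random destinations, head-of-line blocking limits the sustained throughput (toy Monte Carlo, our assumptions throughout; it does not reproduce the exact figures above):

```python
# Toy Monte Carlo of output contention in an unbuffered NxN crossbar:
# each cycle every input holds a fragment for a random output, only one
# input per output may send, and blocked inputs retry the same destination
# (head-of-line blocking). Qualitative illustration only.
import random

def crossbar_efficiency(n=32, cycles=5000, seed=1):
    random.seed(seed)
    dest = [random.randrange(n) for _ in range(n)]  # pending destination per input
    delivered = 0
    for _ in range(cycles):
        winners = {}                                # output -> the input that sends
        for inp in random.sample(range(n), n):      # random arbitration order
            winners.setdefault(dest[inp], inp)
        for out, inp in winners.items():
            delivered += 1
            dest[inp] = random.randrange(n)         # winner fetches a new fragment
        # losers keep their destination and retry next cycle
    return delivered / (n * cycles)

eff = crossbar_efficiency()
print(f"sustained throughput ≈ {eff:.2f} of installed bandwidth")
```

Under this model the throughput settles at roughly 60% of the installed bandwidth, in qualitative agreement with the ~50% plateau seen in the full simulation.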
Slide 16
Summary
LHCb is a special-purpose experiment to study CP violation
Triggering poses special challenges
Similarity between inelastic p-p interactions and events with B mesons
DAQ is designed with simplicity and maintainability in mind
Push readout protocol → simple, e.g. no central event manager
but harder bandwidth requirements on the readout network
Simulations suggest that the readout network can be realized by adding FIFO buffers between the levels of switching elements
Unified approach to controls
Same basic infrastructure for detector controls and DAQ controls
Both aspects completely integrated but operationally independent
Slide 17