TELL1: A common data acquisition board for LHCb
Guido Haefeli, University of Lausanne
Outline
- LHCb readout scheme
  - LHCb data acquisition
  - Optical links
  - Event building network
- Common readout requirements
  - Trigger rates
  - Buffers
  - Bandwidth
- Data flow on the board
  - Synchronization
  - Level 1 trigger pre-processing and zero-suppression
  - Higher level trigger processing
  - Gigabit Ethernet interface
- Board implementation
  - FPGAs
  - Level 1 buffer
  - Higher level trigger multi event packet buffer
- Summary
LHCb trigger system
- L0: fully synchronous and pipelined, fixed latency; inputs from the Pile-Up, Calorimeter and Muon systems
- L1: software trigger with a bounded maximal latency; uses VELO, TT (and the Outer Tracker)
- HLT: software trigger with access to all subdetectors
LHCb data acquisition
[Diagram: the front-end electronics sit on the detectors in the cavern; optical links of 60-100 m carry the data to the TELL1 boards in the counting room.]
Optical link implementation
Event building network
[Diagram: the front-end electronics feed the TELL1 boards. Level-1 traffic (1.11 MHz, MEPs of 32 events) and HLT traffic (40 kHz, MEPs of 16 events) pass through a multiplexing layer (x2 for L1, x8 for HLT) into the readout network: 94 links carrying 7.1 GB/s in total to 94 sub-farm controllers (SFCs), each fronting a switch and a slice of the ~1800-CPU farm. Everything runs over Gigabit Ethernet on commercial network equipment; a sorter handles the L1-decision path, and L1, HLT and mixed traffic share the network.]
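To put the diagram's numbers together, here is a minimal C sketch (illustrative names; only the MEP packing factors, the 1.11 MHz rate and the SFC count come from the slides) of round-robin MEP-to-SFC assignment and the resulting per-SFC packet rate:

```c
/* Sketch: round-robin assignment of Multi Event Packets (MEPs) to
 * sub-farm controllers (SFCs). Names are illustrative; the slide
 * gives 94 SFCs, MEP packing of 32 events for L1 traffic and 16
 * for HLT traffic. */
#include <stdio.h>

#define NUM_SFC        94
#define L1_MEP_FACTOR  32   /* "MEP /32" in the diagram */
#define HLT_MEP_FACTOR 16   /* "MEP /16" in the diagram */

/* Return the SFC index for a given MEP number (round-robin). */
static unsigned sfc_for_mep(unsigned mep_number)
{
    return mep_number % NUM_SFC;
}

int main(void)
{
    /* L0-accept rate 1.11 MHz -> L1 MEP rate and per-SFC rate. */
    double l1_mep_rate  = 1.11e6 / L1_MEP_FACTOR;   /* ~34.7 kHz */
    double per_sfc_rate = l1_mep_rate / NUM_SFC;    /* ~369 Hz   */

    printf("L1 MEP rate: %.1f kHz\n", l1_mep_rate / 1e3);
    printf("MEP rate per SFC: %.0f Hz\n", per_sfc_rate);
    printf("MEP 1000 goes to SFC %u\n", sfc_for_mep(1000));
    return 0;
}
```

With 32 events per MEP, the packet rate seen by the network drops from 1.11 MHz to roughly 35 kHz, which is the point of MEP packing.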
Trigger rates and buffering
- Max. L0-accept rate = 1.11 MHz
- Max. L1-accept rate = 40 kHz
- The L0 buffer is implemented on the front end and is fixed to 160 clock cycles!
- The L1 buffer holds 58254 events, which corresponds to 52.4 ms at the maximum L0-accept rate.
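A quick cross-check of the buffer numbers (values from this slide):

```c
/* Cross-check: a 58254-event L1 buffer at the maximum L0-accept
 * rate of 1.11 MHz covers the quoted maximal L1 latency. */
#include <stdio.h>

int main(void)
{
    const double l0_accept_rate_hz = 1.11e6; /* max L0-accept rate */
    const unsigned l1_buffer_events = 58254; /* L1 buffer depth    */

    double latency_s = l1_buffer_events / l0_accept_rate_hz;
    printf("L1 latency covered: %.1f ms\n", latency_s * 1e3); /* ~52.5 ms */
    return 0;
}
```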
Bandwidth requirements
- Input data bandwidth for a 24-optical-link motherboard:
  - Optical receiver: 24 fibres x 1.28 Gbit/s = 30.7 Gbit/s
  - Analog receiver: 64 channels x 10 bit @ 40 MHz = 25.6 Gbit/s
- L1 buffer: write bandwidth 30.7 Gbit/s, read bandwidth 4 Gbit/s
- DAQ links: 4 Gigabit Ethernet links
- ECS: 10/100 Ethernet for remote control
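The input figures follow directly from the link count and the sample format; a one-line recomputation of each:

```c
/* Recompute the quoted input bandwidths. */
#include <stdio.h>

int main(void)
{
    double optical_gbps = 24 * 1.28;              /* 24 fibres x 1.28 Gbit/s  */
    double analog_gbps  = 64.0 * 10 * 40e6 / 1e9; /* 64 ch x 10 bit @ 40 MHz  */

    printf("Optical input: %.1f Gbit/s\n", optical_gbps); /* 30.7 */
    printf("Analog input:  %.1f Gbit/s\n", analog_gbps);  /* 25.6 */
    return 0;
}
```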
A bit of history
- The L1 trigger scheme changed: over the last year the maximal L1T latency increased from 1.8 ms to 52 ms (x32). This forced the change from an SRAM FIFO to SDRAM for the L1 buffer.
- Detectors were added to the L1T (TT, OT), and potentially others will follow.
- With the decreasing cost of optical links, the data processing is done in the counting room.
- More and more functionality sits on the readout board, because there is no Readout Unit and no Network Processor any more!
- Event fragments are packed into so-called "Multi Event Packets" (MEPs) to optimize the Ethernet packet size and packet rate; the acquisition board adds the IP destination, does the Ethernet framing, buffers the transmit data, etc. (a sketch follows below).
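The slides do not spell out the MEP format, so the following C sketch uses a hypothetical header layout; it only illustrates the mechanism of concatenating several event fragments into one network payload so that the per-packet overhead and packet rate go down:

```c
/* Sketch of MEP packing with a hypothetical header layout (the real
 * LHCb MEP format is not given on the slides). */
#include <stdint.h>
#include <string.h>

#define MEP_FACTOR 16            /* events per MEP (HLT traffic) */

struct mep_header {              /* hypothetical header fields */
    uint32_t first_event_id;     /* ID of the first packed event    */
    uint16_t n_fragments;        /* number of fragments that follow */
    uint16_t total_len;          /* payload length in bytes         */
};

/* Pack MEP_FACTOR fragments into 'buf'; returns bytes written.
 * The result would then be handed to the Ethernet framer, with the
 * IP destination looked up by the board. */
static size_t mep_pack(uint8_t *buf, uint32_t first_id,
                       const uint8_t *frag[], const uint16_t len[])
{
    struct mep_header h = { first_id, MEP_FACTOR, 0 };
    size_t off = sizeof h;

    for (int i = 0; i < MEP_FACTOR; i++) {   /* concatenate fragments */
        memcpy(buf + off, frag[i], len[i]);
        off += len[i];
    }
    h.total_len = (uint16_t)(off - sizeof h);
    memcpy(buf, &h, sizeof h);               /* write header last */
    return off;
}
```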
How can we make a common useful readout?
- Adaptation to the two link systems is possible with receiver mezzanine cards (A-RxCard for analog, O-RxCard for optical input).
- FPGAs allow adaptation to different data processing.
- Sufficient bandwidth for the entire acquisition path.
- A mezzanine card covers detector-specific needs.
[Diagram: FE data enter via A-RxCards or an O-RxCard into four PP-FPGAs, each with its own L1B; a SyncLink-FPGA merges the streams and drives the RO-Tx. TTCrx (TTC), ECS, FEM, the L1T and HLT outputs and the L0/L1 throttle complete the board.]
Advantages of being common!
- Finding solutions and consensus for new system requirements is much easier.
- Cost reduction due to larger serial-production quantities (300 boards for LHCb).
- Reduced maintenance cost with a single system.
- Common software interfaces.
[Diagram: same board architecture as on the previous slide.]
L1T dataflow
[Diagram: L1T data path. In each PP-FPGA, the 12 inputs from the O-RxCard mezzanine go through ID check and synchronization against the TTC broadcast, L1 pre-processing (L1PP) and cluster encapsulation, buffered in FIFOs, before being written to the L1B (link DDR @ 80 MHz). The SyncLink-FPGA collects the streams of the four PP-FPGAs into the L1T MEP buffer (64 KByte internal SRAM @ 100 MHz), takes destinations from the L1T IP RAM and the L1T DEST FIFO, and the L1T framer feeds the RO-Interface (POS-Level 3), with a data path shared by the 2 channels of the RO-TxCard @ 100 MHz. ECS has access throughout.]
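The ID check and synchronization stage in the PP-FPGA is firmware, but its logic can be pictured in software form (channel count and names here are illustrative, not from the slides):

```c
/* Software-form sketch of the PP-FPGA "ID Check / Sync" stage: each
 * input channel's event ID must match the event ID expected from the
 * TTC broadcast before fragments are merged downstream. */
#include <stdbool.h>
#include <stdint.h>

#define N_CHANNELS 4   /* illustrative number of links per check */

static bool id_check(const uint16_t chan_id[N_CHANNELS], uint16_t ttc_id)
{
    for (int i = 0; i < N_CHANNELS; i++)
        if (chan_id[i] != ttc_id)
            return false;   /* mismatch: flag a sync error */
    return true;
}
```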
HLT dataflow
[Diagram: HLT data path. In each PP-FPGA (Altera Stratix 1S20, 18K LE), the 12 inputs from the O-RxCard mezzanine go through ID check and synchronization against the TTC broadcast, then HLT zero-suppression and event encapsulation (~20 us/event), with 1-4 Kbyte FIFOs in between, before being written to the L1B, a 64 KEvent DDR SDRAM @ 120 MHz (link DDR @ 80 MHz). In the SyncLink-FPGA (Stratix 1S25, 25K LE), the HLT framer (0.9 us/event) builds MEPs into the HLT MEP buffer (1 MByte external QDR SRAM @ 100 MHz, ~320 us/MEP); destinations come from the HLT IP RAM and the HLT DEST FIFO, and the RO-Interface (POS-Level 3) feeds the 2-channel RO-TxCard @ 100 MHz over a shared data path. ECS has access throughout.]
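Zero-suppression itself can be sketched as follows (thresholds, data widths and the actual cluster/encapsulation format are detector-specific and not given on the slides):

```c
/* Sketch of zero-suppression for the HLT path: keep only samples
 * above threshold, as (channel, value) pairs. Illustrative only. */
#include <stddef.h>
#include <stdint.h>

static size_t zero_suppress(const uint16_t adc[], size_t n,
                            uint16_t threshold,
                            uint16_t chan_out[], uint16_t val_out[])
{
    size_t hits = 0;
    for (size_t ch = 0; ch < n; ch++) {
        if (adc[ch] > threshold) {   /* keep hits, drop empty channels */
            chan_out[hits] = (uint16_t)ch;
            val_out[hits]  = adc[ch];
            hits++;
        }
    }
    return hits;   /* number of (channel, value) pairs written */
}
```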
Prototyping
- Motherboard specification, schematics and layout are finished.
- Daughtercard designs:
  - Pattern generator card (available)
  - 12-way optical receiver card (design finished)
  - RO-TxCard, implemented as a dual Gigabit Ethernet card (see the talk from Hans in this session)
  - CCPC and GlueCard for ECS
- Test system:
  - PCI-based data generator card
  - Gigabit Ethernet connection to a PC
Technology used
- FPGAs: Stratix 1S20 (780-pin FBGA) and Stratix 1S25 (1020-pin FBGA)
- Main features used:
  - Memory blocks of 512 Kbit, 4 Kbit and 512 bit
  - DDR SDRAM interface (dedicated circuit)
  - DDR I/Os
  - Terminator technology for on-chip serial and parallel termination
  - DSP blocks for L1T pre-processing
- DDR SDRAM running at a 240 MHz data transfer rate (120 MHz clock) for the L1B
- QDR SRAM running at 100 MHz for the HLT MEP buffer
- 12-layer PCB (50 Ohm)
DDR bank data signal layout
- Equal-length signal traces are required for the DDR SDRAM implementation: a 4 x 48-bit wide bus at a 240 MHz data transfer rate (46 Gbit/s)!
[Board photo: DDR bank data signal routing, 14 cm.]
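A rough estimate of why matched trace lengths matter at this rate (the propagation delay per millimetre is an assumed typical FR-4 figure, not from the slides):

```c
/* Skew budget estimate for the DDR bus: length mismatch converts
 * directly into timing skew against the ~4.2 ns bit period. */
#include <stdio.h>

int main(void)
{
    const double bit_period_ns = 1e3 / 240.0; /* 240 Mbit/s per pin */
    const double ps_per_mm     = 6.7;         /* assumed FR-4 delay */
    const double mismatch_mm   = 10.0;        /* example mismatch   */

    double skew_ns = mismatch_mm * ps_per_mm / 1e3;
    printf("Bit period: %.2f ns\n", bit_period_ns);
    printf("Skew from %.0f mm mismatch: %.3f ns (%.1f%% of bit period)\n",
           mismatch_mm, skew_ns, 100.0 * skew_ns / bit_period_ns);
    return 0;
}
```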
Summary
- After evaluating different concepts for data processing and acquisition, a common readout board for LHCb has been specified and designed.
- It serves 24 optical links with a data transfer rate of 1.28 Gbit/s each.
- The board implements data identification, L1 buffering and zero-suppression.
- It is made for use with standard Gigabit Ethernet equipment.