LHCb Online System TDR
Data Acquisition and Experiment Control
Presentation to the LHCC
23 January 2002
Beat Jost
Cern / EP
On behalf of the LHCb Collaboration
Outline
- Introduction
- Scope of the System
- System Context
- Design Goals and Principles
- System Description
  - Architecture
  - Components
    - Timing and Fast Control
    - Data Flow
    - Experiment Control
    - General Computing Infrastructure
- Partitioning in Practice
- Scale of the System
- Cost, Milestones and Responsibilities
- Conclusion
Scope of the System

The LHCb Online System consists of three major subsystems:
- Data Acquisition (DAQ) System
- Timing and Fast Control (TFC) Subsystem
- Experiment Control System (ECS)

[Figure: the data-flow chain overseen by the Experiment Control System — the L0 trigger reduces the 40 MHz bunch-crossing rate to a 1 MHz L0 accept rate and the L1 trigger to a 40 kHz L1 accept rate; the Front-End Electronics deliver 100 kB events at 40 kHz over 500+ links to the Data Multiplexing stage, which reduces them to ~60 links into Event Building; Event Filtering writes 200 kB events at 200 Hz over a single link to Data Storage. Throttle signals feed back to the Timing and Fast Control system.]
System Context

[Figure: context diagram — the Online System (DAQ, TFC and ECS) sits at the centre, interfacing with the LHCb Trigger, the LHCb Detector, Data Processing/Offline Computing, the LHC Accelerator, Infrastructure Services, Operations, and the Running Modes/Partitioning requirements.]
Design Goals and Principles (1)

Influenced by experience gained running recent experiments
- Architecture-centred
- Technology independence
- Simplicity
  - Simple functionalities and protocols
  - Separation of controls and data path
- Reliability
- Scalability
  - Upgrade path to higher rates without change of architecture and protocols
- Uniformity
  - As small a number of component types and link technologies as possible
  - Lower cost and easier maintenance
Design Goals and Principles (2)
- Main-stream technologies where possible
  - Lower cost and improved reliability
- Homogeneous control system (same tools for Detector, DAQ and Run control)
  - Leverages all developments in the field of controls
  - Easier maintenance
- Support for Partitioning
  - Higher operational and diagnostic efficiency
Partitioning
- Support different running modes, e.g.
  - Physics
  - Calibration
  - Tests
- Allow multiple independent and concurrent activities
- Important consequences for
  - the architecture of the distribution of timing and trigger signals
  - the architecture of the control hardware and software
  - the architecture and physical realization of the readout system
Performance Requirements

Parameter                                       Value
Average Physics Event Size                      100 kB
Average Level-1 Trigger Rate                    40 kHz
Average RAW Data Rate                           4 GB/s
Total CPU Power for Online Data Processing      110000 SI95
Average Event Size to Storage (RAW+ESD data)    200 kB
Average Rate to Storage                         200 Hz
Average Data Rate to Storage                    40 MB/s
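The bandwidth figures follow directly from the event sizes and rates in the table; a minimal sketch of the arithmetic, using only the numbers quoted above:

```python
# Consistency check of the bandwidth figures quoted in the table above.

L1_RATE_HZ = 40e3          # average Level-1 trigger rate
EVENT_SIZE_B = 100e3       # average physics event size (100 kB)
STORAGE_RATE_HZ = 200      # average rate to storage
STORAGE_EVENT_B = 200e3    # average event size to storage (RAW+ESD)

raw_rate = L1_RATE_HZ * EVENT_SIZE_B              # bytes/s into the readout network
storage_rate = STORAGE_RATE_HZ * STORAGE_EVENT_B  # bytes/s to permanent storage

print(f"RAW data rate:   {raw_rate / 1e9:.1f} GB/s")      # -> 4.0 GB/s
print(f"Rate to storage: {storage_rate / 1e6:.0f} MB/s")  # -> 40 MB/s
```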
System Architecture

Functional Elements
- Dataflow system
  - Front-End Multiplexers (FEM)
  - Readout Units (RU)
  - Readout Network (RN)
  - Sub-Farm Controllers (SFC)
  - Event Filter Farm
- Timing and Fast Control (TFC)
- Experiment Control System (ECS)
- General Computing Infrastructure

[Figure: architecture diagram — the LHCb detector (VELO, TRACK, ECAL, HCAL, MUON, RICH) is read out by the Front-End Electronics at 40 MHz (40 TB/s); the Level-0 trigger (fixed latency, 4.0 µs) reduces the rate to 1 MHz (1 TB/s) and the Level-1 trigger (variable latency, <2 ms) to 40 kHz; the Front-End Links feed the Front-End Multiplexers and Readout Units, which send 4 GB/s into the Readout Network; the Sub-Farm Controllers distribute the built events to the CPUs of the Event Filter (trigger levels 2 & 3, variable latency, L2 ~10 ms, L3 ~200 ms), and 40 MB/s are written to Storage. The Timing and Fast Control system distributes clock and trigger signals and receives the throttle; the ECS controls and monitors all stages.]
TFC Architecture
- Designed around the RD12-TTC system
- Functional Components
  - Clock transmission
  - TFC signal generation: Readout Supervisor
  - TFC signal distribution: TFC switch and the RD12-TTC system (TTCtx's, optical couplers, TTCrx's)
  - TFC signal handling: Front-End Electronics
  - Buffer overflow control: central in the Readout Supervisor (synchronous part), plus throttle signals and infrastructure (throttle ORs and throttle switches)
  - Partitioning support

[Figure: TFC architecture — the LHC clock (with GPS receiver, clock fanout, BC and BCR) and the L0 and L1 trigger decisions (via trigger splitters, with an optional local trigger) enter a pool of Readout Supervisors; the TFC switch routes the encoded signals of the selected Readout Supervisor to the Front-End Electronics of each subdetector through per-subdetector TTCtx modules and optical couplers; throttle signals from the Front-End Electronics and the DAQ are collected by throttle ORs and the L0/L1 throttle switches and fed back to the Readout Supervisors.]
Readout Supervisor Design

The Readout Supervisor is the 'heart' of the TFC sub-system.

Functionality
- Handle the inputs of the trigger systems
- Generate internal triggers
- Control buffers
- Generate synchronous commands and reset signals
- Encode the clock and trigger decisions for the TTC system
- Provide data to the Dataflow system for recording

[Figure: Readout Supervisor block diagram — inputs from the LHC clock, the L0 and L1 triggers, the throttles and the ECS; internal blocks include a trigger generator, a trigger controller, an L1 derandomizer, a reset/command generator, an ECS interface, the RS front-end and the L0/L1 TTC encoder; outputs go to the Front-End Electronics via the TTC system and to the DAQ.]
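The list above can be read as a simple control loop: accept a trigger only if no throttle is asserted and the derandomizer has room, otherwise convert the accept into a reject. The sketch below is a conceptual illustration of that logic, not the FPGA design; the class name, method names and the buffer depth are illustrative assumptions.

```python
# Conceptual sketch of the Readout Supervisor trigger gating (illustrative only;
# the real device is an FPGA design working synchronously with the LHC clock).

class ReadoutSupervisorSketch:
    def __init__(self, derandomizer_depth=16):        # depth is an assumed value
        self.derandomizer_depth = derandomizer_depth
        self.derandomizer_fill = 0                     # events waiting to be read out
        self.throttled = False                         # OR of all throttle inputs

    def set_throttle(self, asserted: bool):
        """Throttle input from the throttle ORs / DAQ (buffer-overflow protection)."""
        self.throttled = asserted

    def on_trigger(self, accept: bool) -> bool:
        """Return the decision actually broadcast over the TTC system."""
        if not accept:
            return False
        if self.throttled or self.derandomizer_fill >= self.derandomizer_depth:
            return False                               # convert the accept into a reject
        self.derandomizer_fill += 1                    # event now occupies front-end buffers
        return True

    def on_event_read_out(self):
        """Called when the front-end has shipped one event downstream."""
        self.derandomizer_fill = max(0, self.derandomizer_fill - 1)
```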
TFC System Implementation
- TTC components from the RD12 collaboration
- Readout Supervisor implemented with FPGAs, with emphasis on flexibility
- TFC Switch implemented with standard components (ECL)
- Throttle Switches implemented with FPGAs
- Crucial performance tests have been performed successfully (the TTC system can be used to transmit two trigger decisions at the required rate)
Data Flow Subsystem

Functionality
- Collect the data belonging to one trigger from the Front-End Electronics boards and assemble them in one CPU for the software triggers
- Collect the data of positive trigger decisions and the reconstructed data and write them to permanent storage

Functional Components
- Multiplexing layer (FEMs, RUs)
- Readout Network layer ("Event-Building")
- Sub-Farm Controllers
- Event-Filter Farm

[Figure: the same architecture diagram as before, highlighting the dataflow path — Front-End Links into the FEMs and Readout Units, 4 GB/s through the Readout Network, Sub-Farm Controllers and Event Filter CPUs, and 40 MB/s to Storage.]
Data Links and Protocols
- Only point-to-point links between modules
  - No shared busses (VME bus, etc.)
  - Improved reliability and diagnostics capabilities
- Push protocol
  - The source sends data to the destination as soon as they are available (no further synchronisation)
  - Assumes that enough buffer space is always available at the destination (throttle to stop the trigger)
  - No central Event Manager
- (Gb) Ethernet as the link technology throughout the system
  - Large market and large range of speeds
  - Low price
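A minimal sketch of the push protocol with throttle feedback described above; the buffer size and threshold are illustrative assumptions, and in the real system the throttle acts on the trigger through the TFC system rather than on the sender directly.

```python
# Sketch of the push protocol with throttle feedback (illustrative values).

from collections import deque

class Destination:
    def __init__(self, capacity=64, throttle_threshold=48):
        self.buffer = deque()
        self.capacity = capacity
        self.throttle_threshold = throttle_threshold

    def receive(self, fragment):
        # Push protocol: the source never asks; data simply arrives.
        # The design assumes the buffer never overflows, because the throttle
        # pauses the trigger well before that point.
        assert len(self.buffer) < self.capacity, "buffer overflow must not happen"
        self.buffer.append(fragment)

    def throttle_asserted(self) -> bool:
        """Signal fed back (via the throttle ORs) to pause the trigger."""
        return len(self.buffer) >= self.throttle_threshold

    def process_one(self):
        if self.buffer:
            return self.buffer.popleft()
```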
Data Multiplexing
- Reduce the number of links (n:1)
- Front-End Multiplexers (FEM) and Readout Units (RU)
- RUs are the same as FEMs, but have to conform to the protocols imposed by the Readout Network

[Figure: the RU/FEM application — 4:1 data merging; the event fragments arriving on the four input links are combined into a single output fragment carrying a common header and trailer. The surrounding architecture diagram is the same as shown before.]
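A minimal sketch of the 4:1 merging performed by a FEM/RU: fragments of the same event arriving on the four input links are concatenated into one output fragment with a common header and trailer. The field names and frame layout are illustrative assumptions, not the actual transport format.

```python
# Illustrative 4:1 data merging as done by a FEM/RU (frame layout is assumed).

def merge_fragments(event_id: int, fragments: list[bytes]) -> bytes:
    """Combine the fragments of one event from 4 input links into one output fragment."""
    assert len(fragments) == 4, "a 4:1 stage expects one fragment per input link"
    payload = b"".join(fragments)
    header = event_id.to_bytes(4, "big") + len(payload).to_bytes(4, "big")
    trailer = (sum(payload) & 0xFFFF).to_bytes(2, "big")   # toy checksum
    return header + payload + trailer

# Example: four input fragments belonging to event 42 become one larger fragment.
out = merge_fragments(42, [b"\x01\x02", b"\x03", b"\x04\x05\x06", b"\x07"])
print(len(out))   # 8-byte header + 7-byte payload + 2-byte trailer = 17 bytes
```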
Front-End Multiplexer/Readout Unit
- Two R&D activities for the implementation have been performed
  - Conventional solution using FPGAs
    - Prototypes produced, functional
  - Solution based on Network Processors (IBM NP4GS3)
    - Software-based, programmable solution
    - Software written and benchmarked by simulation and measurement → performance sufficient
- Pursued strategy
  - Baseline: Network Processor
  - Backup: FPGA-based solution
Network Processors
- The concept appeared in 1999
- Their designated application domain is the input stage of high-end network switches
- Every major chip manufacturer has a network processor line in its product suite, and development is moving very fast (2nd and 3rd generations)
Inside a Network Processor

Single chips or chipsets containing
- Large numbers (8-16) of RISC processor cores
  - Several hardware threads
  - Hardware support for specialized functions
- On-chip memory, plus interfaces for external memories
- Integrated network interfaces

[Figure: block diagram of a network processor — a processor complex of RISC cores with hardware assist, a MAC/frame processor connected to the PHYs, a scheduler, a search engine with routing and bridging tables, packet buffer memory, and a general-purpose CPU for control and monitoring.]
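To give a feel for the programming model, the sketch below mimics the idea of many cores/hardware threads each picking up a frame, consulting a lookup table and forwarding it. It is a plain-Python analogy only and does not use the NP4GS3 picocode or tool chain; the table contents and frame format are invented for illustration.

```python
# Plain-Python analogy of the network-processor model: many threads, each
# handling one frame at a time with the help of a lookup ("search engine") step.

from concurrent.futures import ThreadPoolExecutor

FORWARDING_TABLE = {0x01: "port-2", 0x02: "port-5"}      # routing/bridging table

def handle_frame(frame: bytes) -> str:
    dest = frame[0]                                      # toy "header parse"
    out_port = FORWARDING_TABLE.get(dest, "drop")        # table lookup
    return f"frame of {len(frame)} bytes -> {out_port}"

frames = [bytes([0x01, 0xAA]), bytes([0x02, 0xBB, 0xCC]), bytes([0x09])]

# 8-16 RISC cores with several hardware threads each; here just worker threads.
with ThreadPoolExecutor(max_workers=8) as pool:
    for result in pool.map(handle_frame, frames):
        print(result)
```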
FEM/RU Baseline Implementation
- Architecturally: 8 fully connected Gb Ethernet ports
- Applications in LHCb
  - Data merging (FEM/RU)
  - Switching (Readout Network)
  - Potentially final event-building (input stage to the SFCs)

[Figure: block schematic of the baseline module — two IBM NP4GS3 network processors interconnected through their DASL interfaces, each with its Gb Ethernet PHYs and external DDR DRAM and SRAM, throttle outputs, an ECS interface on PCI with I2C, JTAG and a 10/100 Base-T Ethernet (RJ45) connection, a clock island (125 MHz, 53.33 MHz) and a power island providing the required supply voltages.]
Readout Network Implementation
- The Readout Network has to provide
  - the connectivity between the RUs and the SFCs
  - the necessary bandwidth to sustain the expected data rates (~4 GB/s)
- Two possibilities for the implementation
  - a commercial, 'monolithic' switching fabric
  - a composite switching fabric based on Readout Unit modules, with different topologies (Banyan, 'Fully Connected', etc.)
- The crucial criterion, besides available bandwidth and cost, is the capability to cope with the expected traffic pattern
Network Topologies
- "Fully Connected" Topology
- Banyan Topology

[Figure: the basic structure and a 63-source x 72-destination configuration of the 'fully connected' topology, and a 64 x 64 port configuration of the Banyan topology.]
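One attraction of the Banyan topology is that it is self-routing: each stage of switching elements picks its output port from one digit of the destination address. The sketch below shows that routing rule for a network built from k x k elements; it is a generic illustration, not the simulated NP-based fabric, and the element size is an assumption.

```python
# Self-routing in a Banyan network built from k-by-k switching elements
# (generic illustration; the simulated fabric uses NP-based modules).

def banyan_route(destination: int, ports: int, k: int) -> list[int]:
    """Return, stage by stage, the output port (0..k-1) taken towards `destination`."""
    stages, size = 0, 1
    while size < ports:                  # number of stages = log_k(ports)
        size *= k
        stages += 1
    route = []
    for stage in range(stages):          # most significant digit first
        digit = (destination // k ** (stages - 1 - stage)) % k
        route.append(digit)
    return route

# A 64x64 network from 4x4 elements needs 3 stages; destination 37 = (2,1,1) in base 4.
print(banyan_route(37, ports=64, k=4))   # -> [2, 1, 1]
```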
Expected Readout Network Performance
- Simulation results for the Banyan and 'Fully Connected' solutions, based on Network Processor boards

[Figure: simulated maximum buffer occupancy in kB (0-400) as a function of the relative load (0-1) — maximum shared buffer occupancy for the 64+64 port Banyan topology and maximum output buffer occupancy for the 72D+63S port fully connected topology, with the 66% load point marked on both plots.]

- A 66% load (80 MB/s for Gb Ethernet) is a safe working point
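The 66% working point can be cross-checked against the raw link speed and the aggregate data rate quoted earlier, assuming the ~60 RU links of the earlier slides share the 4 GB/s evenly:

```python
# Cross-check of the 66% working point (numbers taken from earlier slides).

GBE_RAW_MBPS = 125.0        # 1 Gbit/s = 125 MB/s raw
SAFE_LOAD = 0.66

print(f"66% of Gb Ethernet: {SAFE_LOAD * GBE_RAW_MBPS:.0f} MB/s")  # ~82 MB/s (~80 MB/s quoted)

TOTAL_RATE_MBPS = 4000.0    # ~4 GB/s through the Readout Network
N_LINKS = 60                # ~60 RU links feeding the network
per_link = TOTAL_RATE_MBPS / N_LINKS
print(f"Average load per link: {per_link:.0f} MB/s "
      f"({per_link / GBE_RAW_MBPS:.0%} of a Gb Ethernet link)")    # ~67 MB/s, ~53%
```

The average per-link load therefore sits below the 66% safe working point.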
Event Filter Farm
- Components
  - Sub-Farm Controllers
  - Sub-Farms
  - The number of SFCs is determined by the data rate
  - The number of CPUs is determined by the required CPU power
- Features
  - Separation of the data and controls paths
  - Event-building takes place at the input of the SFCs → the RUs only know about the SFCs

[Figure: the Readout Network feeds the Sub-Farm Controllers, each connected to the CPUs of its sub-farm; a separate Controls Network links the SFCs and farm CPUs to a Controls PC.]
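Because event building happens at the input of the SFCs and the RUs only know about the SFCs, every RU can pick the destination SFC independently from the event number alone (for example with a simple modulo rule), and an SFC declares an event complete once a fragment from every RU has arrived. The sketch below illustrates that idea; the assignment rule, counts and names are illustrative assumptions.

```python
# Illustrative static event assignment and event building at the SFC input.

N_SFC = 60          # order of magnitude taken from the architecture slides
N_RU = 60

def sfc_for_event(event_id: int) -> int:
    """Every RU applies the same rule, so no central event manager is needed."""
    return event_id % N_SFC

class SubFarmController:
    def __init__(self):
        self.partial_events = {}            # event_id -> {ru_id: fragment}

    def receive(self, event_id: int, ru_id: int, fragment: bytes):
        frags = self.partial_events.setdefault(event_id, {})
        frags[ru_id] = fragment
        if len(frags) == N_RU:              # one fragment from every RU: event is built
            self.dispatch(event_id, self.partial_events.pop(event_id))

    def dispatch(self, event_id: int, fragments: dict):
        # Hand the complete event to a free CPU of the sub-farm (not modelled here).
        print(f"event {event_id} built from {len(fragments)} fragments")
```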
Event Filter Farm Implementation
 It’s required that the farm CPUs are equipped with 2 Ethernet
interfaces in order to separate the controls and data paths
 Several options exist for implementing the Event-Filter Farm
 Standard PC boxes
 Cheap
 Manufacturer independent
 Large requirements in floor-space and cooling
 1U rack-mounted PCs
 Less cheap
 High cooling density needed
 Manufacturer independent
 Micro-server blades




Extremely compact
Lower cooling requirements and crate-based
‘Expensive’
Manufacturer dependent
 We will follow the developments in IT and the other LHC experiments
and will decide in due course on the solution to apply
Experiment Control System

The ECS is in charge of the control and monitoring of
- Data Acquisition and Trigger
  - FE electronics, event building, EFF, etc.
- Detector Operations (formerly 'Slow Controls')
  - Gas, HV, LV, temperatures, ...
- Experimental Infrastructure
  - Cooling, ventilation, electricity distribution, ...
- Interaction with the outside world
  - Magnet, accelerator system, safety system, etc.
ECS Interfaces to Board-Level Electronics
- Two basic application areas in LHCb
  1. On/near the detector, with moderate radiation but the potential of Single Event Upsets (SEU)
  2. Counting room and surface, with no radiation problems
- Envisaged solutions
  1a) SPECS (Serial Protocol for ECS)
      - evolution of the Atlas SPAC
      - radiation tolerant
  1b) Atlas ELMB-based solution
      - based on CAN-bus
      - radiation tolerant
  2. Credit-Card PCs
      - PCs with a very small form factor (85x65 mm2)
      - very flexible
      - Ethernet
ECS Interfaces to Board-Level Electronics

[Figure: photograph of a Credit-Card PC.]
DAQ and Front-End Control

Issues are
- Radiation tolerance
- Performance
- External interfaces

[Figure: a Controls PC on the LAN drives a master interface (M); a long-distance technology (~10 m to ~100 m) connects it, optionally through a conversion board (C) and a short-distance technology (~1 m), to slave interfaces (S) on the Front-End Electronics (~1000+ of them), which provide I2C, JTAG and a parallel bus to the boards.]
ECS Hardware Architecture

Features
- LAN as the integration technology
- Hierarchical
- Scalable
- Using industry standards (fieldbuses, PLCs, Ethernet)

[Figure: Control PCs on a LAN, with WAN access and storage for the configuration DB, archives, log files, etc., connect downwards to the Front-End Electronics through master/slave interfaces (I2C, JTAG, parallel bus), to controllers/PLCs and fieldbus nodes attached to the experimental equipment, and to other systems (LHC, safety, ...).]
ECS Software Architecture

Features
- Hierarchical decomposition into systems, sub-systems, ..., devices (DCS and DAQ)
- Local decision capabilities in the sub-systems → scalability

[Figure: tree of Control Units and Device Units — the ECS at the top, with DCS, DAQ, DSS, GAS, LHC and T.S. branches; the DCS branch splits into DetDcs1 ... DetDcsN and the DAQ branch into DetDaq1 ..., each into SubSys1, SubSys2, ..., SubSysN, down to Device Units Dev1, Dev2, Dev3, ..., DevN connected to the devices (HW or SW); commands flow down the tree, status and alarms flow up.]
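A minimal sketch of the command/state flow in such a tree: a control unit forwards commands to its children and derives its own state from theirs, which is what gives each sub-system local decision capability. This is a conceptual illustration, not the PVSS II/SMI implementation; all class and state names are assumptions.

```python
# Conceptual sketch of hierarchical control units (not the PVSS II / SMI code).

class DeviceUnit:
    def __init__(self, name):
        self.name = name
        self.state = "READY"

    def command(self, cmd):
        # A device unit translates commands into settings on the HW/SW device.
        self.state = {"Configure": "READY", "Start": "RUNNING", "Stop": "READY"}.get(cmd, self.state)

class ControlUnit:
    def __init__(self, name, children):
        self.name = name
        self.children = children          # ControlUnits or DeviceUnits

    def command(self, cmd):
        for child in self.children:       # commands propagate down the tree
            child.command(cmd)

    @property
    def state(self):
        # States propagate up: a unit is RUNNING only if all its children are.
        states = {child.state for child in self.children}
        return states.pop() if len(states) == 1 else "MIXED"

# ECS -> DAQ -> sub-system -> devices, as in the tree above.
daq = ControlUnit("DAQ", [ControlUnit("SubSys1", [DeviceUnit("Dev1"), DeviceUnit("Dev2")])])
ecs = ControlUnit("ECS", [daq])
ecs.command("Start")
print(ecs.state)    # -> RUNNING
```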
ECS Software Implementation
- The ECS software will be based on
  - a commercial SCADA (Supervisory Control And Data Acquisition) system (PVSS II)
  - an LHC(b)-wide common framework
    - with utilities and tools
    - enabling detector groups and the central online group to develop applications that can easily be integrated into a single, coherent system later
- Examples of interesting tools and technologies are
  - OLE for Process Control (OPC)
  - Distributed Information Manager (DIM)
  - State Management Interface (SMI)
ECS Software Implementation

[Figure: internal structure of a Control Unit and of a Device Unit, both built on PVSS II. A Control Unit contains an FSM describing its behaviour, logging & archiving, alarm handling, and the ownership & partitioning logic; it exchanges commands and states with its 'parent' (or an operator) above and its 'children' below, and receives configuration data from the configuration database. A Device Unit contains an FSM interface, logging & archiving, alarm handling and a device driver that applies settings to, and reads values from, the HW/SW device.]
General Computing Infrastructure

Acts as the integrating element connecting all the components described so far.

[Figure: the Readout Network and the CPU farm connect through a data switch to physics storage servers (40 TB) and physics compute servers, with a link to CERN for central data recording (CDR); a controls switch connects the experiment to general storage servers (1 TB), general compute servers and the CERN network.]
Partitioning in Practice
- Initial state: all detectors running together
- Final state: Detector 1 running stand-alone (with 2 sub-farms), the others running together
- The common run is stopped, the partitions are reconfigured, and 'Start Run' is issued separately to the two new partitions

[Figure: on the ECS side, the control tree (ECS → DCS, Trigger and DAQ, with DetDCS, DetDAQ and sub-system nodes down to the devices) is split so that Detector 1 is controlled by its own branch; on the TFC side, a second Readout Supervisor is taken from the pool and the TFC switch and throttle switch are programmed so that the Front-End Electronics, FEMs and RUs of Detector 1, together with two SFCs of the farm, form an independent partition, while Detectors 2-5 continue to run under the original Readout Supervisor.]
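Conceptually, creating the partition shown above amounts to allocating a free Readout Supervisor, programming the TFC and throttle switches to route only Detector 1's signals through it, and assigning a subset of the sub-farms. A sketch of that bookkeeping, with all names and numbers illustrative:

```python
# Illustrative bookkeeping for the partitioning example above.

rs_pool = ["RS1", "RS2", "RS3"]           # pool of Readout Supervisors

# Initial state: one partition, all detectors running together.
partitions = {
    "GLOBAL": {"rs": "RS1", "detectors": {"Det1", "Det2", "Det3", "Det4", "Det5"},
               "sub_farms": set(range(10))},
}

def split_off(name, detectors, n_sub_farms):
    """Carve a stand-alone partition out of GLOBAL (stop run, reconfigure, start two runs)."""
    glob = partitions["GLOBAL"]
    rs = next(r for r in rs_pool if all(r != p["rs"] for p in partitions.values()))
    glob["detectors"] -= detectors
    farms = set(list(glob["sub_farms"])[:n_sub_farms])
    glob["sub_farms"] -= farms
    partitions[name] = {"rs": rs, "detectors": detectors, "sub_farms": farms}
    # At this point the TFC switch would be programmed so that `rs` drives only
    # the Front-End Electronics of `detectors`, and the throttle switches so that
    # only their throttle lines reach it.

split_off("DET1_STANDALONE", {"Det1"}, n_sub_farms=2)
for name, p in partitions.items():
    print(name, p["rs"], sorted(p["detectors"]), sorted(p["sub_farms"]))
```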
Performance Requirements

Parameter                                       Value
Average Physics Event Size                      100 kB
Average Level-1 Trigger Rate                    40 kHz
Average RAW Data Rate                           4 GB/s
Total CPU Power for Online Data Processing      110000 SI95
Average Event Size to Storage (RAW+ESD data)    200 kB
Average Rate to Storage                         200 Hz
Average Data Rate to Storage                    40 MB/s
Summary of the Scale

Item                              Quantity
Readout Supervisors               12
TTCtx's                           18
Optical Couplers                  50
Throttle ORs                      40
Network Processor Mezzanines      338
Network Processor Carriers        169
SFCs                              66
Farm CPUs                         920
CC-PCs                            860
Control PCs                       120
Data Switches                     60
Control Switches                  62
Crates                            21

Includes 10% spares
Cost Summary

Sub-System                           Cost [kCHF]
TFC System                           525
FEM/RUs                              1549
Readout Network                      786
CPU Farm                             1909
ECS                                  1055
General Computing Infrastructure     705
Total                                6529

Spares included
Major Milestones

Milestone                                        Date
ECS electronics interfaces prototypes ready      1-Apr-2002
ECS software framework first release             1-Jun-2002
TFC prototypes ready                             1-Jan-2003
NP-based RU prototype ready                      1-Feb-2003
Readout Network implementation decision          1-Jun-2003
NP-based RU production start                     1-Jul-2003
Start installation in Pit 8                      1-Oct-2003
Start progressive commissioning                  1-Jan-2004
Start subdetector commissioning                  1-Jan-2005

Assumes LHC startup in April 2006
Responsibilities
- Collaborating Institutes (Developments)
  - Cern
  - Genoa (Italy)
  - LAL/Orsay (France)
  - Warsaw (Poland)
  - RAL (UK)
  - Detector groups for SD-specific developments
- 'Dependencies' and other Collaborations
  - Atlas (ELMB)
  - RD12-TTC (TTC equipment)
  - Cern/IT and industry (CPU farm)
  - Industry (network processors, Credit-Card PCs, switches, fieldbuses, PLCs, etc.)
  - JCOP (controls framework and low-level tools + product support)
Conclusion
- The system outlined fulfills the requirements. Special attention has been given to partitioning.
- The use of Ethernet as the link technology leverages a large market, with the advantage of a huge range of speeds
- The use of Network Processors results in a single (home-made) module for all applications in the dataflow sub-system
- A homogeneous control system will give a coherent interface to implementation and operation
- Scalability is ensured through hierarchy and 'locality' of functionality
- The expected cost is within the originally foreseen budget
- The system is realizable (even today).