Standard Parameters - Web Lecture Archive Project
TDAQ Report
LVL1 Calo & Muon
DAQ & HLT
DCS
Level-1 Calorimeter Trigger
Success in test-beam
Integrated with:
•LAr and Tile Calorimeters, via TileCal patch-panels and receivers
•Central Trigger Processor
•Region-of-Interest Builder
•ATLAS run-control, etc.
•ATLAS DAQ, via RODs and ROS
[Plot: single-tower response, slope ~1, with the saturation level indicated]
Internally:
•All modules in the Calorimeter Trigger logic chain functioned
•Fast serial links to CPM and JEM worked well, verifying:
–BC-MUX scheme for CPM (halves number of links)
–PPM formation of jet elements
Produced e.m., jet, and total-energy triggers
Slope ~0.55 to be corrected by:
• e/h ratio: x1.5
• ET not E: x1.25
to get ~1.0
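As a sanity check, the two correction factors quoted above do bring the measured slope back to unity; a trivial calculation, shown in Python for concreteness:

```python
# Measured ET-vs-E slope from the test-beam plot, and the two
# correction factors quoted on the slide:
slope = 0.55
e_over_h = 1.5   # e/h ratio correction
et_not_e = 1.25  # correcting for ET rather than E
corrected = slope * e_over_h * et_not_e
print(round(corrected, 2))  # 1.03, i.e. ~1.0 as expected
```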
[Plot annotation: 20 GeV e.m. trigger threshold]
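For readers unfamiliar with it, the BC-MUX scheme mentioned above exploits the fact that, after peak-finding, a trigger tower carries data in at most one bunch crossing per pulse, so two channels can share one serial link with a flag bit. The sketch below illustrates only the general idea; it is not the actual L1Calo link protocol, and all names are invented:

```python
# Illustrative sketch of bunch-crossing multiplexing: two channels
# share one link word per bunch crossing, with a flag bit saying
# which channel the word belongs to. Each input stream is a list of
# per-BC ET values, with 0 meaning "no peak in this crossing".

def bcmux_encode(ch_a, ch_b):
    """Merge two per-bunch-crossing ET streams onto one link."""
    link = []
    for a, b in zip(ch_a, ch_b):
        if a:                    # channel A has a peak this crossing
            link.append((0, a))  # flag 0 -> word belongs to A
        else:
            link.append((1, b))  # flag 1 -> word belongs to B (or empty)
    return link

def bcmux_decode(link):
    """Recover the two original streams from the shared link."""
    ch_a, ch_b = [], []
    for flag, et in link:
        ch_a.append(et if flag == 0 else 0)
        ch_b.append(et if flag == 1 else 0)
    return ch_a, ch_b
```

A round trip with non-overlapping peaks reproduces both streams, which is why the scheme halves the number of links needed.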
PreProcessor
Preprocessor Module
Control problems seen in test-beam have been understood and corrected
Will build improved prototype
Could do early installation using existing prototypes if necessary
Multi-chip modules (MCMs)
[Module photo labels: 4 AnIn, 16 MCM]
Tested successfully under:
Severe vibration
Temperature cycling (they even work at 120°C!)
PRR for ASIC/MCM assembly was held in January
ASICs
Production ASIC wafers have now been series tested, and in principle
enough (~4400) have passed for final system
Early wafers had yields ~60%, as expected (process, die size)
Recent batches have quality control problems – yield lower and very
inconsistent
Being discussed with the manufacturer, more will be made
Cluster & Jet/Energy Processors
Cluster Processor Module
Latest prototypes look very good
Jet/Energy Module
Needs only minor design changes and a few more system tests
FDR planned for March
FDR planned for April
Common Merger Module
FDR was in September
Passed system tests requested by FDR:
6 CPMs, 2 JEMs, 2 CMMs running in a crate with maximum backplane traffic
Crate-to-crate merging
PRR planned for 28 February
[Diagram: crate-level CMMs collecting CPM and JEM results, merging to a system-level CMM]
Analogue Receivers,
Custom Backplane, and ROD
Receivers
Production TileCal receivers (2 crates) now built and being shipped to CERN from Pittsburgh
LAr receivers (6 crates) will follow at ~one crate-full per month
Backplane
FDR for backplane was on 6 December (no PRR needed)
Preparing for production (6 needed for CP and JEP)
ROD
Standalone tests of first 9U ROD have gone well
Second module is now assembled
Firmware writing (11 variants!) is going well
Commissioning and Calibration
Held joint Calorimeter/Trigger workshop on installation, commissioning and calibration on 1 Feb.
Aimed at initiating and reviving discussions on:
Joint installation and commissioning plans, including testing
Calibration requirements and procedures, mainly during installation phase and leading to normal running
Infrastructure needs, including DAQ etc.
Must agree on responsibilities in boundary areas between calorimetry and trigger
Will set up working group of responsible people, etc.
Must agree on how to progress
Who designs and writes which software, and to what timetable?
Operational responsibilities – who takes care of what?
What info is in which database, how accessed and controlled?
Endcap Muon Trigger
Full chain of prototype system has been operating for some time in lab tests and, last year, at H8 with 25 ns bunch-structure beam
The PS boards used Version 4 of the “SLB” ASIC
Minor error in readout part of ASIC at high rate was understood, and revised versions were submitted towards the end of last year
Recently received prototypes of new version (Version 4 ECO2) of SLB ASIC
This is fully functional and the problem with the previous version has been solved
Now have fully-functional versions for all ASIC types in the end-cap muon trigger system
Barrel Muon Trigger
On-detector electronics related to the trigger
Splitter boxes
Production and testing well advanced; supply not a problem for detector integration
Pad boxes
Depends crucially on Coincidence Matrix Array ASIC
Final motherboard prototype under evaluation at CAEN
Production of Pad-OR mezzanine boards completed
Cabling (within and between stations)
Requires a lot of detailed design work
CMA ASIC
Original prototype ASIC worked according to specification, but programmable delay range had to be extended for final detector cabling
Redesigned ASIC submitted last September
Small functional changes, but major redesign
Packaged protos received by ASIC testing company
Last week we received encouraging news: all test vectors passed without error
Must now do evaluation of the protos in Pad test system
Essential to validate the ASIC in the system as well as with test vectors before drawing final conclusions – slice test in Rome and cosmic-ray stand in BB5
Then place order for main production as soon as possible
Pads
A small number of Pad boxes equipped with the original ASIC are available for testing RPC detector assemblies
Number limited by availability of proto ASICs (old version)
8 boxes now available in BB5
Some more Pad boxes will become available soon, equipped with the new prototype ASICs
Boards for 10 additional boxes available
Schedule is extremely critical for delivery of the Pad boxes in production quantities
Main installation of RPC system starting late August
Plan prepared to ensure that Pad electronics will become available very quickly after delivery of ASICs
TC organized a review of the plan last November
A big push is being made to bring in extra effort during this critical period
ROBIN
TDAQ component that receives and buffers the data from the RODs
Final prototypes produced in Jan. 2005
PRR scheduled 1 March 2005: preparations completed … including documents
Subject to PRR, Pre-series production (50 boards) to start March 2005
Planning foresees a volume production (650 units) to be completed in 3Q05
200 boards to be installed and commissioned in 50 ROSs before end 2005
Overview of ROD Crate DAQ
RCD provides an application framework to interface the RODs (commands in, data out) for:
Control/Configuration
Monitoring (Statistics/Event sampling)
Data Readout (through VME bus)
Synchronous readout of multiple crates provided
Commissioning phase 1 will be largely based on RCD (not all ROS units available yet)
Development history:
2003 – Initial implementation based on ROS software
Design mostly driven by ROS requirements
3Q04 – completed first major upgrade
Largely used in Combined testbeam for ROD configuration/control
1Q05 – completed second major upgrade
Based on CTB feedback
New code included in TDAQ-01-01-00 release
Latest RCD changes
Introduced alternative user API for data readout
Old one still supported
Introduced handling of VME interrupts
Enhanced configuration mechanism
More flexible publishing mechanism
Introduced multi-crate Event Builder
Architecture enhancements are documented in
ATL-DQ-ES-0066
Detector representatives are being individually contacted to schedule software upgrades
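To illustrate the application-framework style described above, the sketch below shows the kind of plugin interface such a framework can expose: detector code subclasses a module base class, and the framework drives it through control, readout and monitoring hooks. All class and method names here are invented for illustration; the real RCD API (documented in ATL-DQ-ES-0066) differs, and the readout in reality goes over the VME bus:

```python
# Hypothetical framework-plugin sketch, not the real RCD interface.

class RodModule:
    """Base class: one instance per ROD, driven by the framework."""
    def configure(self, params):   # control/configuration hook
        raise NotImplementedError
    def read_fragment(self):       # data-readout hook
        raise NotImplementedError
    def publish_stats(self):       # monitoring hook
        return {}

class DummyRod(RodModule):
    """Trivial plugin standing in for a real ROD controller."""
    def __init__(self):
        self.nread = 0
    def configure(self, params):
        self.threshold = params.get("threshold", 0)
    def read_fragment(self):
        self.nread += 1
        return b"\x00\x01"         # stand-in for a VME block transfer
    def publish_stats(self):
        return {"fragments": self.nread}
```

The framework, not the plugin, decides when each hook runs, which is what lets the same detector code serve control, monitoring and readout.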
Progress in Measurements & Analysis Group
Performance measurements in testbeds in parallel with discrete event
simulation modelling
Predict the behavior in test-bed
Extrapolate to the final system size
Suggest optimizations for the final system
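To give a flavour of what discrete event simulation modelling involves, here is a minimal event-driven model of a single request server: requests arrive, queue if the server is busy, and the mean queueing delay is measured so the model can be compared with testbed data and then extrapolated. The single-server topology and the rates are made-up illustrative choices, far simpler than the real DAQ model:

```python
import random

def simulate(arrival_rate, service_rate, n_events=10000, seed=1):
    """Mean queueing delay for a single exponential server."""
    rng = random.Random(seed)
    t = 0.0            # current time
    busy_until = 0.0   # time the server frees up
    total_wait = 0.0
    for _ in range(n_events):
        t += rng.expovariate(arrival_rate)   # next request arrives
        start = max(t, busy_until)           # waits if server is busy
        total_wait += start - t
        busy_until = start + rng.expovariate(service_rate)
    return total_wait / n_events

# Well below saturation the delay is small; near saturation it blows up,
# which is exactly the queue-size behaviour discussed below.
print(simulate(arrival_rate=0.5, service_rate=1.0))
print(simulate(arrival_rate=0.9, service_rate=1.0))
```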
TDR proposed separate main switches for LVL2 & EB traffic
Optimisation: mix L2 and EB nodes on 2 data switches:
Better performance
Moderated traffic in both Switches
Queue sizes smaller in mixed network
More reliable system
The EB and L2 systems are divided in 2
More flexibility
Bigger systems possible in testbeds
System scalability
LVL2 nodes: not running algorithms, driving the DAQ as fast as they can
→ ROS: 30% of final request rate
→ SFI: 41% of Gigabit bandwidth
→ L2PU: 17 times more RoI rate
HLT/DAQ Pre-series
Fully functional, small-scale version of the complete HLT/DAQ system
Equivalent to a detector’s ‘module 0’
Purpose and scope of the pre-series system:
Pre-commissioning phase:
o To validate the complete, integrated HLT/DAQ functionality
o To validate the infrastructure needed by HLT/DAQ at Point 1
• Note it will be provisionally installed at Point 1 (USA15 and SDX1)
Commissioning phase:
o To validate a component (e.g. a ROS) or a deliverable (e.g. a Level-2 rack) prior to its installation and commissioning
TDAQ post-commissioning development system:
o Validate new components (e.g. their functionality when integrated into a fully functional system)
o Validate new software elements or software releases before moving them to the experiment
Pre-Series
In USA15:
One ROS rack (TC rack + horiz. cooling): 12 ROS, 48 ROBINs
RoIB rack (TC rack + horiz. cooling): 50% of RoIB
Inside SDX1:
One full L2 rack (TDAQ rack): 30 HLT PCs
Partial Supervisor rack (TDAQ rack): 3 HE PCs
One Switch rack (TDAQ rack): 128-port GEth for L2+EB
Partial EFIO rack (TDAQ rack): 10 HE PCs (6 SFI, 2 SFO, 2 DFM)
Partial EF rack (TDAQ rack): 12 HLT PCs
Partial ONLINE rack (TDAQ rack): 4 HLT PCs (monitoring), 2 LE PCs (control), 2 Central File Servers
ROS, L2, EFIO and EF racks: one Local File Server each, plus one or more Local Switches
Work Packages for installation and commissioning in point 1 being
defined now
Installation subject to communicated delivery dates
Work packages to be discussed with technical coordination asap
Planning of exploitation, operations & maintenance in progress
SysAdmin Task Force: active since mid December 2004
Goal: preparing a proposal for node system administration & management at Point 1
Topics addressed so far:
1. Users / Authentication
2. Booting / OS / Images
3. Software Installation
4. File Systems
5. Farm Monitoring
6. Networking in general
7. Remote access to nodes
Task Force Members:
Andre DosAnjos
Gökhan Ünel
Haimo Zobernig
Lucian Leahu
Luis Bolinches
Marc Dobson
Marius Leahu
Matthias Wiesmann
Stefan Stancu
Currently: collecting input & discussing a draft document with various people
Soon: Make an EDMS note
Large Scale System Tests
Data Challenges for control aspects of the HLT/DAQ system
Annual exercise for last 3-4 years with increasing numbers
of processors
Tests this year planned in Canada, UK and CERN following
on from last year’s tests
CERN tests on LXSHARE
June/July timeframe in agreement with IT and discussed
in LCG PEB
1 month with # processors increasing from 200 to
~1,000
HLT Progress
Recent HLT workshop in Barcelona
Review status and plans for the various components required to integrate and test HLT selection s/w
Infrastructure Issues related to HLT Selection
HLT Core s/w and plans
Selection system performance
Trigger configuration
Testbeds and commissioning
Monitoring in PT (Athena)
Follow-up established in particular on
Timing & performance measurement plans
Design review of core selection s/w
Trigger configuration
HLT commissioning
Active participation of one of the Offline commissioners
Preparation of various aspects of commissioning of HLT
Understand HLT needs of detectors for their commissioning
HLT planning for pre-series
HLT Issues
S/w stability and areas of interface with offline
Software testing
Modularity (complexity & dependencies) – considerable recent progress with ID s/w
Data preparation (recent discussions for LAr improvements in calo trigger software workshop)
Evaluate timing performance of full system and isolate (and replace if necessary) elements with insufficient performance – particularly critical for LVL2
Established regular technical discussions with offline to clearly identify areas which need improvement – plan & execute the work
Priority has to increase in detector s/w for HLT (Steinar’s talk)
Trigger configuration
Work to gain a complete picture of the requirements of trigger configuration (LVL1/HLT)
Common LVL1 & HLT trigger configuration prototype using condDB in progress
CTB HLT Trigger Studies
Study electron/pion separation on CTB data using calorimeter and Inner Detector information from Level-2 trigger algorithms
Compare results with CTB simulated data
Electron selection strategy:
• Beam instrumentation (Cherenkov, muTag, muHalo)
• Calorimeter information (ET, hadronic leakage, shower-shape variables)
• TRT information (number of tracks, hits, EM cluster-track matching)
• Preliminary results of electron identification using LVL2 calorimeter information
• Most TB runs were not taken with LAr and Tile RODs in Physics Mode, so T2Calo has been modified to make use of the offline calorimeter cells
• Physics Mode runs will be looked at in the future
[Plots, e- 50 GeV Run 2102410: T2Calo EM ET (MeV) distribution – cutting at ET > 3 GeV rejects muons; MuTag vs. T2Calo EM ET – requiring muTag < 460 rejects muons]
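Taken together, the two cuts shown in the plots amount to a simple event filter. Only the cut values come from the slides; the event structure, field names and the cut polarities as coded below are illustrative reconstructions:

```python
# Hypothetical event filter combining the two muon-rejection cuts:
# a LVL2 calorimeter cluster with ET > 3 GeV, and a low muon-tag
# counter value (muTag < 460). Field names are invented.

def is_electron_candidate(event):
    et_gev = event["t2calo_em_et"] / 1000.0  # T2Calo ET stored in MeV
    # "ET > 3 GeV rejects muons": muons leave little EM energy
    if et_gev <= 3.0:
        return False
    # "muTag < 460 rejects muons": require a low muon-tag count
    if event["mu_tag"] >= 460:
        return False
    return True

events = [
    {"t2calo_em_et": 48000.0, "mu_tag": 100},  # electron-like
    {"t2calo_em_et": 1200.0,  "mu_tag": 800},  # muon-like
]
print([is_electron_candidate(e) for e in events])  # [True, False]
```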
Muon sagitta reconstructed by μFast @ H8
Straight muon beam of 250 GeV
MOORE reconstruction: σ ≈ 60 μm
The sagitta reconstruction is shifted by 300 μm because μFast doesn’t make use of alignment corrections.
The σ width could be due to a wrong calibration: checks ongoing.
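For scale: a straight test-beam track has zero true sagitta, so the ~60 μm width above is pure resolution. In a magnetic field a track of momentum p (GeV) over lever arm L (m) in field B (T) bends with sagitta s = 0.3 B L² / (8 p); the sketch below just evaluates this formula with assumed, illustrative values of B and L that are not taken from the slides:

```python
# Back-of-the-envelope sagitta for a high-momentum muon.
# B and L below are illustrative assumptions, not slide values.

def sagitta_um(p_gev, b_tesla, lever_arm_m):
    """Sagitta in micrometres: s = 0.3 * B * L^2 / (8 * p)."""
    s_m = 0.3 * b_tesla * lever_arm_m**2 / (8.0 * p_gev)
    return s_m * 1e6

# A 250 GeV muon in an assumed 0.5 T field over a 5 m lever arm:
print(round(sagitta_um(250.0, 0.5, 5.0), 1))  # 1875.0 um
```

Even at 250 GeV the bending sagitta is at the millimetre scale, so a 300 μm alignment shift is a sizeable systematic.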
PESA Performance
First meeting to re-focus the on-line Physics and Event Selection Validation and Performance activities
Aims to address coherently the physics performance of the on-line selection in the areas of:
Electrons and photons, Muons, Jets / Taus / ETmiss, b-tagging, B-physics
Building on the work of the existing “vertical slices”, some of them deployed also in the recent Combined Test Beam
The goal is the definition of complete Trigger Menus, validated against selected physics channels
List of items of increasing complexity, moving from simple processes (like Z → 2e or Z → 2μ) to others capable of steering more complex menus (like H → 2e2μ, top, …)
Aim for a full exercise on the time scale of DC3
Prepared also for the HLT commissioning during the cosmic data taking
Devise specific algorithms if needed (e.g. select non-pointing tracks)
Understand detector needs and requirements, e.g. recent discussions with LAr and MDT
PESA Performance
Presentations about the different selection schemes to identify objects with the High Level Triggers
Emphasize areas where reconstruction, combined performance and physics groups can bring in their expertise to optimise selections and help shape the Trigger Menus
http://agenda.cern.ch/age?a051058
Walk through available software (including steering of the selections)
Need help to exploit selections on various data samples
Tune cuts, add details, evaluate rates and performances
Aim in ATLAS at trigger-aware physics analyses and physics-aware trigger selection
Analysis groups to evaluate their understanding of, and sensitivity to, the current trigger strategy and performance, and propose enhancements/additions
See Fabiola & Steinar’s talk tomorrow morning
DCS Components
CAN system: Production well advanced
ELMB: production finished
ELMB motherboard: prototype ok, order placed
CAN Power Supply Unit (PSU): pre-series available, production organised by PF/ESS for 3Q05
Other equipment in CERN stores
SW: Working versions (mostly) available
JCOP Framework for standard devices
Finite State Machine
Configuration DB
JCOP prototype being evaluated
Conditions DB
Using Lisbon MySQL for commissioning data
PVSS API manager to inject data in COOL being studied, but technical problems not yet solved
Subdetectors are waiting for the database(s)!
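To give a flavour of the Finite State Machine component listed above: in a JCOP-style control tree, commands propagate down from a supervisor to its sub-systems, while states summarise upwards. The states, transitions and class names below are generic examples, not the real JCOP state diagram:

```python
# Generic control-tree FSM sketch (illustrative states, not JCOP's).

TRANSITIONS = {
    ("OFF", "switch_on"): "STANDBY",
    ("STANDBY", "configure"): "READY",
    ("READY", "start"): "RUNNING",
    ("RUNNING", "stop"): "READY",
    ("READY", "switch_off"): "OFF",
}

class DcsNode:
    def __init__(self, name, children=()):
        self.name, self.state, self.children = name, "OFF", list(children)

    def command(self, cmd):
        # commands propagate down: children act first, then this node
        for c in self.children:
            c.command(cmd)
        nxt = TRANSITIONS.get((self.state, cmd))
        if nxt:
            self.state = nxt

    def summary(self):
        # states summarise upwards: a parent reports a state only
        # when all its children agree with it
        states = {c.state for c in self.children} | {self.state}
        return self.state if len(states) == 1 else "MIXED"

sct, trt = DcsNode("SCT"), DcsNode("TRT")
idn = DcsNode("ID", [sct, trt])
idn.command("switch_on")
print(idn.summary())  # STANDBY
```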
DCS for ID integration in SR1
Aims of DCS prototype:
Realistic testing of FE I/O
Test of JCOP SW components
Interfacing to external services
Stand-alone operation of sub-systems
Integrated operation of Inner Detector
… service for detector construction !!!
The following slides are from P.Ferrari and represent the work of the ID subdetectors
DCS Setup in the SR Building
[Diagram: GCS supervisor connected via LAN and DDC to the SR DAQ; SCS master stations for TRT, SCT, Pixel, ID evaporative cooling (PLC) and monophase cooling; LCS stations for power packs, cooling and the SCT thermal enclosure/environment; ELMBs, IBOXes and PLCs reading sensors (temperature, humidity, pressure, current, air T.); TRT LV/HV and SCT power supplies with interlocks; rack/environment monitoring and control (regulators, compressor); connections over Ethernet and CANbus]
Summary
LVL1
Good progress on module production and software
On-detector muon trigger electronics is a critical area (ASICs, schedule)
Focus moving to commissioning of LVL1 and aspects with detectors
HLT/DAQ
Elements needed for first stage of commissioning now in place
Good progress on system performance and scalability studies
Pre-series system being purchased and installation ~on time
HLT testbeam analyses in progress
HLT system performance issues – effort established to isolate and improve critical elements
Working with detectors to understand calibration requirements
Focus PESA work on complete menus, selection performance & commissioning
DCS
System being used widely by detectors for commissioning and testing work
Good collaboration with DB group but much work remains to be done
Manpower remains a great concern in some areas of the project, in particular given the “client and server” nature of TDAQ
Trying to address this where possible with increased coherence between TDAQ, detector software, physics & combined performance groups
Backup slides
Mixed LVL2 & EB nodes