Experiment Control at the LHC – Joint Controls for Physics
Experiment Controls at the LHC –
The JCOP Framework
Dr. Sascha Schmeling
CERN • IT/CO
Overview
– CERN and the LHC
– The Joint Controls Project
– The JCOP Framework
  – PVSS II: the underlying software
  – JCOP Framework: a collection of "Tools" and "Devices"
– Experience
  – Testbeam Activities
  – Experiments
  – Performance Tests
– Conclusions and Outlook
CERN and the LHC
The Joint Controls Project
JCOP Mandate – who are the players?
– the LHC experiments
– IT/CO, the former research-sector controls support group
– also PH/ESS; other experiments benefit from the developments as well
Purpose and Scope
The scope of JCOP is to provide a common framework of tools and components to allow the experiments to build their own Detector Control System (DCS) applications. The purpose of the DCS is the initialization, monitoring, and operation of the different sub-detectors and the interaction with the Data Acquisition system and external systems such as the CERN infrastructure services and the LHC accelerator.
JCOP Deliverables
Tools, Services, and Engineering Support
– an integrated set of tools, using PVSS
– guidelines and implementations for standard hardware items, interfaces, and communication protocols
– communication mechanisms between the supervisory-level SCADA system and HEP-specific equipment
– communication mechanisms between the experiments' DCS and CERN infrastructure services
Services
– distribution, training, and first-line support of the tools
JCOP Deliverables
Controls Support for Common Infrastructure
– generic components to support the interface between each DCS and the various external systems, e.g.
  – LHC
  – high-level safety systems
  – infrastructure services, e.g. magnets, cryogenics, and electrical power
– consultancy and, where appropriate, implementations for controls of common experimental infrastructure
– common items for the experiments, e.g.
  – Gas Control System
JCOP Organisation
– Controls Coordination Board: the steering body (22 members from the experiments and the concerned departments)
– Project Leader and Executive Board (project leader and the experiments' controls coordinators): day-to-day decisions
– Project Team (the people working on the project)
Experiment Controls @ LHC
Supervisory Level
The JCOP Framework Approach
Experiment Control Systems @ LHC – Layer Structure
[Layered architecture diagram, based on an original idea from LHCb:
– Supervision layer (WAN/LAN): SCADA and FSM technologies, commercial storage (configuration DB, archives, log files, etc.) and custom components, with links to other systems (LHC, safety, ...).
– Process management layer (LAN): controllers/PLCs (PLC/UNICOS) and VME crates, reached via the OPC and DIM communication protocols.
– Field management layer: field buses and nodes, VME, and the experimental equipment (sensors/devices).]
Supervisory Level
Supervisory Control And Data Acquisition
(SCADA)
– Commercial solution
– JCOP has chosen PVSS from ETM
– Single recommended SCADA product CERN-wide
Finite State Machines (FSM)
– Custom solution
– JCOP is using State Management Interface
(SMI++) which was developed for the DELPHI
experiment
Integration of PVSS and FSM
PVSS II has tools for:
– Device Description (Configuration Database):
• Data Point Types, Data Points, and Data Point items
– Device Access
• OPC, ProfiBus, Drivers
– Alarm Handling and Display
• Generation, Filtering, Summarising, Masking, etc.
– Archiving, Logging, Trending
– User Interface Builder
– Access Control, etc.
– API
– Scripting
Integration of PVSS and FSM
PVSS II does not have tools specifically for:
– Abstract behaviour modelling
• Finite State Machines
– Automation & Error Recovery
• Expert System
But…
– FSM (SMI++) does
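SMI++ models each sub-system as an object with a finite set of states and commands, arranged in a hierarchy so that commands propagate down and states are summarised upwards. The sketch below is a minimal Python analogue of that idea only; the state names, commands, and the sub-detector hierarchy are illustrative and are not SMI++ syntax.

```python
# Minimal sketch of a hierarchical finite-state machine in the spirit of
# SMI++: commands propagate down the tree, states are summarised upwards.
# State names and the example hierarchy are illustrative only.

class FsmNode:
    TRANSITIONS = {                      # (state, command) -> new state
        ("OFF", "SWITCH_ON"): "READY",
        ("READY", "SWITCH_OFF"): "OFF",
        ("READY", "FAIL"): "ERROR",
        ("ERROR", "RECOVER"): "OFF",
    }

    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.state = "OFF"

    def command(self, cmd):
        """Apply a command locally, then propagate it to all children."""
        self.state = self.TRANSITIONS.get((self.state, cmd), self.state)
        for child in self.children:
            child.command(cmd)

    def summary_state(self):
        """Summarise the subtree: any ERROR dominates, else all must agree."""
        states = {c.summary_state() for c in self.children} | {self.state}
        if "ERROR" in states:
            return "ERROR"
        return self.state if len(states) == 1 else "MIXED"


if __name__ == "__main__":
    hv = FsmNode("HighVoltage")
    lv = FsmNode("LowVoltage")
    detector = FsmNode("SubDetector", children=[hv, lv])

    detector.command("SWITCH_ON")
    hv.command("FAIL")                   # a local fault...
    print(detector.summary_state())      # ...is summarised upwards: ERROR
```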
First Conclusions
At the supervisory layer, two important technologies are used: SCADA and FSM
JCOP has selected a commercial SCADA system
(PVSS) and a custom FSM (SMI++)
The integration of these two is very important to
support the proposed operational concept
There is a Framework providing a comprehensive
set of components, tools and utilities required by
the four LHC experiments
– Reduce overall development effort
– Reduce diversity and ensure a coherent controls system
The Framework is a key to maximising the benefit of
the use of the chosen technologies
Second Conclusions
– PVSS was selected for the LHC experiments after an extensive evaluation (> 10 man-years)
– PVSS is now the recommended SCADA product CERN-wide
– PVSS is used extensively within the LHC experiments and increasingly in other domains (for example cryo, vacuum, QPS, MPS, …)
– CERN has a long-term partnership with ETM, and this is beneficial for both partners
– ETM on-site support has been established
The JCOP Framework
PVSS II – a (very brief) technical overview
PVSS II: Process Visualization and Control System II
Manager concept
– Visualization, Operation: runtime = VISION, graphical editor = GEDI, database editor = PARA (user interface editor and runtime managers, UI)
– Processing, Control: script language = Control (CTRL manager), Application Programming Interface = API manager
– Process Image, History: communication and alarming = event manager (EV), history = database manager (DB)
– Process Interface: drivers (D) for PLCs, field buses, DDC, telemetry/RTU, and special protocols; connections to other systems (CON)
Manager Basics
– Modularity: autonomous functional units
– Software technology: each manager is its own process
– Communication: platform-independent TCP
– Efficiency: event-driven data exchange
– The sum of all managers forms a system
– Scalability:
  » only the required managers are started
  » managers may be distributed over several machines
  » load balancing
– Flexibility: the computers forming a system can run different operating systems
Driver
Provides the connection to the various automation systems (control level):
– handling of the communication protocol
– unification of different communication methods
– smoothing, old/new comparison (time, value, chatter suppression)
– conversion: raw values into engineering values and vice versa
– integration of the peripheral time stamp, or time stamping on receipt
– event-driven data exchange with the rest of the system
[Driver layering: HW/kernel – SW protocol – transformation – old/new comparison – smoothing]
Drivers – communication with the base automation:
– OPC Data Access (client and server)
– OPC Alarms & Events (client and server)
– field buses: Profibus DP / FMS, S7 Communication, TLS, …
– Ethernet: Industrial Ethernet S7, Modbus TCP, AB Ethernet/IP, …
– tele-control systems (RTU): SAT-SSI, IEC 60870-5-10x, Riflex, …
– direct drivers: MPI, RK512, 3964R, …
– Applicom General Driver Interface
– Teleperm, …
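Two of the driver tasks listed above lend themselves to a small illustration: old/new comparison with a value deadband ("smoothing"), and linear conversion between raw and engineering values. The sketch below is a generic Python illustration of those two steps, not PVSS driver code; the deadband and scaling parameters are invented for the example.

```python
# Illustrative sketch of two driver tasks: old/new smoothing with a value
# deadband, and linear raw <-> engineering-value conversion.
# The parameters (deadband, scale, offset) are invented for the example.

class LinearConversion:
    def __init__(self, scale, offset):
        self.scale, self.offset = scale, offset

    def to_engineering(self, raw):
        return raw * self.scale + self.offset

    def to_raw(self, eng):
        return (eng - self.offset) / self.scale


class Smoothing:
    """Forward a new value only if it differs enough from the last one sent."""

    def __init__(self, deadband):
        self.deadband = deadband
        self.last_sent = None

    def filter(self, value):
        if self.last_sent is None or abs(value - self.last_sent) >= self.deadband:
            self.last_sent = value
            return value                  # changed enough: pass it on
        return None                       # suppressed (chatter)


if __name__ == "__main__":
    conv = LinearConversion(scale=0.01, offset=0.0)   # e.g. raw counts -> volts
    smooth = Smoothing(deadband=0.05)

    for raw in (1200, 1201, 1199, 1230, 1232):
        eng = conv.to_engineering(raw)
        if (sent := smooth.filter(eng)) is not None:
            print(f"forward {sent:.2f} V to the event manager")
```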
More interfaces
Networks / Internet
» TCP, SSL
» HTTP (Web)
» WAP / WML
» SMTP (E-Mail)
» SNMP Agent
» SNMP Manager
» ...
Database
» ADO
» ODBC
» OLE-DB
» DDE
» …
Serial communication
» V24 / RS232
» ...
Software
» COM
» ActiveX
» …
Event Manager
– Central message dispatcher
– Holds the process image (= the current values of all process variables)
– Acts as data server for the other processes
  » provider-consumer model
  » spontaneous read/write access
  » connect (event-driven callbacks)
  » SQL queries
– Alarm processing
– Calculation of data point functions
– Authorization / authentication
– Just one event manager per system
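The event manager's provider-consumer role, holding the process image and calling back consumers when a value changes, can be sketched in a few lines. This is a conceptual Python analogue, not the PVSS event manager's actual interface; the data point names are invented.

```python
# Conceptual analogue of the event manager: it holds the process image
# (current value of every data point element) and notifies connected
# consumers when a value changes. Data point names are invented examples.

from collections import defaultdict


class EventManager:
    def __init__(self):
        self.process_image = {}                    # dpe name -> current value
        self.callbacks = defaultdict(list)         # dpe name -> consumers

    def connect(self, dpe, callback):
        """Register an event-driven callback for one data point element."""
        self.callbacks[dpe].append(callback)

    def dp_set(self, dpe, value):
        """Provider writes a value; consumers are notified only on change."""
        if self.process_image.get(dpe) != value:
            self.process_image[dpe] = value
            for cb in self.callbacks[dpe]:
                cb(dpe, value)

    def dp_get(self, dpe):
        """Spontaneous read access to the process image."""
        return self.process_image.get(dpe)


if __name__ == "__main__":
    em = EventManager()
    em.connect("GateValve.Response.StateSignals.Position",
               lambda dpe, v: print(f"alert check / UI update: {dpe} = {v}"))

    em.dp_set("GateValve.Response.StateSignals.Position", 42.0)  # notifies
    em.dp_set("GateValve.Response.StateSignals.Position", 42.0)  # no change
    print(em.dp_get("GateValve.Response.StateSignals.Position"))
```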
Database Manager
– High-performance process database
  » value archives
  » support for all data types
– Parallel, autonomous archives
– Automatic administration
  » file set switching, back-up, long-term storage to tape or data server, deletion
– Persistent values
– Internal SQL interface
– OLE-DB provider
– Parameterization
(Oracle archiving coming soon.)
Control Manager
Implementation of complex functionality:
– implementation of algorithms with access to
  » current process values (data points)
  » history (values, states, alarms)
  » graphics / process mimics
– time- or event-triggered execution
– numerous integrated functions
– multi-threading
– powerful interfaces
  » COM
  » ADO / ODBC
  » V24/RS232, TCP, SMS, e-mail, …
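The control manager's time- or event-triggered execution can be pictured with a small scheduler. The sketch below is a generic Python analogue, not PVSS CTRL code; the class name, the one-second period, and the callbacks are invented.

```python
# Illustrative sketch of time- and event-triggered execution as done by a
# control manager: periodic scripts plus callbacks on value changes.
# Names and timings are invented for the example.

import threading
import time


class ControlManager:
    def __init__(self):
        self.value_callbacks = []

    def every(self, seconds, func):
        """Run func periodically in its own thread (time-triggered script)."""
        def loop():
            while True:
                func()
                time.sleep(seconds)
        threading.Thread(target=loop, daemon=True).start()

    def on_change(self, func):
        """Register an event-triggered script."""
        self.value_callbacks.append(func)

    def value_changed(self, dpe, value):
        for func in self.value_callbacks:
            func(dpe, value)


if __name__ == "__main__":
    cm = ControlManager()
    cm.every(1.0, lambda: print("periodic check of archive sizes"))
    cm.on_change(lambda dpe, v: print(f"recompute derived value for {dpe}={v}"))

    cm.value_changed("GateValve.Response.StateSignals.Position", 12.5)
    time.sleep(2.5)            # let the periodic script fire a few times
```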
User Interface Manager
Display interface to the operator:
– process mimics
– alarm list, event screen, trending, operational dialogs, …
– different types
  » visualization & operation: VISION
  » database editor: PARA
  » graphical editor: GEDI
– true client application
  » network compliant
  » multi-user environment
Managers: a Real System
[Diagram of a full PVSS system: user interface managers (editor and runtime clients), control managers, an API manager (e.g. an MS-COM interface to a "Prognosis" application), and an OLE-DB database access provider sit above the event manager and the database manager with its parallel value archives (VA). Reporting tools (Excel, RDB/OLAP) connect via ADO / UNIX-ODBC to a standard relational DB. A distribution manager (DIST) links to other PVSS systems, a redundancy manager (REDU) links to a hot-standby system, and drivers (OPC client/server, Modbus, Profibus, OPC server via Ethernet) connect to the field bus equipment of various vendors.]
Redundancy – Split Mode
– Redundant servers are separated
– The second server can be used for configuration tests
– Automatic return of a selectable server as primary into redundancy mode
[Diagram: Server 1 (primary, engineering) and Server 2 (hot standby, test) connected to Operator 1 and Operator 2 over the management Ethernet LAN, and to PLC, DDC, and RTU equipment over the process bus.]
Conventional Data Model
Typical HMI and SCADA systems:
– flat model of separated process variables (tags, points, PVs, variables, IO points, …)
– analogy: a terminal strip as the standard interface
– signals are treated independently of their affiliation
– arduous "assembling" of related data/signals
– endless, confusing tables
– requires perfect engineering discipline
– mapping of whole devices: expensive
Device-Oriented Data Model
Possible data model with PVSS II:
– paragon: whole devices – drives, valves, …
– signals are treated with respect to their original affiliation
– structures of process variables
  » a data point consists of several hierarchically arranged process variables
– intuitive representation in a tree (TreeView)
– clear, standardized naming convention
– information can be found where it belongs
– mapping of whole devices: built in
Modeling of a Device
Original device: a gate valve (shut-off device).
External signals:
– state signals (SS): final position open, final position closed, position
– commands (CMD): open, close, stop
– alarms (AL): torque, general failure, …
Administrative information: operating hours, identification code, type, …
Typically 4 to 30 variables per device!
Structured Data Point Type
Data point type (DPT) "GateValve":
– Response (structure)
  – StateSignals (structure): Open (bool), Closed (bool), Position (float)
  – Alarms (structure): Torque (bool), GeneralFailure (bool)
– Commands (structure): Open (bool), Close (bool), Stop (bool)
Configs attached to the data point elements: value range, peripheral address, conversion raw/engineering, smoothing, substitution value, alarming, archiving, authorization.
Structured Data Point
Device data point "GateValve":
– Response
  – StateSignals: Open, Closed, Position (the process variables)
  – Alarms: Torque, GeneralFailure
– Commands: Open, Close, Stop
Element address: GateValve.Response.Alarms.Torque
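The element address shown above, GateValve.Response.Alarms.Torque, resolves through the device's hierarchy. The sketch below mimics that with nested Python mappings and a dotted-path lookup; it is an illustration of the idea, not the PVSS data point interface.

```python
# Plain-Python illustration of a device-oriented data point: the GateValve
# device from the slide, with its elements reachable by a dotted address
# such as "GateValve.Response.Alarms.Torque".

gate_valve = {
    "Response": {
        "StateSignals": {"Open": False, "Closed": True, "Position": 0.0},
        "Alarms": {"Torque": False, "GeneralFailure": False},
    },
    "Commands": {"Open": False, "Close": False, "Stop": False},
}

data_points = {"GateValve": gate_valve}


def dp_get(address):
    """Resolve a dotted element address to its current value."""
    node = data_points
    for part in address.split("."):
        node = node[part]
    return node


def dp_set(address, value):
    """Write a value to the element named by a dotted address."""
    *path, leaf = address.split(".")
    node = data_points
    for part in path:
        node = node[part]
    node[leaf] = value


if __name__ == "__main__":
    dp_set("GateValve.Response.Alarms.Torque", True)
    print(dp_get("GateValve.Response.Alarms.Torque"))   # True
```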
Information of a Data Point
Each element is either a structural node or a process variable, selectable as internal or with a field connection. Comprehensive information is included:
» value (e.g. 47.2, TRUE, …)
» time stamp (e.g. Dec 24th, 2002, 04:31:17.837 pm)
» display format (%f5.2, ###,##)
» unit (m³/h, kW, rpm, …)
» description (multiple languages)
» quality/status (32-bit block, e.g. invalid, from GI, …)
» originator / source (user, executing manager)
Addressing within the system:
» data point identifier (= data point address)
» alias (= a second addressing scheme for AKS / KKS etc.)
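Each data point element therefore carries far more than a value. A dataclass is a compact way to picture that bundle; the field names below paraphrase the list above and are not PVSS attribute identifiers, and the example values are invented.

```python
# Sketch of the information carried by one data point element, following the
# list above. Field names paraphrase the slide; they are not PVSS attribute
# identifiers, and the example values are invented.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Dict


@dataclass
class DataPointElement:
    value: Any                              # e.g. 47.2, True, ...
    timestamp: datetime                     # time of the last change
    display_format: str = "%5.2f"           # how to render the value
    unit: str = ""                          # m3/h, kW, rpm, ...
    description: Dict[str, str] = field(default_factory=dict)  # per language
    status_bits: int = 0                    # e.g. invalid, from general query
    originator: str = ""                    # user or manager that wrote it
    alias: str = ""                         # second addressing scheme (AKS/KKS)


flow = DataPointElement(
    value=47.2,
    timestamp=datetime(2002, 12, 24, 16, 31, 17, 837000),
    unit="m3/h",
    description={"en": "cooling water flow"},
    originator="operator1",
)
print(f"{flow.value} {flow.unit} at {flow.timestamp.isoformat()}")
```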
The Freedom of Choice – Operating Systems
– Microsoft Windows XP, Windows 2000, Windows 2003 Server
– Solaris
– Linux (Red Hat)
Resource Needs
The JCOP Framework
Experiment Controls System Architecture
Tools & Devices
Examples
Experiment Control Systems @ LHC – Layer Structure (as shown earlier)
Aims of the JCOP Framework
– Reduce the development effort
  – reuse of components
  – hide complexity
  – facilitate the integration
  – help in the learning process
– Reduce the resources needed for maintenance
  – homogeneous control system
  – operation
  – maintenance
– Provide a higher layer of abstraction
  – reduce the knowledge of the tools required
  – interface for non-experts
– Customize & extend industrial components
– Modular/extensible
  – core
  – mix & match components
– As simple as possible
– Development driven by the JCOP FW Working Group, representing all experiments
System Architecture
[Diagram:
– Supervision layer: the supervisory application sits on the JCOP FW (supervisory part), which in turn sits on the SCADA system (PVSS) together with FSM, DB, Web, etc., running on PCs (Windows, Linux).
– Communication: OPC, PVSS Comms, DIM, DIP.
– Front-end layer: FE applications and device drivers for commercial devices, PLC-based FE applications using the UNICOS FW, and connections to other systems.]
JCOP FW Ingredients
[Diagram, showing which pieces are user-, framework-, or PVSS-provided: the supervisory application uses the Device Editor & Navigator, the Controls Hierarchy, and the Generic External Handler (GEH) for user-specific C/C++ code, on top of the PVSS event and data managers. OPC clients talk to the OPC servers of commercial devices (e.g. Siemens S7-300 front-ends), while framework and custom front-ends connect the rest of the equipment.]
Devices
– Generic analog-digital devices
  – analog input/output
  – digital input/output
– CAEN power supplies
  – crates SY127, SY403, SY527, SY1527
  – EASY being included soon
– Wiener fan trays and power supplies
  – OPC server developed by the company
  – CAN interface; one board currently supported (Peak)
– ISEG power supplies
  – OPC server being developed by the company
  – CAN interface; one board currently supported (Peak)
– PS and SPS machine data server
  – common server provided for all experiments
  – SPS needs customization for each beamline
  – will change to DIP
– Rack monitoring and control
  – being finalized
The ELMB (Embedded Local Monitor Board) – ATLAS DCS
General-purpose CAN node using CANopen:
– flexible I/O functions; data sent periodically, on request, or on change
– multiplexed ADC, 16 bit, 64 channels with signal adaptation; configurable for rate, range, mode, and number of channels
– 8 input, 8 output, and 8 definable I/O ports
– SPI bus
– low power consumption, opto-isolated
– add-ons: DAC, 12 bit, 16-64 channels
– interlock facility
– radiation tolerant: 0.5 Gy and 3·10^10 neutrons per year
– operation in a field of 1.5 T
– remote diagnostics and loading of software
– SEE detection and recovery
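As an illustration only, the sketch below decodes a CANopen-style transmit PDO from a node like the ELMB, assuming a simple, hypothetical payload layout of one byte of channel number followed by a little-endian ADC reading; the real ELMB object dictionary and PDO mapping are defined by its CANopen firmware and are not reproduced here.

```python
# Hedged illustration of reading a multiplexed ADC channel from a CANopen
# node such as the ELMB. The payload layout assumed here (1 byte channel
# number + 32-bit little-endian ADC counts) is hypothetical; the real ELMB
# PDO mapping is defined by its firmware and object dictionary.

import struct

TPDO1_BASE = 0x180          # CANopen default COB-ID base for the first TPDO


def decode_adc_frame(cob_id, payload):
    node_id = cob_id - TPDO1_BASE
    channel, counts = struct.unpack_from("<BI", payload)
    volts = counts * (5.0 / (1 << 16))      # assumed 16-bit range over 0-5 V
    return node_id, channel, volts


if __name__ == "__main__":
    # A made-up frame: node 0x21, ADC channel 7, raw counts 0x00003344.
    node, ch, volts = decode_adc_frame(0x1A1,
                                       bytes([0x07, 0x44, 0x33, 0x00, 0x00]))
    print(f"node {node:#x}, channel {ch}: {volts:.3f} V")
```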
Adapters (ATLAS DCS)
– Temperature sensors, 2-wire: for NTC 10K or PT1000
– Temperature sensors, 4-wire: for PT100
– Differential voltage attenuator: attenuates 1:100
– Inline resistors: 1 kΩ resistors for simple voltage measurement
Resistor values on the adapters may be modified for greater accuracy over a known voltage range.
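To make the adapters concrete: a reading taken through the 1:100 attenuator just needs to be scaled back up, and a PT100 reading can be turned into a temperature with the usual linear approximation (about 0.385 Ω per °C near 0 °C). The numbers below are illustrative; for precise work, calibrated coefficients (Callendar-Van Dusen) would be used instead.

```python
# Illustrative conversions for two of the adapters above. The sensor
# constants are the usual textbook values; real installations would use
# calibrated coefficients (Callendar-Van Dusen) instead.

PT100_R0 = 100.0        # ohms at 0 degC
PT100_ALPHA = 0.385     # approx. ohms per degC near 0 degC
ATTENUATION = 100.0     # the differential attenuator divides by 100


def attenuated_to_actual(measured_volts):
    """Undo the 1:100 differential voltage attenuator."""
    return measured_volts * ATTENUATION


def pt100_to_celsius(resistance_ohms):
    """Linear PT100 approximation: R = R0 + alpha * T."""
    return (resistance_ohms - PT100_R0) / PT100_ALPHA


if __name__ == "__main__":
    print(attenuated_to_actual(0.48))      # 0.48 V at the ADC -> 48 V input
    print(pt100_to_celsius(109.73))        # roughly 25 degC
```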
Motherboard Top Side (ATLAS DCS)
[Board photograph: ADC inputs for channels 0-15, 16-31, 32-47, and 48-63; ports A, C, and F; the SPI connector; the power cable and CAN bus connectors; the reset button; and jumpers J3, J4, J10, J11.]
ELMB System Topology (ATLAS DCS)
[Diagram: two CAN branches, Branch_1 and Branch_2, each terminated with 120 Ω resistors at both ends.]
Tools
– Device Editor/Navigator
  – main interface to the Framework
  – system management: installation, login, etc.
  – configuration and operation of devices
– Generic External Handler
  – to incorporate C++ code into panels and the ctrl manager
  – easier to use than standard PVSS C++
– Component Installation
– FSM
  – high-level view of the experiment
  – building the Controls Hierarchy
– Trending
  – simplify & extend PVSS trends (templates, tree, etc.)
– Tree Browser
  – tree widget for Windows and Linux
– Communication Protocols
  – DIM
  – DIP
Other Ingredients
– Guidelines to produce a coherent control system
  – look and feel (e.g. colours, fonts, layout)
  – alarm classes
  – naming convention
  – exception handling
  – file organization
  – and so on
– Libraries
  – setting of addresses, alerts, archiving, etc.
  – list manipulation
  – exception handling
– Examples
  – panels
  – buttons
  – scripts
– DCS Course
  – introduction to PVSS
  – use of several Fw tools
  – connection to real hardware/simulator
  – available as a five-day course from the Training Service
Current Version
The JCOP Framework now exists in a redesigned form.
– Version 2.0.9
  – released in July 2003
  – runs on Linux and Windows
  – PVSS 3.0.1
– Internal intermediate releases
  – to meet user-specific needs
  – small improvements / bug fixes
A Practical Example
How to integrate one CAEN HV board into the
control system
– add the crate
• ~100 pieces of information
– add the board
• ~30 pieces of information
– add the channels
• ~32 pieces of information per channel
– configure crate and board
• alert handling
• archiving of values
– connect to the hardware
• connection via OPC
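The bookkeeping in this example, roughly 100 items for the crate, 30 for the board, and 32 per channel, plus alert and archiving configuration, can be pictured as building a nested structure and flagging which elements are alarmed or archived. The sketch below is a generic Python illustration of those steps; the item counts come from the slide, while the names and the choice of alarmed/archived elements are invented.

```python
# Sketch of the integration steps above: declare a crate, a board and its
# channels as nested data points, then mark which elements get alert handling
# and archiving. Item counts follow the slide; names and the choice of
# alarmed/archived elements are invented for the illustration.

def make_channel():
    return {
        "settings": {"v0": 0.0, "i0": 0.0, "rampUp": 50.0, "tripTime": 1.0},
        "readings": {"vMon": 0.0, "iMon": 0.0, "status": 0},
    }                                     # ~32 items per channel in reality


crate = {
    "info": {"model": "SY1527", "slots": 16},        # ~100 items in reality
    "board0": {                                       # ~30 items in reality
        "info": {"model": "HV board"},
        "channels": {f"ch{n:02d}": make_channel() for n in range(32)},
    },
}

# Configuration step: alert handling on status, archiving on vMon.
alert_config = [f"board0.channels.ch{n:02d}.readings.status" for n in range(32)]
archive_config = [f"board0.channels.ch{n:02d}.readings.vMon" for n in range(32)]

print(len(crate["board0"]["channels"]), "channels declared")
print(len(alert_config), "elements with alert handling,",
      len(archive_config), "elements archived")
# Final step on the slide: connect these addresses to the hardware via OPC.
```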
Experience
Activities
Performance Tests
Activities using PVSS
LHC Project
– all five experiments have prototype projects
– used for testbeam controls
– used for cryo, vacuum, QPS, MPS, …
Fixed Target Experiments
– COMPASS
• framework developed
• in production
– HARP
• using experience and framework from COMPASS
• (was) in production
– NA60
• benefit from COMPASS developments
• in production
Sheer Number of Systems
– 130 systems simulated on 5 machines
– 40,000 DPEs per system, ~5 million DPEs in total
– interconnected successfully
(Performance-test slides: Paul Burkimsher, CERN IT/CO)
Alert Avalanche Configuration
[Diagram: two systems, PC91 and PC94, each with its own UI]
– 2 Windows XP machines; each machine = 1 system
– each system has 5 crates declared × 256 channels × 2 alerts per channel ("voltage" and "current")
– 40,000 DPEs total in each system
– each system showed alerts from both systems
Traffic & Alert Generation
– Simple UI script: repeat { delay D ms; change N DPEs }
– Traffic rate written as D \ N (bursts, not changes/sec)
– Option to provoke alerts
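The "simple UI script" described above, wait D milliseconds and then change N data point elements in one burst, optionally pushing them across an alert limit, is easy to picture in Python. This is only a stand-in for the PVSS CTRL script used in the tests; the alert limit and value ranges are invented.

```python
# Stand-in for the simple traffic-generation script used in the tests:
# every D milliseconds change N values in one burst, optionally pushing
# them over an (invented) alert limit to provoke alerts.

import random
import time

ALERT_LIMIT = 90.0


def generate_traffic(delay_ms, n_changes, bursts, provoke_alerts=False):
    values = {}
    for burst in range(bursts):
        time.sleep(delay_ms / 1000.0)
        for i in range(n_changes):
            low, high = (85.0, 100.0) if provoke_alerts else (0.0, 80.0)
            values[f"dpe_{i:05d}"] = random.uniform(low, high)
        alerts = sum(v > ALERT_LIMIT for v in values.values())
        print(f"burst {burst}: changed {n_changes} DPEs, {alerts} above limit")


if __name__ == "__main__":
    # "10,000 \ 1,000" in the slides' notation means a 10,000 ms delay and
    # 1,000 changes per burst; a shorter delay is used here for the demo.
    generate_traffic(delay_ms=500, n_changes=1000, bursts=3, provoke_alerts=True)
```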
Alert Avalanche Test Results – I
– You can select which system's alerts you wish to view.
– The UI caches ALL alerts from ALL selected systems, so it needs sufficient RAM (5,000 CAME + 5,000 WENT alerts needed 80 MB).
– Screen update is CPU-hungry and an avalanche takes time: 30 sec for 10,000 lines.
Alert Avalanche Test Results – II
Too many alerts lead to progressive degradation:
1) screen update is suspended and a message is shown
2) evasive action: the event manager eventually cuts the connection to the UI, and the UI shuts itself down
– The EM correctly processed ALL alerts and LOST NO DATA.
Alert Avalanche Test Results – III
– Alert-screen update is CPU intensive.
– Scattered alert screens behave the same as local ones (TCP).
– "Went" alerts that are acknowledged on one alert screen disappear from the other alert screens, as expected.
– Bugs we reported have now been fixed.
Agreed Realistic Configuration
[Diagram: a 3-level tree of machines, mixed Windows (W) and Linux (L)]
– 3-level hierarchy of machines
– only ancestral connections, no peer links; only direct connections allowed
– 40,000 DPEs in each system, one system per machine
– mixed platform (W = Windows, L = Linux)
Viewing Alerts Coming from Leaf Systems
[Same hierarchy: PC91 at the top; PC92-PC95 in the middle layer; PC03-PC13 as leaves]
– 1,000 "came" alerts generated on PC94 took 15 sec to be absorbed by PC91; all 4 (2) CPUs in PC91 shouldered the load.
– Additional alerts were then fed from PC93 to the top node: the same graceful degradation and evasive action was seen as before. PC91's …
Rate Supportable from 2 Systems
[Same machine hierarchy as above]
– Set up a high but supportable rate of traffic (10,000 \ 1,000) on each of PC93 and PC94, feeding PC91.
– PC93 itself was almost saturated, but PC91 coped (~200 alerts/sec on average, dual CPU).
Surprise Overload (Manual)
[Same machine hierarchy as above]
– Manually stop PC93: PC91 pops up a message.
– Manually restart PC93: the rush of traffic to PC91 caused PC93 to overload; PC93's EM killed PC93's DistM, and PC91 pops up a message.
Reconnection Behaviour
No gaps in the Alert archive of the
machine that isolated itself by taking
evasive action. No data was lost.
It takes about 20 sec for 2 newly
restarted Distribution Managers to get
back in contact.
Existing (new-style!) alert screens are
updated with the alerts of new systems
that join (or re-join) the cluster.
Is the Load of Many Trends Reasonable?
[Same machine hierarchy as above]
Trend windows were opened on PC91 displaying data from more and more systems, on a mixed platform.
Is Memory Usage Reasonable?
RAM (MB):
– steady state, no trends open on PC91: 593
– open the plot ctrl panel on PC91: 658
– on PC91, open a 1-channel trend window from PC03: 658
– on PC91, open a 1-channel trend window from PC04: 657
– on PC91, open a 1-channel trend window from PC05: 657
– on PC91, open a 1-channel trend window from PC06: 658
– on PC91, open a 1-channel trend window from PC07: 658
Is Memory Usage Reasonable?
RAM (MB):
– steady state, no trends open on PC91: 602
– on PC91, open 16 single-channel trend windows from PC95 Crate1 Board1: 604
– on PC91, open 16 single-channel trend windows from PC03 Crate1 Board1: 607
– on PC91, open 16 single-channel trend windows from PC04 Crate1 Board1: 610
Test 34: Top Node Plotting Data from the Leaf Machines' Archives
– Performed excellently.
– The test ceased when we ran out of screen real estate to show even the iconised trends (48 of them).
Would Sustained Overload Give Trend Problems?
– High traffic (400 ms delay \ 1000 changes) on PC93, as a scattered member of PC94's system.
– PC94's own trend plot could not keep up.
– PC91's trend plot could not keep up.
– "Not keep up" means…
Conclusions & Outlook – where to go from here?
Conclusions
– PVSS today forms the backbone of many control systems at CERN
– all experiments build their DCSs using the JCOP Framework
– many PLC-based applications are implemented using the UNICOS Framework
– the chosen combination has proven to be powerful and adequate for the needs of the LHC experiments and the accelerator controls
Outlook
– All LHC experiments will use PVSS II and the JCOP Framework as the basis of their final DCSs.
– More and more domains will implement higher-level controls using PVSS II and the existing customizations.
– CERN will continue to follow ETM's developments and also to contribute to the evolution of the software system.
Questions?