IBM Tivoli Network Manager 3.9, 4.1, 4.1.1
Adding 3rd party event flows for topology enrichment and
Root Cause Analysis
Version 2.0
By Rob Clark
October 13, 2014
Table of Contents
ABSTRACT
EVENT FIELDS
EVENT AUTOMATION
NETWORK MANAGER IS A SOURCE OF EVENTS FOR OMNIBUS
VERIFYING INTEGRITY OF EVENTS FOR RCA
   Polling reference point
   Event fields used by Network Manager
   To RCA or not to RCA?
   Can Root cause events themselves be symptoms?
SET UP AEL TO VERIFY EVENT INTEGRITY
NETWORK MANAGER ROOT CAUSE ANALYSIS RULES
SAME ENTITY SUPPRESSION
CONTAINED ENTITIES SUPPRESSION
CONNECTED ENTITY SUPPRESSION
ISOLATED SUPPRESSION – (DOWNSTREAM SUPPRESSION)
LOGICAL RELATED SUPPRESSION
COPYRIGHT NOTICES
Abstract
At the end of this guide, you will have a working understanding of the event process and
how certain event fields direct it. Event fields drive the Network Manager and OMNIbus
processes for event enrichment, event reduction, and the root cause life cycle. They are used
to identify the root cause for intra-device relationships, such as cards and logical and
physical interfaces, and also for neighbor relationships, such as devices downstream from
an outage.
With knowledge of the fields and how they should be constructed in probe rules files, you
will be able to introduce events from other sources to leverage these automated processes.
An automated tool is presented at the end that will identify events with key fields missing
and display the consequence of this so that you can correct it and thereby improve the
value of the events.
The event fields are in the alerts.status table in OMNIbus and the full set is described in
the documentation. Look for Reference->Object Server Tables:
http://www.ibm.com/support/knowledgecenter/SSSHTQ_7.4.0/com.ibm.netcool_OMNIbus.doc_7.4.0/omnibus/wip/common/reference/omn_ref_tab_objservtables.html
Event fields
Event automation
OMNIbus events are generated using probes. There are a large number of probes available. Each probe has a rules file that provides the instructions on what information to include in the events. It is important that key information is standardized so that other processes can operate on the events in a uniform manner. For instance, OMNIbus has a “deduplication” automation that relies on events representing the same situation having the
same Identifier. Instead of adding the same event again, OMNIbus simply increments the
“Tally” counter for the event with the same “Identifier” field.
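As a rough illustration, a probe rules file typically builds the Identifier from fields that uniquely describe the situation, so that repeat occurrences deduplicate instead of piling up. The exact field choices below are an example, not a prescription:

   # Illustrative only: build a stable Identifier so repeated events deduplicate.
   # Pick fields that uniquely describe the situation for your event source.
   @Identifier = @Node + " " + @AlertKey + " " + @AlertGroup + " " + @Type + " " + @Agent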
OMNIbus has two other automations that help Network Manager events. The “generic_clear” automation matches resolution events (Type=2) with the corresponding problem events (Type=1) and sets the severities of both to zero. The “delete_clears” automation runs every two minutes and deletes any event with Severity=0. Resolution and problem events are matched on the Node, AlertKey, AlertGroup, and Manager fields of
alerts.status. It is therefore important that resolution events are not sent to OMNIbus with a Severity of zero, otherwise they might be removed by the “delete_clears” automation before they have
resolved the problem event through the “generic_clear” automation.
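As a sketch of what this implies for a probe rules file, a problem event and its later resolution event should carry the same Node, AlertKey, AlertGroup, and Manager, differ only in Type, and the resolution should arrive with a non-zero Severity. The trap names and tokens below are hypothetical:

   # Hypothetical problem event
   if (match($trap-name, "acmeLinkDown"))
   {
      @Type = 1                      # problem
      @Severity = 4
      @AlertKey = $ifIndex
      @AlertGroup = "Link status"
   }
   # Matching resolution event: same Node/AlertKey/AlertGroup/Manager, Type=2,
   # non-zero Severity so delete_clears does not remove it prematurely
   if (match($trap-name, "acmeLinkUp"))
   {
      @Type = 2                      # resolution
      @Severity = 2
      @AlertKey = $ifIndex
      @AlertGroup = "Link status"
   }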
Other processes, such as Network Manager, Impact, and other OMNIbus automations,
work on the events in OMNIbus to enrich the events with additional related information
or correlate them to find the root cause and consequent symptom events.
Network Manager is a source of events for OMNIbus
The Network Manager monitoring component is responsible for polling the network and
generating alerts for exceptional events – availability states and threshold breaches from
SNMP metrics and expressions. These events are generated by the nco_p_ncpmonitor
probe, which, out of the box, handles all of the events produced by the monitoring system.
The SNMP probe (mttrapd) is usually installed with Network Manager and it receives
unsolicited SNMP traps from the network and converts them to events for OMNIbus.
The Netcool Knowledge Library (NCKL) is also installed and consists of an extensive set
of rules for thousands of SNMP traps. This library is updated and released periodically.
The syslog probe is also shipped with Network Manager and it can be configured to generate events for OMNIbus. NCKL includes rules for many situations identified by syslog.
Verifying integrity of events for RCA
Figure 1: Event Flow
Polling reference point
The reference point is required as a starting point by RCA when performing downstream
suppression of events. If Network Manager knows the layer 2 connections, it can calculate what devices are downstream from the poller for any device. This implies that the
Network Manager server hosting the poller must be part of the discovered topology. If
for some reason it cannot be (maybe it is behind an unmanaged router), then set the
NcpServerEntity field in EventGatewaySchema.cfg to a suitable reference
device close to the poller in the discovered topology.
If the Network Manager server is itself in scope and discovered, the NcpServerEntity
field can be an empty string. In this case, the event gateway will try to find the local host
in the topology, based on the IP addresses of the local host.
If the Network Manager server is not in the scope of the discovery, the name (e.g. IP address or DNS name) of the ingress interface should be used. This is the interface within
the discovery scope from which network packets are transmitted to/from the Network
Manager server. The diagram below illustrates this concept - the IP address of the circled
interface would be given for the NcpServerEntity.
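As a rough sketch of the setting (the surrounding stanza and the other columns in config.defaults vary by version, and the address below is a hypothetical example), the entry in EventGatewaySchema.cfg looks something like this:

   insert into config.defaults
   (
      NcpServerEntity
   )
   values
   (
      "10.1.2.3"
   );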
TIP: A common cause for RCA not identifying downstream symptom events is the
lack of a connected path from the reference point to the downstream devices. Use the
Hop View to find the reference device (Network Manager server or NcpServerEntity in
EventGatewaySchema.cfg) and expand the number of hops using the layer 2 layout and
walk the network to verify the path exists.
Event fields used by Network Manager
LocalNodeAlias
This is the hostname or IP address of the device the event relates to.
If this field is empty then Network Manager will ignore this event. Set this field in the
probe rules file.
NmosEventMap
This field is new for ITNM 3.9 and specifies the event map that the Network Manager
Event Gateway will use to handle the event. Typically this field is set in the probe rules
file for each event. The event map can also be specified in the config.precedence
table in NcoGateInserts.cfg which is used for backward compatibility.
The standard event maps are defined in the EventGatewaySchema.cfg file and specify
the event gateway stitcher to run from
$NCHOME/precision/eventGateway/stitchers/. EntityFailure is a standard
event map suitable for events that indicate a failure of the device and that use the LocalNodeAlias field to identify the device.
The database table ncmonitor.gwplugineventmaps is used by the event gateway
to determine which plugins, such as RCA, each event is sent to.
The precedence value, used by the root cause engine to handle multiple root cause events
for the same entity, can be set in the NmosEventMap by appending it thus:
@NmosEventMap = "EntityFailure.300"
EventId
This is the name of the event identifying a specific situation type. For example
NmosPingFail is the event Network Manager issues when an ICMP poll fails.
If this field is empty then Network Manager will ignore this event. Set this field in the
probe rules file.
Network Manager provides event names for all of its out-of-the-box poll definition threshold breaches. When you create a new poll definition with a threshold defined, Network Manager
will create the name of the event for you based on the name you gave the poll definition.
TIP: After creating a new poll definition, the GUI will create the eventId, but you
won’t see it in the GUI until you exit the Poll Definition page and re-enter. If you
want to create a variant of a poll definition with a different threshold value, then use
the Copy function so that you retain the same eventId.
For events created by non-Network Manager probes, ensure an EventId name is created
for each situation type in the probe rules file if you want Network Manager to enrich it or
pass it through the root cause engine.
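Pulling the fields above together, a minimal, hypothetical fragment for a third-party probe rules file might look like the following. The trap name, token names, and event map choice are illustrative assumptions, not part of the product:

   # Hypothetical third-party trap: a device failure we want RCA to handle
   if (match($trap-name, "acmeChassisFailure"))
   {
      @EventId = "AcmeChassisFailure"          # unique name for this situation type
      @LocalNodeAlias = $source-address        # IP address or hostname known to the topology
      @NmosEventMap = "EntityFailure.600"      # standard event map, precedence 600
      @Type = 1
      @Severity = 5
   }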
TIP: A common reason for failing to find a match is that the device has not been discovered, or that ITNM did not find a DNS name when the hostname is used. To see why, take the
LocalNodeAlias contents and check the topology database using ncp_oql on the
command line, or in the GUI select Administration->Network->Management Database
Access.
Select the Model service and run the following queries,
select * from master.entityByName where ExtraInfo->m_DNSName = '<hostname-in-LocalNodeAlias>';
select * from master.entityByName where Address(2) = '<IP-address-in-LocalNodeAlias>';
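From the command line, the same queries can be run with ncp_oql against the Model service; a sketch, assuming a domain called NCOMS (substitute your own domain, and add credentials if your installation requires them):

   ncp_oql -domain NCOMS -service Model
   # enter the query at the prompt, ending with a semicolon, for example:
   # select * from master.entityByName where Address(2) = '<IP-address-in-LocalNodeAlias>';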
If no event map is found, the genericip-event map is used by default; it only
looks up the entityId for NmosObjInst and does not send the event to any plugin.
NmosObjInst
Network Manager is responsible for populating this field. This is Network Manager’s
internal entityId for the device relating to the event. The ncp_g_event process reads
all events in OMNIbus with a non-empty LocalNodeAlias field and searches the topology database for a match.
If it does not find a match then this field will be blank and Network Manager will do no
further processing on it.
Events generated by the Network Manager poller pass the entityId of the entity relating
to the event, for example the interface or chassis, in the NmosEntityId field. This is
used to look up the device entityId and populate NmosObjInst.
For all other events, the Network Manager event gateway will use the LocalNodeAlias to
look up the IP address or hostname, find the entityId of the device, and populate
NmosObjInst. If an IP address is used, it will find the device hosting that address.
The Network Manager GUI uses both NmosObjInst and NmosEntityId to identify
events in the topology status calculations.
NmosCauseType
If the event is sent to the Root Cause engine, the NmosCauseType field will be
changed from “Unknown” to either “Root Cause” or “Symptom”.
NmosSerial
If the event is tagged as a symptom, this field will be set to the OMNIbus serial number
of the event identified as the root cause.
Severity
Indicates the alert severity level, which provides an indication of how the perceived capability of the managed object has been affected. The color of the alert in the event list is
controlled by the severity value:
• 0 - Clear
• 1 - Indeterminate
• 2 - Warning
• 3 - Minor
• 4 - Major
• 5 - Critical
Network Manager will boost the severity of an event that becomes the root cause. This is
controlled in the OMNIbus automation severity_from_causetype.
NmosManagedStatus
Specifies the managed status of the network entity for which the event was raised. When
a network entity is unmanaged, the Network Manager polls are suspended and events
from other sources are tagged as unmanaged. This field allows you to filter out events
from unmanaged entities. These are the possible values for this field.
• 0 - Managed
• 1 - Operator unmanaged
• 2 - System unmanaged
• 3 - Out of scope
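For example, an event list or AEL filter that shows only events from managed entities can be as simple as the following condition (a sketch; combine it with whatever other conditions your view already uses):

   NmosManagedStatus = 0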
NmosDomainName
This field contains the Network Manager domain name of the device. If you have multiple Network Manager domains you need to pay attention to this field. Mostly it is
automatic: the nco_p_ncpmonitor probe will populate it for you. However, other
probes, including mttrapd and syslog, do not populate this field. This may be okay, as the
event gateway will make a best effort to find the device and populate the field if it is empty.
There are two things to be aware of,
a) If domains contain overlapping device names, then the first domain to process this
event will win (essentially random).
b) If no device exists for that event in the topology, it will continue to be processed
by all domains. This may be a problem if there are many such events, in which
case you can modify the filter in the config.nco2ncp table in EventGatewaySchema.cfg.
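As a loose sketch only (the column name EventFilter is an assumption here, and the filter text is just an example, so check your version's EventGatewaySchema.cfg before copying anything), the filter is an OMNIbus-style condition that can be tightened to drop such events:

   insert into config.nco2ncp
   (
      EventFilter
   )
   values
   (
      "Severity > 1 and LocalNodeAlias <> ''"
   );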
Node
This field is populated by the probe rules file. The events created by the Network Manager monitoring system tend to set this field to the IP address of the device. If you would
prefer to see the device name (entityName) instead, edit the file
NCHOME/probes/<platform>/nco_p_ncpmonitor.rules
and replace the following stanza
if (exists($ExtraInfo_ACCESSIPADDRESS))
{
   @Node = $ExtraInfo_ACCESSIPADDRESS
}
else
{
   @Node = $EntityName
}
with
@Node = $EntityName
Once the change has been made, the probe needs to be restarted. You can do this
by finding the PID of the nco_p_ncpmonitor process and issuing the command,
kill <pid>
It will restart automatically.
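For example (the exact output depends on your platform, and the automatic restart assumes the probe is running under the ITNM process controller, as is normally the case):

   ps -ef | grep nco_p_ncpmonitor | grep -v grep   # note the process ID
   kill <pid>                                      # the process controller restarts the probe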
For the full set of alerts.status fields, look for Reference->Object Server Tables in
the documentation:
http://www.ibm.com/support/knowledgecenter/SSSHTQ_7.4.0/com.ibm.netcool_OMNIbus.doc_7.4.0/omnibus/wip/common/reference/omn_ref_tab_objservtables.html
To RCA or not to RCA?
This section is useful if you want to add events from other sources and correlate them
using Network Manager’s topology and RCA.
Network Manager’s root cause analysis engine (RCA) has several rules based on correlation analysis of raw events. For a given situation it will determine the root cause and tag
the other events as symptoms. Starting with version 3.9, these rules are implemented using the stitcher language in
$NCHOME/precision/eventGateway/stitchers/RCA/. They handle the life
cycle of the correlated situation as incoming events provide new information. Some are
specialized for neighbor relationships with OSPF and BGP. Others handle containment
related events. For instance, if a card fails, it will be marked as the root cause while other
interface events will be marked as symptoms. Others handle the classic topological correlation for downstream effects of outages.
An EventMap contains the name of the stitcher that will be run against the event and a
flag that can be set if these events are likely to flap and should be handled so as to minimize
false alerts.
The stitcher will attempt to find the entityId of the device or interface and set
NmosObjInst (if not already defined), using the LocalNodeAlias to match against the
DNS name or IP address. Other stitchers may assume, for example, that the LocalPriObj field
contains the ifIndex. The stitcher calls the StandardEventEnrichment stitcher to set these
and other event fields from topology data; StandardEventEnrichment can also be
used to enrich other fields from the topology.
The database table ncmonitor.gwplugineventmaps is used by the event gateway
to determine which plugins, such as RCA, each event is sent to.
The precedence table is used for events with no NmosEventMap; it maps the EventId
to an EventMap, together with a value for the precedence. This value is important because it
is used to weigh multiple events for the same entity, such as an interface: the event with the
highest weighting is declared the cause and the others are marked as symptoms.
A precedence of 0 indicates the event should never be the root cause: NmosCauseType can only be set to
Unknown or Symptom.
By convention, 10,000 indicates the event should always be marked as root cause and
can never be marked as a Symptom: NmosCauseType can only be set to Unknown or Root Cause.
In-between values (typically 300, 600, or 900) indicate how likely that
event is to be the root cause.
Sometimes an event will appear as the root cause, marking another as the symptom, when
really it should be the other way around. At this point, you need to identify the two event
maps and adjust their precedence weightings.
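A lightweight way to adjust the weighting, following the convention described above, is to change the precedence suffix appended to the event map name in the probe rules file; the values here are illustrative:

   # Event that should win when both arrive for the same interface
   @NmosEventMap = "EntityFailure.900"
   # Event that should be suppressed as the symptom
   @NmosEventMap = "EntityFailure.300"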
UNINTENDED CONSEQUENCES: Creating Poll Definitions for pinging.
Most threshold events result from exceeding some SNMP performance metric and are not
suitable for RCA, so the default EventMap ItnmMonitorEventNoRca is used. If you
create a new poll definition for ICMP polling this is not what you want, but it is often what you get.
To avoid this, create the new ping poll definition by cloning the default poll definition
so that the same EventId NmosPingFail is used.
If you have already created one and have noticed that those events are never being
marked as root cause or symptom (NmosCauseType stays Unknown), then add an
insert statement to NcoGateInserts.cfg. Assuming the EventId is POLL-PingFail, you would add this line:
insert into config.precedence values (300, "PrecisionMonitorEvent", "POLL-PingFail");
Can Root cause events themselves be symptoms?
A root cause event can itself be suppressed by other, higher order events, meaning that the root
cause of an event may be a symptom that has been further suppressed. This is normal behavior, and it makes sense. As an example, if we get a linkDown trap, a ping fail,
and a syslog message about an interface failing, one of these will become the root
cause and suppress the other two; by default, the linkDown suppresses the ping and syslog events. If the chassis subsequently fails, the chassis root cause will suppress the interface root cause, i.e. the priority is to fix the chassis, then fix the interface. From either of the
two suppressed interface failures, the "Show Root Cause" tool will display the original, but
now suppressed, linkDown root cause, and from this suppressed linkDown you can then
use the same "Show Root Cause" tool again to get to the top level.
Set up AEL to verify event integrity
Use this section to set up a system for verifying the integrity of events for RCA. This
adds a field to each event that will tell you if the event is missing a field and how to fix it.
Setup
1. Run the ItnmEventVerify.sql script shown in the figure below to
a. add two new fields to the OMNIbus alerts.status database table:
NmosRuleName, NmosMessage
b. add the NmosVerify automation that will evaluate each event and populate
these fields.
2. AEL view
Create a new view called, say, VerifyRCAEvents, with the following fields:
Field Name       Set by                           Comments
Serial           OMNIbus                          Unique number for each event in OMNIbus
Node             Probe rules file                 Device name relating to the event
Summary          Probe, maybe enriched by others  Description of the event
NmosCauseType    RCA                              RCA sets this field from Unknown to Root Cause or Symptom
NmosSerial       RCA                              For symptom events, this field contains the Serial number of the root cause event
NmosMessage      NmosVerify trigger               If not blank, this field identifies events that do not meet the criteria for entering RCA; it describes which field is missing and how to fix it
EventId          Probe rules file                 Unique name to identify the situation
LocalNodeAlias   Probe rules file                 IP address or hostname of the device relating to the event
NmosObjInst      ncp_ncogate                      The entityId of the device represented by LocalNodeAlias
NmosRuleName     NmosVerify trigger               The root cause rule name to be used by the event. Presence of this field indicates the event will be sent to RCA, but the EventMap will indicate whether it will participate in root cause analysis or not
NmosEventMap     Probe rules file                 The EventMap determines how the event is handled by the event gateway. The EventMaps are configured in $NCHOME/etc/precision/EventGatewaySchema.cfg and specify the stitcher that will be run
ALTER TABLE alerts.status ADD COLUMN NmosMessage varchar(255);
CREATE OR REPLACE TRIGGER GROUP NmosVerifyGroup;
GO

CREATE OR REPLACE TRIGGER NmosVerify
GROUP NmosVerifyGroup ENABLED TRUE
PRIORITY 20
COMMENT 'Verifies the RCA Nmos columns and reports invalid conditions in the NmosMessage field.'
EVERY 10 SECONDS
DECLARE invalidmsg char(255);
BEGIN
   FOR EACH ROW invalidRca in alerts.status where
      invalidRca.Manager <> 'ConnectionWatch' AND invalidRca.AlertGroup <> 'ITNM Status'
   BEGIN
      IF (invalidRca.LocalNodeAlias = '') THEN
         set invalidmsg = 'Modify ' + invalidRca.Manager + ' rules file to set LocalNodeAlias to the IP address or Hostname that matches an existing device in the topology.';
      ELSEIF (invalidRca.EventId = '') THEN
         set invalidmsg = 'Modify ' + invalidRca.Manager + ' rules file to set the EventId for this event. It must match an event map in the config.precedence table (NcoGateInserts.cfg) if NmosEventMap field is not used.';
      ELSEIF (invalidRca.NmosObjInst = 0) THEN
         set invalidmsg = 'LocalNodeAlias (' + invalidRca.LocalNodeAlias + ') does not match an existing device in the topology. It may not be discovered or the value does not match.';
      ELSE
         set invalidmsg = '';
      END IF;

      IF (invalidmsg <> invalidRca.NmosMessage) THEN
         UPDATE alerts.status via invalidRca.Identifier SET NmosMessage = invalidmsg;
      END IF;
   END;
END;
GO
Contents of ItnmEventVerify.sql
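To apply the script, it can be piped through the nco_sql client; a sketch, assuming an ObjectServer named NCOMS and the default root user (adjust the server name and credentials for your installation):

   $OMNIHOME/bin/nco_sql -server NCOMS -user root < ItnmEventVerify.sql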
Network Manager Root Cause Analysis Rules
RCA consists of five basic types of correlation for analyzing events to find the important
events and suppress symptom events. This section will give you an understanding of
how Network Manager determines events to be root cause or symptoms.
Same Entity Suppression
This scenario represents the failure of a single entity, in this case a single interface.
Events on the same entity are suppressed using a user-definable precedence value; the
most informative events have the highest precedence.
ITNM has been pre-configured with hundreds of precedence values, and these can easily be added to at any stage to cater for new event types.
In this example there is a mix of both passive and active events; SNMP traps and syslog
messages via OMNIbus probes, and active poll events (ping and SNMP) from ITNM.
These events contain a mixture of IP addresses, DNS names, trap source and ifIndex information that would require some time to correlate manually.
ITNM will determine that each event is for the same network entity and escalate the most
informative event (in this case the syslog message) to the root cause, suppressing the other less informative events.
• Event sources
  • Active polling (ICMP, SNMP)
  • Passive reception (traps, syslog)
• Expected Results
  • Both passive and active events are correlated, with the most informative event being escalated to a root cause, suppressing the others.
Pre-RCA
After RCA
Contained Entities Suppression
This scenario shows how ITNM correlates events on entities that contain other entities,
a parent-child relationship, in this case a card and its dependent interfaces.
The events here are passive and active (syslog and SNMP polls). The RCA engine will
establish the correct network component to raise each event against, and will then look
for a containment relationship in the topology.
In this case the card “contains” the interfaces, and is escalated to a root cause, suppressing the symptomatic interface failure events.
These parent-child relationships are defined automatically during the discovery process
and do not require any configuration or administration.
This containment RCA method works at any level of containment: chassis to interface,
card to interface, interface to sub-interface, etc.
• Event sources
  • Active polling (SNMP)
  • Passive reception (syslog)
• Expected Results
  • The card failure syslog message suppresses the interface failure messages from the ITNM SNMP poll.
Pre-RCA
After RCA
Visualization
Connected Entity Suppression
This scenario represents the typical failure of a connection between two neighbouring
devices. This is indicated by active (ICMP, SNMP) and passive (trap, syslog) events.
The events generated do not indicate which end of a connection has failed – merely that
there is a problem with the link.
In these cases ITNM will declare one end of the link a root cause, locally suppressing
events on the same interface and remotely suppressing the events on the neighbour interface.
It would be possible to modify the behaviour of the RCA engine to change the Summary
text of the Root Cause event to be “Connection failure between A and B” if required.
• Event sources
  • Active polling (ICMP, SNMP)
  • Passive reception (traps, syslog)
• Expected Results
  • A root cause is created for one end of the link, suppressing local and remote symptomatic events.
Pre-RCA
After RCA
Visualization
Isolated Suppression – (downstream suppression)
This scenario represents the failure of an isolation point – a single point of failure within
the topology. In this case an interface serving an area of the network – for example the
connection between a PE and a CE and associated CPE devices.
In this case the most informative event on the isolating interface (a syslog message) is
declared as the root cause of its local events (traps, ping fails) and the failures from the
isolated devices.
This scenario also illustrates how ITNM treats common transient or flapping events, e.g.
linkDown SNMP traps.
These events are not permitted to escalate to a root cause and perform suppression until
they have existed for a period of time (30 seconds by default). This mechanism prevents
false-positive or rapidly shifting root causes from being declared.
This behaviour can be applied to any event type from any suitable source.
• Event sources
  • Active polling (ICMP, SNMP)
  • Passive reception (traps, syslog)
• Expected Results
  • The failure of the isolating interface becomes the root cause, suppressing the events from the isolated devices.
Pre-RCA
After RCA
Visualization
Logical Related Suppression
This scenario shows how the ITNM RCA engine can be used to correlate physical and
logical failures where the events are not from physically direct or dependent neighbours.
The events here represent the physical failure of an interface (ping fail and link down
trap) and the resulting logical failures from related devices – here OSPF adjacency
changes.
The RCA engine uses both the discovered topology and the incoming event information
to correlate the physical failure and the remote logical failures.
The link down trap (being the most informative) suppresses the local interface ping fail
and the remote OSPF adjacency change.
• Event sources
  • Active polling (ICMP, SNMP)
  • Passive reception (traps, syslog)
• Expected Results
  • The logical failures are suppressed by the underlying physical failure.
Pre-RCA
After RCA
Visualization
Thanks to Andrew “Spike” Hepburn for the RCA description taken from IBM ITNM
demo sales material and his advice.
Copyright Notices
Copyright IBM Corporation 2014. All rights reserved. May only be used pursuant
to a Tivoli Systems Software License Agreement, an IBM Software License
Agreement, or Addendum for Tivoli Products to IBM Customer or License
Agreement. No part of this publication may be reproduced, transmitted,
transcribed, stored in a retrieval system, or translated into any computer
language, in any form or by any means, electronic, mechanical, magnetic,
optical, chemical, manual, or otherwise, without prior written permission of
IBM Corporation. IBM Corporation grants you limited permission to make hardcopy
or other reproductions of any machine-readable documentation for your own use,
provided that each such reproduction shall carry the IBM Corporation copyright
notice. No other rights under copyright are granted without prior written
permission of IBM Corporation. The document is not intended for production and
is furnished “as is” without warranty of any kind. All warranties on this
document are hereby disclaimed, including the warranties of merchantability and
fitness for a particular purpose.
U.S. Government Users Restricted Rights -- Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.
Trademarks
IBM, the IBM logo, Tivoli, the Tivoli logo, AIX, Cross-Site, NetView, OS/2,
Planet Tivoli, RS/6000, Tivoli Certified, Tivoli Enterprise, Tivoli Enterprise
Console, Tivoli Ready, and TME are trademarks or registered trademarks of
International Business Machines Corporation or Tivoli Systems Inc. in the
United States, other countries, or both.
Lotus is a registered trademark of Lotus Development Corporation. Microsoft,
Windows, Windows NT, and the Windows logo are trademarks of Microsoft
Corporation in the United States, other countries, or both. UNIX is a
registered trademark of The Open Group in the United States and other
countries. C-bus is a trademark of Corollary, Inc. in the United States, other
countries, or both. PC Direct is a trademark of Ziff Communications Company in
the United States, other countries, or both and is used by IBM Corporation
under license. ActionMedia, LANDesk, MMX, Pentium, and ProShare are trademarks
of Intel Corporation in the United States, other countries, or both. For a
complete list of Intel trademarks, see
http://www.intel.com/sites/corporate/trademarx.htm. SET and the SET Logo are
trademarks owned by SET Secure Electronic Transaction LLC. For further
information, see http://www.setco.org/aboutmark.html. Java and all Java-based
trademarks and logos are trademarks or registered trademarks of Sun
Microsystems, Inc. in the United States and other countries.
Other company, product, and service names may be trademarks or service marks of
others.
Notices
References in this publication to Tivoli Systems or IBM products, programs, or
services do not imply that they will be available in all countries in which
Tivoli Systems or IBM operates. Any reference to these products, programs, or
services is not intended to imply that only Tivoli Systems or IBM products,
programs, or services can be used. Subject to valid intellectual property or
other legally protectable right of Tivoli Systems or IBM, any functionally
equivalent product, program, or service can be used instead of the referenced
product, program, or service. The evaluation and verification of operation in
conjunction with other products, except those expressly designated by Tivoli
Systems or IBM, are the responsibility of the user. Tivoli Systems or IBM may
have patents or pending patent applications covering subject matter in this
document. The furnishing of this document does not give you any license to
these patents. You can send license inquiries, in writing, to the IBM Director
of Licensing, IBM Corporation, North Castle Drive, Armonk, New York 10504-1785,
U.S.A.