Table of Contents
1 Table of Contents 2 BMC Atrium Discovery 2.1 Business Service Management 2.2 The Discovery Engine 2.3 Data Model 2.4 The Reasoning Engine 2.5 The Pattern Language 2.6 How data is stored and maintained 2.7 Administration 2.8 Delivery 2.9 Glossary 3 Legal Information
BMC Software delivers a BSM solution aligned with the key functions of an IT organization:

Planning: Capture and track information related to demand, supply, resources, finances, and risk.
Support: Reduce complexity; integrate customer support, change management, and asset management; and increase their efficiency.
Operations: Find and fix problems before your users do with analytics, event automation, and performance management. Automate repetitive, manual tasks to eliminate errors and get things done more quickly.

The IT Infrastructure Library (ITIL) defines standards for the processes that IT departments use to manage IT hardware and software, such as problem management, incident management, and change management. These processes ensure the availability of the critical IT services that sustain a business, such as banking, ordering, and manufacturing systems. ITIL configuration management needs reliable data about the components in the IT environment, and needs to understand the impact on key business services when that environment changes. For a large company, maintaining information about thousands of pieces of hardware and software is difficult.

Central to these ITIL processes is the Configuration Management Database (CMDB). The data in the CMDB feeds the related applications that perform ITIL processes. As an enabler of ITIL configuration management, BMC Atrium Discovery automates the population of the CMDB by exploring IT systems to identify hardware and software, and then creating CIs and relationships from the discovered data. With BMC Atrium Discovery, the Configuration Manager is able to:

Dramatically decrease CMDB population times, which accelerates the implementation of BSM and the applications that perform ITIL Service Operations processes.
Increase data accuracy by automatically and continuously updating configuration data in the CMDB.
Identify the IT components that comprise enterprise applications and business services.
The major features of BMC Atrium Discovery's discovery are:

Discovery UI: Simple management of the discovery process. Scanning levels can be set for individual discovery runs.
Exclude ranges: Simple management of IP ranges that must not be scanned.
Creation of Directly Discovered Data: Provides an exact record of what has been discovered. You can also run search queries across your data.
Reliable stopping and restarting: When Discovery is stopped, all runs are paused and their state is saved, so that each run can continue when Discovery is restarted.
Consolidation: Consolidation of scanned data from multiple scanning appliances to a single consolidation appliance.
Pattern-driven discovery: A pattern can invoke Discovery as required, for example to read a file or execute a command.
Data Model
The data model in BMC Atrium Discovery allows the data produced to be represented in a consistent and understandable manner. It stores different types of related data in separate areas of the model, which makes it clear what a particular piece of information represents and where it came from. The main types of data represented in the model are:

Directly Discovered Data
Knowledge Information
Inferred Information
Provenance Information

For more information about the BMC Atrium Discovery data model, see the BMC Atrium Discovery Node Lifecycle document.
Knowledge Information
Knowledge data is supplied through Technology Knowledge Updates and includes static information about products, publishers, and so on. Knowledge information describes hardware, applications, and software products and their versions, and is included in the patterns.
Inferred Information
Inferred information is reasoned from directly discovered data, and it is this information that will be of primary interest to most users. It includes items such as Hosts, Subnets, SIs (Software Instances), and BAIs (Business Application Instances), as well as classifications of discovered processes, groupings of discovered information into running instances of products, and so on. You can control the inference of any type of node by writing patterns, which are run by the reasoning engine. A Host node is not considered directly discovered information, because a level of inference is required to decide which Host a discovered endpoint corresponds to, and whether it corresponds to a host at all.
Provenance Information
By linking data items and their attributes to the evidence for them, BMC Atrium Discovery enables its data to be easily verified; a prerequisite for trusting it. This feature is called provenance. Provenance information is meta-information describing how the other information came to exist. It is created by the Reasoning Engine as it builds and maintains the model, and is stored as relationships between Directly Discovered Data items and the entities inferred from them: each inferred entity has a relationship back to its source. For example, there is a provenance relationship between each Software Instance (SI) and the data that caused the SI to be inferred, typically one or more processes, files, or the like.

The inference types are:

Primary: The existence of the evidence node is the reason that the inferred node was created.
Contributor: The evidence node provided information used in building the inferred node.
Associate: We know there is a relationship between the evidence node and the inferred node.
Relationship: We know a relationship exists because of the evidence node.
Deletion: The removal of the inferred node was due to withdrawal of the evidence node.
Maintaining pattern: The pattern maintaining a Software Instance.

For example, the following illustration shows software instance provenance information for an Apache Tomcat application server software instance.
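The inference types can be sketched as data on provenance relationships. The following Python sketch is illustrative only; the class, field, and node names are assumptions for illustration, not BMC Atrium Discovery APIs.

```python
from dataclasses import dataclass

# Hypothetical inference types, mirroring the list above.
INFERENCE_TYPES = {"Primary", "Contributor", "Associate", "Relationship", "Deletion"}

@dataclass(frozen=True)
class Provenance:
    """A provenance relationship from an inferred node back to its evidence."""
    inferred_node: str   # e.g. a Software Instance
    evidence_node: str   # e.g. a discovered process or file
    inference_type: str  # one of INFERENCE_TYPES

    def __post_init__(self):
        if self.inference_type not in INFERENCE_TYPES:
            raise ValueError(f"unknown inference type: {self.inference_type}")

# A Tomcat SI inferred primarily from a discovered java process, with a
# configuration file contributing further attributes:
links = [
    Provenance("SI:Apache Tomcat", "Process:java", "Primary"),
    Provenance("SI:Apache Tomcat", "File:server.xml", "Contributor"),
]
primary_evidence = [p.evidence_node for p in links if p.inference_type == "Primary"]
```

Following the Primary links back from an inferred node is exactly the verification step that provenance enables: the evidence that caused the node to exist.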
Engine. However, it is not the intention that end users should need to modify these rules. The Software Instance and Business Application Instance creation processes are now part of Reasoning rather than discrete components and are controlled using the Pattern Language.
Scan level
BMC Atrium Discovery provides the following default scan levels:

Sweep Scan: Performs a sweep scan, trying to determine what is at each endpoint in the scan range. It attempts to log in to each device to determine the device type.
Full Discovery: Retrieves all the default information for hosts, and infers all of the information that it can.

You can choose any scan level when you are adding a discovery run. If you do not choose a scan level, the appliance default (which is configurable) is used.
Host rescan
The Host page has a Scan Now button. When you click Scan Now, all of the host's IP addresses are shown, but the host is scanned through the IP address determined to be the best one for that host.
Provenance
When you are viewing a node representing an inferred entity in the UI, you can always access the information that was used to create the entity. Click the Show Provenance button to view the provenance information. This displays an additional column showing, for each attribute, how recent the information is and the discovery method that obtained it. It also displays an additional row containing details of the primary data provenance. For example, on a host, 'latest scan From DeviceInfo' means that the attribute was updated at the latest scan with information obtained from the getDeviceInfo discovery method. In the Reports section of the Discovery page, there is a report on hosts with attribute retrieval failures. This lists attributes, and the hosts they belong to, that were not updated at the most recent scan. See Discovery reports.
Pattern Management
The Package Management page enables you to upload packages to the appliance and activate or deactivate packages that are stored on the appliance. You can also download packages from the appliance. Packages are:

TPL files, which contain one pattern module.
Zip archives of one or more TPL files.

See Pattern management for more information.
associated IP addresses). However, many attributes are defined as strings with no size limit, allowing very flexible data definitions. Attributes can have one or more of the following flags:

Required: Specifies that an attribute is mandatory when data is created. Required attributes are marked by an asterisk in the user interface.
Expected or Optional: Defines whether or not an attribute affects the Data Quality of an object. Note that unless the attribute is also marked Required, an Expected attribute is not mandatory.
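A rough sketch of how these flags might be applied when validating a node's attributes. The function name, flag spellings, and the idea of Data Quality as a simple fraction are assumptions for illustration; the product's actual taxonomy machinery is not exposed this way.

```python
# Hypothetical attribute-flag check: Required attributes are mandatory,
# while Required and Expected attributes both count towards Data Quality.
def check_attributes(attrs, schema):
    """Return (missing_required, data_quality) for one node.

    schema maps attribute name -> 'required', 'expected' or 'optional'.
    """
    missing_required = [name for name, flag in schema.items()
                        if flag == "required" and name not in attrs]
    counted = [name for name, flag in schema.items()
               if flag in ("required", "expected")]
    present = sum(1 for name in counted if name in attrs)
    data_quality = present / len(counted) if counted else 1.0
    return missing_required, data_quality

schema = {"name": "required", "os_version": "expected", "notes": "optional"}
missing, quality = check_attributes({"name": "host1"}, schema)
```

Here the node supplies its required name but not the expected os_version, so nothing mandatory is missing, yet the quality score is reduced; the optional notes attribute affects neither.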
Acquiring data
The data used by BMC Atrium Discovery can be acquired from the following data sources:

Discovery: Discovery of your infrastructure is the best way of acquiring information about the hardware in your system and the programs running on it. For details, see The Discovery Engine.
The Pattern Language: Data is created using the Pattern Language. Patterns trigger on certain conditions and can create data that models different systems and configurations (for example, Solaris Zones and Virtual Machines). For more information about managing patterns, see Pattern management.
Import: Data about your current systems and business can be imported from existing systems, such as Asset Management or Inventory systems, and from databases.
Administration
Delivery
Virtual Appliance
The BMC Atrium Discovery solution is supplied as a virtual appliance. Except for installation, all interaction with a virtual machine-based appliance is identical to the hardware appliance. See Virtual Appliance for more information.
Kickstart DVD
The kickstart DVD can be used to install BMC Atrium Discovery on customer-supplied hardware. See Installing BMC Atrium Discovery for more information. See also hardware platforms for more information about the hardware onto which you can install BMC Atrium Discovery using the kickstart DVD.
Physical Appliance
The BMC Atrium Discovery solution was previously supplied as a physical appliance with all software pre-installed, including the BMC Atrium Discovery application.
Glossary
This section provides a glossary of terms.
application map
A dynamic, automatically maintained representation of application structures in your environment. An effective application map identifies the key relationships between how your business operates and the infrastructure that supports it. It also becomes the initial, crucial part of Service Impact Analysis by maintaining accurate service models for BSM.
blackout window
A configuration that enables you to prevent access to the CMDB during sensitive times. During a blackout window, all processing of the CMDB synchronization queue is paused, and processing cannot resume until the window ends.
bonding
Enables you to join two NICs as a single physical device so that they appear as one interface. This is usually performed to provide failover capabilities or load balancing. Also known as teaming.
command-line utility
A tool that you can run on a command-line interface to configure BMC Atrium Discovery by obtaining information from specific systems.
component
A general term that is used to mean one part of something more complex. For example, a computer system may be a component of an IT service, and an application may be a component in an application server.
consolidation
The playback of scanned data from multiple scanning appliances to a single consolidation appliance.
data aging
Discovered data is regarded as valid at the time of its last successful scan. The nature of IT infrastructure means that frequent, minor changes to configurations, hosts, and software are common. Consequently, discovered data can be regarded as becoming less current with the passing of time. In BMC Atrium Discovery, when data passes a certain configurable aging threshold, it is destroyed.
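The aging rule can be sketched as a simple threshold check on the time since the last successful scan. The 30-day threshold below is an assumed example value, not the product default; the real threshold is configurable on the appliance.

```python
from datetime import datetime, timedelta

# Assumed example threshold; the real value is configurable in the product.
AGING_THRESHOLD = timedelta(days=30)

def has_aged_out(last_successful_scan, now, threshold=AGING_THRESHOLD):
    """True once discovered data passes the aging threshold."""
    return now - last_successful_scan > threshold

now = datetime(2012, 10, 31)
stale = has_aged_out(datetime(2012, 9, 1), now)    # last seen 60 days ago
fresh = has_aged_out(datetime(2012, 10, 20), now)  # last seen 11 days ago
```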
datastore
All data used by the BMC Atrium Discovery system is held in an object database. The datastore treats data as a set of objects and the relationships between them.
Discovery
The part of the BMC Atrium Discovery system that communicates with host systems, and obtains information from them. Discovery is driven by
Reasoning, which infers detailed information about hosts and programs and populates the datastore. See also Reasoning Engine.
Discovery endpoint
The endpoint of a single Discovery access; that is, the IP address of the discovery target.
Discovery Run
A scan of one or more Discovery endpoints, specified as an IP address or a range of addresses, that are scanned as an entity. For each Discovery Run, a node is created that records information such as the user who started the run, the start and end time, and so forth.
event
A change or action that affects the discovery process, such as a software instance that was created or updated. In BMC Atrium Discovery, the Rules Engine (ECA Engine) executes rules in response to events.
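The Event-Condition-Action idea can be sketched in a few lines: each rule pairs a condition with an action, and the engine runs every action whose condition matches an incoming event. All names below are illustrative assumptions, not the product's rule API.

```python
# Minimal Event-Condition-Action loop.
rules = []

def rule(condition):
    """Register an action to run when an event satisfies the condition."""
    def register(action):
        rules.append((condition, action))
        return action
    return register

log = []

@rule(lambda e: e["kind"] == "SoftwareInstance" and e["change"] == "created")
def record_new_si(event):
    log.append("new SI: " + event["name"])

def dispatch(event):
    for condition, action in rules:
        if condition(event):   # Condition
            action(event)      # Action

dispatch({"kind": "SoftwareInstance", "change": "created", "name": "Apache Tomcat"})
dispatch({"kind": "Host", "change": "updated", "name": "host1"})
```

Only the first event matches the registered condition, so only it triggers the action; the host update passes through without effect.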
functional component
A node created by patterns based on Functional Component Definitions. The functional component is a single block of information that combines similar functionality into logical groups that help application owners and data consumers discover applications at discovery time.
host
A node in the model which represents a physical or virtual computer system including information about its operating system and its physical or virtual hardware. A host is sometimes referred to as an OSI (Operating System Instance). See Host Node.
ID
A unique identifier for a node (also known as a Node ID). For a BMC Atrium Discovery node, an internal identifier that is used as an index by the database. It is a binary identifier represented in hexadecimal format. An ID is not intended to be human-readable; it is designed to be used by the datastore (for example, 4e4fd2c2ae4ccf123272d8446e486f7374). It identifies a stored node, not the item that the node represents. If the node corresponding to an entity is destroyed, and a new node is subsequently created for it, the new node will have a different ID, but it will have the same key.
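The ID-versus-key distinction can be sketched as follows. The Node class and its fields are hypothetical stand-ins for illustration, not the datastore's real interface.

```python
import os

class Node:
    """Hypothetical node: the key identifies the entity and persists, while
    the ID is a fresh internal identifier minted each time a node is stored."""
    def __init__(self, key):
        self.key = key
        self.id = os.urandom(16).hex()  # 16 random bytes, hex-encoded

host_v1 = Node("Host/serial-12345")
# ...the node is destroyed, the entity is rediscovered, a new node is created...
host_v2 = Node("Host/serial-12345")
# host_v1.key == host_v2.key, but the two nodes have different IDs
```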
inferencing
The act of drawing conclusions about data based on what is known about other data.
key
A unique identifier for the entity that a node represents. Unlike the node ID, the key of a node is persistent.
kind
The type of a node, such as Host, Application Version or Person. Also referred to as Node Kind.
lifecycle
The stages from when an entity comes into existence to when it no longer exists. For nodes in the BMC Atrium Discovery model, the lifecycle stages are:

Current: Nodes that exist in the model; BMC Atrium Discovery has evidence that they currently exist in your environment.
Aging: Nodes that exist in your model, but which represent entities that BMC Atrium Discovery has not detected in a certain period of time, and which are being 'aged out' of the model. BMC Atrium Discovery does not always age entities that it cannot identify over a period of time.
Destroyed: Nodes that have been marked as destroyed (yet remain in the model).
Purged: Destroyed nodes that have been purged from the model. Purging a node removes it from the datastore; it no longer exists in the model.
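The four stages can be modelled as states with a transition map. The map below is an illustrative assumption drawn from the stage descriptions (an aging node may be re-detected or aged out; only destroyed nodes can be purged), not a statement of exact product behaviour.

```python
from enum import Enum

class Lifecycle(Enum):
    CURRENT = "Current"
    AGING = "Aging"
    DESTROYED = "Destroyed"
    PURGED = "Purged"

# Assumed transitions inferred from the stage descriptions above.
ALLOWED = {
    Lifecycle.CURRENT: {Lifecycle.AGING, Lifecycle.DESTROYED},
    Lifecycle.AGING: {Lifecycle.CURRENT, Lifecycle.DESTROYED},  # re-detected, or aged out
    Lifecycle.DESTROYED: {Lifecycle.PURGED},
    Lifecycle.PURGED: set(),  # purged nodes are gone from the datastore
}

def advance(state, target):
    if target not in ALLOWED[state]:
        raise ValueError(f"cannot move from {state.value} to {target.value}")
    return target
```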
logical host
A hardware or software host that is contained in a virtual machine (software), a collaborating host in a cluster (hardware) or a blade in a blade server (hardware).
node
An object in the BMC Atrium Discovery datastore that represents an entity in the environment. Nodes have a kind, such as 'Host', and a number of named attributes. Nodes can be connected to other nodes using relationships. Most node kinds have a key that uniquely identifies the entity in the environment.
node ID
See ID.
node kind
The type of a node, such as a Host or Software Instance. The default set of nodes and their associated attributes and relationships are defined in the BMC Atrium Discovery taxonomy.
ontology
BMC Atrium Discovery's Knowledge Library of hardware and software products, vendors, technologies, infrastructure, and so forth.
pattern
In BMC Atrium Discovery, a unit of Pattern Language (TPL) code that creates and maintains the model. Each pattern in TPL has a corresponding pattern node in the model, which is related to the nodes that the pattern maintains. Patterns are used to extend the functionality of the reasoning engine.
production hours
The core business hours corresponding to a specific time zone. Typically, you schedule Discovery scans during non-production hours to avoid any potential impact on BMC Atrium Discovery end users or on critical target systems when they are busiest, and schedule any CMDB synchronization blackout windows to avoid impacting the AR System and CMDB end users.
provenance
Meta-information describing how inferred information came to exist. It is generated as Reasoning builds and maintains the model. Provenance information is stored as relationships in the model.
Reasoning Engine
An event-based engine that orchestrates and drives the population of different parts of the data model through a series of rules that make up the core functionality of the BMC Atrium Discovery product. It is extensible through the use of patterns.
relationship
The way that objects are associated with each other. Relationships are non-directional, and are defined by the roles represented by each object. They are stored in the datastore in the format Node:Role:RelationshipLink:Role:Node.
relationship link
The connection between two roles in a relationship.
RemQuery
A utility that enables you to execute commands on remote Windows hosts in a similar way to the commercial PsExec utility. When BMC Atrium Discovery requests a discovery action using the RemQuery utility, RemQuery copies a binary (itself) to the ADMIN$ share on the target system, and then installs and runs that binary as a service. Each of these steps requires Local Administrator permissions. The service is then used to execute the discovery scripts. At the end of the scan, the service is stopped and uninstalled, but the executable is left in the ADMIN$ share. If a copy already exists, it is not copied again.
removal
The concept of taking data out of the model using one or more of the BMC Atrium Discovery lifecycle methodologies (Aging, Destroyed or Purged).
role
The responsibility or actions of the relationship between two nodes. A node with a relationship to another node acts in a role in the relationship, which indicates its part of the relationship. For example, in a 'Dependency' relationship, one node has the role 'Dependant' and the other has the role 'DependedUpon'.
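Putting the relationship, relationship link, and role entries together, the Node:Role:RelationshipLink:Role:Node storage format can be sketched as follows, using the Dependency example from this glossary. The type and node names are illustrative, not the datastore's real types.

```python
from collections import namedtuple

# One stored relationship: two nodes, the role each plays, and the link kind.
Relationship = namedtuple("Relationship", "node_a role_a link role_b node_b")

def as_stored(rel):
    """Render a relationship in Node:Role:RelationshipLink:Role:Node form."""
    return ":".join(rel)

# The Dependency example: one node is the Dependant, the other is DependedUpon.
dep = Relationship("AppServerSI", "Dependant", "Dependency", "DependedUpon", "host1")
```

Because the relationship is non-directional, the roles rather than the ordering carry the meaning: swapping the two nodes while swapping their roles describes the same association.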
Rules Engine
Another term used to describe the Reasoning Engine. The Rules Engine processes the rules that are generated from Patterns, in order to maintain the model. The Rules Engine is an Event Condition Action (ECA) engine.
rules
Small fragments of executable code that run in the Rules Engine in BMC Atrium Discovery. Rules are generated from patterns when they are activated. Additional core rules are distributed with BMC Atrium Discovery.
scanner file
A plain text file that is used to simulate the discovery of a system that is unreachable, or one that you are not permitted to scan. You create a scanner file by running the standard discovery commands on a host and saving the output. Only the standard discovery commands are run on the host; information that is discovered by patterns is not available.
seed data
In Collaborative Application Mapping, a small sample of host names or component names that are involved in the application. The goal of finding seed data is to provide just a few pieces of information to the application owner (typically communicated through e-mail or instant message) that serve as clues to help determine where to start investigating.
Software Instance
A Software Instance (SI) Node represents an instance of an off-the-shelf software product or an equivalent proprietary item of software. It can correspond to one single running process or to a group of processes (possibly on multiple hosts). A Software Instance often corresponds to a licensable entity. Where possible, the versions of Software Instances are retrieved and stored. Software Instances are created, maintained, and destroyed by patterns.
taxonomy
The template defining the nodes, attributes, and relationships used by BMC Atrium Discovery and stored in the datastore. The BMC Atrium Discovery taxonomy also defines how much of the data model is represented in the user interface.
total duration
The time it takes to discover and process the data from the target (the duration between the start and end times). See also Session Establishment Duration and Total Discovery Duration.
trigger
The conditions under which a pattern executes. Triggers correspond to the creation, confirmation, modification, or destruction of a node.
Windows proxy
A discovery proxy that is installed on a Windows system, on which the discovery process is controlled by a Linux-based appliance (known as the master).
Legal Information
Copyright 2012 BMC Software, Inc. BMC, BMC Software, and the BMC Software logo are the exclusive properties of BMC Software, Inc., are registered with the U.S. Patent and
Trademark Office, and may be registered or pending registration in other countries. All other BMC trademarks, service marks, and logos may be registered or pending registration in the U.S. or in other countries. All other trademarks or registered trademarks are the property of their respective owners. The information included in this documentation is the proprietary and confidential information of BMC Software, Inc., its affiliates, or licensors. Your use of this information is subject to the terms and conditions of the applicable End User License agreement for the product and to the proprietary and restricted rights notices included in the product documentation. BMC Software, Inc. 5th Floor 157-197 Buckingham Palace Road London SW1W 9SP United Kingdom Tel: +44 (0) 800 279 8925 Email: customer_support@bmc.com Web: http://www.bmc.com/support
Table of Contents
1 Table of Contents 2 Release Notes 2.1 New in this version 2.2 Out of the box discovery capabilities 2.3 Important information for users of BMC Atrium CMDB 2.4 Documentation changes since release 2.5 Limitations and restrictions of this version 2.6 Defects resolved in this version 2.7 Known defects in this version 2.8 Installation, migration, and upgrade overview 2.8.1 Sizing guidelines 2.8.2 Installing BMC Atrium Discovery 2.8.3 Installing the virtual appliance 2.8.4 Upgrading to version 9.0 2.8.5 Migrating from version 8.3.x to 9.0 2.8.6 Migrating from version 7.5 to 9.0 2.9 Model changes 2.10 Changes to Discovery Commands 2.11 Windows proxy compatibility matrix 2.12 Appliance specification summary 2.13 Changes to third party software license terms 2.14 Package list - BMC Atrium Discovery 9.0 2.15 Supported versions of BMC Atrium Discovery 2.16 BMC Atrium Discovery copyright information 3 Legal Information
Release Notes
This set of Release Notes pertains to version 9.0 of BMC Atrium Discovery and Dependency Mapping (BMC Atrium Discovery).
Important information about this release

BMC Atrium Discovery 9.0 is a major change from version 8.3.x. You should be aware of the following changes, which cause major behavioral changes in the product:

Moving from BMC Atrium Discovery 8.3.x to version 9.0 is not the same as previous upgrades. While the upgrade is straightforward, there is an important decision to be made. You must read the Installation, migration, and upgrade overview to determine which path to take.
Customer-installed software on the appliance may no longer work, as the number of packages in version 9.0 on RHEL 6 has been reduced.
Consolidation does not work between versions 8.3.x and 9.0.
Some root node kind subtype attributes have been deprecated, for example, device_type on Printer. You should start using type instead. A summary is provided here.
Some non-TKU patterns must be reviewed due to model changes. These are highlighted in the upgrade.
Some custom reports must be reviewed due to model changes. These are highlighted in the upgrade.
Regular expressions in credentials are no longer supported, with the exception of '.*'. Any credentials of this type must be fixed manually. An example is shown here.
Support for Microsoft Internet Explorer 7 has been deprecated.
32-bit kickstarts, virtual appliances, and upgrades are no longer provided. The migration tools provided will enable you to migrate to a 64-bit appliance.
The following features do not work fully in version 9.0 on Red Hat Enterprise Linux 5 (all features work fully on RHEL 6). Thus, if you choose to upgrade your BMC Atrium Discovery 8.3.2 appliance to version 9.0 (as opposed to deploying a new version 9.0 appliance and migrating your data to it), the following features will not be available: scanning over IPv6, and running in FIPS-compliant mode.
The CMDB export adapter in the exporter has been removed after being deprecated in version 8.2. You should use CMDB synchronization instead.
All other export adapters (JDBC, CSV) are still present, working, and supported.
The Virtual Appliance is now supplied in OVF format. Note, however, that only VMware virtualization products are supported.
Windows workstations are now excluded by default, even after an upgrade. You need to change an option to revert to the previous behavior.

Note that BMC recommends that you migrate rather than upgrade. Migrating is required, either in version 9.0 or in the next version of BMC Atrium Discovery. Future releases will only provide an upgrade from version 9.0 running on RHEL 6.
Version history
BMC Atrium Discovery 9.0 was released on 31 October, 2012. BMC Atrium Discovery 9.0 SP1 was released on 13 March 2013.
Release Notes
These Release Notes detail the following information:

New in this version
New in version 9.0 SP1
New in version 9.0 SP2
Out of the box discovery capabilities
Important information for users of BMC Atrium CMDB
Documentation changes since release
Limitations and restrictions of this version
Defects resolved in this version
Known defects in this version
Installation, migration, and upgrade overview
Model changes
Changes to Discovery Commands
Windows proxy compatibility matrix
Appliance specification summary
Changes to third party software license terms
Package list - BMC Atrium Discovery 9.0
Package list - BMC Atrium Discovery 9.0 SP1
Package list - BMC Atrium Discovery 9.0 SP2
Supported versions of BMC Atrium Discovery
Licensing entitlement
BMC Atrium Discovery copyright information
This section describes the new features and enhancements introduced in version 9.0 of BMC Atrium Discovery, which was released in October 2012. If you are new to BMC Atrium Discovery, BMC recommends that you refer to Getting started as an introduction to using the product.
IPv6
This release introduces support for discovery using IPv6. The BMC Atrium Discovery system can run in an IPv6-only or dual-stack (IPv4/IPv6) environment, perform discovery over IPv6, and retrieve IPv6 information from discovery targets.

Network interface information: The display of network information in the user interface has been improved to support IPv6 and complex configurations such as bonded interfaces. See the Host page for more information.
Pill user interface: A new pill UI has been introduced, which converts text input into discrete objects representing endpoints that you can then manipulate and edit in place using in-line controls. The pill UI has been incorporated into many of the areas where IP addresses or ranges are entered, including the Add New Run dialog.

Note that various access methods and discovery platforms do not support IPv6. These include:

Mainframe discovery: the MainView infrastructure on which discovery relies does not support IPv6.
VMware discovery over the vSphere API.
Windows Server 2003 and older. To discover over IPv6, the proxy must be running on Windows Server 2008 or newer.
UnixWare discovery.
Extended database discovery using the jTDS JDBC driver.
Any discovery via JMX.
Discovery
This release improves host and device discovery significantly. You can now discover a much wider range of SNMP devices and even create rules to recognize previously unrecognized devices.
SNMP recognition rules: A new SNMP recognition rule UI enables you to create recognition rules for unrecognized sysObjectIDs. Recognition rules lead to inferred nodes being created when you subsequently scan such devices. This enables you to quickly populate your CMDB with, for example, devices not yet officially supported through the TKU. Not all devices can be comprehensively discovered in this way; in such cases, proper support requires a device definition to be created as part of the usual monthly TKU updates.
Enable and disable credentials: You can now enable or disable individual credentials. When you view matching credentials, disabled credentials are ignored. You can no longer specify IP ranges for credentials using regular expressions. Credentials that use regular expressions and are brought over from previous versions, by either upgrade or migration, are converted where possible, or disabled.
Solaris 11 support: The system now supports discovery of Solaris 11.
Network device coverage greatly increased: The number and type of supported network devices has been increased by over 1500. Some non-network infrastructure, such as printers and UPS devices, was previously identified as network devices; it is now identified as Printers and SNMP managed devices.
Exclude desktop hosts: Desktop hosts are no longer discovered by default. You can set a discovery option to continue creating desktop hosts in the inferred model.
Proxy improvements: New proxies created using the proxy manager prompt you to register with the appliance from which the proxy manager was downloaded.
Pool data no longer written by default: Creating pool data imposes considerable overhead on the system and is rarely needed.
It is no longer written by default, though it is still created when running the appliance in record mode. Pool data can be enabled for normal discovery if it is required and the performance penalty is acceptable. Contact BMC Customer Support for information on enabling pool data. Note, however, that some pool data must still be written for Windows discovery; getInfo and getInfo.meta are created for each Windows discovery target.
Package vendor information: Package node attributes have been enhanced, including the addition of a package epoch attribute. The following attributes have been added on these platforms:
Linux: vendor, description, and epoch
Solaris: vendor
HPUX: vendor
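The SNMP recognition rules described above amount to a lookup from a sysObjectID to a device kind. The sketch below is an illustrative assumption about how such a lookup might work; the OID prefixes are real enterprise prefixes (Cisco and HP) used only as examples, and the fallback to SNMPManagedDevice is this sketch's assumption based on the node kind introduced in this release:

```python
# Hypothetical recognition table: sysObjectID prefix -> inferred device kind.
RECOGNITION_RULES = {
    "1.3.6.1.4.1.9.": "NetworkDevice",   # 1.3.6.1.4.1.9 is the Cisco enterprise OID
    "1.3.6.1.4.1.11.": "Printer",        # 1.3.6.1.4.1.11 is the HP enterprise OID
}

def recognize(sys_object_id):
    """Return the device kind for a sysObjectID, or a generic fallback."""
    for prefix, kind in RECOGNITION_RULES.items():
        if sys_object_id.startswith(prefix):
            return kind
    return "SNMPManagedDevice"  # assumed fallback for unrecognized devices
```

In this sketch, scanning a device whose sysObjectID matches no rule yields the generic kind rather than being dropped, mirroring the idea that unrecognized SNMP devices still appear in the inferred model.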
Model
The model has been changed significantly to cater for IPv6, complex network configurations, and SNMP managed devices. The model has also been made more consistent, harmonizing the approach taken across root nodes.
Network model changes: The network model has been changed to cater for IPv6 and complex configurations such as bonded interfaces. In previous versions of BMC Atrium Discovery, a NetworkInterface node was created for each interface/IP address pair; consequently, where an interface had more than one IP address, multiple NetworkInterface nodes were created. With IPv6, where each interface has three IP addresses by default (IPv4, IPv6 global, and IPv6 link local), the older approach is inefficient. Similarly, modeling bonded interfaces requires multiple NetworkInterface nodes with the same IP address. The new model introduced with BMC Atrium Discovery version 9.0 removes IP address information from the NetworkInterface node and introduces a separate IPAddress node. To
model an interface using IPv6, a single NetworkInterface node is used, with relationships to three IPAddress nodes: one each for the IPv4 address, the IPv6 global address, and the IPv6 link local address. To model a bonded interface, multiple NetworkInterface nodes are used, each with a relationship to the single shared IPAddress node.
The IPAddress node has been added. Subnet nodes are now linked to IPAddress nodes rather than NetworkInterface nodes.
The PortInterface node has been removed. Attributes previously held on the PortInterface node, such as speed, duplex, negotiation, and MAC address, are now part of the NetworkInterface node.
SNMP managed devices: A new node type called SNMPManagedDevice represents a device that BMC Atrium Discovery does not have sufficient knowledge of to fully discover, or that it may be inappropriate to discover (for example, a UPS). It may be a switch, router, or SNMP-enabled coffee machine for which support has not yet been implemented.
Attribute changes: In previous versions of BMC Atrium Discovery, the type attributes on root nodes (Host, NetworkDevice, Printer, and MFPart) were inconsistent. As part of the model changes introduced in BMC Atrium Discovery 9.0 these have been harmonized, including the addition of a new root node kind, SNMPManagedDevice. For each root node kind, the table below shows the previous type attribute, whether it is retained, the new harmonized type attribute, and what happens at upgrade or migration.
Host: previous attribute host_type; in 9.0, host_type (deprecated) is retained unchanged at upgrade, and a new type attribute is copied from host_type. Note: both attributes are populated.
NetworkDevice: previous attribute device_type; in 9.0, device_type (deprecated) is retained unchanged at upgrade, and a new type attribute is always set to "Network Infrastructure" (this is used on device_summary). Note: both attributes are populated.
Printer: previous attribute type; in 9.0, this is renamed to device_type (the device_type attribute was not previously in the taxonomy for Printer).
SNMPManagedDevice: new node kind at this release, with a type attribute.
Deprecated attribute detection: A new utility, tw_tax_deprecated, is provided to detect attributes deprecated by the changes described in the table above. It runs as part of the upgrade but can also be run as a standalone utility.
DDD aging logic improved: The logic determining how Directly Discovered Data (DDD) is aged has been changed. In the user interface, the list of DiscoveryAccesses on inferred entities now displays a trash can icon next to DiscoveryAccesses that are about to be deleted, and the corresponding DiscoveryAccess page shows a banner stating that the node is to be removed shortly as part of DDD aging. The inferred entities are: Host, Network Device, Printer, SNMP Managed Device, Mainframe, and MF Part. DDD aging is faster than in previous releases and has a lower impact on scanning performance. Consider reviewing existing DDD removal blackout windows when upgrading or migrating to this version.
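The network model changes described above can be sketched as a pair of node types linked by relationships. The class names mirror the node kinds from the text, but the structure is an illustrative assumption rather than the product's internal representation:

```python
from dataclasses import dataclass, field

@dataclass
class IPAddress:
    ip_addr: str  # e.g. "10.0.0.5" or "2001:db8::5"

@dataclass
class NetworkInterface:
    name: str
    mac_addr: str
    # Relationships to IPAddress nodes (replacing per-address interface copies):
    ip_addresses: list = field(default_factory=list)

# One IPv6-era interface relates to three IPAddress nodes:
# IPv4, IPv6 global, and IPv6 link local.
eth0 = NetworkInterface("eth0", "00:11:22:33:44:55",
                        [IPAddress("10.0.0.5"),
                         IPAddress("2001:db8::5"),
                         IPAddress("fe80::211:22ff:fe33:4455")])

# Bonded interfaces: multiple NetworkInterface nodes share one IPAddress node.
shared = IPAddress("10.0.0.9")
bond_members = [NetworkInterface("eth1", "00:11:22:33:44:56", [shared]),
                NetworkInterface("eth2", "00:11:22:33:44:57", [shared])]
```

The key design point the sketch shows is that an address is now a node in its own right, so it can be shared by several interfaces (bonding) or multiplied per interface (dual stack) without duplicating interface nodes.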
Datastore
The datastore has been updated considerably for this release, including work in the following major areas:
Datastore refactoring: The datastore has been substantially refactored to prepare for the larger scale deployments that may be supported in future releases. Upgrades take some time as the datastore re-indexes its contents. During re-indexing, progress messages are written to tw_svc_model.log, enabling you to track progress; the upgrade also provides an estimate of how long it will take. The datastore now supports phrase searches: in the quick-search box, enter a phrase in quotes (such as "apache webserver") to find nodes containing that specific phrase.
New offline compaction tool: A completely new offline compaction tool takes care of all the steps required to compact the datastore. Previously, it was necessary to perform a series of steps before and after the actual compaction, and data loss could occur if these steps were not followed exactly.
Page 16
DDD aging improvements: DDD aging has been re-engineered as part of the datastore refactor and the model changes to further reduce the impact of removing eligible DDD nodes. Improved housekeeping: You can specify that history and destroyed nodes are purged automatically. By default, history is not purged and destroyed nodes are purged after one year.
TPL
TPL validation: TPL now validates traversals (that is, TRAVERSE, STEP IN, STEP OUT) in searches, key expressions in searches and TPL, and the traverse expressions in sync mappings, against the taxonomy. Because the taxonomy does not include all possible relationships, only the individual parts of a traversal are checked. Where parts of a traversal are not present in the taxonomy, an error is reported. This may flag errors in existing patterns.
Appliance
The change in underlying operating system means that the installation, migration, and upgrade paths require more consideration than before, though the individual procedures are simpler. You must read the Installation, migration, and upgrade overview and the individual procedures before attempting any installation, migration, or upgrade.
Red Hat Enterprise Linux 6: New installations of BMC Atrium Discovery now run on Red Hat Enterprise Linux (RHEL) 6.
Upgrades: You can upgrade a BMC Atrium Discovery 8.3.x appliance to BMC Atrium Discovery 9.0 running on RHEL 5, which does not support discovery over IPv6 (it supports discovery of IPv6 data over IPv4).
Migration: You can migrate a BMC Atrium Discovery 8.3.x appliance to BMC Atrium Discovery 9.0 running on RHEL 6.
Appliance backup: A new appliance backup utility has been introduced. It enables you to create a backup of the appliance locally, on a remote server over ssh, or on a Windows share. The utility shuts down the services before taking the backup and restarts them afterwards. It replaces the snapshot feature available in previous releases of BMC Atrium Discovery, and improves on it by allowing you to take a backup even if you have minimal disk space available.
Disk management: A disk management UI has been added, which enables you to use new disks that have been added to your BMC Atrium Discovery installation. You can move data and add swap space to the new disks.
Usage data collection: To help BMC better understand the ways in which BMC Atrium Discovery is used in customer environments, the Usage Data Collection feature enables submission of anonymous usage data to BMC. You can configure it on your appliance; by default it is not enabled.
Some of my attributes are missing! Some attributes may not be discovered on a given operating system in your environment because of privilege requirements, hardware and software configurations, customizations, firewall configurations, security policies, and so forth. Additional attributes may be discovered by patterns, whether provided by the TKU or your own custom patterns.
Depending on the operating system and access method, the following attributes may be discovered for each node kind:
Host: os_arch, os_build, os_class, os_edition, os_type, os_vendor, os_version, pdc, processor_speed, processor_type, psu_status, ram, serial, service_pack, threads_per_core, uuid, vendor, workgroup, zone_uuid, zonename
Patch: name
Package: arch, description, epoch, name, pkgname, revision, vendor, version
NetworkInterface: adapter_type, aggregated_by, aggregated_with, aggregates, aggregation_mode, bonded, default_gateway, description, dhcp_enabled, dhcp_server, dns_servers, duplex, group_name, ifindex, interface_type, is_console, mac_addr, name, negotiation, physical_adapters, physical_location, ppa, primary_wins_server, secondary_wins_server, service_name, setting_id, shared_adapters, speed, virtual_adapters
IPAddress: broadcast, ip_addr, netmask, prefix, prefix_length, temporary, zone
FibreChannelHBA: aix_manufacturer_code, aix_part_number, boot_bios, driver_name, driver_version, firmware, hba_id, model, option_rom_version, optrom_bios_version, optrom_efi_version, optrom_fcode_version, optrom_fw_version, serial
FibreChannelNode: firmware, wwnn
FibreChannelPort: fabric_name, port_speed, port_state, port_type, supported_classes, supported_speeds, wwpn
SNMP devices
The number of SNMP devices that can be discovered by BMC Atrium Discovery increases with each monthly TKU release. This section contains information from Configipedia, which details the devices that can be discovered with the latest TKU installed; clicking the links in this section takes you directly into Configipedia. The following classes of SNMP device are supported: Network Devices, Printers, and SNMP Managed Devices. Monthly TKU releases may increase the SNMP device coverage.
Localization support
BMC Atrium Discovery supports the discovery of systems in any locale.
Interaction of BMC Atrium Discovery 9.0 with other BMC data providers
BMC Atrium Discovery 9.0 can co-exist with other BMC data providers. To prevent duplicated Configuration Items (CIs) in the BMC Atrium CMDB Asset dataset, BMC recommends the following minimum product versions when using other data providers in addition to BMC Atrium Discovery:
BMC Atrium CMDB 8.0 - no patches required
BMC Atrium CMDB 7.6.04 - no patches required
BMC Atrium CMDB 7.6.03 - no patches required
BMC Atrium CMDB 7.6 - recommended Patch 001 and Hot Fix for defect SW00355736
BMC Atrium CMDB 7.5 - recommended Patch 004 and Hot Fix for defect SW00355736
BladeLogic - recommended BladeLogic Server Automation 8.0 SP2 and Hot Fixes for SW00355142 and QM001646701 (see below for details)
BMC Performance Manager (BPM) and BMC Performance Assurance (BPA) - recommended BMC Performance Manager Portal 2.7.00.040 (see below for details)
BMC BladeLogic Client Automation (BBCA) - recommended BMC BladeLogic Client Automation 8.1.01 (see below for details)
If you are using BBCA, be aware that there are two known issues that are fixed in BMC BladeLogic Client Automation 8.1.01. See the table below for exact details.
SW00355736: The standard Atrium CMDB reconciliation rules for BMC_ComputerSystem do not cope with the situation where one tool gets the serial number and another does not, but Hostname, Domain, isVirtual, and Primary Capability match. In this situation, duplicate BMC_ComputerSystem CIs will be created. The standard Atrium CMDB reconciliation rules for BMC_Application, BMC_SoftwareServer, and BMC_ConnectivitySegment have a fallback rule based only on the Type attribute; these rules should be based on Name. The impact is that duplicate BMC_Application, BMC_SoftwareServer, and BMC_ConnectivitySegment CIs will be created during reconciliation with BMC Atrium Discovery.
Fix: Download the Hot Fix for defect SW00355736 (7.5 Patch 004 Hot Fix available; 7.6 Patch 001 Hot Fix available now): Atrium 7.5 Patch 004; Atrium 7.6 Patch 001.
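The duplicate-CI problem described above comes down to how an identification key is built when one provider lacks the serial number. The sketch below is a hypothetical illustration of that failure mode; the attribute names follow the CMDB classes mentioned above, but the rule itself is an assumption for illustration, not the actual reconciliation engine:

```python
def identity_key(ci):
    """Hypothetical identification rule: prefer the serial number,
    fall back to (hostname, domain, isVirtual, capability)."""
    if ci.get("SerialNumber"):
        return ("serial", ci["SerialNumber"])
    return ("fallback", ci["HostName"], ci["Domain"],
            ci["isVirtual"], ci["PrimaryCapability"])

# Tool A discovers the serial number; tool B does not.
from_tool_a = {"SerialNumber": "ABC123", "HostName": "web01",
               "Domain": "example.com", "isVirtual": "No",
               "PrimaryCapability": "Server"}
from_tool_b = {"SerialNumber": None, "HostName": "web01",
               "Domain": "example.com", "isVirtual": "No",
               "PrimaryCapability": "Server"}

# The two records describe the same machine, but their keys differ,
# so a rule of this shape would create duplicate CIs.
keys_differ = identity_key(from_tool_a) != identity_key(from_tool_b)
```

The hot fix referenced above addresses exactly this class of mismatch by making the rules tolerate a missing serial number on one side.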
SW00348852, SW00352573: An additional step is required if your Atrium CMDB instance is earlier than 7.5 Patch 004 or 7.6 Patch 001. See Modifying the Reconciliation Rules for more details.
After you have performed these steps, you can start synchronizing BMC Atrium Discovery data to BMC Atrium CMDB.
Performance considerations
To obtain the maximum synchronization performance when using CMDB synchronization with BMC Atrium Discovery version 8.2.03 and later, you should consider tuning the database that BMC Atrium CMDB (or BMC Remedy AR System) uses. See the BMC Remedy AR System Server 7.6 Performance Tuning for Business Service Management white paper for more information. The following sections of the white paper are particularly relevant: Tuning an Oracle server; Best practices for tuning Oracle database servers; Tuning a SQL Server database; Best practices for tuning SQL Server database servers.
BladeLogic
SW00355356: Serial number is a strong indicator of identity for BMC_ComputerSystem and is therefore used in reconciliation. On older Solaris SPARC platforms the serial number is not available, and BladeLogic would set the serial number to a value derived from the hostid of the BMC_ComputerSystem class. In this situation, duplicate BMC_ComputerSystem CIs would be created. Fix: Contact BMC Customer Support requesting BladeLogic Server Automation 8.0 SP2.
SW00355142: For Windows hosts, BladeLogic sets the Domain attribute of BMC_ComputerSystem to the Windows domain rather than the DNS domain. The Domain attribute forms part of the TokenID for a BMC_ComputerSystem CI, so duplicate BMC_ComputerSystem CIs will be created. Fix: Contact BMC Customer Support requesting the Hot Fix for defect SW00355142.
SW00355359: The isVirtual attribute is incorrectly set to No for physical systems. In this situation, duplicate BMC_ComputerSystem CIs will be created. Fix: Contact BMC Customer Support requesting BladeLogic Server Automation 8.0 SP2.
QM001646701: When the BladeLogic reconciliation job is executed after the BMC Atrium Discovery reconciliation job, the CIs associated with a merged BMC_ComputerSystem class (for example, BMC_Processor and BMC_IPEndpoint) are not merged, resulting in duplicate CIs. Fix: Contact BMC Customer Support requesting the Hot Fix for defect QM001646701.
QM001642175: The server name should be picked up from the FQ_HOST property rather than the Host property of its product. When a ComputerSystem is discovered by both BladeLogic and BMC Atrium Discovery, this issue can result in the ComputerSystem not being reconciled when the case of the hostname differs between the data from the two products. Fix: Contact BMC Customer Support requesting BladeLogic Server Automation 8.0 SP2.
BPM/BPA
QM001609002: BPM has no concept of virtual hosts, so when it sets the TokenID for a BMC_ComputerSystem it uses the format defined for physical hosts. Reconciliation of this data with data from other providers that are aware of virtualization will fail if you reconcile the BPM data after another discovery provider, resulting in duplicate CIs. Fix: BMC Performance Manager Portal 2.7.00.040.
BBCA
SW00355355: Serial number is a strong indicator of identity for BMC_ComputerSystem and is therefore used in reconciliation. On older Solaris SPARC platforms the serial number is not available, and BBCA would set the serial number to a value derived from the hostid of the BMC_ComputerSystem class. In this situation, duplicate BMC_ComputerSystem CIs would be created. Fix: Fixed in BMC BladeLogic Client Automation 8.1.01.
SW00355361: The isVirtual attribute is incorrectly set to No for physical systems. In this situation, duplicate BMC_ComputerSystem CIs will be created. Fix: Fixed in BMC BladeLogic Client Automation 8.1.01.
The following sections or pages have been removed from the documentation: None at this time.
NDD discovery interface support
Computer CIs do not always reconcile correctly
Record data should not be processed with tools that change line endings
WMI might report incorrect memory
WMI arguments may be truncated
Home directory of discovery user on target computer must not be read-only
Solaris 10 truncates process information for non-privileged users
Solaris 8 and ifconfig
Process information truncated in AIX
OpenVMS support
IP address change requires appliance restart
"<attrib> = None" construct in WHERE clause not supported
Processor type correctly reported only by non-srvinfo access methods
Disabling "Ping hosts" setting slows Discovery
AIX user password must be changed by user after creation by root
SNMP credential does not validate IP address key
Manual cron changes are overwritten
Do not run service tideway status as root
Search facility searches hidden attributes
Modifying standard reports
Visualization size limit (QM001711035)
Xen para-virtualized hosts discovery limitations (QM001744086)
IBM AIX machine serial number changes when moved from one host container to another (QM001775765)
Internet Explorer 10 browser support limitations
Tcpvcon version later than 2.34 cannot return port information (QM001716854)
To discover port information (getProcessToConnectionMapping) from computers running Windows 2000 or earlier, you must have version 2.34 of Tcpvcon installed on them. If a more recent version of Tcpvcon is installed on the target, you must replace it with version 2.34 to discover port information. Version 2.34 of Tcpvcon is shipped with the BMC Atrium Discovery Windows proxies. To replace the more recent version with version 2.34, perform the following steps:
1. On the computer running a Windows proxy, copy the version 2.34 tcpvcon.exe file from the following location: C:\Program Files\BMC Software\ADDM Proxy
2. On the target host, navigate to the location of the more recent tcpvcon.exe file and replace it with the version 2.34 file that you copied.
3. Run discovery again.
If a recent version of Tcpvcon is installed on a remote host, execution of the Tcpvcon command on the BMC Atrium Discovery appliance will fail and display the following timeout error message in the Windows proxy debug logs of the host:
RemQuery(): user = TSL\admintest: Timed out status = FAILURE
The timeout error is reported because recent versions of Tcpvcon require a GUI-based end-user license agreement (EULA) to be confirmed when the tool is run for the first time. If you confirm the EULA on the host, either manually or by using the accepteula switch, the Tcpvcon command is invoked successfully. However, because BMC Atrium Discovery does not support recent versions of Tcpvcon, parsing of the command output fails and the following error message is displayed in the log:
Failed to parse command output status = FAILURE
where /xx/xx/xx/xx is the IP address of the host. For more information on scanning hosts from scanner files and how to handle pool data, see Using scanner files.
Concurrent lock attempts can lock all users from editing the port scan settings
On the port scanning page, if one user locks the page for editing and another user subsequently tries to lock it, the second user's attempt fails. If the user who successfully locked the page cancels the operation and leaves the page, it remains locked for the unsuccessful user, and, on refresh, for the successful user too.
Third-party applications depending on Tideway security must be run after the security service has started
If third-party applications that depend on the Tideway Security Service are run before it has completed initialization, they will fail because they cannot validate users and permissions against the Security Service. Ensure that the Tideway Security Service has completed initializing before running third-party applications.
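A generic way to satisfy this ordering requirement is to poll until the service responds before launching the dependent application. The sketch below is an illustrative helper, not a BMC utility; the `check` callable (for example, an attempt to validate a session against the Security Service) is supplied by the caller:

```python
import time

def wait_for(check, timeout=60.0, interval=2.0):
    """Poll check() until it returns True or the timeout expires.

    Returns True if the check succeeded within the timeout, else False.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False
```

A third-party integration could call `wait_for(security_service_is_up)` at startup and abort with a clear error if it returns False, rather than failing with an opaque permission-validation error.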
Record data should not be processed with tools that change line endings
BMC Atrium Discovery stores record data in UNIX and DOS formats. UNIX format files have LF line endings, and DOS format files have CR LF line endings. If you process the record data with a tool that changes line endings, you will see exceptions in the Discovery logs.
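A safe way to inspect record data without altering it is to read the files in binary mode, which avoids the newline translation that text-mode tools perform. This is an illustrative sketch, not a BMC tool:

```python
def line_ending_style(data: bytes) -> str:
    """Classify raw record data as 'dos' (CR LF), 'unix' (LF only), or 'none'."""
    if b"\r\n" in data:
        return "dos"
    if b"\n" in data:
        return "unix"
    return "none"

# Reading in binary mode ('rb') preserves the original line endings;
# a text-mode read (or an editor that normalizes newlines) would not.
sample_dos = b"line one\r\nline two\r\n"
sample_unix = b"line one\nline two\n"
```

If such a check reports a style you did not expect, some tool in the pipeline has probably rewritten the line endings, which is exactly the condition that produces the Discovery log exceptions described above.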
Solaris 8 and 9
Patches have been rolled out that replicate this behavior on Solaris 8 and 9: Solaris 8 patch 109023-05; Solaris 9 patch 120240-01.
Do not modify the path statement to correct this issue as that will cause other problems.
OpenVMS support
Support for OpenVMS is limited to systems running the native vendor TCP stack.
You should state explicitly that you are looking for an undefined attribute. For example:
SEARCH SoftwareInstance WHERE name HAS SUBWORD "Web" AND NOT version IS DEFINED
and
SEARCH SoftwareInstance WHERE NOT version IS DEFINED
IBM AIX machine serial number changes when moved from one host container to another (QM001775765)
When an IBM AIX machine is moved from one host container to another, the discovered serial number of the host changes. As a result, a new host node is created and synchronized to the BMC Atrium CMDB (if CMDB synchronization is configured). The earlier host node is automatically
removed through aging, based on the Model Maintenance settings. You can manually destroy the earlier host node before it is removed through aging; however, manual destruction of host nodes is not recommended on production appliances. For more information, see Destroying data.
The following defect IDs were resolved in version 9.0 SP2: ADDM-14553, ADDM-14554, ADDM-14555, ADDM-14557, ADDM-14579, ADDM-14584, ADDM-14590, QM001786373, ADDM-14619, ADDM-14620, ADDM-14628, ADDM-14629.
Problem: BMC Atrium Discovery versions 9.0 and 9.0 SP1 kickstart installation fails on HP ProLiant BL460c hardware. Resolution: Code fix, this no longer occurs.
Problem: The Host List page fails to display discovered operating systems for which the Discovered OS filter column contains values with non-ASCII characters. Resolution: Code fix, this no longer occurs.
Problem: For VMware ESXi hosts, BMC Atrium Discovery may assign an incorrect subnet and netmask. Resolution: Code fix, this no longer occurs.
Problem: Searching for a host with a fully qualified domain name (FQDN) as the search expression may fail to return results. Resolution: Code fix, this no longer occurs.
Problem: On AIX, discovering the exact number of cores on newer versions of IBM POWER microprocessors is not possible. Resolution: Code fix, discovering the number of IBM POWER microprocessors is no longer supported.
Problem: The memory requirement for appliances with a large number of network connections is high. Resolution: Code fix, by improving caching efficiency and batching large data chunks.
Problem: On Internet Explorer, when you access the Host List page and filter the discovered operating systems by selecting Others from the Filter Column, values containing non-ASCII characters are not displayed. Resolution: Code fix, this no longer occurs.
Problem: During an upgrade, an invalid warning about a pattern using a deprecated attribute may be displayed. Resolution: Code fix, this no longer occurs.
Problem: CPU and memory information is not displayed for some discovered HP-UX hosts. Resolution: Code fix, this no longer occurs.
Problem: When you edit a scheduled discovery run, the fields on the Edit an Existing Run dialog are not correctly initialized. Resolution: Code fix, this no longer occurs.
Problem: Excessive memory usage on the consolidation and scanning appliances. Resolution: Code fix, this no longer occurs.
Problem: The query builder (which is also used in collaborative application mapping) fails to list any attributes if the node kind contains attribute names differing only in case (for example, port and Port). Resolution: Code fix, this no longer occurs.
Problem: The disk configuration utility does not consider the system data and possible core dumps when estimating the size of record data. Resolution: Code fix, this no longer occurs.
Problem: The getMACaddress method may fail to parse Tru64 NetRAIN information. Resolution: Code fix, this no longer occurs.
Problem: For Software Instances, when you select Detailed Observed Communication from the Reports drop-down, the technical difficulties page is displayed. Resolution: Code fix, this no longer occurs.
Problem: When you delete some of the contents of a required Functional Component, the Functional Component itself is deleted, which in turn causes the BAI to be deleted. Resolution: Code fix, this no longer occurs.
Problem: The appliance's UI may fail to correctly display the updated mapping set baseline. Resolution: Code fix, this no longer occurs.
Additional defect IDs resolved in version 9.0 SP2: QM001789134, ADDM-14645, ADDM-14656, QM001786536, ADDM-14657, ADDM-14663, ADDM-14665, QM001790378, ADDM-14674, QM001791015, ADDM-14681, ADDM-14682, QM001778972, ADDM-14684, QM001792097, ADDM-14688, ADDM-14708, QM001793106, ADDM-14711, ADDM-14712, QM001790807, ADDM-14741, ADDM-14745, QM001794480, ADDM-14755, QM001794338, ADDM-14768.
Problem: When you perform a system backup, certain special characters are allowed in the notes field; however, a subsequent restore fails until the special characters are removed. Resolution: Code fix, this no longer occurs.
Problem: When BMC Atrium Discovery is installed, HTTPS is enabled by default. Resolution: Code fix, HTTPS is no longer enabled by default.
Problem: Reports and queries producing lists of times fail to convert the times into a readable format. Resolution: Code fix, this no longer occurs.
Problem: When mapping an application using Collaborative Application Mapping (CAM), it is possible to generate TPL that will not compile. Resolution: Code fix, this no longer occurs.
QM001795579
Further defect IDs resolved in version 9.0 SP2: ADDM-14813, ADDM-14824, ADDM-14827.
In addition to the defects in the preceding table, version 9.0 SP2 has a number of security enhancements and QM001789294 has also been fixed.
The following defect IDs were resolved in version 9.0 SP1: ADDM-14508, QM001783344, ADDM-14491, ADDM-14475, ADDM-14471, ADDM-14470, ADDM-14466, QM001779011, ADDM-14464.
Problem: CPU information is not displayed for some discovered HP-UX hosts. Resolution: Code fix, this no longer occurs.
Problem: On the Device Filter tab of the CMDB Sync page, if you apply a device filter with more than one None conditional conjunction, the filter evaluation is incorrect. Resolution: Code fix, this no longer occurs.
Problem: For defunct processes on Linux, Solaris, HP-UX, and AIX, patterns may create Software Instance (SI) nodes. Resolution: Code fix, this no longer occurs.
Problem: Solaris defunct processes may return no value for the DiscoveredProcess cmd, which results in an ECA error. Resolution: Code fix, this no longer occurs.
Problem: When a BMC Atrium CMDB device filter uses a DDD node, synchronization stops deleting the destroyed CIs. Resolution: Code fix, this no longer occurs.
Problem: While scanning Solaris hosts, Discovery Conditions may fail with an ECA error, RuleError on rule tpl_defn_functions_fn_getDiscoveryUserID_body_0 due to: Error while executing a rule ValueError: invalid literal for int() with base 10. Resolution: Code fix, this no longer occurs.
Problem: Static data integration point (DIP) requests are not consolidated and report an error message, Request for information not part of the consolidated data. Resolution: Code fix, this no longer occurs.
Problem: When you export a search result that contains a nodeLink, incorrect output is produced. Resolution: Code fix, this no longer occurs.
Problem: When pasting a list of IP addresses into the pill UI, invalid pills are not placed first (unless sorted by type). Resolution: Code fix, this no longer occurs.
Problem: While adding a new run from the Add New Run dialog, it was possible to click OK even if invalid IP address ranges were entered, which subsequently removed all the IP address ranges entered for the run. Resolution: Code fix, this no longer occurs.
Problem: When the BMC Atrium Discovery log files are moved to a different disk using the disk configuration utility, the tw_compress_logs script run by cron does not compress the old logs, and the ~tideway/log symlink is automatically deleted 30 days after the initial log file was created. Resolution: Code fix, this no longer occurs.
Problem: On Solaris hosts, failure to parse kstat CPU information causes host discovery to fail. Resolution: Code fix, this no longer occurs.
Problem: If the omniNames service has been stopped, backing up the appliance fails and running the tw_backup command reports the error message, Backup create normally runs when appliance services are running, either start services or use --force option. Resolution: Code fix, this no longer occurs.
Problem: Even though IPv6 site local addresses are not scanned by BMC Atrium Discovery, the pill UI does not show IPv6 site local addresses as invalid. Resolution: Code fix, this no longer occurs.
Problem: When adding an IP address to a scheduled range, pressing return to convert it to a pill works, but a second return dismisses the dialog without adding the additional IP address. Resolution: Code fix, this no longer occurs.
Defect and case IDs: QM001778972, ADDM-14453, QM001778221, ADDM-14428, QM001776558, ADDM-14414, QM001776724, ADDM-14402, QM001775182, ADDM-14399, ADDM-14379, QM001775865, ADDM-14349, ADDM-14346, ADDM-14345, ADDM-14343, ADDM-14327, ADDM-14322, ADDM-14318, ADDM-14289
ADDM-14274
Problem: In BMC Atrium Discovery version 9.0, when you click a discovered host which has multiple network interfaces that do not have MAC addresses, the UI fails with a traceback. Resolution: Code fix, this no longer occurs.
Problem: While migrating from version 8.3.x to 9.0, if services are not running, the tw_addm_8_3_migration script gives a traceback instead of an error message. Resolution: Code fix, this no longer occurs.
Problem: Where hosts are related through a BAI, in some cases a sync of one host can cause SIs on other related hosts to be marked as deleted. Resolution: Code fix, this no longer occurs.
Problem: Single sign-on does not support extracting values from X.509 certificate extensions. Resolution: Support added.
Problem: Configuring certificate revocation lists (CRLs) for HTTPS is not supported. Resolution: Support added.
Problem: Cannot discover virtual AIX HBA cards. Resolution: Code fix, this no longer occurs.
Problem: SNMP discovery of AIX VIO incorrectly displays the Operating System as AIX, instead of VIO. Resolution: Code fix, this no longer occurs.
Defect and case IDs: QM001750559, QM001768414
Defect and case IDs: ADDM-14265, ADDM-14230, ADDM-14039, ADDM-14038, ADDM-13985, ADDM-14286
In addition to the defects in the preceding table, version 9.0 SP1 includes a number of security enhancements; QM001753496, QM001772404, and QM001777487 have also been fixed.
Defect and case IDs: 12700, 14245, 14365, QM001692983, 15407, QM001725494, 15423, QM001724526, 15574, QM001732061
15660
Problem: MDB test connection failed when the AR server was encrypted using the BMC Remedy Encryption Security Product. Resolution: Code fix, this no longer occurs.
Problem: In the help for tw_scheduled_snapshot, the pessimistic option is incorrect. Resolution: Code fix, this no longer occurs.
Problem: The daily disk usage statistics graph fails to load data when long filesystem names are used. Resolution: Code fix, this no longer occurs.
Problem: Manual pattern execution does not work for NetworkDevices. Resolution: Code fix, this no longer occurs.
Problem: Read-only users should not be allowed to attach documents to host nodes. Resolution: Code fix, this no longer occurs.
Problem: Some Fibre Channel HBA attributes are missing after scanning VMware ESXi 4.1.0/4.0.0/5.0.0. Resolution: Code fix, this no longer occurs.
Problem: The partition_id attribute is not populated on AIX and Linux on POWER LPARs. Resolution: Code fix, this no longer occurs.
Problem: The Appliance Specification UI incorrectly reports disk space if the path includes a symlink. Resolution: Code fix, this no longer occurs.
Problem: Solaris returns inconsistent model names for the same OS version. Resolution: Code fix, this no longer occurs.
Problem: When discovering an HP-UX B.11.00 host, getDeviceInfo gives a 'Connection timing out' error. Resolution: Code fix, this no longer occurs.
Problem: Unused permissions are still available. Resolution: Code fix, permissions removed.
Problem: PDF reports with notes fail with an exception in docutils. Resolution: Code fix, this no longer occurs.
Problem: The datastore size warning (Model Maintenance > Size Warning) has a maximum of 100 GB, which is too small. Resolution: Code fix, this has been changed.
Problem: Processor speed is not obtained from most HP-UX servers. Resolution: No acceptable fix available; resolving as won't fix.
Problem: The Component Filter UI sometimes ignores the present filter condition's containers. Resolution: Code fix, this no longer occurs.
Problem: The discovery process fails on servers with the Open Unix 8.0.0 operating system. Resolution: Code fix, this no longer occurs.
Problem: Failure to capture all of the output from the iterated lscfg -vl $adapter 2>/dev/null command on AIX. Resolution: Code fix, this no longer occurs.
Problem: The search service fails to expand explode in a nodeset before applying a filter. Resolution: Code fix, this no longer occurs.
Problem: VMware ESX is incorrectly identified as Red Hat Linux. Resolution: Code fix, this no longer occurs.
Problem: AIX filesystem parsing error when a partition is reported as -1% used. Resolution: Code fix, this no longer occurs.
Problem: An ugly CORBA permission error appears in the Change Password UI when users do not have the security/user/passwd permission. Resolution: Code fix, this no longer occurs.
Problem: The /var/log/secure files fill with sudo nmap traces, which may fill /var. Resolution: Code fix, this no longer occurs.
Defect and case IDs: QM001737346, 15668, ISS03895053, 15688, QM001738802, 15719, 15727, QM001741255, QM001741327, 15739, 15741, QM001742085, QM001736818, 15886, QM001745735, QM001753527, 15984, QM001749172
15985
Problem: After rebooting the appliance, the IP Range configuration in the Windows Proxy pool is lost. Resolution: Code fix, this no longer occurs.
Problem: SUSE Linux reports the wrong version information. Resolution: Code fix, this no longer occurs.
Problem: Incorrect file name when saving a visualization as an image. Resolution: Code fix, this no longer occurs.
Problem: A different user name is displayed in the UI where more than one user has a similar first name.
Problem: The Group permission page needs more detail. Resolution: Documentation fix.
Problem: CDM: CTI mapping does not provide values for the capabilities 10, 48, and 22. Resolution: Code fix, this no longer occurs.
Problem: On the first scan of an ESX host via ssh, discovery was incomplete, and the session log reported many "Command not found" errors. Resolution: Code fix, this no longer occurs.
Problem: The parser should handle command failure gracefully when fcinfo fails to load the FC-HBA common library. Resolution: Code fix, this no longer occurs.
Problem: The Solaris discovery script (DeviceInfo) contains a -f file test condition; it should be -r.
Problem: The IBM.DB2.RDBMS_MF_Extended.DB2_Databases pattern takes too long for large mainframe databases. Resolution: Code fix, this no longer occurs.
Problem: Incorrect core count calculation on HP-UX. Resolution: Code fix, this no longer occurs.
Problem: ECA error in traverseFindOrCreateNode for integer values that are too large in df data. Resolution: Code fix, this no longer occurs.
Problem: Incorrect OS discovered for Oracle Linux 5.3 and later. Resolution: Code fix, this no longer occurs.
Problem: Scan failure on Linux when the /etc/*-release file is empty or has no read permissions; the OS is reported as (none). Resolution: Code fix, this no longer occurs.
Problem: OpenVMS processes are not parsed where they include a space character and/or where the process cmd is displayed on multiple lines. Resolution: Code fix, this no longer occurs.
Problem: SPARC T3 processor type is shown as Unknown (SPARC-T3). Resolution: Code fix, this no longer occurs.
Problem: ESX blades report the wrong serial number. Resolution: Code fix, this no longer occurs.
Problem: The vim-cmd-interfaces output is parsed incorrectly. Resolution: Code fix, this no longer occurs.
Problem: Login session failed because MAX_READ_SIZE is too big and the searched string could not be found within MAX_SEARCH_SIZE. Resolution: Code fix, this no longer occurs.
Problem: Linux parser error when parsing hba_procfs output in which BIOS, FCODE, EFI, and Flash information does not exist. Resolution: Code fix, this no longer occurs.
Problem: Add discovery of package vendor, description, and epoch for Linux, Solaris, and HP-UX. Resolution: Code fix.
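The empty or unreadable /etc/*-release failure above illustrates why release-file parsing needs a fallback rather than assuming the first file yields a value. A hedged sketch of that idea (function and file names illustrative, not the product's actual code):

```python
import glob

def detect_linux_os(paths=None):
    """Return the first non-empty line of a release file, or 'UNKNOWN'."""
    if paths is None:
        paths = sorted(glob.glob("/etc/*-release"))
    for path in paths:
        try:
            with open(path) as f:
                for line in f:
                    line = line.strip()
                    if line:
                        return line
        except OSError:
            continue  # unreadable file: skip it rather than fail the scan
    # Empty or missing release files must not abort host discovery.
    return "UNKNOWN"
```

The key point is that an empty file or a permissions error degrades to a placeholder OS string instead of a scan failure.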
Defect and case IDs: QM001748266, 16021, 16028, QM001750209, 16032, QM001750210, 16057, QM001751695, 16092, 16099, QM001752617, QM001749794, 16103, 16124, QM001749279, QM001751719, 16127, QM001753180, 16173, QM001755689, 16179, QM001709962
16182 16186
Problem: Report generation sticks at 95%. Resolution: Code fix, this no longer occurs.
Problem: A general parser error should not be reported when hba_sysfs contains data but returns no HBA data. Resolution: Code fix, this no longer occurs.
Problem: Query Builder incorrectly displays/selects different non-taxonomy attributes with identical printable names. Resolution: Code fix, this no longer occurs.
Problem: Solaris machines on which serial numbers are not discovered can collapse OSIs. Resolution: Code fix, this no longer occurs.
Problem: Improper HBA discovery on Windows Server 2008 SP2. Resolution: Code fix, this no longer occurs.
Problem: You cannot discover SUSE 12.1 with a root credential. Resolution: Code fix, this no longer occurs.
Problem: Duplicate Host nodes are created when a host has two IP addresses, one of which is "locked down".
Problem: Saved queries for a user are not restored from the snapshot. Resolution: Code fix, this no longer occurs. The snapshot feature has been replaced with appliance backup and restore.
Problem: Synchronization into the CMDB updates CIs that have not been modified. Resolution: Code fix, this no longer occurs.
Problem: Win32_PhysicalMemory fails with error 'Capacity' when the memory bank is empty. Resolution: Code fix, this no longer occurs.
Problem: Tru64 discovery fails to discover RAM. Resolution: Code fix, this no longer occurs.
Problem: The hwmgr command must be run with escalated privileges to get the model number on Tru64. Resolution: Code fix, this no longer occurs.
Problem: If you create a Group node in TPL, you need to provide __pdf_detail_level. Resolution: Code fix, this no longer occurs.
Problem: Improve documentation for the process for adding swap as a file or a new disk/partition. Resolution: Code and documentation fix. See Managing disks and swap space.
Defect and case IDs: QM001755722, QM001755686, 16194, QM001757280, 16203, QM001754173, 16267, 16342, QM001759740, QM001763233, 16385, 16405, QM001765619, QM001766276, 16458, QM001764951, 16464, QM001759466
Problem: Only the maximum speed is discovered for multiple-processor hosts. Workaround: None.
Problem: A Fujitsu machine has the vendor "Sun Microsystems". Resolution: Code fix, this no longer occurs.
Problem: The visualization and PDF report fail when a host with a Japanese name is included. Resolution: Code fix, this no longer occurs.
Problem: A failure during database discovery results in an IntegrationResult with a node as an attribute. The following error is displayed on the scanning appliance: "Unexpected exception while sending data to the consolidation appliance". Resolution: Code fix, this no longer occurs.
Problem: Invalid raw_speed on PortInterfaces. On high-speed interfaces such as 10 Gb/s, ifHighSpeed is not read; instead, the value for raw_speed is shown in PortInterfaces. This is usually 4294.967295 Mb/s or similar.
Problem: Mainframe discovery uses an incorrect view area code for MQ Listeners. Resolution: Fixed in the code. See also Mainframe discovery commands.
Problem: TRANSIENT_CallTimedOut errors occur when SNMP discovery attempts to scan some very complex network devices. Resolution: Code fix, this no longer occurs.
Problem: If you have viewed a chart from results obtained using the query builder, the back button does not work correctly. Resolution: Code fix, this no longer occurs.
Problem: You can only select and import one credential type at a time; otherwise the import fails silently and simply refreshes the Credential Migration page. Resolution: Fixed in the code.
Problem: The consolidation start time is misleading. Resolution: Code fix, this no longer occurs.
Problem: Tripwire looks for BerkeleyDB.4.7, and an incorrectly named image, background4.jpg, should read background4.gif. Resolution: Code fix, this no longer occurs.
Problem: Cannot cancel consolidation of a run that has been canceled on the scanner. Resolution: Code fix, this no longer occurs.
Problem: Legacy Mainframe MDZ code used to generate XML is inefficient. Resolution: Code fix, this no longer occurs.
Problem: The OS classifier identifies all variants (Novell SUSE Linux Enterprise Server (9.0 - 11.0), SUSE Linux Enterprise Server (v11 onwards), and OpenSUSE) of SUSE Linux as 'SUSE Linux'. Resolution: Code fix, this no longer occurs. This fix is also included in TKU releases 2012-01-10 and later.
Problem: The search get() function accepts NodeHandles but not Nodes. Resolution: Fixed in the code.
Problem: The ECA engine is always O(n) on the number of root trackers. Resolution: Code fix, this no longer occurs.
Problem: The daily SAR chart shows nonsense data, a date in the I/O column. Resolution: Code fix, this no longer occurs.
Problem: A failure during database discovery results in an IntegrationResult with a node as an attribute. The following error is displayed on the scanning appliance: "Unexpected exception while sending data to the consolidation appliance". Resolution: Code fix, this no longer occurs.
Problem: RemQuery does not pass a valid stdin handle to processes it creates. Resolution: Code fix, this no longer occurs.
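The 4294.967295 Mb/s figure above is SNMP residue: ifSpeed is a 32-bit gauge in bits per second, so interfaces at 4.294 Gb/s or faster saturate at 2^32-1 (4294967295), and RFC 2863 provides ifHighSpeed, in units of Mb/s, for exactly this case. A hedged sketch of the selection logic (names are illustrative):

```python
IF_SPEED_MAX = 2**32 - 1  # ifSpeed (bit/s) saturates here on fast interfaces

def interface_speed_mbps(if_speed, if_high_speed=None):
    """Prefer ifHighSpeed (Mb/s) when the 32-bit ifSpeed value is saturated."""
    if if_speed == IF_SPEED_MAX and if_high_speed:
        return if_high_speed
    return if_speed / 1_000_000  # bit/s -> Mb/s

assert interface_speed_mbps(1_000_000_000) == 1000.0    # 1 Gb/s interface
assert interface_speed_mbps(4294967295, 10000) == 10000  # 10 Gb/s interface
```

Reading only ifSpeed, as the defect describes, yields 4294967295 bit/s, which rendered in Mb/s is the telltale 4294.967295.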
Defect and case IDs: QM001659086, QM001715432, 14942, QM001719307, 14951, QM001734820, 15171, 15207, 15218, 15235, 15273, 15283, QM001722913, QM001735294, 15285, QM001722947, 15288, 15302, QM001723073, 15409, QM001727216
15430
Problem: Issue while upgrading to 8.3 where Fibre Channel HBA nodes were missing. Resolution: Code fix, this no longer occurs.
Problem: OpenVMS discovery: getInterfaceList failure (parser error) when an interface has no IP address assigned. Resolution: Code fix, this no longer occurs.
Problem: SNMP discovery for hosts cannot cope with "null" IP Address and Netmask values. Resolution: Code fix, this no longer occurs.
Problem: In BMC Atrium Discovery, versions 8.2, 8.2.01, 8.2.02, 8.2.03, 8.2.04, 8.3, and 8.3 SP1, the OS string for some versions of the Oracle Enterprise Linux operating system is not displayed completely. Resolution: Fixed in the code. Workaround: In addition, there is a workaround for releases prior to 8.3 SP2. For more information, see the Known Defects in this Version pages for 8.3 and 8.2.
Problem: File content comparison does not display the correct status. Resolution: Code fix, this no longer occurs.
Problem: Discovery does not populate information on filesystems for OpenVMS 8.3. Resolution: Code fix, this no longer occurs.
Problem: The drag/drop channel does not work in Internet Explorer 8. Resolution: Code fix, this no longer occurs.
Problem: The method of finding the z/OS agent version is incorrect. Resolution: Code fix, this no longer occurs.
Problem: Output from netstat on Red Hat is not parsed correctly. Resolution: Code fix, this no longer occurs.
Problem: Windows Scanner file processing fails with a traceback. Resolution: Code fix, this no longer occurs.
Problem: The Linux "ip address show" parser does not handle InfiniBand addresses. Resolution: Code fix, this no longer occurs.
Defect and case IDs: QM001728184, 15450, QM001726175, 15484, QM001730078, 15487, QM001727595, QM001727313, 15524, 15531, QM001723284, QM001732011, 15415, 15434
Problem: The Appliance Baseline page needs a sort order. Resolution: Code fix.
Problem: When an IP range list is "hidden", you cannot click to see the full list. Resolution: Code fix.
Problem: Useful virtualisation reports are needed. Resolution: Code fix.
Problem: Uptime is not extracted from SystemInfo or Srvinfo calls. Resolution: Code fix.
Problem: SNMP fails on NetWare 5.70.05. Resolution: Code fix.
Problem: Core dump during shutdown - Discovery. Resolution: Code fix.
Problem: A Broadcom NIC is not detected on Windows. Resolution: Code fix.
Problem: Privileged execution of getDirectoryListing, getFileMetaData, and getFileContent. Resolution: Code fix.
Problem: A "Lack of credential" error message is needed. Resolution: Code fix.
Problem: Offline compaction fails due to a pattern incorrectly storing a node reference on a Host node. Workaround: Do not perform the compaction with the recode option.
Problem: Export to Oracle fails when columns have spaces. Resolution: Code fix.
Problem: You cannot filter using the third column of the Installed Packages report. Resolution: Code fix.
Problem: Slave purge button inconsistency. Resolution: Code fix.
Problem: Linux ifconfig does not always find all bound IP addresses. Resolution: Code fix.
Problem: The date selector in the Report Builder does not honor the user's date preference setting. Resolution: Code fix.
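The ifconfig limitation above is a known one: addresses added with `ip addr add` (without an interface label) are invisible to classic ifconfig, so `ip -o addr show` is the more reliable source. A hedged sketch of parsing its one-line-per-address output (the sample output is abbreviated for illustration):

```python
import re

SAMPLE = """\
1: lo    inet 127.0.0.1/8 scope host lo
2: eth0    inet 10.0.0.5/24 brd 10.0.0.255 scope global eth0
2: eth0    inet 10.0.0.6/24 scope global secondary eth0
"""

def bound_ipv4(ip_o_addr_output):
    """Map interface name -> IPv4 addresses from `ip -o addr show` output."""
    addrs = {}
    for line in ip_o_addr_output.splitlines():
        # With -o, each address appears on its own "<idx>: <ifname> inet ..." line.
        m = re.match(r"\d+:\s+(\S+)\s+inet\s+([\d.]+)/\d+", line)
        if m:
            addrs.setdefault(m.group(1), []).append(m.group(2))
    return addrs

assert bound_ipv4(SAMPLE)["eth0"] == ["10.0.0.5", "10.0.0.6"]
```

Note that the secondary address on eth0 is exactly the kind of binding ifconfig can miss.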
Defect and case IDs: 11855, SF9555, 12418, 12443, 12453, 12587, 22267, 22679, 12636, 12704, 22897, QM001673044, 12952, 13115, ISS03565196, ISS03596673, TP13281, QM001665930, 13360
Problem: Swap allocation on appliances should be increased to 8 GB. Resolution: Code fix.
Problem: RPMs should be installed for kdump use. Resolution: Code fix.
Problem: After certain Discovery scans are started, runs stop at 99% completion. Resolution: Code fix.
ISS03631844
13402 13472
13482
Problem: Stopping discovery while the discovery of a network device is in progress can take a long time. The device may be rediscovered when discovery is restarted. Resolution: Code fix.
Problem: Discovering a large network device can take longer than the default reasoning timeout (30 minutes). Resolution: Code fix.
13493
13660
Problem: The upgraded Admin group does not have permission to view the Export or CMDB-Sync pages. Resolution: Code fix.
Problem: Mainframe scan ends with reason "Unknown error". Resolution: Code fix.
Problem: Visualisation is not updated for Mainframe. Resolution: Code fix.
Problem: Technical difficulties page on completion of gather. Resolution: Code fix.
Problem: Transaction logs from datastore compaction can fill up the whole partition. Resolution: Code and documentation fix.
Problem: An excessively long "where" clause in a query generates an unhelpful CORBA.COMPLETED_MAYBE error. Resolution: Code fix.
Problem: tw_injectip --replace does not work as documented. Resolution: Documentation fix.
Problem: A Solaris prtdiag parse failure results in the number of physical and logical processors being reported incorrectly. Resolution: Code fix.
Problem: Appliance baseline gives spurious warnings about ntpd and vmware-tools at runlevel 5. Resolution: Code fix.
Problem: The getMFPart method fails. Resolution: Code fix.
Problem: The usage notes (--help) are misleading for the tw_cmdb_export2sync tool. Resolution: Code fix.
Problem: For OpenVMS discovery, show network parsing fails if the hostname is not known. Resolution: Code fix.
Problem: Security vulnerability in the returnURL redirect parameter. Resolution: Code fix.
Problem: HBAs are not detected on AIX because of a discovery parsing issue. Resolution: Code fix.
Problem: When you run a report that has relationships in the results, such as the Installed Packages report, the New Packages discovered on Hosts report, and the New Patches discovered on Hosts report, the Query Builder is not displayed. Resolution: Code fix.
Defect and case IDs: ISS03712257, ISS03750161, ISS03658956, ISS03681960, ISS03666994, ISS03680154
Defect and case IDs: QM001677253, 13750, QM001667859, 13764, 13768, ISS03636264, ISS03652666, QM001679007, 13818, ISS03673992, ISS03676415, ISS03678004, 13960, ISS03677985, 13971, 13982, ISS03662280, ISS03672449, TP13982, QM001670520, 14004
Problem: The Discovery Access document should explain the meaning of the runCommand status. Resolution: Documentation fix.
Problem: Cisco Nexus network switches show the wrong serial number. Resolution: Code fix.
Problem: Bonded interfaces report nonsensical speed/duplex settings. Resolution: Code fix.
Problem: The gather page in the user interface does not collect session logs. Resolution: Code fix.
Problem: A reconciliation job causes duplicate records because the Domain field is not populated. Resolution: Code fix.
Defect and case IDs: ISS03694229, ISS03680887
Defect and case IDs: 14028, QM001678691, 14029, QM001685207, 14035, QM001685002, 14048, ISS03683463
QM001685621
14064
Problem: When a program running via RemCom crashes or initiates a UI pop-up on the target machine, RemCom is blocked. Resolution: Code fix.
Problem: When a UNIX device with an unsupported shell is scanned, it fails with a timeout. Resolution: Code fix.
Problem: When discovering an HP-UX host, parsing the getInterfaceList output does not always find the speed, duplex, and negotiation mode. Resolution: Code fix.
Problem: Problem with MDZ service point caching. Resolution: Code fix.
Problem: CiscoWorks import fails when a speed is not specified for an entry. Resolution: Code fix.
Problem: CiscoWorks import does not differentiate whether InvalidEntryExceptions block the import or not. Resolution: Code fix.
Problem: When a list view would only contain one entry, the UI displays the node view after showing the Ajax "spinner". When this occurs, you are unable to use the browser's back button to return to the previous page. Resolution: Code fix.
Problem: CiscoWorks import fails to import the CSV file content when entries without IP addresses are detected. Resolution: Code fix.
Problem: When running offline compaction, you cannot use the original datastore after the preparation steps. Resolution: Code fix.
Problem: The template_sql_asset_integration.tpl file cannot be activated in 8.2 because default connections are not supported. Resolution: Code fix.
Problem: In the device filter, queries using LifecycleStatus and Location do not work. Resolution: Code fix.
Problem: Discovery fails on OpenVMS because a NUL byte is added before any prompt string. Resolution: Code fix.
Problem: Network device discovery error "Unable to get the deviceinfo: UNKNOWN_PythonException". Resolution: Code fix.
Problem: Mainframe IMS database information discovery is incomplete. Resolution: Code fix.
Problem: The Stop All Scans button does not work when defunct discovery processes exist. Resolution: Code fix.
Problem: WMI arguments can be truncated. Resolution: Documentation fix. See WMI arguments.
Problem: Poor discovery performance and appliance errors from SNMP discovery. Resolution: Code fix.
Problem: The upgrade script should check that there is available space in /var before upgrade. Resolution: Code fix.
Problem: The Create Integration Point link should be removed. Resolution: Code fix.
Defect and case IDs: ISS03678131, QM001681080, 14073, ISS03674449, QM001682098, 14074, ISS03669062, ISS03696044, QM001686481, 14110, ISS03696044, 14120, QM001686863, 14161, ISS03676549, QM001686948, 14190, ISS03700422, QM001688159, 14207, ISS03705373, QM001688468, 14220, ISS03699116, QM001691290, 14222, QM001690476, 14224, ISS03702019, 14243, ISS03693787, QM001703603, 14251, QM001695701, QM001692510, 14259, 14268, ISS03704879, QM001689008, 14285, QM001688141, 14300
QM001693822
14333
Problem: BMC Atrium Discovery to BMC Atrium mapping requires an additional dependency between Mainframe and BAI. Resolution: Code fix.
Problem: Remove the unused 'Confirm' button in the snapshot restore screen. Resolution: Code fix.
Problem: Cannot discover the CPU/RAM information on HP-UX. Resolution: Code fix.
Problem: The Mainframe extension for CMDB incorrectly sets audittype=none for BMC_BaseElement Core CDM attributes. Resolution: Code fix.
Problem: Editing a Location node in the UI does not work as expected. Resolution: Code fix.
Problem: Discovery fails in getInterfaceList on AIX 4.2. Resolution: Code fix.
Problem: In Mainframe JVM Settings, increase the maximum memory limit to 1024 MB. Resolution: Code fix.
Problem: The "Java heap space" error message is misleading; it should read "Out of memory error - Java heap space". Resolution: Code fix.
Problem: The tideway.py file is reported as deprecated in the appliance startup. Resolution: Code fix.
Problem: Failure in generic search caused by processwith. Resolution: Code fix.
Problem: A Xen host with "aix" in the hostname causes the AIX discovery scripts to be used for discovery. Resolution: Code fix.
Problem: Uninformative error messages appear in the UI when CMDB mapping patterns are not activated. Resolution: Code fix.
Problem: The remcomsvc.exe file should be removed from Windows targets. Resolution: Code fix.
Problem: The Observed Communications report fails. Resolution: Code fix.
Problem: A Zebra printer with sysoid 1.3.6.1.4.1.683.6 is incorrectly discovered. Resolution: Code fix.
Problem: OpenVMS process names including spaces cannot be parsed correctly. Resolution: Code fix.
Problem: Discovery fails on Solaris 10 when sudo requires a password for PRIV_PS. Resolution: Code fix.
Problem: The /var directory fills with cron emails, preventing services from restarting. Resolution: Code fix.
Problem: Non-existent commands are reported as "OK" in the DiscoveryAccess RunCommand status. Resolution: Code fix.
Problem: SYSPLEX information is not shared in an environment where a sysplex spans multiple mainframes. Resolution: Code fix.
Defect and case IDs: ISS03722398, QM001680931, 14358, ISS03678094, QM001692734, QM001686848, 14359, 14381, ISS03700534, QM001696908, 14393, QM001686895, QM001697505, 14401, 14406, ISS03712888, QM001697506, 14408, ISS03712888, QM001696447, 14427, ISS03736960, QM001700149, QM001699198, 14448, 14468, ISS03766011, ISS03745096, QM001697510, 14473, QM001697909, 14474, QM001701089, QM001693571, 14477, 14500, ISS03725047, QM001687441, 14520, ISS03701751, QM001702664, 14536, ISS03749573, QM001671087, 14554, QM001670514, 14563, ISS03646411, QM001703929, 14578, ISS03756625
QM001702839
14595
Problem: Documentation should explain the concept of root nodes and why new custom nodes are not automatically exported. Resolution: Documentation fix.
QM001704455
14602
Problem: Discovery of OpenVMS version 8.4 fails at the getInterfaceList script due to unequal lengths of Autonegotiation entries. Resolution: Code fix.
Problem: An asynchronous search can be incorrectly cancelled during blocking getResults calls. Resolution: Code fix.
Problem: From HP-UX version 11.23, "parstatus" in the host_info script cannot find the CPU information. Resolution: Code fix.
Problem: A script is needed to remove "dark space": tw_remove_darkspace. Resolution: Code fix.
Problem: Categorization for BMC_IPENDPOINT and BMC_LANENDPOINT submitted to the CMDB is incorrect. Resolution: Code fix.
Problem: CSV > FTP export fails and the error messages are unhelpful. Resolution: Code fix.
Problem: The BMC.ADDM dataset has the incorrect SoftwareServerType defined for Oracle database servers. Resolution: Code fix.
Problem: Settings need to be added to the DB_CONFIG file to prevent db_recover running out of memory. Resolution: Code fix.
Problem: Windows hosts with identical serial numbers are very likely to be merged. Resolution: Code fix.
Problem: Poorly positioned and immovable dialog boxes on restore from backup. Resolution: Code fix.
Problem: The perception of the system needs to be altered to that of an appliance rather than a general-purpose operating system. Resolution: Code and documentation fix.
Problem: The relationship between CICS and Sysplex is not shown. Resolution: Code fix.
Problem: The CMDB sync mapping is incorrect for a Host with processors with multiple CPU types and speeds. Resolution: Code fix.
Problem: The processor sync mapping to CMDB (CMDB.Host_processor) does not support PowerPC 6 and 7. Resolution: Code fix.
Problem: Engine and pattern performance information is not available at all Reasoning log levels. Resolution: Code fix.
Problem: The tw_purge_slave_logs tool fails when run with the --oldest-days=NUM option. Resolution: Code fix.
Problem: The DB Cache size setting in the UI can be set too high. Resolution: Code fix.
Defect and case IDs: ISS03750864, QM001705138, 14603, QM001703740, 14607, QM001689493, 14614, ISS03704879, 14623, 14627, ISS03737219, 14632, QM001709102, 14642, ISS03772783, 14644, ISS03735278, QM001685174, 14662, QM001706866, 14668, 14672, ISS03774183, 14686, ISS03779482, QM001706278, QM001707286, 14713, 14717, QM001706357, 14723, QM001706655, 14731
QM001698828
14754
Problem: A patch name is reported on Solaris systems where no patches are installed. Resolution: Code fix.
Problem: For HP-UX platforms, getInterfaceList uses netstat -in whereas -inw is sometimes required. Resolution: Code fix.
Problem: The WebLogic JDBC resource is not found when the URL leads to an aliased database. Resolution: Code fix.
Problem: The CapabilityList for BMC_ComputerSystem and BMC_Mainframe is set to the enum value rather than the name. Resolution: Code fix.
Problem: Discovery of HP-UX 11.31 Itanium systems does not populate RAM or Processor details. Resolution: Code fix.
Problem: The netadmin user should be the recommended way of configuring network settings. Resolution: Documentation fix.
Problem: Upgrade resets the time zone settings to BST. Resolution: Code and documentation fix.
Problem: The export log file name, tw_cmdb-export.log, is confusing and should be renamed tw_export.log. Resolution: Code fix.
Problem: Discovery stays logged in on the target regardless of the scan timeout value. Resolution: Code fix.
Problem: Synchronizing into a new custom CMDB attribute is not possible unless the tideway service is restarted. Resolution: Code and documentation fix.
Problem: A baseline alert (MINOR: Export exporter configurations have been altered) is triggered after each export. Resolution: Code fix.
Problem: CMDB Sync mappings: Name needs a valid value for BMC_IPEndpoint. Resolution: Code fix.
Problem: Mainframe discovery: insufficient user access is reported only in logs. It should be reported as script failures on the Discovery Access page. Resolution: Code fix.
Problem: Null Pointer Exceptions occur when using CMDB/API 7.6, but not when using CMDB/API 2.0, during CMDB Sync. Resolution: Code and documentation fix.
Problem: Traceback when applying sensitive data filters. Resolution: Code fix.
Problem: How many threads should be specified when setting up the CMDB sync connection? Resolution: Documentation fix.
Problem: The documentation does not provide enough details about the permissions of the database credentials. Resolution: Documentation fix.
Problem: The AIX Host package_count differs from the count of packages. Resolution: Code fix.
Problem: Technical difficulties page on LDAP Configuration when an invalid CA Certificate is uploaded. Resolution: Code fix.
Defect and case IDs: ISS03830890, ISS03820387, ISS03778681, ISS03806752, ISS03790271
Defect and case IDs: QM001709071, 14756, QM001693910, 14796, ISS03692485, QM001720262, 14828, ISS03835157, ISS03836568, QM001708299, 14836, 14841, QM001711796, QM001680947, 14844, 14917, ISS03781188, QM001707385, 14928, QM001705323, 14929, ISS03767301, QM001664211, 14941, QM001711782, 14946, QM001690038, 14949, QM001716540, 14962, QM001716869, QM001706541, 14982, 15006, QM001699098, 15007, QM001719571, 15030, QM001703981, 15042, ISS03763880
QM001719109
15054
Problem: Excessive memory consumption when a very large list of IP addresses is entered for scanning. Resolution: Code fix.
Problem: The DDD removal option "never" should only be used under guidance from Customer Support. Resolution: Code fix.
Problem: The tw_supportability script should have timestamps. Resolution: Code fix.
Problem: An ssh timeout caused by "PASSWORD" instead of "Password". Resolution: Code fix.
Problem: Typo in the upgrade script. Resolution: Code fix.
Problem: The data store unnecessarily re-indexes unchanged attributes. Resolution: Code fix.
Problem: Null Pointer Exception when CMDB Sync finds an unexpected CI type. Resolution: Code fix.
Problem: Enhancement to add a tw_option, DANGEROUS_HOST_ID_USING_ENDPOINT_HOSTNAME, so that the hostname is used as a last resort to identify a host. Resolution: Code fix.
Problem: Solaris processor speed conversion error; Speed should be set to the higher value when multiple speeds are found. Resolution: Code fix.
Defect and case IDs: ISS03603147, ISS03773760
Defect and case IDs: QM001712384, 15061, QM001714773, QM001718966, 15062, 15064, ISS03812186, ISS03825411, QM001719228, QM001719109, 15083, 15100, ISS03826915, QM001718878, 15118, 15131, 15184, 14679, 14690, 14695, 14698, 14721
14722
Problem: The db_recover command runs out of memory. Resolution: Fixed in the code. Settings were added to the DB_CONFIG file so that db_recover does not run out of memory. Customer Case Number: ISS03772783
Problem: Mainframe EDM should support discovery from z/OS agent versions 1.6.00 and later. Resolution: Fixed in the code. The mainframe EDM will support the discovery of agents from version 1.7.00 after its forthcoming release. However, the EDM will not gather the new dependency data provided by z/OS agent version 1.7.00; it will use the version 1.6 data that the new release provides.
Problem: The processor sync mapping to CMDB (CMDB.Host_processor) does not support PowerPC 6 and 7. Resolution: Fixed in the code. Customer Case Number: ISS03770806
Problem: localToRemote can result in inefficient searches and cause performance issues. Resolution: Fixed in the code. Customer Case Number: ISS03762873
Problem: One datastore query (findOrCreateNode) is slow for certain datasets and can cause performance issues, particularly for large datastores. Resolution: Fixed in the code. Customer Case Number: ISS03793019
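The db_recover fix above refers to Berkeley DB's DB_CONFIG mechanism: a plain-text file in the database environment directory, read at environment-open time, that tunes resource limits without code changes. A hypothetical fragment of the kind of settings involved (directive names are standard Berkeley DB ones; the values are illustrative, not the ones BMC shipped):

```
# DB_CONFIG - read from the datastore environment directory at open time
set_cachesize 0 268435456 1   # 256 MB cache in a single region
set_lg_bsize 1048576          # 1 MB in-memory log buffer
```

Because the file is read on every environment open, utilities such as db_recover pick up the same limits as the application itself.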
14725
14774
14780
14811
14206
14246
14260
14266
14268, 14321
14284
14286
14295
14298
Problem: The HRD importer leaves behind old nodes. This can lead to various follow-on issues, such as an inability to export data or create Host Profile reports. Resolution: Fixed in the code. Problem: The slave cannot run a command with spaces in its name and no arguments. Resolution: Fixed in the code. Problem: When logging on to an appliance user interface with permissions to view reasoning status (but without system permissions), you cannot receive reasoning on-hold information. Resolution: Fixed in the code. Problem: Cancelled consolidation runs may not complete until all in-progress endpoint runs run to completion. Resolution: Fixed in the code. Problem: To ensure discovery of IBMi/AS400, modifications are required to the IBMi firewall. Resolution: Fixed in the code. Problem: During discovery of mainframe computers, orphan MFPart and Sysplex nodes could be created. Resolution: Fixed in the code and in the documentation. Problem: When building a BAI that contains SoftwareInstances from both distributed and mainframe servers, the model in BMC Atrium CMDB is missing a dependency, and there is no visible connection between the mainframe components and the BAI. Resolution: Fixed in the code; a mapping tpl file has been created. Customer Case Number: ISS03722398 Problem: The documentation includes a link to Microsoft's website that no longer resolves. Resolution: Fixed in the documentation. Customer Case Number: ISS03726173 Problem: The MarketVersion attribute must be mapped for BMC Atrium CMDB version 7.6.03 and later to support an improved Software License Management (SWLM) feature for managing licenses. Resolution: Fixed in the code; CDM mapping now includes the MarketVersion attribute, which is populated for the BMC_SoftwareServer, BMC_Product, BMC_OperatingSystem, and BMC_Application classes. Problem: When you cancel a consolidation run, and that same run receives more data from the scanning appliance, the system may continue processing the run.
Resolution: Fixed in the code; cancelling a consolidation now also cancels the in-progress consolidation run and stops receiving data from the scanning appliance. Problem: If a BMC Atrium CMDB user sets the auditType attribute to any value, the BMC Atrium Discovery Mainframe extension overrides the change. Resolution: Fixed in the code. Customer Case Number: ISS03700534 Problem: The mapping for the BMC_MFSoftwareServer class in BMC Atrium CMDB version 7.6.03 uses incorrect attribute names. Resolution: Fixed in the code. Problem: In Mainframe JVM Settings, increase the maximum memory limit to 1024 MB. Resolution: Fixed in the code and the documentation. Problem: For a "Java heap space" error, the Mainframe DA page should explain to the user how to increase memory. Resolution: Fixed in the code. Customer Case Number: ISS03712888 Problem: SNMP discovery can fail on Netware 5.70.05. Resolution: Fixed in the code. Customer Case Number: ISS03726115 Problem: A SQL-level deadlock error is displayed when you update impact relationships between target hosts. Resolution: Fixed in the code. Problem: You cannot use the same variable as both a parameter and a return value of a user-defined function. Resolution: Fixed in the code.
14316 14325
14327
14329 14331
14334
14346
14366
14369
14382
14396
14407 14409
14412
14424
14425
14435
Problem: Both vendor and model are available in systeminfo output. They should be extracted to match WMI behavior. Resolution: Fixed in the code. Problem: Scanning Windows systems has poor performance and results in Discovery errors. Resolution: Fixed in the code. Problem: Using the processwith function communicatingSIs in searches or TPL may fail. Resolution: Fixed in the code. Problem: A host with "aix" in the name causes AIX discovery scripts to be used for discovery. Resolution: Fixed in the code. Customer Case Number: ISS03745096 Problem: Running the Observed Communications report results in an error in the processwith function. Resolution: Fixed in the code. Problem: An upgrade can fail when the TKU has been applied using the Knowledge Update page. Resolution: Fixed in the code. Problem: Documentation is required regarding the implications for the BMC.ADDM dataset when you use tw_model_init to reinitialize the datastore. Resolution: Fixed in the documentation. Problem: Cascade Removal incorrectly removes SIs when their components are removed. Resolution: Fixed in the code. Problem: Files that accumulate in the /var/spool/clientmqueue directory prevent services from restarting if SEND_EMAIL is not enabled on the VA. Resolution: Fixed in the code. Problem: If multiple mainframes are discovered from a particular DiscoveryAccess, only one is inferred. Also, SYSPLEX relationships are not properly discovered when a sysplex is shared across mainframes. Resolution: Fixed in the code. Problem: The documentation should explain the concept of root nodes and why new custom nodes are not automatically exported. Resolution: Fixed in the documentation.
14537 14556
14579
14595
13682, 13909
14107
14058
13992 12540
12545
Problem: Mac OS CPU data is missing because the Number of Processors information is not catered for in the discovery script. Resolution: Code fix, this no longer occurs. Customer Case Number: 00022581 Problem: Disable the default use of weak ciphers. Resolution: Code fix, this no longer occurs. Customer Case Number: ISS03539021 Problem: A leading space in the host field or username of an adapter prevents the adapter from working. You cannot test the adapter or use it to export data. Resolution: Code fix, this no longer occurs. Problem: Bad reporting of nmap timeouts. Resolution: Code fix, this no longer occurs. Customer Case Number: ISS03590819 / QM001654697 Problem: Processor speed conversion error. Resolution: Code fix, this no longer occurs. Customer Case Number: ISS03603147 Problem: lsof -i parsing is massively inefficient. Resolution: Code fix, this no longer occurs. Customer Case Number: ISS03615289 / QM001667255 Problem: XML API export does not encode non-printable characters as XML entities. Resolution: Code fix, this no longer occurs. Customer Case Number: ISS03611510 / QM001660995 Problem: tw_injectip --replace does not work. Resolution: Documentation fix giving examples of how to determine scan IDs. Customer Case Number: ISS03636264 / QM001667859 Problem: A Solaris prtdiag parse failure results in the wrong number of physical and logical processors. Resolution: Code fix, this no longer occurs. Customer Case Number: ISS03652666 Problem: Scan level description is misleading. Resolution: Documentation and UI fix. Customer Case Number: ISS03660048 Problem: XSS vulnerability in the "Edit Channels" page. Resolution: Code fix, this no longer occurs. Customer Case Number: ISS03662279 Problem: Security vulnerability in the returnURL redirect parameter. Resolution: Code fix, this no longer occurs. Customer Case Number: ISS03662280 Problem: Add support for Cisco Catalyst 4510R-E. Resolution: Cisco Catalyst 4510R-E is now identified.
Customer Case Number: ISS03672076 / QM001678618 Problem: HBAs are not detected on AIX because of a discovery parsing issue. Resolution: Code fix, this no longer occurs. Customer Case Number: ISS03672449 Problem: getMFPart method failure. Resolution: Code fix, this no longer occurs. Customer Case Number: ISS03676415 / QM001679896 Problem: OpenVMS show network parsing fails if there is no hostname. Resolution: Code fix, this no longer occurs. Customer Case Number: ISS03677985 Problem: Cisco Nexus network switches show the wrong serial number. Resolution: Code fix, this no longer occurs. Customer Case Number: ISS03680887 / QM001686251 Problem: An RE job is causing duplicate records because the Domain field is not populated. Resolution: Code fix, this no longer occurs. Customer Case Number: ISS03683463 / QM001685002
12762
12692
13741
13119
13737
13734
13926
13697
13713
13703
13704
14013
13981
13854
13961
14027
14037
11611
Problem: Bonded interfaces report nonsensical speed/duplex settings. Resolution: Code fix, this no longer occurs. Customer Case Number: QM001678691
13007
12892
Problem: Retrieving files from 64-bit Windows systems. This impacts discovery of files under the C:\Windows\System32 directory when running the slave on 2008. When the slave is running on a 2008 system and there is a request to discover a file beneath the C:\Windows\System32 directory, it will fail. The IIS, HyperV, and DB2 patterns are affected by this. Resolution: Code fix, this no longer occurs. Problem: A large number of hosts makes auto-grouping useless. Workaround: Documentation fix. Automatic grouping is intended as a baselining tool for use at the scale of 100 to 500 hosts. At larger scale, the usefulness and usability of the visualizations is diminished. Customer Case Number: 22698 Problem: When discovering Oracle instances, Discovery currently only attempts to log into the IP address that the host was discovered on. Resolution: Code fix, this no longer occurs. Problem: Visualizations cannot display manual groups with the '$' character in their name. Resolution: Code fix, this no longer occurs. Problem: Registry queries against 64-bit Windows. 1. Package discovery: the package list for a 64-bit system will always be incomplete. When running the slave on a 32-bit platform, only the 64-bit packages are discovered; when running the slave on a 64-bit system, only the 32-bit packages are discovered. 2. Registry queries: registry queries will succeed when the slave is running on a 32-bit system. The result of registry queries for a slave running on a 64-bit system varies depending on the operating system version: for slaves running on XP_64 or 2003_64 the queries succeed; for slaves running on 2008_64 the queries are likely to fail, but this is not predictable. Resolution: Code fix, this no longer occurs.
12675
12608
12382 11665
ADDM-14904
Problem: Automatic grouping is run whenever a Discovery run completes. Where multiple runs complete close together, the automatic grouping processes for the runs may overlap, possibly leading to performance problems. Workaround: If possible, combine multiple small discovery runs into fewer large runs. Alternatively, you may choose to disable the automatic grouping feature. Problem: In version 9.0 SP2, if the Tripwire policy file is not updated, a file not found error is reported in the Tripwire reports. Workaround: Update the Tripwire policy file as follows: remove /var/lib/nfs/rmtab and /etc/yp.conf, and update /etc/syslog.conf to /etc/rsyslog.conf, /var/www/html/restore.html to /var/www/html/restart.html, and /usr/tideway/BerkeleyDB.5.2 to /usr/tideway/BerkeleyDB.5.3. Problem: If a log file does not have access permission for the tideway user account: attempts to watch or download the log file from the appliance's UI (Administration > Logs > Log Files) display a technical difficulties page, and attempts to compress the log file fail. Workaround: As a user with permissions (root or netadmin), do one of the following: provide read access to the tideway user by running the chmod a+r command, or change the ownership of the file so that the tideway user can access it by running the chown tideway:tideway command. Problem: You cannot log in to an https-enabled appliance using the Safari browser. A security error dialog is displayed. Workaround: None. Problem: If graph definitions refer to a node kind that is not in the taxonomy, the visualizations fail to start. The logging shows that the problem is related to the taxonomy, though this can be difficult to find as there is much unrelated information. Workaround: Remove the node kind from the graph definition.
Problem: When upgrading, if you have a taxonomy extension that refers to an attribute that is no longer used in BMC Atrium Discovery 9.0 (ip_addr), the on-screen instructions prompt you to restart the tideway services before fixing the taxonomy extension. If you restart the services before fixing the taxonomy extension, the UI will not restart. Workaround: Fix the taxonomy extension before restarting the services. Problem: Some browser bookmarks using http://appliancename/ui/portal/ which were valid in previous releases are no longer valid and give a 404 page not found error. Workaround: If this occurs, navigate to the required page and reset your bookmark. Problem: Kickstarting a hardware appliance using the DVD kickstart via iDRAC does not work. Workaround: If possible, avoid using iDRAC. Problem: Backup over SSH backs up into a subdirectory called YYYY-MM-DD_hhmmss_addm_backup inside the specified directory. When restoring, if you enter the directory you specified for the backup, the backup cannot be found and the following error is displayed: "addm_backup.xml is not present under backup directory". Workaround: Include the subdirectory in the specified path. Problem: DHCP on IPv6 does not work. Workaround: None. Problem: When connecting to an IPv6 appliance over HTTPS, you are prompted to add a security exception for the self-signed certificate. In Firefox you cannot add the exception. This Firefox bug is being tracked. Workaround: None. Problem: The http to https redirect does not work when using a literal IPv6 address in the URL. This Apache bug is being tracked. Workaround: None.
ADDM-14851
ADDM-14860
16450
15813
16539
16370
DE73950
DE73470
DE71852 DE69115
DE69110
DE69059
Problem: User interface pages that include the Community channel (for example, the home page) load slowly in an IPv6-only environment. Workaround: None. Problem: In some situations a Windows host may take many hours to discover. Where this occurs, view the corresponding DiscoveryAccess and select the Discovery Method Timings report from the Reports drop-down. You may see a series of Discovery Method Duration entries lasting 29 to 30 minutes and then some shorter ones of up to a few seconds in duration. Where this occurs, the worker process may have been discarded and a new one created. Consequently, a large number of defunct worker processes are visible in the Windows Task Manager. For each of these processes (when at DEBUG log level) the last entry in the log file is of the form:
2692: 2012-02-01 18:25:25,740: discovery.slave.worker.remquery.connection: DEBUG: Failed to find ADDM Remote Query on 192.168.1.53: (1060, 'OpenService', 'The specified service does not exist as an installed service.')
15667
You will also see entries in the manager process log file of the form:
"Exception checking worker alive: CORBA.TRANSIENT(omniORB.TRANSIENT_CallTimedout, CORBA.COMPLETED_MAYBE) 1784: 2012-01-30 21:49:26,351: discovery.slave.manager.workers: INFO: Dropping worker IOR:010000002f"
This defect is also present in BMC Atrium Discovery 8.3 and 8.3 SP1. Workaround: Rescan the host. If the number of defunct processes becomes excessive, you must reboot the host to remove them. 15490 Problem: If you try to extract the contents of the upgrade package (running the upgrade script with the --upgrade option) on a host that does not have BMC Atrium Discovery installed, the script returns errors. Workaround: None. Problem: When a CMDB sync connection is configured with a user with minimal permissions, you may not be able to connect if the user has open connections from other locations. Workaround: Use a user with at least the CMDB Demo user permissions. Problem: The name server retries option that can be set on the Administration > Appliance Configuration > Name Resolution page has no effect. It will always attempt 3 retries. Workaround: None. Problem: In the Collaborative Application Mapping (CAM) user interface, using a named value that has a list in the results causes a traceback. Workaround: None. Problem: The query builder "show all/show fewer" links are not always visible in Internet Explorer 6. Note that support for Internet Explorer 6 is deprecated in BMC Atrium Discovery 8.3 SP2. Workaround: None. Problem: The wiki markup in the CAM text editor does not render correctly in the PDFs. Workaround: None. Problem: The query builder does not show any attributes if nodekind instances are not present in the datastore. Workaround: None. Problem: AD Proxy shows the confusing log reference of <no username> in logs. Workaround: None. QM001720259
15466
15421
15254
15247
15231
15213
15208
15207
Problem: TRANSIENT_CallTimedOut errors occur when SNMP discovery attempts to scan some very complex network devices. Workaround: None. Problem: Attempting to discover a host server running version 2.5 of the optional MainView for WebSphere Application Server product results in the script failure "message View 49Z not available", and no WebSphere Discovered Application Components are created, preventing extended discovery. Workaround: None. Problem: Instrument the scanning appliance to capture unexpected exceptions while sending data to the consolidator. Workaround: None. Problem: When the IP list for a host changed dramatically, no Endpoint Identity was created on the consolidator. Workaround: None. Problem: The user interface displays a "The appliance is having technical difficulties with this page" error if you add a login credential that includes a control character (ASCII 0-31). Workaround: None. Problem: Processor sync mapping to CMDB (CMDB.Host_processor) does not support PowerPC 6 and 7 for BMC_Processor.ProcessorFamily. Workaround: None. Problem: In the query builder, a condition that matches a string containing a tab character results in messages such as Unknown string qualifier at 'u' on line 1. The same is true for defining functional components. Workaround: None. Problem: Cannot move proxy pools with spaces in their names using the actions menu. Workaround: None. Problem: Person nodes do not have a key. As a result you get incorrect results when updating Person nodes in TPL. Workaround: Specify an explicit key in your pattern. Problem: "ksh[n]: tw_report: not found" is appended to the output of many Discovered results for Solaris 10. Workaround: Set force subshell = True for that login credential. Problem: When millions of DiscoveryAccess nodes are generated from darkspace scans, the VA can become so slow that you cannot log in. Workaround: Run the tw_remove_darkspace script. Problem: TCPvcon cannot be pushed to Windows 2000 hosts. Workaround: Deploy the utility manually.
Problem: When saving queries in the Reports Builder, if you specify a name that already exists, the save button is disabled, though no reason for this is shown in the user interface. Workaround: None. Problem: Cannot generate a bar graph for a scan whose name contains an apostrophe. Workaround: None. Problem: On the Discovery Home page, the Recent Runs tab does not show any discovery runs resulting from topology runs. These discovery runs are, however, shown in the Discovery Status panel on the Home page. Note that network topology discovery, introduced in BMC Atrium Discovery 8.2, was removed in BMC Atrium Discovery version 8.3 and replaced with Edge Connectivity discovery. Workaround: None. Problem: Consolidation stopped. Workaround: None. Problem: Upgrades from BMC Atrium Discovery 7.3 to 7.3.1 produce errors about discovery running. Workaround: None. 22943 22622 ISS03668576 ISS03643235 ISS03780851 QM001718958
15169
15158
15106
15077
15003
14968
14681
14570
14555
ISS03729494
14227
ISS03704879
13963 13881
13801
13583
12647 12590
12458
Problem: Wrong RAM reported by WMI against Windows 2008 Server VMs on Hyper-V. Workaround: None. Problem: Upgrade highlights changes to Windows discovery scripts when the content is unchanged. Workaround: None. Problem: Errors appear in ECA logs: "Errors not related to a node". Workaround: None. Problem: Log rolling failure in the Windows Proxy results in a traceback. Workaround: None. Problem: The datastore can crash when it is compacted using the new online compaction tool. Workaround: The crash does not corrupt data, and online compaction can be re-run and will continue where it left off; however, it is recommended that you continue to use the offline compaction utility. Problem: When you have edited the graph-definition.txt file, you must restart the Tideway services to apply the changes. Workaround: None. Problem: If the "Only show working set in visualizations" menu option is used while viewing a visualization, the visualization does not always redraw correctly to honor the new option state. Workaround: Manually refresh the browser page. Problem: Hiding nodes in a visualization such that no nodes remain in the visualization can cause Internet Explorer 6 to crash with the message "R6025 pure virtual function call". Note that support for Internet Explorer 6 is deprecated in BMC Atrium Discovery 8.3 SP2. Workaround: Do not hide all nodes in a visualization. Problem: TRANSIENT_CallTimedout should be modified slightly for user display. Workaround: None. Problem: Discovery runs older than the ten most recent are not displayed well. Workaround: None. Problem: Discovery of a service processor results in the creation of a host node. Workaround: None. Problem: TPL activation doesn't handle special characters well, for example a backtick. Workaround: None. Problem: When connecting to BMC Atrium Discovery via HTTPS using an alias hostname, you are prompted for verification. Workaround: None.
Problem: History comparison is unusable with more than a small number of entries. Workaround: None. Problem: When doing an snmpGet in a pattern, it is possible that some SNMP agents for certain OIDs will return data for a different OID than the one requested. Because the OID for the data retrieved doesn't match the OID that the request was for, the corresponding attribute will not be set. Workaround: The mapping can be set to include the requested OID and the OID that will actually be returned. For example:
table oid_map 1.0
    ".1.3.6.1.4.1.11.2.7.1.75.8.10.10.10.1" -> "wont_get_this";
    ".1.3.6.1.4.1.11.2.7.1.0" -> "pingtime";
end table;
ISS03812417
12445
22254
ISS03752142 21992
12330 *
12297 *
12296 *
10521
10277
8444
9179 *
7848 *
Problem: On Windows hosts there is overlap between what is returned as packages and what is returned as patches. The WMI getPackageList code retrieves everything that is uninstallable, which will include all the patches. The pstools version does the same, but filters out hotfixes. The data returned by getPackageList will therefore be different depending on the access method used. Duplicate Patch and Package nodes will be created, with a potential impact on performance and disk usage. Queries for patch/package information will have to be more complex or may return duplicate results. Workaround: None. Problem: An Appscan Auto test revealed several XSS vulnerabilities in the RelationshipSearch and RelationshipMultisearch widgets. Workaround: None, though these have now been identified as false positives. Problem: When discovering Windows NT hosts with a Credential Windows proxy (platform minimum specification), discovery fails although a valid credential is defined. The credential should be localhost\username. Workaround: Ensure that the credentials for these hosts are entered as localhost\username. This is not required for Credential Slaves released with BMC Atrium Discovery 8.2 and later, as the Credential Slave adds "localhost\" to any unqualified username. In these examples, replace username with the correct user name for Discovery. Customer Case Number: C5293. Problem: On BMC Atrium Discovery and Dependency Mapping version 9.0 appliances, when you re-provision a NIC (for example, change the interface type on a virtual machine) and reboot the system for the changes to take effect, the NIC is provisioned as eth1 rather than eth0, which BMC Atrium Discovery requires. Workaround: See Appliance does not have an IP on eth0.
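The overlap described above means a consumer of getPackageList data may need to deduplicate patches from the package list itself. A minimal sketch, with hypothetical entry names (the "KB" naming convention here is an assumption for illustration only):

```python
def remove_patches(packages, patches):
    """Drop entries from the package list that also appear as patches."""
    patch_set = set(patches)
    return [pkg for pkg in packages if pkg not in patch_set]

# Hypothetical data illustrating the overlap described above, where
# hotfixes are mixed into the uninstallable-software list.
packages = ["Example Application 1.0",
            "Security Update (KB123456)",
            "Example Application 2.0"]
patches = ["Security Update (KB123456)"]
deduplicated = remove_patches(packages, patches)
```

This keeps only genuine packages, at the cost of an extra pass over the data whenever both lists are queried.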
3534 *
3151, 4456. *
ADDM-14401 ADDM-14360
Installation
New installations of BMC Atrium Discovery version 9.0 are installed from a kickstart DVD or a downloaded Open Virtualization Format (OVF) virtual machine. See Installing BMC Atrium Discovery and Installing the virtual appliance for more information.
Sizing guidelines
The following guidelines are based on typical deployments in the field, and are intended to serve only as recommended configurations for your environment. This section defines four "classes" of appliance deployment that broadly follow how BMC Atrium Discovery is deployed in the field. They are differentiated by how many Operating System Instances (OSIs) are being scanned by BMC Atrium Discovery. The names given to these classes are of use only in this document and do not relate to the various editions in which BMC Atrium Discovery is available. The classes are: Proof of Concept: small, time-limited test deployments of BMC Atrium Discovery, scanning up to 150 OSIs. Baseline: a typical baseline as offered by BMC, scanning up to 500 OSIs. Datacentre: a typical large scale deployment, scanning up to 5000 OSIs. Consolidated Enterprise: enterprise scale deployments, typically a Consolidation Appliance taking feeds from many Scanning Appliances, scanning or consolidating on the order of 20000 OSIs or more. At these levels, a weekly scanning or focused scanning strategy may need to be adopted.
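The four classes above amount to a simple lookup from OSI count to class name; a sketch using the boundary values stated in the guidelines:

```python
def deployment_class(osi_count):
    """Map a scanned-OSI count to the deployment class described above."""
    if osi_count <= 150:
        return "Proof of Concept"
    if osi_count <= 500:
        return "Baseline"
    if osi_count <= 5000:
        return "Datacentre"
    return "Consolidated Enterprise"
```

For example, a deployment scanning 3000 OSIs falls into the Datacentre class.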
Proof of Concept The Proof of Concept class has a minimal storage allowance because it is intended only for a limited period of scanning, such as a week-long trial. For longer periods, or for a continuously used development or UAT system, the Baseline class is the minimum recommended.
Memory and swap usage Memory and swap usage depends on the nature of the discovery being performed, with Mainframe and VMware (vCenter/vSphere) devices requiring more than basic UNIX and Windows devices.
Where the following tables refer to CPUs, full use of a logical CPU (core) is assumed. For example, if eight CPUs are required, then you may provide them in the following ways: Eight virtual CPUs in your virtualization platform, such as VMware Infrastructure. Four dual core physical CPUs. Two quad core physical CPUs.
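The combinations above all provide the same number of logical CPUs; a trivial sketch of that arithmetic:

```python
def logical_cpus(physical_cpus, cores_per_cpu):
    """Logical CPUs (cores) provided by a given physical configuration."""
    return physical_cpus * cores_per_cpu

# The example combinations above, each yielding eight logical CPUs:
# eight virtual CPUs, four dual-core CPUs, or two quad-core CPUs.
configurations = [(8, 1), (4, 2), (2, 4)]
```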
The disk configuration utility uses the following calculation to determine the best swap size: where the amount of memory is less than 16 GB, swap is set at double the memory size; where the amount of memory is between 16 and 32 GB, swap is set at 32 GB; where the amount of memory exceeds 32 GB, swap is set to equal the memory size. The disk requirement with local backup is lower than in previous versions because the appliance backup feature does not use as much disk space as the appliance snapshot when creating a backup.
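The swap-size rule above can be expressed directly; a sketch in Python:

```python
def swap_size_gb(memory_gb):
    """Swap size chosen by the disk configuration utility, per the rules above."""
    if memory_gb < 16:
        return memory_gb * 2   # less than 16 GB: double the memory size
    if memory_gb <= 32:
        return 32              # between 16 and 32 GB: fixed at 32 GB
    return memory_gb           # above 32 GB: equal to the memory size
```

So an 8 GB appliance gets 16 GB of swap, while a 64 GB appliance gets 64 GB.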
Memory requirements for POC class 2 GB of RAM is sufficient for normal operation, but is insufficient to activate a new TKU; activating a TKU requires 4 GB of RAM. You can increase the memory for activation and then reduce it for normal operation if required.
No additional software is supported on the appliance BMC Atrium Discovery is built as an appliance that is not intended to have any additional software installed on it, with the single exception of BMC PATROL. If you have an urgent business need to install additional software on the appliance, please contact BMC Customer Support before doing so.
No OS customizations are supported on the appliance The BMC Atrium Discovery software is delivered as an appliance model (either virtual or physical), and includes the entire software stack from the Linux operating system to the BMC application software. The operating system must not be treated as a general-purpose operating system, but rather as a tightly integrated part of the BMC Atrium Discovery solution. As such, customizations to the operating system should only be made at the command line if explicitly described in this online documentation, or under the guidance of BMC Customer Support. If you have an urgent business need to change any operating system configuration, please contact BMC Customer Support before doing so.
Unsupported options During the operating system installation you are presented with options that are not supported as part of BMC Atrium Discovery, for example: the Encrypt disks option in the partitioning screens, and the Advanced Storage options. These are not supported.
Prerequisites
Before installing BMC Atrium Discovery from the kickstart DVD, ensure that you satisfy the following prerequisites: you must be an experienced Linux system administrator, and you must be installing on x86-64 based hardware that is supported by Red Hat Enterprise Linux 6 (64-bit only). Installation on a 32-bit machine is not available.
BMC recommends that, for the best performance, you use two logical disks. For single-disk installations, your sizing calculations should be based on the size of the database (see the table above) plus the size of the OS disk (146 GB).
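For a single-disk installation, the guideline above amounts to adding the OS disk size to the database size; a sketch (the database size passed in is a placeholder, since the sizing table itself is not reproduced in this excerpt):

```python
OS_DISK_GB = 146  # OS disk size stated in the guideline above

def single_disk_size_gb(database_gb):
    """Minimum single-disk size: database size plus the OS disk."""
    return database_gb + OS_DISK_GB
```

For example, a 500 GB database would call for a 646 GB disk.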
Partitioning destroys all data on disks Installing BMC Atrium Discovery involves partitioning your disks. Partitioning disks destroys any data on those disks. You should understand partitioning before installing BMC Atrium Discovery.
To install BMC Atrium Discovery from a kickstart DVD:
1. Boot your host using the kickstart DVD. See the documentation supplied with the hardware platform for information on this. You are presented with a splash screen which enables you to select installer options (press F2 to see more information). The options are:
install: installs on a system configured as specified in Appliance hardware platforms. This option performs an installation that completely overwrites any data on the system, and enables you to set the network configuration, keyboard layout, language, and timezone. BMC Atrium Discovery is always installed onto the first disk; additional disks can be configured from the BMC Atrium Discovery UI when the installation is complete.
custom: enables you to customize disk partitioning, but is otherwise identical to the install option. Always use the install option in preference to the custom option, even if you have multiple disks. If you use custom to create a non-standard disk layout, the Disk Configuration feature will not be able to manage BMC Atrium Discovery storage. When using the custom option, ensure that the /boot partition is 250 MB, as in the default partitioning scheme.
Unsupported boot options At this stage you may specify boot options, for example to customize the install; however, this is not supported. See the Red Hat Enterprise Linux documentation for information on boot options.
2. Enter one of the supported options at the boot: prompt and press Enter. The Red Hat Enterprise Linux installer starts.
3. If you have more than one network interface, you are prompted to choose one. Select the first Ethernet device, eth0, and click OK.
4. When the TCP/IP configuration screen is displayed, enter the required networking information and click OK. If you select manual configuration, one or more further configuration screens are presented; provide the requested information.
5. If the disks are not recognized or have never been partitioned, a partitioning table warning is displayed. Click Yes to proceed.
6. When prompted, choose the language to use during installation.
7. When prompted, choose the appropriate keyboard for the system.
8. When the time zone configuration screen appears, tick the "System clock uses UTC" option and select the correct time zone.
9. If you selected the custom option, use the following steps to configure the disks.
Non-standard disk configuration
These custom install instructions result in a valid disk layout identical to that created by the install option. The disk configuration utility will not be able to manage disks if volume groups or a non-standard disk configuration are used.
a. Select the Replace Existing Linux System(s) installation type. Do not select the Encrypt system option.
b. Select the Review and modify partitioning layout option.
c. Click Next.
d. On the storage device selection screen, select the first disk, and click the right arrow to move it into the Install Target Devices list.
e. Click Next. The Review Partitioning screen is displayed.
f. Delete the volume group shown beneath LVM Volume Groups, or create the partitions in this volume using the sizes described in the table below as a guide. If you do not delete the LVM volume group, you will not be able to use the disk configuration utility to manage disks. When you delete the volume group displayed below LVM Volume Groups, or create the partitions in the volume, the existing disk partitioning is not populated by the installer. You must enter the partition information as instructed in the following step.
g. If you have deleted the LVM volume group, delete all remaining partitions on all disks and enter the following partition information for the first disk:

    Mountpoint    Type    Size (MB)
                  swap    16384
    /boot                 250
                          1768
                          896
                          896
                          2048
                          1536
                          1024
                          100 (Fill to maximum allowable space.)

See Appliance sizing guidelines for information on the recommended swap size.
h. If you have more than one disk, do not configure them at this time. Instead, use the disk configuration utility when you have installed BMC Atrium Discovery.
The partitioning and installation of the operating system and BMC Atrium Discovery begins. This may take some time. When the installation has completed, remove the DVD and click the Reboot button. The BMC Atrium Discovery banner displays networking information. The installation is now complete. If you have additional disks, use the disk configuration utility to configure them.
No additional software is supported on the appliance
BMC Atrium Discovery is built as an appliance and is not intended to have any additional software installed on it, with the single exception of BMC PATROL. If you have an urgent business need to install additional software on the appliance, please contact BMC Customer Support before doing so.
No OS customizations are supported on the appliance
The BMC Atrium Discovery software is delivered as an appliance (either virtual or physical), and includes the entire software stack from the Linux operating system to the BMC application software. The operating system must not be treated as a general-purpose platform, but rather as a tightly integrated part of the BMC Atrium Discovery solution. As such, customizations to the operating system should only be made at the command line if explicitly described in this online documentation, or under the guidance of BMC Customer Support. If you have an urgent business need to change any operating system configuration, please contact BMC Customer Support before doing so.
Pursuant to the terms of VMware's VMware Ready program, BMC Atrium Discovery is a qualifying partner product: it has been approved by VMware as meeting all of the requirements of the VMware Ready program. BMC Atrium Discovery is certified in the Application Software category of the VMware Ready program and is fully supported by BMC. The product has been fully tested to ensure reliable operation under fully loaded conditions. This certification can be validated via this article, in the Installing the virtual appliance guide, and on VMware's Solution Exchange. VMware and VMware Ready are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions.
You must add a CD-ROM drive to the virtual machine before you can install VMware tools. Do this before starting the virtual machine.
The major steps in this procedure are outlined below. For full information, see the VMware documentation.
1. Copy and extract the BMC Atrium Discovery Virtual Appliance zip file to the local file system.
2. In VMware Virtual Infrastructure, from the File menu, select Deploy OVF Template. The wizard guides you through the rest of the process described here.
3. Enter the file name of the BMC Atrium Discovery OVF file into the Filename dialog box, or click Browse... to locate the file. Click Next.
4. Review the details provided. Click Next.
5. Provide a name for the virtual machine. Click Next.
6. Choose a datastore location for the virtual machine. Click Next.
7. Choose thin or thick disk format. The default is thick, which pre-allocates all disk space. Click Next.
8. Review the configuration and click Finish to proceed. A progress bar is displayed while the virtual machine is deployed.
9. Click the Hardware tab and click Add.
10. Select DVD/CD-ROM Drive and click Next.
11. Accept the defaults for the drive. Optionally, you can now add additional disk storage, which can be managed with the disk configuration utility.
12. Click the Hardware tab and click Add.
13. Select Hard Disk and click Next.
14. Choose the disk capacity and other options. For information on recommended capacity, see the sizing guidelines.
15. Click Next.
16. Review the settings and click Finish.
17. Click OK to save the changes. You can now start the virtual machine.
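For scripted deployments, the same OVF can also be deployed with VMware's ovftool rather than the vSphere client wizard. The target locator, datastore, and names below are placeholders for your environment; this is a sketch, not a BMC-documented procedure:

```shell
# Hypothetical scripted equivalent of the wizard steps 2-8 above.
# All names other than the OVF filename are environment-specific placeholders.
ovftool \
  --name=addm-appliance \
  --datastore=datastore1 \
  --diskMode=thick \
  ADDM_VA_64_9.0.1_308185_ga_Community.ovf \
  'vi://administrator@vcenter.example.com/Datacenter/host/esx01.example.com'
```

The CD-ROM drive and any additional disks (steps 9-17) still need to be added through the client afterwards.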
4. Enter a name for the virtual machine and click Import. VMware Workstation imports the virtual machine and the virtual machine is displayed in the virtual machine library.
Earlier versions
VMware Workstation version 7 and earlier and VMware Player version 3 and earlier do not directly support appliances generated by the current VMware tools. However, you can convert the OVF file using the OVF Tool in order to import it. Note also that some of these older versions do not natively support Red Hat Enterprise Linux version 6. To convert, enter the following at a command prompt:
"c:\Program Files\VMware\VMware OVF Tool\ovftool" ADDM_VA_64_9.0.1_308185_ga_Community.ovf ADDM_VA_64_9.0.1_308185_ga_Community.vmx
When you have converted the file, you can install it using File > Open.
Post Installation
Important Post Installation Steps
It is important to consider the following steps after installing:
Installing VMware Tools
Setting up NTP
VMware tools
VMware Tools is required but is not installed on the shipped BMC Atrium Discovery Virtual Appliance, because the correct version of the tools depends on the version of the VMware host. When you have installed the BMC Atrium Discovery Virtual Appliance, you must always install VMware Tools.
PDF generated from online docs at http://discovery.bmc.com BMC Atrium Discovery 9.0
c. Enter one of the following commands, depending on whether VMware Tools is supplied on the virtual CD as an RPM or a tar file.
i. If you have an RPM, enter the following commands:
rpm -i /mnt/cdrom/VMwareTools-XXX-nnnnnn.x86_64.rpm
umount /mnt/cdrom
ii. If there is a tar file, enter the following commands:
cd /tmp
tar zxvf /mnt/cdrom/VMwareTools-XXX-nnnnnn.tar.gz
umount /mnt/cdrom
cd vmware-tools-distrib
./vmware-install.pl
6. Respond to the configuration questions on the screen. Press Enter to accept the default value. If, at any later point, you need to re-run the VMware Tools script, the default location is /usr/bin/vmware-install.pl.
Setting up NTP
It is essential to configure NTP on your BMC Atrium Discovery Virtual Appliance. If significant clock skew occurs, it can impact the functioning of the system or even prevent BMC Atrium Discovery from starting. VMware has documented its recommended timekeeping best practices for Linux-based VMs in a Knowledge Base article.
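A minimal NTP setup on the appliance might look like the following; the server names are examples only, and the VMware best-practices article should take precedence for any guest-specific settings:

```shell
# Example /etc/ntp.conf entries (replace with your site's time sources):
#   server ntp1.example.com iburst
#   server ntp2.example.com iburst
# Enable and start the service (RHEL-style commands), then verify peers:
chkconfig ntpd on
service ntpd start
ntpq -p
```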
Next Steps
Once the Virtual Appliance is installed, it is a good time to configure it with sufficient resources for its intended use. Doing so now is easier than extending the resources once the Virtual Appliance has been in use for some time. See Configuring the Virtual Appliance for details.
Note If you are upgrading from an earlier 7.X or 8.X version you will first need to upgrade to one of the versions listed above.
4. In this filename, Vn_nnnnnn is the version number and rhel5 denotes that the upgrade script is for versions of BMC Atrium Discovery running on Red Hat Enterprise Linux 5. There is no upgrade script for versions of BMC Atrium Discovery running on Red Hat Enterprise Linux 6; instead, you can install that from a kickstart DVD.
Warnings
Changes to OS Configuration Files If you have made changes to operating system configuration files on the appliance, these changes may be overwritten by the upgrade process. After the upgrade has completed, you must check any configuration files you have previously modified and reapply the changes as required.
Database upgrade This upgrade performs an upgrade of the BMC Atrium Discovery database. It is highly recommended that you do not skip running a snapshot. Where a snapshot is created, it can only be restored to an appliance running the pre-upgrade version.
For users with multiple appliances
You may have a number of appliances, for example one or more appliances consolidating to a central consolidated appliance. When a system uses consolidation, the required approach in this upgrade is:
1. Stop discovery on scanning appliances.
2. Ensure that all consolidation operations are complete.
3. Stop discovery on consolidating appliances.
4. Upgrade all appliances.
5. Restart discovery on the consolidating appliances.
6. Restart discovery on the scanning appliances.
Upgrade considerations
All credentials changed in post-upgrade baseline
The upgrade to BMC Atrium Discovery version 9.0 includes the ability to enable and disable individual credentials. Because each credential has an attribute added to support this, all credentials are shown as changed in a post-upgrade baseline.

Regex/wildcard specified credentials disabled after upgrade
In previous versions you could use regular expressions or wildcards to specify the IP addresses for which a credential would be used. This approach does not work with IPv6 addresses and has been removed. Any existing credential using the .* regular expression is changed to match all addresses. For IPv4 that is 0.0.0.0 and for IPv6 ::/ is used. Any credential using any other regular expression is disabled and must be modified and re-enabled before it can be used.

Custom patterns referencing old network model require changes post-upgrade
BMC Atrium Discovery version 9.0 includes changes to the underlying network model. Any patterns that refer to elements of the old model must be updated in order to continue to work correctly. During the upgrade, such pattern modules are flagged and should be reviewed before using BMC Atrium Discovery.

Retaining the set timezone
When you run the upgrade, the timezone you have specified is overwritten and returned to Europe/London unless you have updated the ZONE variable in /etc/sysconfig/clock. See Localizing the appliance for information on how to do this.

For users of CMDB Sync
Where an upgrade makes changes to syncmapping files (see Default CDM Mapping and Syncmapping block), the initial CMDB syncs after upgrade may result in longer reconciliation times. Examples of such changes are key changes or attribute changes on a CMDB CI.

Miscellaneous upgrade items
The following items are also affected by the upgrade:
SQL and JDBC credentials: where SQL and JDBC credentials are in use, their existing properties files continue to be used and updated properties files are installed but not used. Where such credentials are unused, the old properties files are replaced with the latest.
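As noted under "Retaining the set timezone" above, the configured timezone survives the upgrade only if the ZONE variable is set. An illustrative /etc/sysconfig/clock is shown below; the timezone value is an example only:

```shell
# /etc/sysconfig/clock -- example contents
# ZONE must name a file under /usr/share/zoneinfo
ZONE="America/New_York"
UTC=true
```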
Re-running an upgrade
The upgrade to BMC Atrium Discovery 9.0 introduces the ability to re-run the upgrade. Destructive actions are recorded and are skipped on subsequent upgrade runs. Consequently, when an upgrade is re-run, it is usually significantly quicker. You are unlikely to need to run the upgrade more than once, as it would only be required if an upgrade fails. An option (--redo) is provided to re-run an upgrade as if no previous upgrades to this version had run.
The upgrade script accepts the following options:
--extract
--tmpdir dirname
--no-clean
--auto
--upgrade-discovery-scripts
--username
--password
--redo
--verbose
--help
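Using the filename convention from the following procedure, a re-run of a failed upgrade might be invoked as shown; the exact filename depends on the version you downloaded:

```shell
# Re-run the upgrade as if no previous run to this version had occurred
sh ADDM_Upgrade_64_9.0_268697_dev.sh --redo
```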
In the following procedure, the filename is referred to as ADDM_Upgrade_64_Vn_nnnnnn_dev.sh.gz. Replace Vn_nnnnnn with the version number in the commands as appropriate. For example, ADDM_Upgrade_64_9.0_268697_dev.sh.gz.
4. Copy the ADDM_Upgrade_<arch>_Vn_nnnnn_ga.sh.gz file to a temporary directory, such as /tmp. 5. Unzip the archive file using the following command:
[root@localhost tmp]# gunzip -v ADDM_Upgrade_64_Vnnnnn_nnnnnn_dev.sh.gz
Have you read the Release Notes, and do you have everything you need to complete the upgrade (yes/no)?
7. Enter yes if you have all that you need to perform the upgrade. Answering no aborts the installation. The script checks that all system requirements are fulfilled.
==============================================================================
STAGE 1: Pre Upgrade Checks
==============================================================================
1.1: Check Operating System                                    [ OK ]
1.2: Check ADDM version                                        [ OK ]
1.3: Check for development packages                            [ OK ]
1.4: Get credentials
ADDM application (UI) credentials are required for upgrade.
Please use an ADDM application user with sufficient privileges,
for example the "system" user.
Please enter ADDM user (Default: system):
Please enter password for system:
Testing credential ...                                         [ OK ]
1.5: Check temporary directory                                 [ OK ]
Temporary directory /usr/tideway/tmp/twf.upgrade does not exist.
Create it (yes/no)? yes
1.6: Check disk space                                          [ OK ]
1.7: Check if ADDM services can be restarted                   [ OK ]
1.8: Check for Vault passphrase                                [ OK ]
8. The upgrade itself then commences, beginning with extraction of the files from the archive, followed by an estimate of the time required for the upgrade.
==============================================================================
STAGE 2: Archive extraction and estimation
==============================================================================
2.1: Extracting files                                          [ OK ]
2.2: Extracting upgrade tools                                  [ OK ]
2.3: Estimating remaining upgrade time                         [ OK ]
The upgrade is estimated to take between 30 minutes and an hour
NOTE: This does NOT include any time required to wait for any active
Discovery or CMDB synchronization to complete. Also note that this
estimate assumes sufficient resources are available, i.e. tasks will
complete in a reasonable time and not be swapped.
Do you want to continue (yes/no)?
==============================================================================
STAGE 3: Upgrade Operating System
==============================================================================
3.1: Upgrading the Operating System                            [ OK ]
11. The upgrade then tests that the RPM will install correctly against the current system.
==============================================================================
STAGE 4: RPM Upgrade Tests
==============================================================================
4.1: Remove any old RPM public signature key                   [ OK ]
4.2: Check for new RPM public signature key                    [ OK ]
4.3: Testing that RPMs can be upgraded                         [ OK ]
12. The next part of the upgrade is configuring the system, for example backing up existing discovery scripts.
==============================================================================
STAGE 5: Configure System for Upgrade
==============================================================================
5.1: Audit upgrade start                                       [ OK ]
5.2: Backup existing Discovery scripts                         [ OK ]
5.3: Removing old CMDB export definitions                      [ OK ]
5.4: Get pre upgrade option values                             [ OK ]
13. The upgrade script now upgrades BMC Atrium Discovery and any dependencies. Part of this stage is to create a snapshot unless you specified otherwise.
==============================================================================
STAGE 6: Upgrade ADDM and dependencies
==============================================================================
6.1: Performing snapshot                                       [ OK ]
6.2: Stop services
Stopping Tideway application services
  Stopping Application Server service:                         [ OK ]
  Stopping Reports service:                                    [ OK ]
  Stopping Tomcat:                                             [ OK ]
  Stopping Reasoning service:                                  [ OK ]
  Stopping CMDB Sync (Transformer) service:                    [ OK ]
  Stopping CMDB Sync (Exporter) service:                       [ OK ]
  Stopping vSphere Proxy service:                              [ OK ]
  Stopping SQL Provider service:                               [ OK ]
  Stopping Phurnace Integration service:                       [ OK ]
  Stopping Mainframe Provider service:                         [ OK ]
  Stopping Model service:                                      [ OK ]
  Stopping Discovery service:                                  [ OK ]
  Stopping Vault service:                                      [ OK ]
  Stopping Options service:                                    [ OK ]
  Stopping Security service:                                   [ OK ]
  Stopping Free Space Monitor service:                         [ OK ]
Stopping omniNames                                             [ OK ]
Appliance shutdown tasks
  Stopping Restore Watchdog service:                           [ OK ]
  Stopping httpd:                                              [ OK ]
6.3: Check persistence files                                   [ OK ]
6.4: Performing DB checkpoint                                  [ OK ]
6.5: Backup existing mapping sets                              [ OK ]
6.6: Copy JDBC drivers                                         [ OK ]
6.7: Remove any old custom rules and actions                   [ OK ]
6.8: Remove any old permissions.db                             [ OK ]
6.9: Removing old record data                                  [ OK ]
6.10: Removing old pool data                                   [ OK ]
6.12: Backup tripwire policy files                             [ OK ]
6.13: ADDM RPM upgrade                                         [ OK ]
6.14: Check tripwire configuration                             [ OK ]
6.15: Check system services                                    [ OK ]
6.16: Remove any old BerkeleyDB files                          [ OK ]
6.17: Upgrade permissions                                      [ OK ]
6.18: Cleaning up old pattern code                             [ OK ]
6.19: Check installed mapping sets                             [WARNING]
Mapping sets have changed in the new version of ADDM
14. The BMC Atrium Discovery application has now been upgraded, but a number of configuration steps need to take place, for example re-importing the taxonomy, upgrading discovery scripts, and recompiling patterns. In this upgrade the datastore is reindexed which may take some time, depending on the amount of data contained in the datastore.
==============================================================================
STAGE 7: Post Upgrade Configuration
==============================================================================
7.1: Start services for Taxonomy upgrade
Starting omniNames                                             [ OK ]
Starting Tideway application services
  Starting Security service:                                   [ OK ]
  Starting Model service:                                      [ OK ]
7.2: Export existing taxonomy                                  [ OK ]
7.3: Import new taxonomy                                       [ OK ]
7.4: Verifying Graph Definitions                               [ OK ]
7.5: Stop services after Taxonomy upgrade
Stopping Tideway application services
  Stopping Model service:                                      [ OK ]
  Stopping Security service:                                   [ OK ]
Stopping omniNames                                             [ OK ]
7.6: Update discovery.conf                                     [ OK ]
7.7: Fix old style CORBA objects                               [ OK ]
7.8: Restart all services
Starting httpd:                                                [ OK ]
Appliance start up tasks
  Clearing appliance temporary files                           [ OK ]
  Starting Restore Watchdog service:                           [ OK ]
Starting omniNames for the first time                          [ OK ]
Starting Tideway application services
  Starting Free Space Monitor service:                         [ OK ]
  Starting Security service:                                   [ OK ]
  Starting Model service:                                      [ OK ]
  Starting Coordinator service:                                [ OK ]
  Starting Options service:                                    [ OK ]
  Starting Vault service:                                      [ OK ]
  Starting Discovery service:                                  [ OK ]
  Starting Mainframe Provider service:                         [ OK ]
  Starting Phurnace Integration service:                       [ OK ]
  Starting SQL Provider service:                               [ OK ]
  Starting vSphere Proxy service:                              [ OK ]
  Starting CMDB Sync (Exporter) service:                       [ OK ]
  Starting CMDB Sync (Transformer) service:                    [ OK ]
  Starting Reasoning service:                                  [ OK ]
  Starting Tomcat:                                             [ OK ]
  Starting Reports service:                                    [ OK ]
  Starting Application Server service:                         [ OK ]
Updating cron tables:                                          [ OK ]
7.9: Check for credentials with invalid IP Ranges              [ OK ]
7.10: Upgrading option default values                          [ OK ]
7.11: Upgrading option values                                  [ OK ]
7.12: Enable Cross Site Framing prevention                     [SKIPPED]
7.13: Upgrading JDBC drivers                                   [ OK ]
7.14: Recompiling patterns
Started:  Fri Aug 17 10:12:32 BST 2012
Estimate: 15 minutes to 35 minutes
Finished: Fri Aug 17 10:49:10 BST 2012
Actual:   about 35 minutes                                     [ OK ]
7.15: Copy Windows Proxy files to download location            [ OK ]
7.16: Copying TKU bundle to release location                   [ OK ]
7.17: Check Discovery scripts                                  [ OK ]
7.18: Check for deprecated attributes                          [WARNING]
Possible use of deprecated attributes has been found
7.19: Checking custom reports                                  [WARNING]
Custom reports using removed or deprecated attributes or kinds
7.20: Check if ADDM services are running                       [ OK ]
7.21: Update IPv4 firewall                                     [ OK ]
7.22: Restart IPv4 firewall                                    [WARNING]
Failed to restart IPv4 firewall, probably due to kernel update
7.23: Disabling IPv6 support                                   [ OK ]
7.24: Audit upgrade end                                        [ OK ]
15. The software upgrade process is now complete. If any further steps are required as part of the application software upgrade, in this case re-baselining Tripwire, you are informed now.
==============================================================================
STAGE 8: Post Upgrade Task Summary
==============================================================================
One or more of the export mapping sets have changed in the new version of ADDM.
The Discovery scripts have not been upgraded. You must manually upgrade them
yourself using the Administration > Discovery Platforms UI. Please do this
before initiating any further scans.
Possible use of deprecated attributes has been found.
Custom reports using removed or deprecated attributes or kinds.
This task summary can be found in /usr/tideway/log/postupgrade_9.0_TODO.log
16. If any further steps are required as part of the operating system upgrade, in this case rebooting the system after a kernel upgrade, you are informed now, before the script exits. The appliance is now running BMC Atrium Discovery version 9.0.
==============================================================================
STAGE 9: Post Operating System Task Summary
==============================================================================
The Kernel has been upgraded. The system MUST be rebooted.
Changes have been made to turn IPv6 support off. The system MUST be rebooted.
Operating System task summary can be found in
/usr/tideway/log/postosupgrade_5.12.08.53-283372_TODO.log
==============================================================================
Upgrade complete - Fri Aug 17 10:51:46 BST 2012
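Whether the mandated reboot is still pending can be checked by comparing the running kernel against the newest installed kernel. The helper below is an illustrative sketch, not part of the upgrade tooling:

```shell
# Compare the running kernel with the newest installed kernel version.
# On a real appliance you might call it as:
#   reboot_pending "$(uname -r)" \
#     "$(rpm -q --qf '%{VERSION}-%{RELEASE}.%{ARCH}\n' kernel | sort | tail -1)"
reboot_pending() {
  if [ "$1" = "$2" ]; then
    echo "no reboot needed"
  else
    echo "reboot required"
  fi
}

reboot_pending "2.6.18-53.1.14.el5" "2.6.18-308.el5"   # prints "reboot required"
```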
As well as the notes on this page, you should refer to the postupgrade_9.0_TODO.log file written out by the upgrade script (see STAGE 8 above). This contains tailored advice on the tasks that must be completed on that particular appliance; these must be completed for correct future behavior.
upgrade script wraps it in an information message:
2011-07-25 09:36:46: INFO: FATAL: Could not load /lib/modules/2.6.18-53.1.14.el5/modules.dep: No such file or directory
This is expected behavior and does not indicate a problem with the upgrade.
You must activate the new TKU package, unless a newer TKU package is already activated. For information about the latest available TKU, see the TKU documentation.
Baseline changes
The baseline tool tracks changes to the system configuration from a known baseline. After an upgrade, the appliance configuration will have changed significantly. You should view the baseline page after an appliance upgrade and examine the changes made to the system. When you understand the changes that have been made, you can rebaseline the appliance so that the tool can check for changes from the configuration after upgrading to a newer version of BMC Atrium Discovery.
[tideway@appliance01 ~]$ screen -ls There is a screen on: 23274.pts-0.appliance01 (Detached) 1 Socket in /var/run/screen/S-tideway.
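The detached session shown by screen -ls can then be reattached by name; the session identifier below is taken from the listing above:

```shell
# Reattach the detached upgrade session
screen -r 23274.pts-0.appliance01
```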
The virtual terminal is recovered and you can see how the upgrade is progressing.
2. Download the migration script and copy it to the version 8.3 appliance. You can do this in one operation using wget from a command line on the version 8.3 appliance. In the following example, the version 9.0 appliance is called 90appliance and the 8.3 appliance is called 83appliance. Enter:
[tideway@83appliance ~]$ wget http://90appliance.bmc.com/addm_8_3_migration.sh.gz --2012-08-20 10:22:46-- http://90appliance.bmc.com/addm_8_3_migration.sh Resolving 90appliance... 192.168.1.101 Connecting to 90appliance.bmc.com|192.168.1.101|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 27650 (27K) [application/x-gzip] Saving to: `addm_8_3_migration.sh.gz' 100%[======================================>] 27,650 --.-K/s in 0s
2. Change into the addm_8_3_migration directory and run the migration utility. The example here shows the utility being used to migrate the data over SSH to the backup directory on the version 9.0 appliance.
[root@83appliance tideway]# cd addm_8_3_migration [root@83appliance addm_8_3_migration]# ./tw_addm_8_3_migration --backup-ssh --backup-dir=/usr/tideway/var/backup --host=90appliance.bmc.com --remote-user=tideway
3. You are prompted for a password. The password required is that of the BMC Atrium Discovery system user of the local (8.3) host.
Local ADDM appliance password for user system: <system user password>
4. When the system user password has been accepted, the SSH backup commences. Enter the password of the user specified in the --remote-user option above. In this example, this is the tideway user.
Starting remote backup (SSH) Remote password: <tideway user password> Task: Stopping Services ... Successfully created archive addm_cmdb_sync.tgz Successfully created archive addm_export_adapters.tgz ... Backup datastore Data: Time left 2 mins 15 secs, Processed: 84% ... Task: Starting Services
The backup of the 8.3.x appliance has now been created in the /usr/tideway/var/backup directory of the 9.0 appliance.
Additional notes on backing up to a Windows share
If you back up to a Windows share, you also have to enter the BMC Atrium Discovery system user password at the initial prompt. Then you must enter the password of the remote user on the Windows share. Part of an example session is shown below.
[root@83appliance addm_8_3_migration]# ./tw_addm_8_3_migration --backup-smb --backup-dir=tmp --host=137.72.95.127 --remote-user=tideway --share=tideway Password: <system user password> Starting remote backup (Windows share) Remote Password: <tideway user password for Windows share> Task: Stopping Services ...
If you back up to a Windows share, you must then transfer the backup from the share to the /usr/tideway/var/backup directory of the 9.0 appliance.
Restoring data may take a long time Restoring migrated data takes longer than restoring backed up data as the model must be migrated and the datastore re-indexed.
1. From the Appliance section of the Administration tab, click Backup & Restore.
The Appliance Backup page is displayed.
2. Click the Restore Backup tab.
3. Select Preserve Identity to preserve the identity and the HTTPS configuration of the current (9.0) appliance rather than take on the identity of the backed up (8.3) appliance. Clear the check box to use the identity from the backup.
4. Choose On Appliance from Backup type.
5. Click Shutdown & Restore to start the restore operation. You are asked for confirmation.
6. When the system starts, it still has some migration tasks to complete. One of these is to re-import the version 9.0 taxonomy, after which the services must be restarted. When this is complete, the appliance is running BMC Atrium Discovery version 9.0 with your migrated data.
7. When you first log in to the appliance UI after migrating, you are presented with a details window showing the post-migration task list. This may include custom patterns that require changes in order to work correctly with the new network model introduced in version 9.0. These should be reviewed in the same way as during an upgrade.
Warning Ensure that you deactivate the version 8.3 TKUs rather than deleting them. Deletion of the TKUs results in deletion of the associated nodes.
Any pattern packages that you do not wish to update immediately can be deactivated and will no longer be listed. Alternatively, any pattern packages that are no longer required can be deleted.
Baseline changes
The baseline tool tracks changes to the system configuration from a known baseline. After a migration, the appliance configuration will have changed significantly. You should view the baseline page after an appliance migration and examine the changes made to the system. When you understand the changes that have been made, you can rebaseline the appliance so that the tool can check for changes from the configuration after upgrading to a newer version of BMC Atrium Discovery.
Migration background
The BMC Topology Discovery and BMC Foundation Discovery products were replaced by Tideway Foundation, a product that BMC acquired in October 2009 and subsequently renamed BMC Atrium Discovery and Dependency Mapping (BMC Atrium Discovery). Immediately upon acquisition, BMC started shipping the Tideway product as BMC Atrium Discovery version 8.0. Going forward, BMC recommends that all existing users migrate to version 9.0 from their installations of earlier versions of the product.
For users of Tideway Foundation If you are using a version of Tideway Foundation prior to version 8.0, you do not need to migrate your installation. You simply need to upgrade to the latest version of BMC Atrium Discovery. See Upgrading to version 9.0 for more information. Migration is only required from the BMC Discovery products released before the BMC Software acquisition of Tideway.
Versions 8 and later have a different product architecture from version 7.5 (and its predecessors). As a result, data discovered by these products and synchronized to BMC Atrium CMDB is very different. Two illustrative examples:
Version 7.5 populates a list of products for a host based on the list of installed packages, whereas version 9.0 populates the list of products based on the products it finds running. As a result, the lists are very different, often with little overlap.
Version 7.5 provides a number of CMDB extensions to create classes that it then populates (for example, for J2EE and SAP data). Version 9.0 uses the out-of-box CMDB model for these items, and therefore creates different CIs in the CMDB.
Based on these examples, reconciling these CIs in the CMDB with each other is essentially impossible: they have different structures, different sources, and often different contents. When migrating to version 9.0, a migration approach is needed to ensure that items are not duplicated in BMC Atrium CMDB and that any tools relying on the data in the CMDB can continue to function correctly. For more detail on the differences between the two versions, and how the characteristics of each discovery methodology might impact your migration strategy, see Fundamental differences in discovering configuration data.
version. The following pages describe how to migrate these items to the latest version of BMC Atrium Discovery:
Migrating credentials
Migrating UAD signatures
Migrating scheduled discovery tasks
Note The supported migration path for BMC Atrium Discovery is from version 7.5.01.03 to version 9.0. If you plan to migrate data from earlier versions of BMC Atrium Discovery (or BMC Topology Discovery and BMC Foundation Discovery), you must first upgrade to version 7.5.01.03. For users migrating their mainframe data using BMC Discovery for z/OS version 1.5, you must first upgrade to version 1.6.
Impact on users
It is important to understand that users are going to see significant changes to the data that they work with. Some data will appear differently, and some items will show up in different CMDB classes. BMC recommends that you first migrate your development CMDB environment, both to validate the migration approach in your organization, and to familiarize yourself with the changes that happen to the data in your CMDB. It is important that you educate your users about the changes that they should expect in order to keep them productive immediately after the migration. The migration solutions detailed in the following sections follow this methodology: check the impact on your environment, test, migrate the data, and then validate.
Recommendation After you have determined that you are ready to migrate the data, perform the migration promptly to avoid possible implications of introducing additional data into the CMDB. This will change your original test results, and the resultant inconsistency in the data might cause confusion.
A comparison of the configuration data specified in BMC Atrium Discovery version 7.5 and version 9.0 shows that the two versions use different approaches for specifying a discovery scan. Therefore, the migration of discovery data must account for those differences. The following table summarizes the fundamental elements required to run a discovery scan in each version.

Terminology
BMC Atrium Discovery 7.5: Task. Users specify a discovery by creating a wizard-generated task that defines the scope, method, credentials, and schedule through a sequence of steps.
BMC Atrium Discovery 9.0: Scan. Users run a snapshot (a one-off scan to discover the environment) or perform a scheduled scan that runs periodically. Neither concept begins with creating a task.

What is scanned
BMC Atrium Discovery 7.5: Users create Discovery Domains that group a collection of IP addresses, IP ranges, and subnets on which the task is run. Users can also define what type of Discovery to perform, including the level (for example, Asset, Host) and the scope (for example, Databases, Enterprise Applications, Virtual Systems).
BMC Atrium Discovery 9.0: Users can define what level of Discovery to perform (Full Discovery or Sweep Scan). Regardless of the level used, what is scanned for depends on which TKU patterns have been loaded.

Methods
BMC Atrium Discovery 7.5: Users define access methods (for example, SSH/Telnet, WMI) to define settings for Discovery types.
BMC Atrium Discovery 9.0: Users specify access methods against the credential (ssh, telnet, rlogin, Windows).

Credentials
BMC Atrium Discovery 7.5: Users can define a named set of login credentials. A Discovery Task attempts to use only the specified credentials.
BMC Atrium Discovery 9.0: Credentials are not specified for the Discovery Run; instead, they are specified for a particular IP range (IP address, IP range, or IP address regex). When an IP address is scanned, the credentials that match the IP address are attempted in priority order.

Scheduling
BMC Atrium Discovery 7.5: The schedule is defined as part of the wizard-generated task.
BMC Atrium Discovery 9.0: Enables you to define a schedule for the run or to perform the scan immediately.
The table indicates that each version handles configuration data differently, meaning that the migration of that data must also be handled in a different way. For example, because of the difference between the Discovery types in version 7.5 and the scan level in version 9.0, it is difficult to preserve the types directly in the current version, because all Discovery runs are performed at the full Discovery level. As another example, because BMC Atrium Discovery 9.0 does not contain named collections of IPs, these named addresses, ranges, or subnets do not map one-to-one to scheduled runs. For more information about how BMC Atrium Discovery migrates scheduled tasks from version 7.5 to version 9.0, see Migrating scheduled discovery tasks. Despite these differences in discovery approach, BMC Atrium Discovery 9.0 provides functionality (through both the user interface and a command line interface) that enables the product to translate the 7.5 data and bring it into version 9.0 with no loss of integrity.
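The version 9.0 credential model described above — credentials bound to IP ranges and attempted in priority order when an address is scanned — can be pictured with a short sketch. The record fields, sample ranges, and function name below are hypothetical illustrations, not product internals:

```python
import ipaddress

# Hypothetical credential records: each is bound to an IP range and a priority.
credentials = [
    {"name": "unix-prod", "range": "10.0.0.0/24", "priority": 1},
    {"name": "unix-fallback", "range": "10.0.0.0/16", "priority": 2},
]

def credentials_for(ip):
    """Return the credentials whose range contains ip, highest priority first."""
    addr = ipaddress.ip_address(ip)
    matching = [c for c in credentials
                if addr in ipaddress.ip_network(c["range"])]
    return sorted(matching, key=lambda c: c["priority"])

print([c["name"] for c in credentials_for("10.0.0.5")])
# ['unix-prod', 'unix-fallback']
```

An address outside every configured range simply matches no credentials, which corresponds to a scan that cannot log in to that endpoint.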
Migrating credentials
Users of BMC Atrium Discovery 7.5 who have upgraded to version 9.0 can migrate their discovery credentials using the Credential Migration utility. This utility takes the credentials stored on version 7.5, encrypts them with a user-provided passphrase, and saves the encrypted file to the version 7.5 file system. The encrypted file is then manually uploaded to the 9.0 system, decrypted using the same passphrase, and imported into the Credential Vault.
Migrating WebLogic credentials If you are migrating discovery credentials for WebLogic, note that BMC Atrium Discovery version 9.0 requires host credentials for the WebLogic server. If they are available in version 7.5, the host credentials are migrated. See Discovering WebLogic for information on the credentials required to discover WebLogic servers fully.
Migrating Mainframe credentials If you are migrating discovery credentials for the mainframe, note that BMC Atrium Discovery version 9.0 supports only one port per credential, and so normalizes the version 7.5 credentials to fit this requirement. This may result in more credentials being created than the number that was exported from version 7.5.
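The normalization step can be pictured as splitting each multi-port credential into one credential per port, which is why the migrated count can exceed the exported count. This is an illustrative sketch only; the dictionary fields are hypothetical, not the product's export format:

```python
# Illustrative sketch of the one-port-per-credential normalization:
# each version 7.5 credential listing several ports becomes several
# version 9.0 credentials, one per port.
def normalize_credentials(creds):
    """Split every credential that lists several ports into one per port."""
    result = []
    for cred in creds:
        for port in cred["ports"]:
            result.append(dict(cred, ports=[port]))
    return result

v75_creds = [{"username": "mfuser", "ports": [23, 1023]}]
print(normalize_credentials(v75_creds))
# one exported credential becomes two migrated credentials
```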
The imported Windows domain information and login credentials are held in the credential vault.
For each domain in the BMC Atrium Discovery 7.5 Windows Domains list, the following links are provided:
Register: click this to register an Active Directory Windows proxy with the appliance. The Windows proxy must already be installed on a Windows host in the specified domain. When you click Register, the Add Active Directory Windows proxy window is displayed. Enter the IP address of the Windows proxy and edit the name if required. Click Apply to apply the changes and return to the Credential Migration page.
Download: click this to download an Active Directory Windows proxy installer. You must be using a browser on the machine on which you want to install the Windows proxy. At the end of the Windows proxy installation procedure, select Register Windows proxy with the Appliance you downloaded it from, and click Finish. The Add Active Directory Windows proxy page is displayed. Edit the name if required and add the domains that it will be used to scan.
Ignore: click this to delete the imported domain.
Ungroup: click this to remove this domain from the domains list and put the underlying credentials into the Windows login credentials list.
Each login credential in the BMC Atrium Discovery 7.5 NT Login Credentials list is of the form username for IP_regex, where the IP address or range of addresses is a regular expression. To migrate a version 7.5 credential to version 9.0 for use by a Credential Windows proxy, use the arrows between the two lists to move (or undo moving) the credential across to 9.0. In addition, the following buttons are provided:
Test IP Access: if you are unsure whether 9.0 can already access an endpoint, click this button to determine whether the appliance can log in to the endpoint. A popup window is displayed; enter the IP address and click the test button to test access to that address. See Testing Credentials for a full description of testing IP access.
Add To Domain: if some 7.5 credentials have an implicit domain, they can be assigned to that domain using this button. If that domain is already covered by an AD Windows proxy, the credentials are simply removed; otherwise, the new domain is added to the list of domains to be migrated.
Ignore Credentials: click this button to delete the selected credential or credentials.
Register Credential Windows proxy: individual credentials can only be migrated once a Credential Windows proxy is registered with BMC Atrium Discovery 9.0. Click this button to register a Credential Windows proxy with the appliance. The Windows proxy must already be installed. When you click Register, the Add Credential Windows proxy window is displayed. Enter the IP address of the Windows proxy and edit the name if required. Click Apply to apply the changes and return to the Credential Migration page.
Where ten or fewer credentials of a particular type (UNIX, SNMP, WebLogic, and mainframe) are imported, the IP range for them is set to .*
This section describes an approach to migrating a small number of UAD signatures. For more complex situations you should contact BMC Customer Support.
Module declaration
The module declaration is mandatory and must be the first non-comment line of the pattern module. It specifies the language version tpl 1.5 and the name of the module that this pattern is part of. The convention used here for naming modules is "CompanyName.ProductSuite.ProductName".
tpl 1.5 module BMC.REMEDY.USER;
Pattern name
The pattern declaration provides the pattern name and version, and a description of the pattern, which is displayed when viewing the pattern in the BMC Atrium Discovery user interface. The version number is that of the pattern, not of the product being identified.
pattern BMC_Remedy_Windows_Client 1.0
Metadata
The metadata section contains information on the software identified by the pattern. Enter the products string from the Name field of the Manage the Application Library dialog:
products := "BMC Remedy Windows Client";
Enter the publisher string from the Vendor field of the Manage the Application Library dialog:
publisher := "Remedy";
Overview
The overview section contains additional information on the pattern. Commonly this is restricted to tags which are used to classify the pattern. The tags you enter can be viewed in the BMC Atrium Discovery user interface and can also be searched. Add a UAD_MIGRATION tag to ensure that nodes created from migrated UAD customization files can be tracked.
overview
    tags UAD_MIGRATION;
end overview;
Triggers
The triggers section of the pattern is the beginning of the active part of the pattern. It defines the conditions that cause the body of the pattern to be evaluated. The conditions that fire triggers are typically the creation or modification of data in the datastore. In the version 7.5 UI, look for a process on which to trigger the pattern. From the Processes tab of the Manage the Application Library dialog, select the process with a checked Specifies an Instance check box, and click the Edit Process button. For a process to be suitable to trigger the pattern, it must have the following Process Properties checked:
I want to display applications and communications using this process (Is Interesting)
The process can be executed by a unique application (Is Reliable)
If the process has these properties checked, copy its name, without any file suffix, from the Name field or the Pattern field. Enter this name in the trigger section of the pattern. For example:
on process := DiscoveredProcess where cmd matches windows_cmd "aruser"
Here, a regular expression is specified to search the command used to start the BMC Remedy Windows Client. The regex is prefixed with windows_cmd, which builds an expression suitable for identifying Windows commands by prefixing the given string with "(?i)\b" and suffixing it with "\.exe$". Consequently, you should not copy the suffix from the Name or Pattern field. For example, windows_cmd "aruser" creates the following regular expression:
(?i)\baruser\.exe$
A similar prefix, unix_cmd, is available for UNIX commands; it prefixes the given string with "\b" and suffixes it with "$". Where an application runs on both Windows and UNIX, you should create a separate pattern for each.
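The expansion rules described above can be sketched in Python to show what the resulting regular expressions match (an illustration of the stated rules, not TPL itself):

```python
import re

# Sketch of how the windows_cmd and unix_cmd prefixes expand a command
# name into a regular expression, following the rules described above.
def windows_cmd(name):
    return r"(?i)\b" + name + r"\.exe$"

def unix_cmd(name):
    return r"\b" + name + "$"

print(windows_cmd("aruser"))  # (?i)\baruser\.exe$

# The Windows form matches the full command path, case-insensitively:
print(bool(re.search(windows_cmd("aruser"),
                     r"C:\Program Files\AR System\aruser.exe")))  # True

# The UNIX form anchors the bare command name at the end of the path:
print(bool(re.search(unix_cmd("aruser"), "/usr/bin/aruser")))  # True
```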
Body
The body section is where the main work of the pattern is undertaken, and the datastore is updated with the results of the pattern. Here the process obtained from the triggers section is passed to the model.host function. This returns the host node on which that process is running.
host := model.host(process);
Versioning
In BMC Atrium Discovery 7.5, versioning information is obtained through Windows APIs or, on UNIX, from deployed packages. This information is not recorded in UAD customization files, so you must choose a suitable versioning method for the discovered application. This example shows the use of registry data to determine the version information and installation path.
registry_query := discovery.registryKey(host, 'HKEY_LOCAL_MACHINE\\SOFTWARE\\Remedy\\AR System User\\Version');
Retrieving the registry entry may fail, so we must check that we received a value:

if registry_query and registry_query.value then
    version := registry_query.value;
end if;
The following example shows the use of package data to determine the version information.
// Find any packages that match the regular expressions defined in
// the constants.
packages := model.findPackages(host, package_regexes);

// If any packages were found, use the first one that has a version set
for package in packages do
    if package.version then
        version := package.version;
        break;
    end if;
end for;
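The same first-match logic can be written in Python for illustration; package records are represented here as plain dictionaries, which is an assumption for the sketch:

```python
# Return the version of the first package that has one set, mirroring
# the first-match loop in the TPL example above.
def first_package_version(packages):
    for package in packages:
        if package.get("version"):
            return package["version"]
    return ""

pkgs = [{"name": "remedy-client"},
        {"name": "remedy-client-core", "version": "7.6.04"}]
print(first_package_version(pkgs))  # 7.6.04
```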
The UAD customization files describe processes and applications using tagged entries. The following tags are defined for each PROCESS entry:

NAME: The process name. This is used as the trigger in the TPL pattern. An associated flag records whether the process name is case sensitive (True or False).
SIZE: The size (in bytes) of the process binary.
PORTS: Ports used by the process.
PROPERTIES: Are any properties defined for this process? True or False.
INTERESTING: Is the process considered interesting? True or False. An interesting process is a process whose communications are all considered when identifying and creating relationships. If this is False, do not model this process's communications.
RELIABLE: Is the process considered reliable? True or False. A reliable process is one that provides a positive indication that a specific application is active on a given host. If this is False, do not model or migrate this process.
APPLICATIONS: A list of APPLICATION entries.
APPLICATION: The name of the application that this process is a part of. This value is used to define the type of SI created to represent the application.
MANDATORY: Is the process considered mandatory? True or False. If a process is mandatory, it should be used as the pattern trigger.

The following tags are defined for each APPLICATION entry:

APPLICATION: The application identifier.
INTERESTING: Is the application interesting? True or False. Interesting applications are those applications for which CIs are discovered and stored in the datastore.
HAS_COMMUNICATION: Does this application communicate with other applications or processes? True or False.
LABEL VALUE: A free text label for the application. This value is generally used to populate the name attribute of the Software Instance used to model the application in BMC Atrium Discovery 9.0.
APP_CLASS VALUE: The class of application, for example BMC_Application.
TYPE VALUE: The application type, for example ClusteredApplication.
APP_TYPE VALUE: The type of application (for example, Client or Server). This enables you to decide the correct relationship types to model between the application and the others it communicates with: a client depends on a server (Dependant), whereas a server is depended upon by a client (DependedUpon).
LISTENING_PORT_DETECTION: This information is not used in BMC Atrium Discovery 9.0; ports are not used to trigger patterns.
USE_LISTENING_PORT: This information is not used in BMC Atrium Discovery 9.0; ports are not used to trigger patterns.
USE_ALL_DETECTED_LISTENING_PORT: This information is not used in BMC Atrium Discovery 9.0; ports are not used to trigger patterns.
USE_LISTENING_PORTS_CONFIGURATION_FOR_PORTS_CREATION: This information is not used in BMC Atrium Discovery 9.0; ports are not used to trigger patterns.
MANUFACTURER VALUE: The name of the manufacturer (for example, BMC Software, Inc.). This value is used in the publisher section of the TPL pattern.
DESCRIPTION VALUE: A free text description of the application.
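The Client/Server distinction carried by APP_TYPE can be sketched as a simple lookup. The dictionary and its name below are hypothetical; only the Dependant and DependedUpon relationship terms come from the description above:

```python
# Hypothetical mapping from APP_TYPE to the relationship modelled
# between the application and its communication peers.
RELATIONSHIP_FOR_APP_TYPE = {
    "Client": "Dependant",      # a client depends on a server
    "Server": "DependedUpon",   # a server is depended upon by a client
}

print(RELATIONSHIP_FOR_APP_TYPE["Client"])  # Dependant
print(RELATIONSHIP_FOR_APP_TYPE["Server"])  # DependedUpon
```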
// This is a template pattern module containing a pattern for
// identifying a simple software instance which is considered for UAD
// Migration. It provides some guidelines for converting UAD
// customization to a TKU pattern.
//
// To use this template, you will need the UAD customization files
// 'Applications_User_1.xml' and 'Processes_User_1.xml'. Detailed
// documentation can be found in the UAD Migration documentation.
// The location of the corresponding entities in the UAD customization
// files is provided in nearby comments.
//
// Names surrounded by double dollar signs, like $$pattern_name$$,
// should all be replaced with values suitable for the pattern.
// Text prefixed with //, like these lines, are comments that extend
// to the end of the line.
// Required module declaration. The first non-comment line of a
// pattern module must always have this form. 1.4 here is the version
// of TPL.
tpl 1.4 module $$module_name$$;
// Pattern declaration. 1.0 here is the version of the pattern (not
// the version of the product being identified). The version number
// should be updated if the pattern is modified.
pattern $$pattern_name$$ 1.0
    """
    This is a template pattern for a simple SoftwareInstance based on
    identifying a process.
    This required description block should be replaced with a
    description of the pattern and the product it is identifying. The
    description appears as an attribute on the Pattern node stored in
    the datastore. You can get an additional product description from
    the UAD customization file:
    'Applications_User_1.xml'::APPLICATIONS->APPLICATION->DESCRIPTION->VALUE
    """

// Metadata section containing information about the
// product(s) identified by this pattern. Each entry should be a
// comma-separated list of strings. Without this information,
// BMC_Product will not be correctly created on CMDB Sync.
metadata
    // Applications_User_1.xml::APPLICATIONS->APPLICATION->LABEL->VALUE
    products := $$products$$;
    // Applications_User_1.xml::APPLICATIONS->APPLICATION->MANUFACTURER->VALUE
    publishers := $$publishers$$;
end metadata;

// Required overview section. Some tags must be defined. Tags are
// comma-separated lists of identifiers, e.g.:
//
// overview
//     tags database, example_pattern;
// end overview;
overview
    tags UAD_MIGRATION, $$tags$$;
end overview;

// Constants section used here to declare the type of
// SoftwareInstance being created. You can use the UID attribute
// from the UAD customization files here.
constants
    // Applications_User_1.xml::APPLICATIONS->APPLICATION->UID
    type := "$$type$$";
end constants;

// Required triggers section. The unix_cmd prefix builds a regular
// expression suitable for identifying UNIX commands; windows_cmd
// builds a regular expression suitable for identifying Windows
// commands. For a general regular expression, use regex as the prefix.
triggers
    // Processes_User_1.xml::PROCESSES->PROCESS->NAME->VALUE
    // Also check other attributes, such as whether the process name is
    // case sensitive. With the use of logical operators and regular
    // expressions you can write a more precise trigger. For details on
    // writing triggers with more options, refer to the UAD Migration
    // documentation.
    on process := DiscoveredProcess where cmd matches windows_cmd "$$command$$";
end triggers;

// Required body section. Finds the Host node corresponding to the
// process so its name can be used, then creates or updates a group
// SoftwareInstance that counts the number of instances on the host.
body
    host := model.host(process);

    // This section will guide you in getting values for attributes
    // that UAD used to get automatically from packages (UNIX) or
    // APIs (Windows).
    //
    // Version:
    // UAD gets version information from Windows APIs in the case of
    // the Windows OS. For UNIX, UAD uses information from deployed
    // packages, but in a TKU pattern there are many ways to get this
    // information, such as the Windows registry, configuration files,
    // packages discovered on the machine, and commands.
    // By convention, if a pattern attempts to find version information
    // but fails, the version and product_version attributes of the
    // SoftwareInstance are set to empty strings.
    version := "";

    // Get the registry key
    registry_query := discovery.registryKey(host, "$$registry_key$$");

    // Retrieving the registry entry may fail, so we must check that
    // we received a value. If we did, we have a
    // DiscoveredRegistryValue node, which has a value member.
    if registry_query and registry_query.value then
        version := registry_query.value;
    end if;

    // Set the product_version. Here, we set it to extract the Major and
    // Minor components of the full version, but we could also use a
    // mapping table to generate known marketing version values.
    if version then
        // Attempt to extract a product_version
        product_version := regex.extract(version, regex'(\d+(?:\.\d)?)', raw'\1');
        // If we cannot extract a product_version, make it equal the
        // full version
        if not product_version then
            product_version := version;
        end if;
    end if;

    // Set the user-friendly name attribute
    if product_version then
        name := "%type% %product_version% on %host.name%";
    else
        name := "%type% on %host.name%";
    end if;

    // Create a grouped SoftwareInstance, setting the version attributes
    model.SoftwareInstance(type := type, name := name,
                           version := version,
                           product_version := product_version);

    // You can debug pattern progress and verify with debug statements
    // like the one below. The log can be found at
    // Administration -> Appliance -> Logs -> Log Files
    log.debug("$$StatementstoLog$$");
end body;
end pattern;
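The product_version extraction in the template's body can be illustrated with the equivalent regular expression in Python (a sketch of the logic, not TPL itself): the major component and an optional single-digit minor component are extracted, and the full version string is used as a fallback when no match is found.

```python
import re

# Python equivalent of the product_version extraction above: take the
# major (and optional single-digit minor) component, falling back to
# the full version string when nothing matches.
def extract_product_version(version):
    match = re.search(r"(\d+(?:\.\d)?)", version)
    if match:
        return match.group(1)
    return version

print(extract_product_version("7.6.04"))  # 7.6
print(extract_product_version("beta"))    # beta (no digits, fall back)
```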
For each discovery task with scheduling information attached, migrate the task according to the steps below.
Enter IP information
To enter the IP information from the discovery task:
1. Fully expand the Discovery Domains section of the discovery task in the 7.5 UI. Each range or IP address must be entered into the 9.0 UI as part of a comma-separated list.
a. For Addresses, simply copy the IP addresses as a comma-separated list into the Range field of the Add a New Run dialog.
b. For Ranges, the format must be converted from one that 7.5 understands to the format understood by 9.0. 7.5 uses the format a.b.c.d-e.f.g.h, which represents all IP addresses between a.b.c.d and e.f.g.h inclusive, as though these were 32 bit numbers. 9.0 supports ranges only on individual octets, for example a.b.c.d-e. Converting the 7.5 form into the 9.0 form is relatively straightforward if the range is small: for example, 1.1.1.1-1.1.1.254 becomes 1.1.1.1-254. For larger ranges, you must either include more IP addresses on 9.0 than you did on 7.5, or enter multiple ranges, each covering a different portion of the 7.5 task range. Multiple ranges can be entered by separating them with commas.
It is recommended that, rather than doing a purely mechanical translation, you think about what each task is trying to achieve and translate that into how it would be accomplished in version 9.0. The following screens illustrate an example of how to convert 7.5 discovery domains to range information in 9.0. The first screen shows the 7.5 Discovery domains fully expanded, and the second screen shows how the IP addresses and ranges in the 7.5 domains are entered as comma-separated values in the 9.0-compliant format.
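The range conversion described in step b can be sketched for the simple case where only the final octet differs; wider ranges must be widened or split by hand, as the step notes. The function name is hypothetical and the sketch only handles IPv4 /24-contained ranges:

```python
import ipaddress

# Sketch of the 7.5-to-9.0 range conversion: a.b.c.d-e.f.g.h becomes
# the 9.0 last-octet form a.b.c.d-e when only the final octet differs.
def convert_range(range_75):
    start_s, end_s = range_75.split("-")
    start = ipaddress.ip_address(start_s)
    end = ipaddress.ip_address(end_s)
    if start.packed[:3] == end.packed[:3]:
        return "%s-%d" % (start_s, end.packed[3])
    raise ValueError("range spans more than one /24: enter multiple 9.0 ranges")

print(convert_range("1.1.1.1-1.1.1.254"))  # 1.1.1.1-254
```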
Each option is described in the following table:

No schedule: No equivalent in 9.0. You cannot migrate this option to 9.0, because there is no way of representing a named set of IP addresses that are scheduled only at a later date. Use a snapshot scan in BMC Atrium Discovery version 9.0.
Once: No equivalent in 9.0. Can be reproduced in the monthly scan entry by selecting a single day, start time, and duration. This can only be in the next month; there is no way to schedule a single scan more than one month in the future. The scheduled scan must be manually removed after the scan has occurred.
Every <number of minutes>: No equivalent in 9.0. BMC Software recommends that you replace this option with a daily scan.
Every <number of hours>: No equivalent in 9.0. BMC Software recommends that you replace this option with a daily scan.
Every <number of days>: Can be reproduced in the weekly or monthly scan entry by selecting multiple days. For example, for a discovery task scheduled every 2 days in version 7.5, configure a scheduled weekly scan that runs on Monday, Wednesday, and Friday. This is not an exact reproduction, so you must decide the best days to run the scan.
Weekly on <day>: Can be reproduced exactly in the weekly scan entry, although the start time can only be specified to the nearest hour.
Monthly on <day>: Can be reproduced exactly in the monthly scan entry, although the start time can only be specified to the nearest hour.
Level
Label
Frequency: Select a frequency for the discovery run to be performed; for example, Daily, Weekly, or Monthly. For a weekly discovery run, select the day or days that you want the run to take place by clicking the appropriate button. For a monthly discovery run, select the day or days that you want the run to take place by clicking the appropriate button. Alternatively, select the Scan on radio button and choose one of the following options: First, Second, Third, Fourth, or Last. Additionally, select the day that you want the scan to take place; in this way you can select, for example, the second Tuesday of the month.
Start time: Select a time for the scan to start. Discovery must be running at the time specified.
Duration: Select a duration in hours. This is the length of the scan window and can be from 1 to 23 hours.
Considerations for migrating data for population into BMC Atrium CMDB
As described in Migrating from version 7.5 to 9.0, it is not feasible to reconcile all of the CIs written to BMC Atrium CMDB by BMC Atrium Discovery 7.5 with those written by BMC Atrium Discovery 9.0. The migration approach detailed in the following sections makes it easier to delete the CIs of other classes that were populated by version 7.5 and to permit version 9.0 to repopulate them.
A migration utility is provided to make the migration process more seamless and effective: the utility runs through discovered data and identifies the data that is necessary to your estate, ensuring that your CMDB holds only the information that you need to manage. All that differs is the way you customize your data after reconciliation, based on the type of consuming applications used. For ITSM applications, customization involves ensuring that hidden CIs are not used again in the future, so that you maintain only the data that helps drive root cause analysis through your technical infrastructure models. For more information, see Hide obsolete CIs to prevent them from being used by other applications. For SIM, customization refers to updating your service models to ensure proactive operations. For more information, see Rebuild your service model. The combination of the functionality in the migration utility and your own analysis and data customization results in an effective strategy that ensures that your CMDB includes only reliable and necessary data.
where options are any of the commands described in the following table:

(no option): Displays a plain text report that details the CI associations currently residing in BMC Atrium CMDB.
-h, --help: Displays the required use of the utility and describes the available options.
-N, --noeol
-A, --company
-C, --noeol
-F, --full
-M, --migrate: Performs the data migration in the BMC.IMPORT.TOPO dataset according to the CI associations that the utility finds.
-d, --delete
-i, --ignore
-l, --logdir
-u, --username
-p, --password
-P, --port
User example
In the following example, you specify that the migration tool generates a full report on a CMDB located at the IP 10.2.3.4 using the default dataset BMC.IMPORT.TOPO, with a user named Demo and an empty password:
./tw_cmdb_addm75_migration -F 10.2.3.4
The utility displays a set of progress messages while it reports on CI associations, enabling you to follow along as the tool runs through the configuration. For example:
Running in reporting mode
Report will include full CI details
Connecting to CMDB: 10.2.3.4 with primary dataset BMC.IMPORT.TOPO
Connecting to dataset BMC.IMPORT.TOPO
Connecting to dataset BMC.ASSET
Getting Topo CIs
Found 12245 Topo CIs
12245/12245 ReconIds retrieved
12245/12245 Topo CIs processed
The information returned by the report enables you to view and understand the current CI associations affecting your migration scenario, and to evaluate the impact on your environment before the data is altered in your BMC Atrium CMDB. Based on this information, you can then commit the reported changes to the CMDB and run a Reconciliation Engine (RE) job to ensure that the CI data is reconciled into the BMC.ASSET dataset. You can run the utility as many times as necessary, enabling you to analyze and commit changes in an iterative fashion until you are comfortable with the result of the migrated data.
Turn off synchronization between BMC Atrium Discovery 7.5 and the BMC.IMPORT.TOPO dataset
Before proceeding with the migration, the CI data from BMC Atrium Discovery 7.5 should no longer be synchronizing to the staging dataset. Disable synchronization so that CIs are no longer being pushed from version 7.5 to the CMDB. You cannot have the same CI being populated by both versions of the product at the same time.
A text file is generated that lists all CIs that have active associations with SIM, AE, ITSM, and so forth. The report also contains the list of CIs that are not associated with any of these applications and that will therefore be soft deleted.
2. Review the CIs that are not part of service models and are planned for deletion, and start evaluating which CIs you will require to replace the deleted CIs when you rebuild the service models. For an example report, see Sample report.
Note Running the utility with the --migrate option generates a report in addition to executing the changes in the BMC.IMPORT.TOPO dataset. No changes are made to the BMC.ASSET dataset.
The migration utility identifies which BMC Atrium Discovery 7.5 data is part of a service model and which data is unnecessary, and marks the CIs that are not used in a service model for deletion when they are populated to the CMDB. Unnecessary items are CIs that match all of the following conditions:
They were populated by BMC Atrium Discovery 7.5
They are not associated with any ITSM item: incident, problem, change, or contract
They do not have any audit history
They have not been reconciled with any other data source (this ensures that BMC Atrium Discovery does not delete items that another data source has an interest in)
They are not part of a service model
They do not have any non-automatic relationships to non-7.5 CIs
Sample report
The following section of a report illustrates the CI details reported by the migration utility that are relevant to impact models.
96 Affected SIM Service Models
__________________________________________________________________
The following SIM Service Models use ADDM 7.5 CIs and need to be manually repaired
__________________________________________________________________
---------------------------------------------->
InstanceId   : OI-3E608C0AB10543128E362E28B2477C5B
ReconId      : OI-1782B46513454F2494EA505ED655D26A
Dataset      : BMC.IMPACT.PROD
Is a Topo CI : No
ClassName    : BMC_ORGANIZATION
Name         : Alok_Model
ShortDesc    : n/a
-->
InstanceId   : OI-E63ADE85697D4BC3A167A5DC52D0E559
ReconId      : OI-35BBD620E4A84C1499648B8BD1C66E6B
Dataset      : BMC.IMPACT.PROD
Is a Topo CI : No
ClassName    : BMC_ACTIVITY
Name         : Alok_Activity
ShortDesc    : n/a
-->
InstanceId   : OI-71EA5BE0993B4E3D93814E6F4EBF9C09 **
ReconId      : OI-86FCD31B5D09469F9D62B7ABFF3B4CD9
Dataset      : BMC.IMPACT.PROD
Is a Topo CI : Yes
ClassName    : BMC_SOFTWARESERVER
Name         : mgcbr1s07:BEA Weblogic Application Server:-245120096
ShortDesc    : BEA Weblogic Application Server : examplesServer
------------------------------------------------
Turn off synchronization between BMC Atrium Discovery 7.5 and the BMC.IMPORT.TOPO dataset
Before proceeding with the migration, CI data from BMC Atrium Discovery 7.5 must no longer be synchronized to the staging dataset. Disable synchronization so that CIs are no longer pushed from version 7.5 to the CMDB. The same CI cannot be populated by both versions of the product at the same time.
A text file is generated that lists all CIs that have active associations with SIM, AE, ITSM, and so forth. The report also lists the CIs that are not associated with any of these applications and are therefore to be soft deleted. 2. Review the report to understand the impact on your ITSM applications when the data is migrated. All CIs that are not actively associated with other applications will be set for soft deletion, and all CIs that are will be set to AssetLifecycleStatus = End of Life in the BMC.IMPORT.TOPO dataset.
Note CIs are flagged as End of Life to enable migration of data in BMC Atrium Discovery, not because the asset itself is being retired.
Turn off synchronization between BMC Atrium Discovery 7.5 and the BMC.IMPORT.TOPO dataset
Before proceeding with the migration, CI data from BMC Atrium Discovery 7.5 must no longer be synchronized to the staging dataset. Disable synchronization so that CIs are no longer pushed from version 7.5 to version 8.3. The same CI cannot be populated by both versions of the product at the same time.
Note Running the utility with the --migration option generates a report in addition to executing the changes in the BMC.IMPORT.TOPO dataset. No changes are made to the BMC.ASSET dataset.
A text file is generated that lists all CIs that are and are not actively associated with any other applications or services, and the utility performs the corresponding data changes in the BMC.IMPORT.TOPO dataset. This process marks unnecessary CIs for deletion so that they will not be used in the future. For ITSM, unnecessary items are CIs that match all of the following conditions:
- They were populated by BMC Atrium Discovery 7.5
- They are not associated with any ITSM item: incident, problem, change, or contract
- They do not have any audit history
- They have not been reconciled with any other data source (this ensures that BMC Atrium Discovery does not delete items that another data source has an interest in)
For all CIs that have either audit history or at least one ITSM item associated with them, the migration utility sets the AssetLifecycleStatus attribute to End of Life. This prevents additional ITSM items from being associated with these CIs, thereby preventing them from being used to create new incidents in ITSM.
Verify CIs have been set to End of Life in BMC Atrium CMDB
Next, as a check on how well the migration process has completed, you can verify which CIs have been set to End of Life in BMC Atrium CMDB. To perform this verification:
1. In the BMC Remedy Action Request System Server (AR System Server) User tool, open the BMC_BaseElement form.
2. Set the AssetLifecycleStatus attribute to End of Life.
3. Click Search. This returns a list of all End of Life CIs in the BMC.ASSET dataset, which you can compare with what the utility reported and changed.
Hide obsolete CIs to prevent them from being used by other applications
The migration from BMC Atrium Discovery 7.5 to BMC Atrium Discovery 8.3 entails running a script that changes the status of all CIs discovered with BMC Atrium Discovery 7.5 to End of Life. New CIs discovered with BMC Atrium Discovery 8.3 are created corresponding to each CI discovered with BMC Atrium Discovery 7.5. Any old CIs that were not related to any Incident, Contract, Change, or so forth are also Marked As Deleted. CIs that are Marked As Deleted do not show up in ITSM CI search dialogs, so they cannot be related to an Incident. However, old CIs already related to an Incident, Contract, Change, or so forth before the migration cannot simply be Marked As Deleted, because they are associated with an active ticket; their status is instead changed to End of Life.

After migrating, relationships should only be made to the new CIs discovered by BMC Atrium Discovery 8.3. Typically an ITSM user would not choose to relate an Incident, Contract, Change, or so forth to an old CI, because the "CI Relationship Search" form used for this purpose clearly shows its CI Status of "End of Life". To ensure that this occurs, you can modify the behavior of the "CI Relationship Search" form (AST:CI Association Search) so that it does not display any CI that has been marked as End of Life (AssetLifecycleStatus = "End of Life") and originates from a BMC Atrium Discovery 7.5 dataset (AttributeDataSourceList LIKE "%BMC.IMPORT.TOPO%"). To do this:
1. In the left pane of BMC Remedy Developer Studio, expand All Objects, and then double-click Forms.
2. Double-click the AST:BaseElement entry. The form is displayed in edit mode. Right-click the form and select Add Fields from BMC.CORE:BMC_BaseElement.
The Add Fields dialog displays all fields that can be added to the form.
3. Sort the fields by Name, find and select AttributeDataSourceList, and click OK.
A new field is displayed at the top-left corner of the form.
4. Drag the field into some free space (for example, under the Status Reason field), and then save the modified form.
5. In the left pane of BMC Remedy Developer Studio, expand "All Objects" and double-click on "Forms". 6. Double-click the AST: CI Associations Search entry. The Form is displayed in edit mode.
7. Scroll down to the z2TH_ConfigurationItem table (the only table on the form) and select it. The properties of the table are displayed in the right pane.
8. In the Properties pane, under Attributes, Tree/Table, click the "25 Column(s)" value, and then click the "..." button.
9. Modify the default qualification EXTERNAL($z1D_Qualification$) by appending the following:
AND ( NOT (('AttributeDataSourceList' LIKE "%BMC.IMPORT.TOPO%") AND ('AssetLifecycleStatus' = "End of Life")))
10. Click OK, and then save the modified form. When you have completed this, BMC Atrium Discovery 7.5 CIs that were set to End of Life are not displayed in the result set of the AST:CI Associations Search form (CI Relationship Search), which is used to relate a CI.
Sample report
The following section of a report illustrates the CI details reported by the migration utility that are relevant for ITSM applications.
############################################################
Active ITSM records
__________________________________________________________________
These Topo (ADDM 7.x) CIs will not be deleted as they are related to one or more ITSM records
__________________________________________________________________
InstanceId   : OI-F2ADC2D11E1A434EAAD3479C7A199502
ReconId      : OI-FE4E21C678394A208A2E17DEF161C855
Dataset      : BMC.ASSET
Is a Topo CI : Yes
ClassName    : J2EE:BMC_J2EEAPPLICATIONSERVER
Name         : medrec:10.128.88.20:7011:mgcbr1s07:MedRecServer
NameFormat   : DomNm:AdmSrv.IP:AdmSrv.Port:NdNm:AppSrvNm
ShortDesc    : MedRecServer
Associations :
    Type       : Incident
    CMDB Form  : AST:J2EEApplicationServer
    ITSM ReqId : INC000000000003
    ITSM Form  : HPD:Help Desk
------------------------------------------------------------------
InstanceId   : OI-06726C3DBA364E02866B008A0525B62A
ReconId      : OI-53417A05E9814BC7B8345DE73ED91CEF
Dataset      : BMC.ASSET
Is a Topo CI : Yes
ClassName    : J2EE:BMC_J2EECLUSTER
Name         : medrec:10.128.88.20:7011:DefaultCluster
NameFormat   : DomNm:AdmSrv.IP:AdmSrv.Port:ClstNm
ShortDesc    : DefaultCluster
Associations :
    Type       : Known Error
    CMDB Form  : AST:J2EECluster
    ITSM ReqId : PKE000000000005
    ITSM Form  : PBM:Known Error
------------------------------------------------------------------
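Reports in this fixed "Label : Value" layout are straightforward to post-process. As a sketch (the inline report text below is a trimmed stand-in for a real report file, not actual utility output), the following shell fragment counts the reported CIs and lists their classes:

```shell
# Trimmed stand-in for a migration-utility report; real reports contain
# many more fields per CI (see the sample above).
report='InstanceId : OI-F2ADC2D11E1A434EAAD3479C7A199502
ClassName : J2EE:BMC_J2EEAPPLICATIONSERVER
InstanceId : OI-06726C3DBA364E02866B008A0525B62A
ClassName : J2EE:BMC_J2EECLUSTER'

# One "ClassName : ..." line per CI, so counting them counts the CIs
classes=`echo "$report" | awk -F' : ' '/^ClassName/ { print $2 }'`
count=`echo "$classes" | wc -l | tr -d ' '`
echo "$count CIs reported"
```

The same approach extends to any of the report fields, such as grouping by Dataset or filtering on "Is a Topo CI".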
Model changes
The model has been changed significantly to cater for IPv6, complex network configurations, and SNMP managed devices. The model has also been made more consistent, harmonizing the approach taken across root nodes.
Network model changes: The network model has been changed to cater for IPv6 and complex configurations such as bonded interfaces. In previous versions of BMC Atrium Discovery, a NetworkInterface node was created for each interface/IP address pair, so where an interface had more than one IP address, multiple NetworkInterface nodes were created. In IPv6, where each interface has three IP addresses by default (IPv4, IPv6 global, and IPv6 link local), the older approach is inefficient. Similarly, modeling bonded interfaces required multiple NetworkInterface nodes with the same IP address. The new model introduced with BMC Atrium Discovery version 9.0 removes IP address information from the NetworkInterface node and introduces a separate IPAddress node. To model an interface using IPv6, a single NetworkInterface node is used, with relationships to three IPAddress nodes, one each for the IPv4 address, the IPv6 global address, and the IPv6 link local address. To model a bonded interface, multiple NetworkInterface nodes are used, each with a relationship to the single shared IPAddress node.
- The IPAddress node has been added.
- Subnet nodes are now linked to IPAddress nodes rather than NetworkInterface nodes.
- The PortInterface node has been removed. Attributes previously held on the PortInterface node, such as speed, duplex, negotiation, and MAC address, are now part of the NetworkInterface node.
SNMP managed devices: A new node type called SNMPManagedDevice represents a device that BMC Atrium Discovery does not have sufficient knowledge of to fully discover, or that it may be inappropriate to discover (for example, a UPS). It may be a switch, router, or SNMP-enabled coffee machine for which support has not yet been implemented.
Attribute changes: In previous versions of BMC Atrium Discovery, the type attributes on root nodes (Host, NetworkDevice, Printer, and MFPart) were inconsistent.
As part of the model changes introduced in BMC Atrium Discovery 9.0 these have been harmonized, including the addition of a new root node type, SNMPManagedDevice. For each root node kind, the list below shows the previous type attribute, the 9.0 type attributes, and what happens to them at upgrade or migration.

Node: Host
    Previous type attribute: host_type
    9.0 type attributes: host_type (deprecated) and type
    Upgrade/migration behavior: none for host_type; type is copied from host_type. Note: both attributes are populated.

Node: NetworkDevice
    Previous type attribute: device_type
    9.0 type attributes: device_type (deprecated) and type
    Upgrade/migration behavior: none for device_type; type is always set to "Network Infrastructure", which is used on device_summary. Both attributes are populated.

Node: Printer
    Previous type attribute: device_type
    9.0 type attribute: type
    Upgrade/migration behavior: rename. The device_type attribute was not previously in the taxonomy.

Node: SNMPManagedDevice
    9.0 type attribute: type
    Upgrade/migration behavior: none; this node is new at this release.
Attribute changes: A new utility, tw_tax_deprecated, is provided to detect attributes deprecated in the changes described above. It runs as part of the upgrade but can also be run as a standalone utility.
DDD aging logic improved: The logic determining how DDD is aged has been changed. In the user interface, the list of DiscoveryAccesses on inferred entities now displays a trash can icon next to DiscoveryAccesses that are about to be deleted. The corresponding DiscoveryAccess page now shows a banner stating that the node is to be removed shortly as part of DDD aging. The inferred entities are: Host, Network device, Printer, SNMP managed device, Mainframe, and MF Part. DDD aging is faster than in previous releases and has a lower impact on scanning performance. Consider reviewing existing DDD removal blackout windows when upgrading or migrating to this version.
The following kinds of change are not listed:
- Entirely new discovery platforms
- Changes to comments only
- Commands which have been removed and not replaced
- Changes to echo only statements
AIX
getHostInfo
The following code:
# Extract CPU info from lparstat
num_physical_processors=`egrep '^Active Physical CPUs in system' /tmp/tideway.$$ | cut -f2 -d:`
num_logical_processors=`egrep '^Online Virtual CPUs' /tmp/tideway.$$ | cut -f2 -d:`
is replaced with:
# Extract CPU info from lparstat
num_logical_processors=`egrep '^Online Virtual CPUs' /tmp/tideway.$$ | cut -f2 -d:`
is replaced with:
echo "cpu_threading_enabled:" $smt_enabled
fi
if [ "$num_logical_processors" != "" ]; then
    echo "num_logical_processors:" $num_logical_processors
fi
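The egrep/cut pipeline above isolates a labelled value from captured lparstat output. A small sketch of how that parsing behaves; the sample text is an assumed stand-in for the /tmp/tideway.$$ capture file, not real lparstat output:

```shell
# Assumed sample of lparstat output as captured to a temp file by the script
lparstat_out='Active Physical CPUs in system : 8
Online Virtual CPUs : 4'

# Same extraction as getHostInfo: match the labelled line, keep everything
# after the colon (cut leaves a leading space; unquoted echo strips it)
num_logical_processors=`echo "$lparstat_out" | egrep '^Online Virtual CPUs' | cut -f2 -d:`
echo "num_logical_processors:" $num_logical_processors
```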
Linux
getHostInfo
The following code is added:
if [ -d /sys/class/dmi/id ]; then
    echo "candidate_model[]: " `cat /sys/class/dmi/id/product_name 2>/dev/null`
    echo "candidate_serial[]: " `PRIV_CAT /sys/class/dmi/id/product_serial 2>/dev/null`
    echo "candidate_uuid[]: " `PRIV_CAT /sys/class/dmi/id/product_uuid 2>/dev/null`
    echo "candidate_vendor[]: " `cat /sys/class/dmi/id/sys_vendor 2>/dev/null`
fi
getNetworkInterfaces
is replaced with:
ifconfig -a 2>/dev/null
if [ $? -eq 0 ]; then
    ETHTOOL=""
    if [ -f /sbin/ethtool ]; then
is replaced with:
echo 'begin details:'
for i in `ifconfig -a 2>/dev/null | egrep '^[a-z]' | awk -F: '{print $1;}'`
do
is replaced with:
done
echo 'end details:'
fi
HP-UX
getMACAddresses
The following code:
lanscan -am
is replaced with:
lanscan | grep ETHER | awk '{print $2, $8;}'
VMware ESX
getHostInfo
The following code is added:
if [ -d /sys/class/dmi/id ]; then
    echo "candidate_model[]: " `cat /sys/class/dmi/id/product_name 2>/dev/null`
    echo "candidate_serial[]: " `PRIV_CAT /sys/class/dmi/id/product_serial 2>/dev/null`
    echo "candidate_uuid[]: " `PRIV_CAT /sys/class/dmi/id/product_uuid 2>/dev/null`
    echo "candidate_vendor[]: " `cat /sys/class/dmi/id/sys_vendor 2>/dev/null`
fi
getNetworkInterfaces
The following code:
ifconfig -a 2>/dev/null
ETHTOOL=""
is replaced with:
ifconfig -a 2>/dev/null
if [ $? -eq 0 ]; then
    ETHTOOL=""
is replaced with:
echo 'begin details:'
for i in `ifconfig -a 2>/dev/null | egrep '^[a-z]' | awk -F: '{print $1;}'`
do
Solaris
getHostInfo
The following code is added:
if [ -x /usr/sbin/virtinfo ]; then
    # LDOM support for Solaris 11 in system/core-os, and Solaris 9/10 in SUNWcsu
    echo 'begin virtinfo:'
    /usr/sbin/virtinfo -ap 2>/dev/null
    echo 'end virtinfo:'
fi
AIX
getHBAList
The following code:
do
    adapter_name=`echo $i | cut -f1 -d:`
    adapter_status=`echo $i | cut -f2 -d:`
    adapter_type=`echo $i | cut -f3 -d:`
    if [ $adapter_status = "Available" ]; then
        echo begin lscfg-vl-$adapter_name:
        lscfg -vl $adapter_name
is replaced with:
do
    adapter_name=`echo $i | cut -f1 -d:`
    adapter_status=`echo $i | cut -f2 -d:`
    adapter_type=`echo $i | cut -f3 -d: | sed -e 's/,/./'`
    if [ $adapter_status = "Available" ]; then
        echo begin lscfg-vl-$adapter_name:
        lscfg -vl $adapter_name
Mainframe
The following method is added: getTransactionProgram
HP-UX
The following method is added: getHBAList
initialise
The following code is added:
# fcmsutil requires superuser privileges to display any HBA information
PRIV_FCMSUTIL() {
    "$@"
}
Solaris
The getInterfaceList method is replaced with the following methods: getMACAddresses, getNetworkInterfaces, and getIPAddresses.
getProcessList
The following code:
if [ `uname -r | cut -f2 -d.` -ge 10 ] && [ -x /usr/bin/zonename ]; then
    zone=`/usr/bin/zonename 2>/dev/null`
    ps -eo pid,ppid,uid,user,zone,args 2>/dev/null | awk "\$5~/^($zone|ZONE)$/ {print}"
else
    ps -eo pid,ppid,uid,user,args 2>/dev/null
fi
is replaced with:
os_ver=`uname -r | cut -d. -f2`
if [ $os_ver -ge 10 -a -x /usr/bin/zonename ]; then
    zone=`/usr/bin/zonename 2>/dev/null`
    ps -eo pid,ppid,uid,user,zone,args 2>/dev/null | awk "\$5~/^($zone|ZONE)$/ {print}"
else
    ps -eo pid,ppid,uid,user,args 2>/dev/null
fi
is replaced with:
if [ $os_ver -ge 11 ]; then
    PRIV_PS /usr/bin/ps axww 2>/dev/null
else
    if [ -x /usr/ucb/ps ]; then
        PRIV_PS /usr/ucb/ps -axww 2>/dev/null
    fi
fi
if [ -x /usr/bin/pargs -a -d /proc ]; then
    echo begin pargs:
    PRIV_PARGS /usr/bin/pargs `ls -1 /proc` 2>/dev/null
    echo end pargs:
fi
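The replacement code keys its behavior off the Solaris release. On Solaris, `uname -r` reports the SunOS release (5.8, 5.10, 5.11, and so on), so taking the second dot-separated field yields the version number the script compares against. A sketch with fixed strings standing in for live `uname -r` output:

```shell
# Fixed strings stand in for `uname -r` on Solaris 8, 10, and 11
for release in 5.8 5.10 5.11; do
    os_ver=`echo $release | cut -d. -f2`
    echo "uname -r $release -> os_ver $os_ver"
done
```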
getNetworkConnectionList
The following code is added:
netstat -an -f inet6 2>/dev/null | grep -v '^ *\*\.\*'
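The trailing grep -v drops the all-wildcard `*.*` entries that Solaris netstat prints for unbound endpoints, keeping only sockets with a real local or remote address. A sketch on two assumed sample lines (not real netstat output):

```shell
# First line mimics an unbound IPv6 endpoint, second an established one
sample='      *.*                            *.*              0 IDLE
 ::1.22                          ::1.51234        0 ESTABLISHED'

# Same filter as getNetworkConnectionList: drop lines whose local
# address is the wildcard *.*
kept=`echo "$sample" | grep -v '^ *\*\.\*'`
echo "$kept"
```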
getHostInfo
The following code:
showrev 2>/dev/null | nawk -F: '/^Kernel version/ {gsub("^ *", "", $2); print "kernel:", $2; exit}'
echo 'model:' `uname -i 2>/dev/null`
/usr/sbin/prtconf 2>/dev/null | nawk '/^Memory size:/ {print "ram: " $3 "MB"}'
is replaced with:
os_ver=`uname -r | cut -d. -f2`
if [ $os_ver -ge 11 ]; then
    pkg list -H --no-refresh system/kernel | awk '{print "kernel:", $2; exit}'
else
    showrev 2>/dev/null | nawk -F: '/^Kernel version/ {gsub("^ *", "", $2); print "kernel:", $2; exit}'
fi
echo 'model:' `uname -i 2>/dev/null`
/usr/sbin/prtconf 2>/dev/null | nawk '/^Memory size:/ {print "ram: " $3 "MB"}'
is replaced with:
fi
echo 'hostid:' `hostid`
if [ -r /etc/ssphostname ]; then
    # E10K support
    echo 'ssphostname:' `cat /etc/ssphostname`
fi
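On Solaris 11, `pkg list -H` prints one line per package (name, version, flags), and the awk expression above reports the version field as the kernel version. A sketch with an assumed pkg list output line (the version string is illustrative, not from a real system):

```shell
# Assumed `pkg list -H --no-refresh system/kernel` output line
pkg_line='system/kernel  0.5.11-0.175.1.0.0.24.2  i--'

# Same awk as the replacement code: print the second field as the kernel
kernel=`echo "$pkg_line" | awk '{print "kernel:", $2; exit}'`
echo "$kernel"
```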
getPackageList
The following code:
pkginfo -l 2>/dev/null
is replaced with:
os_ver=`uname -r | cut -d. -f2`
if [ $os_ver -ge 11 ]; then
    echo begin pkg_list:
    echo arch: `uname -p`
    pkg list -H --no-refresh 2>/dev/null
    echo end pkg_list:
fi
PKGINFO=`tw_which pkginfo`
if [ ! -z "$PKGINFO" -a -x $PKGINFO ]; then
    pkginfo -l 2>/dev/null
fi
initialise
The following code:
ls "$@"
}
is replaced with:
# This function supports privilege listing of file systems and related
# size and usage.
PRIV_DF() {
    "$@"
}
tw_which() {
    SAVE=$IFS
    IFS=:
    "$@"
}
is replaced with:
# dladm requires superuser privileges to display speed, duplex settings, and
# port aggregation information
PRIV_DLADM() {
    "$@"
}
# ndd requires superuser privileges to display any interface speed
# and negotiation settings
PRIV_NDD() {
    "$@"
}
# By default, the standard ps command on Solaris will truncate command lines
# to 80 characters. This affects Solaris 11, Solaris 10 and Solaris 8 & 9
# with certain patches.
#
# In order to display unlimited command lines, there are several options:
#
# pargs - This tool is available in more recent updates of Solaris 9 and
#     all updates of Solaris 10 and later. This tool requires the
#     proc_owner privilege to display unlimited command lines for all
#     processes.
#
# /usr/bin/ps - On Solaris 11, the standard ps command can display
#     unlimited command lines by using BSD style command line arguments.
#     This still requires that the tool is run with proc_owner privilege.
#
# /usr/ucb/ps - This tool is part of the UCB compatibility package which is
#     usually installed by default on versions up to and including
#     Solaris 10 (SUNWscpu package). This tool requires the
#     proc_owner privilege to display unlimited command lines for all
#     processes.
#
#     On Solaris 11, the compatibility/ucb package is not usually installed
#     by default and in any case, the /usr/ucb/ps command is simply
#     a link to /usr/bin/ps.
#
# In order for the Discovery Condition pattern to correctly detect whether
# ps is being executed with proc_owner privilege, this function must accept
# both the ps command and the ppriv command used by the pattern.
PRIV_PS() {
    "$@"
}
# See comments above PRIV_PS
PRIV_PARGS() {
    "$@"
}
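The PRIV_* functions are plain wrappers: by default they execute their arguments unchanged, and the comments only document which commands need elevated privileges. A sketch of the mechanism; the sudo override shown in the comment is an assumption for illustration, not something the shipped script does:

```shell
# Default wrapper exactly as in the script: execute the arguments as-is
PRIV_PS() {
    "$@"
}

# A privilege-managed deployment could redefine the wrapper instead, e.g.
#   PRIV_PS() { sudo "$@"; }    # hypothetical override, not shipped
# Callers are unchanged either way:
result=`PRIV_PS echo pid ppid args`
echo "$result"
```

Keeping the privilege decision inside one function means every call site (ps, pargs, dladm, and so on) picks up the policy without edits.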
getFileSystems
The following code:
echo begin df:
df -lk 2>/dev/null
echo end df:
echo begin mount:
if [ -f /etc/mnttab ]; then
    cat /etc/mnttab
fi
echo end mount:
echo begin xtab:
if [ -f /etc/xtab ]; then
    cat /etc/xtab
fi
echo end xtab:

is replaced with:

echo begin df:
PRIV_DF df -lk 2>/dev/null
echo end df:
echo begin mount:
if [ -r /etc/mnttab ]; then
    cat /etc/mnttab
fi
echo end mount:
echo begin xtab:
if [ -r /etc/xtab ]; then
    cat /etc/xtab
fi
echo end xtab:
is replaced with:
echo begin smbconf:
configfile=`smbstatus -v 2>/dev/null | grep "using configfile" | awk '{print $4;}'`
if [ "$configfile" != "" ]; then
    if [ -r $configfile ]; then
        cat $configfile
    fi
fi
getDeviceInfo
The following code:
ihn=`hostname 2>/dev/null`
echo 'hostname:' $ihn
if [ -f /etc/resolv.conf ]; then
    echo 'dns_domain:' `awk '/^(search|domain)/ { print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'domain:' `domainname 2>/dev/null`
if [ -f /etc/release ]; then
    echo 'os:' `head -1 /etc/release 2>/dev/null`
else
    echo 'os:' `uname -sr 2>/dev/null`
is replaced with:
ihn=`hostname 2>/dev/null`
echo 'hostname:' $ihn
if [ -r /etc/resolv.conf ]; then
    echo 'dns_domain:' `awk '/^(search|domain)/ { print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'domain:' `domainname 2>/dev/null`
if [ -r /etc/release ]; then
    echo 'os:' `head -1 /etc/release 2>/dev/null`
else
    echo 'os:' `uname -sr 2>/dev/null`
getPatchList
The following code:
showrev -p 2>/dev/null | grep -v "No patches are installed" | cut -c-16 | nawk '{print $2;}'
is replaced with:
os_ver=`uname -r | cut -d. -f2`
if [ $os_ver -lt 11 ]; then
    showrev -p 2>/dev/null | grep -v "No patches are installed" | cut -c-16 | nawk '{print $2;}'
else
    echo NO PATCHES
fi
AIX
The getInterfaceList method is replaced with the following methods: getMACAddresses, getNetworkInterfaces, and getIPAddresses. The host_info command is removed from the getPatchList method. The lsof command is removed from the getPatchList method.
initialise
The following code:
ls "$@"
}
# lslpp requires superuser privileges to list all installed packages
PRIV_LSLPP() {
    "$@"
is replaced with:
ls "$@"
}
# This function supports privilege listing of file systems and related
# size and usage.
PRIV_DF() {
    "$@"
}
# lslpp requires superuser privileges to list all installed packages
PRIV_LSLPP() {
    "$@"
getNetworkConnectionList
The following code:
if [ `uname -v` -ge 6 ]; then
    if [ `uname -W` -eq 0 ]; then
        netstat -an -f inet -@ 2>/dev/null | egrep "Global|Proto" | sed -e 's/Global //'
    else
        netstat -an -f inet 2>/dev/null
    fi
else
    netstat -an -f inet 2>/dev/null
fi
is replaced with:
if [ `uname -v` -ge 6 ]; then
    if [ `uname -W` -eq 0 ]; then
        netstat -an -f inet -@ 2>/dev/null | egrep "Global|Proto" | sed -e 's/Global //'
        netstat -an -f inet6 -@ 2>/dev/null | egrep "Global|Proto" | sed -e 's/Global //'
    else
        netstat -an -f inet 2>/dev/null
        netstat -an -f inet6 2>/dev/null
    fi
else
    netstat -an -f inet 2>/dev/null
    netstat -an -f inet6 2>/dev/null
fi
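On AIX 6 and later, when running in the global environment (`uname -W` returns 0), netstat -@ prefixes each socket line with the owning workload partition; the egrep/sed pair keeps the header and the Global entries and strips the prefix. A sketch on assumed sample lines (the exact netstat -@ column layout is an assumption here):

```shell
# Assumed `netstat -an -f inet -@` style lines: a header, a socket owned
# by the global environment, and a WPAR-local socket to be filtered out
sample='Proto Recv-Q Send-Q  Local Address     Foreign Address   (state)
Global tcp4 0 0  10.0.0.1.22       10.0.0.9.50312    ESTABLISHED
wpar1 tcp4 0 0  10.0.1.5.22       10.0.1.9.50313    ESTABLISHED'

# Keep header and Global lines, then drop the "Global " prefix
filtered=`echo "$sample" | egrep "Global|Proto" | sed -e 's/Global //'`
echo "$filtered"
```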
getDeviceInfo
The following code:
ihn=`hostname 2>/dev/null`
echo 'hostname:' $ihn
if [ -f /etc/resolv.conf ]; then
    echo 'dns_domain:' `awk '/^(domain|search)/ { print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'domain:' `domainname 2>/dev/null`
# VIO support
if [ -f /usr/ios/cli/ios.level ]; then
    viover=`cat /usr/ios/cli/ios.level`
    os="VIO $viover"
else
is replaced with:
ihn=`hostname 2>/dev/null`
echo 'hostname:' $ihn
if [ -r /etc/resolv.conf ]; then
    echo 'dns_domain:' `awk '/^(domain|search)/ { print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'domain:' `domainname 2>/dev/null`
# VIO support
if [ -r /usr/ios/cli/ios.level ]; then
    viover=`cat /usr/ios/cli/ios.level`
    os="VIO $viover"
else
getHBAList
The following code:
if [ $adapter_status = "Available" ]; then
    echo begin lscfg-vl-$adapter_name:
    lscfg -vl $adapter_name
    echo end lscfg-vl-$adapter_name:
    echo begin lslpp-$adapter_name:
    lslpp -L devices.*.$adapter_type.*
is replaced with:
if [ $adapter_status = "Available" ]; then
    echo begin lscfg-vl-$adapter_name:
    lscfg -vl $adapter_name
    echo $? > /dev/null # Prevent terminal issues
    echo end lscfg-vl-$adapter_name:
    echo begin lslpp-$adapter_name:
    lslpp -L devices.*.$adapter_type.*
getFileSystems
The following code:
echo begin df:
if [ -x /usr/sysv/bin/df ]; then
    /usr/sysv/bin/df -lg 2>/dev/null
#else
#    df -k 2>/dev/null
fi
echo end df:
echo begin mount:
mount 2>/dev/null
echo end mount:
echo begin xtab:
if [ -f /etc/xtab ]; then
    cat /etc/xtab
fi
echo end xtab:
is replaced with:
echo begin df:
if [ -x /usr/sysv/bin/df ]; then
    PRIV_DF /usr/sysv/bin/df -lg 2>/dev/null
#else
#    PRIV_DF df -k 2>/dev/null
fi
echo end df:
echo begin mount:
mount 2>/dev/null
echo end mount:
echo begin xtab:
if [ -r /etc/xtab ]; then
    cat /etc/xtab
fi
echo end xtab:
is replaced with:
echo begin smbconf:
configfile=`smbstatus -v 2>/dev/null | grep "using configfile" | awk '{print $4;}'`
if [ "$configfile" != "" ]; then
    if [ -r $configfile ]; then
        cat $configfile
    fi
fi
Windows
The following SSH discovery methods are removed: getProcessList, getProcessToConnectionMapping, getHostInfo, getInterfaceList, and getDirectoryListing.
Mac OS X
The getInterfaceList method is replaced with the following methods: getMACAddresses, getNetworkInterfaces, and getIPAddresses.
getNetworkConnectionList
The following code:
netstat -an -f inet 2>/dev/null
is replaced with:
netstat -an -f inet 2>/dev/null
netstat -an -f inet6 -l 2>/dev/null
initialise
The following code:
PRIV_LS() {
    ls "$@"
}
is replaced with:
PRIV_LS() {
    ls "$@"
}
# This function supports privilege listing of file systems and related
# size and usage.
PRIV_DF() {
    "$@"
}
getDeviceInfo
The following code:
echo "hostname:" `hostname -s 2>/dev/null`
system_profiler SPHardwareDataType SPSoftwareDataType > /tmp/tideway-hw-$$ 2>/dev/null
grep "Computer Name: " /tmp/tideway-hw-$$ | sed 's/^.*: \(.*\)$/description: \1/'
if [ -f /etc/resolv.conf ]; then
    echo 'dns_domain:' `awk '/^(domain|search)/ { print $2; exit }' /etc/resolv.conf`
fi
grep "System Version: " /tmp/tideway-hw-$$ | sed 's/^.*: Mac OS X \(.*\)$/os: Mac OS X \1/'
rm -f /tmp/tideway-hw-$$
echo 'os_arch:' `uname -p 2>/dev/null`
is replaced with:
echo "hostname:" `hostname -s 2>/dev/null`
system_profiler SPHardwareDataType SPSoftwareDataType > /tmp/tideway-hw-$$ 2>/dev/null
grep "Computer Name: " /tmp/tideway-hw-$$ | sed 's/^.*: \(.*\)$/description: \1/'
if [ -r /etc/resolv.conf ]; then
    echo 'dns_domain:' `awk '/^(domain|search)/ { print $2; exit }' /etc/resolv.conf`
fi
grep "System Version: " /tmp/tideway-hw-$$ | sed -e 's/^.*:/os:/'
rm -f /tmp/tideway-hw-$$
echo 'os_arch:' `uname -p 2>/dev/null`
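The change to the System Version line matters because the old expression only matched strings containing the literal text "Mac OS X", while the replacement rewrites everything up to the colon and so passes any version string through. A side-by-side sketch on assumed system_profiler lines (the version strings are illustrative):

```shell
# Old expression: requires "Mac OS X" in the version string
old_out=`echo "      System Version: Mac OS X 10.6.8 (10K549)" | \
    sed 's/^.*: Mac OS X \(.*\)$/os: Mac OS X \1/'`

# New expression: rewrites everything up to the colon, so it also handles
# version strings that do not begin with "Mac OS X"
new_out=`echo "      System Version: OS X 10.9.5 (13F34)" | \
    sed -e 's/^.*:/os:/'`

echo "$old_out"
echo "$new_out"
```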
getFileSystems
The following code:
echo begin df:
df -lk 2>/dev/null
echo end df:
echo begin mount:
mount 2>/dev/null
echo end mount:
echo begin xtab:
if [ -f /etc/xtab ]; then
    cat /etc/xtab
fi
echo end xtab:
is replaced with:
echo begin df:
PRIV_DF df -lk 2>/dev/null
echo end df:
echo begin mount:
mount 2>/dev/null
echo end mount:
echo begin xtab:
if [ -r /etc/xtab ]; then
    cat /etc/xtab
fi
echo end xtab:
is replaced with:
echo begin smbconf:
configfile=`smbstatus -v 2>/dev/null | grep "using configfile" | awk '{print $4;}'`
if [ "$configfile" != "" ]; then
    if [ -r $configfile ]; then
        cat $configfile
    fi
fi
IRIX
The getInterfaceList method is replaced with the following methods: getMACAddresses, getNetworkInterfaces, and getIPAddresses.
initialise
The following code:
PRIV_LS() {
    ls "$@"
}
is replaced with:
PRIV_LS() {
    ls "$@"
}
# This function supports privilege listing of file systems and related
# size and usage.
PRIV_DF() {
    "$@"
}
getNetworkConnectionList
The following code:
netstat -an -f inet 2>/dev/null
is replaced with:
netstat -an -f inet 2>/dev/null
netstat -anW -f inet6 2>/dev/null
getDeviceInfo
The following code:
ihn=`hostname 2>/dev/null`
echo 'hostname:' $ihn
if [ -f /etc/resolv.conf ]; then
    echo 'dns_domain:' `awk '/^(domain|search)/ { print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'os:' `uname -s 2>/dev/null` `uname -R | awk '{print $2;}'`
is replaced with:
ihn=`hostname 2>/dev/null`
echo 'hostname:' $ihn
if [ -r /etc/resolv.conf ]; then
    echo 'dns_domain:' `awk '/^(domain|search)/ { print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'os:' `uname -s 2>/dev/null` `uname -R | awk '{print $2;}'`
getFileSystems
The following code:
echo begin df:
df -lk 2>/dev/null
echo end df:
echo begin mount:
mount 2>/dev/null
echo end mount:
echo begin xtab:
if [ -f /etc/xtab ]; then
    cat /etc/xtab
fi
echo end xtab:
is replaced with:
echo begin df:
PRIV_DF df -lk 2>/dev/null
echo end df:
echo begin mount:
mount 2>/dev/null
echo end mount:
echo begin xtab:
if [ -r /etc/xtab ]; then
    cat /etc/xtab
fi
echo end xtab:
is replaced with:
echo begin smbconf:
configfile=`smbstatus -v 2>/dev/null | grep "using configfile" | awk '{print $4;}'`
if [ "$configfile" != "" ]; then
    if [ -r $configfile ]; then
        cat $configfile
    fi
fi
POWER HMC
The getInterfaceList method is replaced with the following methods: getMACAddresses, getNetworkInterfaces, and getIPAddresses. The netstat command is replaced by hostinfo in the getProcessList method. The lshmc-n command is replaced by monhmc-rdisk in the getPatchList method.
Tru64
The getInterfaceList method is replaced with the following methods: getMACAddresses, getNetworkInterfaces, and getIPAddresses.
getHostInfo
The following code:
echo 'kernel:' `uname -v 2>/dev/null`
echo 'model:' `hwmgr get attr -cat platform -a name 2>/dev/null | grep name | sed 's/name =//'`
echo 'begin tru64_psrinfo:'
psrinfo -v
echo 'end tru64_psrinfo'
is replaced with:
echo 'kernel:' `uname -v 2>/dev/null`
echo 'model:' `PRIV_HWMGR hwmgr get attr -cat platform -a name 2>/dev/null | grep name | sed 's/name =//'`
vmstat -P 2>/dev/null | grep "^Total Physical Memory" | sed -e 's/Total Physical Memory *= */ram: /'
/sbin/consvar -g sys_serial_num 2>/dev/null | sed -e 's/sys_serial_num *= */serial: /'
echo 'begin tru64_psrinfo:'
psrinfo -v
echo 'end tru64_psrinfo'
initialise
The following code:
ls "$@"
}
# setld requires superuser privileges to display information on packages
PRIV_SETLD() {
    "$@"
}
is replaced with:
ls "$@"
}
# This function supports privilege listing of file systems and related
# size and usage.
PRIV_DF() {
    "$@"
}
# setld requires superuser privileges to display information on packages
PRIV_SETLD() {
    "$@"
}
# hwmgr requires superuser privileges to get hardware component information
PRIV_HWMGR() {
    "$@"
}
getDeviceInfo
The following code:
ihn=`hostname 2>/dev/null`
echo 'hostname:' $ihn
if [ -f /etc/resolv.conf ]; then
    echo 'dns_domain:' `awk '/^(domain|search)/ { print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'os:' `uname -sr 2>/dev/null`
is replaced with:
ihn=`hostname 2>/dev/null`
echo 'hostname:' $ihn
if [ -r /etc/resolv.conf ]; then
    echo 'dns_domain:' `awk '/^(domain|search)/ { print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'os:' `uname -sr 2>/dev/null`
getFileSystems
The following code:
echo begin df:
df -k -t nonfs,nfsv3 2>/dev/null
echo end df:
echo begin mount:
mount 2>/dev/null
echo end mount:
echo begin xtab:
if [ -f /etc/xtab ]; then
    cat /etc/xtab
fi
echo end xtab:
is replaced with:
echo begin df:
PRIV_DF df -k -t nonfs,nfsv3 2>/dev/null
echo end df:
echo begin mount:
mount 2>/dev/null
echo end mount:
echo begin xtab:
if [ -r /etc/xtab ]; then
    cat /etc/xtab
fi
echo end xtab:
is replaced with:
echo begin smbconf:
configfile=`smbstatus -v 2>/dev/null | grep "using configfile" | awk '{print $4;}'`
if [ "$configfile" != "" ]; then
    if [ -r $configfile ]; then
        cat $configfile
    fi
fi
OpenVMS
The getInterfaceList method is replaced with the following methods: getMACAddresses, getNetworkInterfaces, and getIPAddresses.
Mainframe
The getDatabaseDetail method is now disabled by default. Previously it was enabled.
VMware ESXi
The getInterfaceList method is replaced with the following methods: getMACAddresses, getNetworkInterfaces, and getIPAddresses.
getHostInfo
The following code:
/sbin/smbiosDump | awk '{ if( $1 ~ /Serial:/ ) { sub(".*Serial: *",""); gsub( "\"", ""); printf( "serial: %s\n", $0 ); } }' | head -1
echo begin vim-cmd-hostsummary:
vim-cmd hostsvc/hostsummary

is replaced with:

# Get the Serial number from System Info block - Each block starts with double space and [A-Za-z]
serial=`/sbin/smbiosDump | sed -n '/^  System Info:/,/^  [A-Za-z]/ p' | grep '^ *Serial:'`
# If the Serial number does NOT exist in the System block, get it from Board Info
if [ "$serial" = "" ]; then
    serial=`/sbin/smbiosDump | sed -n '/^  Board Info:/,/^  [A-Za-z]/ p' | grep '^ *Serial:'`
fi
if [ "$serial" != "" ]; then
    echo serial: `echo $serial | cut -d: -f2- | sed 's/"//g'`
fi
echo begin vim-cmd-hostsummary:
vim-cmd hostsvc/hostsummary
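The new version scopes the Serial match with sed's range address `/start/,/end/`, printing only the lines between one block header and the next top-level header. The mechanism can be seen with a canned sample imitating the smbiosDump layout (the sample text is invented for illustration):

```shell
sample='  System Info: #1
    Serial: "ABC123"
  Board Info: #2
    Serial: "XYZ789"'

# Print only the System Info block, then pick out its Serial line.
serial=`printf '%s\n' "$sample" | sed -n '/^  System Info:/,/^  [A-Z]/ p' | grep '^ *Serial:'`
echo serial: `echo $serial | cut -d: -f2- | sed 's/"//g'`
# prints: serial: ABC123
```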
initialise
The following code:
PRIV_LS() {
    ls "$@"
}

is replaced with:

PRIV_LS() {
    ls "$@"
}

# This function supports privilege listing of file systems and related
# size and usage.
PRIV_DF() {
    "$@"
}
getDeviceInfo
The following code:
echo 'hostname:' $ihn
echo 'fqdn:' `hostname -f 2>/dev/null`
dns_domain=`hostname -d 2>/dev/null | sed -e 's/(none)//'`
if [ "$dns_domain" = "" -a -f /etc/resolv.conf ]; then
    dns_domain=`awk '/^(domain|search)/ {sub(/\\\\000$/, "", $2); print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'dns_domain: ' $dns_domain

is replaced with:

echo 'hostname:' $ihn
echo 'fqdn:' `hostname -f 2>/dev/null`
dns_domain=`hostname -d 2>/dev/null | sed -e 's/(none)//'`
if [ "$dns_domain" = "" -a -r /etc/resolv.conf ]; then
    dns_domain=`awk '/^(domain|search)/ {sub(/\\\\000$/, "", $2); print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'dns_domain: ' $dns_domain
getFileSystems
The following code:
echo begin df:
df -k 2>/dev/null
echo end df:
echo begin mount:
mount 2>/dev/null

is replaced with:

echo begin df:
PRIV_DF df -k 2>/dev/null
echo end df:
echo begin mount:
mount 2>/dev/null
UnixWare
The getInterfaceList method is replaced with the getMACAddresses, getNetworkInterfaces, and getIPAddresses methods.
initialise
The following code is added:
# This function supports privilege listing of file systems and related
# size and usage.
PRIV_DF() {
    "$@"
}
getDeviceInfo
is replaced with:
echo 'hostname:' `uname -n 2>/dev/null`
if [ -r /etc/resolv.conf ]; then
    echo 'dns_domain:' `awk '/^(domain|search)/ { print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'os:' `uname -sv 2>/dev/null`
getFileSystems
The following code:
echo begin df:
df -lk 2>/dev/null
echo end df:
echo begin mount:
if [ -f /etc/mnttab ]; then
    cat /etc/mnttab
fi
echo end mount:
echo begin xtab:
if [ -f /etc/xtab ]; then
    cat /etc/xtab
fi
echo end xtab:

is replaced with:

echo begin df:
PRIV_DF df -lk 2>/dev/null
echo end df:
echo begin mount:
if [ -r /etc/mnttab ]; then
    cat /etc/mnttab
fi
echo end mount:
echo begin xtab:
if [ -r /etc/xtab ]; then
    cat /etc/xtab
fi
echo end xtab:
is replaced with:
echo begin smbconf:
configfile=`smbstatus -v 2>/dev/null | grep "using configfile" | awk '{print $4;}'`
if [ "$configfile" != "" ]; then
    if [ -r $configfile ]; then
        cat $configfile
    fi
fi
FreeBSD
The getInterfaceList method is replaced with the getMACAddresses, getNetworkInterfaces, and getIPAddresses methods.
getNetworkConnectionList
The following code is added:
netstat -an -f inet6 -W 2>/dev/null
initialise
The following code is added:
# This function supports privilege listing of file systems and related
# size and usage.
PRIV_DF() {
    "$@"
}
getDeviceInfo
The following code:
    ihn=localhost
fi
echo 'hostname:' $ihn
if [ -f /etc/resolv.conf ]; then
    echo 'dns_domain:' `awk '/^(domain|search)/ { print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'os:' `uname -sr 2>/dev/null`

is replaced with:

    ihn=localhost
fi
echo 'hostname:' $ihn
if [ -r /etc/resolv.conf ]; then
    echo 'dns_domain:' `awk '/^(domain|search)/ { print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'os:' `uname -sr 2>/dev/null`
getFileSystems
is replaced with:
echo begin df:
PRIV_DF df -lk 2>/dev/null
echo end df:
echo begin mount:
mount 2>/dev/null
echo end mount:
echo begin xtab:
if [ -r /etc/xtab ]; then
    cat /etc/xtab
fi
echo end xtab:

is replaced with:

echo begin smbconf:
configfile=`smbstatus -v 2>/dev/null | grep "using configfile" | awk '{print $4;}'`
if [ "$configfile" != "" ]; then
    if [ -r $configfile ]; then
        cat $configfile
    fi
fi
Linux
The getInterfaceList method is replaced with the getMACAddresses, getNetworkInterfaces, and getIPAddresses methods.
getHostInfo
The following code:
if [ -f /usr/sbin/esxcfg-info ]; then
    # On a VMWare ESX controller, report the *real* hardware information
    file=/tmp/tideway-hw-$$
    PRIV_ESXCFG /usr/sbin/esxcfg-info --hardware > ${file} 2>/dev/null
    uuid=""
    if [ $? -eq 0 ]; then
        physical=`grep "Num Packages\." ${file} | sed -e "s/[^0-9]//g"`
        logical=`grep "Num Cores." ${file} | head -n 1 | sed -e "s/[^0-9]//g"`

is replaced with:

if [ -f /usr/sbin/esxcfg-info ]; then
    # On a VMWare ESX controller, report the *real* hardware information
    file=/tmp/tideway-hw-$$
    uuid=""
    PRIV_ESXCFG /usr/sbin/esxcfg-info --hardware > ${file} 2>/dev/null
    if [ $? -eq 0 ]; then
        physical=`grep "Num Packages\." ${file} | sed -e "s/[^0-9]//g"`
        logical=`grep "Num Cores." ${file} | head -n 1 | sed -e "s/[^0-9]//g"`

is replaced with:

# zLinux?
if [ -r /proc/sysinfo -a -d /proc/dasd ]; then
    echo "candidate_vendor[]:" `egrep '^Manufacturer:' /proc/sysinfo | awk '{print $2;}'`
    type=`egrep '^Type:' /proc/sysinfo | awk '{print $2;}'`
    model=`egrep '^Model:' /proc/sysinfo | awk '{print $2;}'`
getPackageList
The following code:
COLUMNS=256 dpkg -l '*' | egrep '^ii '
is replaced with:
COLUMNS=256 dpkg -l '*' 2>/dev/null | egrep '^ii '
initialise
The following code is added:
# This function supports privilege listing of file systems and related
# size and usage.
PRIV_DF() {
    "$@"
}
getDeviceInfo
The following code:
echo 'hostname:' $ihn
echo 'fqdn:' `hostname --fqdn 2>/dev/null`
dns_domain=`hostname -d 2>/dev/null | sed -e 's/(none)//'`
if [ "$dns_domain" = "" -a -f /etc/resolv.conf ]; then
    dns_domain=`awk '/^(domain|search)/ {sub(/\\\\000$/, "", $2); print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'dns_domain: ' $dns_domain

is replaced with:

echo 'hostname:' $ihn
echo 'fqdn:' `hostname --fqdn 2>/dev/null`
dns_domain=`hostname -d 2>/dev/null | sed -e 's/(none)//'`
if [ "$dns_domain" = "" -a -r /etc/resolv.conf ]; then
    dns_domain=`awk '/^(domain|search)/ {sub(/\\\\000$/, "", $2); print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'dns_domain: ' $dns_domain
is replaced with:
os=""
# SuSE lsb_release does provide service pack so prefer SuSE-release file.
if [ "$os" = "" -a -r /etc/SuSE-release ]; then
    os=`cat /etc/SuSE-release`
fi
if [ "$os" = "" -a -x /usr/bin/lsb_release ]; then
    # We'd like to use -ds but that puts quotes in the output!
    os=`/usr/bin/lsb_release -d | cut -f2 -d: | sed -e 's/^[ \t]//'`
    if [ "$os" = "(none)" ]; then
        os=""
    else
        # Check to see if its a variant of Red Hat
        rpm -q oracle-logos > /dev/null 2>&1
        if [ $? -eq 0 ]; then
            # Oracle variant
            os="Oracle $os"
        fi
    fi
fi
if [ "$os" = "" -a -r /proc/vmware/version ]; then
    os=`grep -m1 ESX /proc/vmware/version`
fi
if [ "$os" = "" -a -r /etc/vmware-release ]; then
    os=`grep ESX /etc/vmware-release`
fi
if [ "$os" = "" -a -r /etc/redhat-release ]; then
    os=`cat /etc/redhat-release`

The following code:

if [ "$os" = "" -a -x /usr/bin/lsb_release ]; then
    # We'd like to use -ds but that puts quotes in the output!
    os=`/usr/bin/lsb_release -d | cut -f2 -d:`
fi
if [ "$os" = "" -a -e /proc/vmware/version ]; then
    os=`grep -m1 ESX /proc/vmware/version`
fi
if [ "$os" = "" -a -f /etc/vmware-release ]; then
    os=`grep ESX /etc/vmware-release`
fi
if [ "$os" = "" -a -f /etc/SuSE-release ]; then
    os=`head -n 1 /etc/SuSE-release`
fi
if [ "$os" = "" -a -f /etc/debian_version ]; then
    ver=`cat /etc/debian_version`
    os="Debian Linux $ver"
fi
if [ "$os" = "" -a -f /etc/mandrake-release ]; then
    os=`cat /etc/mandrake-release`
fi
if [ "$os" = "" ]; then

is replaced with:

if [ "$os" = "" -a -r /etc/debian_version ]; then
    ver=`cat /etc/debian_version`
    os="Debian Linux $ver"
fi
if [ "$os" = "" -a -r /etc/mandrake-release ]; then
    os=`cat /etc/mandrake-release`
fi
if [ "$os" = "" ]; then
getFileSystems
The following code:
echo begin df:
df -lk 2>/dev/null
echo end df:
echo begin mount:
mount 2>/dev/null

is replaced with:

echo begin df:
PRIV_DF df -lk 2>/dev/null
echo end df:
echo begin mount:
mount 2>/dev/null
is replaced with:
echo begin smbconf:
configfile=`smbstatus -v 2>/dev/null | grep "using configfile" | awk '{print $4;}'`
if [ "$configfile" != "" ]; then
    if [ -r $configfile ]; then
        cat $configfile
    fi
fi
OpenBSD
The getInterfaceList method is replaced with the getMACAddresses, getNetworkInterfaces, and getIPAddresses methods.
initialise
The following code is added:
# This function supports privilege listing of file systems and related
# size and usage.
PRIV_DF() {
    "$@"
}
getNetworkConnectionList
The following code:
netstat -an -f inet -W 2>/dev/null
is replaced with:
netstat -an -f inet -v 2>/dev/null
netstat -an -f inet6 -v 2>/dev/null
getDeviceInfo
The following code:
ihn=`hostname 2>/dev/null`
echo 'hostname:' $ihn
if [ -f /etc/resolv.conf ]; then
    echo 'dns_domain:' `awk '/^(domain|search)/ { print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'os:' `uname -sr 2>/dev/null`

is replaced with:

ihn=`hostname 2>/dev/null`
echo 'hostname:' $ihn
if [ -r /etc/resolv.conf ]; then
    echo 'dns_domain:' `awk '/^(domain|search)/ { print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'os:' `uname -sr 2>/dev/null`
getFileSystems
The following code:
echo begin df:
df -lk 2>/dev/null
echo end df:
echo begin mount:
mount 2>/dev/null
echo end mount:
echo begin xtab:
if [ -f /etc/xtab ]; then
    cat /etc/xtab
fi
echo end xtab:

is replaced with:

echo begin df:
PRIV_DF df -lk 2>/dev/null
echo end df:
echo begin mount:
mount 2>/dev/null
echo end mount:
echo begin xtab:
if [ -r /etc/xtab ]; then
    cat /etc/xtab
fi
echo end xtab:
is replaced with:
echo begin smbconf:
configfile=`smbstatus -v 2>/dev/null | grep "using configfile" | awk '{print $4;}'`
if [ "$configfile" != "" ]; then
    if [ -r $configfile ]; then
        cat $configfile
    fi
fi
NetBSD
The getInterfaceList method is replaced with the getMACAddresses, getNetworkInterfaces, and getIPAddresses methods.
initialise
The following code is added:
# This function supports privilege listing of file systems and related
# size and usage.
PRIV_DF() {
    "$@"
}
getNetworkConnectionList
The following code:
netstat -an -f inet -W 2>/dev/null
is replaced with:
netstat -an -f inet -v 2>/dev/null
netstat -an -f inet6 -v 2>/dev/null
getDeviceInfo
The following code:
ihn=`hostname 2>/dev/null`
echo 'hostname:' $ihn
if [ -f /etc/resolv.conf ]; then
    echo 'dns_domain:' `awk '/^(domain|search)/ { print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'os:' `uname -sr 2>/dev/null`

is replaced with:

ihn=`hostname 2>/dev/null`
echo 'hostname:' $ihn
if [ -r /etc/resolv.conf ]; then
    echo 'dns_domain:' `awk '/^(domain|search)/ { print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'os:' `uname -sr 2>/dev/null`
getFileSystems
The following code:

echo begin df:
df -lk 2>/dev/null
echo end df:
echo begin mount:
mount 2>/dev/null
echo end mount:
echo begin xtab:
if [ -f /etc/xtab ]; then
    cat /etc/xtab
fi
echo end xtab:

is replaced with:

echo begin df:
PRIV_DF df -lk 2>/dev/null
echo end df:
echo begin mount:
mount 2>/dev/null
echo end mount:
echo begin xtab:
if [ -r /etc/xtab ]; then
    cat /etc/xtab
fi
echo end xtab:
is replaced with:
echo begin smbconf:
configfile=`smbstatus -v 2>/dev/null | grep "using configfile" | awk '{print $4;}'`
if [ "$configfile" != "" ]; then
    if [ -r $configfile ]; then
        cat $configfile
    fi
fi
HP-UX
The getInterfaceList method is replaced with the getMACAddresses, getNetworkInterfaces, and getIPAddresses methods.
getNetworkConnectionList
The following code is added:
getHostInfo
The following code:
fi
echo 'license:' `uname -l`
# machinfo indicates Itanium based system
if [ -x /usr/contrib/bin/machinfo ]; then
    echo 'begin machinfo:'
    if [ $OSVER -ge 1131 ]; then

is replaced with:

fi
echo 'license:' `uname -l`
if [ -x /usr/contrib/bin/machinfo ]; then
    echo 'begin machinfo:'
    if [ $OSVER -ge 1131 ]; then
is replaced with:
    cores=`/usr/sbin/ioscan -kC processor 2>/dev/null | grep processor | wc -l`
fi
if [ "$OSVER" -ge 1100 ]; then
    chip_type=$((`getconf CPU_CHIP_TYPE` >> 5))
fi
if [ "$chip_type" = "20" ]; then
    cores_per_processor=2
else
    cores_per_processor=1

The following code:

    echo "cpu_chip_type:" $chip_type
fi
echo 'cpu_model:' `model 2>/dev/null`
KERNEL=/stand/vmunix
MEMDEV=/dev/mem
if [ -r $MEMDEV ]; then
    if [ "$kernel_bits" -eq 64 ]; then
        ADB="adb64 -k"
    else
        ADB="adb -k"
    fi
    echo itick_per_usec/D | $ADB $KERNEL $MEMDEV 2>/dev/null | egrep -v ':$'
fi
echo "end hpux_cpu_info"
if [ $ram -eq 0 ]; then

is replaced with:

    echo "cpu_chip_type:" $chip_type
fi
echo 'cpu_model:' `model 2>/dev/null`
echo "end hpux_cpu_info"
if [ $ram -eq 0 ]; then
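The `>> 5` in the code above is a bit-field extraction: `getconf CPU_CHIP_TYPE` returns an integer with several fields packed into it, and shifting right by 5 discards the low 5 bits, leaving the chip-type value that is compared against 20. A self-contained illustration (the packed value is invented; the shift width is the one used in the script):

```shell
packed=655                        # 20 << 5, plus 15 in the low 5 bits
chip_type=$(( packed >> 5 ))
echo "cpu_chip_type: $chip_type"  # prints: cpu_chip_type: 20
```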
getPackageList
The following code:
PRIV_SWLIST swlist -l product 2>/dev/null | egrep -v '^#|PH[A-Z]{2}_[0-9]+'
is replaced with:
PRIV_SWLIST swlist -v -l product -a tag -a revision -a title -a vendor_tag 2>/dev/null
initialise
The following code is added:
# This function supports privilege listing of file systems and related
# size and usage.
PRIV_DF() {
    "$@"
}
getDeviceInfo
The following code:
ihn=`hostname 2>/dev/null`
echo 'hostname:' $ihn
if [ -f /etc/resolv.conf ]; then
    echo 'dns_domain:' `awk '/^(domain|search)/ { print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'domain:' `domainname 2>/dev/null`

is replaced with:

ihn=`hostname 2>/dev/null`
echo 'hostname:' $ihn
if [ -r /etc/resolv.conf ]; then
    echo 'dns_domain:' `awk '/^(domain|search)/ { print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'domain:' `domainname 2>/dev/null`
getFileSystems
The following code:
echo begin df:
bdf -l 2>/dev/null
echo end df:
echo begin mount:
mount -p 2>/dev/null
echo end mount:
echo begin xtab:
if [ -f /etc/xtab ]; then
    cat /etc/xtab
fi
echo end xtab:

is replaced with:

echo begin df:
PRIV_DF bdf -l 2>/dev/null
echo end df:
echo begin mount:
mount -p 2>/dev/null
echo end mount:
echo begin xtab:
if [ -r /etc/xtab ]; then
    cat /etc/xtab
fi
echo end xtab:
is replaced with:
echo begin smbconf:
configfile=`smbstatus -v 2>/dev/null | grep "using configfile" | awk '{print $4;}'`
if [ "$configfile" != "" ]; then
    if [ -r $configfile ]; then
        cat $configfile
    fi
fi
VMware ESX
The PATH environment variable has changed from /bin:/sbin to /bin:/usr/bin:/sbin:/usr/sbin.
getHostInfo
The following code:
if [ -f /usr/sbin/esxcfg-info ]; then
    # On a VMWare ESX controller, report the *real* hardware information
    file=/tmp/tideway-hw-$$
    PRIV_ESXCFG /usr/sbin/esxcfg-info --hardware > ${file} 2>/dev/null
    uuid=""
    if [ $? -eq 0 ]; then
        physical=`grep "Num Packages\." ${file} | sed -e "s/[^0-9]//g"`
        logical=`grep "Num Cores." ${file} | head -n 1 | sed -e "s/[^0-9]//g"`

is replaced with:

if [ -f /usr/sbin/esxcfg-info ]; then
    # On a VMWare ESX controller, report the *real* hardware information
    file=/tmp/tideway-hw-$$
    uuid=""
    PRIV_ESXCFG /usr/sbin/esxcfg-info --hardware > ${file} 2>/dev/null
    if [ $? -eq 0 ]; then
        physical=`grep "Num Packages\." ${file} | sed -e "s/[^0-9]//g"`
        logical=`grep "Num Cores." ${file} | head -n 1 | sed -e "s/[^0-9]//g"`

is replaced with:

# zLinux?
if [ -r /proc/sysinfo -a -d /proc/dasd ]; then
    echo "candidate_vendor[]:" `egrep '^Manufacturer:' /proc/sysinfo | awk '{print $2;}'`
    type=`egrep '^Type:' /proc/sysinfo | awk '{print $2;}'`
    model=`egrep '^Model:' /proc/sysinfo | awk '{print $2;}'`
getPackageList
The following code:
COLUMNS=256 dpkg -l '*' | egrep '^ii '
is replaced with:
COLUMNS=256 dpkg -l '*' 2>/dev/null | egrep '^ii '
initialise
The following code is added:

# This function supports privilege listing of file systems and related
# size and usage.
PRIV_DF() {
    "$@"
}
getDeviceInfo
The following code:
echo 'hostname:' $ihn
echo 'fqdn:' `hostname --fqdn 2>/dev/null`
dns_domain=`hostname -d 2>/dev/null | sed -e 's/(none)//'`
if [ "$dns_domain" = "" -a -f /etc/resolv.conf ]; then
    dns_domain=`awk '/^(domain|search)/ {sub(/\\\\000$/, "", $2); print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'dns_domain: ' $dns_domain

is replaced with:

echo 'hostname:' $ihn
echo 'fqdn:' `hostname --fqdn 2>/dev/null`
dns_domain=`hostname -d 2>/dev/null | sed -e 's/(none)//'`
if [ "$dns_domain" = "" -a -r /etc/resolv.conf ]; then
    dns_domain=`awk '/^(domain|search)/ {sub(/\\\\000$/, "", $2); print $2; exit }' /etc/resolv.conf 2>/dev/null`
fi
echo 'dns_domain: ' $dns_domain
is replaced with:
os=""
# SuSE lsb_release does provide service pack so prefer SuSE-release file.
if [ "$os" = "" -a -r /etc/SuSE-release ]; then
    os=`cat /etc/SuSE-release`
fi
if [ "$os" = "" -a -x /usr/bin/lsb_release ]; then
    # We'd like to use -ds but that puts quotes in the output!
    os=`/usr/bin/lsb_release -d | cut -f2 -d: | sed -e 's/^[ \t]//'`
    if [ "$os" = "(none)" ]; then
        os=""
    else
        # Check to see if its a variant of Red Hat
        rpm -q oracle-logos > /dev/null 2>&1
        if [ $? -eq 0 ]; then
            # Oracle variant
            os="Oracle $os"
        fi
    fi
fi
if [ "$os" = "" -a -r /proc/vmware/version ]; then
    os=`grep -m1 ESX /proc/vmware/version`
fi
if [ "$os" = "" -a -r /etc/vmware-release ]; then
    os=`grep ESX /etc/vmware-release`
fi
if [ "$os" = "" -a -r /etc/redhat-release ]; then
    os=`cat /etc/redhat-release`

is replaced with:

        os="Oracle $os"
    fi
fi
if [ "$os" = "" -a -r /etc/debian_version ]; then
    ver=`cat /etc/debian_version`
    os="Debian Linux $ver"
fi
if [ "$os" = "" -a -r /etc/mandrake-release ]; then
    os=`cat /etc/mandrake-release`
fi
if [ "$os" = "" ]; then
getFileSystems
The following code:
echo begin df:
df -lk 2>/dev/null
echo end df:

is replaced with:

echo begin df:
PRIV_DF df -lk 2>/dev/null
echo end df:
is replaced with:
echo begin smbconf:
configfile=`smbstatus -v 2>/dev/null | grep "using configfile" | awk '{print $4;}'`
if [ "$configfile" != "" ]; then
    if [ -r $configfile ]; then
        cat $configfile
    fi
fi
Solaris
getHostInfo
The following code is removed:
cputype=""
# Physical Processor Count
physical=`/usr/sbin/psrinfo -p 2>/dev/null`
if [ "$physical" = "" ]; then
    physical=`kstat cpu_info 2>/dev/null | grep chip_id | sort | uniq | wc -l`
    if [ $physical -eq 0 ]; then
        physical=""
    fi
fi

    echo 'num_logical_processors:' $logical
fi
# Total Processor Core Count
cores=`kstat cpu_info 2>/dev/null | grep -w core_id | sort | uniq | wc -l`
if [ "$cores" -eq 0 ]; then
    echo "__discovery_errors: Unable to determine total processor core count"
else
    echo 'cores:' $cores
fi
# Processor Speed
speed=`kstat cpu_info 2>/dev/null | grep clock_MHz | awk '{print $2;}' | sort -n -r | head -n 1`
if [ "$speed" = "" ]; then
    echo "__discovery_errors: Unable to determine processor speed"
else
    echo 'processor_speed:' $speed
fi
# Processor Type
if [ "$cputype" = "" ]; then
    cputype=`kstat cpu_info 2>/dev/null | grep implementation | sort | uniq | awk '{print $2;}' | head -n 1`
    if [ "$cputype" = "" ]; then
        echo "__discovery_errors: Unable to determine processor type"
    else
        if [ "$speed" != "" ]; then
            cputype="$cputype ${speed}MHz"
        fi

is replaced with:
echo 'end solaris_uptime_string:'
initialise
The following code:
# insulate against systems with -u set by default
set +u

is replaced with:

# Stop alias commands changing behaviour.
unalias -a
# Insulate against systems with -u set by default.
set +u
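The two added prologue lines defend the capture script against its environment: `unalias -a` removes inherited aliases that could rewrite commands, and `set +u` keeps an unset variable expansion from aborting a shell running with `-u`. The effect of `set +u` can be shown directly (TW_DEMO_UNSET is assumed not to exist in the environment):

```shell
# Under set -u, expanding an unset variable is a fatal error:
( set -u; echo "$TW_DEMO_UNSET" ) 2>/dev/null || echo "aborted under set -u"

unalias -a 2>/dev/null   # drop any inherited aliases
set +u                   # unset variables now expand to the empty string
echo "value: [$TW_DEMO_UNSET]"
# prints: value: []
```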
is replaced with:

if [ -f /etc/release ]; then
    echo 'os:' `head -1 /etc/release 2>/dev/null`
else
    echo 'os:' `uname -sr 2>/dev/null`
fi
echo 'os_arch:' `isainfo -k 2>/dev/null`
AIX
initialise
The following code:

# insulate against systems with -u set by default
set +u

is replaced with:

# Stop alias commands changing behaviour.
unalias -a
# Insulate against systems with -u set by default.
set +u
getDeviceInfo
The following code is added:
maintlevel=`oslevel -r 2>/dev/null`
if [ $? -ne 0 ]; then
    maintlevel=""
fi
if [ "$maintlevel" != "" ]; then
    echo 'os_level:' $maintlevel
fi
Mac OS X
initialise
The following code:

# insulate against systems with -u set by default
set +u

is replaced with:

# Stop alias commands changing behaviour.
unalias -a
# Insulate against systems with -u set by default.
set +u
IRIX
initialise
The following code:

# insulate against systems with -u set by default
set +u

is replaced with:

# Stop alias commands changing behaviour.
unalias -a
# Insulate against systems with -u set by default.
set +u
Tru64
initialise
The following code:

# insulate against systems with -u set by default
set +u

is replaced with:

# Stop alias commands changing behaviour.
unalias -a
# Insulate against systems with -u set by default.
set +u
VMware ESXi
getHostInfo
The following code:
if [ -r /etc/slp.reg ]; then
    uuid=`grep hardwareUuid /etc/slp.reg | cut -f2 -d= | tr '[:upper:]' '[:lower:]' | sed -e 's/"//g'`

is replaced with:

if [ -r /etc/slp.reg ]; then
    uuid=`grep hardwareUuid /etc/slp.reg | cut -f2 -d= | awk '{ print tolower($_); }' | sed -e 's/"//g'`
initialise
The following code:

# insulate against systems with -u set by default
set +u

is replaced with:

# Stop alias commands changing behaviour.
unalias -a
# Insulate against systems with -u set by default.
set +u
UnixWare
initialise

The following code:

# insulate against systems with -u set by default
set +u

is replaced with:

# Stop alias commands changing behaviour.
unalias -a
# Insulate against systems with -u set by default.
set +u
FreeBSD
initialise
The following code:

# insulate against systems with -u set by default
set +u

is replaced with:

# Stop alias commands changing behaviour.
unalias -a
# Insulate against systems with -u set by default.
set +u
Linux
getNetworkConnectionList
The following code:
is replaced with:

PRIV_NETSTAT netstat -aneep --tcp --udp -T 2>/dev/null
if [ $? -eq 4 ]; then
    # netstat failed due to invalid option, try -W
    PRIV_NETSTAT netstat -aneep --tcp --udp -W 2>/dev/null
    if [ $? -eq 4 ]; then
        # netstat still failed, try without any wide option
        PRIV_NETSTAT netstat -aneep --tcp --udp 2>/dev/null
    fi
fi
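The fallback chain relies on netstat exiting with status 4 on an invalid option (the code the script tests for), so the newest wide-output flag is tried first and progressively dropped on older builds. The same pattern can be exercised anywhere with a stub standing in for netstat (fake_netstat and its behaviour are invented for the demonstration):

```shell
fake_netstat() {
    case "$1" in
        -T|-W) return 4 ;;   # pretend the wide-output options are unsupported
        *) echo "sockets" ;;
    esac
}

fake_netstat -T
if [ $? -eq 4 ]; then
    # first choice rejected, try the older wide option
    fake_netstat -W
    if [ $? -eq 4 ]; then
        # still rejected, fall back to no wide option at all
        fake_netstat ""
    fi
fi
# prints: sockets
```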
Linux
getHostInfo
The following code:
if [ `echo ${opteron_number} | cut -c1` -eq 4 ]; then
    if [ `echo ${opteron_number} | cut -c3` -lt 6 ]; then
        # 41xx where xx is lower than 60 are 4 cores
        core_hint=4
        sibling=4

is replaced with:

if [ `echo ${opteron_number} | cut -c1` -eq 4 ]; then
    if [ `echo ${opteron_number} | cut -c3` -lt 6 ]; then
        # 41xx where xx is lower than 60 are 4 cores
        cores_hint=4
        siblings=4
elif [ `echo ${opteron_number} | cut -c1` -eq 6 ]; then
    if [ `echo ${opteron_number} | cut -c3` -lt 6 ]; then
        # 61xx where xx is lower than 60 are 8 cores
        core_hint=8
        sibling=8
    else
        # 61xx where xx is higher than 60 are 12 cores
        core_hint=12
        sibling=12

is replaced with:

elif [ `echo ${opteron_number} | cut -c1` -eq 6 ]; then
    if [ `echo ${opteron_number} | cut -c3` -lt 6 ]; then
        # 61xx where xx is lower than 60 are 8 cores
        cores_hint=8
        siblings=8
    else
        # 61xx where xx is higher than 60 are 12 cores
        cores_hint=12
        siblings=12
Linux
initialise
The following code:
# insulate against systems with -u set by default
set +u

is replaced with:

# Stop alias commands changing behaviour.
unalias -a
# Insulate against systems with -u set by default.
set +u
getDeviceInfo
The following code:
echo 'domain:' `hostname -y 2>/dev/null | sed -e 's/(none)//'`
os=""
if [ "$os" = "" -a -x /usr/bin/lsb_release ]; then
    # We'd like to use -ds but that puts quotes in the output!
    os=`/usr/bin/lsb_release -d | cut -f2 -d:`
fi
if [ "$os" = "" -a -f /etc/vmware-release ]; then
    os=`grep ESX /etc/vmware-release`
fi
if [ "$os" = "" -a -f /etc/redhat-release ]; then
    os=`cat /etc/redhat-release`
    # Check to see if its a variant of Red Hat
    rpm -q oracle-logos > /dev/null 2>&1
    if [ $? -eq 0 ]; then
        # Oracle variant
        os="Oracle $os"
    fi
fi
if [ "$os" = "" -a -f /etc/SuSE-release ]; then
    os=`head -n 1 /etc/SuSE-release`

is replaced with:

nis_domain=`domainname 2>/dev/null`
if [ "$nis_domain" == "" ]; then
    nis_domain=`hostname -y 2>/dev/null`
fi
echo 'domain: ' $nis_domain | sed -e 's/(none)//'
os=""
if [ "$os" = "" -a -f /etc/redhat-release ]; then
    os=`cat /etc/redhat-release`
    # Check to see if its a variant of Red Hat
    rpm -q oracle-logos > /dev/null 2>&1
    if [ $? -eq 0 ]; then
        # Oracle variant
        os="Oracle $os"
    fi
fi
if [ "$os" = "" -a -x /usr/bin/lsb_release ]; then
    # We'd like to use -ds but that puts quotes in the output!
    os=`/usr/bin/lsb_release -d | cut -f2 -d:`
fi
if [ "$os" = "" -a -f /etc/vmware-release ]; then
    os=`grep ESX /etc/vmware-release`
fi
if [ "$os" = "" -a -f /etc/SuSE-release ]; then
    os=`head -n 1 /etc/SuSE-release`
OpenBSD
initialise
The following code:
# insulate against systems with -u set by default
set +u

is replaced with:

# Stop alias commands changing behaviour.
unalias -a
# Insulate against systems with -u set by default.
set +u
NetBSD
initialise
The following code:
# insulate against systems with -u set by default
set +u

is replaced with:

# Stop alias commands changing behaviour.
unalias -a
# Insulate against systems with -u set by default.
set +u
HP-UX
initialise
The following code:
# insulate against systems with -u set by default
set +u

is replaced with:

# Stop alias commands changing behaviour.
unalias -a
# Insulate against systems with -u set by default.
set +u
VMware ESX
getNetworkConnectionList
The following code:
is replaced with:

PRIV_NETSTAT netstat -aneep --tcp --udp -T 2>/dev/null
if [ $? -eq 4 ]; then
    # netstat failed due to invalid option, try -W
    PRIV_NETSTAT netstat -aneep --tcp --udp -W 2>/dev/null
    if [ $? -eq 4 ]; then
        # netstat still failed, try without any wide option
        PRIV_NETSTAT netstat -aneep --tcp --udp 2>/dev/null
    fi
fi
VMware ESX
getHostInfo
The following code:
if [ `echo ${opteron_number} | cut -c1` -eq 4 ]; then
    if [ `echo ${opteron_number} | cut -c3` -lt 6 ]; then
        # 41xx where xx is lower than 60 are 4 cores
        core_hint=4
        sibling=4
    else
        # 41xx where xx is higher than 60 are 6 cores
        cores_hint=6

is replaced with:

if [ `echo ${opteron_number} | cut -c1` -eq 4 ]; then
    if [ `echo ${opteron_number} | cut -c3` -lt 6 ]; then
        # 41xx where xx is lower than 60 are 4 cores
        cores_hint=4
        siblings=4
    else
        # 41xx where xx is higher than 60 are 6 cores
        cores_hint=6

The following code:

elif [ `echo ${opteron_number} | cut -c1` -eq 6 ]; then
    if [ `echo ${opteron_number} | cut -c3` -lt 6 ]; then
        # 61xx where xx is lower than 60 are 8 cores
        core_hint=8
        sibling=8
    else
        # 61xx where xx is higher than 60 are 12 cores
        core_hint=12
        sibling=12

is replaced with:

elif [ `echo ${opteron_number} | cut -c1` -eq 6 ]; then
    if [ `echo ${opteron_number} | cut -c3` -lt 6 ]; then
        # 61xx where xx is lower than 60 are 8 cores
        cores_hint=8
        siblings=8
    else
        # 61xx where xx is higher than 60 are 12 cores
        cores_hint=12
        siblings=12
VMware ESX
initialise
The following code:
# insulate against systems with -u set by default
set +u

is replaced with:

# Stop alias commands changing behaviour.
unalias -a
# Insulate against systems with -u set by default.
set +u
getDeviceInfo
The following code:
echo 'dns_domain: ' $dns_domain
echo 'domain:' `hostname -y 2>/dev/null | sed -e 's/(none)//'`
os=""
if [ "$os" = "" -a -x /usr/bin/lsb_release ]; then
    # We'd like to use -ds but that puts quotes in the output!
    os=`/usr/bin/lsb_release -d | cut -f2 -d:`
fi
if [ "$os" = "" -a -f /etc/vmware-release ]; then
    os=`grep ESX /etc/vmware-release`
fi
if [ "$os" = "" -a -f /etc/redhat-release ]; then
    os=`cat /etc/redhat-release`
    # Check to see if its a variant of Red Hat
    rpm -q oracle-logos > /dev/null 2>&1
    if [ $? -eq 0 ]; then
        # Oracle variant
        os="Oracle $os"
    fi
fi
if [ "$os" = "" -a -f /etc/SuSE-release ]; then
    os=`head -n 1 /etc/SuSE-release`

is replaced with:

echo 'dns_domain: ' $dns_domain
nis_domain=`domainname 2>/dev/null`
if [ "$nis_domain" == "" ]; then
    nis_domain=`hostname -y 2>/dev/null`
fi
echo 'domain: ' $nis_domain | sed -e 's/(none)//'
os=""
if [ "$os" = "" -a -f /etc/redhat-release ]; then
    os=`cat /etc/redhat-release`
    # Check to see if its a variant of Red Hat
    rpm -q oracle-logos > /dev/null 2>&1
    if [ $? -eq 0 ]; then
        # Oracle variant
        os="Oracle $os"
    fi
fi
if [ "$os" = "" -a -x /usr/bin/lsb_release ]; then
    # We'd like to use -ds but that puts quotes in the output!
    os=`/usr/bin/lsb_release -d | cut -f2 -d:`
fi
if [ "$os" = "" -a -f /etc/vmware-release ]; then
    os=`grep ESX /etc/vmware-release`
fi
if [ "$os" = "" -a -f /etc/SuSE-release ]; then
    os=`head -n 1 /etc/SuSE-release`
Solaris
The PATH is changed. The following code:
/bin:/usr/bin:/sbin:/usr/sbin
is replaced with:
/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin
Changed method: getHBAList
enabled changed: was hbainfo, now hba_lputil

The following code is added:

echo 'arp:'
arp -a -n 2>/dev/null | awk '{ if ( $4 ~ /SP/ ) { print $2, $5 } }'
echo 'end arp'
is replaced with:
NDD=`tw_which ndd`
KSTAT=`tw_which kstat`
if [ -x "$KSTAT" ]
is replaced with:
$KSTAT -p -m $iface 2>/dev/null
is replaced with:
instance=`PRIV_NDD $NDD -get /dev/$NIC_TYPE instance 2>/dev/null`
is replaced with:
PRIV_TEST -d "${P}" -a -r "${P}" > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    (cd "${P}"; PRIV_LS -al)
# Used in conjunction with PRIV_CAT and PRIV_LS
PRIV_TEST() {
    test "$@"
}

# This function supports privilege listing of files and directories
# Used in conjunction with PRIV_TEST
PRIV_LS() {
    ls "$@"
}

tw_which() {
    SAVE=$IFS
    IFS=:
    for d in $PATH
    do
        if [ -x $d/$1 -a ! -d $d/$1 ]
        then
            echo $d/$1
            break
        fi
    done
    IFS=$SAVE
}
is replaced with:
PRIV_TEST -f "${P}" > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    echo "ls:" `PRIV_LS -ld "${P}"`
    DIGEST=`tw_which digest`
    if [ -x "$DIGEST" ]; then
        echo "md5sum:" `$DIGEST -a md5 "${P}" 2>/dev/null`
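The tw_which helper shown above is a portable stand-in for `which`: it walks the colon-separated $PATH and prints the first executable, non-directory match. A usage sketch (quoting added for safety in this example):

```shell
tw_which() {
    SAVE=$IFS
    IFS=:
    for d in $PATH
    do
        if [ -x "$d/$1" -a ! -d "$d/$1" ]
        then
            echo "$d/$1"
            break
        fi
    done
    IFS=$SAVE
}

SH=`tw_which sh`
echo "sh found at: $SH"
```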
Solaris: getHBAList: hbainfo: REMOVED
Solaris: getHBAList: hba_lputil: ADDED

The following code:
showrev -p 2>/dev/null | cut -c-16 | nawk '{print $2;}'
is replaced with:
showrev -p 2>/dev/null | grep -v "No patches are installed" | cut -c-16 | nawk '{print $2;}'
AIX
is replaced with:
echo 'model:' `egrep '^System Model:' /tmp/tideway.$$ | cut -f2 -d: | sed -e 's/IBM,//'`
is replaced with:
# Use this processor value if lparstat failed
if [ "$num_logical_processors" = "" ]; then
    num_logical_processors=`egrep '^Number Of Processors:' /tmp/tideway.$$ | cut -f2 -d:`
fi

is replaced with:

# Use this processor value if lparstat failed
if [ "$num_logical_processors" = "" ]; then
    num_logical_processors=`lscfg -lproc\* 2>/dev/null | wc -l`
fi

is replaced with:

fi
if [ "$num_physical_processors" != "" ]; then
    echo "num_processors": $num_physical_processors
fi
if [ "$num_logical_processors" != "" ]; then
    echo "num_logical_processors:" $num_logical_processors
fi
if [ "$cputype" != "" -a "$cpuspeed" != "" ]; then
    echo "processor_type:" $cputype $cpuspeed
    echo "processor_speed:" $cpuspeed
fi
if [ `uname -v` -ge 6 ]; then
    wparaddrs=`lswpar -Nqa address 2>/dev/null | xargs echo | sed -e 's/\./\./g ' -e 's/ /|/g'`
fi
if [ "$wparaddrs" = "" ]; then
    ifconfig -a 2>/dev/null
else
    ifconfig -a 2>/dev/null | egrep -v "$wparaddrs"

is replaced with:

if [ `uname -v` -lt 5 ]; then
    for name in `netstat -i | cut -f 1 -d\  | grep -Ev '^Name|^lo0|^$' | sort -u`
    do
        ifconfig $name 2>/dev/null
    done
else
    if [ `uname -v` -ge 6 ]; then
        wparaddrs=`lswpar -Nqa address 2>/dev/null | xargs echo | sed -e 's/\./\./g ' -e 's/ /|/g'`
    fi
    if [ "$wparaddrs" = "" ]; then
        ifconfig -a 2>/dev/null
    else
        ifconfig -a 2>/dev/null | egrep -v "$wparaddrs"
    fi
is replaced with:
entstat -d ${adapter} 2>/dev/null | egrep "(ETHERNET STATISTICS)|(Device Type)|(Media Speed)|(Hardware Address)|(PVID)|(Port VLAN ID)" 2>/dev/null
is replaced with:
PRIV_TEST -d %(path)s -a -r %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    (cd %(path)s; PRIV_LS -al)
else
    echo 'DIRECTORY NOT FOUND'
fi
osver=`oslevel 2>/dev/null`
if [ $? -ne 0 ]; then
    osver=""
fi
if [ "$osver" = "" -o "$osver" = "TYPERML" ]; then
    osmaj=`uname -v 2>/dev/null`
    osmin=`uname -r 2>/dev/null`
    osver="$osmaj.$osmin"
fi
echo 'os:' `uname -s 2>/dev/null` $osver
echo 'os_arch:' `uname -p 2>/dev/null`
is replaced with:
# VIO support
if [ -f /usr/ios/cli/ios.level ]; then
    viover=`cat /usr/ios/cli/ios.level`
    os="VIO $viover"
else
    osver=`oslevel 2>/dev/null`
    if [ $? -ne 0 ]; then
        osver=""
    fi
    if [ "$osver" = "" -o "$osver" = "TYPERML" ]; then
        osmaj=`uname -v 2>/dev/null`
        osmin=`uname -r 2>/dev/null`
        osver="$osmaj.$osmin"
    fi
    os="`uname -s 2>/dev/null` $osver"
fi
echo 'os:' $os
echo 'os_arch:' `uname -p 2>/dev/null`
is replaced with:
PRIV_TEST -f %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    echo "ls:" `PRIV_LS -ld %(path)s`
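The WPAR filtering earlier in this section builds an egrep alternation pattern from the list of WPAR addresses. A small sketch with fabricated addresses (standing in for `lswpar -Nqa address` output) shows the resulting pattern:

```shell
# Fabricated WPAR addresses; the sed joins them into an alternation pattern
wparaddrs=`echo "10.1.1.5 10.1.1.6" | xargs echo | sed -e 's/\./\./g' -e 's/ /|/g'`
echo "$wparaddrs"
```

`egrep -v "$wparaddrs"` then drops any ifconfig output line containing one of these addresses, so WPAR-owned interfaces are not reported against the global environment.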
Mac OS-X
The following code:
grep "Memory: " /tmp/tideway-hw-$$ | sed 's/^.*: \(.*\)$/ram: \1/'
is replaced with:
grep "^[[:blank:]]*Memory: " /tmp/tideway-hw-$$ | sed 's/^.*: \(.*\)$/ram: \1/'
is replaced with:
PRIV_TEST -d %(path)s -a -r %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    (cd %(path)s; PRIV_LS -alT)
is replaced with:
# This function supports privilege testing of attributes of files.
# Used in conjunction with PRIV_CAT and PRIV_LS
PRIV_TEST() {
    test "$@"
}

# This function supports privilege listing of files and directories
# Used in conjunction with PRIV_TEST
PRIV_LS() {
    ls "$@"
}
is replaced with:
PRIV_TEST -f %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    echo "ls:" `PRIV_LS -lTd %(path)s`
    echo "md5sum:" `md5sum %(path)s 2>/dev/null | awk '{print $1;}'`
else
    echo "FILE NOT FOUND"
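A plausible reading of the PRIV_TEST/PRIV_LS pair is that they keep the call sites stable while allowing the function bodies to be swapped for privileged variants where a deployment needs escalation. A minimal sketch of the pattern, using /tmp as a stand-in path (the sudo variant is an assumption, shown only in a comment):

```shell
# Default, unprivileged definitions as shipped
PRIV_TEST() { test "$@"; }
PRIV_LS() { ls "$@"; }
# A deployment needing escalation could redefine the bodies instead, e.g.:
#   PRIV_LS() { sudo ls "$@"; }
PRIV_TEST -d /tmp -a -r /tmp > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    echo "directory readable"
fi
```

The callers never change; only the two function definitions differ between privileged and unprivileged runs.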
IRIX
The following code:
if [ -d %(path)s -a -r %(path)s ]; then
    (cd %(path)s; ls -al)
else
    echo 'DIRECTORY NOT FOUND'
fi
is replaced with:
PRIV_TEST -d %(path)s -a -r %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    (cd %(path)s; PRIV_LS -al)
else
    echo 'DIRECTORY NOT FOUND'
fi
is replaced with:
# This function supports privilege testing of attributes of files.
# Used in conjunction with PRIV_CAT and PRIV_LS
PRIV_TEST() {
    test "$@"
}

# This function supports privilege listing of files and directories
# Used in conjunction with PRIV_TEST
PRIV_LS() {
    ls "$@"
}
is replaced with:
PRIV_TEST -f %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    echo "ls:" `PRIV_LS -ld %(path)s`
else
    echo "FILE NOT FOUND"
fi
Tru64
The following code:
if [ -d %(path)s -a -r %(path)s ]; then
    (cd %(path)s; ls -al)
else
    echo 'DIRECTORY NOT FOUND'
fi
is replaced with:
PRIV_TEST -d %(path)s -a -r %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    (cd %(path)s; PRIV_LS -al)
else
    echo 'DIRECTORY NOT FOUND'
fi
is replaced with:
# This function supports privilege testing of attributes of files.
# Used in conjunction with PRIV_CAT and PRIV_LS
PRIV_TEST() {
    test "$@"
}

# This function supports privilege listing of files and directories
# Used in conjunction with PRIV_TEST
PRIV_LS() {
    ls "$@"
}
is replaced with:
PRIV_TEST -f %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    echo "ls:" `PRIV_LS -ld %(path)s`
else
    echo "FILE NOT FOUND"
fi
Mainframe
New methods:
    getApplication
    getDependency
Changed methods:
    getDatabaseDetail: enabled changed: was False, now True.
    getTransaction: enabled changed: was False, now True.
    getMQDetail: enabled changed: was False, now True.
UnixWare
The following code:
if [ -d %(path)s -a -r %(path)s ]; then
    (cd %(path)s; ls -al)
else
    echo 'DIRECTORY NOT FOUND'
fi
is replaced with:
PRIV_TEST -d %(path)s -a -r %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    (cd %(path)s; PRIV_LS -al)
else
    echo 'DIRECTORY NOT FOUND'
fi
is replaced with:
# This function supports privilege testing of attributes of files.
# Used in conjunction with PRIV_CAT and PRIV_LS
PRIV_TEST() {
    test "$@"
}

# This function supports privilege listing of files and directories
# Used in conjunction with PRIV_TEST
PRIV_LS() {
    ls "$@"
}
is replaced with:
PRIV_TEST -f %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    echo "ls:" `PRIV_LS -ld %(path)s`
else
    echo "FILE NOT FOUND"
fi
FreeBSD
The following code:
if [ -d %(path)s -a -r %(path)s ]; then
    (cd %(path)s; ls -alT)
else
    echo 'DIRECTORY NOT FOUND'
fi
is replaced with:
PRIV_TEST -d %(path)s -a -r %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    (cd %(path)s; PRIV_LS -alT)
else
    echo 'DIRECTORY NOT FOUND'
fi
is replaced with:
# This function supports privilege testing of attributes of files.
# Used in conjunction with PRIV_CAT and PRIV_LS
PRIV_TEST() {
    test "$@"
}

# This function supports privilege listing of files and directories
# Used in conjunction with PRIV_TEST
PRIV_LS() {
    ls "$@"
}
is replaced with:
PRIV_TEST -f %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    echo "ls:" `PRIV_LS -lTd %(path)s`
    echo "md5sum:" `md5 -q %(path)s 2>/dev/null`
else
    echo "FILE NOT FOUND"
Linux
The following code:
is replaced with:
if [ "${model}" != "" ]; then
    echo 'candidate_model[]:' ${model}
fi
is replaced with:
echo 'candidate_serial[]:' ${serial}
fi
if [ "${model}" != "" ]; then
    echo 'candidate_model[]:' ${model}
fi
is replaced with:
# On a VMWare ESX controller, report the *real* hardware information
file=/tmp/tideway-hw-$$
PRIV_ESXCFG /usr/sbin/esxcfg-info --hardware > ${file} 2>/dev/null
uuid=""
if [ $? -eq 0 ]; then
    physical=`grep "Num Packages\." ${file} | sed -e "s/[^0-9]//g"`
    logical=`grep "Num Cores." ${file} | head -n 1 | sed -e "s/[^0-9]//g"`
    tmp=`echo ${cputype} | sed 's/ @.*$//'`
    cputype="${tmp} ${cpuspeed}"
    ram=`grep "Physical Mem\." ${file} | sed 's/[^0-9]*//g'`B
    uuid=`grep "BIOS UUID\." ${file}`
else
    print=0
fi
rm -f ${file}
# Get UUID as hostid if possible
if [ "$uuid" != "" ]; then
    # Process horrid BIOS UUID format :(
    echo "$uuid" | sed -e 's/\./ /g' | awk '{
        printf("hostid: ");
        for(i = 3 ; i < 19; i++) {
            printf("%02s", substr($i,3,2));
            if (i == 6 || i == 8 || i == 10 || i == 12)
                printf("-");
        }
        printf("\n");
    }'
else
    if [ -r /etc/slp.reg ]; then
        uuid=`grep hardwareUuid /etc/slp.reg | cut -f2 -d= | tr '[:upper:]' '[:lower:]' | sed -e 's/"//g'`
        if [ "${uuid}" != "" ]; then
            echo 'hostid:' ${uuid}
        fi
    fi
fi
fi
if [ -f /opt/xensource/bin/xe ]; then
    print=0
fi
# Can we get information from the BIOS?
if [ -f /usr/sbin/dmidecode ]; then
    PRIV_DMIDECODE /usr/sbin/dmidecode 2>/dev/null | sed -n '/DMI type 1,/,/^Handle 0x0/p' | awk '
        $1 ~ /Manufacturer:/ { sub(".*Manufacturer: *",""); printf( "vendor: %s\n", $0); }
        $1 ~ /Vendor:/ { sub(".*Vendor: *",""); printf( "vendor: %s\n", $0 ); }
        $1 ~ /Product/ && $2 ~ /Name:/ { sub(".*Product Name: *",""); printf( "model: %s\n", $0 ); }
        $1 ~ /Product:/ { sub(".*Product: *",""); printf( "model: %s\n", $0 ); }
        $1 ~ /Serial/ && $2 ~ /Number:/ { sub(".*Serial Number: *",""); printf( "serial: %s\n", $0 ); }
    '
fi
if [ -f /usr/sbin/hwinfo ]; then
    PRIV_HWINFO /usr/sbin/hwinfo --bios 2>/dev/null | sed -n '/System Info:/,/Info:/p' | awk '
        $1 ~ /Manufacturer:/ { sub(".*Manufacturer: *",""); gsub( "\"", ""); printf( "vendor: %s\n", $0 ); }
        $1 ~ /Product:/ { sub(".*Product: *",""); gsub( "\"", ""); printf( "model: %s\n", $0 ); }
        $1 ~ /Serial:/ { sub(".*Serial: *",""); gsub( "\"", ""); printf( "serial: %s\n", $0 ); }'
fi
# PPC64 LPAR?
if [ -f /proc/ppc64/lparcfg ]; then
    sed -e 's/=/: /' /proc/ppc64/lparcfg | egrep '^[a-z]' |
        sed -e 's/^serial_number/serial/' -e 's/^system_type/model/' \
            -e 's/partition_id/lpar_id/' -e 's/^group/lpar_partition_group_id/' -e 's///'
fi
# zLinux?
if [ -f /proc/sysinfo -a -d /proc/dasd ]; then
    echo "vendor:" `egrep '^Manufacturer:' /proc/sysinfo | awk '{print $2;}'`
    type=`egrep '^Type:' /proc/sysinfo | awk '{print $2;}'`
    model=`egrep '^Model:' /proc/sysinfo | awk '{print $2;}'`
    echo "model: $type-$model"
    echo "zlinux_seqence:" `egrep '^Sequence Code:' /proc/sysinfo | awk '{print $3;}'`
    echo "zlinux_vm_name:" `egrep '^VM00 Name:' /proc/sysinfo | awk '{print $3;}'`
    echo "zlinux_vm_software:" `egrep '^VM00 Control Program:' /proc/sysinfo | awk '{print $4, $5;}'`
fi
is replaced with:
# zLinux?
if [ -f /proc/sysinfo -a -d /proc/dasd ]; then
    echo "candidate_vendor[]:" `egrep '^Manufacturer:' /proc/sysinfo | awk '{print $2;}'`
    type=`egrep '^Type:' /proc/sysinfo | awk '{print $2;}'`
    model=`egrep '^Model:' /proc/sysinfo | awk '{print $2;}'`
    echo "candidate_model[]: $type-$model"
    echo "zlinux_sequence:" `egrep '^Sequence Code:' /proc/sysinfo | awk '{print $3;}'`
    echo "zlinux_vm_name:" `egrep '^VM00 Name:' /proc/sysinfo | awk '{print $3;}'`
    echo "zlinux_vm_software:" `egrep '^VM00 Control Program:' /proc/sysinfo | awk '{print $4, $5;}'`
fi
# Can we get information from the BIOS? We use lshal if available as that
# requires no superuser permissions but we attempt to run all tools as some
# can return invalid values in some cases. The system will select the "best"
# candidate from the values returned, where "best" is the first non-bogus value
if [ -x /usr/bin/lshal ]; then
    /usr/bin/lshal 2>/dev/null | sed -e 's/(string)$//g' -e "s/'//g" | awk '
        $1 ~ /system.hardware.serial/ { sub(".*system.hardware.serial = *", ""); printf("candidate_serial[]: %s\n", $0); }
        $1 ~ /system.hardware.uuid/ { sub(".*system.hardware.uuid = *", ""); printf("candidate_uuid[]: %s\n", $0); }
        $1 ~ /system.hardware.product/ { sub(".*system.hardware.product = *", ""); printf("candidate_model[]: %s\n", $0); }
        $1 ~ /system.hardware.vendor/ { sub(".*system.hardware.vendor = *", ""); printf("candidate_vendor[]: %s\n", $0); }'
fi
if [ -f /usr/sbin/dmidecode ]; then
    PRIV_DMIDECODE /usr/sbin/dmidecode 2>/dev/null | sed -n '/DMI type 1,/,/^Handle 0x0/p' | awk '
        $1 ~ /Manufacturer:/ { sub(".*Manufacturer: *", ""); printf("candidate_vendor[]: %s\n", $0); }
        $1 ~ /Vendor:/ { sub(".*Vendor: *", ""); printf("candidate_vendor[]: %s\n", $0); }
        $1 ~ /Product/ && $2 ~ /Name:/ { sub(".*Product Name: *", ""); printf("candidate_model[]: %s\n", $0); }
        $1 ~ /Product:/ { sub(".*Product: *",""); printf("candidate_model[]: %s\n", $0 ); }
        $1 ~ /Serial/ && $2 ~ /Number:/ { sub(".*Serial Number: *", ""); printf("candidate_serial[]: %s\n", $0); }
        $1 ~ /UUID:/ { sub(".*UUID: *", ""); printf( "candidate_uuid[]: %s\n", $0 ); }
    '
fi
if [ -f /usr/sbin/hwinfo ]; then
    PRIV_HWINFO /usr/sbin/hwinfo --bios 2>/dev/null | sed -n '/System Info:/,/Info:/p' | awk '
        $1 ~ /Manufacturer:/ { sub(".*Manufacturer: *", ""); gsub("\"", ""); printf("candidate_vendor[]: %s\n", $0); }
        $1 ~ /Product:/ { sub(".*Product: *", ""); gsub("\"", ""); printf("candidate_model[]: %s\n", $0); }
        $1 ~ /Serial:/ { sub(".*Serial: *", ""); gsub("\"", ""); printf("candidate_serial[]: %s\n", $0); }
        $1 ~ /UUID:/ { sub(".*UUID: *", ""); gsub("\"", ""); printf("candidate_uuid[]: %s\n", $0); }
    '
fi
# PPC64 LPAR?
if [ -r /proc/ppc64/lparcfg ]; then
    echo begin lparcfg:
    cat /proc/ppc64/lparcfg 2>/dev/null
    echo end lparcfg
fi
# LPAR name?
if [ -r /proc/device-tree/ibm,partition-name ]; then
    echo "lpar_partition_name:" `cat /proc/device-tree/ibm,partition-name`
fi
is replaced with:
PRIV_TEST -d %(path)s -a -r %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    (cd %(path)s; PRIV_LS -a --full-time --color=never)
else
    echo 'DIRECTORY NOT FOUND'
fi
is replaced with:
# This function supports privilege testing of attributes of files.
# Used in conjunction with PRIV_CAT and PRIV_LS
PRIV_TEST() {
    test "$@"
}

# This function supports privilege listing of files and directories
# Used in conjunction with PRIV_TEST
PRIV_LS() {
    ls "$@"
}
os=""
if [ "$os" = "" -a -e /proc/vmware/version ]; then
    os=`grep -m1 ESX /proc/vmware/version`
fi
if [ "$os" = "" -a -f /etc/SuSE-release ]; then
    os=`head -n 1 /etc/SuSE-release`
fi
if [ "$os" = "" -a -f /etc/lsb-release ]; then
    ostype=`grep DISTRIB_ID /etc/lsb-release | cut -f2 -d=`
    osver=`grep DISTRIB_RELEASE /etc/lsb-release | cut -f2 -d=`
    os="$ostype $osver"
fi
if [ "$os" = "" -a -f /etc/debian_version ]; then
    ver=`cat /etc/debian_version`
    os="Debian Linux $ver"
is replaced with:
os=""
if [ "$os" = "" -a -x /usr/bin/lsb_release ]; then
    # We'd like to use -ds but that puts quotes in the output!
    os=`/usr/bin/lsb_release -d | cut -f2 -d:`
fi
if [ "$os" = "" -a -e /proc/vmware/version ]; then
    os=`grep -m1 ESX /proc/vmware/version`
fi
if [ "$os" = "" -a -f /etc/SuSE-release ]; then
    os=`head -n 1 /etc/SuSE-release`
fi
if [ "$os" = "" -a -f /etc/debian_version ]; then
    ver=`cat /etc/debian_version`
    os="Debian Linux $ver"
is replaced with:
PRIV_TEST -f %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    echo "ls:" `PRIV_LS -ld %(path)s`
    echo "md5sum:" `md5sum %(path)s 2>/dev/null | awk '{print $1;}'`
else
    echo "FILE NOT FOUND"
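The candidate_*[] lines emitted above let the system pick the first non-bogus value from all tools that ran. Fabricated lshal-style lines (the values are made up for this sketch) pushed through the same sed/awk pipeline show the output format:

```shell
# Fabricated lshal-style output standing in for /usr/bin/lshal
printf "%s\n" \
    "  system.hardware.vendor = 'Acme Corp' (string)" \
    "  system.hardware.serial = 'XYZ123' (string)" |
    sed -e 's/(string)$//g' -e "s/'//g" | awk '
    $1 ~ /system.hardware.serial/ { sub(".*system.hardware.serial = *", ""); printf("candidate_serial[]: %s\n", $0); }
    $1 ~ /system.hardware.vendor/ { sub(".*system.hardware.vendor = *", ""); printf("candidate_vendor[]: %s\n", $0); }'
```

Each tool contributes its own candidate_vendor[], candidate_model[], candidate_serial[] and candidate_uuid[] entries in this shape, and the consumer chooses among them.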
OpenBSD
is replaced with:
PRIV_TEST -d %(path)s -a -r %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    (cd %(path)s; PRIV_LS -alT)
else
    echo 'DIRECTORY NOT FOUND'
fi
is replaced with:
# This function supports privilege testing of attributes of files.
# Used in conjunction with PRIV_CAT and PRIV_LS
PRIV_TEST() {
    test "$@"
}

# This function supports privilege listing of files and directories
# Used in conjunction with PRIV_TEST
PRIV_LS() {
    ls "$@"
}
is replaced with:
PRIV_TEST -f %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    echo "ls:" `PRIV_LS -lTd %(path)s`
    echo "md5sum:" `md5 -q %(path)s 2>/dev/null`
else
    echo "FILE NOT FOUND"
NetBSD
The following code:
if [ -d %(path)s -a -r %(path)s ]; then
    (cd %(path)s; ls -alT)
else
    echo 'DIRECTORY NOT FOUND'
fi
is replaced with:
PRIV_TEST -d %(path)s -a -r %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    (cd %(path)s; PRIV_LS -alT)
else
    echo 'DIRECTORY NOT FOUND'
fi
is replaced with:
# This function supports privilege testing of attributes of files.
# Used in conjunction with PRIV_CAT and PRIV_LS
PRIV_TEST() {
    test "$@"
}

# This function supports privilege listing of files and directories
# Used in conjunction with PRIV_TEST
PRIV_LS() {
    ls "$@"
}
is replaced with:
PRIV_TEST -f %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    echo "ls:" `PRIV_LS -lTd %(path)s`
    echo "md5sum:" `md5 -q %(path)s 2>/dev/null`
else
    echo "FILE NOT FOUND"
HP-UX
The following code:
par_id=`/usr/sbin/parstatus -w 2> /dev/null | grep "The local partition number is" |
    sed 's/.*The local partition number is \([0-9]*\)\..*/\1/'`
if [ "$par_id" ]; then
    echo par_id: $par_id
    par_status=`/usr/sbin/parstatus -C -M | grep "$par_id"\$`
    for i in `echo "$par_status" | sed 's/^[^:]*:[^:]*:[^:]*: *\([0-9]*\).*/\1/'`; do
        cores=$((cores + i))
    done
    for i in `echo "$par_status" | sed 's/^[^:]*:[^:]*:[^:]*:[^:]*: *\([0-9]*\).*/\1/'`; do
        ram=$((ram + i))
    done
    ram=`echo "$ram" '* 1048576 * 1024' | bc`
    echo ram: $ram
fi
is replaced with:
par_id=`/usr/sbin/parstatus -w 2> /dev/null | grep "The local partition number is" |
    sed 's/.*The local partition number is \([0-9]*\)\..*/\1/'`
if [ "$par_id" ]; then
    echo par_id: $par_id
    cpumem=`/usr/sbin/parstatus -C -M | awk -v PAR_ID=$par_id 'BEGIN{ FS=":" };
        { if ($NF==PAR_ID) { split($4,cpuarray,"/"); split($5,memarray,"/");
          cpu+=cpuarray[1]; mem+=int(memarray[1])} };
        END{ printf "%d %d\n",cpu,mem }'`
    cores=`echo "$cpumem" | awk '{print $1}'`
    ram=`echo "$cpumem" | awk '{print $2}'`
    ram=`echo "$ram" '* 1048576 * 1024' | bc`
    echo ram: $ram
fi
is replaced with:
netstat -inw > /tmp/addm.$$ 2>/dev/null
if [ $? -ne 0 ]; then
    netstat -in > /tmp/addm.$$ 2>/dev/null
fi
interfaces=`cat /tmp/addm.$$ | cut -d\  -f 1 | grep -Ev '\*|^Name|^lo0|^IP|^$' | sort -u`
rm /tmp/addm.$$
for interface in $interfaces
do
    ifconfig $interface 2>/dev/null
is replaced with:
PRIV_TEST -d %(path)s -a -r %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    (cd %(path)s; PRIV_LS -al)
else
    echo 'DIRECTORY NOT FOUND'
fi
is replaced with:
# This function supports privilege testing of attributes of files.
# Used in conjunction with PRIV_CAT and PRIV_LS
PRIV_TEST() {
    test "$@"
}

# This function supports privilege listing of files and directories
# Used in conjunction with PRIV_TEST
PRIV_LS() {
    ls "$@"
}
is replaced with:
PRIV_TEST -f %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    echo "ls:" `PRIV_LS -ld %(path)s`
    echo "md5sum:" `md5sum %(path)s 2>/dev/null | awk '{print $1;}'`
else
    echo "FILE NOT FOUND"
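The awk rewrite above aggregates cores and memory for the local partition in a single pass over parstatus output. A sketch with fabricated parstatus -C -M style lines illustrates it; the field layout assumed here (field 4 holds "cores assigned/total", field 5 "memory in GB assigned/total", and the last field the partition id) is for this sketch only:

```shell
# Fabricated parstatus -C -M style lines; field layout is an assumption
printf '%s\n' \
    'cab0:cell0:active:4/4:8.0/8:-:0' \
    'cab0:cell1:active:2/4:4.0/8:-:0' \
    'cab1:cell0:active:8/8:16.0/16:-:1' |
awk -v PAR_ID=0 'BEGIN{ FS=":" };
    { if ($NF==PAR_ID) { split($4,cpuarray,"/"); split($5,memarray,"/");
      cpu+=cpuarray[1]; mem+=int(memarray[1]) } };
    END{ printf "cores=%d mem_gb=%d\n", cpu, mem }'
```

Only the two lines belonging to partition 0 are summed, so this prints cores=6 mem_gb=12; the real script then multiplies the memory figure out to bytes with bc.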
VMware ESXi
The following code is added:
# Get UUID as hostid if possible
uuid=`/sbin/smbiosDump | grep UUID | grep -v undefined`
if [ "$uuid" != "" ]; then
    # UUID from smbiosDump is not formatted and bytes are reversed :(
    echo $uuid | awk '{
        sub(".*UUID: *","");
        printf("hostid: %s%s%s%s-%s%s-%s%s-%s%s-%s%s%s%s%s%s\n",
            substr($0,31,2), substr($0,29,2), substr($0,27,2), substr($0,25,2),
            substr($0,23,2), substr($0,21,2), substr($0,19,2), substr($0,17,2),
            substr($0,15,2), substr($0,13,2), substr($0,11,2), substr($0,9,2),
            substr($0,7,2), substr($0,5,2), substr($0,3,2), substr($0,1,2));
    }'
else
    # Attempt to get UUID from slp.reg
    if [ -r /etc/slp.reg ]; then
        uuid=`grep hardwareUuid /etc/slp.reg | cut -f2 -d= | tr '[:upper:]' '[:lower:]' | sed -e 's/"//g'`
        if [ "${uuid}" != "" ]; then
            echo 'hostid:' ${uuid}
        fi
    fi
fi
is replaced with:
PRIV_TEST -d %(path)s -a -r %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    (cd %(path)s; PRIV_LS -lae --color=never)
else
    echo 'DIRECTORY NOT FOUND'
fi
is replaced with:
# This function supports privilege testing of attributes of files.
# Used in conjunction with PRIV_CAT and PRIV_LS
PRIV_TEST() {
    test "$@"
}

# This function supports privilege listing of files and directories
# Used in conjunction with PRIV_TEST
PRIV_LS() {
    ls "$@"
}
is replaced with:
PRIV_TEST -f %(path)s > /dev/null 2> /dev/null
if [ $? -eq 0 ]; then
    echo "ls:" `PRIV_LS -ld %(path)s`
    echo "md5sum:" `md5sum %(path)s 2>/dev/null | awk '{print $1;}'`
else
    echo "FILE NOT FOUND"
is replaced with:
ihn=`hostname -s 2>/dev/null`
if [ "$ihn" = "" -o "$ihn" = "localhost" ]; then
    ihn=`hostname 2>/dev/null`
fi
echo 'hostname:' $ihn
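The smbiosDump handling earlier in this section reverses the byte order of the raw 32-digit UUID and inserts the dashes. The same reordering can be sketched in awk over a fabricated hex string:

```shell
# Fabricated raw UUID string; the awk mirrors the byte reversal and
# dash placement used in the smbiosDump hostid code above
echo "00112233445566778899aabbccddeeff" | awk '{
    out = "";
    for (i = 16; i >= 1; i--)
        out = out substr($0, i*2-1, 2);
    printf("hostid: %s-%s-%s-%s-%s\n", substr(out,1,8), substr(out,9,4),
           substr(out,13,4), substr(out,17,4), substr(out,21,12));
}'
```

For this input the reversal yields hostid: ffeeddcc-bbaa-9988-7766-554433221100, matching the 8-4-4-4-12 grouping of a conventional UUID.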
Windows 2003 SP2 (x86 and x86_64) IPv4 discovery only
Windows 2008 - Service Pack 2 (x86 and x86_64)
Windows 2008 R2
getFileSystems
The getFileSystems discovery method was introduced in BMC Atrium Discovery version 8.2. Windows proxies earlier than version 8.2 do not support this method, although they are supported in all other respects.
To avoid any impact during resource-intensive periods of discovery, it is strongly recommended that you do not install the Windows proxy on any host supporting other business services. This holds even if the minimum Windows proxy specification is exceeded, because the Windows proxy attempts to use whatever resources are available in order to optimize scan throughput.
CD/DVD Drive: 1 (auto detect)
Network Interface Cards: 1 (eth0), bridged, configured to use DHCP to obtain an IP address.
The Network Interface Card (eth0) is configured to use DHCP; however, it can be changed to use a static IP address. During the install process, the following network configuration is required:
    DHCP or a static IP address for the eth0 interface
    If a static IP address is used, provide the following:
        Appliance IP Address
        Gateway
        Subnet Mask
        DNS for address lookup
Defaults
For initial testing at a small scale, the default configuration is sufficient. For production use, or use at scale, you must increase the RAM, CPU, and disk configuration in accordance with the sizing guidelines. A single size of Virtual Appliance is supplied, because one size simplifies delivery and avoids downloading differently sized VMs for different roles, such as scanning and consolidation appliances. 2 GB of RAM is insufficient to activate a TKU in an acceptable time; to use a TKU in testing, you must increase the RAM in the system to 4 GB or more.
BMC Atrium Discovery is no longer supplied on physical platforms. It is supplied as a virtual appliance, described in the Virtual Appliance section, or as a kickstart DVD that can be installed onto your own hardware. The BMC Atrium Discovery appliance platforms are x86-64 hardware supported by Red Hat Enterprise Linux 6 (64-bit only).
Only servers can be used in a production environment. Workstations and laptops are not up to production specification.
device-mapper-1.02.74-10.el6.x86
device-mapper-libs-1.02.74-10.el6 diffutils-2.8.1-28.el6.x86_64 dracut-004-284.el6_3.1.noarch e2fsprogs-1.41.12-12.el6.x86_64 elfutils-libelf-0.152-1.el6.x86_64 expat-2.0.1-11.el6_2.x86_64 file-libs-5.04-13.el6.x86_64 fipscheck-1.2.0-7.el6.x86_64 ftp-0.17-51.1.el6.x86_64 gcc-4.4.6-4.el6.x86_64 glib2-2.22.5-7.el6.x86_64
iptables-ipv6-1.4.7-5.1.el6_2.x86_6 kbd-1.15-11.el6.x86_64
kbd-misc-1.15-11.el6.noarch kernel-headers-2.6.32-279.11.1.el6.x86_64 keyutils-libs-1.4-4.el6.x86_64 krb5-libs-1.9-33.el6_3.3.x86_64 libacl-2.2.49-6.el6.x86_64 libcap-2.16-5.5.el6.x86_64 libcurl-7.19.7-26.el6_2.4.x86_64 libffi-3.0.5-3.2.el6.x86_64 libgomp-4.4.6-4.el6.x86_64 libnih-1.0.1-7.el6.x86_64 libselinux-utils-2.0.94-5.3.el6.x86_64 libss-1.41.12-12.el6.x86_64 libtalloc-2.0.1-1.1.el6.x86_64 libusb-0.1.12-23.el6.x86_64 libuuid-2.17.2-12.7.el6.x86_64 lm_sensors-libs-3.1.1-10.el6.x86_64 lvm2-2.02.95-10.el6.x86_64 mailcap-2.1.31-2.el6.noarch MAKEDEV-3.24-6.el6.x86_64 mingetty-1.08-5.el6.x86_64 mpfr-2.4.1-6.el6.x86_64 ncurses-5.7-3.20090208.el6.x86_64 net-snmp-5.5-41.el6_3.1.x86_64 nmap-6.01-1.rhel6.x86_64 nss-softokn-3.12.9-11.el6.x86_64 nss-sysinit-3.13.5-1.el6_3.x86_64 ntp-4.2.4p8-2.el6.x86_64 openldap-clients-2.4.23-26.el6_3.2.x86_64 openssh-clients-5.3p1-81.el6.x86_64 pam-1.1.1-10.el6_2.1.x86_64 pcre-7.8-4.el6.x86_64 perl-Module-Pluggable-3.90-127.el6.x86_64 perl-version-0.77-127.el6.x86_64 plymouth-0.8.3-24.el6.x86_64 policycoreutils-2.0.83-19.24.el6.x86_64 ppl-0.10.2-11.el6.x86_64 pth-2.0.7-9.3.el6.x86_64 readline-6.0-4.el6.x86_64 rootfiles-8.1-6.1.el6.noarch
kernel-2.6.32-279.11.1.el6.x86_64 kexec-tools-2.0.0-245.el6.x86_64 kpartx-0.4.9-56.el6_3.1.x86_64 krb5-workstation-1.9-33.el6_3.3.x86_64 libattr-2.4.44-7.el6.x86_64 libcap-ng-0.6.4-3.el6_0.1.x86_64 libdrm-2.4.25-2.el6.x86_64 libgcc-4.4.6-4.el6.x86_64 libgpg-error-1.7-4.el6.x86_64 libpcap-1.0.0-6.20091201git117cb5.el6.x86_64 libsemanage-2.0.43-4.1.el6.x86_64 libssh2-1.2.2-11.el6_3.x86_64 libtdb-1.2.1-3.el6.x86_64 libuser-0.56.13-5.el6.x86_64 libxml2-2.7.6-8.el6_3.3.x86_64 logrotate-3.7.8-15.el6.x86_64 lvm2-libs-2.02.95-10.el6.x86_64 mailx-12.4-6.el6.x86_64 man-1.6f-30.el6.x86_64 module-init-tools-3.9-20.el6.x86_64 mysql-libs-5.1.61-4.el6.x86_64 ncurses-base-5.7-3.20090208.el6.x86_64 net-snmp-libs-5.5-41.el6_3.1.x86_64 nspr-4.9.1-2.el6_3.x86_64 nss-softokn-freebl-3.12.9-11.el6.i686 nss-tools-3.13.5-1.el6_3.x86_64 ntpdate-4.2.4p8-2.el6.x86_64 openldap-devel-2.4.23-26.el6_3.2.x86_64 openssh-server-5.3p1-81.el6.x86_64 parted-2.1-18.el6.x86_64 perl-5.10.1-127.el6.x86_64 perl-Pod-Escapes-1.04-127.el6.x86_64 pinentry-0.7.6-6.el6.x86_64 plymouth-core-libs-0.8.3-24.el6.x86_64 popt-1.13-7.el6.x86_64 procps-3.2.8-23.el6.x86_64 python-2.6.6-29.el6_3.3.x86_64 redhat-logos-60.0.14-1.el6.noarch rpm-4.8.0-27.el6.x86_64
kernel-firmware-2.6.32-279.11.1.el keyutils-1.4-4.el6.x86_64
libedit-2.11-4.20080712cvs.1.el6.x libgcrypt-1.4.5-9.el6_2.2.x86_64 libidn-1.18-2.el6.x86_64 libselinux-2.0.94-5.3.el6.x86_64 libsepol-2.0.41-4.el6.x86_64 libstdc++-4.4.6-4.el6.x86_64 libudev-147-2.42.el6.x86_64 libutempter-1.1.5-4.1.el6.x86_64 libxslt-1.1.26-2.el6_3.1.x86_64 lua-5.1.4-4.1.el6.x86_64 m4-1.4.13-5.el6.x86_64 make-3.81-20.el6.x86_64 mdadm-3.2.3-9.el6.x86_64
mod_ssl-2.2.15-15.el6_2.1.x86_64 nano-2.0.9-7.el6.x86_64
nss-softokn-freebl-3.12.9-11.el6.x8 nss-util-3.13.5-1.el6_3.x86_64
perl-Pod-Simple-3.13-127.el6.x86_ pkgconfig-0.23-9.1.el6.x86_64
python-libs-2.6.6-29.el6_3.3.x86_6
redhat-release-server-6Server-6.3. rpm-libs-4.8.0-27.el6.x86_64
rsh-0.17-60.el6.x86_64 samba-client-3.5.10-125.el6.x86_64 screen-4.0.3-16.el6.x86_64 selinux-policy-targeted-3.7.19-155.el6_3.4.noarch smartmontools-5.42-2.el6.x86_64 sudo-1.7.4p5-13.el6_3.x86_64 tar-1.23-7.el6.x86_64 telnet-0.17-47.el6_3.1.x86_64 tideway-device-api-4.0-1.rhel6.noarch traceroute-2.0.14-2.el6.x86_64 tw-db-5.3.15-2.py2.7.rhel6.x86_64 tw-egenix-mx-base-3.2.4-1.py2.7.rhel6.x86_64 tw-jsvc-1.0.10-2.rhel6.x86_64 tw-omniORB-bootscripts-4.1.99.6105-1.py2.7.rhel6.x86_64 tw-omniORB-doc-4.1.99.6105-1.py2.7.rhel6.x86_64 tw-omniORBpy-4.1.99.6105-1.py2.7.rhel6.x86_64 tw-omniORBpy-doc-4.1.99.6105-1.py2.7.rhel6.x86_64 tw-PIL-1.1.7-2.py2.7.rhel6.x86_64 tw-pyOpenSSL-0.11-1.py2.7.rhel6.x86_64 tw-python-devel-2.7.3-4.rhel6.x86_64 tw-python-tools-2.7.3-4.rhel6.x86_64 tw-snmp++-3.2.25-6.rhel6.x86_64 tw-tripwire-2.4.2-1.rhel6.x86_64 unzip-6.0-1.el6.x86_64 util-linux-ng-2.17.2-12.7.el6.x86_64 wget-1.12-1.4.el6.x86_64 xz-libs-4.999.9-0.3.beta.20091007git.el6.x86_64
rsyslog-5.8.10-2.el6.x86_64 samba-common-3.5.10-125.el6.x86_64 sed-4.2.1-10.el6.x86_64 setup-2.8.14-16.el6.noarch sqlite-3.6.20-1.el6.x86_64 sysstat-9.0.4-20.el6.x86_64 tcpdump-4.0.0-3.20090921gitdf3cb4.2.el6.x86_64 tideway-8.90.6-291836.rhel6.dev.x86_64 tideway-devices-4.0.2012.09.2-290770.ga.noarch tw-atrium-1.2-1.rhel6.x86_64 tw-db-debuginfo-5.3.15-2.py2.7.rhel6.x86_64 tw-esmre-0.3.1-2.py2.7.rhel6.x86_64 tw-M2Crypto-0.21.1-1.rhel6.x86_64 tw-omniORB-debuginfo-4.1.99.6105-1.py2.7.rhel6.x86_64 tw-omniORB-servers-4.1.99.6105-1.py2.7.rhel6.x86_64 tw-omniORBpy-debuginfo-4.1.99.6105-1.py2.7.rhel6.x86_64 tw-omniORBpy-standard-4.1.99.6105-1.py2.7.rhel6.x86_64 tw-ply-3.4-1.py2.7.rhel6.noarch tw-python-2.7.3-4.rhel6.x86_64 tw-python-ldap-2.3.13-2.py2.7.rhel6.x86_64 tw-PyXML-0.8.4-4.py2.7.rhel6.x86_64 tw-sshpass-1.05-1.rhel6.x86_64 tzdata-2012f-1.el6.noarch upstart-0.6.5-12.el6.x86_64 vim-minimal-7.2.411-1.8.el6.x86_64 which-2.19-6.el6.x86_64 xz-lzma-compat-4.999.9-0.3.beta.20091007git.el6.x86_64
samba-3.5.10-114.el6.x86_64
samba-winbind-clients-3.5.10-125.
selinux-policy-3.7.19-155.el6_3.4.n
shadow-utils-4.1.4.2-13.el6.x86_64
strace-4.5.19-1.11.el6_3.2.x86_64
sysvinit-tools-2.87-4.dsf.el6.x86_64
tcp_wrappers-libs-7.6-57.el6.x86_6
tideway-appliance-6.12.10.67-2917
tideway-documentation-8.90.6-291
tw-bsddb3-5.3.0-1.py2.7.rhel6.x86
tw-docutils-0.8.1-3.py2.7.rhel6.noa
tw-ipaddr-2.1.10-py2.7.rhel6.noarc
tw-omniORB-4.1.99.6105-1.py2.7.
tw-omniORB-devel-4.1.99.6105-1.
tw-omniORB-utils-4.1.99.6105-1.py
tw-omniORBpy-devel-4.1.99.6105-
tw-pexpect-2.5.1-1.beta.py2.7.rhel
tw-pycrypto-2.0.1-3.py2.7.rhel6.x86
tw-python-debuginfo-2.7.3-4.rhel6.
tw-python-tkinter-2.7.3-4.rhel6.x86
Webware-1.1-1.py2.7.rhel6.x86_64
xz-4.999.9-0.3.beta.20091007git.e zlib-1.2.3-27.el6.x86_64
bash-3.2-32.el5 bind-libs-9.3.6-20.P1.el5_8.5 bitstream-vera-fonts-1.10-7 bluez-utils-3.7-2.2 bzip2-libs-1.0.3-6.el5_5 checkpolicy-1.33.1-6.el5 compat-libstdc++-33-3.2.3-61 coolkey-1.1.0-15.el5 cpp-4.1.2-52.el5_8.1 cracklib-dicts-2.8.9-3.3 cryptsetup-luks-1.0.3-8.el5 curl-7.15.5-15.el5 cyrus-sasl-lib-2.1.22-7.el5_8.1 db4-devel-4.3.29-10.el5_5.2 dbus-libs-1.1.2-16.el5_7 desktop-file-utils-0.10-7 device-mapper-multipath-0.4.7-48.el5_8.1 dhcpv6-client-1.0.10-20.el5 dmidecode-2.11-1.el5 dnsmasq-2.45-1.1.el5_3 dump-0.4b41-6.el5 ed-0.2-39.el5_2 elinks-0.11.1-6.el5_4.1 ethtool-6-4.el5 fbset-2.1-22 findutils-4.2.27-6.el5 fipscheck-lib-1.2.0-1.el5 freetype-2.2.1-31.el5_8.1 gamin-python-0.1.7-10.el5 GConf2-2.14.0-9.el5 gdbm-1.8.0-26.2.1.el5_6.1 glib2-2.12.3-4.el5_3.1 glibc-devel-2.5-81.el5_8.7 gmp-4.1.4-10.el5 gpg-pubkey-66a46459-4fffc350 groff-1.18.1.1-13.el5 gzip-1.3.5-13.el5 hesiod-3.1.0-8 htmlview-4.0.0-2.el5
bc-1.06-21 bind-utils-9.3.6-20.P1.el5_8.5 bluez-gnome-0.5-5.fc6 busybox-1.2.0-13.el5 cairo-1.2.4-5.el5 chkconfig-1.3.30.2-2.el5 compat-openldap-2.3.43_2.2.29-25.el5_8.1 coreutils-5.97-34.el5_8.1 cpuspeed-1.2.1-10.el5 crash-5.1.8-1.el5 cups-1.3.7-30.el5 cyrus-sasl-2.1.22-7.el5_8.1 cyrus-sasl-plain-2.1.22-7.el5_8.1 dbus-1.1.2-16.el5_7 dbus-python-0.70-9.el5_4 device-mapper-1.02.67-2.el5 dhcdbd-2.2-2.el5 diffutils-2.8.1-15.2.3.el5 dmraid-1.0.0.rc13-65.el5 dos2unix-3.1-27.2.el5 e2fsprogs-1.39-34.el5_8.1 eject-2.1.5-4.2.el5 emacs-21.4-24.el5 expat-1.95.8-11.el5_8 file-4.17-21 finger-0.17-33 firstboot-tui-1.4.27.9-1.el5 ftp-0.17-37.el5 gawk-3.1.5-15.el5 gd-2.0.33-9.4.el5_4.2 gettext-0.17-1.el5 glibc-2.5-81.el5_8.7 glibc-headers-2.5-81.el5_8.7 gnupg-1.4.5-14.el5_5.1 gpm-1.20.1-74.1 grub-0.97-13.5 hal-0.5.8.1-62.el5 hicolor-icon-theme-0.9-2.1 httpd-2.2.3-65.el5_8
beecrypt-4.1.2-10.1.1 binutils-2.17.50.0.6-20.el5_8.3 bluez-libs-3.7-1.1 bzip2-1.0.3-6.el5_5 ccid-1.3.8-1.el5 chkfontpath-1.10.1-1.1 conman-0.1.9.2-8.el5 cpio-2.6-23.el5_4.1 cracklib-2.8.9-3.3 crontabs-1.10-8 cups-libs-1.3.7-30.el5 cyrus-sasl-devel-2.1.22-7.el5_8.1 db4-4.3.29-10.el5_5.2 dbus-glib-0.73-10.el5_5 Deployment_Guide-en-US-5.8-1.el5 device-mapper-event-1.02.67-2.el5 dhclient-3.0.5-31.el5_8.1 distcache-1.4.5-14.1 dmraid-events-1.0.0.rc13-65.el5 dosfstools-2.11-9.el5 e2fsprogs-libs-1.39-34.el5_8.1 elfutils-libelf-0.137-3.el5 emacs-common-21.4-24.el5 expat-devel-1.95.8-11.el5_8 filesystem-2.4.0-3.el5 fipscheck-1.2.0-1.el5 fontconfig-2.4.1-7.el5 gamin-0.1.7-10.el5 gcc-4.1.2-52.el5_8.1 gdb-7.0.1-42.el5_8.1 giflib-4.1.3-7.3.3.el5 glibc-common-2.5-81.el5_8.7 glibc-utils-2.5-81.el5_8.7 gnutls-1.4.1-7.el5_8.2 grep-2.5.1-55.el5 gtk2-2.10.4-21.el5_7.7 hdparm-6.6-2 hmaccalc-0.9.6-4.el5 httpd-devel-2.2.3-65.el5_8
hwdata-0.213.26-1.el5 initscripts-8.45.42-1.el5_8.1 ipsec-tools-0.6.5-14.el5_8.5 iptstate-1.4-2.el5 irqbalance-0.55-15.el5 kbd-1.12-21.el5 kernel-headers-2.6.18-308.16.1.el5 kpartx-0.4.7-48.el5_8.1 ksh-20100621-5.el5_8.1 lftp-3.7.11-7.el5 libattr-2.4.32-1.1 libdrm-2.0.2-1.1 libFS-1.0.0-3.1 libgcrypt-devel-1.4.4-5.el5_8.2 libgpg-error-devel-1.4-2 libICE-1.0.1-2.1 libjpeg-6b-37 libpcap-0.9.4-15.el5 libselinux-python-1.33.4-5.7.el5 libsepol-1.15.2-3.el5 libstdc++-4.1.2-52.el5_8.1 libtiff-3.8.2-15.el5_8 libutempter-1.1.4-4.el5 libX11-1.0.3-11.el5_7.1 libXdmcp-1.0.1-2.1 libXfont-1.2.2-1.0.4.el5_7 libXinerama-1.0.1-2.1 libxml2-python-2.6.26-2.1.15.el5_8.5 libXrandr-1.1.1-3.3 libxslt-1.1.17-4.el5_8.3 libXtst-1.0.1-3.1 logrotate-3.7.4-12 lvm2-2.02.88-7.el5 mailcap-2.1.23-1.fc6 MAKEDEV-3.23-1.2 mcelog-0.9pre-1.32.el5 mesa-libGL-6.5.1-7.10.el5 mingetty-1.07-5.2.2 mktemp-1.5-24.el5
ifd-egate-0.05-17.el5 iozone-3-398 iptables-1.3.5-9.1.el5 iputils-20020927-46.el5 iscsi-initiator-utils-6.2.0.872-13.el5 kernel-2.6.18-308.16.1.el5 kexec-tools-1.102pre-154.el5_8.1 krb5-libs-1.6.1-70.el5 kudzu-1.2.57.1.26-3 libacl-2.2.39-8.el5 libcap-1.10-26 libevent-1.4.13-1 libgcc-4.1.2-52.el5_8.1 libgomp-4.4.6-3.el5.1 libgssapi-0.10-2 libIDL-0.8.7-1.fc6 libnl-1.0-0.10.pre5.5 libpng-1.2.10-17.el5_8 libselinux-utils-1.33.4-5.7.el5 libSM-1.0.1-3.1 libsysfs-2.1.0-1.el5 libusb-0.1.12-6.el5 libvolume_id-095-14.27.el5_7.1 libXau-1.0.1-3.1 libXext-1.0.1-2.1 libXft-2.1.10-1.1 libxml2-2.6.26-2.1.15.el5_8.5 libXmu-1.0.2-5 libXrender-0.9.1-3.1 libxslt-devel-1.1.17-4.el5_8.3 libXxf86vm-1.0.1-3.1 logwatch-7.3-9.el5_6 m2crypto-0.16-8.el5 mailx-8.1.1-44.2.2 man-1.6d-2.el5 mcstrans-0.2.11-3.el5 mgetty-1.1.33-9.fc6 mkbootdisk-1.5.3-2.1 mlocate-0.15-1.el5.2
info-4.8-14.el5 iproute-2.6.18-13.el5 iptables-ipv6-1.3.5-9.1.el5 irda-utils-0.9.17-2.fc6 jwhois-3.2.3-12.el5 kernel-devel-2.6.18-308.16.1.el5 keyutils-libs-1.2-1.el5 krb5-workstation-1.6.1-70.el5 less-436-9.el5 libaio-0.3.106-5 libdaemon-0.10-5.el5 libfontenc-1.0.2-2.2.el5 libgcrypt-1.4.4-5.el5_8.2 libgpg-error-1.4-2 libhugetlbfs-1.3-8.2.el5 libidn-0.6.5-1.1 libnotify-0.4.2-6.el5 libselinux-1.33.4-5.7.el5 libsemanage-1.9.1-4.4.el5 libsmbclient-3.0.33-3.39.el5_8 libtermcap-2.0.8-46.1 libuser-0.54.7-2.1.el5_5.2 libwnck-2.16.0-4.fc6 libXcursor-1.1.7-1.2 libXfixes-4.0.1-2.1 libXi-1.0.1-4.el5_4 libxml2-devel-2.6.26-2.1.15.el5_8.5 libXpm-3.5.5-3 libXres-1.0.1-3.1 libXt-1.0.2-3.2.el5 lm_sensors-2.10.7-9.el5 lsof-4.78-6 m4-1.4.5-3.el5.1 make-3.81-3.el5 man-pages-2.39-20.el5 mdadm-2.6.9-3.el5 microcode_ctl-1.17-1.56.el5 mkinitrd-5.1.19.6-75.el5 module-init-tools-3.3-0.pre3.1.60.el5_5.1
mod_ssl-2.2.3-65.el5_8 mtr-0.71-3.1 nc-1.84-10.fc6 net-snmp-5.3.2.2-17.el5_8.1 net-tools-1.60-82.el5 newt-0.52.2-15.el5 nmap-6.01-1.rhel5 nspr-4.9.1-4.el5_8 nss_db-2.2-35.4.el5_5 ntsysv-1.3.30.2-2.el5 OpenIPMI-libs-2.0.16-12.el5 openldap-devel-2.3.43-25.el5_8.1 openssh-server-4.3p2-82.el5 ORBit2-2.14.3-5.el5 pam_krb5-2.2.14-22.el5 pam_smb-1.1.7-7.2.1 parted-1.8.1-29.el5 pax-3.4-2.el5 pcre-6.6-6.el5_6.1 perl-5.8.8-38.el5 perl-URI-1.35-3 pkinit-nss-0.7.6-1.el5 poppler-0.5.4-19.el5 portmap-4.0-65.2.2.1 prelink-0.4.0-2.el5 psacct-6.3.2-44.el5 pyOpenSSL-0.6-2.el5 python-elementtree-1.2.6-5 python-sqlite-1.1.7-1.2.1 rcs-5.7-30.1 readahead-1.3-8.el5 redhat-lsb-4.0-2.1.4.el5 redhat-release-notes-5Server-43 rhn-client-tools-0.4.20-77.el5 rhnsd-4.7.0-10.el5 rng-utils-2.0-5.el5 rpm-4.4.2.3-28.el5_8 rsh-0.17-40.el5_7.1 samba-client-3.0.33-3.39.el5_8
mozldap-6.0.5-1.el5 nano-1.3.12-1.1 ncurses-5.5-24.20060715 net-snmp-libs-5.3.2.2-17.el5_8.1 NetworkManager-0.7.0-13.el5 nfs-utils-1.0.9-60.el5 notification-daemon-0.3.5-9.el5 nss-3.13.5-4.el5_8 nss_ldap-253-49.el5 numactl-0.9.8-12.el5_6 openldap-2.3.43-25.el5_8.1 openssh-4.3p2-82.el5 openssl-0.9.8e-22.el5_8.4 pam-0.99.6.2-6.el5_5.2 pam_passwdqc-1.0.2-1.2.2 pango-1.14.9-8.el5_7.3 passwd-0.73-2 pciutils-3.1.7-5.el5 pcsc-lite-1.4.4-4.el5_5 perl-Convert-ASN1-0.20-1.1 pinfo-0.6.9-1.fc6 pm-utils-0.99.3-10.el5 poppler-utils-0.5.4-19.el5 postgresql-libs-8.1.23-6.el5_8 procmail-3.22-17.1 psmisc-22.2-7.el5_6.2 python-2.4.3-46.el5_8.2 python-iniparse-0.2.3-4.el5 python-urlgrabber-3.1.0-6.el5 rdate-1.4-8.el5 readline-5.1-3.el5 redhat-menus-6.7.8-3.el5 rhel-instnum-1.0.9-1.el5 rhn-setup-0.4.20-77.el5 rhpl-0.194.1-2 rootfiles-8.1-1.1.1 rpm-libs-4.4.2.3-28.el5_8 rsync-3.0.6-4.el5_7.1 samba-common-3.0.33-3.39.el5_8
mtools-3.9.10-2.fc6 nash-5.1.19.6-75.el5 neon-0.25.5-10.el5_4.1 net-snmp-utils-5.3.2.2-17.el5_8.1 NetworkManager-glib-0.7.0-13.el5 nfs-utils-lib-1.0.8-7.9.el5 nscd-2.5-81.el5_8.7 nss-tools-3.13.5-4.el5_8 ntp-4.2.2p1-15.el5_7.1 OpenIPMI-2.0.16-12.el5 openldap-clients-2.3.43-25.el5_8.1 openssh-clients-4.3p2-82.el5 openssl097a-0.9.7a-11.el5_8.2 pam_ccreds-3-5 pam_pkcs11-0.5.3-26.el5 paps-0.6.6-20.el5 patch-2.5.4-31.el5 pcmciautils-014-5 pcsc-lite-libs-1.4.4-4.el5_5 perl-String-CRC32-1.4-2.fc6 pkgconfig-0.21-2.el5 policycoreutils-1.33.12-14.8.el5 popt-1.10.2.3-28.el5_8 ppp-2.4.4-2.el5 procps-3.2.7-18.el5 pygobject2-2.12.1-5.el5 python-dmidecode-3.10.13-1.el5_5.1 python-libs-2.4.3-46.el5_8.2 quota-3.13-5.el5 rdist-6.1.5-44 redhat-logos-4.9.16-1 redhat-release-5Server-5.8.0.3 rhn-check-0.4.20-77.el5 rhnlib-2.5.22-7.el5 rmt-0.4b41-6.el5 rp-pppoe-3.5-32.1 rpm-python-4.4.2.3-28.el5_8 samba-3.0.33-3.39.el5_8 screen-4.0.3-4.el5
sed-4.1.5-8.el5 sendmail-8.13.8-8.1.el5_7 setools-3.0-3.el5 setuptool-1.19.2-1 slang-2.0.6-4.el5 specspo-13-1.el5 strace-4.5.18-11.el5_8 sudo-1.7.2p1-14.el5_8.4 sysfsutils-2.1.0-1.el5 sysstat-7.0.2-11.el5 SysVinit-2.86-17.el5 tcl-8.4.13-4.el5 tcsh-6.14-17.el5_5.2 tideway-9.0-292607.rhel5.dev tideway-devices-4.0.2012.09.2-290770.ga tk-8.4.13-5.el5_1.1 tree-1.5.0-4 tw-bsddb3-5.3.0-1.py2.7.rhel5 tw-docutils-0.8.1-3.py2.7.rhel5 tw-ipaddr-2.1.10-py2.7.rhel5 tw-omniORB-4.1.99.6105-1.py2.7.rhel5 tw-omniORB-devel-4.1.99.6105-1.py2.7.rhel5 tw-omniORB-utils-4.1.99.6105-1.py2.7.rhel5 tw-omniORBpy-devel-4.1.99.6105-1.py2.7.rhel5 tw-pexpect-2.5.1-1.beta.py2.7.rhel5 tw-pycrypto-2.0.1-3.py2.7.rhel5 tw-python-debuginfo-2.7.3-4.rhel5 tw-python-tkinter-2.7.3-4.rhel5 tw-reportlab-2.5-3.py2.7.rhel5 tw-sshpass-1.05-1.rhel5 tw-tripwire-2.4.2-1.rhel5 udftools-1.0.0b3-0.1.el5 usbutils-0.71-2.1 vconfig-1.9-3 vim-minimal-7.0.109-7.el5 wget-1.11.4-3.el5_8.2 words-3.0-9.1 xinetd-2.3.14-16.el5 xorg-x11-fonts-ISO8859-1-75dpi-7.1-2.1.el5
selinux-policy-2.4.6-327.el5 sendmail-cf-8.13.8-8.1.el5_7 setserial-2.17-19.2.2 sgpio-1.2.0_10-2.el5 smartmontools-5.38-3.el5 sqlite-3.3.6-5 stunnel-4.15-2.el5.1 svrcore-4.0.4-3.el5 sysklogd-1.4.1-46.el5 system-config-network-tui-1.3.99.21-1.el5 talk-0.17-31.el5 tcpdump-3.9.4-15.el5 telnet-0.17-39.el5 tideway-appliance-5.12.10.65-291388.rhel5 tideway-documentation-9.0-292582.rhel5.dev tmpwatch-2.9.7-1.1.el5.5 ttmkfdir-3.0.9-23.el5 tw-db-5.3.15-2.py2.7.rhel5 tw-egenix-mx-base-3.2.4-1.py2.7.rhel5 tw-jsvc-1.0.10-2.rhel5 tw-omniORB-bootscripts-4.1.99.6105-1.py2.7.rhel5 tw-omniORB-doc-4.1.99.6105-1.py2.7.rhel5 tw-omniORBpy-4.1.99.6105-1.py2.7.rhel5 tw-omniORBpy-doc-4.1.99.6105-1.py2.7.rhel5 tw-PIL-1.1.7-2.py2.7.rhel5 tw-pyOpenSSL-0.11-1.py2.7.rhel5 tw-python-devel-2.7.3-4.rhel5 tw-python-tools-2.7.3-4.rhel5 tw-snmp++-3.2.25-6.rhel5 tw-testoob-1.15-3.py2.7.rhel5 tzdata-2012f-1.el5 unix2dos-2.2-26.2.3.el5 usermode-1.88-3.el5.2 vim-common-7.0.109-7.el5 vixie-cron-4.1-81.el5 which-2.16-7 wpa_supplicant-0.5.10-9.el5 xorg-x11-filesystem-7.1-2.fc6 xorg-x11-xfs-1.0.2-5.el5_6.1
selinux-policy-targeted-2.4.6-327.el5 setarch-2.0-1.1 setup-2.5.58-9.el5 shadow-utils-4.0.17-20.el5 sos-1.7-9.62.el5 startup-notification-0.8-4.1 subversion-1.6.11-10.el5_8 symlinks-1.2-24.2.2 syslinux-3.11-7 system-config-securitylevel-tui-1.6.29.1-6.el5 tar-1.15.1-32.el5_8 tcp_wrappers-7.6-40.7.el5 termcap-5.5-1.20060701.1 tideway-device-api-4.0-1.rhel5 time-1.7-27.2.2 traceroute-2.0.1-6.el5 tw-atrium-1.2-1.rhel5 tw-db-debuginfo-5.3.15-2.py2.7.rhel5 tw-esmre-0.3.1-2.py2.7.rhel5 tw-M2Crypto-0.21.1-1.rhel5 tw-omniORB-debuginfo-4.1.99.6105-1.py2.7.rhel5 tw-omniORB-servers-4.1.99.6105-1.py2.7.rhel5 tw-omniORBpy-debuginfo-4.1.99.6105-1.py2.7.rhel5 tw-omniORBpy-standard-4.1.99.6105-1.py2.7.rhel5 tw-ply-3.4-1.py2.7.rhel5 tw-python-2.7.3-4.rhel5 tw-python-ldap-2.3.13-2.py2.7.rhel5 tw-PyXML-0.8.4-4.py2.7.rhel5 tw-snmp++-debuginfo-3.2.25-5 tw-tomcat-6.0.35-2.rhel5 udev-095-14.27.el5_7.1 unzip-5.52-3.el5 util-linux-2.13-0.59.el5 vim-enhanced-7.0.109-7.el5 Webware-1.1-1.py2.7.rhel5 wireless-tools-28-2.el5 Xaw3d-1.5E-10.1 xorg-x11-font-utils-7.1-3 yp-tools-2.9-2.el5
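The package listings above follow the RPM NAME-VERSION-RELEASE labeling convention (the format produced by `rpm -qa`). As an illustrative sketch only, and not part of the product, entries like these can be split by cutting off the last two hyphen-separated fields, because the package name itself may contain hyphens:

```python
def parse_rpm_label(label):
    """Split an RPM NAME-VERSION-RELEASE label into its three fields.

    The name may itself contain hyphens (e.g. "net-snmp-libs"), so we
    split from the right: the last two fields are version and release.
    """
    name, version, release = label.rsplit("-", 2)
    return name, version, release

for label in ("openssl-0.9.8e-22.el5_8.4", "net-snmp-libs-5.3.2.2-17.el5_8.1"):
    print(parse_rpm_label(label))
```

This heuristic relies on the RPM convention that version and release strings never contain hyphens, which holds for every entry in the listings above.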
Version naming convention change The software support policy and versioning conventions changed for BMC Enterprise Systems Management (ESM) software released as GA products after September 11, 2011. This date fell between the end of the 8.2 releases and the start of the 8.3 releases of BMC Atrium Discovery. Software products are now versioned as VV.RR.SP, where VV is the major version, RR is the minor release, and SP is the Service Pack.
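For illustration, the VV.RR.SP convention can be read mechanically. The sketch below is an assumption about the notation, not an official BMC tool; it also assumes that a release such as 9.0 SP1 would be written "9.0.1", and that a version with no Service Pack component is Service Pack 0:

```python
def parse_esm_version(version):
    """Split a VV.RR.SP string into (major, minor, service_pack).

    A missing Service Pack component (e.g. "9.0") is treated as 0;
    that convention is an assumption, not taken from the bulletin.
    """
    parts = [int(p) for p in version.split(".")]
    while len(parts) < 3:
        parts.append(0)
    major, minor, service_pack = parts
    return major, minor, service_pack

print(parse_esm_version("9.0.1"))
print(parse_esm_version("8.3"))
```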
When a product enters limited support: New enhancements are not made to that version of the product. For reported issues, BMC Software Customer Support directs customers to existing fixes, patches, and workarounds. New Technology Knowledge Update (TKU) releases are not available for that version of the product. During the limited support period, BMC Software strongly encourages users of the affected versions of the product to upgrade to the latest current version.
To upgrade to version 9.0 SP1 on RHEL 5: You can upgrade to version 9.0 SP1 on RHEL 5 from 8.3.x and 9.0 on RHEL 5. To upgrade to version 9.0 SP1 on RHEL 6: You can upgrade to version 9.0 SP1 on RHEL 6 from 9.0 on RHEL 6.
BMC Atrium Discovery 8.3, 8.3 SP1, 8.3 SP2, and 8.3 SP3
To upgrade to version 9.0 on RHEL 5: You can upgrade directly to version 9.0 on RHEL 5. To upgrade to version 9.0 on RHEL 6: There is no upgrade path to version 9.0 on RHEL 6. You must migrate your 8.3.x data to a new installation of BMC Atrium Discovery version 9.0.
To upgrade to version 9.0 on RHEL 5: You must incrementally upgrade to version 8.3 and then to version 9.0 on RHEL 5. To upgrade to version 9.0 on RHEL 6: There is no upgrade path to version 9.0 on RHEL 6. Rather, you must first upgrade to version 8.3 and then migrate your 8.3.x data to a new installation of BMC Atrium Discovery version 9.0.
To upgrade to version 9.0 on RHEL 5: You must incrementally upgrade to versions 8.2 and 8.3. From version 8.3.x, you can upgrade to version 9.0 on RHEL 5. To upgrade to version 9.0 on RHEL 6: There is no upgrade path to version 9.0 on RHEL 6. You must first incrementally upgrade to versions 8.2, and then to 8.3. From version 8.3.x, you can migrate your data to a new installation of BMC Atrium Discovery version 9.0.
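The documented paths above can be summarized as a lookup table. The sketch below is purely illustrative; the table structure and function names are assumptions for this example, not a BMC API:

```python
# Illustrative summary of the documented upgrade matrix.
# Keys are (starting release family, target); values are the ordered steps.
UPGRADE_PATHS = {
    ("8.3.x", "9.0 on RHEL 5"): ["upgrade to 9.0"],
    ("8.3.x", "9.0 on RHEL 6"): ["migrate 8.3.x data to a new 9.0 installation"],
    ("8.2.x", "9.0 on RHEL 5"): ["upgrade to 8.3", "upgrade to 9.0"],
    ("8.2.x", "9.0 on RHEL 6"): ["upgrade to 8.3",
                                 "migrate 8.3.x data to a new 9.0 installation"],
    ("8.1.x", "9.0 on RHEL 5"): ["upgrade to 8.2", "upgrade to 8.3",
                                 "upgrade to 9.0"],
    ("8.1.x", "9.0 on RHEL 6"): ["upgrade to 8.2", "upgrade to 8.3",
                                 "migrate 8.3.x data to a new 9.0 installation"],
}

def steps_to(start, target):
    """Return the documented upgrade steps, or a placeholder if none exist."""
    return UPGRADE_PATHS.get((start, target), ["no documented upgrade path"])

print(steps_to("8.2.x", "9.0 on RHEL 5"))
```

Note the pattern the table makes explicit: RHEL 5 targets are reached by in-place upgrades, while every RHEL 6 target ends in a data migration to a fresh installation.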
Upgrading to version 8.2 Upgrading to version 8.3 Upgrading to version 9.0 Migrating from version 8.3.x to 9.0 Migrating from version 7.5 to 9.0
BMC Atrium Discovery 7.5 and 7.5.01 BMC Foundation Discovery and BMC Topology Discovery 1.6
There is no upgrade path from BMC Atrium Discovery version 7.5 to version 9.0. Rather, you need to migrate your data to a new installation of BMC Atrium Discovery version 9.0. From BMC Foundation Discovery and BMC Topology Discovery 1.6, upgrading to a newer version of both BMC Atrium Discovery and BMC Atrium CMDB may require a contract modification; please consult your BMC Software account representative for more information about upgrading to the latest versions.
From BMC Atrium Discovery 8.0, upgrading to version 9.0 requires incremental upgrades to versions 8.1, 8.2, and then to 8.3. From version 8.3, you can do one of the following: Upgrade to version 9.0 on RHEL 5 Migrate your data to a new installation of version 9.0 on RHEL 6
Upgrading to Version 8.1 Upgrading to Version 8.2 Upgrading to Version 8.3 Upgrading to version 9.0 Migrating from version 8.3.x to 9.0
From Tideway Foundation 7.3 and 7.3.1, upgrading to version 9.0 requires incremental upgrades to versions 8.1, 8.2, and then to 8.3. From version 8.3, you can do one of the following: Upgrade to version 9.0 on RHEL 5 Migrate your data to a new installation of version 9.0 on RHEL 6
Upgrading to Version 8.1 Upgrading to Version 8.2 Upgrading to Version 8.3 Upgrading to version 9.0 Migrating from version 8.3.x to 9.0
BMC Foundation Discovery and BMC Topology Discovery 1.5 and 1.5.01
From BMC Foundation Discovery and BMC Topology Discovery 1.5 and 1.5.01, upgrading to a newer version of both BMC Atrium Discovery and BMC Atrium CMDB may require a contract modification; please consult your BMC Software account representative for more information about upgrading to the latest versions.
Technical bulletin
Apache
Copyright 1999-2011 The Apache Software Foundation
Apache License, Version 2.0 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. 
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity
(including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: 1. You must give any other recipients of the Work or Derivative Works a copy of this License; and 2. You must cause any modified files to carry prominent notices stating that You changed the files; and 3. You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and 4. If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. 
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent
acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
BMC ADDM
Copyright 2003-2012 BMC Software. All rights reserved.
END USER LICENSE AGREEMENT BY OPENING THE PACKAGE, INSTALLING, PRESSING "AGREE" OR "YES" OR USING THE PRODUCT, THE ENTITY OR INDIVIDUAL ENTERING INTO THIS AGREEMENT AGREES TO BE BOUND BY THE FOLLOWING TERMS. IF YOU DO NOT AGREE WITH ANY OF THESE TERMS, DO NOT INSTALL OR USE THE PRODUCT, PROMPTLY RETURN THE PRODUCT TO BMC OR YOUR BMC RESELLER, AND IF YOU ACQUIRED THE LICENSE WITHIN 15 DAYS OF THE DATE OF YOUR ORDER CONTACT BMC OR YOUR BMC RESELLER FOR A REFUND OF LICENSE FEES PAID. IF YOU REJECT THIS AGREEMENT, YOU WILL NOT ACQUIRE ANY LICENSE TO USE THE PRODUCT. This Agreement ("Agreement") is between the entity or individual entering into this Agreement ("Customer") and the BMC Entity for the applicable Territory as described in Section 19 ("BMC"). In addition to the restrictions imposed under this Agreement, any other usage restrictions contained in the Product installation instructions or release notes shall apply to your use of the Product. Territory: The country where Customer acquired the license.
1. GENERAL DEFINITIONS. "Affiliate" is an entity that controls, is controlled by or shares common control with BMC or Customer, where such control arises from either (a) a direct or indirect ownership interest of more than 50% or (b) the power to direct or cause the direction of the management and policies, whether through the ownership of voting stock by contract, or otherwise, equal to that provided by a direct or indirect ownership of more than 50%. "Documentation" means the technical publications relating to the software, such as release notes, reference, user, installation, systems administrator and technical guidelines, included with the Product. "Licensed Capacity" is the amount of each Product licensed as established in the Order. "Order" is an agreed written or electronic document, subject to the terms of this Agreement that identifies the Products to be licensed and their Licensed Capacity and/or the Support to be purchased and the fees to be paid. "Product" is the object code of the software and all accompanying Documentation delivered to Customer, including all items delivered by BMC to Customer under Support. "Support" is the support services program as further specified in this Agreement. 2. SCOPE. Licenses are granted, and Support is obtained, solely by execution of Orders. Each Order is deemed to be a discrete contract, separate from each other Order, unless expressly stated otherwise therein, and in the event of a direct conflict between any Order and the terms of this Agreement, the terms of the Order will control only if the Order is executed by an authorized representative of each party. Orders may be entered under this Agreement by and between (a) BMC or an Affiliate of BMC; and (b) the Customer or an Affiliate of Customer. With respect to an Order, the terms "BMC" and "Customer" as
used in this Agreement will be deemed to refer to the entities that execute that Order, the Order will be considered a two party agreement between such entities, and BMC will separately invoice the Customer named in the Order for the associated License fees and Support fees. Neither execution of this Agreement, nor anything contained herein, shall obligate either party to enter into any Orders. In the event an Order is proposed by BMC and is deemed to constitute an offer, then acceptance of such offer is limited to its terms. In the event Customer proposes an Order by submitting a purchase order, then regardless of whether BMC acknowledges, accepts or fully or partially performs under such purchase order, BMC OBJECTS to any additional or different terms in the purchase order, other than those that establish Product, price and Licensed Capacity in accordance with this Agreement. 3. LICENSE. Subject to the terms of this Agreement, BMC grants Customer a non-exclusive, non-transferable, non-sub-licensable perpetual (unless a non-perpetual license is provided on an Order) license, as specified in the relevant Order, to exercise the following rights to the Product up to the Licensed Capacity: (a) copy the Product for the purpose of installing it on Customer's owned or leased hardware at a facility owned or controlled by Customer in the Territory; (b) operate solely for Customer's and its Affiliates' own internal business operations; and (c) make one copy of the Product for archival purposes only (collectively a "License"). Affiliates may use and access the Products and Support under the terms of this Agreement, and Customer is responsible for its Affiliates' compliance with the terms of this Agreement. 4. RESTRICTIONS.
Customer will not: (a) copy, operate or use any Product in excess of the applicable Licensed Capacity; (b) modify, delete or remove any ownership, title, trademark, patent or copyright notices ("Identification") from any Product; (c) copy any Product or any portion of any Product without reproducing all Identification on each copy or partial copy; (d) disassemble, reverse engineer, decompile or otherwise attempt to derive any Product source code from object code, except to the extent expressly permitted by applicable law despite this limitation without possibility of contractual waiver; (e) distribute, rent, lease, sublicense or provide the Product to any third party or use it in a service bureau, outsourcing environment, or for the processing of third party data; (f) provide a third party with the results of any functional evaluation, or performance tests, without BMC's prior written approval; (g) attempt to disable or circumvent any of the licensing mechanisms within the Product; or (h) violate any other usage restrictions contained in the Documentation. 5. PRODUCT PERFORMANCE WARRANTY. 
BMC warrants that (a) the Product will perform in substantial accordance with its Documentation for a period of one year from the date of the first Order, (b) BMC has used commercially reasonable efforts consistent with industry standards to scan for and remove software viruses, and (c) other than passwords that may be required for the operation of the Product, BMC has not inserted any code that is not addressed in the Documentation and that is designed to delete, interfere with or disable the normal operation of the Products in accordance with the License. This warranty will not apply to any problems caused by hardware, software other than the Product, misuse of the Product, use of the Product other than as provided by the applicable License, modification of the Product, or claims made either outside the warranty period or not in compliance with the notice and access requirements set forth below. No warranty is provided for additional Licensed Capacity, Product provided pursuant to Support or Product provided pursuant to Section 11. 6. LIMITED REMEDIES. BMC's entire liability, and Customer's exclusive remedy, for breach of the above warranty is limited to: BMC's use of commercially reasonable efforts to have the Product perform in substantial accordance with its Documentation, or replacement of the non-conforming Product within a reasonable period of time, or if BMC cannot have the Product perform in substantial accordance with its Documentation or replace the Product within such
time period, then BMC will refund the amount paid by Customer for the License for that Product. Customer's rights and BMC's obligations in this section are conditioned upon Customer's providing BMC during the warranty period (a) full cooperation and access to the Product in resolving any claim; and (b) written notice addressed to the BMC Legal Department that includes notice of the claim, a complete description of the alleged defects sufficient to permit their reproduction in BMC's development or support environment, and a specific reference to the Documentation to which such alleged defects are contrary. 7. DISCLAIMER OF WARRANTIES. EXCEPT FOR THE EXPRESS WARRANTIES IN THIS AGREEMENT, THE PRODUCT IS PROVIDED WITH NO OTHER WARRANTIES WHATSOEVER, AND BMC, ITS AFFILIATES AND LICENSORS DISCLAIM ALL OTHER WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. BMC DOES NOT WARRANT THAT THE OPERATION OF THE PRODUCT WILL BE UNINTERRUPTED OR ERROR FREE, OR THAT ALL DEFECTS CAN BE CORRECTED. 8. PAYMENTS AND DELIVERY. Customer will pay each License fee and/or Support fee upon receipt of invoice. Customer will pay, or reimburse, BMC or when required by law the appropriate governmental agency for taxes of any kind, including sales, use, VAT, excise, customs duties, withholding, property, and other similar taxes (other than taxes based on BMC's net income) imposed in connection with the License and/or the Support fees which are exclusive of these taxes. For Products that are delivered electronically, upon request from BMC, Customer agrees to provide BMC with Documentation supporting that the designated Product was received electronically. If Customer accepts any Product in a non-electronic format, there may be an additional charge and it is the sole responsibility of Customer to bear any sales/use tax obligation, penalties, and interest. 
The unpaid balance of each late payment bears interest at a rate equal to the lesser of 1% per month or the maximum amount permitted by law. All Products are licensed FCA ("Free Carrier" as per Incoterms 2000) shipping point. The Products are accepted on the date BMC delivers the Product to the Customer either physically or by providing access codes for electronic download, whichever occurs first, however, such acceptance will not affect the Product Performance Warranty provided in this Agreement. 9. PROPRIETARY RIGHTS AND CONFIDENTIALITY. (a) BMC, its Affiliates or licensors retain all right, title and interest to the Product, Support and all related intellectual property and proprietary rights. The Product and all third party software provided with the Product are protected by applicable copyright, trade secret, industrial and other intellectual property laws. Customer may not remove any product identification, copyright, trademark or other notice from the Product. BMC reserves any rights not expressly granted to Customer in this Agreement. (b) "Confidential Information" means all proprietary or confidential information that is disclosed to the recipient ("Recipient") by the discloser ("Discloser"), and includes, among other things: (i) any and all information relating to Discloser's financial information, customers, employees, products or services, including, without limitation, software code, flow charts, techniques, specifications, development and marketing plans, strategies, forecasts, and proposal related documents and responses; (ii) as to BMC, and its licensors, the Product and any third party software provided with the Product; and (iii) the terms of this Agreement, including without limitation, Product pricing information.
Confidential Information does not include information that Recipient can show: (a) was rightfully in Recipient's possession without any obligation of confidentiality before receipt from the Discloser; (b) is or becomes a matter of public knowledge through no fault of Recipient; (c) is rightfully received by Recipient from a third party without violation of a duty of confidentiality; or (d) is independently developed by or for Recipient. Recipient may not disclose Confidential Information of Discloser to any third party or use the Confidential Information in violation of this Agreement. The Recipient (i) will exercise the same degree of care and protection with respect to the Confidential Information of the Discloser that it
exercises with respect to its own Confidential Information and (ii) will not, either directly or indirectly, disclose, copy, distribute, republish, or allow any third party to have access to any Confidential Information of the Discloser. Notwithstanding the foregoing, Recipient may disclose Discloser's Confidential Information to Recipient's employees and agents who have the need to know provided that such employees and agents have legal obligations of confidentiality substantially the same (and in no case less protective) as the provisions of this Agreement. (c) Notification Obligation. If the Recipient becomes aware of any unauthorized use or disclosure of Discloser's Confidential Information, then Recipient will promptly and fully notify the Discloser of all facts known to it concerning such unauthorized use or disclosure. In addition, if the Recipient or any of its employees or agents are required (by oral questions, interrogatories, requests for information, or documents in legal proceedings, subpoena, civil investigative demand, or other similar process) to disclose any of Discloser's Confidential Information, the Recipient will not disclose the Discloser's Confidential Information without providing the Discloser with commercially reasonable advance prior written notice to allow Discloser to seek a protective order or other appropriate remedy or to waive compliance with this provision. In any event, the Recipient will exercise its commercially reasonable efforts to preserve the confidentiality of the Discloser's Confidential Information, including, without limitation, cooperating with Discloser to obtain an appropriate protective order or other reliable assurance that confidential treatment will be accorded to the Confidential Information. Notwithstanding the foregoing, Customer agrees that BMC may include Customer's name on customer lists. 10. DISCLAIMER OF DAMAGES; LIMITS ON LIABILITY. 
EXCEPT FOR VIOLATIONS OF LICENSE (SECTION 3), LICENSE RESTRICTIONS (SECTION 4), PROPRIETARY RIGHTS AND CONFIDENTIALITY (SECTION 9) AND FOR INFRINGEMENT CLAIMS (SECTION 12), NEITHER PARTY, ITS AFFILIATES OR BMC'S LICENSORS ARE LIABLE FOR (A) ANY SPECIAL, INDIRECT, INCIDENTAL, PUNITIVE OR CONSEQUENTIAL DAMAGES RELATING TO OR ARISING OUT OF THIS AGREEMENT, SUPPORT, THE PRODUCT OR ANY THIRD PARTY CODE OR SOFTWARE PROVIDED WITH THE PRODUCT (INCLUDING, WITHOUT LIMITATION, LOST PROFITS, LOST COMPUTER USAGE TIME, AND DAMAGE TO, OR LOSS OF USE OF, DATA), EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES, AND IRRESPECTIVE OF THE NEGLIGENCE OF A PARTY OR WHETHER SUCH DAMAGES RESULT FROM A CLAIM ARISING UNDER TORT OR CONTRACT LAW, OR (B) DAMAGES OF ANY KIND IN AN AMOUNT GREATER THAN THE AMOUNT OF ACTUAL, DIRECT DAMAGES UP TO THE CAP. THE TERM "CAP" MEANS (I) IF BMC IS THE PAYOR, THE AMOUNT PAID BY CUSTOMER FOR THE LICENSE TO THE PRODUCT GIVING RISE TO SUCH DAMAGES AND (II) IF CUSTOMER IS THE PAYOR, THE GREATER OF THE AMOUNT PAID OR PAYABLE BY CUSTOMER FOR THE LICENSE TO THE PRODUCT GIVING RISE TO SUCH DAMAGES. 11. TRIAL LICENSE. BMC may determine, in its sole discretion, to make products available to Customer without an Order and without charge. Such products are deemed to be "Products" pursuant to this Agreement except that (a) they are provided to Customer solely so that Customer may evaluate internally whether to acquire a license to the products for a fee; (b) the license term for such products is thirty (30) days; (c) the Products are provided "AS IS" and without any warranty or support; and (d) the products cannot be put into productive use or included as part of Customer's business processes in any manner, unless or until they are expressly licensed and paid for under an Order. BMC may terminate all of Customer's rights and licenses to these products for BMC's convenience upon notice to Customer. 12. INFRINGEMENT CLAIMS. 
If a third party asserts a claim against Customer asserting that Customer's use of a Product in accordance with this Agreement violates that third-party's patent, trade secret or copyright rights ("Infringement Claim"), then BMC will, at its own expense: (a) defend or settle the Infringement Claim; and (b) indemnify Customer for any damages finally awarded against Customer based on infringement by the Product. BMC's obligations under this Section will not apply if: (a) BMC's legal department does not receive
prompt, detailed written notice of the Infringement Claim from Customer, (b) BMC is not able to retain sole control of the defense of the Infringement Claim and all negotiations for its settlement or compromise, (c) BMC does not receive all reasonable assistance, or (d) the Infringement Claim is based on (i) the use of the Product in combination with products not approved by BMC in the Product's Documentation, (ii) the failure of Customer to use any updates to such Product within a reasonable time after such updates are made available to Customer, or (iii) the failure of Customer to use the Product as permitted by the Order and in accordance with the Documentation. BMC will not bind Customer to a monetary obligation in a settlement or compromise, or make an admission on behalf of Customer, without obtaining Customer's prior consent. If BMC determines in BMC's reasonable discretion that use of the Product should be stopped because of an Infringement Claim or potential Infringement Claim, if a court of competent jurisdiction enjoins Customer from using a Product as a result of an Infringement Claim and BMC is unable to have such injunction stayed or overturned, or if BMC settles an Infringement Claim on terms that would require Customer to stop using the Product, then BMC will, at its expense and election: (a) modify or replace the Product, (b) procure the right to continue using the Product, or (c) if in BMC's reasonable judgment neither (a) nor (b) is commercially reasonable, terminate Customer's License to the Product and (i) for any perpetual licenses, issue a refund based upon the applicable license fees paid, prorated over 48 months from the date of the Order under which the Products were initially licensed; and (ii) for any non-perpetual licenses, release Customer from its obligation to make future payments for the Product or issue a pro rata refund for any fees paid in advance. 
This Section contains Customer's exclusive remedies and BMC's sole liability for Infringement Claims. 13. TERMINATION. Upon thirty days advance written notice, either party may terminate this Agreement for its convenience on a prospective basis; however, such termination will have no effect on Orders executed by the parties prior to its effective date and such Orders will remain in full force and effect under the terms of this Agreement. BMC may: (i) terminate an Order and the Licenses to the Products on that Order if Customer fails to pay any applicable fees due under that Order within 30 days after receipt of written notice from BMC of non-payment; (ii) terminate any or all Orders, Licenses to the Products and/or this Agreement, without notice or cure period, if Customer violates the intellectual property rights of BMC, its Affiliates or licensors, or uses the Products outside of the scope of the applicable Licenses; or (iii) terminate all Licenses and this Agreement in whole or in part if Customer commits any other material breach of this Agreement and fails to correct the breach within 30 days after BMC notifies Customer in writing of the breach. Upon any termination of a License, Customer will immediately uninstall and stop using the relevant Product, and upon BMC's request, Customer will immediately return such Product to BMC, together with all related Documentation and copies, or certify its destruction in writing. Neither party is liable for its failure to perform any obligation under this Agreement, other than a payment obligation, during any period in which performance is delayed by circumstances beyond that party's reasonable control. 14. AUDIT. If requested by BMC not more than once a year, Customer agrees to deliver to BMC periodic product usage reports generated from specific products (when available) or written reports, whether generated manually or electronically, specifying Customer's use of the Product. 
Additionally, if requested by BMC not more than once a year, Customer agrees to allow BMC to perform an audit at Customer's facilities during normal business hours to ensure compliance with the terms of this Agreement. Customer agrees to cooperate during any such audit and to provide reasonable access to its information and systems. If an audit reveals that Customer has exceeded the Licensed Capacity for a Product, Customer agrees to pay the applicable fees for additional capacity. If the understated capacity exceeds 5% of the Licensed Capacity of the applicable Product, then Customer agrees to also pay BMC's reasonable costs of conducting the audit.
15. EXPORT CONTROLS. By using the Technology (as this term is defined below), Customer acknowledges that it is responsible for complying with the applicable laws and regulations of the United States and all other relevant countries relating to exports and re-exports. Customer agrees that it will not download, access, license or otherwise export or re-export, directly or indirectly, any software code (delivered as a BMC Product, through support/maintenance, or through other services), any technical publications relating to the software code, such as release notes, reference, user, installation, systems administrator and technical guidelines, or services (collectively, "Technology") in violation of any such laws and regulations, including regulations prohibiting export to certain restricted countries ("Restricted Countries"), or without any written governmental authorization required by such applicable laws. The list of Restricted Countries can and does change from time to time. It currently includes Cuba, Iran, North Korea, Sudan and Syria. In particular, but without limitation, the Technology may not be downloaded, licensed, transferred or otherwise exported or re-exported, directly or indirectly, including via remote access (a) into a Restricted Country or to a national or resident of a Restricted Country; (b) to anyone on the U.S. Treasury Department's list of Specially Designated Nationals or Other Blocked Persons, the U.S. Commerce Department's Denied Parties List, Entity List, or Unverified List; or (c) to or for any proliferation-related (nuclear weapons, missile technology, or chemical/biological weapons) end use. By downloading, licensing and/or using the Technology, Customer represents and warrants that (w) it is not located in, under the control of, acting on behalf of, or a national or resident of any Restricted Country; (x) Customer is not on any list in (b) above; (y) Customer is not involved in any end use listed in (c) above; and (z) no U.S. 
federal agency has suspended, revoked, or denied its export privileges. Customer agrees that all rights to use the Technology are granted on the condition that such rights are forfeited if it fails to comply with these terms. Council Regulation (EC) No. 428/2009 sets up a Community regime for the control of exports of dual-use items and technology, and it is declared that this Technology is intended for civil purposes only. Therefore, Customer agrees not to license, download or transfer, directly or indirectly, any Technology controlled by it to any military entity or to any other entity for military purposes, including any State Security Forces, pursuant to this Agreement, nor to knowingly transfer any Technology to end-users for use in connection with chemical, biological or nuclear weapons or missiles capable of delivering such weapons. Customer also agrees (a) not to export or re-export any Technology to an entity that is based in China and describes itself as an "Institute" or "Academy"; or (b) not to knowingly export or re-export any Technology to any country that is subject to European Union, United Nations or Organization for Security and Co-operation in Europe sanctions without first obtaining a validated license. 16. GOVERNING LAW. 
This Agreement is governed by the substantive laws in force, without regard to conflict of laws principles: (a) in the State of Texas, if you acquired the License in the United States, Puerto Rico, or any country in Central or South America; (b) in the Province of Ontario, if you acquired the License in Canada (subsections (a) and (b) collectively referred to as the "Americas Region"); (c) in Singapore, if you acquired the License in Japan, South Korea, the People's Republic of China, the Special Administrative Regions of Hong Kong or Macau, Taiwan, the Philippines, Indonesia, Malaysia, Myanmar, Singapore, Brunei, Vietnam, Cambodia, Laos, Thailand, India, Pakistan, Australia, New Zealand, Papua New Guinea or any of the Pacific island states (collectively, the "Asia Pacific Region"); or (d) in the Netherlands, if you acquired the License in any other country not described above. The United Nations Convention on Contracts for the International Sale of Goods is specifically disclaimed in its entirety. 17. ARBITRATION. ANY DISPUTE BETWEEN CUSTOMER AND BMC ARISING OUT OF
THIS AGREEMENT OR THE BREACH OR ALLEGED BREACH, SHALL BE DETERMINED BY BINDING ARBITRATION CONDUCTED IN ENGLISH. IF THE DISPUTE IS INITIATED IN THE AMERICAS REGION, THE ARBITRATION SHALL BE HELD IN NEW YORK, U.S.A., UNDER THE CURRENT COMMERCIAL OR INTERNATIONAL, AS APPLICABLE, RULES OF THE AMERICAN ARBITRATION ASSOCIATION. IF THE DISPUTE IS INITIATED IN A COUNTRY IN THE ASIA PACIFIC REGION, THE ARBITRATION SHALL BE HELD IN SINGAPORE, SINGAPORE UNDER THE CURRENT UNCITRAL ARBITRATION RULES. IF THE DISPUTE IS INITIATED IN A COUNTRY OUTSIDE OF THE AMERICAS REGION OR ASIA PACIFIC REGION, THE ARBITRATION SHALL BE HELD IN AMSTERDAM, NETHERLANDS UNDER THE CURRENT UNCITRAL ARBITRATION RULES. THE COSTS OF THE ARBITRATION SHALL BE BORNE EQUALLY PENDING THE ARBITRATOR'S AWARD. THE AWARD RENDERED SHALL BE FINAL AND BINDING UPON THE PARTIES AND SHALL NOT BE SUBJECT TO APPEAL TO ANY COURT, AND MAY BE ENFORCED IN ANY COURT OF COMPETENT JURISDICTION. NOTHING IN THIS AGREEMENT SHALL BE DEEMED AS PREVENTING EITHER PARTY FROM SEEKING INJUNCTIVE RELIEF FROM ANY COURT HAVING JURISDICTION OVER THE PARTIES AND THE SUBJECT MATTER OF THE DISPUTE AS NECESSARY TO PROTECT EITHER PARTY'S CONFIDENTIAL INFORMATION, OWNERSHIP, OR ANY OTHER PROPRIETARY RIGHTS. ALL ARBITRATION PROCEEDINGS SHALL BE CONDUCTED IN CONFIDENCE, AND THE PARTY PREVAILING IN ARBITRATION SHALL BE ENTITLED TO RECOVER ITS REASONABLE ATTORNEYS' FEES AND NECESSARY COSTS INCURRED RELATED THERETO FROM THE OTHER PARTY. 18. U.S. FEDERAL ACQUISITIONS. This Article applies to all acquisitions of the commercial Product subject to this Agreement by or on behalf of the federal government, or by any prime contractor or subcontractor (at any tier) under any contract, grant, cooperative agreement or other activity with the federal government. By accepting delivery of the Product, the government hereby agrees that the Product qualifies as "commercial" within the meaning of the acquisition regulation(s) applicable to this procurement. 
The terms and conditions of this Agreement shall pertain to the government's use and disclosure of the Product, and shall supersede any conflicting contractual terms and conditions. If the license granted by this Agreement fails to meet the government's needs or is inconsistent in any respect with Federal law, the government agrees to return the Product, unused, to BMC. The following additional statement applies only to acquisitions governed by DFARS Subpart 227.4 (October 1988): "Restricted Rights - Use, duplication and disclosure by the Government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.227-7013 (Oct. 1988)." 19. BMC ENTITIES. The following licensing entities apply to this Agreement, by Territory:
United States and Latin America South (not a specified Central or South America country below)
Canada: 50 Minthorn Boulevard, Suite 200, Markham, Ontario L3T 7X8, Canada
Boeing Avenue 245, 1119 PE Schiphol Rijk, The Netherlands
Brazil: Rua Leopoldo Couto de Magalhães Jr, 758 - 14 andar, São Paulo - SP, Brazil
Mexico: Torre Esmeralda II, Blvd. Manuel Avila Camacho #36, Piso 23, Lomas de Chapultepec, CP 11000, México D.F.
Argentina: Ing. Butty 220 - Piso 14, Buenos Aires, República Argentina, C1001AFB
S.E.A. (Southeast Asia), Australia, New Zealand, Hong Kong, Taiwan: 210 Middle Road, #12-01/08 IOI Plaza, Singapore 188994
China: Suite 501-504, Level 5, Tower W1, The Towers, Oriental Plaza, #1 East Chang An Ave., Dong Cheng, Beijing 100738, China
Japan: Harmony Tower 24th Floor, 1-32-2 Honcho, Nakano-ku, Tokyo 164-8721
Korea: 33rd Fl., ASEM Tower World Trade Center, 159-1, Samsung-dong, Kangnam-ku, Seoul 135-798
20. ASSIGNMENT AND TRANSFERS. Customer may not assign or transfer a Product separate from the applicable Agreement and License, and may not assign or transfer an Agreement or a License, except in the event of a merger with or into, or a transfer of all or substantially all of Customer's assets to, a third party who assumes all of Customer's liabilities and obligations under the Agreement and License and expressly agrees to be bound by and comply with all of the terms of the Agreement and License. Any attempt to assign or transfer an Agreement or License in violation of this provision will be null and void and be treated as a violation of BMC's intellectual property rights or use outside the scope of the License. 21. MISCELLANEOUS TERMS. A waiver by a party of any breach of any term of this Agreement will not be construed as a waiver of any continuing or succeeding breach. Should any term of this Agreement be invalid or unenforceable, the remaining terms will remain in effect. The parties acknowledge they have read this Agreement and agree that it is the complete and exclusive statement of the agreement and supersedes any prior or contemporaneous negotiations or agreements between the parties relating to the subject matter of this Agreement. This Agreement may not be modified or rescinded except in writing signed by both parties. The prevailing party in any litigation is entitled to recover its attorney's fees and costs from the other party. 
To the extent BMC Products include third party code: if (a) such third party code is provided for use with a Product, it may be used only with that Product unless otherwise provided for in the Documentation; and (b) the Documentation contains terms that pertain to such third party code, those terms govern the third party code in place of the terms of the applicable Order and this Agreement; except that the third party terms will not (i) negate or amend the rights granted by BMC to Customer or the obligations undertaken by BMC in the applicable Order or this Agreement with respect to a Product; or (ii) impose any additional restrictions on Customer's use of the Product. In some circumstances, usually either for the convenience of its customers or in order to comply with the obligation to make source code available under specific license terms, BMC distributes to customers, without charge, products that are not governed by an Order or this Agreement. Such products are distributed separately from the BMC Products, are governed by the license terms that are included with
them, and are provided by BMC AS IS, WHERE IS AND WITHOUT WARRANTIES OF ANY KIND, WHETHER ORAL OR WRITTEN, EXPRESS OR IMPLIED, AND EXCLUDING, WITHOUT LIMITATION, WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT AND TITLE. The parties have agreed that this Agreement and the documents related thereto be drawn up in the English language. Les parties exigent que la présente convention ainsi que les documents qui s'y rattachent soient rédigés en anglais. Customer agrees that BMC and its affiliates may refer to Customer as a customer of BMC, both internally and in externally published media. 22. SUPPORT. Customer may acquire BMC support services ("Support") on an Order. Once Support is acquired for a Product, Customer is automatically enrolled in Support on an annual basis for all Licensed Capacity of that Product, unless either party terminates Support on all Licensed Capacity of a Product upon at least 30 days written notice prior to the next Support anniversary date. The annual fee for Support will be agreed upon at the time of each Order. For a description of Support go to www.bmc.com/support/review-policies. BMC may change its Support terms, to be effective upon Customer's Support anniversary date. BMC reserves the right to discontinue Support for a Product where BMC generally discontinues such services to all licensees of that Product. If Customer terminates Support and then re-enrolls in Support, BMC may charge Customer a reinstatement fee. 23. ADDITIONAL TERMS. The following additional terms are incorporated into this Agreement.
a. DEFINITIONS. Terms set forth below have the indicated meaning regardless of whether they are capitalized. "Cloud Environment" means a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) managed so they behave as if they were one computer. "Cloud Services" means the dynamic provisioning of IT resources as a service, where typically the Cloud infrastructure is shared across multiple tenants, and tenants are billed on a utility/subscription basis for what they use. Examples of Cloud Services include Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). "Computer" or "Server" has the meaning generally given within the computer industry, which is a single machine, whether a central processing unit, such as a mainframe machine, or a distributed systems machine, such as a Unix or Intel based server. A mainframe machine would be an individual mainframe computer having single or multiple processors or engines. For purposes of distributed systems machines (excluding Control-M and Mainview product(s)), a Computer or Server may be physical or virtual. "Enterprise" is the environment consisting of all hardware owned or leased by Customer in the Territory.
b. LICENSE RESTRICTIONS. The following restrictions apply to certain Products. BMC AppSight Products: BMC AppSight Initial Platform ("AppSight System") may only be used to support a Customer's own applications according to the AppSight System configuration licensed. For this purpose, Customer's "own" applications are those of which Customer is the ultimate beneficiary or applications developed by Customer as an independent software vendor. Each AppSight System may only be used with the platform designated on the Product Table (Windows/.Net or J2EE) unless a BMC AppSight Additional Platform is licensed for that AppSight System, which enables the AppSight System to be used with an additional platform. 
Each AppSight System may only be used for the Workflow designated on the Product Order Form unless a BMC AppSight Additional Platform is licensed for that AppSight System for an additional Workflow. AppSight System-Team Edition may only be used at a designated Site to support a designated Application for a designated Workflow. AppSight System - Group Edition may be used at a designated Site to support multiple applications of a designated Group for a designated
Workflow. AppSight System - Division Edition may be used at multiple Sites to support multiple applications of a designated Division for a designated Workflow. BMC AppSight Named User is a license to use the full BMC AppSight System. Customer must provide the individual domain ID or email ID for each named user. Once designated, the BMC AppSight Named User may only be changed if the individual leaves the Customer or the Team, Group or Division, as applicable. BMC AppSight Named Users may not be transferred from one AppSight System to another. BMC AppSight QA User is a license to use the AppSight Test Recorder module of the AppSight System. BMC AppSight Level 1 Viewer is a license to use limited functions of the AppSight System that provide a user with the ability to view and replay the visual recording of a Black Box log. The AppSight Level 1 Viewer may be used by the named users provided to BMC. BMC AppSight for Citrix Support User is a license to use the AppSight System for Citrix application support. Only licensed BMC AppSight for Citrix users may use the AppSight System for Citrix support. BMC AppSight Connector for Defect/Incident Tracking may only be used to interface the AppSight System to Customer's defect/incident tracking application. BMC Desktop Capture: Customer must provide the individual domain ID or email ID for each named user. The BMC Desktop Capture Player may be used by licensed named users to play back and view BMC Desktop Capture incident recordings. Once designated, a named user may only be changed if the individual leaves the Customer.
BMC AppSight and BMC Application Problem Resolution Products: License Key. The Product may require a software license key that limits usage to that provided under the terms of the Agreement and this Product Order Form. Licensee must install and run license manager software provided with the Product at no additional cost. All AppSight Consoles must be connected to the license manager. 
AppSight Black Box is the agent portion of the AppSight System. All AppSight Systems include unlimited AppSight Black Boxes which may be installed on any computers of Customer or Customer's customers. Customer may provide Customer's customers the limited right to install and to use on behalf of Customer the AppSight Black Boxes (but not any other components of the AppSight System) but only for the sole purpose of providing information to Customer to support Customer's own software applications, and in no event to support Customer's customer's own internal software applications. Embedded Black Box. Customer may incorporate the AppSight Black Box into a Customer application, in which case, in addition to the licenses granted above, Customer is hereby granted a worldwide, nonexclusive, perpetual license to (a) incorporate the unmodified unaltered object code version of the AppSight Black Box into Customer's designated application ("Customer Application"); (b) reproduce and distribute the AppSight Black Box as incorporated into the Customer Application; and (c) to use in unaltered form the BMC trademarks, service marks or marketing logos (the "BMC Trademarks") solely to promote the Customer Application, provided Customer obtains BMC's prior written approval for each new usage. Customer shall ensure that any Customer Application incorporating the AppSight Black Box shall be governed by a license agreement which is at least as protective of BMC's proprietary rights in the AppSight Black Box as of Customer's proprietary rights in the Customer Application, but no less protective than this Agreement, including rights and restrictions related to end user's right to make backup and archival copies. In the event Customer incorporates the Black Box into Customer's Application, Customer shall include in the startup screen, help and/or the about screens in the Customer Application, BMC's logo and the following, "POWERED BY BMC'S APPSIGHT BLACK BOX TECHNOLOGY." 
Furthermore, Customer shall visually display the BMC name and the BMC product names and trademarks in the documentation for the Customer Application incorporating the AppSight Black Box, on Customer's website and in advertising and promotional materials. BMC Capacity Management for Mainframes: Any BMC Capacity Management for Mainframes Product and/or any BMC Performance Analyzer for Mainframes, BMC Performance Predictor for Mainframes, BMC Performance
Perceiver for Mainframes, BMC Performance Analyzer for Mainframe Applications and other related products that may be released as part of the BMC Capacity Management for Mainframes must be licensed for all Computer(s) within the mainframe environment for which the Product or one of its components will process data or execute functionality on behalf of, regardless of whether the Product or one of its components is specifically installed on that Computer. The Products may be installed on or moved to any Computer(s) included in the licensed environment. BMC Capacity Management Products: Any BMC Capacity Management Product, BMC Capacity Optimization Product, BMC Performance Assurance Product and/or any other related Products that may be released as part of the BMC Capacity Management solutions for distributed systems environments are licensed to the Computer(s) for which the Products are initially assigned and may not be reassigned to another Computer(s) unless the original Computer(s) has been removed from service. "Removed from service" or "out of service" is defined as no longer providing support for a business application or workload. A license is required for all Computers for which the Product or one of its components executes functionality, either locally or remotely.
BMC Cloud Lifecycle Management - Core License Add-on ("CLM Core"): * The Product may only be used in Customer's internal cloud environment. * Access to BMC Atrium Orchestrator base product components is provided through the BMC BladeLogic Automation Suite - Base License. In addition, BMC Cloud Lifecycle Management - Core License Add-on includes the right to one Activity Peer (AP) for high availability. * Customer may only use the following BMC Atrium Orchestrator Base Adapters: Command Line, File, Web Services, PowerShell or Terminal. * Customer may only use the following BMC Atrium Orchestrator Application Adapters: VMware vSphere Virtual Infrastructure (VIS), NetApp Data Fabric Manager, NetApp Rapid Cloning Utility, Amazon Elastic Compute Cloud (EC2) and Adapters for BMC Products. * Customer may only use the following BMC Atrium Orchestrator Runbooks: Storage, IPAM, and MyServices Portal. BMC Cloud Lifecycle Management - Standard Pack License Add-on ("CLM Standard"): * The Product may only be used in a Cloud Environment. If Customer is also a Cloud Service Provider, then the Product cannot be used by the Cloud Service Provider for other environments, including but not limited to the Cloud Service Provider's internal IT environment, or System Integration activities for Clients which are not part of Cloud Services. The Product may not be installed on Client premises or accessed or used directly by Clients. * The Product includes expanded license rights for BMC Atrium Orchestrator including unlimited peer licenses, use of all generally available Base Adapters, and Development Studio and Operator Control Panel user licenses to support the Licensed Capacity. The Product does not include the right to use any other Application Adapters other than what is installed out-of-the-box. 
* Customer may only use the BMC Remedy Service Request Management functionality of the BMC Remedy ITSM product. The Product includes the right to use BMC Remedy Service Request Management for any number of users, to support any service requests that are directly related to the delivery or consumption of Cloud Services, for the Licensed
Capacity. * The Product includes the right to use BMC Network Automation for the network devices in the Cloud Environment as long as the number of supported Network Devices does not exceed the Licensed Capacity. * The Product includes the right to use BMC ProactiveNet Performance Management for the Licensed Capacity, only in order to deliver the CPU monitoring capabilities that are installed out-of-the-box. The Product does not include the right to use any other functional capabilities of BMC ProactiveNet Performance Management, including but not limited to, use of the BMC ProactiveNet Performance Management console for operational purposes, monitoring of IT infrastructure beyond what is specified, and any other analytics, diagnostics, event or impact management capabilities. * The Product includes the right to use BMC Capacity Optimization for the Licensed Capacity only in order to enable the out-of-the-box Capacity Aware Placement Advice capability as part of the CLM Resource Manager. The Product does not include the rights to use any other functional capabilities of BMC Capacity Optimization, including but not limited to, the use of BMC Capacity Optimization for capacity planning, virtualization and consolidation; capacity analysis, forecasting, reporting and dashboards; and capacity metering for showback or chargeback. BMC Cloud Operations Management Core - License Add-on: * This product may only be used in a Cloud Environment. BMC Cloud Operations Management - Standard Pack License Add-on: * This product may only be used in a Cloud Environment. BMC Configuration Management Control Center Module Restriction for BMC Configuration Management Products: Each "BMC CM Control Center" License may only be used by Administrators for the project for which it was licensed. An Administrator is defined as an employee with access to or the right to use the administrative components of the Product. 
BMC Configuration Management Developers Kit Definition and Restriction for BMC Configuration Management Products: A "BMC CM Developers Kit" license allows Customer to embed the "SDK Run Time Code", in unmodified object code form, into a single software application developed by Customer to create an "SDK Client." "SDK Run Time Code" means the unmodified object code files in the BMC CM Product that are designated as re-distributable. "SDK Client" means a software technology with a principal purpose and functionality substantially different than that of the SDK Run Time Code and that uses only a BMC Desktop/Mobile Management Product, a BMC Device Management Product and/or a BMC Server Management Product, as applicable, to invoke the update functionality of the SDK Run Time Code. An SDK Client may only be used on, or distributed to, Endpoints that are licensed separately by Customer, which licensed Endpoints may be within or outside of Customer's organization. "Endpoint" means a Client Endpoint, a Device Endpoint, a Server Endpoint, or Other Endpoint, as the case may be. "Client Endpoint" means a laptop, desktop or other non-Server Computer. "Device Endpoint" means a personal digital assistant or similar computing device. "Other Endpoint" means a router, a switch, a hub, or other network device, peripheral or hardware instrument, as the case may be. "Server Endpoint" is any virtual or physical Computer that provides a service for other Computers or users connected to it via the Internet, extranet, intranet, or other networked technologies.

BMC Identity Products:
* Internal User: If a Product name includes the term "Internal User," that Product can only be used by Customer's employees (full time and part time) and contractors whose information is being managed using the BMC IdM tools. Information on these users will typically be found in the HR database. 
* External User: If a Product name includes the term "External User," that Product can only be used by Customer's business partners and customers/prospects whose information is being managed using the BMC
IdM tools or Customer's employees (full or part time)/contractors who are licensed to use one or more of the following BMC Identity Management Tools: (1) BMC Identity User Administration (2) BMC Identity Password Management (3) BMC Identity Compliance Manager, provided the users have no more than 2 logons (access points) being managed by the IdM tools.
* Archive User: If a Product name includes the term "Archive User," that Product can only be used by users whose identity information is stored within the IdM system but is not being actively managed; the information could be stored for the purpose of audit/forensics etc.
* Developer User: If a Product name includes the term "Developer User," that Product can only be used by users who create or modify applications using the BMC Directory Management Studio.

BMC Middleware and Transaction Management Products: When licensed on the per CPU - Subcapacity or per MIPS Unit of Measurement, regardless of the Licensed Capacity, Customer is limited to ten (10) individual employees or contractors to whom access to the Management Console is granted ("End Users"). When licensed on the per named user Unit of Measurement, Customer is limited to the lesser of (a) the Licensed Capacity of the Product, or (b) 250 End Users.

BMC Middleware Management - Performance and Availability and BMC Middleware Management - Transaction Monitoring: Notwithstanding the Licensed Capacity, the number of individual employees or contractors of Customer to whom access to the Management Console is granted ("End Users") is limited to the lesser of (a) the Licensed Capacity of the Product if priced on a per named user metric, or (b) 250 End Users.

BMC Remedy Products: Customer may not bypass, in any way, the use of a concurrent or named user license to manage an update (including, without limitation, submitting a ticket to a parallel form and then using workflow to perform the update without a license). 
Development License Restriction for BMC Remedy Products: If a Product name includes the term "Dev Lsn", Customer will restrict installation, access and use of such Product to a server dedicated to development and testing only, and will not allow any production or commercial activity on that server.

Hot Backup License Definition and Restriction for BMC Remedy Products: A hot backup license is a replicate of the Remedy production licenses on one backup server. Customer may access that backup server only when the customary server on which the AR System is installed fails or in preparation of that backup server for such situation.

Load Balanced System Restriction for BMC Remedy Products: If Customer has multiple servers in a single logical environment pointing to a single AR System database instance, only one Instance of Remedy "per Instance" licenses is required for installation on these servers (except for the AR System, which must be licensed for each server).

BMC Service Desk Express Products: No terms in any Business Objects or Crystal license agreement embedded in the Product apply to the Product. Customer may make and operate 2 additional copies of the Product solely for internal pre-production configuration and testing purposes.

BMC Service Desk Express Suite Restriction for BMC Service Desk Express Products: When purchasing Concurrent User licenses for the "Service Desk Express" Product, regardless of the number of such licenses purchased and regardless of the number of purchases made (including future purchases), Customer is restricted via license keys to a total of (i) five Concurrent Users conducting a process in the report environment of the Crystal Reports "Web Server" product which is embedded in the "Service Desk Express" Product and (ii) two named users accessing the "Crystal Reports Professional" product which is bundled with the "Service Desk Express" Product.

CONTROL-M/Assist: Control-M/Assist may only be used to interface with
the third party scheduler and may not be used to schedule or manage batch processes outside of the cross-scheduler dependencies.

Desktop/Mobile Management Product Restrictions for BMC Configuration Management Products: Each "Desktop/Mobile Management" License is limited for use with one Client Endpoint. "Client Endpoint" means a laptop, desktop or other non-Server Computer.
* Desktop/Mobile Patch Management Restriction: A "Desktop/Mobile Patch Management" License may only be used to manage, deploy, update and inventory anti-virus software and security patches on one Client Endpoint.
* Desktop/Mobile Patch Management Pack Restriction: The Desktop/Mobile Application Management Product and the Desktop/Mobile Configuration Discovery Product that are shipped with the Desktop/Mobile Patch Management Pack License may only be used to manage, deploy, update and inventory anti-virus software and security patches on one licensed Client Endpoint, unless Customer has separately licensed the Desktop/Mobile Application Management Product and the Desktop/Mobile Configuration Discovery Product. Customer may not use the functionality of such Products for any other purpose.
* BMC Configuration Management Desktop OS Management Restriction: A "BMC CM Desktop OS Management" License may only be used to manage operating system migration activities on one Client Endpoint. Each BMC CM Desktop OS Management License: (a) may only be used on a licensed Client Endpoint that is licensed for use with both a Desktop/Mobile Application Management License and a Desktop/Mobile Configuration Discovery License; and (b) may not be redeployed or harvested to a different Client Endpoint.
* Extranet Application Management Restriction: An "Extranet Application Management" License may only be used on one Client Endpoint. The parties must mutually agree on the name of each Single Application and its primary function at the time of Order. 
Single Application is defined as a Tuner channel containing one application with one primary function, and Tuner is defined as the client component of the Product configured by Customer for deployment on licensed Endpoints.

Device Management Product Restriction for BMC Configuration Management Products: Each "Device Management" License is limited for use with one Device Endpoint. "Device Endpoint" means a personal digital assistant or similar computing device.

Server Management Product Restrictions for BMC Configuration Management Products: Each "Server Management" License is limited for use per CPU - Subcapacity.
* Server Patch Management Restriction: A "Server Patch Management" License may only be used to manage, deploy, update and inventory anti-virus software and security patches per CPU - Subcapacity.
* Server Patch Management Pack Restriction: The Desktop/Mobile Application Management Product and the Desktop/Mobile Configuration Discovery Product that are shipped with the Server Patch Management Pack License may only be used to manage, deploy, update and inventory anti-virus software and security patches on licensed Server Endpoints, unless Customer has separately licensed the Desktop/Mobile Application Management Product and the Desktop/Mobile Configuration Discovery Product. Customer may not use the functionality of such Products for any other purpose.
With respect to the above Server Management Licenses, Customer must comply with any restrictions designated at the time of Order on the maximum number of CPUs that may be included in each Server Endpoint. A "Server Endpoint" is any virtual or physical Computer that provides a service for other Computers or users connected to it via the Internet, extranet, intranet, or other networked technologies.

c. UNITS OF MEASUREMENT. The following units of measurement apply to certain Products.
per adapter: A license is required for each installation of an adapter that interfaces with the Product.
per agent: A unit of software with the official name of Remote Sys Call Daemon or RSCD Agent that can be deployed on a physical or virtual operating system.
per application: A unique collection of application component templates and configuration objects used to form a single logical platform defined by the Customer.
per asset: A license is required for every physical or logical Server Endpoint, Client Endpoint, Device Endpoint, Data Center Rack, Data Center IP Sensor, or Other Endpoint monitored, managed or discovered by the Product. "Client Endpoint" means a laptop, desktop or other non-Server Computer. "Device Endpoint" means a personal digital assistant or similar computing device. "Other Endpoint" means a router, a switch, a hub, or other network device, peripheral or hardware instrument, as the case may be. A "Server Endpoint" is any virtual or physical Computer that provides a service for other Computers or users connected to it via the Internet, extranet, intranet, or other networked technologies.
per Cisco UCS Server: A license is required for each Cisco Unified Computing System (UCS) Server on which the Product is installed and/or manages regardless of whether the Product or one of its components is installed on that Server.
per Client Endpoint: A license is required for each Client Endpoint. "Client Endpoint" means a laptop, desktop or other non-Server Computer.
per component: A license is required for all objects that represent a physical or logical part of the service model.
per concurrent access license: A license is required for the maximum number of simultaneous sessions accessing the Product. Sessions are counted in packs of 5.
per concurrent session: A license is required for the maximum number of simultaneous sessions accessing the Product. 
per concurrent user: A license is required for the maximum number of individual employees or contractors of Customer to whom simultaneous access has been granted to the Product on a computer or multiple computers.
per CPU - Full Capacity: A license is required for the total number of active, physical CPUs in each Computer upon which the Product is installed or which the Product manages, either remotely or locally. "CPU" means a physical processor or central unit in a designated Computer containing the logic circuitry that performs the instructions of a Computer's programs and refers to the "socket" which can contain one or more processor cores.
per CPU - Subcapacity: A license is required for all active, physical CPUs which the Product manages, either remotely or locally. "CPU" means a physical processor or central unit in a designated Computer containing the logic circuitry that performs the instructions of a Computer's programs and refers to the "socket" which can contain one or more processor cores.
per database: A license is required for the total allocated database space per host ID or physical Computer which the Product is managing. The total allocated database capacity cannot be segregated or aggregated into lower or higher ranges.
per Device Endpoint: A license is required for each Device Endpoint. "Device Endpoint" means a personal digital assistant or similar computing device.
per deployed robot: A license is required for all PATROL End-to-End Response Timer robots deployed.
per engine: A license is required for each mainframe general purpose engine on the server upon which the Product is installed and/or manages regardless of whether the Product or one of its components is installed on that Server.
per enterprise: A license is required per Customer or Client, or both, for its internal use only, regardless of the number of times Customer installs the Product in its Enterprise or its Client's Enterprise. 
"Client" means a third party whose data is processed by Customer and is only permitted if Customer is an authorized BMC service provider.
per gigabyte: A license is required for the total allocated database
space of all Computers on which the Product has been installed or operated.
per gigabyte range: A license is required for the total allocated database space per host ID or physical Computer which the Product is managing. The Product may not be moved to another Computer unless the current Computer is taken out of service. The total allocated database capacity cannot be segregated or aggregated into lower or higher ranges among different Computers. For example: if Customer licenses 26-50 gigabytes, the Customer is only licensed for a maximum of 50 gigabytes in total across all the databases of the licensed Product on one particular Computer.
per installed server: A license is required for each Server (with a Classification at the appropriate Tier level, if applicable) upon which the Product or any of its components is installed.
per instance: A license is required for all named occurrences of the Product created or installed in the Enterprise.
per Linux engine: A license is required for all engines of a mainframe Computer on which Customer is running Linux, when applicable classified by Linux Group using BMC's standard Computer classification.
per managed asset - Device Endpoint: A license is required for every Device Endpoint that is monitored, managed, or discovered by the Product(s). A "Device Endpoint" can be any virtual or physical Non-Server Client Computer (e.g. laptop, desktop computer, PDA, smart phone); any Network device (e.g. router, switch, hub) standalone or chassis-based device/card/processor using a unique IP address (also includes virtual network devices managed through the IP address of its physical host); and independent Storage (e.g. a disk array, a fiber switch, a tape library, a switch director). When applicable, the license must be computed at the appropriate tier level.
per managed asset - Server Endpoint: A license is required for every Server Endpoint monitored, managed (directly or indirectly), or discovered by the Product(s). 
A "Server Endpoint" is any virtual or physical Computer that provides a service for other Computers or users connected to it via the Internet, extranet, intranet, or other networked technologies.
per managed component: A license is required for all objects that represent a physical or logical part of the service model managed by the Product.
per managed network device: A license is required for each Network Device managed using a unique IP address. "Network Device" means a standalone or chassis-based network device/card/processor.
per managed server: A license is required for each Server managed by the Product or one of its components whether locally or remotely. When applicable, this license must be computed at the appropriate tier level based on the cumulative count of managed servers. Network Devices are not counted as Servers. This license does not include the Product's installation on or management of Integrated Facility for Linux (IFL) engines. "Network Device" means a standalone or chassis-based network device/card/processor.
per MIPS: A license is required for the total aggregate number of MIPS for each Computer, including all Computers coupled in a parallel Sysplex environment, upon which the Product is installed, managed or monitored. MIPS Rating is the aggregate computing power (expressed in millions of instructions per second) of a Computer, using the MIPS rating set forth in the then current Gartner Group Rating Guide. Computer-specific passwords will be issued for the Product.
per monitored element: A license is required for all remotely monitored elements, such as a Server, database, operating system, URL, firewall, storage, or network device.
per monitored server: A license is required for each Server (with a Classification at the appropriate Tier level, if applicable) which the Product or one of its components is monitoring regardless of whether the Product is monitoring it locally or remotely. 
per named user: A license (with a Classification at the appropriate Level, if applicable) is required for all individual employees or contractors or clients of Customer to whom access has been granted to the Product on a computer or multiple computers typically via the issuance of a unique ID regardless of whether the individual is
actively using the Product at any given time.
per node: A license is required for the maximum number of Nodes which the Product manages and/or monitors. "Node" means a network device (IP or non-IP) such as a router, switch or Computer.
per port: A license is required for the total port capacity of a managed storage networking device regardless of whether the port is in service. Storage networking devices typically include HBAs (Host Bus Adapters), Storage Switches and Directors. The total port capacity cannot be segregated or aggregated into lower or higher ranges.
per project: A license is required for each specific project, facility or business unit, as the case may be, specified at the time of order.
per Server Endpoint: A license is required for each Server Endpoint. A "Server Endpoint" is any virtual or physical Computer that provides a service for other Computers or users connected to it via the Internet, extranet, intranet, or other networked technologies.
per Service Management MIPS: A license is required for the total aggregate number of MIPS for each Computer, including all Computers coupled in a parallel Sysplex environment, upon which the Product is installed, managed or monitored. MIPS Rating is the aggregate computing power (expressed in millions of instructions per second) of a Computer, using the MIPS rating set forth in the then current Gartner Group Rating Guide.
per site: A license is required for the physical site at which the Product is installed regardless of the number of times the Product is installed.
per task: A license is required for the maximum number of Tasks (as defined below) loaded into the daily CONTROL-M Active Environment in a 24-hour period, excluding any tasks that are provided for by licenses under alternative Units of Measure (i.e. tier or MIPS). 
A loaded task refers to all Control-M Tasks that are monitored by Control-M in all environments (including but not limited to development, staging, QA, pre-production, production, and test environments). This includes all Control-M environments on Distributed Systems and/or Mainframe installations. The number of steps or scripts executed within the named Task shall have no bearing upon the number of Tasks licensed; the sum total of the commands constitutes a single Task. For CONTROL-M: licensed tasks equal the maximum number of Tasks loaded into the daily CONTROL-M Active Environment. For Control-M mainframe add-on products (Control-M/Analyzer, Control-O, Control-M Links for z/OS, Control-M/Restart, Control-M/Tape, or Control-M Mainframe Extension Pack), Licensed Tasks equal that of the Licensed Capacity of or managed by Customer's mainframe Active Environment. For all other Task based Products, the maximum number of Tasks that the Product is priced against is measured as the maximum number of CONTROL-M Tasks. "Task" means an executable command containing the name of the JCL, CL, DCL, ECL, script or dummy processes that will execute as well as the scheduling criteria, flow control, and resource usage.
per terabyte: A license is required for the total aggregate storage capacity in the Enterprise.
per third-party software: A license is required for each installation of the third-party software product that interfaces with the Product.
YOU AGREE THAT YOU HAVE READ THIS AGREEMENT AND INTEND TO BE BOUND, AS IF YOU HAD SIGNED THIS AGREEMENT IN WRITING. IF YOU ARE ACTING ON
BEHALF OF AN ENTITY, YOU WARRANT THAT YOU HAVE THE AUTHORITY TO ACCEPT THE TERMS OF THIS AGREEMENT FOR SUCH ENTITY.
DOM4J
Copyright 2001-2010 MetaStuff, Ltd. All Rights Reserved
Copyright 2001-2010 (C) MetaStuff, Ltd. All Rights Reserved. Redistribution and use of this software and associated documentation ("Software"), with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain copyright statements and notices. Redistributions must also contain a copy of this document. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. The name "DOM4J" must not be used to endorse or promote products derived from this Software without prior written permission of MetaStuff, Ltd. For written permission, please contact dom4j-info@metastuff.com. 4. Products derived from this Software may not be called "DOM4J" nor may "DOM4J" appear in their names without prior written permission of MetaStuff, Ltd. DOM4J is a registered trademark of MetaStuff, Ltd. 5. Due credit should be given to the DOM4J Project http://dom4j.sourceforge.net THIS SOFTWARE IS PROVIDED BY METASTUFF, LTD. AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL METASTUFF, LTD. OR ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Dojo
Copyright 2005-2009, The Dojo Foundation; All rights reserved
Copyright 2005-2009, The Dojo Foundation All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the Dojo Foundation nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
JDOM
Copyright 2000-2004 Jason Hunter & Brett McLaughlin; All rights reserved
Copyright 2000-2004 Jason Hunter & Brett McLaughlin. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions, and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions, and the disclaimer that follows these conditions in the documentation and/or other materials provided with the distribution. 3. The name "JDOM" must not be used to endorse or promote products derived from this software without prior written permission. For written permission, please contact (request_AT_jdom_DOT_org). 4. Products derived from this software may not be called "JDOM", nor may "JDOM" appear in their name, without prior written permission from the JDOM Project Management (request_AT_jdom_DOT_org). In addition, we request (but do not require) that you include in the end-user documentation provided with the redistribution and/or in the software itself an acknowledgement equivalent to the following: "This product includes software developed by the JDOM Project (http://www.jdom.org/)." Alternatively, the acknowledgment may be graphical using the logos available at http://www.jdom.org/images/logos. THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE JDOM AUTHORS OR THE PROJECT CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. This software consists of voluntary contributions made by many individuals on behalf of the JDOM Project and was originally created by Jason Hunter (jhunter_AT_jdom_DOT_org) and Brett McLaughlin (brett_AT_jdom_DOT_org). For more information on the JDOM Project, please see http://www.jdom.org/.
M2Crypto
Copyright 2008-2010 Heikki Toivonen. All rights reserved
Copyright 1999-2004 Ng Pheng Siong. All rights reserved. Portions copyright 2004-2006 Open Source Applications Foundation. All rights reserved. Portions copyright 2005-2006 Vrije Universiteit Amsterdam. All rights reserved. Copyright 2008-2010 Heikki Toivonen. All rights reserved. Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation. THE AUTHOR PROVIDES THIS SOFTWARE ``AS IS'' AND ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
PLY
Copyright 2001-2011 David M. Beazley
PLY (Python Lex-Yacc) Copyright 2001-2011, David M. Beazley (Dabeaz LLC) All rights reserved.
Version 3.4
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the David Beazley or Dabeaz LLC may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Python
Copyright 1990-2010 Python Software Foundation; All rights reserved
We apply patches to the Python 2.7.1 source to accomplish the following:
- Stop building of the bsddb module as we use the additional module separately
- Fix for http://bugs.python.org/issue1692335
- Replace calls from select to poll in the select module to overcome limitations on maximum file descriptor number
This is the official license for the Python 2.7 release:

A. HISTORY OF THE SOFTWARE
==========================

Python was created in the early 1990s by Guido van Rossum at Stichting Mathematisch Centrum (CWI, see http://www.cwi.nl) in the Netherlands as a successor of a language called ABC. Guido remains Python's principal author, although it includes many contributions from others. In 1995, Guido continued his work on Python at the Corporation for National Research Initiatives (CNRI, see http://www.cnri.reston.va.us) in Reston, Virginia where he released several versions of the software. In May 2000, Guido and the Python core development team moved to BeOpen.com to form the BeOpen PythonLabs team. In October of the same year, the PythonLabs team moved to Digital Creations (now Zope Corporation, see http://www.zope.com). In 2001, the Python Software Foundation (PSF, see http://www.python.org/psf/) was formed, a non-profit organization created specifically to own Python-related Intellectual Property. Zope Corporation is a sponsoring member of the PSF. All Python releases are Open Source (see http://www.opensource.org for the Open Source Definition). Historically, most, but not all, Python releases have also been GPL-compatible; the table below summarizes the various releases.

    Release         Derived from    Year        Owner       GPL-compatible? (1)
    0.9.0 thru 1.2                  1991-1995   CWI         yes
    1.3 thru 1.5.2  1.2             1995-1999   CNRI        yes
    1.6             1.5.2           2000        CNRI        no
    2.0             1.6             2000        BeOpen.com  no
    1.6.1           1.6             2001        CNRI        yes (2)
    2.1             2.0+1.6.1       2001        PSF         no
    2.0.1           2.0+1.6.1       2001        PSF         yes
    2.1.1           2.1+2.0.1       2001        PSF         yes
    2.2             2.1.1           2001        PSF         yes
    2.1.2           2.1.1           2002        PSF         yes
    2.1.3           2.1.2           2002        PSF         yes
    2.2.1           2.2             2002        PSF         yes
    2.2.2           2.2.1           2002        PSF         yes
    2.2.3           2.2.2           2003        PSF         yes
    2.3             2.2.2           2002-2003   PSF         yes
    2.3.1           2.3             2002-2003   PSF         yes
    2.3.2           2.3.1           2002-2003   PSF         yes
    2.3.3           2.3.2           2002-2003   PSF         yes
    2.3.4           2.3.3           2004        PSF         yes
    2.3.5           2.3.4           2005        PSF         yes
    2.4             2.3             2004        PSF         yes
    2.4.1           2.4             2005        PSF         yes
    2.4.2           2.4.1           2005        PSF         yes
    2.4.3           2.4.2           2006        PSF         yes
    2.4.4           2.4.3           2006        PSF         yes
    2.5             2.4             2006        PSF         yes
    2.5.1           2.5             2007        PSF         yes
    2.5.2           2.5.1           2008        PSF         yes
    2.5.3           2.5.2           2008        PSF         yes
    2.6             2.5             2008        PSF         yes
    2.6.1           2.6             2008        PSF         yes
    2.6.2           2.6.1           2009        PSF         yes
    2.6.3           2.6.2           2009        PSF         yes
    2.6.4           2.6.3           2009        PSF         yes
    2.6.5           2.6.4           2010        PSF         yes
    2.7             2.6             2010        PSF         yes

Footnotes:
(1) GPL-compatible doesn't mean that we're distributing Python under the GPL. All Python licenses, unlike the GPL, let you distribute a modified version without making your changes open source. The GPL-compatible licenses make it possible to combine Python with other software that is released under the GPL; the others don't.

(2) According to Richard Stallman, 1.6.1 is not GPL-compatible, because its license has a choice of law clause. According to CNRI, however, Stallman's lawyer has told CNRI's lawyer that 1.6.1 is "not incompatible" with the GPL.

Thanks to the many outside volunteers who have worked under Guido's direction to make these releases possible.
B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING PYTHON
===============================================================

PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
--------------------------------------------

1. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"), and the Individual or Organization ("Licensee") accessing and otherwise using this software ("Python") in source or binary form and its associated documentation. 2. Subject to the terms and conditions of this License Agreement, PSF hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use Python alone or in any derivative version, provided, however, that PSF's License Agreement and PSF's notice of copyright, i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010 Python Software Foundation; All Rights Reserved" are retained in Python alone or in any derivative version prepared by Licensee. 3. In the event Licensee prepares a derivative work that is based on or incorporates Python or any part thereof, and wants to make the derivative work available to others as provided herein, then Licensee hereby agrees to include in any such work a brief summary of the changes made to Python. 4. PSF is making Python available to Licensee on an "AS IS" basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT INFRINGE ANY THIRD PARTY RIGHTS. 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. 6. This License Agreement will automatically terminate upon a material breach of its terms and conditions. 7. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between PSF and Licensee. This License Agreement does not grant permission to use PSF trademarks or trade name in a trademark sense to endorse or promote products or services of Licensee, or any third party. 8. By copying, installing or otherwise using Python, Licensee agrees to be bound by the terms and conditions of this License Agreement.
BEOPEN.COM LICENSE AGREEMENT FOR PYTHON 2.0
-------------------------------------------

BEOPEN PYTHON OPEN SOURCE LICENSE AGREEMENT VERSION 1

1. This LICENSE AGREEMENT is between BeOpen.com ("BeOpen"), having an office at 160 Saratoga Avenue, Santa Clara, CA 95051, and the Individual or Organization ("Licensee") accessing and otherwise using this software in source or binary form and its associated documentation ("the Software"). 2. Subject to the terms and conditions of this BeOpen Python License Agreement, BeOpen hereby grants Licensee a non-exclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use the Software alone or in any derivative version, provided, however, that the BeOpen Python License is retained in the Software, alone or in any derivative version prepared by Licensee. 3. BeOpen is making the Software available to Licensee on an "AS IS" basis. BEOPEN MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, BEOPEN MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE SOFTWARE WILL NOT INFRINGE ANY THIRD PARTY RIGHTS. 4. BEOPEN SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF USING, MODIFYING OR DISTRIBUTING THE SOFTWARE, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. 5. This License Agreement will automatically terminate upon a material breach of its terms and conditions. 6. This License Agreement shall be governed by and interpreted in all respects by the law of the State of California, excluding conflict of law provisions. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between BeOpen and Licensee.
This License Agreement does not grant permission to use BeOpen trademarks or trade names in a trademark sense to endorse or promote products or services of Licensee, or any third party. As an exception, the "BeOpen Python" logos available at http://www.pythonlabs.com/logos.html may be used according to the permissions granted on that web page. 7. By copying, installing or otherwise using the software, Licensee agrees to be bound by the terms and conditions of this License Agreement.
CNRI LICENSE AGREEMENT FOR PYTHON 1.6.1
---------------------------------------
1. This LICENSE AGREEMENT is between the Corporation for National Research Initiatives, having an office at 1895 Preston White Drive, Reston, VA 20191 ("CNRI"), and the Individual or Organization ("Licensee") accessing and otherwise using Python 1.6.1 software in source or binary form and its associated documentation. 2. Subject to the terms and conditions of this License Agreement, CNRI hereby grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use Python 1.6.1 alone or in any derivative version, provided, however, that CNRI's License Agreement and CNRI's notice of copyright, i.e., "Copyright (c) 1995-2001 Corporation for National Research Initiatives; All Rights Reserved" are retained in Python 1.6.1 alone or in any derivative version prepared by Licensee. Alternately, in lieu of CNRI's License Agreement, Licensee may substitute the following text (omitting the quotes): "Python 1.6.1 is made available subject to the terms and conditions in CNRI's License Agreement. This Agreement together with Python 1.6.1 may be located on the Internet using the following unique, persistent identifier (known as a handle): 1895.22/1013. This Agreement may also be obtained from a proxy server on the Internet using the following URL: http://hdl.handle.net/1895.22/1013". 3. In the event Licensee prepares a derivative work that is based on or incorporates Python 1.6.1 or any part thereof, and wants to make the derivative work available to others as provided herein, then Licensee hereby agrees to include in any such work a brief summary of the changes made to Python 1.6.1. 4. CNRI is making Python 1.6.1 available to Licensee on an "AS IS" basis. CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. 
BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6.1 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS. 5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON 1.6.1 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 1.6.1, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. 6. This License Agreement will automatically terminate upon a material breach of its terms and conditions. 7. This License Agreement shall be governed by the federal intellectual property law of the United States, including without limitation the federal copyright law, and, to the extent such U.S. federal law does not apply, by the law of the Commonwealth of Virginia, excluding Virginia's conflict of law provisions. Notwithstanding the foregoing, with regard to derivative works based on Python 1.6.1 that incorporate non-separable material that was previously distributed under the GNU General Public License (GPL), the law of the Commonwealth of Virginia shall govern this License Agreement only as to issues arising under or with respect to Paragraphs 4, 5, and 7 of this License Agreement. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between CNRI and Licensee. This License Agreement does not grant permission to use CNRI trademarks or trade name in a trademark sense to endorse or promote products or services of Licensee, or any third party. 8. By clicking on the "ACCEPT" button where indicated, or by copying, installing or otherwise using Python 1.6.1, Licensee agrees to be bound by the terms and conditions of this License Agreement. ACCEPT
CWI LICENSE AGREEMENT FOR PYTHON 0.9.0 THROUGH 1.2
--------------------------------------------------

Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam, The Netherlands. All rights reserved.

Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of Stichting Mathematisch Centrum or CWI not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission.

STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
PyWin32
Unless stated in the specific source file, this work is Copyright 1994-2008, Mark Hammond. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

Neither name of Mark Hammond nor the name of contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Python-ldap
Copyright 2008, python-ldap project team.
The python-ldap package is distributed under Python-style license. Standard disclaimer: This software is made available by the author(s) to the public for free and "as is". All users of this free software are solely and entirely responsible for their own choice and use of this software for their own purposes. By using this software, each user agrees that the author(s) shall not be liable for damages of any kind in relation to its use or performance. The author(s) do not warrant that this software is fit for any purpose. $Id: LICENCE,v 1.1 2002/09/18 18:51:22 stroeder Exp $
Red Hat
Copyright 2003 Red Hat, Inc. All rights reserved.
Copyright 2003 Red Hat, Inc. All rights reserved. "Red Hat" and the Red Hat "Shadowman" logo are registered trademarks of Red Hat, Inc. "Linux" is a registered trademark of Linus Torvalds. All other trademarks are the property of their respective owners. LICENSE AGREEMENT AND LIMITED PRODUCT WARRANTY RED HAT ENTERPRISE LINUX AND RED HAT APPLICATIONS This agreement governs the use of the Software and any updates to the Software, regardless of the delivery mechanism. The Software is a collective work under U.S. Copyright Law. Subject to the following terms, Red Hat, Inc. ("Red Hat") grants to the user ("Customer") a license to this collective work pursuant to the GNU General Public License. 1. The Software. Red Hat Enterprise Linux and Red Hat Applications (the "Software") are either a modular operating system or application consisting of hundreds of software components. The end user license agreement for each component is located in the component's source code. With the exception of certain image files identified in Section 2 below, the license terms for the components permit Customer to copy, modify, and redistribute the component, in both source code and binary code forms. This agreement does not limit Customer's rights under, or grant Customer rights that supersede, the license terms of any particular component. 2. Intellectual Property Rights. The Software and each of its components, including the source code, documentation, appearance, structure and organization are owned by Red Hat and others and are protected under copyright and other laws. Title to the Software and any component, or to any copy, modification, or merged portion shall remain with the aforementioned, subject to the applicable license. The "Red Hat" trademark and the "Shadowman" logo are registered trademarks of Red Hat in the U.S. and other countries. This agreement does not permit Customer to distribute the Software using Red Hat's trademarks. 
Customer should read the information found at http://www.redhat.com/about/corporate/trademark/ before distributing a copy of the Software, regardless of whether it has been modified. If Customer makes a commercial redistribution of the Software, unless a separate agreement with Red Hat is executed or other permission granted, then Customer must modify any files identified as "REDHAT-LOGOS" and "anaconda-images" to remove all images containing the "Red Hat" trademark or the "Shadowman" logo. Merely deleting these files may corrupt the Software. 3. Limited Warranty. Except as specifically stated in this agreement or a license for a particular component, to the maximum extent permitted under applicable law, the Software and the components are provided and licensed "as is" without warranty of any kind, expressed or implied, including the implied warranties of merchantability, non-infringement or fitness for a particular purpose. Red Hat warrants that the media on which the Software is furnished will be free from defects in materials and manufacture under normal use for a period of 30 days from the date of delivery to Customer. Red Hat does not warrant that the functions contained in the Software will meet Customer's requirements or that the operation of the Software will be entirely error free or appear precisely as described in the accompanying documentation. This warranty extends only to the party that purchases the Software from Red Hat or a Red Hat authorized distributor. 4. Limitation of Remedies and Liability. To the maximum extent permitted by applicable law, the remedies described below are accepted by Customer as its only remedies. Red Hat's entire liability, and Customer's exclusive remedies, shall be: If the Software media is defective, Customer may return it within 30 days of delivery along with a copy of Customer's payment receipt and Red Hat, at its option, will replace it or refund the money paid by Customer for the Software. To the maximum extent
permitted by applicable law, Red Hat or any Red Hat authorized dealer will not be liable to Customer for any incidental or consequential damages, including lost profits or lost savings arising out of the use or inability to use the Software, even if Red Hat or such dealer has been advised of the possibility of such damages. In no event shall Red Hat's liability under this agreement exceed the amount that Customer paid to Red Hat under this agreement during the twelve months preceding the action. 5. Export Control. As required by U.S. law, Customer represents and warrants that it: (a) understands that the Software is subject to export controls under the U.S. Commerce Department's Export Administration Regulations ("EAR"); (b) is not located in a prohibited destination country under the EAR or U.S. sanctions regulations (currently Cuba, Iran, Iraq, Libya, North Korea, Sudan and Syria); (c) will not export, re-export, or transfer the Software to any prohibited destination, entity, or individual without the necessary export license(s) or authorizations(s) from the U.S. Government; (d) will not use or transfer the Software for use in any sensitive nuclear, chemical or biological weapons, or missile technology end-uses unless authorized by the U.S. Government by regulation or specific license; (e) understands and agrees that if it is in the United States and exports or transfers the Software to eligible end users, it will, as required by EAR Section 740.17(e), submit semi-annual reports to the Commerce Department's Bureau of Industry & Security (BIS), which include the name and address (including country) of each transferee; and (f) understands that countries other than the United States may restrict the import, use, or export of encryption products and that it shall be solely responsible for compliance with any such import, use or export restrictions. 6. Third Party Programs.
Red Hat may distribute third party software programs with the Software that are not part of the Software. These third party programs are subject to their own license terms. The license terms either accompany the programs or can be viewed at http://www.redhat.com/licenses/. If Customer does not agree to abide by the applicable license terms for such programs, then Customer may not install them. If Customer wishes to install the programs on more than one system or transfer the programs to another party, then Customer must contact the licensor of the programs. 7. General. If any provision of this agreement is held to be unenforceable, that shall not affect the enforceability of the remaining provisions. This agreement shall be governed by the laws of the State of North Carolina and of the United States, without regard to any conflict
of laws provisions, except that the United Nations Convention on the International Sale of Goods shall not apply.
ReportLab
Copyright 2000-2010, ReportLab Inc.
Copyright 2000-2010, ReportLab Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of the company nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OFFICERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
SNMP++
Copyright 2001-2010 Jochen Katz, Frank Fock.
Copyright 2001-2010 Jochen Katz, Frank Fock. This software is based on SNMP++ 2.6 from Hewlett Packard: Copyright 1996 Hewlett-Packard Company. ATTENTION: USE OF THIS SOFTWARE IS SUBJECT TO THE FOLLOWING TERMS. Permission to use, copy, modify, distribute and/or sell this software and/or its documentation is hereby granted without fee. User agrees to display the above copyright notice and this license notice in all copies of the software and any documentation of the software. User agrees to assume all liability for the use of the software; Hewlett-Packard and Jochen Katz make no representations about the suitability of this software for any purpose. It is provided "AS-IS" without warranty of any kind, either express or implied. User hereby grants a royalty-free license to any and all derivatives based upon this software code base.
VI Java
Copyright 2008 VMware, Inc. All Rights Reserved.
Copyright (c) 2008 VMware, Inc. All Rights Reserved. Copyright (c) 2009 Altor Networks. All Rights Reserved. Copyright (c) 2009 NetApp. All Rights Reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of VMware, Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL VMWARE, INC. OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Webware
Copyright 1999-2010 by Chuck Esterbrook; All rights reserved.
The Gist

Webware for Python is open source, but there is no requirement that products developed with or derivative to Webware become open source. Webware for Python is copyrighted, but you can freely use and copy it as long as you don't change or remove this copyright notice. The license is a clone of the original OSI-approved Python license. There is no warranty of any kind. Use at your own risk. Read this entire document for complete, legal details.

Copyright

Copyright 1999-2010 by Chuck Esterbrook; All rights reserved.

License

Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the names of the authors not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission.

Disclaimer

THE AUTHORS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

Trademarks

All trademarks referred to in the source code, documentation, sample files or otherwise, are reserved by their respective owners.
bsddb3
Copyright 1999-2011, Digital Creations, Fredericksburg, VA, USA and Andrew Kuchling; All rights reserved.
Copyright 1999-2011, Digital Creations, Fredericksburg, VA, USA and Andrew Kuchling; All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions, and the disclaimer that follows. Redistributions in binary form must reproduce the above copyright notice, this list of conditions, and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of Digital Creations nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY DIGITAL CREATIONS AND CONTRIBUTORS *AS IS* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL DIGITAL CREATIONS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
ipaddr
Copyright 2007 Google Inc.
ipaddr Copyright 2007 Google Inc Licensed to PSF under a Contributor Agreement. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. http://code.google.com/p/ipaddr-py/
mxDateTime
Copyright 2000-2009 eGenix.com Software GmbH, 1997-2000 IKDS Marc-Andre Lemburg.
EGENIX.COM PUBLIC LICENSE AGREEMENT VERSION 1.1.0
1. Introduction

This "License Agreement" is between eGenix.com Software, Skills and Services GmbH ("eGenix.com"), having an office at Pastor-Loeh-Str. 48, D-40764 Langenfeld, Germany, and the Individual or Organization ("Licensee") accessing and otherwise using this software in source or binary form and its associated documentation ("the Software").

2. License

Subject to the terms and conditions of this eGenix.com Public License Agreement, eGenix.com hereby grants Licensee a non-exclusive, royalty-free, world-wide license to reproduce, analyze, test, perform and/or display publicly, prepare derivative works, distribute, and otherwise use the Software alone or in any derivative version, provided, however, that the eGenix.com Public License Agreement is retained in the Software, or in any derivative version of the Software prepared by Licensee.

3. NO WARRANTY

eGenix.com is making the Software available to Licensee on an "AS IS" basis. SUBJECT TO ANY STATUTORY WARRANTIES WHICH CAN NOT BE EXCLUDED, EGENIX.COM MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, EGENIX.COM MAKES NO AND DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE SOFTWARE WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.

4. LIMITATION OF LIABILITY

EGENIX.COM SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS (INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF BUSINESS PROFITS, BUSINESS INTERRUPTION, LOSS OF BUSINESS INFORMATION, OR OTHER PECUNIARY LOSS) AS A RESULT OF USING, MODIFYING OR DISTRIBUTING THE SOFTWARE, OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OR LIMITATION OF INCIDENTAL OR CONSEQUENTIAL DAMAGES, SO THE ABOVE EXCLUSION OR LIMITATION MAY NOT APPLY TO LICENSEE.

5. Termination

This License Agreement will automatically terminate upon a material breach of its terms and conditions.

6. Third Party Rights

Any software or documentation in source or binary form provided along with the Software that is associated with a separate license agreement is licensed to Licensee under the terms of that license agreement. This License Agreement does not apply to those portions of the Software. Copies of the third party licenses are included in the Software Distribution.

7. General

Nothing in this License Agreement affects any statutory rights of consumers that cannot be waived or limited by contract. Nothing in this License Agreement shall be deemed to create any relationship of agency, partnership, or joint venture between eGenix.com and Licensee. If any provision of this License Agreement shall be unlawful,
void, or for any reason unenforceable, such provision shall be modified to the extent necessary to render it enforceable without losing its intent, or, if no such modification is possible, be severed from this License Agreement and shall not affect the validity and enforceability of the remaining provisions of this License Agreement. This License Agreement shall be governed by and interpreted in all respects by the law of Germany, excluding conflict of law provisions. It shall not be governed by the United Nations Convention on Contracts for International Sale of Goods. This License Agreement does not grant permission to use eGenix.com trademarks or trade names in a trademark sense to endorse or promote products or services of Licensee, or any third party. The controlling language of this License Agreement is English. If Licensee has received a translation into another language, it has been provided for Licensee's convenience only.

8. Agreement

By downloading, copying, installing or otherwise using the Software, Licensee agrees to be bound by the terms and conditions of this License Agreement.
For questions regarding this License Agreement, please write to:
eGenix.com Software, Skills and Services GmbH
Pastor-Loeh-Str. 48
D-40764 Langenfeld
Germany
omniORB
Copyright AT&T Laboratories Cambridge, Apasphere Ltd and others
This is omniORB 4.1.6. omniORB is copyright Apasphere Ltd, AT&T Laboratories Cambridge and others. It is free software. The programs in omniORB are distributed under the GNU General Public Licence as published by the Free Software Foundation. See the file COPYING for copying permission of these programs. The libraries in omniORB are distributed under the GNU Lesser General Public Licence. See the file COPYING.LIB for copying permission of these libraries. We impose no restriction on the use of the IDL compiler output. The stub code produced by the IDL compiler is not considered a derived work of it.
pexpect
Copyright 2008 Noah Spurrier
Pexpect Copyright 2008 Noah Spurrier Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. http://pexpect.sourceforge.net/
prototype
Copyright 2005-2010 Sam Stephenson
Copyright 2005-2010 Sam Stephenson Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
scriptaculous
Copyright 2005-2008 Thomas Fuchs
Copyright (c) 2005-2008 Thomas Fuchs (http://script.aculo.us, http://mir.aculo.us) Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. http://madrobby.github.com/scriptaculous/license/
Legal Information
Copyright 2012 BMC Software, Inc. BMC, BMC Software, and the BMC Software logo are the exclusive properties of BMC Software, Inc., are registered with the U.S. Patent and Trademark Office, and may be registered or pending registration in other countries. All other BMC trademarks, service marks, and logos may be registered or pending registration in the U.S. or in other countries. All other trademarks or registered trademarks are the property of their respective owners.
The information included in this documentation is the proprietary and confidential information of BMC Software, Inc., its affiliates, or licensors. Your use of this information is subject to the terms and conditions of the applicable End User License agreement for the product and to the proprietary and restricted rights notices included in the product documentation.

BMC Software, Inc.
5th Floor
157-197 Buckingham Palace Road
London SW1W 9SP
United Kingdom
Tel: +44 (0) 800 279 8925
Email: customer_support@bmc.com
Web: http://www.bmc.com/support
Table of Contents
1 Table of Contents 2 Getting started 2.1 Introduction to Discovery 2.1.1 Discovery communication 2.1.2 Windows discovery 2.2 The Community Edition 2.3 The BMC Atrium Discovery Virtual Appliance 2.3.1 Installing a BMC Atrium Discovery Virtual Appliance 2.4 Using BMC Atrium Discovery 2.4.1 Logging in to BMC Atrium Discovery 2.4.2 Performing an initial discovery scan 2.4.3 Examining scan results 2.4.4 Rescanning with credentials 2.4.5 Enabling other users 2.4.6 Installing Windows discovery proxies 2.4.7 Next steps 2.4.8 Logging in to the appliance command line 2.5 Deployment considerations 3 Legal Information
Getting started
The BMC Atrium Discovery Getting Started guide is intended for anyone who is starting to evaluate, use, or deploy BMC Atrium Discovery or BMC Atrium Discovery Community Edition. It describes the Community Edition of BMC Atrium Discovery, how to log in to and begin using BMC Atrium Discovery, how to set up and run a virtual machine installation of BMC Atrium Discovery, and deployment considerations.

Goal: Learn the basics of the BMC Atrium Discovery product and discovery service.
Related information: Introduction to Discovery
Benefit: Understand key concepts and core functionality so that you can begin using discovery right away.

Goal: Get up and running and scan up to 60 devices in your production environment.
Related information: The Community Edition
Benefit: Deploy BMC Atrium Discovery for baseline projects using a free version of the product ready to use out of the box.

Goal: Install, set up, and run a BMC Atrium Discovery virtual machine.
Related information: The BMC Atrium Discovery Virtual Appliance
Benefit: Run BMC Atrium Discovery for production as well as for testing and validation.
Goal: Perform an initial scan, configure credentials, and install Windows proxies.
Benefit: Quickly perform scans with credentials on both UNIX and Windows systems to perform more detailed analysis.

Goal: Use BMC Atrium Discovery to map software applications that span multiple hosts.
Benefit: Optimize the configuration, operation, and the quality of the discovered data.
PDF Download You can download a PDF version of the Getting Started information here. Be aware that the PDF document is a snapshot and will be outdated should updates occur on the wiki. The PDF file is generated from the wiki on each request so it may take some time before the download dialog box is displayed.
Introduction to Discovery
The discovery service identifies systems in the network and obtains relevant information from them as quickly as possible and with the lowest impact, using a variety of different tools and techniques to communicate.
UTF-8 Data
BMC Atrium Discovery stores and supports all character data as UTF-8, but does not support direct discovery of data outside the basic ASCII character set. If such data is discovered, BMC Atrium Discovery is unable to map it to UTF-8 and invalid characters are introduced into the discovery data. Windows proxies do attempt some conversion of this data.
UNIX discovery
The BMC Atrium Discovery appliance is UNIX-based and uses the discovery service to determine the type and version of the operating system. The discovery service on the appliance attempts to use ssh (or, if desired, telnet/rlogin) to access the host, performs connectivity checks on known ports, and attempts to log on with stored credentials and run discovery commands.
Windows discovery
Windows discovery is performed using Windows proxies. This is application software that runs on a number of Windows hosts on the network. For more information, see Windows discovery.
Discovery communication
This section describes communication between the BMC Atrium Discovery appliance, Windows proxies, and discovery targets.
[Diagram: discovery communication between the appliance, Windows proxies, and discovery targets, showing the protocols used (FTP, SSH, telnet, HTTP, Windows RPC, SNMP, rlogin, Discovery for z/OS Agent) and closed ports.]
WMI
The ports that are used by WMI discovery methods and the corresponding assigned ports are described in the following table.

Port Number   Port assignment
135           DCE RPC Endpoint Manager; DCOM Service Control
1024-1030     Restricted DCOM. One of these ports is used after initial negotiation.
1024-65535    Unrestricted DCOM. One of these ports is used after initial negotiation.
139           NetBIOS Session Service
445           Microsoft Directory Services SMB
All WMI communication from BMC Atrium Discovery is sent with Packet Privacy enabled. If the host being discovered does not support Packet Privacy (for example, if it runs a version earlier than Windows Server 2003 with Service Pack 1 (SP1)), the flag is ignored and WMI returns the requested information.
By default, WMI (DCOM) uses a randomly selected TCP port between 1024 and 65535. To simplify configuration of the firewall, you should restrict this usage if you scan through firewalls. See To set the DCOM Port Range for more information.
These settings should be restricted on the target host, not the Windows proxy host.
1. Using a registry editor, create the key HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc\Internet
2. Within that key, create a REG_MULTI_SZ (Multi-String Value) called Ports.
3. Enter the port(s) or port range you want to use. The Windows proxy uses only one port; however, if other DCOM applications are in use on that machine, you might need to enable a larger range.
4. Create a REG_SZ (String Value) called PortsInternetAvailable and give it the value Y.
5. Create a REG_SZ (String Value) called UseInternetPorts and give it the value Y.
6. Restart the computer.
You should also read the relevant Microsoft article about this issue: How to configure RPC dynamic port allocation to work with firewalls
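The registry steps above can also be captured in a .reg file and imported on the target host. This is a sketch only: the port range 5000-5020 is purely illustrative (substitute the range your firewall policy allows), and the Ports value must be encoded as REG_MULTI_SZ (hex(7), UTF-16LE) when expressed in .reg format.

```
Windows Registry Editor Version 5.00

; Illustrative only: restrict DCOM to ports 5000-5020.
; hex(7) below is the REG_MULTI_SZ encoding of the string "5000-5020".
[HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc\Internet]
"Ports"=hex(7):35,00,30,00,30,00,30,00,2d,00,35,00,30,00,32,00,30,00,00,00,00,00
"PortsInternetAvailable"="Y"
"UseInternetPorts"="Y"
```

Remember that these settings go on the target host, not the Windows proxy host, and a restart is still required after importing.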
RemQuery
The ports that are used by RemQuery discovery and the corresponding port assignments are described in the following table.

Port Number   Port assignment
139           NetBIOS Session Service
445           Microsoft Directory Services SMB
TCP 139 is the NetBIOS Session Service. Some versions of Windows (particularly 9x/NT4) and Samba 3.x or earlier run SMB on NetBIOS over TCP using port 139. Newer versions default to running SMB directly over TCP on port 445. Windows XP/2003/Vista/2008 and later and Active Directory networks use SMB directly over TCP 445.
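As a quick way to see which SMB transport a target will answer on, the port distinction above can be probed with a plain TCP connection check. This is an illustrative sketch, not product code; the function names are ours.

```python
import socket

# The two SMB-related ports discussed above.
SMB_PORTS = {139: "NetBIOS Session Service", 445: "SMB directly over TCP"}

def is_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def probe_smb(host):
    """Map each SMB-related port to whether the target accepts connections on it."""
    return {port: is_port_open(host, port) for port in SMB_PORTS}
```

A host answering only on 445 is likely a modern Windows or Samba 4 system; one answering on 139 may be running SMB over NetBIOS.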
Proxy port changes in 8.3 SP2 In BMC Atrium Discovery 8.3 SP2, proxies are not limited to the default ports. It is also possible to install multiple proxies of each type on a single host. Consequently, in BMC Atrium Discovery 8.3 SP2 you must check the proxy manager to determine which ports the proxies are using. The defaults are the same as previous releases, but installations of additional proxies use incremental ports. You can also use the proxy manager to modify the port that each proxy uses.
Workgroup Windows proxies exist only in pre-8.2 releases; they have been removed from the current release, so Workgroup Windows proxy support applies only to pre-8.2 Windows proxies. All of their functionality has been moved into Active Directory Windows proxies.
J2EE Discovery
The port information used for J2EE discovery is determined by the patterns used to discover the particular J2EE Application Server. If no port information is discovered, then the default port is used. In addition, for full extended discovery, the port for the database that the J2EE Application Server is using is also required. This is dependent on the way that these servers are configured in your organization. The following table details the default port.

Port Number   Port Assignment   Use
7001          JMX               WebLogic
SQL discovery
The port information used for SQL discovery is derived from the patterns used to discover the particular database. This is dependent on the way that databases are configured in your organization. The following table details the default ports.
Discovery of vCenter Discovery of vCenter uses standard host discovery with the creation of a vCenter SI triggered on a discovered vCenter process.
Windows discovery
Because the methods that are used to access Windows hosts are only available from Windows systems, Windows discovery requires a Windows proxy host. Windows discovery is handled in one of the following ways:

Credential Windows proxy: a BMC Atrium Discovery application that runs on a customer-provided Windows host and uses credentials supplied by the BMC Atrium Discovery appliance to perform Windows discovery.

Active Directory Windows proxy: a BMC Atrium Discovery application that runs on a customer-provided Windows host that is part of an Active Directory domain or Workgroup. The user that the discovery service runs as is configured after the Windows proxy is installed. Where that user is configured on hosts in the domain, the Windows proxy can log in and run discovery commands. The Active Directory Windows proxy does not use any credentials entered using the BMC Atrium Discovery user interface.

The Windows proxy scans Windows hosts on behalf of the discovery service on the BMC Atrium Discovery appliance. Benefits and limitations of these approaches are described in the following sections.
recommended. If both proxies are being used intensively then discovery performance would be impacted.
Any Windows host that is to be discovered by the Active Directory Windows proxy must be visible from the BMC Atrium Discovery appliance and the Windows proxy. The discovery engine on the appliance attempts to use ssh (or telnet/rlogin) to access the host, performs connectivity checks on known ports, and uses SNMP to get information from the host before requesting the Windows proxy to log on and run discovery commands. Windows proxies can also be configured with the details of the appliances that are permitted to connect to them. Authentication of incoming requests is handled by the standard CORBA connection handling features.
Processing sequence
The main discovery service uses the Active Directory Windows proxies in preference to a Credential Windows proxy. This includes the initial heuristics processing. The following steps describe the initial discovery sequence for a Windows host. This process assumes that one appliance, two Active Directory Windows proxies, and a Credential Windows proxy are present. It also assumes that the only detail known about the scan target is the IP address.
1. An IP address is passed to the discovery service to scan.
2. The discovery service performs initial diagnostics such as SNMP, HTTP HEAD requests, and a port scan to determine the type of operating system.
3. The operating system is determined to be Windows.
4. If the Windows host is part of a domain, then the IP address is passed to those Active Directory Windows proxies first.
5. The Active Directory Windows proxy attempts to log in to the host. If it is unsuccessful, it informs the discovery service. Thereafter, Windows proxies are used in the order defined in the Managing Windows proxies page.
Terms and Conditions - opens a page that shows the license terms of the Community Edition.
Additional limitations
In addition to the limitation on discovery, the following features are disabled:
Integration SDKs and Export (CSV, XML)
Productised Integrations
Single Sign On
Multi-Appliance Consolidation
Mainframe discovery
Support
For information on how the Virtual Appliance is supported, see supported virtualization platforms.
Community Edition
Users of the Community Edition can get support from the BMC Community. This is best achieved by reading and posting to the public forums.

In addition to those products listed above, the BMC Atrium Discovery Virtual Appliance will also run in the following tools:
VMware Player 4: Available here: https://my.vmware.com/web/vmware/free#desktop_end_user_computing/vmware_player/4_0
VMware Workstation 8: Available here: https://my.vmware.com/web/vmware/info/slug/desktop_end_user_computing/vmware_workstation/8_0
A link to the relevant VMware support site is provided below: VMware support
Specifications
The following sections describe the default specifications of the Virtual Appliances supplied by BMC.
Production Appliance
CPU: 1
RAM: 2048 MB
Hard Disk: 1 x SCSI 55 GB (set to grow as necessary in 2 GB file increments)
CD/DVD Drive: 1 (auto detect)
Network Interface Cards: 1 (eth0), bridged, configured to use DHCP to obtain an IP address.

The Network Interface Card (eth0) is configured to use DHCP. However, this can be changed to use a static IP. During the install process, the following network configuration is required:
DHCP or static IP for the eth0 interface
If static IP, then provide the following:
Appliance IP Address
Gateway
Subnet Mask
DNS for address lookup
Defaults
For initial testing at a small scale, the default configuration is sufficient. For production use, or use at scale, you must increase the RAM, CPU, and disk configuration in accordance with the sizing guidelines. One size of Virtual Appliance is supplied, because a single size simplifies delivery and does not require you to download various sized VMs for various applications such as scanning and consolidation appliances. 2 GB RAM is insufficient to activate a TKU in an acceptable time. To use a TKU in testing, you must increase the amount of RAM in the system to 4 GB or more.
No additional software is supported on the appliance BMC Atrium Discovery is built as an appliance that is not intended to have any additional software installed on it, with the single exception of BMC PATROL. If you have an urgent business need to install additional software on the appliance, please contact BMC Customer Support before doing so.
No OS customizations are supported on the appliance The BMC Atrium Discovery software is delivered as an appliance model (either virtual or physical), and includes the entire software stack from a Linux operating system to the BMC application software. The operating system must not be treated as a general-purpose operating system, but rather as a tightly integrated part of the BMC Atrium Discovery solution. As such, customizations to the operating system should only be made at the command line level if explicitly described in this online documentation, or under the guidance of BMC Customer Support. If you have an urgent business need to change any operating system configuration, please contact BMC Customer Support before doing so.
Pursuant to the terms of VMware's VMware Ready program, BMC Atrium Discovery is a qualifying partner product: it has been approved by VMware as meeting all of the requirements of the VMware Ready program. BMC Atrium Discovery is certified in the Application Software category of the VMware Ready program and is fully supported by BMC. The product has been fully tested to ensure reliable operation under fully-loaded conditions.
This certification can be validated via this article, in the Installing the virtual appliance guide, and on VMware's Solution Exchange. VMware and VMware Ready are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions.
You must add a CD-ROM drive to the virtual machine before you can install VMware tools. Do this before starting the virtual machine.
The major steps in this procedure are outlined below. For full information, see the VMware documentation.
1. Copy and extract the BMC Atrium Discovery Virtual Appliance zip file to the local file system.
2. In VMware Virtual Infrastructure, from the File menu, select Deploy OVF Template. The wizard guides you through the rest of the process described here.
3. Enter the file name of the BMC Atrium Discovery OVF file into the Filename dialog box, or click Browse... to locate the file. Click Next.
4. Review the details provided. Click Next.
5. Provide a name for the virtual machine. Click Next.
6. Choose a datastore location for the virtual machine. Click Next.
7. Choose thin or thick disk format. The default is thick, which pre-allocates all disk space. Click Next.
8. Review the configuration and click Finish to proceed. A progress bar is displayed while the virtual machine is deployed.
9. Click the Hardware tab and click Add.
10. Select DVD/CD-ROM Drive and click Next.
11. Accept the defaults for the drive. Optionally, you can now add additional disk storage, which can be managed with the disk configuration utility.
12. Click the Hardware tab and click Add.
13. Select Hard Disk and click Next.
14. Choose the disk capacity and other options. For information on recommended capacity, see the sizing guidelines.
15. Click Next.
16. Review the settings and click Finish.
17. Click OK to save the changes. You can now start the virtual machine.
1. Copy and extract the BMC Atrium Discovery Virtual Appliance zip file to the local file system.
2. In VMware Workstation, from the File menu, select Open.
3. Enter the file name of the BMC Atrium Discovery OVF file into the Filename dialog box, or select OVF as the file type and click Browse... to locate the file. Click Open.
4. Enter a name for the virtual machine and click Import. VMware Workstation imports the virtual machine and the virtual machine is displayed in the virtual machine library.
Earlier versions
VMware Workstation version 7 and earlier and VMware Player version 3 and earlier do not directly support appliances generated by their current tools. However, you can convert the OVF file using the OVF converter to import it. Note also that some of these older versions of VMware tools do not natively support Red Hat Enterprise Linux version 6. To convert, enter the following at a command prompt:
"c:\Program Files\VMware\VMware OVF Tool\ovftool" ADDM_VA_64_9.0.1_308185_ga_Community.ovf ADDM_VA_64_9.0.1_308185_ga_Community.vmx
When you have converted the file, you can install it using File > Open.
Post Installation
Important Post Installation Steps
It is important to consider the following steps after installing:
Installing VMware Tools
Setting up NTP
VMware tools
VMware Tools is required but is not installed on the shipped BMC Atrium Discovery Virtual Appliance, as the correct version of the tools is dependent on the version of the VMware host. When you have installed the BMC Atrium Discovery Virtual Appliance, you must always install VMware Tools.
5. Issue the following commands (change the XXX-nnnnnn in the RPM or tar file name below to the version presented by your VMware product):
   a. mkdir /mnt/cdrom
   b. mount -t iso9660 /dev/<device> /mnt/cdrom
      where <device> is the device name reported in step 4.
   c. Enter one of the following commands depending on whether VMware Tools is supplied on the virtual CD as an RPM, or a tar file.
      i. If you have an RPM, enter the following commands:
         rpm -i /mnt/cdrom/VMwareTools-XXX-nnnnnn.x86_64.rpm
         umount /mnt/cdrom
      ii. If there is a tar file, enter the following commands:
         cd /tmp
         tar zxvf /mnt/cdrom/VMwareTools-XXX-nnnnnn.tar.gz
         umount /mnt/cdrom
         cd vmware-tools-distrib
         ./vmware-install.pl
6. Respond to the configuration questions on the screen. Press Enter to accept the default value. If at any later point you need to re-run the VMware tools script, the default location is: /usr/bin/vmware-install.pl
Setting up NTP
It is essential to configure NTP on your BMC Atrium Discovery Virtual Appliance. If significant clock skew occurs, it can impact the functioning of the system or even prevent the BMC Atrium Discovery product from starting. VMware have documented their recommended timekeeping best practices for Linux-based VMs. The KnowledgeBase article can be found here.
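As a minimal sketch of what the NTP client configuration might look like on the appliance, an /etc/ntp.conf could contain entries such as the following. The server names below are placeholders, not product defaults; substitute the time sources recommended by your organization and the VMware timekeeping guidance.

```
# Hypothetical /etc/ntp.conf fragment - replace with your own NTP sources.
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
driftfile /var/lib/ntp/drift
```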
Next Steps
Once the Virtual Appliance is installed, it is a good time to configure it to have sufficient resources for its intended use. Doing so now is easier than extending the resources once the Virtual Appliance has been in use for some time. See Configuring the Virtual Appliance for details.
ip-address is the address of the virtual machine, which is displayed when you log in to the VMware console.
2. Log in as the system user with the system user name and password. The initial values for BMC Atrium Discovery are: User name: system Password: system
3. If you are using the Community Edition, agree to the terms and conditions of the Community Edition license by selecting the I accept the Terms and Conditions of use check box. To review the license, click the Terms and Conditions link.
4. Click Login.
5. Change your password as follows: a. For Current password, enter the password that you used to log in. b. For New password, enter your new password. c. Enter your new password, and enter it again to verify it.
Password Quality By default, the password that you enter must have at least one lowercase character, one uppercase character, one numeric character, and one special character. It must also contain a minimum of six characters. You can change this requirement by configuring password policies. See Managing security policies in the BMC Atrium Discovery Configuration Guide.
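The default policy above can be illustrated with a short check. This is a sketch, not product code: the function name is ours, and we assume "special character" means any non-alphanumeric character.

```python
import re

def meets_default_policy(password):
    """Check a password against the default policy described above:
    at least one lowercase, one uppercase, one numeric, and one special
    character, and a minimum of six characters overall."""
    return all([
        len(password) >= 6,                          # minimum of six characters
        bool(re.search(r"[a-z]", password)),         # at least one lowercase
        bool(re.search(r"[A-Z]", password)),         # at least one uppercase
        bool(re.search(r"[0-9]", password)),         # at least one numeric
        bool(re.search(r"[^A-Za-z0-9]", password)),  # at least one special (assumed: non-alphanumeric)
    ])
```

For example, the initial password "system" fails this policy, which is one reason to change it at first login.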
To start discovery
To start discovery, click START ALL SCANS. The screen is automatically refreshed to show the status of the discovery process. 1. On the discovery page, click Add New Run. For consolidation appliances, click Add New Local Run.
2. Enter the information for the snapshot discovery run in the following fields.
Range
Enter IP address information in one of the following formats: IPv4 address (for example 192.168.1.100). Labelled v4. IPv6 address (for example 2001:500:100:1187:203:baff:fe44:91a0). Labelled v6. IPv4 range (for example 192.168.1.100-105, 192.168.1.100/24, or 192.168.1.*). Labelled v4. Scanning the following address types is not supported: IPv6 link local addresses (prefix fe80::/64) IPv6 multicast addresses (prefix ff00::/8) IPv6 network prefix (for example fda8:7554:2721:a8b3::/64) IPv4 multicast addresses (224.0.0.0 to 239.255.255.255)
As you enter text, the user interface (UI) divides it into pills (discrete, editable units) when you enter a space or a comma. Based on the text entered, the pill is formatted to represent one of the previous types or presented as invalid. Click here for more information on using the pill UI. Invalid pills are labelled with a question mark. You can also paste a list of IP addresses or ranges into this field. There is no paste option on the context-sensitive (right-click) menu. Do not paste a comma-separated list of IP address information into the Range field in Firefox; this may crash the browser. You can use a space-separated list without any problems.
To edit a pill, click the pill body and edit the text. To delete a pill, click the X icon to the right of the pill, or click to edit and delete all of the text. To view the unformatted source text, click the source toggle switch. The source view is useful for copying to a text editor or spreadsheet. Click the source toggle switch again to see the formatted pill view. You can also sort the pill view by value or type. Value sorts by ascending numerical value. Type sorts by type, placing invalid pills first. Underneath the entry field is a filter box. Enter text in the filter box to only show matching pills.
Pills are not currently supported in Opera.

Level: Select the level for the discovery run. This is one of:
Sweep Scan: This will do a sweep scan, trying to determine what is at each endpoint in the scan range. It will attempt to log in to a device to determine the device type.
Full Discovery: This will retrieve all the default info for hosts, and complete full inference.
Label: Enter a label for the discovery run. Where the discovery run is referred to in the UI, it is this label that is shown.
Company: Select the company name to assign to the discovery run. This drop-down list is only displayed if multitenancy has been set up. If (No company) is displayed, or a company name that you were expecting is missing, refresh the list by clicking Lookup Companies on the CMDB Sync. See multitenancy for more information about this feature.
Click OK. The Currently Processing Runs tab is displayed with the new discovery run.
When the scan is complete, the Discovery Page is refreshed to show that the run has completed. You can see this by clicking the Recent Runs tab.
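The Range field rules described above (labelling entries as v4 or v6 and rejecting unsupported address types) can be sketched with Python's standard ipaddress module. This is illustrative only: the function name and labels are ours, and dash or wildcard ranges such as 192.168.1.100-105 or 192.168.1.* are not handled by this sketch.

```python
import ipaddress

def classify_pill(text):
    """Label a scan-range entry as 'v4', 'v6', or 'invalid', rejecting the
    unsupported address types listed for the Range field."""
    try:
        addr = ipaddress.ip_address(text)
    except ValueError:
        # Not a plain address; try a CIDR network such as 192.168.1.100/24.
        try:
            net = ipaddress.ip_network(text, strict=False)
        except ValueError:
            return "invalid"
        # IPv6 network prefixes are not supported for scanning.
        return "invalid" if net.version == 6 else "v4"
    if addr.version == 4:
        # IPv4 multicast (224.0.0.0-239.255.255.255) is not supported.
        return "invalid" if addr.is_multicast else "v4"
    # IPv6 link-local (fe80::/64) and multicast (ff00::/8) are not supported.
    if addr.is_link_local or addr.is_multicast:
        return "invalid"
    return "v6"
```

For example, 192.168.1.100 and 192.168.1.100/24 classify as v4, while 224.0.0.5 and fe80::1 are rejected.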
1. To show the reports that are available on the discovered data, click Discovery Reports. The left side of the page shows the amount of information discovered in the credential-free scan, as shown in the following illustration.
2. To show the discovery run details, click the 1 Discovery Run icon.
3. To display the reports that are available on the Discovery run, click Reports.
4. Click Possible Endpoint Host Devices (Detailed).
5. Click the chart icon in the OS Class column heading.
6. From the menu, click Pie Chart. A chart displays the breakdown of the scan by OS class, as shown in the following illustration.
7. Continue based on the dominant OS class. This enables you to model a larger proportion of your environment in the first instance, and then return to the other operating system class for more complete modeling.

If UNIX is the predominant OS class in the scanned environment, you can continue scanning by entering credentials to gather more information. Ideally, these credentials are valid for a range of computers and have sufficient rights to run system-level commands to discover richer data. Adding a credential is the same whether you add it for UNIX systems or Windows systems accessed through a Credential Windows proxy. You do not need to add credentials for systems accessed through the Active Directory Windows proxy because it uses the permissions of the user it was installed as. For more information, see Configuring host login credentials.

If Windows is the predominant OS class, you must download and install a BMC Atrium Discovery Windows proxy to perform Windows-specific discovery tasks. Go to Installing Windows discovery proxies.
Destroying hosts
If you have reached the maximum number of devices (60) permitted in the Community Edition of BMC Atrium Discovery, discovery stops and you cannot start any new discovery runs. Ideally, you should target discovery runs only at the devices that you are interested in. If you need to carry out further work and scan additional hosts, you must destroy some existing ones so that you can start discovery again.
1. Click the Infrastructure tab.
2. Click the Hosts icon to display a list of discovered hosts.
3. Select the hosts that you want to destroy by clicking the selection box at the left of the host row. You can select multiple hosts. You can also select all hosts by clicking the selection box at the left of the heading row.
4. Click Destroy.
5. Click OK to destroy the hosts, or click Cancel to return to the hosts page without destroying any hosts.
By default, in addition to the system user, the following additional users are configured in BMC Atrium Discovery:
admin
appmodel
discovery
Recommendation
Each user has an initial password in BMC Atrium Discovery that is the same as the user name. BMC recommends that you change the password when you first log in as the system user. Not doing so makes it easier for someone to gain unauthorized access to the BMC Atrium Discovery user interface. Use the system user only for configuration tasks which require system privileges.
Windows proxies are downloaded as install files from the appliance and installed onto the local Windows host. If you are not logged in to BMC Atrium Discovery from the Windows computer onto which you want to install the Windows proxy, log in using a browser on that computer. You must be logged in to the Windows host as an administrator to install Windows proxies. If an administrator user is not specified as the credential for the proxy during installation, then after installation you must grant permissions to write to the directories under C:\Program Files\BMC Software\ADDM Proxy_Type, where Proxy_Type is one of the following:
- Active Directory
- Credential
You must log in to the appliance as the system user to download the Windows proxy installers. For more information, see the Downloading a Windows proxy installer section in Installing Windows proxies.
Next steps
Now that you have scanned Windows and UNIX hosts, with and without credentials, used the Infrastructure page, and examined some reports, you are ready to explore more of BMC Atrium Discovery. For example:
Discovery
- To view discovery commands, see Managing the discovery platform scripts.
- To add SNMP credentials, see Credentials.
User security and configuration
- To integrate with LDAP, see Managing LDAP.
- To configure an SMTP server for email notification, see Configuring mail settings.
TKU and patterns
- Technology Knowledge Update (TKU) is the method used by BMC Atrium Discovery to distribute Pattern Language (TPL) patterns that you can install. See Technology Knowledge Update (TKU).
- To edit existing patterns or write new patterns to identify applications, see Pattern management.
Taxonomy
- To view the system taxonomy, see Viewing the system taxonomy.
1. Change the tideway user's password. Still logged in as the root user, change the password using the passwd command, specifying the tideway user.
2. Enter a new password when prompted. The new password is not echoed to the screen. An example session is shown below:
PDF generated from online docs at http://discovery.bmc.com BMC Atrium Discovery 9.0
# passwd tideway
Changing password for user tideway.
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
#
Additional users
The following additional users are configured on the appliance:
- The netadmin user
- The upload user
login as: netadmin
netadmin@ee-64's password:
Last login: Thu May 26 12:56:46 2011 from 192.168.0.1

Tideway Appliance Network Administration Shell
----------------------------------------------
G - Configure General Settings
I - Configure Network Interfaces
R - Exit & Reboot the appliance
Q - Exit *without* rebooting

Select option:
Select an option by entering the code letter:
- Enter G to configure the appliance hostname and the gateway details.
- Enter I to configure the appliance network interfaces.
- Enter R to exit the netadmin session and reboot the appliance.
- Enter Q to exit the netadmin session without rebooting the appliance.
If you make any configuration changes from the General Settings or Network Interfaces menu, you must reboot the appliance for the changes to take effect.
The following tables describe the items in the General Settings and Network Interface menus:
Network Interfaces menu:
- Add interface
- Remove/Restore Network Interfaces
- D - Discard changes
- C - Commit changes
- Q - Return to the main menu
Q - Quit
Enter Q to quit the netadmin session.
When an appliance is built, the upload user has no password. Before you can log in as the upload user, log in as the root user and use the passwd command to set a password.
# passwd upload
Changing password for user upload.
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
#
Deployment considerations
This section details some deployment considerations that affect the configuration, operation, and the quality of the data that you can obtain using BMC Atrium Discovery.
Do not use pfiles in Solaris Zones
Potential Data Loss
The lsof utility cannot obtain process communication information from the local zone in Oracle Solaris operating systems. Searching the Internet suggests possible workarounds using pfiles; however, using pfiles can result in data loss on the Solaris host. Oracle highlights the danger of using pfiles in the warnings section of its own documentation.
NTP Configuration
You should configure the NTP service on the Community Edition Virtual Machine. This is particularly important if the appliance will be involved in consolidation. If you do not configure NTP, the appliance time can drift backwards, preventing the datastore from validating itself, which in turn prevents the system from restarting. See Configuring the NTP client for more information.
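As an illustrative sketch only (the appliance is a Linux virtual machine; the server names below are placeholders for your organization's NTP servers), a minimal /etc/ntp.conf might contain:

```
# Placeholders - replace with your organization's NTP servers
server ntp1.example.com iburst
server ntp2.example.com iburst

# Record the local clock's frequency error between restarts
driftfile /var/lib/ntp/drift
```

After changing the configuration, restart the NTP service so that the appliance clock stays synchronized; see Configuring the NTP client for the supported procedure.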
Do not modify the path statement to correct this issue as that will cause other problems.
Legal Information
Copyright 2012 BMC Software, Inc. BMC, BMC Software, and the BMC Software logo are the exclusive properties of BMC Software, Inc., are registered with the U.S. Patent and Trademark Office, and may be registered or pending registration in other countries. All other BMC trademarks, service marks, and logos may be registered or pending registration in the U.S. or in other countries. All other trademarks or registered trademarks are the property of their respective owners. The information included in this documentation is the proprietary and confidential information of BMC Software, Inc., its affiliates, or licensors.
Your use of this information is subject to the terms and conditions of the applicable End User License agreement for the product and to the proprietary and restricted rights notices included in the product documentation. BMC Software, Inc. 5th Floor 157-197 Buckingham Palace Road London SW1W 9SP United Kingdom Tel: +44 (0) 800 279 8925 Email: customer_support@bmc.com Web: http://www.bmc.com/support
Table of Contents
1 Table of Contents
2 User guide
  2.1 First steps
    2.1.1 Logging in to the system
    2.1.2 User security
    2.1.3 Setting preferences
    2.1.4 Supported browsers
    2.1.5 User documentation
    2.1.6 For more information
  2.2 BMC Atrium Discovery user interface
    2.2.1 Navigating BMC Atrium Discovery pages
    2.2.2 Using the home page
    2.2.3 Appliance status
    2.2.4 Appliance performance
    2.2.5 Viewing summary list pages
    2.2.6 Using View Object pages
  2.3 Basic tasks
    2.3.1 Searching for data
    2.3.2 Destroying data
    2.3.3 Comparing the history of nodes
    2.3.4 Changing log levels at runtime
    2.3.5 Viewing dependency visualizations
    2.3.6 Managing attachments
    2.3.7 Exporting data
  2.4 Running discovery
    2.4.1 Viewing discovery status
    2.4.2 Stalled discovery runs
    2.4.3 Starting and stopping discovery
    2.4.4 Scanning IP addresses or ranges
    2.4.5 Excluding IP addresses and ranges from scanning
    2.4.6 Edge connectivity
  2.5 Managing your IT infrastructure
    2.5.1 Viewing details of the IT infrastructure
    2.5.2 Searching the IT infrastructure
    2.5.3 Viewing a host
    2.5.4 Viewing a cluster
    2.5.5 Viewing a host container
    2.5.6 Viewing a mainframe
    2.5.7 Viewing a network device
    2.5.8 Viewing a network printer
    2.5.9 Viewing an SNMP managed device
    2.5.10 Supported SNMP devices
    2.5.11 Viewing a subnet
    2.5.12 Viewing a software instance
    2.5.13 Reporting on the infrastructure
  2.6 Managing your business applications
    2.6.1 Viewing details of business applications
    2.6.2 Searching for application information
    2.6.3 Viewing an application instance
    2.6.4 Reporting on business applications
  2.7 Using data center baselines
    2.7.1 What is baselining
    2.7.2 Automatic grouping
    2.7.3 Manual grouping
    2.7.4 Host profile
  2.8 Managing reporting
    2.8.1 Reporting basics
    2.8.2 The main reports page
    2.8.3 Application reports
    2.8.4 Infrastructure reports
    2.8.5 Discovery reports
    2.8.6 Context-sensitive node reports
  2.9 CMDB synchronization
    2.9.1 Preparing BMC Atrium CMDB for synchronization
    2.9.2 Setting up the CMDB synchronization connection
    2.9.3 Filtering the root device node
    2.9.4 Filtering the synchronized components
    2.9.5 Filtering the CMDB CI classes
    2.9.6 Configuring continuous synchronization
    2.9.7 Configuring CMDB Sync blackout windows
    2.9.8 Default CDM Mapping
  2.10 Using the Search and Reporting service
    2.10.1 Using the Search service
    2.10.2 Using Query Language
  2.11 Using Export
    2.11.1 Introduction to BMC Atrium Discovery Export
    2.11.2 Setting up and configuring an export
    2.11.3 Running an export
    2.11.4 The export process
    2.11.5 Export APIs
3 Legal Information
User guide
The BMC Atrium Discovery User Guide describes how to use the product, and provides an overview of user interface page layouts and navigation, information about how to view details of your IT system, and guidance on searching for specific data and producing reports. It is intended for IT operations staff and managers who use and manage BMC Atrium Discovery to view, create, search, and report on data from an infrastructure or business perspective.

Goal: Learn how to start using the product at a high level.
Related information: First steps
Benefit: Understand the basics of logging in and setting preferences to make using the product easier.

Goal: Use the standard features and methods of the product by navigating the user interface.
Related information: BMC Atrium Discovery user interface
Benefit: Optimize use of the product with recommended browsers and navigate quickly through the interface.

Goal: Use basic tasks of the product.
Related information: Basic tasks
Benefit: Be able to quickly search for, change, delete, and export data.

Goal: View and report on the details of the model that represents the hardware and software items in your network.
Related information: Managing your IT infrastructure
Benefit: Understand how the data obtained from host systems during Discovery affects your IT operations and how you can optimize your infrastructure.

Goal: View and report on the details of the model that represents your business software.
Related information: Managing your business applications
Benefit: Understand the information about the software applications that are actually running on your system and how you can optimize your IT services.

Goal: Perform a baseline of your data center as an audit.
Related information: Using data center baselines
Benefit: Understand data center baselining and how it can be used to simplify tasks in BMC Atrium Discovery before large IT projects.

Related information: Managing reporting
Benefit: Use basic and advanced reporting capabilities to view data, and include reports on a dashboard to more effectively analyze, manipulate, and collaborate on the data.
Related information: CMDB synchronization
Benefit: Ensure that BMC Atrium Discovery data is accurately mapped to BMC Atrium CMDB, providing a single source of truth for CIs and their relationships in the CMDB.

Goal: Search for data in BMC Atrium Discovery using the query language and search service, a high-level mechanism for retrieving data stored in the datastore.
Related information: Using the Search and Reporting service
Benefit: Using the query language, be able to specify which nodes to retrieve and the attributes of those nodes to display. Understand how to enter search queries and the query language itself.

Goal: Export data from BMC Atrium Discovery.
Related information: Using Export
Benefit: Publish data from BMC Atrium Discovery directly into relational databases and into any system that can accept CSV files.
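As a rough illustration of the kind of query the search service accepts (the syntax and attribute names here are recalled from memory and simplified; see Using Query Language for the authoritative reference), a query names a node kind, an optional filter, and the attributes to display:

```
search Host where os like 'Solaris%' show name, os
```

This sketch would retrieve Host nodes whose operating system name starts with "Solaris" and display their name and os attributes.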
First steps
This section describes BMC Atrium Discovery and explains how you can start using the system, configure user security, access user documentation, and review additional resources. Logging in to the system User security Setting preferences Supported browsers User documentation For more information
After you log in, the main BMC Atrium Discovery home page is displayed (see Using the home page).
User security
Your BMC Atrium Discovery administrator will set up your account, provide a password, and define your access rights. Your user account and password may be the same as those you use throughout your organization (single sign-on), or they may be specific to BMC Atrium Discovery.
It is your responsibility to maintain the security of your system. Ensure you follow your organization's security policy at all times. Configuring BMC Atrium Discovery security to conform to your password policies is described in Managing security policies.
All actions on the system are recorded against a user's ID for audit purposes. For this reason:
- Never give anyone else your ID or password.
- Always log out when you have finished using BMC Atrium Discovery.
Your view of BMC Atrium Discovery is controlled by your user ID and your access rights. You may not be able to access all the options described here.
3. On the Change Password for Tideway User page, complete the following fields:
- Current password: Enter your current password.
- New password: Enter your new password.
- Verify password: Enter your new password again to verify it.
By default, the password that you enter must meet a number of rules. Refer to the Password Rules read-only field on this page for details. Consult your BMC Atrium Discovery administrator for details of the password policy, if any, enforced in your organization.
Logging out
To ensure system security, always log out when leaving the system. Click the Logout link to the right of the primary navigation bar that is displayed on each page, as illustrated in the following screen.
BMC Atrium Discovery includes a timeout that logs you out automatically when you have been inactive for 30 minutes.
The timeout does not apply on user interface pages that automatically refresh, such as the Discovery home page. Pages without automatic refresh, such as the Reports home page, time out after 30 minutes.
Setting preferences
3. Specify the following settings:
- The number of search results to display per page. For more information about running searches, see Searching for data.
- The size of the Recent Items list. For more information about recent items, see Default Icons.
- The History view type, either Comparative or Raw. For further information about the display of history, see Comparing the history of nodes.
- The default delimiter for exported data: a Comma, a Semicolon, or a Tab.
- Whether to include a byte-order mark (BOM) in CSV export files. Including the BOM is required for correct handling of multibyte characters in some applications.
- The XML version required for reports, either 1.0 or 1.1.
- The default paper size for PDF reports.
4. After you have entered your settings, click Apply.
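To illustrate the BOM setting: a UTF-8 byte-order mark is the three bytes EF BB BF at the very start of the file, before the first delimiter-separated row. The sketch below is not a BMC tool and the file name is arbitrary; it just shows what a BOM-prefixed, semicolon-delimited export looks like at the byte level:

```shell
# Write a tiny BOM-prefixed CSV, as an export with BOM enabled would,
# then dump its first three bytes in hex.
printf '\357\273\277' > example_export.csv          # UTF-8 BOM: EF BB BF
printf 'name;os\nhost1;Linux\n' >> example_export.csv
head -c 3 example_export.csv | od -An -tx1          # ef bb bf
```

Applications that expect the BOM use those three bytes to detect UTF-8 encoding, which is why enabling it helps with multibyte characters.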
Supported browsers
There are several levels of browser support:
Comprehensive
On such browsers, we have thoroughly tested the functionality and will consider it a bug if something does not work. These are the browsers we recommend using:
- Firefox 3 and above
- Internet Explorer 8
- Internet Explorer 9
Internet Explorer Compatibility Mode
Internet Explorer Compatibility Mode is not supported, as this forces the browser to behave as an older (unsupported) version. This behavior can be controlled with the Internet Explorer settings:
- Tools > Compatibility View
- Tools > Compatibility View Settings
Ad-hoc
On such browsers, no formal testing has been undertaken, although some informal testing has happened and the product may work:
- Chrome 11 and above
- Opera 11
- Safari 4
- Internet Explorer 10 (supported only in 9.0 SP2 and later versions of BMC Atrium Discovery; earlier versions do not support Internet Explorer 10)
Deprecated
The following browsers are no longer supported by BMC Atrium Discovery. If you access the BMC Atrium Discovery user interface with one of these browsers, a warning message is displayed. You can continue to use the browser, though some features may not work.
- Internet Explorer 6
- Internet Explorer 7
Support for Compatibility and Quirks modes for the versions listed is also deprecated in later Internet Explorer browsers.
Known issues
For a list of known issues for browsers that do not have comprehensive support, see Limitations and restrictions of this version.
User documentation
Additional resources
If you encounter a problem, the following resources are available:
First, check the available BMC Atrium Discovery product documentation. If you need further assistance, contact your System Administrator. If the problem still cannot be resolved, contact Customer Support.
BMC recommends that you use Firefox 3 and above or Internet Explorer 8.0 and above with this version of BMC Atrium Discovery. If the domain security has been set to High (or above) in Internet Explorer, BMC Atrium Discovery's Javascript-based features will not function. We recommend that you amend the security settings, or add the application to the trusted sites list. Flash Player is also required to display certain reports. For more advice, contact Customer Support. See Supported browsers for more detailed information.
The search options menu enables you to search the database for a keyword. To close this menu, click on the Search Options icon again or alternatively, click the Close button in the top-right corner of the menu. For more information on the available search options in BMC Atrium Discovery, see Searching for data.
Dynamic toolbox
A dynamically populated toolbox is located below the primary navigation bar.
Default icons
By default, the dynamic toolbox contains the following icons, from left to right:
- Available Dashboards: Displays a drop-down list showing the dashboards that are available. See Dashboards for more information.
- Groups: Displays the Groups drop-down so that you can manage host groups. See Manual grouping for more information.
- User General Preferences: Displays the general user preferences page.
- Application Preferences: Displays the application preferences page.
- Change Password: Displays the change password page. For details, see Changing Your password.
Icons that show a menu when clicked have a down arrow in the bottom-right corner. Click the User icon to display the dialog.
- Appliance Status: Displays the appliance status page. Click the Appliance Status icon to show the dialog box.
To configure the Appliance Status, see Appliance status.
- Recent Items: Displays the Recent Items list. Click the Recent Items icon to show the dialog box. The recent items menu lists your ten most-recently accessed pages, enabling you to navigate back to them easily. This list is retained between sessions. Click Clear to clear the list. When items are deleted from the system, they are not removed from the recent items list.
- Help Documentation: Displays the contents page of the BMC Atrium Discovery user documentation available in this release. See User documentation.
- Support & Community: Displays the Community page of the Tideway website. From here you can contact Customer Support or access the current BMC Atrium Discovery online documentation and other information about the company.
- About: Displays information about this version of BMC Atrium Discovery. Click the Help icon to display the dialog box.
Dynamic icons
The Dynamic toolbox also contains icons which are displayed on other nodes, such as when you are viewing the details of a host. The following icons are provided:
- Attachments: Lists the attachments attached to this node and enables you to manage them. For details, see Managing attachments. Click the Attachments icon to display the dialog box.
- Discovery Session Logs: Displays a list of Discovery session logs. This is only available for host nodes. Click the Discovery Session Logs icon to display the dialog box. Select a session log from the list and click the link. For more information about Discovery Session Logs, see Appendix A in the BMC Atrium Discovery Configuration Guide.
The Session Logs icon is not available, or has been disabled, if there are no logs associated with the host you have selected.
- Additional Links: Lists any external links associated with this node. For example, the drop-down dialog below provides links to different search engines which may provide further useful information about this node. Click the Additional Links icon to show the drop-down dialog. For information about configuring external links, see Configuring additional links.
Actions
The Actions drop-down menu is dynamic and changes depending on the page on which it is displayed. On a Host List page, the Actions drop-down menu has the following options:
- Only show selected: Filters the list to show only those nodes that are selected.
- Export as CSV: Exports the selected nodes as CSV files.
- Scan: Performs a snapshot scan of the selected nodes.
- Create Host Profiles: Creates Host Profiles of the selected nodes.
- Manual Groups: Opens the manual grouping menu.
- Destroy: Destroys the selected nodes.
Reports
The Reports drop-down menu is dynamic and changes depending on the page on which it is displayed. It contains a list of reports which may be run for ALL items in the list (which may be on multiple pages), or for the selected items.
- To run a report for all items in a list page, ensure that no items are selected by clicking the None selection option on the left side of the page. Then, from the Reports menu, select the report that you want to run.
- To run a report for the selected items in a list page, select the report from the Reports menu.
Visualizations
The Visualizations menu is dynamic and changes depending on the page on which it is displayed. It contains a list of visualizations which may be run for ALL items in the list (which may be on multiple pages), or for the selected items.
- To run a visualization for all items in a list page, ensure that no items are selected by clicking the None selection option on the left side of the page. Then, from the Visualizations menu, select the visualization that you want to run.
- To run a visualization for the selected items in a list page, select the visualization from the Visualizations menu.
On subsequent logins, the most recently used dashboard is displayed.

On any user-defined dashboard, each section can be selected by clicking and holding the heading bar, and can then be moved around the page. When you move the section to a new position, the outline is moved and the other sections are rearranged to accommodate the new position. You can also remove a section by clicking the close icon in the top-right corner of the section, or refresh it with the refresh icon to the left of the close icon. The changes that you make are immediately available to all other users.

You can configure the dashboards to provide your own custom home page. The dashboards provided are described in Types of dashboards; their use and how to customize them are described in Using and customizing dashboards. The standard installed dashboards cannot be edited, though they can be copied, and the copy can be edited.
Types of dashboards
These dashboards are supplied by default and are described in the following sections:
- Default
- Baseline
- CMDB Sync Status
- Discovery
- Hardware Reference Data
- Host Network Summary
- Host Overview
- OS Product Lifecycle Analysis
- Operating Systems
- Server Consolidation
- Software products - Apache Webserver
- Software products - Oracle
- Software Products Lifecycle Analysis
- Software Products Summary
- Virtualization
Default
The Default dashboard provides a concise summary of BMC Atrium Discovery's view of your IT infrastructure. It provides the following sections:
- Discovery Status
- Automatic Grouping
- Community Update
- Quick View
Baseline
The Baseline dashboard provides information and reports which help you in baselining a data center:
- Infrastructure summary
- Software Products By Publisher
- Host count by OS Class
- Average number of Network Connections
- Automatic Grouping
- Baseline Support
- Host OS distribution
- Virtual vs. Physical
Discovery
The Discovery dashboard provides information on the current status and results of discovery. It provides the following sections:
- Discovery Radar
- Current UNIX Access
- Current Windows Access
- Discovery Dashboard Reports
Click the "Discovery is currently" link next to the green RUNNING or red STOPPED button to display the Discovery page.
Hardware Reference Data
The Hardware Reference Data dashboard provides information on the physical and thermal characteristics of the hosts in your IT infrastructure. It provides the following sections:
- Host Compute Power
- Host BTU/h per U size
- Host Compute Power and Watts
- Hardware Analysis Reports
- Host Watts per U size
Host Overview
The Host Overview dashboard provides information on the numbers of hosts, the distribution of host types, and the memory and compute power of hosts in your IT infrastructure. Compute power is a rough guide to the power of a host and is determined by multiplying the clock speed by the number of logical CPUs. It provides the following sections:
- Host count
- Hosts by Type
- Host Compute Power and RAM
- Host RAM distribution in GB
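The compute-power heuristic described above can be sketched as a simple calculation. The units and example figures below are illustrative assumptions, not product output:

```shell
# Compute power ~= clock speed x number of logical CPUs (per the text above).
# $1 = clock speed in MHz, $2 = logical CPU count (illustrative units).
compute_power() {
  echo $(( $1 * $2 ))
}

compute_power 2400 8   # a 2.4 GHz host with 8 logical CPUs -> 19200
```

Because it ignores CPU architecture and generation, this figure is only a rough comparative guide, which is how the dashboard uses it.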
Operating Systems
The Operating Systems dashboard provides a breakdown of operating system types and versions. It also has links to detailed reports on patches and kernels. It provides the following sections:
- Windows Operating Systems
- UNIX Operating Systems
- Operating Systems
- Host Resiliency - Kernel
- Host Resiliency - Patches
Server Consolidation
The Server Consolidation dashboard highlights the hosts in your environment which are candidates for consolidation. This is particularly useful when planning a data center consolidation project, as it highlights the areas in which you can make the greatest gains. It provides charts showing breakdowns of the following:
- Host CPU speed and count
- Host Compute Power
- Hosts By Vendor
- Host Type by RAM in GB
Virtualization
The Virtualization dashboard provides an overview of software running on virtual machines, and the physical hosts on which the virtual machines are hosted. An example Virtualization dashboard is shown in the following screen.
Channels
The content in each section shown on the home page is referred to as a channel. A variety of content types are available for use in channels; for example, you can configure video content, web feeds, and RSS feeds, in addition to the standard BMC Atrium Discovery reports. For example: Web feed channel
Video channel
Creating channels
Channels are created using the Administration > Channels page.
Channels on Reports Pages
You can include any of the other channels on the report page in a dashboard. For example, you could add a few favorite reports to your dashboard. See Reporting basics for a list of reports that can be added.
Editing dashboards
To edit a dashboard, select the Copy current Dashboard link from the Dashboard drop-down panel, or select an existing copy of the dashboard from the Dashboard drop-down panel. The copy of the dashboard displays an Edit link which enables you to add or remove sections and rename it as required.
2. On the channels selector panel, rename the dashboard using the Dashboard Title field. This renames the existing dashboard rather than saving a copy under a new name.
3. Click Commit to change the name.
4. Select or deselect a channel to add it to or remove it from the dashboard. The changes that you make are immediate and are available to all other users.
Appliance status
A dialog is displayed when you click the Appliance Status icon in the dynamic toolbox (see Dynamic toolbox). The appliance status page is displayed when you click the link in the dialog. The link can have one of the following descriptions:
- No Problems Detected: The status is green. No problems have been detected.
- Status Information Available: The status is green, but at least one potential problem has been detected that has an information-level message.
- Minor Problems Detected: At least one minor problem has been detected with the appliance.
- Major Problems Detected: At least one major problem has been detected with the appliance.
- Critical Problems Detected: At least one critical problem has been detected with the appliance.
You can also access this page by clicking the Administration tab from the primary navigation bar and then clicking the Baseline Status icon in the Appliance section. One or more of the following actions can be configured to occur if the appliance status is reported as critical, major, or minor:
- Send Email
- Restrict Network Access
- Stop Discovery
The configuration checks that are performed for each item on the Appliance Status page are described in Baseline Configuration. The following screen illustrates the Appliance Baseline page and the various status states for each listed appliance.
Where an item has a report or configuration page (such as Appliance Specification or Appliance Configuration Files Tripwire), a hyperlink to that page or report is provided. Where no such page or report is available (such as Appliance Firewall), there is no hyperlink. The following buttons are provided at the bottom of the page:
- Check Baseline Now: Performs an immediate check of the appliance against the baseline.
- Update All Baselines: Updates all checks in the baseline status to their current values.
- Configure Actions ...: Opens the Appliance Actions page. From here you can configure the actions you want to take in the event of baseline failures. See Configuring Actions.
- Configure Options: Opens the Appliance Baseline Options page. From here you can configure automatic email messaging in the event of a baseline failure, and individual network services to maintain if network access is restricted. See Configuring Options.
- Cancel: Closes the Appliance Baseline page.
Appliance performance
The appliance generates statistics on aspects of its operation and reports them in the appliance performance pages. Appliance performance tracks performance statistics for engines, patterns, and hardware:
- Engine performance
- Pattern performance
- Hardware statistics
- Datastore cache performance
- DDD removal statistics
Engine performance
In BMC Atrium Discovery, each ECA engine provides information about how much work it is doing. This information is available when Reasoning is running at debug level, as it logs information about the events passing through the system and the rules that are being executed; however, this log is difficult for many people to understand. On the Appliance Performance page, the work that each ECA engine is doing is presented graphically. To view the appliance performance page:
1. Click the Administration tab from the primary navigation bar.
2. Click the Performance icon in the Appliance section. The Appliance Performance page displays the Engines tab.
The Engine Statistics chart shows the event queue size and the number of events processed for the last 24 hours. The left-hand margin represents midnight (00:00); the right-hand margin represents 23:59. You can use the drop-down selector to view the engine statistics chart for any of the last ten days.
The absolute values are useful for historical comparison, or comparison with other instances of BMC Atrium Discovery. They may be useful as supporting information when diagnosing problems.
Pattern performance
The Pattern performance page displays timing information on TPL pattern performance. TPL provides considerable power for discovering an environment; unfortunately, some patterns are inherently inefficient or perform poorly in specific customer environments. Determining the performance of individual patterns is otherwise extremely difficult, involving running Reasoning at debug level and then piecing together all the rules run for an individual pattern. Reasoning gathers performance information for each pattern, and this is presented graphically on the Appliance Performance page. This makes it easier for customers and BMC Customer Support to determine which patterns are performing badly. To view pattern performance information: 1. Click the Administration tab from the primary navigation bar. 2. Click the Performance icon in the Appliance section. The Appliance Performance page displays the Engines tab. 3. Click the Patterns tab. You can use the drop-down selector to view the pattern performance statistics chart for any of the last ten days. If dates are not available on the selector, no log has been created for those days. Invocation and timing information is displayed for each pattern and is described in the following table.
Pattern Name: The name of the pattern.
Invocations: The number of times that the pattern has been invoked in the reporting period.
Average Execution Time: The average execution time. This time excludes time spent waiting for discovery commands to complete. It is the sum of the time spent running searches in the datastore, updating the model, updating inference relationships, and any other work the pattern did, including the execution of the pattern code itself.
Max Execution Time: The maximum amount of time taken to execute the pattern, as defined above.
Min Execution Time: The minimum amount of time taken to execute the pattern, as defined above.
Average Discovery Time: The average time this pattern has spent running discovery commands.
Average Search Time: The average time this pattern has spent performing searches on the datastore.
Average Modeling Time: The average time this pattern has spent creating, updating, or deleting nodes in the model.
Average Inferencing Time: The average time this pattern has spent creating, updating, or deleting the relationships between DDD and inferred nodes.
TKU patterns
TKUs are shipped each month. If a TKU pattern is causing a performance spike, first check that you are using the most up-to-date TKU. If you are, contact Customer Support to report the issue; they will advise on the best course of action.
Where a condition could cause the pattern to stop early, it should be placed as early as possible in the pattern to avoid wasted processing.
Pattern doing too much work: Individual patterns should ideally perform one task, for example, creating a software instance. Examples of patterns doing too much work are:
Creating SIs and a BAI in a single pattern. Rather, you should break the pattern down so that each pattern creates one type of node, which triggers the next.
Loops containing discovery calls. You should try to make these calls once and then access the stored results as necessary from within the loops.
Creating an SI with multiple detail nodes. Try to simplify or reduce the amount of detail stored.
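The first two recommendations can be sketched in TPL. The fragment below is illustrative only: the trigger condition, command string, and attribute uses are invented for the example, and the function signatures are from memory, so check them against the TPL reference before reusing any of it.

```tpl
tpl 1.5 module Example.Performance;

pattern ExamplePerformanceSketch 1.0
  """Illustrative sketch: cheap stop conditions first, discovery calls outside loops."""
  overview
    tags example;
  end overview;
  triggers
    on process := DiscoveredProcess where cmd matches 'httpd';
  end triggers;
  body
    // Cheap test first: stop before any discovery commands are run
    if process.args has substring '-t' then
      stop;
    end if;

    host := model.host(process);

    // Run the discovery command once, outside any loop ...
    version_info := discovery.runCommand(host, 'httpd -v');

    // ... then reuse the stored result as often as needed,
    // rather than calling discovery.runCommand from inside the loop
    for line in text.split(version_info.result, '\n') do
      // process each line without further discovery calls
    end for;
  end body;
end pattern;
```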
Hardware statistics
To view the appliance performance page: 1. Click the Administration tab from the primary navigation bar. 2. Click the Performance icon in the Appliance section. The Appliance Performance page displays the Engines tab. 3. Click the Hardware tab. The following statistics are reported:
Daily SAR Statistics
Hourly SAR Statistics
Hourly Memory Usage Statistics
Hourly Average Traffic Statistics
Daily Disk Usage
These are described in the following sections. Under each chart there is a get as XML link. Click this link to download the chart as XML.
A datastore cache that is experiencing in the order of hundreds of thousands of misses per ten minutes may have adverse effects on the system as a whole. Examples of these are shown below:
This graph shows three discovery runs in which a datastore cache is performing well with cache hits between 75 and 95 million per ten minutes and cache misses below 45 thousand per ten minutes.
This graph shows a discovery run from midnight to 1am in which the datastore cache is experiencing just under 50 million cache hits per ten minutes, but with 375 thousand cache misses per ten minutes. The rate of cache hits is typical, though the rate at which cache misses are occurring is high and may be having adverse effects on the system as a whole. To determine whether datastore cache performance is having an adverse effect on the system, consider performing some structured testing to determine whether a change in datastore cache size may help. The cache size is changed using the Model Maintenance page. To view the Datastore cache performance page: 1. Click the Administration tab from the primary navigation bar. 2. Click the Performance icon in the Appliance section. The Appliance Performance page displays the Engines tab. 3. Click the Datastore tab.
With the continual DDD aging scheme in operation, if the trend of eligible DiscoveryAccesses is rising over a two week period, you may have contention between the DDD creation and removal processes. See example 1 in the diagram below. In this scenario, using DDD removal blackout windows to schedule removal between the gaps in your discovery schedule may give you better overall system performance. Once you have set up DDD removal blackout windows you should continue to monitor the DDD removal statistics page to ensure that removal is keeping up with creation. See examples 3 and 4 in the diagram below.
Click any listed object kind to display summary details of all the objects of that kind.
Only the Summary attributes of each object are listed. The number of matching items is displayed at the top of the list.
Column filtering
When you are viewing a result set you can filter individual columns to display different subsets of the full list. This enables you to easily refine the results from any result set for any displayed column. 1. Select a result set in BMC Atrium Discovery.
2. Click the Column Filter icon in any column. A screen shows up to ten of the most frequent values for the selected column. If more than ten values are present in the column, as in the following example, the remaining values are grouped into a filtered subset named Others.
3. Select which results you want to show and click that filtered link. The results are displayed for that filtered selection, as illustrated in the following screen.
This filtering method can be applied to any column in the BMC Atrium Discovery UI which has the column filter icon contained in the column header. You can also filter results sets which have already been filtered.
Column charting
When you are viewing a result set in BMC Atrium Discovery you can chart data directly from any column in that result set. This enables you to view a graphical representation of the data contained in any column, showing the frequency of the data items for that column. 1. Select a result set in BMC Atrium Discovery. 2. Click the Column Chart icon in any column. A screen displays, showing the list of available charts (currently pie, bar, and column charts).
3. Select the type of chart you want to show and click that chart link. The results are displayed for that chart (for example, the Pie Chart result for the Hardware Vendor column, as shown in the following illustration).
If the maximum number of data points is exceeded for any type of chart, smaller items are merged into an Others section in that chart. The limit for all three chart types is 20 data points. For a full description of the BMC Atrium Discovery reporting capabilities, see Managing reporting.
Customizing columns
The default list view for each node type shows pre-defined columns. You can customize these views using the Customize Columns tab of the Query Builder. For example, the default list view for hosts shows the Virtual column which may not be useful to you. The Customize Columns tab of the Query Builder contains the following sections:
Clicking an attribute, for example Discovered OS Class, adds a Host: Discovered OS Class entry to the column list. Clicking a relationship, for example Software Instance: Software Instances running on this host, displayed with a following right arrow, populates the next pane in the selection tool with the attributes and relationships of the destination node. Clicking a relationship in the second pane populates the third pane in the selection tool with the attributes and relationships of the destination node in the same way as before. Clicking a further relationship populates a fourth pane and scrolls the previous panes to the left, hiding the first pane. You can scroll back by clicking the arrow to the left of the selector panes.
Column list
The column list is simply a list of columns displayed in the order that they appear in the list view. In this view they can be reordered by dragging and dropping. An edit icon and a delete icon are also provided for each column. To see the results of your changes, click Refresh results. To cancel all changes, click Cancel.
To remove a column
To remove a column, click on the cross icon relating to that column. The column label is deleted immediately. Click Refresh results to remove the column.
To add a column
To add a column, click the one you wish to add using the three pane selector. You can drag the column to the required position in the list. The attribute name (the column label) is displayed as editable text so you can change it if required. Click Refresh results to add the column.
When this is filtered, the number of hosts reduces, though there are still multiple software instances per host.
Query Builder
The Query Builder enables you to select attributes on a node, and to group and evaluate conditions on those attributes or groups of attributes. You can follow relationships from the node and group and evaluate attributes on destination nodes in the same way. This enables you to create a list of nodes fulfilling those criteria. For example, you may create a list of hosts running web servers. Additionally, you can traverse from a node, using relationships, to other node types, enabling you to create lists of other nodes which are related to the original nodes. For example, starting from the list of web servers mentioned above, you may create a list of clusters containing web servers. The example below shows this. The attributes that you select are not limited to those in the taxonomy; those created from TKU or your own patterns can be used. To make it easier to find the correct attributes, relationships, and traversals, only the most commonly used are displayed initially. You can reveal all for a node by clicking the Show All link. To view the Query Builder, click the Customize control under the page title on a list view page. The Query Builder contains the following sections:
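The same "hosts running web servers" question can also be expressed directly in the search query language mentioned under advanced searching. This is a hedged sketch, not a definitive query: the traversal spec shown is the usual Host to Software Instance relationship, but the exact role, relationship, and attribute names depend on your taxonomy version.

```
search Host
where nodecount(traverse Host:HostedSoftware:RunningSoftware:SoftwareInstance
                where name matches 'Apache' or name matches 'IIS') > 0
show name, os
```

Queries like this can be entered on the Reports page, and give the same kind of result set that the Query Builder constructs interactively.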
Traversal selector
When you need to traverse to another node kind, click the Traverse to link. The traversal selector replaces the three pane selector. From this you can select the node type to traverse to, and select the relationship to use for the traversal. The example below shows a traversal from a host to a DiscoveryAccess node using a failed access relationship.
SI to SI traversals
The following relationships are among those available when specifying a traversal from Software Instance to Software Instance:
Software Instances observed to be communicating with this Software Instance
Server Software Instances that this Software Instance is communicating with (Client to Server Comms)
Client Software Instances that are communicating with this Software Instance (Server to Client Comms)
Peer Software Instances that are communicating with this Software Instance (Peer to Peer Comms)
The last three are "explicit", in other words built by discovery for later use. Examples of these are relationships created by patterns that parse application configuration files to find communicating hosts. The first relationship is a "pseudo-traversal" based on the communicatingSIs function, which derives communication from the output of commands such as netstat. While the pseudo-traversal may be useful where the explicit relationships do not exist, it should be used with caution due to its potential impact on performance. Note that the first relationship is only available in the Traverse to dialogue and not the Filter dialogue. You cannot filter by it because it is not a key expression.
Query viewer
The Query Viewer provides a map of the query that you are constructing. You can evaluate conditions on an attribute or group of attributes using the following conditions:
All: True when all of the conditions are true
Any: True when any of the conditions are true
None: True when none of the conditions are true
These conditions are selected using a drop-down selector in the container which holds the attribute or attributes of interest. For example, in the following illustration, two attributes have been selected from the first pane and are grouped with an All condition.
In this example, the Discovered OS Class is still being evaluated. A Software Instance has been added by scrolling down the list to Software Instance: Software Instances running on this host; when this is clicked, the second pane is populated with the attributes and relationships of Software Instances. The Name attribute has been added to the query viewer by scrolling down the second pane list and clicking Name.
Building a query
This section illustrates an example of how to use the Query Builder to determine which hosts in the organization are web servers. The query should find all Windows hosts running Apache or IIS and all UNIX hosts running Apache. To do this: 1. From a host list view, click the Customize control under the page title. 2. Click the Query Constructor tab. 3. From the first pane, select Discovered OS Class from the list of host attributes. The Discovered OS Class attribute is added to the Query Viewer. 4. Enter "Windows" in the text entry field and leave the condition drop-down on contains word.
In this example, you are initially looking for Windows hosts running IIS and need to create a nesting level where this first test can be performed. 5. Click the Add Condition icon in the Discovered OS Class row. A new container is added which will be used to supply the All, Any, or None condition to be evaluated. Before you add a Software Instance, you must ensure that it is added in the correct place. 6. Click in the area of the container you just added to set the focus on the container. When it is selected, it is highlighted in yellow. 7. Scroll down the first pane to Software Instance: Software Instances running on this host. Click this to populate the second pane with the attributes and relationships of Software Instances. From the second pane, select the Name attribute. 8. Type "IIS" in the text entry field and leave the condition drop-down on contains word. 9. Click the Apply button to display the results in the list view.
Now that you have a list of Windows hosts that are running IIS, you must find UNIX hosts running Apache. 10. With the root of the Query Viewer highlighted, select Discovered OS Class from the list of host attributes. The Discovered OS Class
attribute is added to the Query Viewer, outside the section of the query completed above. Enter "UNIX" in the text entry field and leave the condition drop-down on contains word. 11. Click the Add Condition icon in the Discovered OS Class row. A new container is added which will be used to supply the All, Any, or None condition to be evaluated. 12. Click the container to select it.
13. Scroll down the first pane to Software Instance: Software Instances running on this host. 14. Click the name to populate the second pane with the attributes and relationships of Software Instances. From the second pane, select the Name attribute. 15. Type "Apache" in the text entry field and leave the condition drop-down on contains word. 16. Change the root condition drop-down to Any. 17. Click Apply to display the results in the list view.
You know that some of the web servers are on hosts which are members of clusters. To list those clusters, you must traverse to the cluster nodes.
18. Click the Traverse to link. The Query Constructor is replaced by a Traverse via Relationship pane.
19. Click Cluster. The Traverse via Relationship pane is populated with the available relationships; in this case, the "Cluster of which this is a member" relationship.
20. Click Apply. The list of hosts is replaced with a list of clusters of which the hosts in the previous list are members, and the Traverse via Relationship panel is replaced with the Query Constructor pane. You can use the Query Constructor pane to filter the result set further.
21. Click Save to save the query.
Saved queries can be accessed from the Saved Queries button on the Reports tab. The list of saved queries is restricted to queries you have created; it does not show queries saved by other users. You can run a query again by clicking through to the query and selecting Run Query from the Actions menu.
Only those attributes and relationships of the object that have been set appear; any that are unknown are not displayed.
The data in BMC Atrium Discovery is read-only. BMC Software strongly recommends that you do not edit any nodes as your version will not be retained by BMC Atrium Discovery. However, if you need to enable the edit facility, you must contact your BMC Atrium Discovery Administrator.
Basic tasks
This section explains some of the standard tasks that you may want to perform using BMC Atrium Discovery, such as searching for information; creating, amending, and deleting data; associating files with data objects; and exporting data.
Searching for data
Destroying data
Comparing the history of nodes
Changing log levels at runtime
Viewing dependency visualizations
Managing attachments
Exporting data
Basic searching
A basic search enables you to search for keywords or text strings occurring anywhere in the BMC Atrium Discovery datastore.
Notes on searching
If your search returns too many matches, you can refine it by running a further search on the items. You can export the results of a search in CSV format. See Exporting data.
Advanced searching
In an advanced search, you can search a defined set of objects for keywords or text strings, and you can opt to include destroyed objects as well as restrict your search to your own objects. You can also match against a regular expression. If you are familiar with the Search Service and its query language, you can use the Reports page to enter a search query. To access the Reports page, click the Search Query link. For a detailed description of the Search Service and query language, see Using the Search and Reporting service. The following check boxes are provided in each advanced search section:
Applications: Application Instances, Groups
Infrastructure: Clusters, Collections, Coupling Facilities, Database Details, Details, Fibre Channel HBAs, File Systems, Files, Generic Elements, Host Containers, Hosts, Mainframes, MF Parts, Network Devices, Network Interfaces, Packages, Patches, Port Interfaces, Printers, Runtime Environments, Software Components, Software Instances, Storage, Storage Collections, Subnets
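For comparison, the Search Query route mentioned above takes a query in the search language rather than check boxes. A minimal sketch, with illustrative keyword and attribute names (see Using the Search and Reporting service for the full syntax):

```
search Host where name matches 'web' show name, os, vendor
```

This restricts the search to one kind of object (Host) and displays only the named attributes, which is often faster than a broad keyword search across many kinds.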
Discovery
Command Failures, Directory Listings, Discovered ATM Virtual Circuits, Discovered ATM Virtual Paths, Discovered Aggregated Ports Lists, Discovered Application Components, Discovered Bridges, Discovered Cards, Discovered Chassis Lists, Discovered Coupling Facilities, Discovered Database Detail Lists, Discovered Database Lists, Discovered Dependencies, Discovered Device Port Lists, Discovered Directory Entries, Discovered Disk Drives, Discovered File Systems, Discovered Frame Relay DLCI Lists, Discovered Frame Relay LMI Lists, Discovered HBAs, Discovered MFParts, Discovered MQ Details, Discovered Mainframes, Discovered Network Interfaces, Discovered Patches, Discovered Registry Entries, Discovered SNMP Table Rows, Discovered SNMP Values, Discovered Software, Discovered Storage Subsystem Lists, Discovered Sysplex Lists, Discovered Tape Drive Lists, Discovered Transaction Lists, Discovered Virtual Machines, Discovered WMI Query Results, Discovery Conditions, ECA Errors, FQDN Lists, HBA Info Lists, Host Infos, Integration Results, Listening Ports, Network Connections, Pattern Definitions, Functions, Pattern Modules, Patterns, Provider Accesses, SQL Result Rows, Service Lists, Virtual Machine Lists, Device Infos, Discovered ATM Virtual Circuit Lists, Discovered ATM Virtual Path Lists, Discovered Aggregated Ports, Discovered Application Component Lists, Discovered Bridge Lists, Discovered Card Lists, Discovered Chassis, Discovered Command Results, Discovered Coupling Facility Lists, Discovered Database Details, Discovered Databases, Discovered Dependency Lists, Discovered Device Ports, Discovered Disk Drive Lists, Discovered FQDNs, Discovered Files, Discovered Frame Relay DLCIs, Discovered Frame Relay LMIs, Discovered MFPart Lists, Discovered MQ Detail Lists, Discovered Mainframe Lists, Discovered Neighbours, Discovered Packages, Discovered Processes, Discovered Registry Values, Discovered SNMP Tables, Discovered Services, Discovered Software Lists, Discovered Storage Subsystems, Discovered Sysplexes, Discovered Tape Drives, Discovered Transactions, Discovered WMI Queries, Discovery Accesses, Discovery Runs, Exclude Ranges, File System Lists, Hardware Reference Data, Integration Points, Interface Lists, Network Connection Lists, Pattern Executions, Pattern Packages, Process Lists, Registry Listings, Script Failures, Session Results
Administration
Regular Expression Match: Interprets the input keyword as a regular expression and matches against it. (See Regular Expression Searching.)
5. Select the Include Destroyed check box if you want the search to include objects that have been destroyed. These are otherwise not included in the search.
6. Select the Show DQ check box if you need to view the Data Quality for all items. The default setting allows for faster searching when large numbers of objects are involved, as the system does not need to calculate data quality values.
7. Select check boxes to indicate the kinds of object in each module (Applications, Infrastructure, Discovery, and Administration) that you want to search. You can select any number. In each module section you can click All to choose all kinds in the module or None to deselect all kinds of objects in the module.
8. Click Search. If objects of one kind are found, the matching ones are listed. Note that the list shows Summary attributes only, and it is possible that the text you searched for does not appear in these attributes. If objects of more than one kind are found, a page is displayed listing the number of objects of each kind found. Click an entry in this list to display the list of objects of this kind. If the object that you were searching for was not found, click Search Again to return to the Advanced Search screen.
9. To display the View Object page of the objects found, click the object name.
Notes on searching
The returned list displays summary attributes of each object only. Click an item in the list to access the View Object page which displays all of an object's relationships and attributes. If your search returns too many matches, you can refine it by running a further search on the items. If the matches include destroyed items, an Include destroyed items check box is shown. Enter an additional keyword and a matching type and click Refine Search. You can export the results of a search in CSV format. See Exporting data.
Destroying data
Although it is possible to remove an object from the user interface, you should note that the object is actually retained in the datastore. Details can still be displayed for audit purposes and you can choose to include destroyed objects in searches.
In general you should not need to destroy data in production. When testing you may need to destroy data, for example when developing patterns.
To destroy an object
1. Display the View Object page of the relevant object by clicking a highlighted object on any List page or any Report page. All of the object's current attributes and relationships are listed. 2. From the Actions drop-down menu, select Destroy. The message "Are you sure you want to destroy this node?" is displayed. 3. Click OK to destroy the object. 4. The View Object page is redisplayed, with "This node has been destroyed" at the top of the page. You can select History from the Actions menu to display details of the changes made to the object. The object does not appear in subsequent list pages.
2. From the Actions drop down, select History. 3. The Historical Comparison page is displayed for the node you have selected, as shown in the following screen.
4. The default view for the Historical Comparison page initially shows only the attributes that have been changed on this node since it was first created.
5. Click the Show All Attributes link if you want to display all of the attributes for this particular node. 6. Click the Show Differences Only link to return to the default view.
last time that something was amended for this node, that is, version 5 of 6 for this example above. Right Column: Displays a particular version of the node you have selected for historical comparison. The default view for this column shows the latest version of this node, that is, version 6 of 6 for this example above. The default view, in the first header row of the right column, shows the most recent state of the node on 25/01/2009 at 14:15 and that it is the sixth version of this node (6 of 6). The fact that version 6 of this node is being displayed in the right column is also indicated in the second header row where the number 6 is not underlined. The previous change (5 of 6) was made to this node on 24/01/2009 at 14:15.
The second header row enables you to independently change the versions of the node to display different historical comparisons. For example, you may want to compare version 2 (2 of 6) with version 3 (3 of 6) to identify exactly what changed during that period of time. The blue direction arrows in this row enable you to view earlier or later versions of this node and are configured as follows: Arrows pointing left: Click once to display the next earlier version. Arrows pointing right: Click once to display the next later version. Alternatively, you can click the version number directly in each column. The historical comparison between the two versions that you select is displayed automatically. Three types of information changes are color-coded and recorded on this page:
Green: The attribute is unique to the node version in the left column. Yellow: The attribute differs across node versions selected. Red: The attribute is unique to the node version in the right column. Attributes for different nodes in BMC Atrium Discovery may vary, however, the first attribute row is Change Made By and is always present. This identifies the user that made the change. Remaining attributes for each node are listed in Taxonomy defined order. You can also view a separate list of data for certain attributes which shows unique and common items. For this example you can click on the links for each version for Patches, Packages and Network Interfaces to view this information. For example, you can click the link in either node version column for the Patches attribute.
This example shows that version (6 of 6) has zero items which are unique to that version and 463 items which it has in common with version (5 of 6). Click the link in either node version column again to return to the Communication Links attribute summary view.
The view changes to one where each individual change is output per line. To change back to the compressed view, click the Switch to comparative view link.
You can change which type of view you would like to have appear by default by changing your Application Preferences.
1. Display the View Object page of the relevant host. The BMC Atrium Discovery Host page is displayed. 2. From the Actions menu, select Compare To. 3. The Search for Hosts to Compare page is displayed for the node you initially selected.
You can also change the details of this first host to search for another host.
1. Click the Change link.
2. In the Search for the first Host to Compare pane, type a keyword for the name or the IP address of the first host you want to compare.
3. Select the type of Match from the drop-down menu and click Search. A list of results is displayed.
4. Select a host to compare.
To return to the default details for the first host at any time, click Cancel.
1. In the Search for the second Host to Compare pane, type a keyword for the name or the IP address of the second host you want to compare. 2. Select the type of Match from the drop-down menu and click Search. A list of results is displayed. 3. Select a second host to compare against the first host and click on the name of the host. The following screen illustrates what is displayed.
4. After you have selected the two hosts that you want to compare, select Compare from the Actions menu.
The width of each displayed column is dependent on the data content of each host.
The node to node comparison page is divided into the following columns:
Attribute Column: The attributes of this node.
Center Column: Displays a version of the first node you have selected for comparison.
Right Column: Displays a version of the second node you have selected for comparison.
The default view, in the first header row of the left and right columns, displays the most recent state of each separate node. The second header row enables you to independently change the versions of each separate node to display different historical comparisons. For example, you might want to compare version 6 of monty with version 3 of sunfirev240. The blue direction arrows in this row enable you to view earlier or later versions of each node and are configured as follows: Arrows pointing left: Click once to display the next earlier version. Arrows pointing right: Click once to display the next later version. Alternatively, you can click the version number directly in each column. The historical comparison between the two versions that you select is displayed automatically. One type of information change is color-coded in yellow and recorded on the page. This indicates that information differs across the selected nodes, attributes, or attribute items. You can view a separate list of data for certain attributes in each node which shows unique and common items particular to that node. For this example you can click on the links to show detailed information for each separate node for the following attributes:
Software Instances Configuration Files Host IP Address List Patches Subnets Communication Links Disk Info For example, you can click the link in either node column for the Patches attribute.
This example illustrates that node monty has 336 patches that are unique and 105 patches that it has in common with node sunfirev240. It also shows that node sunfirev240 has 458 patches which are unique. For any of the listed patches on either node, you can click the radio button next to it to select that particular patch. This facility enables you to compare individual patches (or other attributes) across the two nodes that you have selected for comparison. To compare individual attributes:
1. Click the radio button next to an individual patch for each of the two nodes.
2. Click Compare.
3. Click Compare again to return to the previous page.
4. Click the link to return to the summary view.
To change the nodes that you are comparing at any time, click the Change Comparison link at the bottom of the page. Using this facility, you can compare any like-nodes in BMC Atrium Discovery with each other, their individual attributes and individual items in selected attributes over any period of time.
The log levels are cumulative in the order they are displayed in the drop-down. Therefore the Debug log level includes everything, including debug messages; Info includes everything up to and including Info messages, and so forth.
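This cumulative behavior can be illustrated with Python's standard logging module, whose levels work the same way (a hypothetical illustration, not part of the product):

```python
import logging
from io import StringIO

# A handler set to Info admits Info messages plus everything more severe,
# but filters out Debug -- exactly the cumulative behavior described above.
stream = StringIO()
logger = logging.getLogger("cumulative-demo")
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler(stream)
handler.setLevel(logging.INFO)  # Info level: everything up to and including Info
logger.addHandler(handler)

logger.debug("debug message")      # filtered out at Info level
logger.info("info message")        # shown
logger.warning("warning message")  # shown: Warning is above Info

print(stream.getvalue().splitlines())  # ['info message', 'warning message']
```

Selecting Debug in the product is the equivalent of lowering the handler threshold so that nothing is filtered.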
If you change the log levels here, they are persisted; that is, the next time you restart the appliance, the log level settings that you selected remain in effect. When you change one or more of the log levels from the drop-down menu against any setting, click the Apply button. A message is displayed informing you that the change you have made has been successful. This message continues to be displayed until you select another page in BMC Atrium Discovery and return to the Logging window. The Apply button is not enabled until you have changed one or more of the log levels.
If you select Debug for any setting a pop-up warning is displayed asking you to confirm your selection. This is because the Debug log level can impact the performance of the system.
2. Click the dependency visualization type you want to view (for example, the Infrastructure visualization).
There are many different types of visualization available for viewing each object. If you click any of the types here, a pictorial representation of the object dependencies is displayed. See Types of Visualization for more details. It is also possible to create custom made Dependency Visualizations on your system using a configuration file. See Configuring dependency visualizations for more information.
Visualizations and Navigation When viewing dependency visualizations you should avoid using the Forward and Back buttons on your browser. Doing so may cause problems with the view (for example, the zoom function may be erratic, and navigation buttons in the display applet may not work).
Multiple nodes
When viewing a result set containing multiple nodes, a visualization is displayed with multiple root nodes; one for each node in the result set. Opening a visualization on many nodes at once may be slow; for this reason you are asked for confirmation if you attempt to do this on more than 20 nodes.
Layout controls
The layout controls provide the ability to display the dependency visualization in four layout styles: Circular, Hierarchical, Orthogonal, and Symmetric. The layout shown in the example at the top of this page is the Hierarchical layout. Nodes can be rearranged by clicking on them and dragging them to a new location. Multiple nodes can be selected and dragged together. The icon on the far right of the layout row, separated by a |, is the Incremental Layout tool. This re-displays the visualization to fit on screen, respecting where you have rearranged your nodes. Each visualization also resizes if the window is resized.
Select Landscape when Printing Whichever page size you select, you must select the 'Landscape' option from the print dialog.
The following method can be used to export your visualization: 1. Click the Print/Export button and a new screen is displayed. 2. Right-click the image and choose the Save Picture As link.
Visualization size limit There is a visualization size limit of 5631x4439 pixels. Larger visualizations could consume excessive system resources and cause instability.
The node that is the root of the visualization, that is, the one from which the visualization was launched, or on which it was re-centered, is highlighted with an orange box. For an example of a multiple root node visualization, see Multiple Root Node Visualizations.
Types of Visualization
Dependency Visualizations are viewed from a dynamic list of links depending on the node kind being viewed. There are a number of different default types of dependency visualization available for viewing different objects. If you click any of the types listed below, a diagrammatic representation of the object dependencies is displayed. The presence of links is determined by the node kind being viewed. For example, Clustering visualizations are available to view on cluster nodes, host nodes and software instance nodes. The following links are provided in the Visualization drop-down dialog: Application Dependencies - Application View Application Dependencies - Software View Application Functional Component Automatic Grouping Detail Clustering Database Detail Host Containment Host Containment with Software Inferred Software Communication Inferred Software Dependency Infrastructure Observed Communication - Host View Observed Communication - Process View Observed Communication - Software View Software Running on Host Virtualization Provenance Software Structure
Clustering
Displays Clusters and the software services that manage them. Clustering Visualizations are available on the following nodes: Software Instances Hosts Clusters
Database Detail
Displays a database's schemas and tables. This assumes that the SI is a database, the extended database patterns are loaded, and BMC Atrium Discovery had credentials to log in to the database. These visualizations are available on the following nodes: SoftwareInstance DatabaseDetail The example below shows this type of visualization.
Host Containment
Displays Host containers and their contents. This type of dependency visualization is specifically targeted at those host container nodes which are used to represent Sun E10K and E15K machines as well as Solaris Zones. Host Containment Visualizations are available on the following nodes: Hosts Host Containers
Host Containment with Software

Like Host Containment, this type of dependency visualization is specifically targeted at those host container nodes which are used to represent Sun E10K and E15K machines as well as Solaris Zones. Host Containment with Software Visualizations are available on the following nodes: Hosts Host Containers
Infrastructure
Displays the relationships between hosts, switches, and subnets. Infrastructure Visualizations are available on the following nodes: Hosts Switches Subnets
Virtualization
This type of visualization is designed to display virtual machines, for example, VMware. Virtualization Visualizations are available on the following nodes: Software Instances Hosts
Provenance
This type of dependency visualization shows you the provenance, that is, the calculated origin of the data. Provenance dependency visualizations are available for all discovered data as well as for SIs, BAIs and patterns. Provenance Visualizations are available on the following nodes and discovered data: Software Instances Business Application Instances Patterns Packages Discovered Processes Discovered Network Connections Discovered Listening Ports Discovered Packages Discovered Files Discovered Command Results Discovered Registry Values Discovered WMIs
Software Structure
Displays the software structure, in terms of how BAIs and higher order SIs are composed. Software Structure Visualizations are available on the following nodes: Business Application Instances including Software Components, and BAIs on MFParts Software Instances Software Component
Managing attachments
You can associate a file such as a document, a spreadsheet, or a diagram with certain node types in the BMC Atrium Discovery datastore in order to provide additional information. One or more files can be attached to a single object, and copies of a single file can be attached to multiple objects. Files can be of any type and are not validated in any way by the BMC Atrium Discovery system; BMC Atrium Discovery interprets the file type based on the file extension, so make sure that attachment file extensions are valid. You can view and update a file at any time, although to open and view attached files you must have the appropriate applications. The files that you attach are stored as data objects in the BMC Atrium Discovery datastore, so that they can be accessed easily for disaster recovery purposes. Attachments are assigned to a particular category; no attachment categories are initially defined.
In the following versions of BMC Atrium Discovery, node attachment is enabled by default: Versions earlier than 9.0 SP1 Upgraded 9.0 SP1 or later versions (where the earlier version had files attached to nodes prior to the upgrade) In the following versions of BMC Atrium Discovery, node attachment is disabled by default: Newly installed 9.0 SP1 or later versions Upgraded 9.0 SP1 or later versions (where the earlier version did not have files attached to nodes prior to the upgrade) Based on the version of BMC Atrium Discovery you are using, attaching files to nodes involves the following steps in the given order: 1. Enabling node attachment: You must enable node attachment only if you are on an appliance where node attachment is disabled. To enable node attachment, you must be a user with system group permissions. 2. Creating attachment categories: You must create attachment categories from the UI prior to attaching files to nodes. To create attachment categories, you must be a user with admin or system group permissions. 3. Attaching files to objects: After creating the attachment categories, you can attach files to nodes from the UI. To attach files to nodes, you must be a user with admin or system group permissions.
Use of the tw_options command is restricted to managing node attachment only. Do not change any other tw_options settings, as doing so may result in unexpected BMC Atrium Discovery behavior.
3. When prompted, enter the system user password. If node attachment is disabled, the following is displayed:
NODE_ATTACHMENTS_ENABLED = False
5. When prompted, enter the system user password. 6. Exit from the command line prompt. Node attachment has been enabled on the appliance.
3. When prompted, enter the system user password. If node attachment is enabled, the following is displayed:
NODE_ATTACHMENTS_ENABLED = True
4. Run the following command:
tw_options NODE_ATTACHMENTS_ENABLED=False
5. When prompted, enter the system user password. 6. Exit from the command line prompt. Node attachment has been disabled on the appliance.
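When automating the check in steps above, a script can read the NODE_ATTACHMENTS_ENABLED line printed by tw_options. The following sketch parses that output format; the helper function is hypothetical, not part of the product:

```python
def node_attachments_enabled(tw_options_output: str) -> bool:
    """Parse the 'NODE_ATTACHMENTS_ENABLED = True/False' line from tw_options.

    Hypothetical helper; the output format is taken from the examples above.
    """
    for line in tw_options_output.splitlines():
        if line.strip().startswith("NODE_ATTACHMENTS_ENABLED"):
            _, _, value = line.partition("=")
            return value.strip() == "True"
    raise ValueError("NODE_ATTACHMENTS_ENABLED not found in tw_options output")

print(node_attachments_enabled("NODE_ATTACHMENTS_ENABLED = False"))  # False
print(node_attachments_enabled("NODE_ATTACHMENTS_ENABLED = True"))   # True
```

A wrapper like this could decide whether the enable step is needed before attempting to create attachment categories.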
Exporting data
You can save the output from searches or reports in CSV format from the BMC Atrium Discovery user interface (UI). This option is not available in the BMC Atrium Discovery Community Edition.
BMC Atrium Discovery Export enables you to publish data from BMC Atrium Discovery directly into relational databases and into all systems that can accept CSV files. BMC Atrium Discovery data is exported by creating an exporter, which is a combination of a mapping set with an adapter configuration. These concepts are explained in Introduction to BMC Atrium Discovery Export.
Export API
BMC Atrium Discovery is provided with an API that enables you to create custom scripts to interrogate the datastore. The export API can export data as XML, as a stream of comma-separated text, or as an empty string. See Export APIs for more information about importing and exporting data.
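A downstream script can consume the comma-separated output with standard tooling. This is a hypothetical sketch; the column names are invented for illustration and are not a real export schema:

```python
import csv
from io import StringIO

# Hypothetical CSV export of host data; column names are illustrative only.
exported = StringIO(
    "name,os,ip_addr\n"
    "monty,Solaris 10,192.168.1.100\n"
    "sunfirev240,Solaris 9,192.168.1.101\n"
)

# csv.DictReader maps each data row to a dict keyed by the header row.
rows = list(csv.DictReader(exported))
print(len(rows))        # 2
print(rows[0]["name"])  # monty
```

The same approach works whether the CSV comes from the UI export or from an exporter publishing to a file.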
Running discovery
This section describes how you run and monitor BMC Atrium Discovery. Viewing discovery status Stalled discovery runs Starting and stopping discovery Scanning IP addresses or ranges Excluding IP addresses and ranges from scanning Edge connectivity
By default, the discovery process is off. To start it, you must click START ALL SCANS.
The default view of the Discovery Status page, with a snapshot scan in progress, is shown in the following screen.
Consolidation
An appliance may be configured to be either a Consolidation appliance or a Scanning appliance. Consolidation refers to the centralization of discovery data from scheduled or snapshot scans on multiple Scanning appliances to one or more Consolidation appliances. For a detailed description about consolidation and how to configure it, see Consolidation.
The Currently Processing Runs tab contains the following fields:
Label: The descriptive label that is applied to the run. This is the label applied by the user who created the run.
Start Time: The scan start time. This is the time that the discovery run was requested rather than the actual start time. In practice these are almost identical; the only time that a large discrepancy is seen is when the discovery run is requested at the end of, or outside, a scan window.
Range: The IP range that is being scanned in this discovery run. For a large list of IP addresses this shows the first two IP addresses and the number of other IP addresses in the queue.
# of IPs: The total number of IP addresses being scanned in this discovery run.
Type: The type of scan. This is either: Snapshot or Scheduled.
Level: The level for the discovery run. This is one of: Sweep Scan: Performs a sweep scan, trying to determine what is at each endpoint in the scan range. It will attempt to login to a device to determine the device type. Full Discovery: Retrieves all the default info for hosts, and complete full inference.
Progress: A progress indicator showing how much of the scan has been completed and how many endpoints are in progress.
User: The user name of the user who created the scan.
If a discovery run is in progress but has been stopped because the scan window in which it can take place has ended, it is On Hold. On hold scans are shown in the Currently Processing Runs tab and marked as such. You can select a discovery run and cancel it using Cancel. The discovery run will stop though individual commands in progress will run to completion. Adding new discovery runs is described in Scanning IP addresses or ranges.
Recent runs
The Recent Runs tab shows the 10 most recently completed discovery runs, and contains the following fields:
Label: The descriptive label that is applied to the run. This is the label applied by the user who created the run.
Range: The IP range that was scanned in this discovery run.
# of IPs: The total number of IP addresses scanned in this discovery run.
Timing: The scan start time, finish time, and elapsed time. The start time is the time that the discovery run was requested rather than the actual start time. In practice these are almost identical; the only time that a large discrepancy is seen is when the discovery run is requested at the end of, or outside, a scan window.
Complete: States whether the scan was completed or cancelled by the user.
Type: The type of scan. This is either: Snapshot or Scheduled.
Level: The level for the discovery run. This is either: Sweep Scan: Performs a sweep scan, trying to determine what is at each endpoint in the scan range. It will attempt to login to a device to determine the device type. Full Discovery: Retrieves all the default info for hosts, and complete full inference.
User: The name of the user who created the scan.
Actions: A Rescan Now button is provided in this column which enables you to rescan the discovery run. If you rescan a scheduled scan, it is scanned immediately as a snapshot scan with the same label.
Scheduled runs
Copyright 2003-2013, BMC Software
The Scheduled Runs tab shows up to 10 scheduled discovery runs, and contains the following fields:
Label: The descriptive label that is applied to the run. This is the label applied by the user who created the run.
IP Range: The IP range to be scanned in this discovery run.
Level: The level for the discovery run. This is either: Sweep Scan: Performs a sweep scan, trying to determine what is at each endpoint in the scan range. It will attempt to login to a device to determine the device type. Full Discovery: Retrieves all the default info for hosts, and complete full inference.
Schedule: Describes the schedule for this discovery run. The days on which the discovery run takes place, the time it starts, and the period in which the scan is permitted to run.
Start Time: The scan start time.
User: The name of the user who created the run.
Created: The date that the scheduled run was created.
Exclude ranges
The Exclude Ranges tab shows the ranges which have been specified as not to be scanned, and contains the following fields:
Label: The descriptive label that is applied to the exclude range.
Range: The IP range that is not to be scanned in any discovery run.
Description: A description for the exclude range.
User: The user who created the exclude range.
Restart or continue
At the next scheduled scan window, any on hold or blocked runs will continue or restart, depending on whether all endpoints have been started or not. The behavior change introduced in BMC Atrium Discovery 8.2.1 is described below: Scan fails to complete in scan window and all endpoints have not been started
Pre-8.2.1: scan continues at next window.
8.2.1 and later: scan continues at next window.
Scan fails to complete in scan window and all endpoints have been started:
Pre-8.2.1: scan continues at next window.
8.2.1 and later: scan restarts at next window.
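The version-dependent behavior above can be sketched as a small decision function (illustrative only; the function name and version-tuple representation are assumptions):

```python
def scan_window_action(version: tuple, all_endpoints_started: bool) -> str:
    """Decide what happens at the next scan window when a scan failed to complete.

    Encodes the behavior described above: only 8.2.1 and later restarts,
    and only when all endpoints have been started. version is e.g. (8, 2, 1).
    """
    if version >= (8, 2, 1) and all_endpoints_started:
        return "restart"
    return "continue"

print(scan_window_action((8, 2, 0), all_endpoints_started=True))   # continue (pre-8.2.1)
print(scan_window_action((8, 2, 1), all_endpoints_started=False))  # continue
print(scan_window_action((8, 2, 1), all_endpoints_started=True))   # restart
```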
Blocked runs
Click the (blocked) notice. A dialog box is displayed showing why the discovery run is blocked. The blocked endpoint is shown and the reason for it being blocked is given. In the preceding example, the blocked endpoint is 137.72.94.27 and it is blocked because the DiscoveryRuncommand.DiscoveryRuncommand pattern is attempting to access another endpoint (137.72.94.219) which is currently outside a scan window.
On hold runs
Click anywhere in the row of the on hold run to display the Discovery Run page for that run. To check the scanning window: 1. From the Discovery Status page click the Scheduled Runs tab. 2. The timing information for each scheduled run is shown in a table. If you want to edit the discovery run, click its entry in the table and the Edit an Existing Run dialog box is displayed.
Note Reasoning runs the same status check automatically every 15 minutes and outputs the results in the tw_svc_reasoning.log file.
To use the utility, type the following command at the $TIDEWAY/bin/ directory:
tw_reasoningstatus [options]
where options are either the commands described in the following table or the standard, inherited options detailed in Using command line utilities.
--waiting, -w: Lists information for all endpoints which are on hold waiting for information from the discovery of a different endpoint.
--waiting-full: Expands the information provided by --waiting to include information on all endpoints being held waiting for discovery. This option is ignored if --waiting is not specified.
-u, --username=NAME: Specifies the name of the BMC Atrium Discovery user. If no name is specified, BMC Atrium Discovery uses the default, system.
User example
In the following example, you can view the status of the reasoning service.
If you do not provide a password, you are prompted for one. After providing a password, a status is displayed that includes information about engine status, pool state, queue length, and so forth. The output is saved in the tw_svc_reasoning.log file.
To start discovery
To start discovery, click START ALL SCANS. The screen is automatically refreshed to show the status of the discovery process.
To stop discovery
To stop discovery, click STOP ALL SCANS. No more runs are started and discovery attempts to stop all commands that are running. After the run has stopped, no more runs are performed until you start discovery.
2. Enter the information for the snapshot discovery run in the fields. Field Name Details
PDF generated from online docs at http://discovery.bmc.com BMC Atrium Discovery 9.0
Range
Enter IP address information in one of the following formats: IPv4 address (for example 192.168.1.100). Labelled v4. IPv6 address (for example 2001:500:100:1187:203:baff:fe44:91a0). Labelled v6. IPv4 range (for example 192.168.1.100-105, 192.168.1.100/24, or 192.168.1.*). Labelled v4. Scanning the following address types is not supported: IPv6 link local addresses (prefix fe80::/64) IPv6 multicast addresses (prefix ff00::/8) IPv6 network prefix (for example fda8:7554:2721:a8b3::/64) IPv4 multicast addresses (224.0.0.0 to 239.255.255.255)
As you enter text, the user interface (UI) divides it into pills, discrete editable units, when you enter a space or a comma. According to the text entered, the pill is formatted to represent one of the previous types or presented as invalid. Click here for more information on using the pill UI. Invalid pills are labelled with a question mark. You can also paste a list of IP addresses or ranges into this field. There is no paste option on the context sensitive (right click) menu. You cannot paste a comma-separated list of IP address information into the Range field in Firefox. This may crash the browser. You can use a space separated list without any problems.
To edit a pill, click the pill body and edit the text. To delete a pill, click the X icon to the right of the pill, or click to edit and delete all of the text. To view the unformatted source text, click the source toggle switch. The source view is useful for copying to a text editor or spreadsheet. Click the source toggle switch again to see the formatted pill view. You can also sort the pill view by value or type. Value sorts by ascending numerical value. Type sorts by type, placing invalid pills first. Underneath the entry field is a filter box. Enter text in the filter box to only show matching pills.
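The accepted Range formats described above can be sketched with a small classifier using Python's standard ipaddress module. This is an illustrative approximation of how entries could be labelled, not product code:

```python
import ipaddress
import re

def classify_pill(text: str) -> str:
    """Classify a Range entry the way the pill UI labels them (illustrative sketch)."""
    try:
        addr = ipaddress.ip_address(text)
        return "v4" if addr.version == 4 else "v6"
    except ValueError:
        pass
    # IPv4 range forms: 192.168.1.100-105, 192.168.1.100/24, 192.168.1.*
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}-\d{1,3}", text):
        return "v4 range"
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}/\d{1,2}", text):
        return "v4 range"
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){2}\.\*", text):
        return "v4 range"
    return "invalid"

print(classify_pill("192.168.1.100"))      # v4
print(classify_pill("2001:500:100:1187:203:baff:fe44:91a0"))  # v6
print(classify_pill("192.168.1.100-105"))  # v4 range
print(classify_pill("192.168.1.*"))        # v4 range
print(classify_pill("not-an-ip"))          # invalid
```

Entries that match none of the forms would be presented as invalid pills, labelled with a question mark.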
Pills are not currently supported in Opera.
Level: Select the level for the discovery run. This is one of: Sweep Scan: This will do a sweep scan, trying to determine what is at each endpoint in the scan range. It will attempt to login to a device to determine the device type. Full Discovery: This will retrieve all the default info for hosts, and complete full inference.
Label: Enter a label for the discovery run. Where the discovery run is referred to in the UI, it is this label that is shown.
Company: Select the company name to assign to the discovery run. This drop-down list is only displayed if multitenancy has been set up. If (No company) is displayed, or a company name that you were expecting is missing, refresh the list by clicking Lookup Companies on the CMDB Sync. See multitenancy for more information about this feature.
3. Click OK. The Currently Processing Runs tab is displayed with the new discovery run.
Range
Enter IP address information in one of the following formats: IPv4 address (for example 192.168.1.100). Labelled v4. IPv6 address (for example 2001:500:100:1187:203:baff:fe44:91a0). Labelled v6. IPv4 range (for example 192.168.1.100-105, 192.168.1.100/24, or 192.168.1.*). Labelled v4. As you enter text, the UI divides it into pills, discrete editable units, when you enter a space or a comma. According to the text entered, the pill is formatted to represent one of the previous types or presented as invalid. Invalid pills are labelled with a question mark. You can also paste a list of IP addresses or ranges into this field. There is no paste option on the context sensitive (right click) menu. You cannot paste a comma-separated list of IP address information into the Range field in Firefox. This may crash the browser. You can use a space separated list without any problems.
Scanning the following address types is not supported: IPv6 link local addresses (prefix fe80::/64) IPv6 multicast addresses (prefix ff00::/8) IPv6 network prefix (for example fda8:7554:2721:a8b3::/64) IPv4 multicast addresses (224.0.0.0 to 239.255.255.255)
To edit a pill, click the pill body and edit the text. To delete a pill, click the X icon to the right of the pill, or click to edit and delete all of the text. To view the unformatted source text, click the source toggle switch. The source view is useful for copying to a text editor or spreadsheet. Click the source toggle switch again to see the formatted pill view. You can also sort the pill view by value or type. Value sorts by ascending numerical value. Type sorts by type, placing invalid pills first. Underneath the Range field is a filter box. Enter text in the filter box to only show matching pills. Pills are not currently supported in Opera.
Level: Select the level for the discovery run. This is either: Sweep Scan: Performs a sweep scan, trying to determine what is at each endpoint in the scan range. It will attempt to login to a device to determine the device type. Full Discovery: Retrieves all the default info for hosts, and complete full inference.
Label: Enter a label for the discovery run. Where the discovery run is referred to in the UI, it is this label that is shown.
Company: Select the company name to assign to the discovery run. This drop-down list is only displayed if multitenancy has been set up. If (No company) is displayed, or a company name that you were expecting is missing, refresh the list by clicking Lookup Companies on the CMDB Sync. See multitenancy for more information about this feature.
Frequency: Select a frequency for the discovery run to be performed. For example, this can be Daily, Weekly, or Monthly. For a weekly discovery run, you are provided with buttons for each day. Select the day or days that you want the run to take place. For a monthly discovery run, you are provided with buttons for each day in the month. Select the day or days that you want the run to take place. Alternatively, select the Scan on radio button and choose one of First, Second, Third, Fourth, or Last, and the day that you want the scan to take place. In this way you can select the Second Tuesday of the month and so forth.
Start Time: Select a time for the scan to start. Discovery must be running at this time.
Duration: Select a duration in hours. This is the length of the scan window and can be from 1 to 23 hours. If this duration expires before the scan has completed, then the scan is suspended until the next scheduled time that the scan occurs. The scan resumes from the point where it was previously suspended.
4. Click OK.
The Scheduled Runs tab is displayed with the new scheduled discovery run. To add another scan to the page, click Add New Run. To delete any existing scheduled scans, select the entry and click Delete.
Click here for more information on using the pill UI. Invalid pills are labelled with a question mark. You can also paste a list of IP addresses or ranges into this field. There is no paste option on the context sensitive (right click) menu. You cannot paste a comma-separated list of IP address information into the Range field in Firefox. This may crash the browser. You can use a space separated list without any problems.
To edit a pill, click the pill body and edit the text. To delete a pill, click the X icon to the right of the pill, or click to edit and delete all of the text. To view the unformatted source text, click the source toggle switch. The source view is useful for copying to a text editor or spreadsheet. Click the source toggle switch again to see the formatted pill view. You can also sort the pill view by value or type. Value sorts by ascending numerical value. Type sorts by type, placing invalid pills first. Underneath the entry field is a filter box. Enter text in the filter box to only show matching pills. Pills are not currently supported in Opera.
Label: Enter a label for the discovery run. Where the discovery run is referred to in the UI, it is this label that is shown.
Description: A free text description of the exclude range.
3. Click OK. The Excluded Ranges tab is displayed with the new exclusion.
Edge connectivity
Edge connectivity refers to the manner in which hosts are connected to edge switches. In BMC Atrium Discovery version 8.3, Edge Connectivity discovery replaced network topology in version 8.2. Edge Connectivity is not a like-for-like replacement, because it does not attempt to calculate overall network topology. Instead, it determines which hosts are connected to which edge switches. Network topology mapping in version 8.2 required an additional topology run to calculate the topology of a LAN by processing existing Network Devices and calculating switch-to-switch and switch-to-host relationships. Edge connectivity is calculated during the course of a normal discovery run; the separate topology run no longer exists. For information on the previous version of network topology, see the BMC Atrium Discovery version 8.2 documentation. Edge connectivity discovery occurs when a discovery run takes place. When a switch is discovered, it is queried for port information. An SNMP request (the getNetworkConnectionList method) gathers information on all ports found on the switch, as well as a list of neighbor MAC addresses found for each port. When a host is discovered, its MAC address or addresses are determined. Reasoning uses the MAC addresses to determine whether there is a connection between that host and any previously scanned switches and, if so, creates a new connection. If a connection already exists, then it is confirmed. If no connection is found, any existing connection is deleted. In summary: when a host is discovered, an old connection may be deleted, and a connection to a switch may be made or confirmed; when a switch is discovered, an old connection may be deleted, but no new connections are made.
Connections are only made when a host is discovered
Connections are only made when a host is discovered, and when the switch that it is connected to has already been scanned. Both ends of the connection are required before the connection can be confirmed. Consequently, connection information may not be complete until two scans have taken place.
The provenance of a connection is the provenance of the host or the switch, whichever was discovered last.
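The connection rules described above can be sketched as follows; all names and data structures are illustrative assumptions, not the product's internal model:

```python
def update_connections(host_macs, known_switch_ports, existing):
    """Sketch of the edge-connectivity rule applied when a host is discovered.

    host_macs: MAC addresses of the host just discovered.
    known_switch_ports: {mac: switch} learned from previously scanned switches.
    existing: set of (host_mac, switch) connections already recorded.
    Returns the updated connection set: connections are created or confirmed
    where a MAC matches a scanned switch, and stale ones are deleted.
    """
    updated = set()
    for mac in host_macs:
        switch = known_switch_ports.get(mac)
        if switch is not None:
            updated.add((mac, switch))  # create new connection, or confirm existing
    # Existing connections for the discovered host's MACs that no longer
    # match a scanned switch port are dropped; other hosts are untouched.
    kept = {conn for conn in existing if conn[0] not in host_macs}
    return kept | updated

ports = {"aa:bb": "switch1"}                      # switch1 was scanned earlier
old = {("aa:bb", "switch0"), ("cc:dd", "switch2")}
print(sorted(update_connections(["aa:bb"], ports, old)))
# stale connection to switch0 deleted, switch1 created; cc:dd untouched
```

Note the asymmetry in the source text: discovering a switch can delete stale connections but never creates new ones, so both ends must have been scanned before a connection appears.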
On the main page of the Infrastructure tab you can view a summary of: All the infrastructure items held in the system. The infrastructure items that are owned by you. A number of infrastructure-related reports. From any list of objects you can drill down to display more detailed information (for example, to view the details of a Host or a Software Instance and the relationships between objects).
Some attributes or relationships only appear on the View Object page if a value is set. If data has not been discovered, these fields will not be listed.
For more searching options, select Advanced Search from the Search box. For more information about searching, see Searching for data.
Viewing a host
All hosts are set up automatically by BMC Atrium Discovery.
To view a host
1. Click the Hosts link in the Infrastructure Summary section of the Infrastructure page. 2. Select a host from the list. The BMC Atrium Discovery Host page groups the field names in a logical format.
The information fields for a host object are arranged in the following groups: General Details: General information on the host such as its name, type, and hosted applications. See General Details below. Identity: Information on FQDN and domain. See Identity for details. Ownership & Location: Information on the ownership and location of the host. See Ownership & Location for details. Operating System: The Operating system and service pack of this host. See Operating System for details. Hardware and Network: Information on the physical hardware installed on the host, and network information. See Hardware and Network for details. Inference: The inference details. See Inference for details. Other: See Other for details.
You can also display provenance information by clicking the Show Provenance button. Provenance information is meta-information describing how the other information came to exist. For more information, see Provenance Information.
The information fields that may be displayed in the General Details group for a host object are detailed in the following table.
General details
Name: Name of this host system. A link is also provided which generates a PDF Host Profile. Click this to generate the Host Profile and then download it by clicking on the Reports Tab. See Host profile for more information.
Host Type: The type of the host, such as UNIX Server, Windows Desktop, and so on.
Hardware Vendor: The vendor of this host.
Partition: Whether the host is running on a partition.
Communicating With: A link to the host or list of hosts that this host is communicating with.
Virtual: Whether this host is a virtual machine. Only displayed when true.
Hosted Applications: The hosted applications on this host.
Software Instances: The Software Instances related to this host. Click the link to view the related Software Instances.
Aggregate Software Instances: The Aggregate Software Instances related to this host. Click the link to view the related Aggregate Software Instances.
Runtime Environments: The Runtime Environments related to this host. Click the link to view the related Runtime Environments.
Virtual Machines: The names of the virtual machines that run on this host. These are created by the VMware pattern.
Containing Host: The containing host for the virtual machines on this host, that is, the chassis in which it runs.
Containing VM: The containing virtual machines for this host, that is, the virtualization software and server that this host is on.
Zonename: Zonename of this host system. This is for Solaris zones only.
Zone UUID: The zone UUID of this host system. This is for Solaris zones only.
Host Container: The container for this host, that is, the machine on which it runs.
Cluster: The name of the cluster. This is only applicable if this host is a member of a cluster of host machines.
E10K SSP Hostname: The name of the host on which the E10K System Service Processor runs.
SunFire Domain: The SunFire domain name.
LDOM: The LDOM in which this host is running.
Power LPAR Number: The number of the LPAR on which the host is running.
Files: Configuration files for this host.
The information fields that may be displayed in the Identity group for a host object are described in the following table.
Identity
Hostname: The hostname of this host system.
Aliases: Other names by which this host is known. List of string values.
Local FQDN: The fully qualified domain name which is local to the host.
DNS Domain: The DNS domain that this host is part of.
NIS/Windows Domain: The name of the NIS or Windows domain that this host is in.
Windows Workgroup: The Windows workgroup that this host belongs to.
Endpoint Used To Be Identified As: If the endpoint used to discover this host was previously identified as a different host, a link to that host and a link to a comparison with that host are provided.
Endpoint Now Identified As: If the endpoint used to discover this host is now identified as a different host, a link to that host and a link to a comparison with that host are provided.
The information fields that may be displayed in the Ownership & Location group for a host object are described in the following table.
The information fields that may be displayed in the Operating System group for a host object are described in the following table.
Operating System
Discovered OS: The discovered operating system for this host, for example, Red Hat Enterprise Linux Server release 5 (Tikanga).
Discovered OS Class: The discovered operating system class for this host, for example, UNIX.
Discovered OS Type: The discovered operating system type for this host, for example, Red Hat Enterprise Linux.
Discovered OS Version: The discovered operating system version for this host, for example, 5.
Discovered OS Edition: The discovered operating system edition for this host, for example, Server.
Discovered OS Update Level: The discovered operating system update level for this host, for example, 6100-02.
Discovered OS Architecture: The discovered operating system architecture for this host, for example, x86_64.
Discovered OS Build: The discovered operating system build for this host, for example, Tikanga.
Discovered OS Vendor: The discovered operating system vendor for this host, for example, Red Hat.
Service Pack: The service pack installed on this host.
Discovered Kernel: The discovered kernel for this host.
Patch Count: The number of software patches discovered on this host.
Patches: The software patches related to this host.
Package Count: The number of software packages discovered on this host.
Unique Packages: The software packages related to this host.
OS Support: Operating System Support details. This provides the dates for the operating system's End of Life, End of Support, and End of Extended Support. Clicking any of the links displays the Hardware Reference Data page for that host type. For more information, see the OS Support page.
The information fields that may be displayed in the Hardware and Network group for a host object are described in the following table.
Processor Type: The type of processor this host has.
Number of Logical Processors: The number of logical processors contained on the physical processors that this host has. Each physical processor may have more than a single logical processor. This may be the same as the number of physical or virtual processors.
Cores per Processor: The number of cores per processor. For example, this could be dual-core or quad-core.
CPU Threading Enabled: Whether CPU hardware threading is enabled.
Number of Processor Types: The number of different processor types in this host.
Threads per Processor Core: The number of hardware threads per processor core.
Power Supply Status: BMC Atrium Discovery can identify which hosts have a single PSU, those that have one or more failed PSUs, and those for which PSU information is not available. Discovery detects the power supply or power supplies for each host and displays the status of each PSU on the Host page. The status of each PSU is shown as 'OK' or 'FAILED'. If the presence of a PSU is detected, but not its state, that PSU is shown as 'UNKNOWN'.
Hardware Reference Data: A table of information on the physical details of this host, for example, the hardware's size in Rack Units, its power capacity, and its thermal characteristics. Clicking any of the links displays the Hardware Reference Data page for that host type. For more information, see Hardware Reference Data page.
Network Interfaces: A detailed view of the network interface or interfaces discovered on this host. See Network Interfaces field for more information.
HBA Interfaces: The HBA interface or interfaces discovered on this host. See HBA modeling for more information.
File System: A detailed view of the file system for this host. See File System field for more information.
Primary Domain Controller: The name of the primary domain controller.
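The processor fields above are related: the logical-processor count is typically the product of the number of physical processors, the cores per processor, and (when CPU threading is enabled) the threads per core. The following Python sketch illustrates that relationship; the function and parameter names are illustrative, not part of the product:

```python
def logical_processors(physical: int, cores_per_processor: int,
                       threads_per_core: int, threading_enabled: bool) -> int:
    """Derive the logical processor count from the physical topology.

    When hardware threading is disabled, each core exposes a single
    logical processor regardless of its thread capability.
    """
    threads = threads_per_core if threading_enabled else 1
    return physical * cores_per_processor * threads

# Example: 2 quad-core processors with 2-way SMT enabled -> 16 logical processors
assert logical_processors(2, 4, 2, True) == 16
# The same hardware with threading disabled -> 8 logical processors
assert logical_processors(2, 4, 2, False) == 8
```

Note that on virtualized hosts the discovered logical-processor count may instead reflect the virtual CPUs presented to the guest, which is why the table says it "may be the same as the number of physical or virtual processors".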
The information fields that may be displayed in the Inference group for a host object are described in the following table.
Inference
Discovery Access: Lists Discovery Accesses for this host, grouped by scan date. When an endpoint is scanned, a Discovery Access node is created. The Discovery Access node records information such as the start time, end time, the Discovery Run of which this is a part, and the previous Discovery Access to simplify troubleshooting. Clicking a Discovery Access displays the DiscoveryAccess page for that Discovery Access. Where a Discovery Access is about to be deleted as part of DDD aging, a trash can icon is displayed next to the Discovery Access.
Discovery Conditions: Discovery conditions which apply to this host.
CMDB Synchronization
Last successful CMDB sync: The time and date at which this host was last successfully synchronized into the CMDB.
Last failed CMDB sync: The time and date at which this host was last unsuccessfully synchronized into the CMDB.
CMDB sync duration: The time (in seconds) spent performing the last CMDB synchronization for this host.
CMDB CI count: The number of CIs corresponding to this host at the last CMDB synchronization.
CMDB Relationship count: The number of relationships to CIs corresponding to this host at the last CMDB synchronization.
The information fields that may be displayed in the Other group for a host object are described in the table below.
Other
Details: Additional details of this host.
Collections: A collection which this host is a member of.
Data Completeness Issues: List of any missing fields.
The data in BMC Atrium Discovery is read-only. BMC strongly recommends that you do not edit any nodes, as your edits will not be retained by BMC Atrium Discovery.
Connected To
Click Show Details to reveal more information on the network interfaces. Where a mismatch has been flagged in the Interface or Connected To fields, the reason for this is highlighted, as shown in the screens below. The layout of the blocks indicates the configuration of the interfaces. For example, the following screen shows three IP addresses "spanned" by a single MAC address. In this case, a single interface has an IPv4 address, an IPv6 address, and an IPv6 link-local address. Currently, bonded interfaces are correctly displayed for Linux hosts only.
This screen shows the same interface after clicking Show Details to show additional information and reveal the reason for the highlighted mismatch.
A bonded interface is shown in the following screen where a single MAC address spans three interfaces.
Non-administrator WMI discovery of a Windows 2003 host In a non-administrator user discovery of a Windows 2003 host using WMI, the Manufacturer attribute of the network interfaces is not populated.
Network settings discovery in Windows NT4 The negotiation, speed, duplex, driver_date, and driver_version attributes are not set for Windows NT4 systems. The information is not stored in the registry as it is in later versions of Windows.
The links in each row link to the File System node page for that file system. The first displays a LOCAL file system node. The second displays a node EXPORTED from one host linked to a REMOTE node from another (the NFS file system from the client and server points of view).
The information fields for a File System node are shown in the following table.
Name: File system name.
Kind: The kind of file system. This may be LOCAL, EXPORTED, or REMOTE.
Type: The type of file system, for example, ext3, tmpfs, NTFS, cifs, and so forth.
Mount Point: The file system mount point.
Size: The size of the file system.
% Used: The percentage of the file system that is used. This field and the ones described below are highlighted yellow when 81% or more of the capacity is used, and red when 91% or more of the capacity is used. This can be seen in the VMware ESXi 4.0.0 example above.
Free Space: The amount of free space remaining.
% Free: The percentage of the file system that is free.
Host: The name of the host on which the file system is mounted.
BMC Atrium Discovery reports the size and free percentage of the file system. In some file systems, part of the space is reserved for root (or similar) access and is not usable by general users. Thus, the reported size and free space may exceed what is available to general users.
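The root-reserved-space distinction can be demonstrated with Python's standard `os.statvfs`, where `f_bfree` counts all free blocks while `f_bavail` counts only the blocks an unprivileged user can use. The sketch below also applies the highlight thresholds stated in the table (yellow from 81% used, red from 91%); the function names and returned dictionary are illustrative, not the product's API:

```python
import os

def usage_highlight(used_pct: float):
    """Highlight colour for a usage percentage, per the thresholds above:
    yellow when 81% or more is used, red when 91% or more is used."""
    if used_pct >= 91:
        return "red"
    if used_pct >= 81:
        return "yellow"
    return None

def filesystem_usage(path: str) -> dict:
    """Size, free space, and user-available space for a mounted file system."""
    st = os.statvfs(path)
    block = st.f_frsize
    size = st.f_blocks * block
    free = st.f_bfree * block        # all free blocks, including root-reserved
    available = st.f_bavail * block  # free blocks usable by general users
    used_pct = 100.0 * (size - free) / size if size else 0.0
    return {"size": size, "free": free, "available": available,
            "used_pct": used_pct, "highlight": usage_highlight(used_pct)}

# Total free space always includes at least the space available to general users.
usage = filesystem_usage("/")
assert usage["free"] >= usage["available"]
```

On file systems with a root reservation (for example ext3 with its default 5% reserve), `free` exceeds `available`, which is exactly why the reported free space can exceed what general users see.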
Consolidated hosts
Consolidated hosts are discovered hosts on the network that collaborate or are virtual, and that can be grouped into their host containers or clusters. The consolidation of these hosts is managed by standard Pattern Language patterns. The types of consolidated hosts supported by BMC Atrium Discovery are:
Starfire domains
Virtual hosts consolidated into parent hosts
Clusters
Patterns are provided to consolidate the following machines:
Sun Enterprise 10000 - E10K
Sun Fire 12000 - F12K
Sun Fire 15000 - F15K
Sun Fire 20000 - F20K
Sun Fire 25000 - F25K
Solaris 10 zones
When a Solaris 10 host is discovered, Discovery runs commands to determine the zonename of the host, and a software instance is created for each installed zone. The patterns search for a host with the zonename global; this is the container host. The list of installed zones on that host gives the list of zonenames to search for on other Solaris 10 hosts. The patterns search for these hosts in the datastore and then link them to the Software Instance for the zone on the Host node representing the container host. The installed zones are shown as Virtual Machines on the container host details page.
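The consolidation logic described above can be sketched as follows. This is an illustration only, using plain dictionaries in place of datastore nodes; the real logic is implemented in Pattern Language patterns, and all host names and record shapes here are hypothetical:

```python
# Hypothetical discovered-host records: one global zone (the container)
# listing its installed zones, and two non-global zone hosts.
hosts = [
    {"name": "sol-global", "zonename": "global",
     "installed_zones": ["webzone", "dbzone"]},
    {"name": "sol-web", "zonename": "webzone", "installed_zones": []},
    {"name": "sol-db", "zonename": "dbzone", "installed_zones": []},
]

def consolidate_zones(hosts):
    """Link each non-global zone host to its global-zone container host,
    mirroring the pattern logic: find hosts with zonename 'global', then
    match each installed zonename against the other discovered hosts."""
    links = {}
    for container in hosts:
        if container["zonename"] != "global":
            continue
        for zonename in container["installed_zones"]:
            for candidate in hosts:
                if candidate["zonename"] == zonename:
                    links[candidate["name"]] = container["name"]
    return links

assert consolidate_zones(hosts) == {"sol-web": "sol-global",
                                    "sol-db": "sol-global"}
```

The resulting links are what the container host's details page renders as its Virtual Machines list.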
You cannot run lsof inside a Solaris zone container, by design of the Solaris OS. All processes from all zones are visible in the global zone.
VMware consolidation
The host running the VMware instance is seen as an individual host. When the host is scanned, software instances representing the VMware instance and any running VMware guest operating systems are created. Software Instance Nodes are linked to the Host Node representing the physical server running the VMware instance and the Host Node representing the virtual host.
The information fields for a typical host are described in the following table.
Hardware Vendor: The name of the hardware vendor.
Hardware Model: The hardware model name.
Known Hardware Model Names: Any other names by which this hardware is known.
Processor Type: The processor type.
Number of Processors: The number of physical processors in the hardware.
Maximum RAM: The maximum amount of RAM supported by this hardware.
Rack Unit (RU): The size of the hardware.
Power Capacity (VA): The input power capacity measured in Volt Amperes.
Power Capacity (W): The input power capacity measured in Watts.
BTU per hour: Thermal output measured in BTUs.
Power Capacity VA per RU: The input power capacity measured in Volt Amperes, per Rack Unit.
Power Capacity W per RU: The input power capacity measured in Watts, per Rack Unit.
BTU per hour per RU: Thermal output measured in BTUs, per Rack Unit.
Hardware: A link to the host or a list of all hosts using this hardware.
New Hardware Reference data can be imported into BMC Atrium Discovery by an administrator. For more information, see Importing Hardware Reference Data.
Other names that the software or operating system is known as. A link to the host or a list of all hosts running this software product or operating system.
HBA modeling
Host Bus Adapter (HBA) modeling provides the ability to determine which firmware version is running on individual Fiber Channel HBA cards in the discovered network, and to report on them. This function has been extended in BMC Atrium Discovery 9.0 to retrieve information from QLogic cards in addition to Emulex. The methods have been improved so they should be able to retrieve information from other vendors' cards. These methods are supported on the following platforms: Linux (2.6 and 2.4 kernels), VMware ESX/ESXi, Solaris, AIX, and Windows. In BMC Atrium Discovery versions 9.0 SP1 and later, HBA modeling has been extended to the HP-UX platform and retrieves information from Emulex, QLogic, and HP Tachyon cards.
The HBA Interface(s) row is divided into fields. The discovered HBA card ID and the firmware version that the card reported are displayed. This identifier string is set by the manufacturer to identify that particular version of the firmware. The WWNN and WWPN columns display the World Wide Node Number and World Wide Port Number respectively. A Fiber Channel target is assigned its WWNN at loop initialization time, and the WWNN can change between one loop initialization and the next. Every time a system boots, or a target is added to or removed from the Fiber Channel, the loop is re-initialized. There are two possible configurations for HBA cards: a card with one node and two ports, or a card with two nodes and one port each. The Emulex cards have two nodes and one port each. To view the details for a card, click any of the listed HBA cards. The Fiber Channel HBA page is displayed for that card. From here you can view any other related objects, view history, or destroy the object.
Information access
The HBA discovery scripts have been revised and updated from a single monolithic script to a set of scripts, each of which discovers using a single technique.
Viewing a cluster
A cluster object in BMC Atrium Discovery represents a logical grouping of one or more hosts. All cluster objects are set up automatically by the BMC Atrium Discovery discovery process.
To view a cluster
To view a cluster, click the Clusters link in the Infrastructure Summary section of the Infrastructure page and select a cluster from the list. The following screen illustrates an example of a Clusters page.
The information fields for a typical cluster are described in the following table.
Name: Name of this cluster.
Type: Type of cluster, for example, Microsoft cluster.
Member Hosts: Hosts that are members of this cluster.
Service Software: Software Instances providing the cluster service.
Data Quality Issues: List of any missing fields.
The information fields for a typical host container are described in the following table.
Name: Name of this host container.
Type: Type of host container.
Model: Model name of host container.
Hardware Vendor: Hardware vendor name for this host container.
System Identifier: The system identifier for this host container.
Members: Hosts contained by this container.
Data Completeness Issues: List of any missing fields.
Viewing a mainframe
A mainframe object in BMC Atrium Discovery represents a mainframe computer, identified by an IP address range. Mainframe objects are set up automatically by the BMC Atrium Discovery discovery and reasoning processes.
To view a mainframe
1. Click the Mainframes link in the Infrastructure Summary section of the Infrastructure page. 2. Select a mainframe from the list. The BMC Atrium Discovery Mainframe page groups the field names in a logical format. Note that the page is divided into separate groups of information which you can collapse or expand to view as required.
The information fields for a mainframe object are arranged in the following groups:
Default Section: Contains general information on the mainframe such as its name and vendor. See Default Section below.
MFParts: Information on LPARs contained in the mainframe. See MFParts for details.
Identity: The serial number and description of the mainframe. See Identity for details.
Hardware: Information on the mainframe model and number of processors. See Hardware for details.
Inference: The inference details. See Inference for details.
Default Section
The information fields that may be displayed in the Default Section group for a mainframe object are detailed in the following table.
Hostname: The hostname of the mainframe computer.
Vendor: The vendor of the mainframe computer.
MFParts
MFParts represent the LPARs of the mainframe computer. The links to the LPARs are listed in this section. See MF Part page for more information.
Identity
The information fields that may be displayed in the Identity group for a mainframe object are detailed in the following table.
Serial: The serial number of the mainframe computer.
Description: The description of the mainframe computer.
Hardware
The information fields that may be displayed in the Hardware group for a mainframe object are detailed in the following table.
Model: The model name of the mainframe computer.
Number of Processors: The number of processors in the mainframe computer.
Inference
Discovery Access: Lists Discovery Accesses for this mainframe, grouped by scan date. When an endpoint is scanned, a Discovery Access node is created. The Discovery Access node records information such as the start time, end time, the Discovery Run of which this is a part, and the previous Discovery Access to simplify troubleshooting. Clicking a Discovery Access displays the DiscoveryAccess page for that Discovery Access. Where a Discovery Access is about to be deleted as part of DDD aging, a trash can icon is displayed next to the Discovery Access.
MF Part page
The BMC Atrium Discovery MF Part page provides details of the MF Part. MF Parts are LPARs and may be native or virtual. The MF Parts page groups the field names in a logical format. Note that the page is divided into separate groups of information which you can collapse or expand to view as required. The information fields for an MF Part object are arranged in the following groups:
Default Section: General information about the MF Part such as its name, model, and vendor. See Default Section below.
Identity: The identity of this MF Part. See Identity for details.
Operating System: Information on the operating system installed on the mainframe. See Operating System for details.
Storage: Information about the storage collections used by the mainframe. See Storage for details.
Inference: The inference details. See Inference for details.
Default Section
The information fields that may be displayed in the Default Section group for an MF Part object are detailed in the following table.
Name: The name of the LPAR.
Model: The model of the LPAR.
Vendor: The LPAR vendor.
Virtual: A flag indicating whether the MFPart is virtual.
Partition: A flag indicating whether the MFPart is a partition.
Software Instances: Software Instances running on this MFPart.
Mainframe: A link to the object page for the containing mainframe computer.
Sysplex: A link to the object page for the sysplex. This is modeled as a Cluster Node.
Coupling Facility: A link to the coupling facility that the LPAR is using.
Identity
The full list of information fields which may be displayed in the Identity group for an MF Part object is shown in the following table.
Part Type: The serial number of the mainframe computer.
VM ID: The description of the mainframe computer.
Virtual: States whether the LPAR is virtual.
Operating System
The full list of information fields which may be displayed in the Operating System group for an MF Part object is shown in the following table.
Discovered OS: The discovered operating system, for example, z/OS SP7.1.1.
Discovered OS Class: The discovered operating system class (for example, Mainframe).
Discovered OS Type: The discovered operating system type (for example, z/OS).
Discovered OS Version: The discovered operating system version (for example, SP7.1.1).
Storage
The full list of information fields which may be displayed in the Storage group for an MF Part object is shown in the following table.
Storage Collection: Link to the storage collections used by this MF Part.
Inference
Discovery Access: Lists Discovery Accesses for this mainframe, grouped by scan date. When an endpoint is scanned, a Discovery Access node is created. The Discovery Access node records information such as the start time, end time, the Discovery Run of which this is a part, and the previous Discovery Access to simplify troubleshooting. Clicking a Discovery Access displays the DiscoveryAccess page for that Discovery Access. Where a Discovery Access is about to be deleted as part of DDD aging, a trash can icon is displayed next to the Discovery Access.
CMDB Synchronization
Last successful CMDB sync: The time and date at which this MFPart was last successfully synchronized into the CMDB.
Last failed CMDB sync: The time and date at which this MFPart was last unsuccessfully synchronized into the CMDB.
CMDB sync duration: The time (in seconds) spent performing the last CMDB synchronization for this MFPart.
CMDB CI count: The number of CIs corresponding to this MFPart at the last CMDB synchronization.
CMDB Relationship count: The number of relationships to CIs corresponding to this MFPart at the last CMDB synchronization.
Sysplex page
The BMC Atrium Discovery Sysplex page provides details of the mainframe computer's sysplex. The sysplex is the component of the mainframe which manages the contained LPARs. The information fields that may be displayed on the Sysplex page are detailed in the following table.
Name: The sysplex name.
Type: The sysplex is modeled using a cluster node. This field refers to the type of cluster, which for a sysplex is always Sysplex.
Internal Identifier: The internal identifier of the sysplex.
Description: A description of the sysplex. Currently this is sysplexname on mainframename.
Member MFParts: A link to any MFParts (currently LPARs) that are part of this sysplex.
Member Coupling Facilities: A link to any coupling facilities that are part of this sysplex.
Possible duplication of network devices You should either discover or import network devices. Doing both results in duplicates where a discovered network device is also imported.
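The duplication arises because a discovered device and an imported record for the same hardware carry no shared identity. A hypothetical pre-import check, keying on the serial number that both sources usually provide, might look like this (the record shape and field names are illustrative, not the product's data model):

```python
def find_duplicates(discovered, imported):
    """Return the imported device records whose serial number also
    appears among the discovered devices (candidate duplicates)."""
    discovered_serials = {d["serial"] for d in discovered if d.get("serial")}
    return [rec for rec in imported if rec.get("serial") in discovered_serials]

# Example: one switch both discovered on the network and present in an import file.
discovered = [{"name": "switch-1", "serial": "SN123"}]
imported = [{"name": "sw1-import", "serial": "SN123"},
            {"name": "rtr-9", "serial": "SN999"}]
dupes = find_duplicates(discovered, imported)
assert [d["name"] for d in dupes] == ["sw1-import"]
```

Running such a check before importing, and dropping the matches, is one way to follow the either/or recommendation above.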
The information fields for a discovered network device object are arranged in the following groups:
General Details: General information on the network device such as its name, type, and model. See General Details below.
Identity: Information on domain names and IP addresses. See Identity for details.
Operating System: The operating system information for this network device. See Operating System for details.
Infrastructure: Information on the services that the network device provides. See Infrastructure for details.
Inference: The inference details. See Inference for details.
You can also display provenance information by clicking the Show Provenance button. Provenance information is meta-information describing how the other information came to exist. For more information, see Provenance Information.
General details
The full list of information fields which may be displayed in the General Details group for a network device is described in the following table.
Name: The name of the network device. If no name is discovered, this is the IP address of the device.
Type: The type of the network device, such as Switch, Router, and so forth.
Vendor: The vendor of the network device.
Model: The model name of the network device.
Testing Status: Whether the exact SNMP device has been tested by BMC. This may be either: EXACT: BMC has tested the SNMP queries used to discover this device on a device of exactly this model. SIMILAR: BMC has tested the SNMP queries used to discover this device on a device that is similar, but not identical, to this device. For example, the queries may have been tested on a device from the same family of devices from the same vendor, but not on this exact model.
Identity
The full list of information fields that may be displayed in the Identity group for a network device object is detailed in the following table.
Serial ID: The serial number of the network device.
System Name: The sysname of the network device.
System Object ID: The SNMP System Object ID of the network device.
Operating system
The full list of information fields that may be displayed in the Operating System group for a network device object is detailed in the following table.
Discovered OS Class: The discovered operating system class for the network device (for example, Embedded).
Discovered OS Type: The discovered operating system type for the network device (for example, IOS).
Discovered OS Version: The discovered operating system version for the network device (for example, 12.1).
Discovered OS Vendor: The vendor of the discovered operating system (for example, Cisco).
Infrastructure
The full list of information fields that may be displayed in the Infrastructure group for a network device object is detailed in the following table.
Networking Interfaces: Provides the following details for each network interface:
IP Address: The IP address, also a link to the page for that IP address.
MAC Address: The MAC address of the network interface, which is also a link to the page for that network interface.
Details: The interface number (eth0, eth1), which is also a link to the page for that network interface. A red x icon is displayed where an interface has some mismatch (speed, duplex, or negotiation) with the network device that it is connected to.
Connected To: The network device that the network interface is connected to. A red x icon is displayed where the network device port has some mismatch (speed, duplex, or negotiation) with the interface connected to it.
Show details: Click Show Details to reveal more information on the network interfaces. Where a mismatch has been flagged in the Connected To or Details fields, the reason for this is highlighted.
Show All nnn: Click Show All nnn to show all of the available networking interfaces.
IPv4 Addresses: The name and IPv4 address of the network device. Click Show Details to see Netmask and Broadcast information.
IPv6 Addresses: The name and IPv6 address of the network device. Click Show Details to see Prefix information.
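The red x mismatch flag described above compares an interface's link settings with those of the switch port it is connected to, across the same three attributes named in the table. A simplified sketch of such a comparison (the attribute names and record shape are illustrative):

```python
def link_mismatches(interface: dict, port: dict) -> list:
    """Return the attributes on which an interface and the switch port it
    is connected to disagree (speed, duplex, or negotiation).

    Attributes missing on either side are skipped, since a value that was
    not discovered cannot be meaningfully compared."""
    return [attr for attr in ("speed", "duplex", "negotiation")
            if interface.get(attr) is not None
            and port.get(attr) is not None
            and interface[attr] != port[attr]]

# Example: a full-duplex NIC plugged into a half-duplex switch port.
nic = {"speed": 1000, "duplex": "full", "negotiation": "auto"}
port = {"speed": 1000, "duplex": "half", "negotiation": "auto"}
assert link_mismatches(nic, port) == ["duplex"]
```

A non-empty result corresponds to the red x icon; the list of differing attributes corresponds to the highlighted reason revealed by Show Details.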
Inference
The full list of information fields which may be displayed in the Inference group for a network device object is displayed in the following table.
Discovery Access: Lists Discovery Accesses for the network device, grouped by scan date. When an endpoint is scanned, a Discovery Access node is created. The Discovery Access node records information such as the start time, end time, the Discovery Run that the scan is part of, and the previous Discovery Access to simplify troubleshooting. Clicking a Discovery Access displays the DiscoveryAccess page for that Discovery Access. Where a Discovery Access is about to be deleted as part of DDD aging, a trash can icon is displayed next to the Discovery Access.
CMDB synchronization
Last successful CMDB sync: The time and date when the network device was last successfully synchronized into the CMDB.
Last failed CMDB sync: The time and date when the network device was last unsuccessfully synchronized into the CMDB.
CMDB sync duration: The time (in seconds) spent performing the last CMDB synchronization for the network device.
CMDB CI count: The number of CIs corresponding to the network device at the last CMDB synchronization.
CMDB Relationship count: The number of relationships to CIs corresponding to the network device at the last CMDB synchronization.
The information fields for a typical imported network device are described in the following table.
Name: Name of the network device.
Port Details: The following details are provided for each port on the network device: Port, Speed, Duplex, Negotiation, IP Address, Connected Host, and Description. This also highlights any performance-affecting mismatches. See Network device/host mismatch for more information.
Connected Hosts: List of connected hosts. The following details are provided for each connected host: Name, Discovered OS, Hardware Vendor, Virtual (whether the host is a virtual host), Organisational Unit, Location, and Status.
Data Quality Issues: List of any missing fields.
A number of context-sensitive reports are available for network devices. See Switch Reports for additional information and examples of these network device-related reports.
Hyper-V Windows virtual machines always report NIC speed as 10 Gbps Hyper-V Windows virtual machines always report a NIC speed of 10 Gbps regardless of the actual speed. Consequently, for ports connected to Hyper-V Windows VMs, the network device and switch mismatch field shows incorrect results.
The information fields for a discovered network printer object are arranged in the following groups:
General Details: General information about the network printer such as its name, vendor, and model. See General Details below.
Identity: Information about domain names and IP addresses. See Identity for details.
Operating System: The operating system information for the network printer. See Operating System for details.
Infrastructure: Information about the services that the network printer provides. See Services for details.
Inference: The inference details. See Inference for more information.
You can also display provenance information by clicking Show Provenance. Provenance information is meta-information describing how the other information became available. For more information, see Provenance Information.
General details
The full list of information fields which can be displayed in the General Details group for a network printer is described in the following table.
Name: The name of the network printer.
Type: The device type. For a printer, this is Printer.
Vendor: The vendor of the network printer.
Model: The model name of the network printer.
Testing Status: Whether the exact printer has been tested by BMC. This may be either: EXACT: BMC has tested the SNMP queries used to discover this device on a device of exactly this model. SIMILAR: BMC has tested the SNMP queries used to discover this device on a device that is similar, but not identical, to this device (for example, a duplex version of a validated single-sided printer).
Identity
The full list of information fields that can be displayed in the Identity group for a network printer object is detailed in the following table.
Serial ID: The serial number of the network printer.
System Name: The sysname of the network printer.
System Object ID: The SNMP System Object ID of the network printer.
Operating system
The full list of information fields that can be displayed in the Operating System group for a network printer object is detailed in the following table.
- Discovered OS Class: The discovered operating system class for the network printer (for example, Embedded).
- Discovered OS Type: The discovered operating system type for the network printer (for example, JetDirect).
- Discovered OS Version: The discovered operating system version for the network printer (for example, 22.01).
- Discovered OS Vendor: The vendor of the discovered operating system (for example, HP).
Infrastructure
The full list of information fields that can be displayed in the Infrastructure group for a network printer object is detailed in the following table.

Network Interfaces
- IP Address: The IP address, which is also a link to the page for that IP address.
- MAC Address: The MAC address of the network interface, which is also a link to the page for that network interface.
- Details: The interface number (eth0, eth1), which is also a link to the page for that network interface. A red x icon is displayed where an interface has some mismatch (speed, duplex, or negotiation) with the network device that it is connected to.
- Connected To: The network device that the network interface is connected to. A red x icon is displayed where the network device port has some mismatch (speed, duplex, or negotiation) with the interface connected to it.
- Show Details: Click Show Details to reveal more information on the network interfaces. Where a mismatch has been flagged in the Connected To or Details fields, the reason for this is highlighted.
- Show All nnn: Click Show All nnn to show all of the available network interfaces.
The table also shows the name and IPv4 address of the network printer (click Show Details to see Netmask and Broadcast information) and the name and IPv6 address of the network printer (click Show Details to see Prefix information).
Inference
The full list of information fields that can be displayed in the Inference group for a network printer object is displayed in the following table.
- Discovery Access: Lists Discovery Accesses for the network printer, grouped by scan date. When an endpoint is scanned, a Discovery Access node is created. The Discovery Access node records information such as the start time, the end time, the Discovery Run that the scan is part of, and the previous Discovery Access, to simplify troubleshooting. Clicking a Discovery Access displays the DiscoveryAccess page for that Discovery Access. Where a Discovery Access is about to be deleted as part of DDD aging, a trash can icon is displayed next to the Discovery Access.
The information fields for a discovered SNMP managed device object are arranged in the following groups:
- General Details: General information on the device, such as its name, type, and model. See General Details below.
- Identity: Information on the serial number, system name, and system object ID. See Identity for details.
- Ownership & Location: Information on the ownership and location of the device. See Ownership & Location for details.
- Operating System: The operating system information for this SNMP managed device. See Operating System for details.
- Infrastructure: Information on the network interfaces and IP addresses (IPv4 and IPv6) that the SNMP managed device uses. See Infrastructure for details.
- Inference: The inference details. See Inference for details.
- CMDB Synchronization: Information on the last CMDB synchronization operations and the number of CIs corresponding to this SNMP managed device at the last CMDB synchronization.
You can also display provenance information by clicking the Show Provenance button. Provenance information is meta-information describing how the other information came to exist. For more information, see Provenance Information.
General details
The full list of information fields that may be displayed in the General Details group for an SNMP managed device is described in the following table.
- Name: The name of the SNMP managed device. If no name is discovered, this is the IP address of the device.
- Type: The type of SNMP managed device, such as UPS, Network Infrastructure, or Remote Access Controller.
- Vendor: The vendor of the SNMP managed device.
- Model: The model name of the SNMP managed device.
- Status: Whether the exact SNMP device has been tested by BMC. This may be one of the following:
  - EXACT: BMC has tested the SNMP queries used to discover this device on a device of exactly this model.
  - SIMILAR: BMC has tested the SNMP queries used to discover this device on a device that is similar, but not identical, to this device. For example, the queries may have been tested on a device from the same family of devices from the same vendor, but not on this exact model.
  - Customer: The queries used to discover this device were contributed by a member of the BMC Atrium Discovery community.
Identity
The full list of information fields that may be displayed in the Identity group for an SNMP managed device object is detailed in the following table.
- Serial ID: The serial number of the SNMP managed device.
- System Name: The sysname of the SNMP managed device.
- System Object ID: The SNMP System Object ID of the SNMP managed device.
- Endpoint Used To Be Identified As: If the endpoint used to discover this SNMP managed device was previously identified as a different entity, a link to that entity and a link to a comparison with that entity are provided.
- Endpoint Now Identified As: If the endpoint used to discover this SNMP managed device is now identified as a different entity, a link to that entity and a link to a comparison with that entity are provided.
The information fields that may be displayed in the Ownership & Location group for an SNMP managed device object are described in the following table.
Status
Operating system
The full list of information fields that may be displayed in the Operating System group for an SNMP managed device object is detailed in the following table.
- Discovered OS Class: The discovered operating system class for the SNMP managed device (for example, Embedded).
- Discovered OS Type: The discovered operating system type for the SNMP managed device (for example, APC).
- Discovered OS Version: The discovered operating system version for the SNMP managed device (for example, 3.8.6).
- Discovered OS Vendor: The vendor of the discovered operating system (for example, APC).
Infrastructure
The full list of information fields that may be displayed in the Infrastructure group for an SNMP managed device object is detailed in the following table.

Network Interfaces
- Name: The name of the network interface, which is also a link to the page for that network interface.
- Description: A description of the network interface.
- MAC Address: The MAC address of the network interface, which is also a link to the page for that network interface.
- Details: The interface number (eth0, eth1), which is also a link to the page for that network interface. A red x icon is displayed where an interface has some mismatch (speed, duplex, or negotiation) with the network device that it is connected to.
- Connected To: The network device that the network interface is connected to. A red x icon is displayed where the network device port has some mismatch (speed, duplex, or negotiation) with the interface connected to it.
- Show Details: Click Show Details to reveal more information on the network interfaces. Where a mismatch has been flagged in the Connected To or Details fields, the reason for this is highlighted.
- Show All nnn: Click Show All nnn to show all of the available network interfaces.
The table also shows the name, IPv4 address, and subnet of the SNMP managed device (click Show Details to see Netmask and Broadcast information), and the name, IPv6 address, and subnet of the SNMP managed device (click Show Details to see Prefix information).
Inference
The full list of information fields that may be displayed in the Inference group for an SNMP managed device object is displayed in the following table.
- Discovery Access: Lists Discovery Accesses for the SNMP managed device, grouped by scan date. When an endpoint is scanned, a Discovery Access node is created. The Discovery Access node records information such as the start time, the end time, the Discovery Run that the scan is part of, and the previous Discovery Access, to simplify troubleshooting. Clicking a Discovery Access displays the DiscoveryAccess page for that Discovery Access. Where a Discovery Access is about to be deleted as part of DDD aging, a trash can icon is displayed next to the Discovery Access.
CMDB synchronization
- Last successful CMDB sync: The time and date when the SNMP managed device was last successfully synchronized into the CMDB.
- Last failed CMDB sync: The time and date when a synchronization of the SNMP managed device into the CMDB last failed.
- CMDB sync duration: The time (in seconds) spent performing the last CMDB synchronization for the SNMP managed device.
- CMDB CI count: The number of CIs corresponding to the SNMP managed device at the last CMDB synchronization.
- CMDB Relationship count: The number of relationships to CIs corresponding to the SNMP managed device at the last CMDB synchronization.
Requesting support for a new SNMP device
If you need BMC Atrium Discovery to support a new SNMP device, first try to create a recognition rule for it, as described in Recognizing SNMP devices. You can also use the Device Capture capability to download a zipped MIB that you can forward to BMC Customer Support as part of a new support issue. BMC Atrium Discovery engineering aims to integrate support for the new device as a new feature in a future TKU release. We recommend that you raise the case reporting the unsupported SNMP device, including the zipped MIB, as early as possible to maximize the chances of its inclusion in the earliest possible release. Note: BMC only provides integration support for SNMP-enabled devices that are either network printers or core networking infrastructure such as switches, routers, and load balancers. Other device classes, such as SNMP-enabled network appliances (for example, NetApp devices) or UPS devices, should be discovered through the use of recognition rules.
Notes
3. Click a device to display a list of all of the methods that BMC Atrium Discovery might use to obtain data from the device, and the Object Identifiers (OIDs) accessed. Where an OID contains a table, a plus (+) icon is displayed. Click the plus icon to reveal the table, and the minus (-) icon to collapse it. This is illustrated in the following screen.
Captured devices are displayed on the Device Captures page and include the following details: Endpoint, Last Captured, State, Expected Manufacturer, Expected Model, and Actions.
3. From the Actions column, click the corresponding link to perform the following actions:
- View Log: View the information about the capture (state, duration, and so forth) in a log file.
- Edit: Revise the entered information for the capture (name, description, and so forth) in a Description field.
- Capture: Run the capture (potentially overwriting any existing data). This is only available for captures that do not have a State of In Progress.
- Download: Download the capture as a .zip file. This option is only available if the capture completed successfully.
- Cancel: Stop an In Progress capture. A Capture Cancelled banner is displayed and an error is written to the log.
- Delete: Remove the capture files.
The following screen illustrates the log page of a successful capture.
To capture unsupported devices from DeviceInfo nodes: 1. Select a node or nodes from the Device Info List page and choose Capture Device Data from the Actions menu, as illustrated in the following screen.
Captured devices are displayed on the Device Captures page and include the following details: Endpoint, Last Captured, State, Expected Manufacturer, Expected Model, and Actions.
2. From the Actions column, click the corresponding link to perform the following actions:
- View Log: View the information about the capture (state, duration, and so forth) in a log file.
- Edit: Revise the entered information for the capture (name, description, and so forth) in a Description field.
- Capture: Run the capture (potentially overwriting any existing data). This is only available for captures that do not have a State of In Progress.
- Download: Download the capture as a .zip file. This option is only available if the capture completed successfully.
- Cancel: Stop an In Progress capture. A Capture Cancelled banner is displayed and an error is written to the log.
- Delete: Remove the capture files.
For a list of all network devices supported by BMC Atrium Discovery, see Supported SNMP devices.
Recognition approaches
An approach is a way of recognizing the device. BMC Atrium Discovery uses real-world knowledge when offering possible recognition approaches. For example, if the sysObjectID states that the manufacturer is Xerox, then the device is almost certainly a printer. Similarly, Dell and HP printers always have "printer" in the sysDescr. The same practice is used for switches and load balancers. Where the device would be modeled as a NetworkDevice node, you can always choose to create an SNMPManagedDevice node instead. The sysDescr may also show information about the device from which the most appropriate recognition approach is obvious. For example, where the sysDescr begins "Dell 1350cnw Color Printer" and the approaches offered are:
- SNMP Managed Device (dell/generic)
- Printer (dell/printer)
- Network Device (dell/generic)
- Network Device (dell/powerconnect3000)
the approach most likely to be successful is "Printer (dell/printer)", though it is possible to use any of those listed.
The SNMP Recognition Rules page shows the following information for each unrecognized SNMP device.
- Count: The number of unrecognized devices of this type that have been detected.
- sysObjectID: The SNMP System Object ID of the SNMP managed device. If the value is incorrect (for example, a truncated sysObjectID), an error is displayed.
- Sample sysDescr: An excerpt from the SNMP system description.
- Status: One of the following:
  - Unknown Device: An unrecognized SNMP device that has no recognition rule created.
  - Pending Discovery: A device for which a TKU has a rule, but which has not been scanned since the TKU was loaded.
  - Rule Defined: A rule is defined for that sysObjectID.
  - Overridden by TKU: A rule is defined for the sysObjectID, but the loaded TKU also has one defined. The TKU rule takes precedence.
  - Generic sysObjectID: Where multiple devices share the same sysObjectID (for example, embedded devices using the NET-SNMP or UCD SNMP agent), they cannot be fully identified using this approach. Some of these devices may be identified by reference to other SNMP System Object IDs, though this is not user configurable.
- Kind: The kind of device, for example, Printer or Network Device.
- Vendor: The vendor of the SNMP managed device.
- Model: The model name of the SNMP managed device.
- Actions: A dynamic list of available actions. The following are displayed, depending on the Status described above:
  - Create: Open the SNMP Recognition Rule dialog and create recognition rules. Displayed where the status is Unknown Device. When you have created a rule, the following actions are available:
  - Edit: Open the SNMP Recognition Rule dialog to edit existing recognition rules. This action is removed when a TKU rule overrides a recognition rule.
  - Delete: Delete an existing recognition rule.
  - Scan: Scan the device using the recognition rule to identify it.
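Conceptually, a recognition rule maps a device's sysObjectID to a kind, vendor, and model: the most specific matching rule wins, and a device with no matching rule stays in the Unknown Device status. The following Python sketch illustrates that lookup idea only; it is not BMC's implementation, and the rule contents shown are hypothetical:

```python
# Hypothetical recognition rules keyed by sysObjectID prefix.
# The OID and model are illustrative only.
RULES = {
    "1.3.6.1.4.1.674": {"kind": "Printer", "vendor": "Dell", "model": "1350cnw"},
}

def recognize(sys_object_id, rules=RULES):
    """Return the rule with the longest matching sysObjectID prefix,
    or None for an 'Unknown Device'."""
    best = None
    for prefix, rule in rules.items():
        if sys_object_id == prefix or sys_object_id.startswith(prefix + "."):
            if best is None or len(prefix) > len(best[0]):
                best = (prefix, rule)
    return best[1] if best else None

print(recognize("1.3.6.1.4.1.674.10898"))  # matches the hypothetical Dell rule
print(recognize("1.3.6.1.4.1.99999"))      # no rule: None ('Unknown Device')
```

Preferring the longest prefix mirrors why a generic sysObjectID (shared by many embedded devices) cannot fully identify a device: a short, shared prefix carries too little information to distinguish models.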
2. Enter information about the device into the fields of the SNMP Recognition Rule dialog.
- sysObjectID: A read-only field showing the sysObjectID. Also provides a link to a Google search for the sysObjectID.
- sysDescr: A read-only field showing the sysDescr.
- Approach: A drop-down list showing the available recognition approaches. Select the one most appropriate for the device. For more information, see approach.
- Vendor: Enter the vendor of the device. This information can often be obtained from the sysDescr field above. You cannot save the recognition rule without entering a Vendor name.
- Model: Enter the model name or number of the device. This information can often be obtained from the sysDescr field above. You cannot save the recognition rule without entering a Model name.
- Capabilities: Select the device capabilities from the drop-down list. For example, for the "Dell 1350cnw Color Printer", select Print. Where a device has multiple capabilities, click Add another and select from the new drop-down list. For example, a Self-contained NAS may also have a Media Library, so you should select both of those capabilities.
- Testing: Select an IP address from the drop-down list. This list is populated with the IP addresses on which the unrecognized device has been reported. Click Start to start the test. As the test progresses, the methods run are shown along with their results and a success or failure icon.
3. Click Save to save the recognition rule. 4. The SNMP Recognition Rules page is displayed and the status of the rule is now Rule Defined.
For every other scanning appliance:
3. Log in to the scanning appliance and navigate to the SNMP Recognition Rules page.
4. Select Import Recognition Rules from the Actions menu.
5. Enter the file name of the exported rules file into the Filename dialog box, or click Browse... to locate the file.
6. Click Import. The screen is refreshed and the imported rules are shown against the corresponding unrecognized devices.
Viewing a subnet
A subnet object in BMC Atrium Discovery represents a subnet, identified by an IP address range. Host systems are associated with a subnet. All subnet objects are set up automatically by the BMC Atrium Discovery discovery process.
To view a subnet
1. Click the Subnets link in the Infrastructure Summary section of the Infrastructure page. 2. Select a subnet from the list.
The information fields for a typical Subnet are detailed in the following table.
- IP Range: The range of IP addresses.
- Interfaces: The interfaces discovered on the subnet. The headers can be used to sort the interface list.
- Data Completeness Issues: A list of any missing fields.
- Primary Data Provenance: Displayed only when showing provenance data.
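Because a subnet is identified by an IP address range, associating a host with a subnet is essentially a range-membership test. As an illustration only (using Python's standard ipaddress module, not anything in the product):

```python
import ipaddress

# A subnet is identified by its IP address range, in CIDR notation.
subnet = ipaddress.ip_network("192.168.10.0/24")

# A discovered host interface belongs to the subnet whose range
# contains its IP address.
host_ip = ipaddress.ip_address("192.168.10.42")

print(host_ip in subnet)     # True: the address falls within the range
print(subnet.num_addresses)  # 256: size of a /24 range
```

The addresses shown are placeholders; any discovered interface address can be tested against any discovered range in the same way.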
- Contains Software Components: Software components that this software instance contains.
- Part of Application: The application that this software instance is a part of.
- Details: Links to the details node for the software instance.
- Maintaining Pattern: The pattern used to create the software instance.
- Primary processes: The primary process that identifies this software.
- Data Completeness Issues: A list of any missing fields.
What is Configipedia?
Configipedia is BMC's community website that facilitates knowledge sharing around BMC Atrium Discovery software patterns: how they function, and the real-world issues they help to address. Configipedia also provides visibility of the Technology Knowledge Update release schedule and contents. A link to Configipedia is provided for Software Instance nodes and Business Application Instance nodes that are maintained by BMC Atrium Discovery software patterns; see Viewing an application instance. A link to Configipedia is also provided in the detailed Pattern Module lists in the Pattern Management section of the Discovery tab. See Viewing and Editing a Pattern Source for more information.
Note Some attributes or relationships appear on the View Object page only if a data value has been set.
5. Click Create. You can add an empty subgroup from the home page for the group.
For more searching options, select Advanced Search from the Search box.
- Product Version: The product version of the application.
- Contains Software Instances: Software instances that are part of this application instance.
- Hosts: Hosts that this application is running on.
- Pattern: The pattern responsible for creating and maintaining this BAI.
- Data Completeness Issues: A list of any missing fields.
What is baselining?
Normally, this initial analysis is done manually by examining lists of processes, the network communications in which those processes are involved, and the host communications for hundreds or thousands of hosts, which can be a tedious, error-prone exercise that can take considerable time. Once the initial analysis is complete, the workshops are conducted. These involve talking to on-site experts about your conclusions from the analysis. These conclusions will often be challenged; either your expert is aware of a subtlety that you hadn't picked up on, or the environment has moved on and they're not aware of some new condition. Either way, you need a way to present your conclusions to them with clear supporting evidence that can be discussed and further analysed. Frantically cross-referencing reams of reports under the gaze of an impatient technical architect wastes time and precious goodwill.
Automatic grouping
When you baseline a data center, one of the first tasks is to subdivide it into small groups of hosts. BMC Atrium Discovery does this for you, for all discovered hosts, automatically at the end of each scan. Automatic grouping is intended as a baselining tool for use at the scale of 100 to 500 hosts; at larger scales the usefulness and usability of the visualizations is diminished. There are three types of group created automatically:
- Group around a host.
- Clients of a host.
- Hosts with no relevant communication.
An additional group type can be created manually by excluding hosts from automatic grouping. These hosts are placed into a group called "Hosts Excluded from Automatic Grouping", as described in Excluding hosts from automatic grouping below. You can enable or disable automatic grouping in the Discovery configuration page. The groups are created in the following way after each scan:
1. Hosts that communicate with one another are grouped; these groups are called Group around hostname. Any two hosts that are seen to communicate are called neighbours. When a pair of hosts is considered, the connection between them is ranked; a higher rank increases the probability of the hosts being grouped. The ranking is calculated according to the proportion of their total neighbours that they share: where more neighbours are shared, the ranking is higher, and the hosts are more likely to be grouped. This reduces the chances of clients of shared infrastructure being grouped on the basis of those connections alone.
2. Hosts that are clients of a particular set of servers are grouped; these groups are called Clients of host.
3. Hosts with no observed communication are grouped in the "Hosts with no relevant communication" group. Hosts that communicate only with hosts that have been excluded from grouping are also placed in this group.
Host Groups are stored as AutomaticGroup nodes in the datastore.
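The exact ranking calculation is internal to the product; the following Python sketch is an illustrative approximation of the general idea of ranking a connection by the proportion of neighbours the two hosts share:

```python
def shared_neighbour_rank(neighbours_a, neighbours_b):
    """Rank a connection by the proportion of neighbours two hosts share.
    A higher value means the hosts are more likely to be grouped together.
    Clients of shared infrastructure share few of their total neighbours,
    so connections through shared infrastructure rank low."""
    union = neighbours_a | neighbours_b
    if not union:
        return 0.0
    return len(neighbours_a & neighbours_b) / len(union)

# Two hosts that talk to mostly the same peers rank highly:
print(shared_neighbour_rank({"web1", "web2", "db1"},
                            {"web1", "web2", "backup1"}))  # 0.5
```

The function is a Jaccard-style overlap score; the product's actual formula is not documented and may differ.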
Click any of the icons to view a visualization. The top row shows:
- The overview: a visualization that shows all automatic groups and how they communicate with each other.
- No relevant communication: a visualization of the group containing hosts that either have no communication at all or are communicating only with excluded hosts.
- Excluded hosts: hosts that have been excluded from the automatic grouping process (see below).
The following line shows up to five of the most highly connected groups; that is, those that are communicating with the most other groups. The
final line shows up to five of the largest unconnected groups, self-contained groups that do not communicate outside themselves at all.
Direct navigation
1. Select the Infrastructure section from the Primary Navigation bar. 2. From the Infrastructure page, click the Explore tab on the Dynamic Secondary Navigation bar.
Host nodes show an additional menu option that controls whether the host is included in automatic grouping. The menu item shows as either Exclude From Automatic Grouping or Include In Automatic Grouping, depending on the current state of the node. See below for more information about this. The Automatic Grouping palette block on the left has a link to return to the overview visualization from any detail visualization.
To see the results of excluding the host, the grouping algorithm must be re-run. Several hosts can be marked as excluded from automatic grouping before re-running the algorithm. To re-run the algorithm, click Update, which appears in the Automatic Grouping palette block on the left. Regrouping then occurs and the overview visualization is redisplayed. Hosts that are excluded from automatic grouping appear in a special group, labeled Hosts Excluded From Automatic Grouping. This is displayed on the overview visualization in a similar way to hosts with no observed communication.
Drilling into this group shows all the hosts in the group, how they are communicating, and the other groups with which they are communicating, as before. However, hosts in the other groups with which the excluded hosts are communicating no longer show a communicating relationship to the excluded host, and the groups are calculated without regard for the communications to and from the excluded hosts. This also means that hosts that are communicating only with excluded hosts are moved into the "Hosts with no relevant communication" group. Hosts can be included again by right-clicking an excluded host and choosing Include In Automatic Grouping.
Manual grouping
BMC Atrium Discovery enables you to add manually selected nodes to a group. You can use this feature to create new groups, or to modify automatically created groups. Manually created groups can overlap, and can contain some or all of the nodes in automatically created groups. Manually created groups can also contain multiple node kinds (for example, Hosts, Software Instances, and Business Application Instances) and subgroups.
Non-ASCII Unicode characters in CAM: In Collaborative Application Mapping (CAM), if you create items such as group names or functional components that contain non-ASCII Unicode characters, the Business Application Instance (BAI) that results from running the pattern is displayed with unreadable characters.
Once you have collected the items and divided them into subgroups, you can create a PDF report to distribute to the application or component owners. The Group page is the home page for each group that has been created in BMC Atrium Discovery.
You can access Group pages from the Groups menu. The following sections describe the use of manual groups:
- Creating and populating a group
- Creating manual groups from lists or visualizations
- Adding subgroups to an existing group
- Viewing manual groups and their contents
- Adding individual or multiple nodes to a group
- Removing individual or multiple nodes from a group
- Renaming manual groups
- Including or excluding groups from your working set
- Deleting manual groups
The Group page lists the node types in the group and displays the first few nodes of each type. Click a node type heading (one of Hosts, Application Instances, or Software Instances) to view a list of the nodes of that type in the group. You can also click a node name to see the node view page for that node.
The top of the page includes a Notes section where you can add and edit notes on the group. The notes can be typed in rich text format, and you can include paragraph formatting, HTML links, and so forth. By clicking Edit, you can go back and revise the text. For more information about the notes functionality, see Searching and investigating the datastore. You can also create PDF reports on the group and individual subgroups. For more information, see Reporting on manual groups. The following screen shows a group with subgroups.
In an empty group, the following label is displayed: "This Group is empty, please browse and add items. You could start by trying an Advanced Search." Part of this label is a link to the Advanced Search page.

If the application that you are mapping has components with a specific naming scheme, start by searching for Software Instances that include a fragment of that name. For example, in the basic example to map BMC Atrium Discovery, enter "BMC Atrium Discovery" in the keywords field and click Search. Select Software Instances from the resulting multi-kind list, and from the resulting list select the components that are likely to be part of the application; in this case, select all of the software instances. Using the Query Builder, you can now traverse along relationships to find the hosts running those Software Instances and add them to the group. Common examples (depending on your application) of other related node types that you may consider traversing to and adding to your groups are:
- Software Components
- Database Details
- SIs with this Detail (from Database Details)
- Software Instances representing the database server

Once you have added more nodes, you should look at the group again and consider items that may have been missed. To simplify managing large groups, you may also wish to divide the group into a number of subgroups. There are a number of ways you may choose to do this, depending on the application that you are mapping. Examples are:
- Application layers
- Ownership
- Geography
- UAT/Production
- Unknown components that require investigation

You can reorder the subgroups by dragging and dropping them in the Contents window.
2. From the Groups menu, select the Show group labels in lists option. Although this step is not necessary, it makes the operations clearer in the user interface. 3. Click away from the menu to close it. 4. If you are on a search result page, select the nodes to add to the new group. You can use the select tools to help with this: (Select: All | None | Invert). 5. Click the Actions button to display the Actions menu, and select the Manual Groups item. 6. Enter the name of the new group in the field on the Groups page, as illustrated in the following screen. Double quotes (") cannot be used in a Group name. 7. Click Create. The page is updated to show that the node or nodes have been added to the new group.
12. Click Create. You can add an empty subgroup from the home page for the group.
7. Click Create. The visualization is refreshed, and the members of the group are highlighted with a tag showing the group name.
and click the Run Query button. The results are displayed in a report. To display the contents of subgroups, you need an additional containment traversal, for example:
search Group where name="AR host Pair" traverse Container:Containment:ContainedItem: traverse Container:Containment:ContainedItem:
For more information on running queries, see Using the Search service.
After you have created a group, click its entry in the groups menu to visit the home page for that group.
Renaming groups
To rename a group, click the pencil icon corresponding to it on the Groups menu. The name is redisplayed in an editable field. Edit the name as required and press Enter to apply the change. Double quotes (") cannot be used in a group name.
To delete a manual group, select the delete icon from the Groups menu.
Deleting a group does not delete the nodes contained in that group; it removes only the group itself.
b. Choose one of the following reporting levels from the Actions menu:
- Set PDF Detail High: Details the names of the nodes in each group and a list of attributes for each node, and displays functional components with all relationships to other components.
- Set PDF Detail Medium: Details the names of the nodes in each group and a list of important attributes for each node. This is the default reporting level.
- Set PDF Detail Low: Details the names of the nodes in each group, but no further information on the nodes. You might choose this level, for example, if you have complex diagrams in your report where the links or relationships are not readable. This level displays only the functional components and not their relationships to other components.
3. Choose Generate PDF Report from the Actions menu.
4. From the status link at the top of the Group page, click the Reports Tab link.
5. Select the Group Report in the list and click Download.
An example of all three levels combined in one report to represent a three-tier application is shown here.
Host profile
The Host Profile is a PDF report that provides a summary of baselining information about a host or hosts. On this report you can view any of the following:
Host profile for a single host, which is available from the host page
Host profile for a selection of hosts or all hosts, from the host list page
Host profiles that have been generated by any user
A sample Host profile is available here.
Managing reporting
This section explains how to run and manage the reports included with BMC Atrium Discovery:
Reporting basics
The main reports page
Application reports
Infrastructure reports
Discovery reports
Context-sensitive node reports
A simple way of reaching the reports that you use most often is to use a pre-defined dashboard, or to create one of your own. See Dashboards for more information.
Reporting basics
BMC Atrium Discovery includes a number of predefined reports. Reports operate in the same way as searches: when a report is selected, a query is run to find matching objects. The results are displayed in the form of a list page, from which details of individual objects can be selected. All of these reports can be added to a dashboard. See Using and customizing dashboards for more information. Reports can also be located using the search in the UI. Keywords entered there are matched against the titles and descriptions of the reports. Reports can be accessed from a number of areas in the product.
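Because a report is effectively a saved search whose title and description are matched against keywords, the lookup can be pictured as a small registry. This is an illustrative sketch only: the Report class, the sample entries, and the query strings are hypothetical, not taken from the product.

```python
# Hypothetical sketch: reports as named queries, found by keyword matching
# against titles and descriptions (mirrors the behaviour described above).
from dataclasses import dataclass

@dataclass
class Report:
    title: str
    description: str
    query: str  # the search that runs when the report is selected (invented)

REPORTS = [
    Report("All Hosts ordered by Hostname",
           "Show a list of all Discovered Hosts, ordered by their hostname",
           "search Host"),
    Report("Newly Discovered Hosts",
           "Show new Hosts found in the given period",
           "search Host where age < threshold"),
]

def find_reports(keyword: str) -> list[Report]:
    """Match a keyword against report titles and descriptions."""
    kw = keyword.lower()
    return [r for r in REPORTS
            if kw in r.title.lower() or kw in r.description.lower()]
```

Running a matched report then amounts to executing its stored query and rendering the results as a list page.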
To run a report
1. From the Reports section, click the report you want to run. Alternatively, the main Application, Infrastructure and Policy pages display the most commonly used reports applicable to these sections. Some reports are run immediately. For those that can be configured, a page is displayed enabling you to specify the items you want to report on.
2. Choose the data you want to report on by specifying the parameter fields. The search is not case-sensitive. Where a drop-down list is provided, you can select a specific value or, where available, you can choose All to match all values. For dates, you can choose a day and month from drop-down lists and enter a year (in yyyy format). You can also specify a time (hours and minutes).
3. Click Run to run the report.
4. The results are summarized in the form of a list object page. For information about list object pages and the charting and filtering tools available, see Viewing summary list pages.
The reports are categorized according to the following list. Click a category to see a list and description of the reports:
Operational Integrity reports
Change Control reports
Software reports
Data Centre Standardization charts
Host Resiliency - Patches reports
Atrium Discovery Deployment reports
Data Centre Standardization Software reports
Data Centre Standardization reports
Host Resiliency - Services reports
Host Resiliency - Kernel reports
Reports on nodes
Where reports list nodes, each node has a check box next to it. You can select and deselect individual or multiple nodes of this type using these check boxes. There is also a select all check box in the heading row which enables you to select all nodes in the report. The select all check box selects all nodes on all pages of the report.
3. Click the slice again to return it to the original pie chart position.
The report is divided into three areas: Report Counter, Context Report Counter, and Chart Counter. Each one has a unique ID. The report is downloaded in the form of a text file.
Software reports
The Software reports section on the Reports page enables you to run a number of discovered software-related reports.
Application reports
The main Applications page enables you to run a number of reports on the Business Applications in your system. These individual reports can also be run from the main Reports home page.
Applications that could be Impacted by a Network Device: Shows a list of Applications that could be impacted by changes to a Network Device. Set Include Hosted Virtual OSIs to also include Applications running in Virtual OSIs.
Applications that could be Impacted by a Network Device via Drop Down: Shows a list of Applications that could be impacted by changes to a Network Device, using a drop-down of all seen Network Devices. Set Include Hosted Virtual OSIs to also include Applications running in Virtual OSIs.
Applications that could be Impacted by a Host: Shows a list of Applications that could be impacted by changes to a Host. Set Include Hosted Virtual OSIs to also include Applications running in Virtual OSIs hosted by this Host.
Applications that could be Impacted by a Host via Drop Down: Shows a list of Applications that could be impacted by changes to a Host, using a drop-down of all seen Hosts. Set Include Hosted Virtual OSIs to also include Applications running in Virtual OSIs hosted by this Host.
Applications that could be Impacted by a Host Model: Shows a list of Applications that could be impacted by changes to a Host Model. Set Include Hosted Virtual OSIs to also include Applications running in Virtual OSIs hosted by this Host Model.
Applications that could be Impacted by a Host Model via Drop Down: Shows a list of Applications that could be impacted by changes to a Host Model, using a drop-down of all seen Host Models. Set Include Hosted Virtual OSIs to also include Applications running in Virtual OSIs hosted by this Host Model.
Applications that could be Impacted by an OS: Shows a list of Applications that could be impacted by changes to an OS. Set Include Hosted Virtual OSIs to also include Applications running in Virtual OSIs hosted by this OS.
Applications that could be Impacted by an OS via Drop Down: Shows a list of Applications that could be impacted by changes to an OS, using a drop-down of all seen OSs. Set Include Hosted Virtual OSIs to also include Applications running in Virtual OSIs hosted by this OS.
Applications that could be Impacted by a Software Product: Shows a list of Applications that could be impacted by changes to a Software Instance type. By default this report only searches directly modeled containment dependencies. Set Include Co Hosted Applications to include Applications that run on the same Host. Set Include Co Inferred Software Instances to include Software Instances with primary inferences from the same source; for instance, this would include Java SIs when searching for Tomcat SIs. Both of these additions cause the report to execute more slowly.
Applications that could be Impacted by a Software Product via Drop Down: Shows a list of Applications that could be impacted by changes to a Software Instance type, using a drop-down of all seen Software Instance Types. By default this report only searches directly modeled containment dependencies. Set Include Co Hosted Applications to include Applications that run on the same Host. Set Include Co Inferred Software Instances to include Software Instances with primary inferences from the same source; for instance, this would include Java SIs when searching for Tomcat SIs. Both of these additions cause the report to execute more slowly.
Infrastructure reports
The main Infrastructure page enables you to run a number of reports on the IT infrastructure. Some of these individual reports can also be run from the main Reports home page.
VMware Infrastructure
Hosts supporting VMware VMs: Shows a report of the physical Hosts containing running VMware Guests.
Hosts in VMware VMs: Shows a report of the contained VMware Guests.
ESX/ESXi Hosts where API access needs patching: Shows a report of the ESX/ESXi Hosts that have a connection leak in their API and need updating if API-based discovery is required.
VMware Guest OS Classification: Shows a chart of the OS Classification of VMware Guests.
VMware Guest OS Distribution: Shows a chart of the OS Type distribution of VMware Guests.
Solaris Infrastructure
Solaris LDOMs: Shows a report of Hosts that are Solaris Logical Domains.
Hosts supporting Solaris Zones: Shows a report of the physical Hosts containing Solaris Zones.
Hosts in Solaris Zones: Shows a report of the contained Solaris Zones.
Hosts in Xen VMs: Shows a report of the contained Xen Domain hosts.
Hosts supporting OracleVM VMs: Shows a report of the Hosts containing OracleVM Domain hosts.
Hosts in OracleVM VMs: Shows a report of the contained OracleVM Domain hosts.
Hosts supporting Sun xVM VirtualBox VMs: Shows a report of the physical Hosts containing running Sun xVM VirtualBox Guests.
Hosts in Sun xVM VirtualBox VMs: Shows a report of the contained Sun xVM VirtualBox Guests.
Operating Systems
Host OS Classification: Shows a count of Hosts for each OS Classification.
Host OS Distribution: Shows a count of Hosts for each OS Type.
UNIX Classification: Shows a count of Hosts for each UNIX OS.
Windows Versions: Shows a count of Hosts for each Windows Version.
Infrastructure Reports
All Hosts ordered by Hostname: Shows a list of all Discovered Hosts, ordered by their hostname.
All Hosts ordered by Operating System: Shows a list of all Discovered Hosts, ordered by their Operating System and hostname.
All Hosts ordered by Location: Shows a list of all Discovered Hosts, ordered by their Location and hostname.
All Hosts ordered by Organizational Unit: Shows a list of all Discovered Hosts, ordered by their Organizational Unit and hostname.
All Hosts ordered by Host Owner: Shows a list of all Discovered Hosts, ordered by their Host Owner and hostname.
Host NIS/Windows Domain: Shows a list of Host nodes and their discovered domain, ordered by domain and hostname.
Host NIS/Windows Domain Chart: Shows a chart of Host nodes and their discovered domain.
Fibre Channel HBA: Shows a list of Fibre Channel Host Based Adapters (HBA) and which Host they are installed in. The list includes the HBA identifier, WWNN and WWPN.
MAC to IP: Shows a list of Network Interface MAC addresses against the IP addresses allocated to them and which Host they are installed in. The list includes the Interface Name, MAC and IP. Virtual interfaces on the same MAC address are shown on separate rows.
Host/Network Device Mismatches: Shows Host to Network Device configuration mismatches.
Host IP Address Distribution: Shows the distribution of IP Addresses per Host.
Installed Software
Installed Packages: Report to show Hosts linked to Packages. This is an intensive report and may take some time to run.
Packages - Where Installed: Shows package details and on what hosts they are installed. The list can be filtered by package name.
Patches - Where Installed: Shows patch details and on what hosts they are installed. The list can be filtered by patch name.
Database Reports
All Database Instances: Shows a list of all Database Instances.
Oracle Instances: Shows a list of Oracle Instances.
Sybase Instances: Shows a list of Sybase Instances.
SQL Server Instances: Shows a list of SQL Server Instances.
MySQL Instances: Shows a list of MySQL Instances.
DB2 Instances: Shows a list of DB2 Instances.
PostgreSQL Instances: Shows a list of PostgreSQL Instances.
Discovery reports
To access the main Discovery Reports page, click the Discovery tab on the primary navigation bar, and then click the Discovery Reports button in the dynamic secondary navigation bar. This page enables you to run a number of reports on Discovery Access.
New entities
Newly Discovered Hosts: Report to show new Hosts found in the given period.
Newly Discovered Software: Report to show new Software Instances found in the given period.
Newly Discovered Packages: Report to show new Packages found in the environment in the given period. This is an intensive report and may take some time to run.
New Packages discovered on Hosts: Report to show new links between Hosts and Packages in the given period. This is an intensive report and may take some time to run.
Newly Discovered Patches: Report to show new Patches found in the environment in the given period. This is an intensive report and may take some time to run.
New Patches discovered on Hosts: Report to show new links between Hosts and Patches in the given period. This is an intensive report and may take some time to run.
Installed Packages: Report to show Hosts linked to Packages. This is an intensive report and may take some time to run.
Model health
Hosts with Endpoint Identity Change: Shows a list of Host nodes that were created because the identity of the Host derived from an endpoint changed. The previous Host node is no longer actively maintained and is aging out.
Hosts near Removal Threshold: Shows a list of Host nodes that are near their removal thresholds and gives an approximate indication of the removal deadline.
Network Devices near Removal Threshold: Shows a list of Network Device nodes that are near their removal thresholds and gives an approximate indication of the removal deadline.
MFParts near Removal Threshold: Shows a list of MFPart nodes that are near their removal thresholds and gives an approximate indication of the removal deadline.
Printers near Removal Threshold: Shows a list of Printer nodes that are near their removal thresholds and gives an approximate indication of the removal deadline.
Hosts impacted by Discovery Conditions: Shows a list of Hosts impacted by selected Discovery Conditions.
Containing Hosts with unresolved Virtual Machines: Shows Hosts that are running more Virtual Machines than have been resolved to Hosts.
Virtual Machines unresolved to Containing Hosts: Shows Hosts that appear to be running in Virtual Machines but have not been resolved to containing Hosts.
Host reports
To run host-related reports:
1. Click the Hosts link in the Infrastructure Summary section of the Infrastructure page. The Hosts page is displayed.
2. Click the Reports icon in the dynamic toolbox. The following host-related reports are displayed:
Seen but unscanned IPs: Shows unscanned Hosts inferred from Network Connections.
Network Device Dependencies: Shows Network Devices used by these Hosts.
Host Dependencies via SI and BAI: Shows other Hosts reached by traversing shared SIs and BAIs.
Number of Interfaces vs Hosts: Shows the number of Network Interfaces versus the number of Hosts.
Number of Subnets vs Hosts: Shows the number of Subnets versus the number of Hosts.
Observed Communicating Hosts: Shows hosts reached through observed network connections.
Detailed Observed Communications: Detailed report of observed communication, showing processes where available.
Host Last Update Summary: Shows the distribution of the relative age of the last update on Hosts.
Number of Interfaces vs Hosts: Shows the number of Network Interfaces versus the number of Hosts on a Column Chart.
Number of Subnets vs Hosts: Shows the number of Subnets versus the number of Hosts on a Column Chart.
Number of Software Instances vs Hosts: Shows the number of Software Instances running on a Host versus the number of Hosts on a Column Chart.
Number of Applications vs Hosts: Shows the number of Business Application Instances running on a Host versus the number of Hosts on a Column Chart.
Host OS Type Summary Chart: Shows OS by Type on a Pie Chart.
Host OS Class Summary Chart: Shows OS by Class on a Pie Chart.
Host Access Method Summary Chart: Shows Host Access Methods on a Column Chart.
Host Access Result Summary Chart: Shows Host Access Results on a Column Chart.
Detailed Observed Communications: Detailed report of observed communication, showing processes where available.
Host Network Connection Count Over Time: Line graph of the number of Network Connections over all available scans.
Host Process Count Over Time: Line graph of the number of Processes over all available scans.
Host Access Method Summary Chart: Host Access Methods on a Column Chart.
Host Access Result Summary Chart: Host Access Results on a Column Chart.
Mainframe reports
To run Mainframe-related reports:
1. Click the Mainframe link in the Infrastructure Summary section of the Infrastructure page. The Mainframe page is displayed.
2. Click Reports in the dynamic toolbox. The following Mainframe-related reports are available:
Contained MFParts: LPARs on these Mainframes.
Contained Sysplex: Sysplex on these Mainframes.
Contained Coupling Facilities: Coupling Facilities on these Mainframes.
Contained z/VM Guest LPARs: LPARs running under z/VM, mapped to the z/VM LPARs on these Mainframes.
Storage SubSystems: Storage SubSystems used by LPARs on these Mainframes.
Storage Volumes: Storage Volumes used by LPARs on these Mainframes.
MFPart reports
To run MFPart-related reports:
1. Click the MFPart link in the Infrastructure Summary section of the Infrastructure page. The MFPart page is displayed.
2. Click Reports in the dynamic toolbox.
The following MFPart-related reports are available:
Storage Volumes: Storage Volumes used by LPARs.
Containing Mainframe: Mainframe that contains these LPARs.
Peer LPARs via Sysplex: Other LPARs in the same Sysplex as this one.
Peer LPARs via Coupling Facility: Other LPARs using the same Coupling Facility as this one.
Peer LPARs via Storage SubSystem: Other LPARs using the same Storage SubSystems as this one.
Package reports
To run package-related reports:
1. Click the Package link in the Infrastructure Summary section of the Infrastructure page. The Package page is displayed.
2. Click Reports in the dynamic toolbox. The following package-related reports are available:
Package Host Density: Shows how many hosts these packages are installed on.
Patch reports
To run patch-related reports:
1. Click the Patch link in the Infrastructure Summary section of the Infrastructure page. The Patch page is displayed.
2. Click Reports in the dynamic toolbox. The following Patch-related reports are available:
Patch Host Density: Shows how many hosts these patches are installed on.
Subnet reports
To run subnet-related reports:
1. Click the Subnets link in the Infrastructure Summary section of the Infrastructure page. The Subnets page is displayed.
2. Click Reports in the dynamic toolbox. The following subnet-related reports are available:
Subnet Interface Count: Shows how many interfaces are on these subnets.
Subnet Host Count: Shows how many hosts are on these subnets.
CMDB synchronization
CMDB synchronization provides a configurable mechanism to keep data in the BMC Atrium CMDB (CMDB) synchronized with information discovered by BMC Atrium Discovery.
The BMC Atrium Discovery data model (Discovery model) is different from the Common Data Model (CDM) used in the CMDB, so the synchronization mechanism is responsible for transforming the required information from one data model to the other. The Discovery model is known as the source model, and the CDM is referred to as the target model. Both models are graph models, meaning that they represent data as nodes connected to each other with relationships.

Synchronization operates on portions of the full graph known as subgraphs. A subgraph contains a root node (such as a host computer), plus all the related nodes that belong to it (such as interface information, software information, and so on), referred to as its components. Some nodes can be shared, meaning that they belong to more than one subgraph. For example, the node representing a subnet is shared by all the host computers on that subnet.

The mapping between the Discovery model and the CDM is defined in syncmapping definitions within TPL files. The Default CDM Mapping provides a comprehensive mapping from the Discovery model to a best-practices CDM model. If you have added custom nodes to Discovery, they are not exported automatically by the default CDM mapping; instead, you must create additional mapping definitions within TPL files.
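The subgraph idea can be pictured with a minimal sketch. The Node and Subgraph classes and all names below are invented for illustration; they are not the real Discovery data model.

```python
# Hypothetical sketch of a subgraph: a root node plus its component nodes.
# A shared node (here, a subnet) belongs to more than one subgraph.

class Node:
    def __init__(self, kind: str, name: str):
        self.kind = kind
        self.name = name

class Subgraph:
    """A root node plus the component nodes that belong to it."""
    def __init__(self, root: Node):
        self.root = root
        self.components: list[Node] = []

# A subnet node is shared by the subgraphs of every host on that subnet.
subnet = Node("Subnet", "10.0.1.0/24")

web01 = Subgraph(Node("Host", "web01"))
web01.components += [Node("NetworkInterface", "eth0"), subnet]

web02 = Subgraph(Node("Host", "web02"))
web02.components += [Node("NetworkInterface", "eth0"), subnet]

# The very same node object appears in both subgraphs:
shared = web01.components[-1] is web02.components[-1]  # True
```

The point of the shared-object check is that synchronization must treat such nodes as belonging to several subgraphs at once, rather than duplicating them.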
Synchronization stages
Synchronization occurs in five stages:
1. A root source node is chosen for synchronization. The root node kinds are Host, NetworkDevice, MFPart, and Printer, corresponding to the main kinds of device that can be discovered. The list of possible root nodes is fixed and cannot be altered in the syncmappings.
2. The source subgraph is populated by retrieving all the relevant data from the Discovery datastore.
3. The source subgraph is transformed into a target subgraph, centered around a target root node.
4. The target subgraph is compared with the corresponding subgraph stored in the CMDB.
5. The CMDB is updated (by creating, updating, and soft-deleting CIs and relationships) so that the stored subgraph matches the target subgraph.
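The five stages can be sketched as a pipeline of stub functions. All function bodies and data shapes here are invented placeholders that only illustrate the flow, not the real synchronization engine.

```python
# Hypothetical pipeline mirroring the five stages described above.
ROOT_KINDS = ("Host", "NetworkDevice", "MFPart", "Printer")  # fixed list

def populate_source(root_name):
    # Stage 2: retrieve the root node and its components from the datastore.
    return {"root": root_name, "components": ["interface", "subnet"]}

def transform(source):
    # Stage 3: map the source subgraph onto a target (CDM-shaped) subgraph.
    return {"root": "BMC_ComputerSystem/" + source["root"],
            "components": list(source["components"])}

def synchronize(root_name, kind, cmdb):
    # Stage 1: only the fixed root kinds may start a synchronization.
    if kind not in ROOT_KINDS:
        raise ValueError(f"{kind} cannot be a root node")
    target = transform(populate_source(root_name))
    stored = cmdb.get(root_name)       # Stage 4: compare with the CMDB copy.
    if stored == target:
        return "no-op"
    cmdb[root_name] = target           # Stage 5: update the stored subgraph.
    return "update" if stored is not None else "create"

cmdb = {}
first = synchronize("web01", "Host", cmdb)   # "create"
second = synchronize("web01", "Host", cmdb)  # "no-op": already in sync
```

The second call returning a no-op is the essential property: once the stored subgraph matches the target subgraph, re-synchronizing the same device changes nothing.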
Restart the tideway service after making changes to the CDM
If any changes are made to the CDM (for example, adding attributes), you cannot synchronize to those attributes until the tideway service has been restarted. After the tideway service starts, the CDM is read and all customized classes and attributes are available for CMDB synchronization.
Initial configuration
The following steps are required to set up CMDB synchronization:
Preparing BMC Atrium CMDB for synchronization
Setting up the CMDB synchronization connection
Performing synchronization
Synchronization can be configured to run in a continuous mode, meaning that as soon as Discovery completes its scan of a device, the data corresponding to it is queued for synchronization with the CMDB. Alternatively, synchronization can be launched manually for one or more devices from within the browsing interface. See Configuring continuous synchronization.
Filtering
By default, all details about all devices are synchronized to the CMDB. In some situations, it may be useful to synchronize only a subset of the data. After each of the first three synchronization stages described above, the data can be filtered to affect the data that finally reaches the CMDB:
1. Filtering the root device node
2. Filtering the synchronized components
3. Filtering the CMDB CI classes
An additional step is required if your BMC Atrium CMDB instance is earlier than:
7.5 patch 004
7.6 patch 001
See Modifying the Reconciliation Rules for more details.
After you have performed these steps, you can start synchronizing BMC Atrium Discovery data to BMC Atrium CMDB.
Performance considerations
To obtain the maximum synchronization performance when using CMDB synchronization with BMC Atrium Discovery version 8.2.03 and later, you should consider tuning the database that BMC Atrium CMDB (or BMC Remedy AR System) uses. See the "BMC Remedy AR System Server 7.6 Performance Tuning for Business Service Management" White Paper for more information. The following sections in the White Paper are particularly relevant:
Tuning an Oracle server (page 37)
Best practices for tuning Oracle database servers (page 44)
Tuning a SQL Server database (page 47)
Best practices for tuning SQL Server database servers (page 49)
the BMC Atrium Discovery user interface. The CMDB Sync page is described here. The extensions are required only for versions of BMC Atrium CMDB prior to 7.6.03. The ADDM Integration extension adds an attribute (ADDMIntegrationId) to the BMC_BaseElement class that is used to store a key that ensures the correct CIs in the CMDB are updated when data in BMC Atrium Discovery is updated. The Mainframe extension adds Mainframe-specific classes and relationships. It is required only if you are discovering mainframe systems. The XML extension files should be extracted from the zipped archive and saved to your local file system.
Due to a defect in BMC Atrium CMDB, you must run the BMC Atrium Configuration Management Database Maintenance Tool as the same NT user that installed the CMDB. Otherwise, you will get a critical failure related to authentication and the rik.exe executable not being found.
1. Depending on your environment, perform the following steps to access the Extension Loader:
On Windows:
Atrium CMDB 7.5: From <BMC_install_directory>\BMC Software\AtriumCore\<server>\atriumcore, double-click AtriumCoreMaintenanceTool.cmd.
Atrium CMDB 7.6: From <BMC_install_directory>\BMC Software\AtriumCore\atriumcore, double-click AtriumCoreMaintenanceTool.cmd.
On UNIX: Navigate to your installation directory and run ./AtriumCoreMaintenanceTool.sh.
2. Click the Configuration tab.
3. Select Run Extensions.
4. Click Browse to locate and select the extension data file ExtensionLoader.xml that you extracted from the extension-423-ADDMIntegration.zip file.
5. Click Next.
6. Provide the following BMC Remedy AR System information, and click Next.
User Name: Specify an Administrator user name.
Password: Type the password for the Administrator user name.
Port Number: If you specified a port number when installing the AR System server, type that number in this field. If the server uses a portmapper to automatically select a port to use, leave this field at the default "0".
AR Server Host Name: Type the AR System server on which you want to install the extensions. It is recommended that you install the BMC Atrium Discovery extensions on the AR System server which has administration operations enabled. The configurations on the AR System Group Operation Ranking form determine whether administration operations are enabled on the servers. In some cases, these configurations might either not be correctly picked up by the server, or might not be available in the AR System configuration file (ar.cfg) of the server. In such cases, you must:
Verify the settings on the AR System Group Operation Ranking form to determine which servers are enabled for administrator operations.
Verify that the AR System configuration file (ar.cfg) on the corresponding servers has the correct settings for Disable-Admin-Ops and Server-Group-Member.
Restart the servers and verify that the one which supports administrator operations is accessible through BMC Remedy Developer Studio.
Verify that the host name specified during installation resolves to the correct server, and does not go through a load balancer or get routed to a server where administrator operations are disabled.
7. Click Run. An installation summary indicates whether the generation succeeded or failed, including a brief reason for the failure, such as "The structure of the XML is not proper. Please ensure tags are closed properly."
8. For a failure, click View Log to find details.
a. In the atriumcore_configuration_log.txt window, you can sort by Severity.
b. Look for SEVERE messages highlighted in red.
c. Select a message to display the details at the bottom of the window.
9. To return to the initial generation view, click Done.
For more information about BMC Atrium CMDB extensions, see the documentation for the corresponding version:
BMC Atrium CMDB version 8.0.00: Managing BMC Atrium CMDB extensions
BMC Atrium CMDB version 7.6.04: BMC Atrium CMDB 7.6.04 Administrator's Guide
BMC Atrium CMDB version 7.6.03: BMC Atrium CMDB 7.6.03 Administrator's Guide
BMC Atrium CMDB version 7.6.00: BMC Atrium CMDB 7.6.00 Administrator's Guide
BMC Atrium CMDB version 7.5.00: BMC Atrium CMDB 7.5.00 Administrator's Guide
Additional Information If you are having difficulty applying the extension you might like to try the resolution covered in this thread on the BMC Communities site.
Warning: The dataset you are creating is intended to receive data from only one BMC Atrium Discovery appliance. If your deployment architecture requires that several scanners or consolidators synchronize to the same CMDB, you must create a separate dataset for each, for example BMC.ADDM1 and BMC.ADDM2, with associated reconciliation jobs.
1. You can create a dataset from one of two places in the console. Either:
From the Identify activity, click Add Dataset Identification Group Association, and then click Create Dataset.
From the Mid Tier, open Atrium Core Console > Applications > Reconciliation console, and click the Create Dataset option.
2. Complete the following fields:
Name: The name for the dataset. Usually set to BMC.ADDM.
ID: The system identifier for the dataset. This must match the ID used in the configuration within BMC Atrium Discovery. The default is BMC.ADDM.
Accessibility: Set to "Writable".
DatasetType: Set to "Regular".
3. Click Save.
For more information, see the documentation for the corresponding version:
BMC Atrium CMDB version 8.0.00: Creating datasets in the Reconciliation console
BMC Atrium CMDB version 7.6.04: BMC Atrium CMDB 7.6.04 Normalization and Reconciliation Guide
BMC Atrium CMDB version 7.6.03: BMC Atrium CMDB 7.6.03 Normalization and Reconciliation Guide
BMC Atrium CMDB version 7.6.00: BMC Atrium CMDB 7.6.00 Normalization and Reconciliation Guide
BMC Atrium CMDB version 7.5.00: BMC Atrium CMDB 7.5.00 Patch 001 Normalization and Reconciliation Guide (The document is valid for BMC Atrium CMDB version 7.5.00.)
The best practice is to create a precedence group that specifies a precedence for the BMC.ADDM dataset. The value depends on your relative preference for data populated by BMC Atrium Discovery and other data providers to the CMDB. For more information about how precedence groups are configured for version 7.6.00 or later, see the documentation for the corresponding version: BMC Atrium CMDB version 8.0.00: Defining a Precedence Association Set for reconciliation Merge activities BMC Atrium CMDB version 7.6.04: BMC Atrium CMDB 7.6.04 Normalization and Reconciliation Guide BMC Atrium CMDB version 7.6.03: BMC Atrium CMDB 7.6.03 Normalization and Reconciliation Guide BMC Atrium CMDB version 7.6.00: BMC Atrium CMDB 7.6.00 Normalization and Reconciliation Guide
If you are using an earlier version of BMC Atrium CMDB then the standard reconciliation rules need to be modified before running the first reconciliation from BMC.ADDM to BMC.ASSET.
Class: BMC_Application
Attribute: Qualification*
Replace 'ApplicationType' != $\NULL$ AND 'ApplicationType' = $ApplicationType$ with 'Name' != $\NULL$ AND 'Name' = $Name$

Class: BMC_SoftwareServer
Attribute: Qualification*
Replace 'SoftwareServerType' != $\NULL$ AND 'SoftwareServerType' = $SoftwareServerType$ with 'Name' != $\NULL$ AND 'Name' = $Name$

Class: BMC_ConnectivitySegment
Attribute: Qualification*
Replace 'ConnectivityType' != $\NULL$ AND 'ConnectivityType' = $ConnectivityType$ with 'Name' != $\NULL$ AND 'Name' = $Name$
How to Modify the Reconciliation Rules in BMC Atrium CMDB versions 7.5 and 7.6
1. From the Mid Tier, open Atrium Core Console > Applications > Reconciliation Application, and click Standard Rules Editor.
2. Select the 'Application Type' attribute for the BMC_Application class. Drill down to the 'Application Type' attribute and change it to 'Name'.
3. Similarly, select the 'Connectivity Type' attribute for the BMC_ConnectivitySegment class. Drill down to the 'Connectivity Type' attribute and change it to 'Name'.
4. Select the 'SoftwareServerType' attribute for the BMC_SoftwareServer class.
5. Drill down to the 'SoftwareServerType' attribute and change it to 'Name'.
6. Click Save.
5. From the Infrastructure > Hosts page, select one or two hosts and choose Actions > Sync with CMDB to send them to the CMDB.
6. Start continuous synchronization.
7. Take the appliance out of maintenance mode and restart your discovery runs. The data is inserted into the new CMDB dataset as your Discovery jobs are executed. BMC Atrium Discovery detects that the data does not exist, and then inserts it.
Before you can synchronize BMC Atrium CMDB with BMC Atrium Discovery, you must prepare the CMDB.
2. Click the Configuration tab to configure a default export to your Atrium server.
3. Click Edit to change settings on the Configuration page.
4. Type the following information on the CMDB Sync page:

Atrium CMDB Server: The BMC Atrium CMDB host. This may be specified as a hostname or FQDN, or as an IPv4 or IPv6 address. If BMC Atrium CMDB is installed with an AR System server group, you must enter the following based on the high availability status of the server group: if high availability is configured, the host name or IP address of the load balancer; if high availability is not configured, the host name or IP address of the primary AR System server. To learn about AR System server group configuration, see the corresponding documentation for versions 8.1, 8.0, 7.6.04, and 7.5. To learn about AR System server high availability configuration and hardware load balancers, see the corresponding documentation for versions 8.1, 8.0, 7.6.04, and 7.5.
Specify TCP Port: The communication port. BMC Atrium CMDB typically uses a portmapper service to choose a suitable communication port automatically. If this is not appropriate in your environment, you can configure the CMDB to listen on a specific port, and then specify that port in this field.
Username: The username of a BMC Atrium CMDB user that is at least a member of a group having the CMDB Data Change All role. Note that a user account that is part of the "administrator" or "asset admin" group may not comply with this requirement. If in doubt, create a dedicated discovery user with the same profile as the standard CMDB Demo user. At least the CMDB Demo user permissions are required for multitenancy to work. Note that a user is not permitted to connect to BMC Atrium CMDB from a second IP address within 4 hours of their last activity at the first IP address. For a failover scenario, you could also create a second discovery user to connect from the failover appliance.
Set Password: The password of the CMDB user (or blank if the user has no password).
Dataset ID: The ID of the dataset used for Discovery data. The default is BMC.ADDM.
Data model
The data model for your CMDB. Different versions of the CMDB have different data models, so the data from BMC Atrium Discovery must be transformed differently according to the CMDB version. The preferred value for Data model is 7.6.03 and later (without impact relationships), unless you are using BMC Service Impact Management version 7.4 or earlier. If you are not using a product version that requires the older IaaR format of impact relationships, allow Atrium Normalization to create and manage the impact relationships using the newer IaaA format.
Do not use mixed modes with 7.6.03 and later (with impact relationships)
If you specify the 7.6.03 and later (with impact relationships) option, CMDB synchronization creates impact relationships as relationships (IaaR). Normalization in the CMDB using the default setting of auto impact creates impact relationships as attributes (IaaA). You must change your CMDB Normalization setting so that it does not use auto impact when the Data model is set to 7.6.03 and later (with impact relationships) in BMC Atrium Discovery. Do not use a mixed mode of IaaA and IaaR.
Choose the correct data model from the menu. If you choose an incorrect value, you might encounter errors during synchronization.

Sync threads: CMDB synchronization can simultaneously process data from multiple control threads (the default is one). Depending on the configuration of your CMDB and the characteristics of the network between BMC Atrium Discovery and the CMDB, the rate at which data is synchronized to the CMDB might be increased by choosing a non-default number of Sync threads. The number of Sync threads in BMC Atrium Discovery must be fewer than the number of Fast Threads defined in the CMDB, and fewer than the value of the Next-Id-Block-Size parameter in the CMDB. Increase the number of Sync threads only with the assistance of your AR and CMDB system architects. Note: each root CI (Host, NetworkDevice, Printer, SNMPManagedDevice, MFPart) is processed by a single thread. If you are processing several root CIs, multiple threads are used. However, if you are processing a single root CI, only one thread is used, even if additional threads are configured.

Multitenancy: Multitenancy support. Selecting this check box enables you to choose a company name from the Default Company menu, which assigns that company name to a discovery run during an initial scan, and assigns that company name to the Company attribute of the CIs created in BMC Atrium CMDB. At least the CMDB Demo user permissions are required for multitenancy to work.

5. Click Apply.
6. If necessary, click Edit to change configuration settings.
7. From the CMDB Sync > Configuration tab, click Test Connection to test the connection to the new CMDB dataset.
8. If you are setting up multitenancy, click Lookup Companies to populate the Companies list.
9. Click Manage Blackout Windows to configure times at which no synchronization should occur with the CMDB. This is described in Configuring CMDB Sync blackout windows.
Status
Comprehensive status information is also available on the CMDB Sync page, which displays a summary of the number of devices that are queued, which devices are currently processing, and those that have recently completed synchronization. If any errors occur during synchronization, error information is shown towards the bottom of the page. To display the status window, click the Status tab.
The following table describes the fields in the Most recent synchronized devices section.

Time: The time at which the synchronization started.
Label: The device name.
Kind: The CMDB device kind.
Attributes written: The total number of attributes which have been written or updated on all nodes which belong to that particular device. Where the number of attributes written or updated is fewer than the possible total, not all attributes have been updated. A scenario in which this may occur is where a node is shared between two root nodes, such as an IPConnectivitySubnet node. The first time it is created, its attributes are set. When another root node that is also linked is created, the shared node's attributes are counted as candidates for writing or updating, but are already set, so they are not updated.
CIs: The number of configuration items considered, inserted, updated, or deleted for this device. Where the number of CIs considered is greater than the number inserted, the number of attributes written (see Attributes written above) is fewer than the possible total.
Relationships: The number of relationships considered, inserted, updated, or deleted for this device. Where the number of relationships considered is greater than the number inserted, the number of attributes written (see Attributes written above) is fewer than the possible total.
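The shared-node accounting described for the Attributes written field can be illustrated with a small sketch. This is illustrative Python only, not BMC code; the node structure, attribute names, and sample devices are invented.

```python
# Illustration: why "Attributes written" can be fewer than the candidate total
# when a node (such as an IPConnectivitySubnet) is shared between root nodes.

def sync_device(root, written_attrs):
    """Write each node's attributes unless already set; return the count written."""
    written = 0
    for node in root["nodes"]:
        for attr, value in node["attrs"].items():
            key = (id(node), attr)
            if key not in written_attrs:   # a shared node may already be written
                written_attrs[key] = value
                written += 1
    return written

# One subnet node shared between two hosts (hypothetical data).
shared_subnet = {"attrs": {"Name": "10.0.0.0/24", "AddressType": "IPv4"}}
host_a = {"nodes": [{"attrs": {"HostName": "hosta"}}, shared_subnet]}
host_b = {"nodes": [{"attrs": {"HostName": "hostb"}}, shared_subnet]}

written = {}
print(sync_device(host_a, written))  # 3: HostName plus both subnet attributes
print(sync_device(host_b, written))  # 1: subnet attributes were already set
```

The second device counts the shared subnet's attributes as candidates, but they are already set, so only its own attribute is written.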
The status window automatically refreshes to display up-to-date information. Click Show all in the Devices that failed last CMDB sync field to show a list view of devices that failed to synchronize.
Multitenancy
BMC Atrium Discovery can be used in conjunction with BMC Atrium CMDB's multitenancy capability. BMC Atrium Discovery itself does not support multitenancy, or provide any segregation of data from multiple tenants. Multitenancy refers to a situation where a single application on a server supports multiple client organizations. BMC Atrium Discovery provides the ability to scan the estates of multiple organizations, and to assign a company name attribute to that scan. Any CMDB CIs created as a result of that scan can be assigned the appropriate company name. The company name is assigned to the cdm_company attribute on the Discovery Run node. This is mapped onto CMDB CIs created as a result of that scan and assigned to the Company attribute.
The CMDB Sync page enables you to select the root device nodes to synchronize to the CMDB. All device nodes are synchronized unless a filter is applied. You apply a filter using the Device Filter tab of the CMDB Sync page. After configuring a connection to the CMDB, the default view of the CMDB Sync page displays the Device Filter tab. This tab uses the same three pane selection tool used in the Query Builder. It enables you to construct a filter which pushes just those devices with matching attribute values to the CMDB. The filter can be on simple attribute values, or attributes of a destination node reached by following a relationship, or logical combinations of the same. The filters are shown in map form in the filter builder.
In this example, the filter is still using the Discovered OS Class. A Software Instance filter has been added by scrolling down the list to Software Instance: Software Instances running on this host; when this is clicked, the second pane is populated with the attributes and relationships of Software Instances. The Name attribute has been added to the query viewer by scrolling down the second pane list and clicking Name.
The filter means that only Windows Hosts running IIS are synchronized. All other Hosts are excluded from synchronization.
2. We are initially looking for Windows hosts running IIS and need to create a nesting level where this first test can be performed.
3. Click the Add Condition icon in the Discovered OS Class row. A new container is added, which will be used to supply the All, Any, or None condition to be evaluated.
4. Before adding a Software Instance filter, ensure that it is added in the correct place. Set the focus to the container that was just added by clicking in that container's area. When selected, it is highlighted in yellow.
5. We now need to add the Software Instance filter. Scroll down the first pane to Software Instance: Software Instances running on this host. Click this to populate the second pane with the attributes and relationships of Software Instances. From the second pane, select the Name attribute.
6. Enter "IIS" in the text entry field and leave the condition drop-down on contains word.
7. We now have a filter for Windows hosts which are running IIS. Click Save to save the changes.
8. We now need to find UNIX hosts running Apache. With the root of the Filter Viewer highlighted, select Discovered OS Class from the list of host attributes. The Discovered OS Class attribute is added to the Filter Viewer outside the section of the query completed above. Enter "UNIX" in the text entry field and leave the condition drop-down on contains word.
9. Click the Add Condition icon in the Discovered OS Class row. A new container is added, which will be used to supply the All, Any, or None condition to be evaluated. Click this container to select it.
10. We now need to add the Software Instance filter. Scroll down the first pane to Software Instance: Software Instances running on this host. Click this to populate the second pane with the attributes and relationships of Software Instances. From the second pane, select the Name attribute. Enter "Apache" in the text entry field and leave the condition drop-down on contains word.
11. The last step in building the query is to change the root condition drop-down to Any, instead of All.
Example
The following example shows how to use the Filter Builder to create a filter that will synchronize only those software instances representing web servers on those hosts previously identified as web servers. 1. From the left pane of the filter builder, select Software Instance.
The right pane is populated with Software Instance attributes.
2. Click the Type attribute.
3. Enter "IIS" in the text entry field and leave the condition drop-down on "contains word".
4. Click the Type attribute again.
5. Enter "Apache" in the text entry field and leave the condition drop-down on "contains word".
6. Change the "Software Instance where..." drop-down to "any".
7. Click Save to save the changes.
With this filter in place, any SoftwareInstances that do not match the conditions are removed from the source subgraph. When the source subgraph is transformed into the target subgraph, it is as if the chosen SoftwareInstance nodes were the only ones present in the Discovery datastore.
Continuous CMDB synchronization starts, and the red STOPPED heading banner changes to a green RUNNING banner.
In this mode, as the scan of each device (Host, NetworkDevice or MFPart) is completed, it is automatically queued for synchronization. If any devices are deleted from the Discovery data store (due to aging), they are also queued for synchronization, so the deletion is synchronized with the CMDB.
Batch synchronization
Nodes can be synchronized into BMC Atrium CMDB as a batch. From a list view in the user interface, select one or more nodes, and then choose Sync with CMDB from the Actions menu. The menu item appears for any root node kind (by default Host, NetworkDevice, and MFPart), and all the selected nodes are added to the synchronization queue. You can follow the link in the notification to the status page to see the progress of the synchronization.
Note that a node is only added to the synchronization queue if it is not already queued.
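The only-queued-once behaviour can be sketched as follows. This is illustrative Python, not BMC code; the device names are invented.

```python
# Illustration: a device is added to the synchronization queue only if it is
# not already queued.

from collections import deque

queue = deque()
queued = set()

def enqueue(device):
    """Add device to the sync queue unless it is already queued."""
    if device not in queued:
        queued.add(device)
        queue.append(device)

for d in ["hostA", "hostB", "hostA"]:   # hostA requested twice
    enqueue(d)

print(list(queue))  # ['hostA', 'hostB']
```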
Individual synchronization
From a node view page for a root node kind (Host, NetworkDevice and MFPart), you can choose Sync with CMDB from the Actions menu to synchronize that particular device.
Note: If you have configured blackout windows using the BMC Atrium Discovery 8.2 configuration file, that file is read during the upgrade to BMC Atrium Discovery 8.3, and new blackout windows are created and stored as nodes in the datastore. The configuration file is renamed from cmdb_sync_blackout.cfg to cmdb_sync_blackout.converted. CMDB Sync does not start if the configuration file cmdb_sync_blackout.cfg exists and blackout windows are already stored in the datastore.
5. Click OK.
Restart the tideway service after making changes to the CDM
If any changes are made to the CDM in the CMDB, for example, adding attributes, you cannot sync to those attributes until the tideway service has been restarted. On restart of the tideway service, the CDM is read and all customized classes and attributes become available to CMDB sync.
ADDMIntegrationId
So that Discovery can properly maintain the CIs it creates in its dataset, it stores a unique "key" on every CI, in the attribute named ADDMIntegrationId. These keys are sometimes directly populated from the key on a corresponding node in the Discovery data store, but in other situations they are constructed using rules appropriate for the mapping structure. See the syncmapping definitions for details of how keys are populated.
Company attribute
If the CMDB is configured for multitenancy, all CIs can have a Company attribute set appropriately. See Multitenancy for details.
Impact relationships
One of the main features of BMC Atrium CMDB is to indicate the way that one CI impacts another. Versions of BMC Atrium CMDB prior to 7.6.03 represent impact using a specific relationship, BMC_Impact. With those CMDB versions, Discovery is responsible for creating and maintaining the BMC_Impact relationships. In later CMDB versions, the BMC_Impact relationship has been deprecated, and impact is now indicated with the HasImpact attribute on all other relationship classes. Also, Impact Normalization rules in the CMDB are responsible for populating the impact attributes, so Discovery is not responsible for setting impact information.

BMC Service Impact Manager (SIM) version 7.4 is incompatible with the new representation of impact as an attribute, so if SIM is used with CMDB version 7.6.03 or later, Discovery must be configured with the "CMDB 7.6.03 and later with Impact relationships" data model. In that case, Discovery will ask the CMDB to create BMC_Impact relationships, but the deprecation mechanism will translate them into BMC_BaseRelationship instances with the Name "ImpactOnly". If you view such a relationship in Atrium Explorer, it will display as a BMC_BaseRelationship with HasImpact set to "Yes", but the deprecation mechanism means that it is still possible to find the relationships as BMC_Impact using the CMDB API or using the BMC.CORE:BMC_Impact form.

Note that this complicated situation only applies to BMC Service Impact Manager version 7.4 and earlier, not to the SIM functionality built into later versions of BMC ProactiveNet Performance Manager (BPPM).
CMDB version differences The default mapping works with CMDB versions 7.5, 7.6, and 7.6.03 and later. The data model in each is slightly different. BMC_Impact relationships are not normally created with CMDB 7.6.03 and later, since the CMDB automatically maintains them itself. See the information about impact relationships for more details. A number of attributes are not present in CMDB 7.5, as noted below.
Differences from previous versions of BMC Atrium Discovery Except where noted below, the model populated by BMC Atrium Discovery version 9.0 is the same as that populated by earlier versions.
BMC_ComputerSystem
The root Host node is mapped to a root BMC_ComputerSystem CI with the following attributes:

Name: The fully qualified name of the host if available, otherwise the unqualified name; if no names are available, the IP address
NameFormat: "DNS", "HostName" or "IP"
ShortDescription: Host name attribute
Description: The host name and the fully qualified domain name, separated by a colon
CapabilityList: A single value corresponding to Server, Desktop, or Laptop
Domain: Host dns_domain, the DNS domain of the host
HostName: Host hostname, the hostname reported by the host
isVirtual: Yes if the host is known to be virtual (or partitioned hardware); null if not
LastScanDate: Host last_update_success, the date and time the host was last scanned
ManufacturerName: Host vendor, the manufacturer of the host
Model: Host model
PrimaryCapability: Server, Desktop or Laptop
SerialNumber: Host serial
SystemType: Enumeration value corresponding to the system type
TokenId: See TokenId rules below
TotalPhysicalMemory: Host ram, the total RAM of the host
VirtualSystemType: Enumeration value corresponding to the type of virtual machine
Workgroup: Host workgroup, the Windows workgroup
Category: "Hardware"
Type: "Processing unit"
Item: "Server", "Desktop" or "Laptop"
TokenId rules
TokenId is an attribute that in some circumstances aids reconciliation of CIs populated by multiple data sources. The following describes how Discovery sets TokenId for BMC_ComputerSystem. For most hosts, TokenId is of the form hostname:DNS domain name. For some virtual hosts, TokenId contains a UUID, where each letter below represents a hexadecimal digit:
1. For VMware, TokenId is of the form "VI-UUID:ABCD-EF-GH-IJ-KLMNOP".
2. For Hyper-V, TokenId is of the form "HYPERV-ID:ABCD-EF-GH-IJ-KLMNOP". With Hyper-V, the UUID is only available on the physical machine, so TokenId is only set for virtual machines that have been successfully linked to their hosting physical machines.
3. For Xen (including Oracle VM), TokenId is of the form "XEN-ID:ABCD-EF-GH-IJ-KLMNOP".
4. For KVM (including Red Hat Enterprise Virtualization), TokenId is of the form "KVM-ID:ABCD-EF-GH-IJ-KLMNOP".
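The TokenId forms above can be sketched as follows. This is illustrative Python only; the function name, sample hosts, and the placeholder UUID are invented, and the prefix table simply restates the rules in this section.

```python
# Illustration of the BMC_ComputerSystem TokenId forms described above.

PREFIXES = {
    "VMware": "VI-UUID",
    "Hyper-V": "HYPERV-ID",
    "Xen": "XEN-ID",       # includes Oracle VM
    "KVM": "KVM-ID",       # includes Red Hat Enterprise Virtualization
}

def computer_system_token_id(hostname, dns_domain, virt_type=None, uuid=None):
    """Return a TokenId: UUID form for supported virtual hosts, else hostname:domain."""
    if virt_type and uuid:
        return "%s:%s" % (PREFIXES[virt_type], uuid)
    return "%s:%s" % (hostname, dns_domain)

print(computer_system_token_id("web01", "example.com"))
# web01:example.com
print(computer_system_token_id("vm01", "example.com", "VMware",
                               "12345678-ABCD-ABCD-ABCD-1234567890AB"))
# VI-UUID:12345678-ABCD-ABCD-ABCD-1234567890AB
```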
BMC_OperatingSystem
The root Host is also mapped to a single BMC_OperatingSystem CI, with the following attributes:

Name: Host os_type and Host os_version
NameFormat: "OSName"
ShortDescription: "OS type version on hostname"
Description: Host os
ManufacturerName: Host os_vendor
MarketVersion: Host os_version
Model: Host os_type
OSType: Enumeration value for OS type
OSProductSuite: Enumeration value for OS product suite
ServicePack: Host service_pack
VersionNumber: Host os_version
Category: "Software"
Type: "Operating System Software"
Item: "Operating System"
Relationship: BMC_Impact; Name: IMPACT; Source: BMC_OperatingSystem; Destination: BMC_ComputerSystem
BMC_Processor
The root Host is mapped to a number of BMC_Processor CIs. For physical machines, the number of CIs corresponds to the number of physical CPU packages present in the machine. For virtual machines, the number of CIs corresponds to the number of logical CPUs the operating system is scheduling across. In some circumstances, it is not possible to discover the number of physical CPU packages in a physical machine; in such cases, no BMC_Processor CIs are created for the machine. All the BMC_Processor CIs for a host are normally identical to each other. From BMC Atrium Discovery 8.3 onwards, they can differ where a physical machine has more than one type of CPU.

Name: "CPUindex" (e.g. "CPU0", "CPU1", etc.)
NameFormat: "ProcessorName"
ShortDescription: Host processor_type
Description: Description of the processor and its index
isVirtual: True if the Host is virtual; not set if physical. (Not present in CMDB 7.5.)
ManufacturerName: Manufacturer of the processor. (New in ADDM 8.3.)
MaxClockSpeed: Host processor_speed
Model: Host processor_type
NumberOfCores: Host cores_per_processor (only set for physical machines)
NumberOfLogicalProcessors: Host num_logical_processors / num_processors (only set for physical machines)
OtherProcessorFamilyDescription: Host processor_type (only set if ProcessorFamily is "Other")
ProcessorArchitecture: Enumeration representing the architecture
ProcessorFamily: Enumeration representing the processor family
ProcessorStatus: 1 (CPU enabled)
ProcessorType: 2 (Central processor)
UpgradeMethod: 1 (Unknown)
Category: "Hardware"
Type: "Component"
Item: "CPU"
Difference from BMC Atrium Discovery 8.1: BMC Atrium Discovery version 8.1 always created a single BMC_Processor CI, regardless of the circumstances.
Processor relationships
Relationship: BMC_HostedSystemComponents; Name: SYSTEMHARDWARE; Source: BMC_ComputerSystem; Destination: BMC_Processor
Each NetworkInterface is mapped to a BMC_LANEndpoint, and each of its associated IP addresses is mapped to a BMC_IPEndpoint CI. Where Discovery has been able to connect a NetworkInterface to a NetworkDevice (via associated port nodes), a BMC_Dependency relationship is created between the two BMC_ComputerSystem CIs representing the host and the network device.
BMC_NetworkPort
Name: NetworkInterface interface_name
ShortDescription: NetworkInterface interface_name
Description: NetworkInterface name
AutoSense: Yes (0) or No (1), or null if not known
FullDuplex: Yes (0) or No (1), or null if not known
LinkTechnology: Ethernet (2)
ManufacturerName: NetworkInterface manufacturer
PermanentAddress: NetworkInterface mac_addr
PortType: Ethernet (2)
Speed: NetworkInterface raw_speed
Category: "Hardware"
Type: "Card"
Item: "Network interface card"
Differences from earlier BMC Atrium Discovery versions BMC_NetworkPort was not created in versions prior to 8.3.
BMC_LANEndpoint
MAC addresses in the Discovery data model are stored in the conventional form, with colons separating each pair of hexadecimal digits; in the CDM, MAC addresses are stored as just the hexadecimal digits, with no separating colons.

Name: NetworkInterface mac_addr
NameFormat: "MAC"
ShortDescription: NetworkInterface mac_addr
Description: "MAC address on hostname"
Address: MAC address (no separating colons)
MACAddress: MAC address (no separating colons)
ProtocolType: Ethernet (14)
Category: "Network"
Type: "Address"
Item: "MAC Address"
Differences from earlier BMC Atrium Discovery versions Prior to version 8.3, Category was set to "Miscellaneous" instead of "Network" and Address was set to the IP address, not the MAC address.
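The difference between the two MAC address forms amounts to stripping the separating colons, sketched below. This is illustrative Python, not BMC code; the function name and sample address are invented.

```python
# Illustration: Discovery stores MAC addresses with separating colons,
# the CDM stores just the hexadecimal digits.

def to_cdm_mac(discovery_mac):
    """Strip the separating colons from a Discovery-format MAC address."""
    return discovery_mac.replace(":", "")

print(to_cdm_mac("00:1a:2b:3c:4d:5e"))  # 001a2b3c4d5e
```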
BMC_IPEndpoint
Name: IPAddress ip_addr
NameFormat: "IP"
ShortDescription: IPAddress ip_addr
Description: IPAddress name
Address: IPAddress ip_addr
AddressType: IPv4 (1) or IPv6 (2)
DNSHostName: IPAddress fqdns (first item in list)
ProtocolType: IPv4 (2) or IPv6 (3)
SubnetMask: IPAddress netmask
ManagementAddress: Yes (1) if the IP address was used to scan the host; No (0) if not. (Not in CMDB 7.5.)
Category: "Network"
Type: "Address"
Item: "IP Address"
Differences from earlier BMC Atrium Discovery versions Prior to version 8.3, Category was set to "Miscellaneous" instead of "Network".
BMC_IPConnectivitySubnet
Discovery Subnet nodes are mapped to BMC_IPConnectivitySubnet:

Name: Subnet ip_address_range
ShortDescription: Subnet ip_address_range
Description: Subnet ip_address_range
AddressType: IPv4 (1) or IPv6 (2)
Subnet relationships
Relationship: BMC_InIPSubnet; Name: INIPSUBNET; Source: BMC_IPConnectivitySubnet; Destination: BMC_IPEndpoint
BMC_Cluster
Host Cluster nodes are mapped to BMC_Cluster:

Name: Cluster name
NameFormat: "ClusterName"
Description: Cluster name
ShortDescription: Cluster name
Category: "Unknown"
Type: "Unknown"
Item: "BMC Discovered"
TokenId: Attribute that aids reconciliation of CIs populated by multiple data sources. TokenId is of the form "ADDM:hashedkey", where hashedkey is a hash of the cluster key, from TKU 2013-Jul-1 onwards
Cluster relationships
Relationship: BMC_Component; Name: CLUSTEREDSYSTEM; Source: BMC_Cluster; Destination: BMC_ComputerSystem
Relationship: BMC_Impact; Name: IMPACT; Source: BMC_ComputerSystem; Destination: BMC_Cluster
The following attributes are mapped to BMC_VirtualSystemEnabler:

EnablerType: Enumeration representing the type of virtualization technology
ManufacturerName: Publisher from the Pattern maintaining the SoftwareInstance
MarketVersion: SoftwareInstance product_version
Model: SoftwareInstance type
PatchNumber: SoftwareInstance patch
ServicePack: SoftwareInstance service_pack
VersionNumber: SoftwareInstance version
Category: "Software"
Type: "Operating System Software"
Item: "Virtualization OS"
Differences from earlier BMC Atrium Discovery versions Prior to version 8.3, Name was set to the BMC_ComputerSystem Name.
Virtualization relationships
Relationship: BMC_HostedSystemComponents; Name: HOSTEDSYSTEMCOMPONENTS; Source: BMC_ComputerSystem (physical); Destination: BMC_VirtualSystemEnabler
Relationship: BMC_Dependency; Name: VIRTUALSYSTEMOS; Source: BMC_VirtualSystemEnabler; Destination: BMC_ComputerSystem (virtual)
Relationship: BMC_Dependency; Name: HOSTEDVIRTUALSYSTEM; Source: BMC_ComputerSystem (physical); Destination: BMC_ComputerSystem (virtual)
Relationship: BMC_Impact; Name: IMPACT; Source: BMC_ComputerSystem (physical); Destination: BMC_VirtualSystemEnabler
Relationship: BMC_Impact; Name: IMPACT; Source: BMC_VirtualSystemEnabler; Destination: BMC_ComputerSystem (virtual)
Relationship: BMC_Impact; Name: IMPACT; Source: BMC_ComputerSystem (physical); Destination: BMC_ComputerSystem (virtual)
Host containment
HostContainer nodes are mapped to a BMC_ComputerSystem representing the partitioned container.

Name: HostContainer name
ShortDescription: HostContainer name
Description: HostContainer name
CapabilityList: "Other"
LastScanDate: Host last_update_success
ManufacturerName: HostContainer vendor
Model: HostContainer model
OtherCapabilityDescription: HostContainer type
PrimaryCapability: Other (2)
SerialNumber: HostContainer serial
Category: "Hardware"
Type: "Hardware"
Item: HostContainer type
Item values The Item attribute is populated from the Pattern that is maintaining the SoftwareInstance. To obtain a list of all the possible Item values, perform the following query in the Discovery Generic Query page:
Differences from earlier BMC Atrium Discovery versions Prior to version 8.3, Name was set to "hostname:type:name". Version 8.2 mapped all SoftwareInstance nodes to BMC_SoftwareServer. Version 8.1 mapped those SoftwareInstances containing other SoftwareInstances to BMC_ApplicationInfrastructure. Version 8.1 always hard-coded Item to "BMC Discovered".
As shown in the table, BMC_SoftwareServer and BMC_ApplicationSystem CIs can be related to each other with a BMC_Dependency relationship. This is mapped from both Dependency and Communication relationships between the SoftwareInstance nodes in the Discovery model. BMC_SoftwareServer and BMC_ApplicationSystem CIs also have relationships to BMC_Product, shown in the table below.
BMC_Product
BMC_Product represents the installed aspects of a product. BMC Atrium Discovery uses SoftwareInstance nodes and associated metadata to provide an accurate picture of the installed products, regardless of how the products are installed. The mapping groups the SoftwareInstance nodes according to the product information on the maintaining patterns, and maintains one BMC_Product CI for each unique product version on the Host. SoftwareInstance nodes with different types can belong to a single product. So, for example, a server running four Oracle database instances and one Oracle TNS Listener would have five SoftwareInstance nodes; those five SoftwareInstances would map to a single "Oracle Database" BMC_Product CI.

Name: "product name:product version" up to and including TKU 2012-Nov-1; "publisher:product name:product version" from TKU 2012-Dec-1 onwards
NameFormat: "ProductName:Version" up to and including TKU 2012-Nov-1; "ManufacturerName:ProductName:Version" from TKU 2012-Dec-1 onwards
ShortDescription: "product name product version" up to and including TKU 2012-Nov-1; Publisher from Pattern or SoftwareInstance, "product name product version" from TKU 2012-Dec-1 onwards
Description: "product name product version on hostname" up to and including TKU 2012-Nov-1; Publisher from Pattern or SoftwareInstance, "product name product version on hostname" from TKU 2012-Dec-1 onwards
BuildNumber: SoftwareInstance build
ManufacturerName: Publisher from Pattern or SoftwareInstance
MarketVersion: SoftwareInstance product_version
Model: Product name from Pattern or SoftwareInstance up to and including TKU 2012-Nov-1; Publisher and Product name from Pattern or SoftwareInstance from TKU 2012-Dec-1 onwards
PatchNumber: SoftwareInstance patch
ServicePack: SoftwareInstance service_pack
VersionNumber: SoftwareInstance version (in CMDB 7.6.03 and later) or product_version
Category: "Software"
Type: "Application" up to and including TKU 2012-Nov-1; "Software Application/System" from TKU 2012-Dec-1 onwards
Item: Product category from the maintaining Pattern
Item values The Item attribute is populated from the Pattern that is maintaining the SoftwareInstance. To obtain a list of all the possible Item values, perform the following query in the Discovery Generic Query page:
Difference from BMC Atrium Discovery 8.1 BMC Atrium Discovery version 8.1 always hard-coded Item to "BMC Discovered".
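The grouping rule described above (one BMC_Product CI per unique publisher, product name, and product version on a host) can be sketched as follows. This is illustrative Python, not BMC code; the sample SoftwareInstance data is invented.

```python
# Illustration: four Oracle database instances and one TNS Listener on a host
# group into a single BMC_Product CI.

from collections import defaultdict

software_instances = (
    [{"type": "Oracle Database Server", "publisher": "Oracle",
      "product": "Oracle Database", "product_version": "11.2"}] * 4
    + [{"type": "Oracle TNS Listener", "publisher": "Oracle",
        "product": "Oracle Database", "product_version": "11.2"}]
)

products = defaultdict(list)
for si in software_instances:
    key = (si["publisher"], si["product"], si["product_version"])
    products[key].append(si)

# From TKU 2012-Dec-1 onwards, Name is "publisher:product name:product version".
for key, members in products.items():
    print("%s maps %d SoftwareInstances" % (":".join(key), len(members)))
# Oracle:Oracle Database:11.2 maps 5 SoftwareInstances
```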
Product relationships
Relationship: BMC_HostedSystemComponents; Name: INSTALLEDSOFTWARE; Source: BMC_ComputerSystem; Destination: BMC_Product
Relationship: BMC_Dependency; Name: APPLICATIONSYSTEMPRODUCT; Source: BMC_Product; Destination: BMC_SoftwareServer
Relationship: BMC_Dependency; Name: APPLICATIONSYSTEMPRODUCT; Source: BMC_Product; Destination: BMC_ApplicationSystem
BMC_ApplicationService
SoftwareComponent is mapped to BMC_ApplicationService.

Name: SoftwareComponent name
ShortDescription: SoftwareComponent name
Description: SoftwareComponent name
Model: SoftwareComponent type
VersionNumber: SoftwareComponent version
ApplicationServiceType: Enumeration value representing the service type
Category: "Software"
Type: "Application Service"
Item: "BMC Discovered"
Relationship: BMC_ApplicationSystemServices; Name: APPLICATIONSYSTEMSERVICES; Source: BMC_SoftwareServer; Destination: BMC_ApplicationService
BMC_DataBase
DatabaseDetail nodes representing logical databases are mapped to BMC_DataBase:

Name: DatabaseDetail name
ShortDescription: DatabaseDetail instance
Description: DatabaseDetail name
ManufacturerName: SoftwareInstance vendor
MarketVersion: SoftwareInstance product_version
Model: DatabaseDetail type
PatchNumber: SoftwareInstance patch
ServicePack: SoftwareInstance service_pack
VersionNumber: SoftwareInstance version
Category: "Miscellaneous"
Type: "Instance"
Item: "Database"
TokenId: Attribute that aids reconciliation of CIs populated by multiple data sources. TokenId is of the form "computersystem.Name:detail.type:softwareinstance.instance:detail.instance" from TKU 2013-Jul-1 onwards
BMC_DataBase relationships
Relationship: BMC_Dependency; Name: MANAGEDDATABASE; Source: BMC_SoftwareServer; Destination: BMC_DataBase
Relationship: BMC_Impact; Name: IMPACT; Source: BMC_SoftwareServer; Destination: BMC_DataBase
Differences from earlier BMC Atrium Discovery versions

BMC_DataBase was not created in version 8.2.
BMC_Application
BusinessApplicationInstance is mapped to BMC_Application. Contained SoftwareInstance and BusinessApplicationInstance nodes are mapped to corresponding BMC_Component relationships.

Attribute         Details
Name              BusinessApplicationInstance name
ShortDescription  BusinessApplicationInstance name
Description       BusinessApplicationInstance name
MarketVersion     BusinessApplicationInstance product_version
Model             BusinessApplicationInstance type
VersionNumber     BusinessApplicationInstance version
Application relationships
Relationship    Name                        Source                        Destination
BMC_Dependency  APPLICATIONSYSTEMCOMPUTER   BMC_ComputerSystem            BMC_Application
BMC_Component   APPLICATIONSYSTEMHIERARCHY  BMC_Application               BMC_SoftwareServer
BMC_Component   APPLICATIONSYSTEMHIERARCHY  BMC_Application               BMC_ApplicationSystem
BMC_Component   APPLICATIONSYSTEMHIERARCHY  BMC_Application (containing)  BMC_Application (contained)
BMC_Impact      IMPACT                      BMC_ComputerSystem            BMC_Application
BMC_Impact      IMPACT                      BMC_SoftwareServer            BMC_Application
BMC_Impact      IMPACT                      BMC_ApplicationSystem         BMC_Application
BMC_Impact      IMPACT                      BMC_Application (contained)   BMC_Application (containing)
The following TPL snippet shows the search from the host to associated subnet or subnets.
subnet_list := search(in host
        traverse DeviceWithAddress:DeviceAddress:IPv4Address:IPAddress
        traverse DeviceOnSubnet:DeviceSubnet:Subnet:Subnet);
location_id := '';
list_size := size(subnet_list);
log.debug("Subnet_2_Locations: subnet_list size is %list_size%");
for subnet_node in subnet_list do
    log.debug("Subnet_2_Locations: Current subnet_node is %subnet_node.ip_address_range%");
    // if subnet_node is "None" get next node otherwise we can use this one
    if '%subnet_node.ip_address_range%' <> "None" then
        location_id := '%subnet_node.ip_address_range%';
        break;
    end if;
end for;
This TPL snippet uses the table to look up the location from the subnet.
// Look up the location name from the table
location_name := SubnetLocations[location_id];
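The SubnetLocations table is defined elsewhere in the pattern. A minimal sketch of such a definition, with hypothetical subnet ranges and location names, might look like:

```
// Hypothetical lookup table mapping subnet ranges to location names.
// The ranges and names shown here are illustrative only.
table SubnetLocations 1.0
    "192.168.1.0/24" -> "London Data Centre";
    "192.168.2.0/24" -> "New York Data Centre";
    default -> "Unknown";
end table;
```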
Now you can create or update the relationship between the host and location node. The following code snippet uses the model.uniquerel function to create or update the relationship.
// Relate host to location. Using "uniquerel" rather than "rel" means that
// any existing Location relationships between this Host (the first
// parameter) and any Location nodes other than the one given (the second
// parameter) are removed. If a host has changed location, this keeps the
// model up-to-date.
log.info("Subnet_2_Locations: model.uniquerel.Location %host% %location%");
model.uniquerel.Location(
    ElementInLocation := host,
    Location := location
);
The host is now related to the location node which represents its physical location. The second pattern is the template_cmdb_location template which is available in the Pattern Templates section of the Pattern Management page. See Template patterns for more information on the template patterns supplied with BMC Atrium Discovery. The syncmapping is described in the TPL Guide. The template_cmdb_location syncmapping extends the root mapping for Host nodes. The mapping section specifies the extension to the root mapping using the from keyword. The traversal finds related location nodes, and the mapping from phys_loc to BMC_PhysicalLocation is defined. The name defined by the traversal may only be used in a for each expression; it may not be used in any other context.
mapping from Host_ComputerSystem.host as host
    traverse ElementInLocation:Location:Location:Location as location
        phys_loc -> BMC_PhysicalLocation;
    end traverse;
end mapping;
The body of the template_cmdb_location pattern loops through each location reached by the traversal, and transforms the subgraph of data in the Discovery model into a subgraph of CIs in the target CMDB model. This is copied across and then the relationship is created between the BMC_ComputerSystem and the new BMC_PhysicalLocation in the CMDB.
body
    computersystem := Host_ComputerSystem.computersystem;

    for each location do
        // ADDM Location nodes created by patterns have a key attribute,
        // but those created in the UI do not. We use the name attribute
        // if the key is not present.
        if location.key then
            key := location.key;
        else
            key := location.name;
        end if;

        // Specify the BMC_PhysicalLocation CI. You may wish to
        // add more attributes to the ADDM Location node and copy
        // them across here.
        phys_loc := sync.shared.BMC_PhysicalLocation(
            key := key,
            Name := location.name,
            ShortDescription := location.name,
            Description := location.description,
            Address := location.address,
            Category := "Reference Configuration",
            Type := "Facilities",
            Item := "Space Management"
        );

        // Relate the BMC_ComputerSystem to the new BMC_PhysicalLocation
        sync.rel.BMC_ElementLocation(
            Source := computersystem,
            Destination := phys_loc,
            Name := "ELEMENTLOCATION"
        );
    end for;
end body;
CMDB version differences The default mapping works with CMDB versions 7.5, 7.6, and 7.6.03 and later. The data model in each is slightly different. BMC_Impact relationships are not normally created with CMDB 7.6.03 and later, since the CMDB automatically maintains them itself. See the information about impact relationships for more details. A number of attributes are not present in CMDB 7.5, as noted below.
Differences from previous versions of BMC Atrium Discovery Except where noted below, the model populated by BMC Atrium Discovery version 9.0 is the same as that populated by earlier versions.
BMC_ComputerSystem
The MFPart node is used to model a logical server within the mainframe. The root MFPart node is mapped to a root BMC_ComputerSystem CI with the following attributes:

Attribute          Details
TokenId            MFPart key attribute
Name               MFPart key attribute
NameFormat         "Mainframe"
ShortDescription   MFPart name attribute
Description        MFPart description attribute
CapabilityList     Server
ComponentAliases   Component alias from mainframe discovery
isVirtual          Yes if either MFPart virtual or partition is set; null otherwise
LastScanDate       MFPart last_update_success (the date and time the host was last scanned)
ManufacturerName   MFPart vendor (the manufacturer of the mainframe)
MFIntegrationID    MFPart mf_integration_id (MainView identifier used for launch in context)
Model              MFPart model
PrimaryCapability  Server
VirtualSystemType  LPAR or VM/VM Guest
Category           "Hardware"
Type               "Processing unit"
Item               "Server"
BMC_OperatingSystem
The root MFPart is also mapped to a single BMC_OperatingSystem CI, with the following attributes:

Attribute         Details
Name              MFPart __mfpart_id
NameFormat        "Mainframe"
ShortDescription  OS type followed by OS serial number
Description       "OS-__mfpart_id"
ComponentAliases  Component alias from mainframe discovery
ManufacturerName  MFPart vendor
MFIntegrationID   MFPart mf_integration_id (MainView identifier used for launch in context)
Model             MFPart os_type ("z/OS" or "z/VM")
OSType            z/OS or z/VM OS
SerialNumber      MFPart os_serial
VersionNumber     MFPart os_version
Category          "Software"
Type              "Operating System Software"
Item              "Standard OS"
BMC_Mainframe
An MFPart corresponding to a Native LPAR is directly connected to a Mainframe node. The Mainframe is mapped to a BMC_Mainframe CI as follows:

Attribute          Details
Name               Mainframe key
Description        Mainframe description
ShortDescription   Mainframe name
AvailableCPs       Mainframe available_cps
CapabilityList     Mainframe
ComponentAliases   Component alias from mainframe discovery
InstalledCPUs      Mainframe num_processors
ManufacturerName   Mainframe vendor
MFIntegrationID    Mainframe mf_integration_id (MainView identifier used for launch in context)
Model              Mainframe model
PrimaryCapability  Mainframe
Category           Hardware
Type               Processing unit
Item               Mainframe
Mainframe relationships
Relationship   Name             Source         Destination
BMC_Component  SYSTEMPARTITION  BMC_Mainframe  BMC_ComputerSystem
BMC_Impact     IMPACT           BMC_Mainframe  BMC_ComputerSystem
BMC_VirtualSystemEnabler
For Native LPARs, the MFPart and Mainframe nodes are mapped to a BMC_VirtualSystemEnabler CI as follows:

Attribute         Details
Name              Mainframe key with suffix "-VSE"
ShortDescription  MFPart part_type
ComponentAliases  Component alias from mainframe discovery
Description       MFPart part_type followed by Mainframe key
EnablerType       MFPart __cdm_enabler_type
ManufacturerName  MFPart vendor
MFIntegrationID   Mainframe mf_integration_id (MainView identifier used for launch in context)
Model             MFPart part_type
Category          "Software"
Type              "Operating System Software"
Item              "Virtualization OS"
For zVM LPARs, the Discovery model represents the virtualization as a SoftwareInstance node connected to the containing Native LPAR. That SoftwareInstance is mapped to a BMC_VirtualSystemEnabler as follows:

Attribute         Details
Name              SoftwareInstance __cdm_component_alias
NameFormat        "Mainframe"
ShortDescription  SoftwareInstance vm_id
Description       "z/VM-" followed by SoftwareInstance __cdm_component_alias
BuildNumber       SoftwareInstance build
ComponentAliases  Component alias from mainframe discovery
EnablerType       SoftwareInstance cdm_vm_enabler_type
MFIntegrationID   MFPart mf_integration_id (MainView identifier used for launch in context)
Model             SoftwareInstance type
PatchNumber       SoftwareInstance patch
ServicePack       SoftwareInstance service_pack
VersionNumber     SoftwareInstance version
Category          "Software"
Type              "Operating System Software"
Item              "Virtualization OS"
BMC_VirtualSystemEnabler relationships
Relationship                Name                    Source                    Destination
BMC_Dependency              VIRTUALSYSTEMOS         BMC_VirtualSystemEnabler  BMC_ComputerSystem (contained)
BMC_HostedSystemComponents  HOSTEDSYSTEMCOMPONENTS  BMC_ComputerSystem        BMC_VirtualSystemEnabler
BMC_Impact                  IMPACT
BMC_Impact                  IMPACT
BMC_Sysplex / BMC_Cluster
The root MFPart is related to a Cluster node to represent its sysplex. This is mapped to BMC_Sysplex in CMDB versions prior to 7.6.03, and to BMC_Cluster in version 7.6.03. In either case, the attributes are populated as follows:
Attribute         Details
Name              Cluster key
NameFormat        "Mainframe"
ShortDescription  Cluster name
Description       Cluster description
ComponentAliases  Component alias from mainframe discovery
MFIntegrationID   MFPart mf_integration_id (MainView identifier used for launch in context)
Model             "Sysplex"
Types             "SysPlex"
Category          "Software"
Type              "Cluster"
Item              "Mainframe"
BMC_MFCouplingFacility
The MFPart and Cluster are both related to a number of CouplingFacility nodes. Each one is mapped to a BMC_MFCouplingFacility CI as follows:

Attribute         Details
Name              CouplingFacility key
NameFormat        "Mainframe"
Description       CouplingFacility description
ShortDescription  CouplingFacility name
CFRMName          CouplingFacility cfrm_name
ComponentAliases  Component alias from mainframe discovery
MFIntegrationID   MFPart mf_integration_id (MainView identifier used for launch in context)
Model             "CF"
NodeDescriptor    CouplingFacility node_descriptor
Storage           CouplingFacility storage
Category          Hardware
Type              Processing unit
Item              Mainframe
BMC_MFCouplingFacility relationships
Relationship Name Source Destination
BMC_StorageSubsystem
The root MFPart can be related to a number of StorageCollection nodes. Each node is mapped to a BMC_StorageSubsystem CI as follows:

Attribute          Details
Name               StorageCollection key
NameFormat         "Mainframe"
ShortDescription   StorageCollection name
Description        StorageCollection key
ComponentAliases   Component alias from mainframe discovery
ManufacturerName   StorageCollection vendor
MFIntegrationID    MFPart mf_integration_id (MainView identifier used for launch in context)
Model              StorageCollection type
PrimaryCapability  StorageSubsystem
Category           "Hardware"
Type               "Disk device"
Item               "Disk array"
BMC_StorageSubsystem relationships
Relationship    Name                Source                Destination
BMC_Dependency  STORAGESUBSYSTEMOS  BMC_StorageSubsystem  BMC_OperatingSystem
BMC_Impact      IMPACT              BMC_StorageSubsystem  BMC_OperatingSystem
BMC_DiskDrive / BMC_TapeDrive
Both the root MFPart and the StorageCollection nodes are related to Storage nodes, representing disk and tape drives. Disk drives are mapped to BMC_DiskDrive CIs as follows:

Attribute         Details
Name              Storage key
NameFormat        "Mainframe"
ShortDescription  Storage name
Description       Storage key
ComponentAliases  Component alias from mainframe discovery
ManufacturerName  Storage vendor
MediaType         1 (fixed hard disk)
MFIntegrationID   MFPart mf_integration_id (MainView identifier used for launch in context)
Model             Storage model
PNPDeviceID       Storage device_id
SerialNumber      Storage volume_id
Tape drives are mapped to BMC_TapeDrive CIs as follows:

Attribute         Details
Name              Storage key
NameFormat        "Mainframe"
ShortDescription  Storage name
Description       Storage key
ComponentAliases  Component alias from mainframe discovery
ManufacturerName  Storage vendor
MediaType         0 (removable media)
MFIntegrationID   MFPart mf_integration_id (MainView identifier used for launch in context)
Model             Storage model
PNPDeviceID       Storage device_id
Category          "Hardware"
Type              "Tape device"
Item              "Tape drive"
BMC_MFSoftwareServer / BMC_SoftwareServer
Each SoftwareInstance related to the root MFPart is mapped to a corresponding CI. In CMDB versions prior to 7.6.03, the CI has the extension class BMC_MFSoftwareServer; in CMDB version 7.6.03, the CI has the base BMC_SoftwareServer class. In both cases, the attributes are as follows:

Attribute                Details
Name                     SoftwareInstance key
NameFormat               "Mainframe"
ShortDescription         SoftwareInstance name
Description              SoftwareInstance key
BuildNumber              SoftwareInstance build
ComponentAliases         Component alias from mainframe discovery
JobName                  SoftwareInstance job_id (CMDB versions prior to 7.6.03)
MFJobName                SoftwareInstance job_id (CMDB versions 7.6.03 and later)
MFIntegrationID          MFPart mf_integration_id (MainView identifier used for launch in context)
Model                    SoftwareInstance short_type (CMDB versions prior to 7.6.03)
Model                    SoftwareInstance type (CMDB versions 7.6.03 and later)
OtherSoftwareServerType  SoftwareInstance type
PatchNumber              SoftwareInstance patch
ServicePack              SoftwareInstance service_pack
ServerID                 SoftwareInstance server_id (CMDB versions prior to 7.6.03)
MFServerID               SoftwareInstance server_id (CMDB versions 7.6.03 and later)
VersionNumber            SoftwareInstance version
Category                 "Software"
Type                     "Application"
Item                     "BMC Discovered"
BMC_ApplicationService
SoftwareComponent is mapped to BMC_ApplicationService.

Attribute               Details
Name                    SoftwareComponent name
ShortDescription        SoftwareComponent name
Description             SoftwareComponent name
Model                   SoftwareComponent type
VersionNumber           SoftwareComponent version
ApplicationServiceType  Enumeration value representing the service type
Category                "Software"
Type                    "Application Service"
Item                    "BMC Discovered"
BMC_Application
BusinessApplicationInstance is mapped to BMC_Application. Contained SoftwareInstance and BusinessApplicationInstance nodes are mapped to corresponding BMC_Component relationships.

Attribute         Details
Name              BusinessApplicationInstance name
ShortDescription  BusinessApplicationInstance name
Description       BusinessApplicationInstance name
MarketVersion     BusinessApplicationInstance product_version
Model             BusinessApplicationInstance type
VersionNumber     BusinessApplicationInstance version
Category          "Software"
Type              "Application"
Item              "Application Platform"
Application relationships
Relationship    Name                        Source                        Destination
BMC_Dependency  APPLICATIONSYSTEMCOMPUTER   BMC_ComputerSystem            BMC_Application
BMC_Component   APPLICATIONSYSTEMHIERARCHY  BMC_Application               BMC_SoftwareServer / BMC_MFSoftwareServer
BMC_Component   APPLICATIONSYSTEMHIERARCHY  BMC_Application (containing)  BMC_Application (contained)
BMC_DataBase
DatabaseDetail nodes representing logical databases are mapped to BMC_DataBase:

Attribute         Details
Name              DatabaseDetail name
NameFormat        "Mainframe"
ShortDescription  DatabaseDetail instance
Description       DatabaseDetail name
ComponentAliases  Component alias from mainframe discovery
ManufacturerName  "IBM"
MFIntegrationID   MFPart mf_integration_id
Model             DatabaseDetail type
SerialNumber      DatabaseDetail instance
Category          "Miscellaneous"
Type              "Instance"
Item              "Database"
BMC_DataBase relationships
Relationship    Name             Source              Destination
BMC_Dependency  MANAGEDDATABASE  BMC_SoftwareServer  BMC_DataBase
BMC_Impact      IMPACT           BMC_SoftwareServer  BMC_DataBase
BMC_SystemResource
Resources such as table spaces inside databases are represented as DatabaseDetail nodes. These are mapped to BMC_SystemResource CIs:

Attribute         Details
Name              DatabaseDetail key
NameFormat        "Mainframe"
ShortDescription  DatabaseDetail name
Description       DatabaseDetail key
ComponentAliases  Component alias from mainframe discovery
ManufacturerName  "IBM"
MFIntegrationID   MFPart mf_integration_id
Model             DatabaseDetail type
SerialNumber      DatabaseDetail instance
Category          "Unknown"
Type              DatabaseDetail type
Item              "BMC Discovered"
MQ resources are represented as Detail nodes related to MQ SoftwareInstance nodes. These are also mapped to BMC_SystemResource CIs:

Attribute         Details
Name              Detail key
NameFormat        "Mainframe"
ShortDescription  Detail name
Description       Detail key
ComponentAliases  Component alias from mainframe discovery
ManufacturerName  "IBM"
MFIntegrationID   MFPart mf_integration_id
Model             Detail type
SerialNumber      Detail instance
Category          "Unknown"
Type              Detail type
Item              "BMC Discovered"
BMC_SystemResource relationships
Relationship    Name                Source              Destination
BMC_Dependency  SYSTEMRESOURCES     BMC_SoftwareServer  BMC_SystemResource
BMC_Dependency  DATABASEINDEXSPACE  BMC_DataBase        BMC_SystemResource
BMC_Dependency  DATABASEPAGESET     BMC_DataBase        BMC_SystemResource
BMC_Dependency  DATABASETABLESPACE  BMC_DataBase        BMC_SystemResource
BMC_Impact      IMPACT              BMC_DataBase        BMC_Transaction
BMC_Transaction
Transactions within IMS and CICS are represented as Detail nodes related to SoftwareInstance nodes. These are mapped to BMC_Transaction:

Attribute         Details
Name              Detail key
NameFormat        "Mainframe"
ShortDescription  Detail name
Description       Detail key
ComponentAliases  Component alias from mainframe discovery
ManufacturerName  "IBM"
MFIntegrationID   MFPart mf_integration_id
Model             Detail type
SerialNumber      Detail instance
Category          "Unknown"
Type              "Unknown"
Item              "BMC Discovered"
BMC_Transaction relationships
Relationship    Name               Source              Destination
BMC_Dependency  SERVERTRANSACTION  BMC_SoftwareServer  BMC_Transaction
BMC_Impact      IMPACT             BMC_SoftwareServer  BMC_Transaction
BMC_Activity
The first program run by a transaction is represented by a Detail node related to the transaction's Detail node. These are mapped to BMC_Activity, related to the BMC_Transaction:

Attribute         Details
Name              Detail key
NameFormat        "Mainframe"
ShortDescription  Detail name
Description       Detail key
ComponentAliases  Component alias from mainframe discovery
ManufacturerName  "IBM"
Model             Detail type
SerialNumber      Detail instance
Category          "Unknown"
Type              "Unknown"
Item              "BMC Discovered"
BMC_Activity relationships
Relationship    Name                Source        Destination
BMC_Dependency  TRANSACTIONPROGRAM  BMC_Activity  BMC_Transaction
BMC_Impact      IMPACT              BMC_Activity  BMC_Transaction
CMDB version differences The default mapping works with CMDB versions 7.5, 7.6, and 7.6.03 and later. The data model in each is slightly different. BMC_Impact relationships are not normally created with CMDB 7.6.03 and later, since the CMDB automatically maintains them itself. See the information about impact relationships for more details. A number of attributes are not present in CMDB 7.5, as noted below.
BMC_ComputerSystem
The root NetworkDevice node is mapped to a root BMC_ComputerSystem CI with the following attributes:

Attribute          Details
Name               NetworkDevice name attribute (the configured name of the network device)
ShortDescription   NetworkDevice name
Description        NetworkDevice name
CapabilityList     List of device capabilities
HostName           NetworkDevice name
LastScanDate       NetworkDevice last_update_success (the date and time the device was last scanned)
ManufacturerName   NetworkDevice vendor
Model              NetworkDevice model
PrimaryCapability  Switch, Router, or other network device enumeration value
SerialNumber       NetworkDevice serial
SystemOID          NetworkDevice sysobjectid
Category           "Network"
Type               "Switch" or "Router"
Item               "Data switch" or "Access router"
Relationships
Relationships are created to represent the connection between edge switches and hosts. Switch-to-switch connections are not mapped.

Relationship    Name         Source                               Destination
BMC_Dependency  NETWORKLINK  BMC_ComputerSystem (network device)  BMC_ComputerSystem (host)
BMC_Impact      IMPACT       BMC_ComputerSystem (network device)  BMC_ComputerSystem (host)
BMC_OperatingSystem
The root NetworkDevice node is also mapped to a BMC_OperatingSystem CI:

Attribute         Details
Name              "os_vendor os_type os_version"
ShortDescription  "os_type os_version"
Description       "os_vendor os_type os_version"
ManufacturerName  NetworkDevice os_vendor
MarketVersion     NetworkDevice os_version
Model             NetworkDevice os_type
OSType            Other (0)
VersionNumber     NetworkDevice os_version
Category          "Software"
Type              "Operating System Software"
Item              "Network Operating System"
BMC_OperatingSystem relationships
Relationship                Name      Source               Destination
BMC_HostedSystemComponents  SYSTEMOS  BMC_ComputerSystem   BMC_OperatingSystem
BMC_Impact                  IMPACT    BMC_OperatingSystem  BMC_ComputerSystem
BMC_NetworkPort
Attribute         Details
Name              NetworkInterface interface_name
ShortDescription  NetworkInterface interface_name
Description       NetworkInterface name
AutoSense         Yes (0) or No (1), or null if not known
FullDuplex        Yes (0) or No (1), or null if not known
LinkTechnology    Ethernet (2)
ManufacturerName  NetworkInterface manufacturer
PermanentAddress  NetworkInterface mac_addr
PortType          Ethernet (2)
Speed             NetworkInterface raw_speed
Category          "Hardware"
Type              "Card"
Item              "Network interface card"
Differences from earlier BMC Atrium Discovery versions BMC_NetworkPort was not created for NetworkDevice in versions prior to 9.0.
BMC_LANEndpoint
MAC addresses in the Discovery data model are stored in the conventional form, with colons separating each pair of hexadecimal digits; in the CDM, MAC addresses are stored as just the hexadecimal digits, with no separating colons.

Attribute         Details
Name              NetworkInterface mac_addr
NameFormat        "MAC"
ShortDescription  NetworkInterface mac_addr
Description       "MAC address on hostname"
Address           MAC address (no separating colons)
MACAddress        MAC address (no separating colons)
ProtocolType      Ethernet (14)
Category          "Network"
Type              "Address"
Item              "MAC Address"
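In a custom syncmapping, the colon-separated form can be converted to the CDM form with a simple string replacement. A minimal sketch, assuming the pattern language's text.replace function is available:

```
// Strip the separating colons from a discovered MAC address,
// e.g. "00:50:56:9a:00:01" becomes "0050569a0001".
// text.replace is assumed here; interface is a hypothetical
// NetworkInterface node variable.
cdm_mac := text.replace(interface.mac_addr, ":", "");
```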
Differences from earlier BMC Atrium Discovery versions BMC_LANEndpoint was not created for NetworkDevice in versions prior to 9.0.
BMC_IPEndpoint
Each IPAddress related to the root NetworkDevice is mapped to a BMC_IPEndpoint CI:

Attribute          Details
Name               IPAddress ip_addr
NameFormat         "IP"
ShortDescription   IPAddress ip_addr
Description        IPAddress name
Address            IPAddress ip_addr
AddressType        IPv4 (1) or IPv6 (2)
DNSHostName        IPAddress fqdns (first item in list)
ProtocolType       IPv4 (2) or IPv6 (3)
SubnetMask         IPAddress netmask
ManagementAddress  Yes (1) if the IP address was used to scan the host; No (0) if not. (Not in CMDB 7.5.)
Category           "Network"
Type               "Address"
Item               "IP Address"
Differences from earlier BMC Atrium Discovery versions BMC_IPEndpoint was not created for NetworkDevice in versions prior to 9.0.
BMC_IPConnectivitySubnet
Discovery Subnet is mapped to BMC_IPConnectivitySubnet:

Attribute         Details
Name              Subnet ip_address_range
ShortDescription  Subnet ip_address_range
Description       Subnet ip_address_range
AddressType       IPv4 (1) or IPv6 (2)
PrefixLength      IPAddress prefix_length
SubnetMask        IPAddress netmask
SubnetNumber      Subnet ip_address_range
Category          "Network"
Type              "Subnet"
Item              "TCP/IP"
Differences from earlier BMC Atrium Discovery versions BMC_IPConnectivitySubnet was not created for NetworkDevice in versions prior to 9.0.
Subnet relationships
Relationship    Name        Source                    Destination
BMC_InIPSubnet  INIPSUBNET  BMC_IPConnectivitySubnet  BMC_IPEndpoint
CMDB version differences The default mapping works with CMDB versions 7.5, 7.6, and 7.6.03 and later. The data model in each is slightly different. BMC_Impact relationships are not normally created with CMDB 7.6.03 and later, since the CMDB automatically maintains them itself. See the information about impact relationships for more details. A number of attributes are not present in CMDB 7.5, as noted below.
BMC_Printer
The root Printer node is mapped to a root BMC_Printer CI with the following attributes:

Attribute          Details
Name               Printer name
ShortDescription   Printer name
Description        Printer name
CapabilityList     Print
HostName           Printer sysname
LastScanDate       Printer last_update_success (the date and time the device was last scanned)
ManufacturerName   Printer vendor
Model              Printer model
PrimaryCapability  Print (11)
PrinterType        Network Printer (3)
SerialNumber       Printer serial
SystemOID          Printer sysobjectid
Category           "Hardware"
Type               "Peripheral"
Item               "Printer"
BMC_NetworkPort
Attribute         Details
Name              NetworkInterface interface_name
ShortDescription  NetworkInterface interface_name
Description       NetworkInterface name
AutoSense         Yes (0) or No (1), or null if not known
FullDuplex        Yes (0) or No (1), or null if not known
LinkTechnology    Ethernet (2)
ManufacturerName  NetworkInterface manufacturer
PermanentAddress  NetworkInterface mac_addr
PortType          Ethernet (2)
Speed             NetworkInterface raw_speed
Category          "Hardware"
Type              "Card"
Item              "Network interface card"
Differences from earlier BMC Atrium Discovery versions BMC_NetworkPort was not created for Printer in versions prior to 9.0.
BMC_LANEndpoint
MAC addresses in the Discovery data model are stored in the conventional form, with colons separating each pair of hexadecimal digits; in the CDM, MAC addresses are stored as just the hexadecimal digits, with no separating colons.

Attribute         Details
Name              NetworkInterface mac_addr
NameFormat        "MAC"
ShortDescription  NetworkInterface mac_addr
Description       "MAC address on hostname"
Address           MAC address (no separating colons)
MACAddress        MAC address (no separating colons)
ProtocolType      Ethernet (14)
Category          "Network"
Type              "Address"
Item              "MAC Address"
Differences from earlier BMC Atrium Discovery versions BMC_LANEndpoint was not created for Printer in versions prior to 9.0.
BMC_IPEndpoint
Each IPAddress related to the root Printer is mapped to a BMC_IPEndpoint CI:

Attribute          Details
Name               IPAddress ip_addr
NameFormat         "IP"
ShortDescription   IPAddress ip_addr
Description        IPAddress name
Address            IPAddress ip_addr
AddressType        IPv4 (1) or IPv6 (2)
DNSHostName        IPAddress fqdns (first item in list)
ProtocolType       IPv4 (2) or IPv6 (3)
SubnetMask         IPAddress netmask
ManagementAddress  Yes (1) if the IP address was used to scan the host; No (0) if not. (Not in CMDB 7.5.)
Category           "Network"
Type               "Address"
Item               "IP Address"
Differences from earlier BMC Atrium Discovery versions BMC_IPEndpoint was not created for Printer in versions prior to 9.0.
BMC_IPConnectivitySubnet
Discovery Subnet is mapped to BMC_IPConnectivitySubnet:

Attribute         Details
Name              Subnet ip_address_range
ShortDescription  Subnet ip_address_range
Description       Subnet ip_address_range
AddressType       IPv4 (1) or IPv6 (2)
PrefixLength      IPAddress prefix_length
SubnetMask        IPAddress netmask
SubnetNumber      Subnet ip_address_range
Category          "Network"
Type              "Subnet"
Item              "TCP/IP"
Differences from earlier BMC Atrium Discovery versions BMC_IPConnectivitySubnet was not created for Printer in versions prior to 9.0.
Subnet relationships
Relationship BMC_InIPSubnet Name INIPSUBNET Source BMC_IPConnectivitySubnet Destination BMC_IPEndpoint
CMDB version differences The default mapping works with CMDB versions 7.5, 7.6, and 7.6.03 and later. The data model in each is slightly different. BMC_Impact relationships are not normally created with CMDB 7.6.03 and later, since the CMDB automatically maintains them itself. See the information about impact relationships for more details. A number of attributes are not present in CMDB 7.5, as noted below.
BMC_ComputerSystem
The root SNMPManagedDevice node is mapped to a root BMC_ComputerSystem with the following attributes:

Attribute          Details
Name               SNMPManagedDevice name attribute (the configured name of the SNMP managed device)
ShortDescription   SNMPManagedDevice name
Description        SNMPManagedDevice name
CapabilityList     List of device capabilities
HostName           SNMPManagedDevice name
LastScanDate       SNMPManagedDevice last_update_success (the date and time the device was last scanned)
ManufacturerName   SNMPManagedDevice vendor
Model              SNMPManagedDevice model
PrimaryCapability  Switch, Router, or other network device enumeration value
SerialNumber       SNMPManagedDevice serial
SystemOID          SNMPManagedDevice sysobjectid
Category           "Network"
Type               "Switch" or "Router"
Item               "Data switch" or "Access router"
BMC_OperatingSystem
The root SNMPManagedDevice node is also mapped to a BMC_OperatingSystem CI:

Attribute         Details
Name              "os_vendor os_type os_version"
ShortDescription  "os_type os_version"
Description       "os_vendor os_type os_version"
ManufacturerName  SNMPManagedDevice os_vendor
MarketVersion     SNMPManagedDevice os_version
Model             SNMPManagedDevice os_type
OSType            Other (0)
VersionNumber     SNMPManagedDevice os_version
Category          "Software"
Type              "Operating System Software"
Item              "Network Operating System"
BMC_OperatingSystem relationships
Relationship                Name      Source               Destination
BMC_HostedSystemComponents  SYSTEMOS  BMC_ComputerSystem   BMC_OperatingSystem
BMC_Impact                  IMPACT    BMC_OperatingSystem  BMC_ComputerSystem
BMC_NetworkPort
Attribute         Details
Name              NetworkInterface interface_name
ShortDescription  NetworkInterface interface_name
Description       NetworkInterface name
AutoSense         Yes (0) or No (1), or null if not known
FullDuplex        Yes (0) or No (1), or null if not known
LinkTechnology    Ethernet (2)
ManufacturerName  NetworkInterface manufacturer
PermanentAddress  NetworkInterface mac_addr
PortType          Ethernet (2)
Speed             NetworkInterface raw_speed
Category          "Hardware"
Type              "Card"
Item              "Network interface card"
BMC_LANEndpoint
MAC addresses in the Discovery data model are stored in the conventional form, with colons separating each pair of hexadecimal digits; in the CDM, MAC addresses are stored as just the hexadecimal digits, with no separating colons.

Attribute         Details
Name              NetworkInterface mac_addr
NameFormat        "MAC"
ShortDescription  NetworkInterface mac_addr
Description       "MAC address on hostname"
Address           MAC address (no separating colons)
MACAddress        MAC address (no separating colons)
ProtocolType      Ethernet (14)
Category          "Network"
Type              "Address"
Item              "MAC Address"
BMC_IPEndpoint
Each IPAddress related to the root SNMPManagedDevice is mapped to a BMC_IPEndpoint CI:

Attribute          Details
Name               IPAddress ip_addr
NameFormat         "IP"
ShortDescription   IPAddress ip_addr
Description        IPAddress name
Address            IPAddress ip_addr
AddressType        IPv4 (1) or IPv6 (2)
DNSHostName        IPAddress fqdns (first item in list)
ProtocolType       IPv4 (2) or IPv6 (3)
SubnetMask         IPAddress netmask
ManagementAddress  Yes (1) if the IP address was used to scan the host; No (0) if not. (Not in CMDB 7.5.)
BMC_IPConnectivitySubnet
Discovery Subnet is mapped to BMC_IPConnectivitySubnet:

Attribute         Details
Name              Subnet ip_address_range
ShortDescription  Subnet ip_address_range
Description       Subnet ip_address_range
AddressType       IPv4 (1) or IPv6 (2)
PrefixLength      IPAddress prefix_length
SubnetMask        IPAddress netmask
SubnetNumber      Subnet ip_address_range
Category          "Network"
Type              "Subnet"
Item              "TCP/IP"
Subnet relationships
Relationship    Name        Source                    Destination
BMC_InIPSubnet  INIPSUBNET  BMC_IPConnectivitySubnet  BMC_IPEndpoint
Cancelling searches
The Search Management page lists any searches that you are permitted to view. Any search that you can cancel is shown with a Cancel button at the right-hand side of its row. To cancel a search, click the Cancel button for that search. When the search has been cancelled, the window in which the search was entered is refreshed with a Search Cancelled message. You can also cancel a search that you have just started by navigating away from the search-in-progress spinner.
Search expressions
The query language provides a natural mechanism for searching and processing the data model. The basic format of a search expression is:

SEARCH [in partition] <kinds> [where clause] [traversals] [ordering] [show clause] [processing]

[in partition]  Used to search in a named partition.
<kinds>         Used to specify the kinds of nodes (objects) to find.
[where clause]  Used to filter the current set of nodes.
[traversals]    Used to define a traverse from one node to another in order to access attributes and relationships from related nodes.
[ordering]      Used to define a sort order for the results.
[show clause]   Used to define the information to return in the search results.
[processing]    Used to post-process the results of a search.
These are described in detail in the following sections. A Note on Performance includes important information on writing efficient queries.
Examples
As a simple example, the following query retrieves all Host objects where the operating system is Windows. It displays each host's name and how much RAM it has; the results are sorted by name:
SEARCH Host WHERE os_type = 'Windows' ORDER BY name SHOW name, ram
In this example, both the ordering and show clauses are absent. The search service therefore uses the taxonomy definitions to choose defaults. The results are ordered by the label attributes of each node kind found and the attributes to show are set according to the corresponding summary lists. (If no definitions are given in the taxonomy for a node kind, the results are not sorted and just the node ids are shown.) The following example searches the _System partition for Users:
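A minimal form of that partition search, assuming the partition name is given as a quoted string, would be:

SEARCH IN "_System" User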
When writing search queries, you should be aware that an unconstrained search may have a serious performance impact on the appliance. For example, SEARCH * would return details of every node in the entire datastore!

The sets resulting from searches (and traversals) can be named and combined using set operations. This is described in Results Post Processing.
LOOKUP expressions
Instead of performing a SEARCH, a query can start with a LOOKUP, which simply finds one or more nodes by their node ids. An example finding a single node:
LOOKUP "845981da735bb875d1246b496e486f7374" SHOW hostname
A LOOKUP cannot have a WHERE clause. It is usually used in conjunction with one or more traversals.
Comments
Search queries can contain comments, which start with //. Everything from // to the end of the line is ignored.
Literal strings
Literal strings used in search expressions can take a number of forms:

'string': A string terminated by an unescaped ' character. Cannot include newlines.
"string": A string terminated by an unescaped " character. Cannot include newlines.
'''string''': A string terminated by an unescaped ''' character sequence. Can include newlines.
"""string""": A string terminated by an unescaped """ character sequence. Can include newlines.
In normal string literals, escape sequences start with a backslash \ character. Usual C-style escapes are permitted. Strings can be 'qualified' to change their interpretation, by prefixing the string literal with a word as follows:

raw: Backslash characters do not resolve to escape sequences.
regex: Backslash characters do not resolve to escape sequences. Intended for use in MATCHES expressions.
path: Backslash characters do not resolve to escape sequences. Intended for use with filesystem paths.
unix_cmd: Expanded into a regular expression suitable for matching a Unix command by prefixing with '\b' and suffixing with '$'.
windows_cmd: Expanded into a regular expression suitable for matching a Windows command by prefixing with '(?i)\b' and suffixing with '\.exe$'.
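As an illustrative sketch, a qualified string can be combined with MATCHES to find processes by command name (the DiscoveredProcess kind and cmd attribute appear in examples later in this section):

SEARCH DiscoveredProcess WHERE cmd MATCHES unix_cmd 'sshd'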
Keywords
In this document, query language keywords appear in upper case to make them stand out. Keywords are actually case insensitive, so they can be specified in lower case or mixed case. All other parts of query expressions are case sensitive. To use an identifier that clashes with a keyword, prefix it with a $ character to prevent the parser reporting a syntax error.
For backwards-compatibility, keywords can also be escaped with the ~ character. The keywords are as follows:

AND: Logical operator when defining conditions. See Logical Operators.
AS: Modifier defining the default heading shown (see The SHOW Clause); also used in function result naming (see Name binding).
BY: Used in the ORDER BY clause. See Ordering.
DEFINED: Used in the definition Boolean condition. See Conditions.
DESC: Changes sort order from ascending to descending. See Ordering.
EXPAND: Used for repeated traversals. See Traversals.
EXPLODE: Used to 'explode' the items in a list into multiple output rows. See Explode.
FLAGS: Used to modify behavior with destroyed nodes, result segmentation and other characteristics. See Search Flags.
HAS: Used in substring and subword Boolean conditions. See Conditions.
IN: Used in containment Boolean conditions (see Conditions); also used with STEP when performing traversals, to move from a set of nodes to a set of relationships (see Traversals).
IS: Used in definition Boolean conditions. See Conditions.
LIKE: Deprecated synonym for MATCHES.
LOCALE: Used to show localized column headings in queries. See The SHOW Clause.
LOOKUP: Finds a single node.
MATCHES: Used in Boolean conditions when matching regular expressions. See Conditions.
NODECOUNT: Returns the number of nodes when performing traverses. See NODECOUNT Expressions.
NODES: Returns a list of the traversed-to nodes. See NODECOUNT Expressions.
NOT: Used in substring and subword Boolean conditions (see Conditions); also a logical operator when defining conditions (see Logical Operators).
OR: Logical operator when defining conditions. See Logical Operators.
ORDER: Used in the ORDER BY clause. See Ordering.
OUT: Used with STEP when performing traversals, to move from a set of relationships to a set of nodes. See Traversals.
PROCESS: Used to summarize or modify the search results. See Results after processing.
PROCESSWITH: Used to summarize or modify the search results. See Results after processing.
SEARCH: Runs a search. See Using Query Language.
SHOW: Defines the columns to return in the search results. See The SHOW Clause.
STEP: Used with IN and OUT when performing traversals, to move between nodes and relationships. See Traversals.
SUBSTRING: Used in the substring Boolean condition. See Conditions.
SUBWORD: Used in the subword Boolean condition. See Conditions.
SUMMARY: Used to show the taxonomy-defined summary list in the search results. See The SHOW Clause.
TAXONOMY: Used to order by the taxonomy-defined label and to refer to named attribute lists. See Ordering.
TRAVERSE: Used to traverse from one node to another in order to access attributes and relationships from related nodes. See Traversals.
WHERE: Filters the current set of nodes according to a Boolean condition. See Logical and arithmetic expressions.
WITH: Used with the PROCESS keyword to specify the post-processing function to use (see Results after processing); also used in function result naming (see Name binding).
Kind selection
A search expression must specify the kinds of nodes to search. The specification must either be a single * character, meaning to search all kinds, or a comma-separated list of node kinds. The majority of queries specify a single kind.
Key expressions
The WHERE, ORDER BY and SHOW clauses often use simple attribute names, as in the examples in the sections above. In some cases, however, a query needs more complex specifications. The first option is to use a 'key expression' to retrieve information from related nodes. Key expressions start with a # character. The key expression formats are:

#role:rel:role:kind.attr: Traverses from the node via the roles and relationships, returning the attribute of the target node. If more than one target can be reached, returns a list of up to 10 (or an alternative limit chosen in the system options).
#role:rel:role:kind: As above, but returns a 'NodeHandle' for the target node, rather than an attribute of it, useful as a function argument.

Further forms step in to a relationship and get its attribute; step in to a relationship and return a 'RelationshipHandle' for it, to pass into functions; step out of a relationship and get the destination node's attribute; and step out of a relationship and return the destination 'NodeHandle'.

In addition:

#id: Returns the node's id in hex string format. (A form returning the node ID in binary format is also available.)
#: Returns a 'NodeHandle' for the node itself, to pass into various functions. (The node itself can also be returned.)
#"format"(attr, ...): Applies a Python format string to the attributes (which may be key expressions) in the list.
So, extending the example of finding Windows hosts, this expression shows the IP addresses of the hosts' network interfaces:
SEARCH Host WHERE os_type = 'Windows' ORDER BY name SHOW name, ram, #DeviceWithInterface:DeviceInterface:InterfaceOfDevice:NetworkInterface.ip_addr
The traversal syntax permits components to be wild-carded by leaving them blank. For example, the following query shows the names of all software running on a Host, both BusinessApplicationInstances and SoftwareInstances:
SEARCH Host SHOW name, #Host:HostedSoftware:RunningSoftware:.name
Value manipulation
The following functions operate on normal attribute values:
abs(value)
Returns the absolute value of an integer.
bin(value, bins [, legend])

Sorts a numeric value into one of the ranges ('bins') given in the bins list, returning a label for the range. For example:

SEARCH Host SHOW name, ram, bin(ram, [64, 512, 1024, 4096])

gives results like:

rs6000           1888   1024-4096
jpl64app          896   512-1024
sol10x86          288   64-512
linux-mandr-9-1    32   Less than 64
lonvmserv02     12288   More than 4096
The optional legend parameter allows the bins to be named. The list must have one more item than the bins list. e.g.:
SEARCH Host SHOW name, ram, bin(ram, [64, 512, 1024, 4096], ["Less than 64M", "64M to 512M", "512M to 1G", "1G to 4G", "More than 4G"])
boolToString(value)
Interprets its argument as a Boolean and returns "Yes" or "No".
duration(start_time, end_time)
Calculates the amount of time elapsed between the two dates and returns the result as a string of the form 'days.hours:minutes:seconds'.
currentTime()
Returns the number of 100 nanosecond intervals since 15 October 1582. This example query returns the hosts created in the last 7 days:
search Host where creationTime(#) > (currentTime() - 7*24*3600*10000000) show name, time(creationTime(#)) as CreateTime
defaultNumber(value, default_value)
Attempts to convert the value into a number and if it is not possible returns the default value.
fmt(format, ...)
The first argument is a Python format string, the remaining arguments are values to interpolate into it. The result is equivalent to a key expression of the form #"format"(...), except that the arguments to the fmt() function can be the results of other functions. That is, these two expressions are equivalent:
SEARCH Host SHOW #"%s: %s"(name,ram) SEARCH Host SHOW fmt("%s: %s",name,ram)
Whereas the following expression can only be done with fmt() because it calls the len() function:
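An illustrative query of that general shape, combining fmt() with len() (it assumes the fqdns list attribute of NetworkInterface nodes used later in this section):

SEARCH NetworkInterface SHOW fmt("%s has %d names", ip_addr, len(fqdns))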
The following example shows the fmt() function used to show the results of floating point arithmetic:
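An illustrative sketch of such a query, assuming ram is measured in megabytes:

SEARCH Host SHOW name, fmt("%.2f GB", ram / 1024.0)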
fmt_map(format, items)
Applies a format string to all items in a list. The format argument must be a python format string with only one variable. items is the list of items to interpolate into the string. Returns a list of strings as a result.
formatTime(time_value, format)
Converts the internal time format to a string, based on the format specification, and converting into the appliance's time zone. The format is specified using Python's strftime format. e.g. a search like this:
SEARCH Host SHOW name, formatTime(last_update_success, "%d %B %Y")
Gives results like:

lonvmserv03   15 January 2009
rs6000        12 January 2009
sol10x86      13 January 2009
formatUTCTime(time_value, format)
Identical to formatTime, except that it does not perform timezone conversion.
friendlyTime(time_value)
Converts the internal time format into a human readable string, taking into account time zones and daylight saving times, based on the time zone of the appliance.
friendlyUTCTime(time_value)
Converts the internal time format into a human readable string, without converting the time to account for time zones and daylight saving times.
friendly_duration(duration)
Takes a duration (that is, one time minus another) and returns a human readable string of the result, such as '3 days' or '1 month' or '30 seconds'. The result is not intended to be precise, but to be quickly understood by a person.
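For example, friendly_duration() can be combined with the time functions described in this section to report a node's approximate age (an illustrative sketch):

SEARCH Host SHOW name, friendly_duration(currentTime() - creationTime(#))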
hash(value)

Returns a hash of the given value.
int(value [, default])
Converts a string form of an integer to an integer. Works on lists. Optionally supports a second argument, which if present will be used if the string cannot be converted.
join(value, separator)
Build a string out of a list by concatenating all the list elements with the provided separator between them.
leftStrip(value)
Returns the value with white space stripped from the start.
len(value)
Returns the length of a string or list.
lower(string_value)
Returns a lower-case version of a string.
parseTime(time_string)
Converts a date/time string into the internal format, without time zone conversion.
parseUTCTime(time_string)
Converts a date/time string into the internal format. Identical to parseTime.
parseLocalTime(time_string)
Converts a date/time string into the internal format, taking into account time zones and daylight saving times, based on the time zone of the appliance.
recurrenceDescription(recurrence)
Converts a recurrence object to a human readable string.
rightStrip(value)
Returns the value with white space stripped from the end.
single(value)
If value is a list, return just the first item of it; otherwise return the value unchanged. This is useful when following key expressions that usually return a single item, but occasionally return multiple. e.g.
search Host show name, single(#InferredElement:Inference:Primary:HostInfo.uptimeSeconds)
size(value)
Returns the size of a list or string. A synonym for len().
sorted(values)

Returns a copy of the provided list with its values sorted.
split(value [, separator ])
Split a string into a list of strings. Splits on white space by default, or uses the specified separator.
str(value)
Converts its argument to a string.
strip(value)
Removes white space at the start and end of the value.
sumValues(values)
Sums a list of values. For example, to total the count attributes of the SoftwareInstances related to each Host:
search Host show name, sumValues(#Host:HostedSoftware:RunningSoftware:SoftwareInstance.count)
time(time_val)
Marks a number to indicate that it is a time. The values returned by functions such as currentTime and parseTime are large numbers (representing the number of 100 nanosecond intervals since 15 October 1582), which can be manipulated by other functions and compared to each other. To return them in results in a way that the UI knows that they are times, they must be annotated as times using the time function.
toNumber(value [, base ])
Converts a string into a number. If base is given, uses the specified base for the conversion, instead of the default base 10.
unique(values)
Returns a list containing the unique values from the provided list.
upper(string_value)
Returns an upper-case version of a string.
value(item)
Returns item unchanged. This is only useful to bind a non-function result to a name, as described in Name binding.
whenWasThat()
Converts the internal time format to something easily readable, like '1 hour ago', '2 weeks ago', etc.
Node manipulation
These functions must be passed nodes with key expressions, often just a single # to represent the current node:
SEARCH Host SHOW name, keys(#)
// Use the time() function to tell the UI that the result is a time. SEARCH Host SHOW name, time(modified(#))
destroyed(node)
Returns True if the node has been destroyed, False if not. Returns [invalid node] if the argument is not a node. Works on lists of nodes as well, returning a list of boolean values. (See the section on Search Flags that permit searching destroyed nodes.)
hasRelationship(node, spec)
Takes a node and a traversal specification. Returns True if the node has at least one relationship matching the specification; False if not. Works on lists of nodes as well.
id(node)
DEPRECATED function to return a node id in string form. Use #id to return a node's id.
keys(node)
Returns a list of the keys set on the node. Returns [invalid node] if the argument is not a node. Works on lists of nodes as well, returning a list of lists of keys.
kind(node)
Returns the kind of the node. Returns [invalid node] if the argument is not a node. Works on lists of nodes as well, returning a list of kinds.
label(node)
Returns the node's label, as defined in the taxonomy. Works on lists of nodes as well, returning a list of labels.
modified(node)
Returns the node's last modified time in the internal numeric format. Works on lists of nodes as well, returning a list of times.
History functions
The following history-related functions are currently available.
creationTime(node)
Returns the number of 100 nanosecond intervals between 15 October 1582 and the time the node was created. Also works on lists of nodes.
destructionTime(node)
Returns the time the node was destroyed in the internal time format. If the node is not destroyed, returns 0. Works on lists of nodes as well, returning a list of destruction times.
System interaction
These functions allow access to other aspects of the BMC Atrium Discovery system.
fullFoundationName(username)
Returns the full name of the user with the given user name or None if no such user exists.
getOption(key)
Returns the value of the system option key.
Link functions
When search results are shown in the UI, each cell in the result table is usually a link to the node corresponding to the result row. These functions allow other links to be specified:
nodeLink(link, value)
link is a node id or node reference, for example the result of a key expression; value is the value to display in the UI and to be used in exports. e.g. to create a table listing SoftwareInstances and their Hosts, with links from the host names to the Host nodes:
SEARCH SoftwareInstance SHOW name AS "Software Instance", nodeLink(#RunningSoftware:HostedSoftware:Host:Host, #RunningSoftware:HostedSoftware:Host:Host.name) AS "Host"
queryLink(link, value)
link is a search query to execute; value is the value to display in the UI and to be used in exports.
Truth
In WHERE clauses, the following values are considered False:

Boolean False
The number zero
The empty string
The empty list
None (e.g. a missing attribute)

All other values are considered True. This allows simple WHERE clauses that choose only useful values. For example, to find all SoftwareInstance nodes with a populated version:
SEARCH SoftwareInstance WHERE version SHOW type, version
Expressions
The following logical and arithmetic expressions are supported:

Equality: a = b
Inequality: a <> b
Comparison: a > b, a >= b, a < b, a <= b

Arithmetic

a + b, a - b, a * b, a / b
Subwords

a HAS SUBWORD b
Case-insensitive subword test. a is split on white space and other delimiter characters. The condition is true if b is in the list of split words. When used in a WHERE clause, HAS SUBWORD uses the full text indexes maintained by the datastore. It is by far the most efficient way to find nodes.

Substrings

a HAS SUBSTRING b
Case-insensitive substring test. Where possible HAS SUBSTRING uses the full text indexes maintained in the datastore, but it is always significantly less efficient than HAS SUBWORD.

a MATCHES b
a LIKE b
The condition is True if a matches regular expression b. LIKE is a deprecated synonym of MATCHES. MATCHES cannot usually use the datastore indexes, and is therefore by far the slowest mechanism for finding nodes. If at all possible, queries should use HAS SUBWORD instead of MATCHES.

Containment

a IN b
a NOT IN b
The condition is True if a is/is not in b, where b is a list.

a IN [b, c, d...]
a NOT IN [b, c, d...]
The condition is True if a is/is not in the specified list.

Definition

a IS DEFINED
a IS NOT DEFINED
The condition is True if the node has an attribute a (with any value).

Inverse

NOT a
True if a is considered False; False if it is considered True.

And

a AND b
Considered True if both a and b are considered True. If a is considered True, the value of the expression is b; if a is considered False, the value of the expression is a.

Or

a OR b
Considered True if either a is considered True or b is considered True. If a is considered True, the value of the expression is a; otherwise the value of the expression is b.
Precedence
Operator precedence works as you would expect. Parentheses ( and ) are used to explicitly group operations. Otherwise, in arithmetic expressions, * and / take precedence over + and -. In logical expressions, OR has lowest precedence, followed by AND, followed by NOT, followed by all the other operators. i.e. this expression
NOT a > b OR c HAS SUBWORD d AND e = 10
is equivalent to
((NOT (a > b)) OR ((c HAS SUBWORD d) AND (e = 10)))
The behavior of AND and OR in returning one of their input parameters can be useful in handling missing attributes, to avoid output of "None" when an attribute is not available, for example:
SEARCH Host SHOW name, (virtual AND "Host identified as virtual" OR "Host not identified as virtual")
Name binding
It is sometimes necessary to use the result of calling a function more than once, for example using a result as part of a WHERE clause, and then using the value in a SHOW clause. To avoid repeated function evaluation, the result of a function can be bound to a name using a WITH clause. The result can then be referred to later by name, prefixed with an @ character, for example:
SEARCH Host WITH creationTime(#) AS ctime WHERE @ctime > parseTime("2007-06-01") SHOW name, os, @ctime
Only function results can be bound to names in this way. It is not possible to directly use a logical or arithmetic expression. To do so, you can use the value() function as a wrapper around the expression:
SEARCH Host WITH value(currentTime() - creationTime(#)) AS offset WHERE @offset < 10000000 * 60 * 60 // one hour SHOW name, os, @offset
Traversals
A SEARCH expression creates a set of nodes, and a LOOKUP makes a set with just one node in it. With that set, a query can TRAVERSE from each of the nodes to a new set of related nodes. The format of a TRAVERSE expression is
TRAVERSE role:rel:role:kind [WHERE clause]
The WHERE clause allows the new set to be further filtered. For example, to find all software instances that are running on Linux hosts:
SEARCH Host WHERE os_type HAS SUBWORD "Linux" TRAVERSE Host:HostedSoftware:RunningSoftware:SoftwareInstance
As with key expressions, elements of the traversal can be omitted as a wildcard mechanism. All matching relationships are followed, so this example finds software running on Linux hosts, and virtual machine software containing the hosts:
SEARCH Host WHERE os_type HAS SUBWORD "Linux" TRAVERSE :::SoftwareInstance
The traversed-to set can itself be filtered, for example to find Oracle-related software instances running on Linux hosts:
SEARCH Host WHERE os_type HAS SUBWORD "Linux" TRAVERSE :HostedSoftware::SoftwareInstance WHERE type HAS SUBSTRING "oracle"
Notice that the attributes that are accessed in the TRAVERSE's WHERE clause are attributes of the nodes that have been traversed to, not attributes of the original nodes. The original nodes have been discarded from the set. Traversals can be chained, so we can now find any business application instances related to the software instances:
SEARCH Host WHERE os_type HAS SUBWORD "Linux" TRAVERSE :HostedSoftware::SoftwareInstance WHERE name HAS SUBSTRING "oracle" TRAVERSE ContainedSoftware:SoftwareContainment: SoftwareContainer:BusinessApplicationInstance
Note that at each stage, the result of a traversal is a set, so each node appears only once in the set, even if it is reached via relationships from more than one node. A TRAVERSE clause performs one single traverse from each node in the current set. Sometimes it is useful to repeatedly traverse to find all the nodes that can be reached via particular relationships, using an EXPAND clause. This is useful for finding all applications that depend on a particular other application, for example:
SEARCH BusinessApplicationInstance WHERE <some condition> EXPAND DependedUpon:Dependency:Dependant:BusinessApplicationInstance
The set now contains the original nodes, plus all the nodes that depend upon them, via any number of intermediate dependencies. The EXPAND keeps performing traversals until no more nodes are found. Note that EXPAND always keeps the original set. If you want to EXPAND but not keep the original set, use a TRAVERSE followed by an EXPAND:
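An illustrative sketch of that combination, reusing the dependency traversal above:

SEARCH BusinessApplicationInstance WHERE <some condition> TRAVERSE DependedUpon:Dependency:Dependant:BusinessApplicationInstance EXPAND DependedUpon:Dependency:Dependant:BusinessApplicationInstance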
Just like a TRAVERSE clause, an EXPAND can have a WHERE clause. The filtering of the WHERE clause happens once the EXPAND has completed, not on every traversal iteration (since if it did it every iteration, it might never complete). TRAVERSE and EXPAND move from one set of nodes to another set of nodes. Sometimes, you want to stop off at a relationship along the way. This is achieved with STEP IN, which replaces the set of nodes with a set of relationships. Given a set of relationships, you can STEP OUT to another set of nodes. To find critical Dependency relationships:
SEARCH BusinessApplicationInstance WHERE <some condition> STEP IN DependedUpon:Dependency WHERE critical
To find the BusinessApplicationInstances that have a critical dependency to the ones matched in the WHERE clause:
SEARCH BusinessApplicationInstance WHERE <some condition> STEP IN DependedUpon:Dependency WHERE critical STEP OUT Dependant:BusinessApplicationInstance
NODECOUNT Expressions
Sometimes it is useful to filter, sort, or show the number of nodes related to each node in the set. This is achieved with a NODECOUNT expression. It behaves like a function, except that its arguments are a traversal clause. To show the number of dependencies of each BusinessApplicationInstance:
SEARCH BusinessApplicationInstance SHOW name, NODECOUNT(TRAVERSE DependedUpon:Dependency: Dependant:BusinessApplicationInstance)
NODECOUNT accepts all the traversal expressions described above, including chaining and where clauses. NODECOUNT is the correct way to filter based on properties of related nodes, rather than using a key expression in a WHERE clause. The following is a way to find all Linux Hosts running products from the Apache foundation, for example:
SEARCH Host WHERE os_type HAS SUBWORD "linux" AND NODECOUNT(TRAVERSE Host:HostedSoftware:RunningSoftware:SoftwareInstance WHERE type HAS SUBWORD "apache")
Similar to NODECOUNT, the NODES keyword performs a traversal and returns a list of the traversed-to nodes. It is only used for internal purposes since, in general, lists of nodes cannot be handled in the search service.
Search Flags
Certain aspects of search behavior can be modified by setting 'flags' for the search. Flags are applied with the FLAGS keyword, and affect the whole search:
SEARCH FLAGS (flag1, flag2, ...) Host WHERE ...
Flags can also be applied to traversals, in which case they apply to just the traversal:
SEARCH Host WHERE os_type HAS SUBWORD "linux" TRAVERSE FLAGS (include_destroyed) :::DiscoveryAccess
include_destroyed
Normally, nodes and relationships that are marked as destroyed are excluded from searches and traversals. The include_destroyed flag means that destroyed nodes and relationships are included.
exclude_current
With include_destroyed, searches involve both current and destroyed nodes; with the additional exclude_current flag, current nodes are excluded. exclude_current only makes sense when used in conjunction with include_destroyed, otherwise everything is excluded.
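For example, to list only destroyed Host nodes and when they were destroyed (an illustrative sketch using functions described elsewhere in this section):

SEARCH FLAGS (include_destroyed, exclude_current) Host SHOW name, time(destructionTime(#))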
no_segment
Normally, a search that finds multiple target nodes segments its results so that each node kind is in a separate result set. no_segment prevents the segmentation, meaning the results are in just one set. This is generally only useful when the node kinds are similar in some way, otherwise it is impossible to define a suitable SHOW clause for the combined results.
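For example, an illustrative combined search over two kinds, assuming both carry a name attribute:

SEARCH FLAGS (no_segment) Host, NetworkDevice SHOW name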
find_relationships
Searches normally find nodes with the specified kinds. The find_relationships flag causes searches to find relationships instead.
suppress_default_links
When presented in the UI, search results are normally presented so that each row is a link to the node from which the data came. The suppress_default_links flag removes these default links.
Ordering
After processing a SEARCH or LOOKUP and any traversals, the set of nodes is in an arbitrary order. If the query expression contains an ORDER BY clause, the set is sorted according to the clause. The clause is a comma separated list of attributes (or key expressions, function calls, etc.). The set is sorted according to the first attribute in the list; those that are identical by the first attribute are further sorted according to the second attribute in the list, and so on.
SEARCH Person ORDER BY surname, name SHOW #"%s %s"(name, surname)
Sorting is normally in ascending order; the order can be reversed with the DESC modifier:
SEARCH Person ORDER BY surname DESC, name SHOW #"%s %s"(name, surname)
If the search expression does not have an ORDER BY clause, it is returned in an arbitrary order unless the taxonomy is consulted for the SHOW clause (see below). In that case, the set is sorted by the taxonomy-defined label attribute.
The TAXONOMY keyword can also be used in a SHOW clause to refer to a named attribute list defined in the taxonomy, for example:
SEARCH Host SHOW TAXONOMY "report_high_detail"
If the SHOW clause is absent, the columns are set according to the summary list as defined in the taxonomy. If the SHOW clause is a single * character, the columns are set to the field order defined in the taxonomy. In both of those cases, if there is no entry in the taxonomy for the node kinds, no columns are set, and results cannot be retrieved (although the nodes themselves can still be accessed). To show the taxonomy-defined summary list plus some other attributes, use the SUMMARY keyword in a SHOW clause. This query shows the OS of a Host, followed by the normal taxonomy summary:
SEARCH Host SHOW os, SUMMARY
If columns are not named with AS, the columns are named using information from the taxonomy, as long as the locale is known. Searches performed through the UI take the locale from the user's profile information. In other cases, such as exports, there is no default locale. The locale can be specified in the query:
SEARCH Host LOCALE "en_GB" SHOW name, os
If the locale is not known, or is specified as the empty string, the column headings are based on the attribute names in the SHOW clause.
Explode
Normally, if an attribute contains a list, or a key expression traversal that leads to multiple nodes, all items in the list appear in a single cell in the results. Sometimes, particularly when exporting data to other systems, it can be useful instead to 'explode' the items in the list into multiple output rows. The result is similar to the result of a join in a relational database. This query produces a table of network interfaces with one fully-qualified domain name per row:
SEARCH NetworkInterface SHOW ip_addr, EXPLODE fqdns
When exploding an entry, one row is created for each item in the exploded list; if the exploded list is empty, no rows are created. A common requirement when exporting data for import into another system is to extract the node identifiers for all nodes related to a set of nodes, so the graph structure can be reconstructed. This can be achieved by exploding the related node identifiers. This query produces a table mapping Host node ids to all the SoftwareInstances running on them:
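An illustrative sketch of such a query, assuming the #id form can be used as the attribute part of a key expression traversal:

SEARCH Host SHOW #id AS "Host id", EXPLODE #Host:HostedSoftware:RunningSoftware:SoftwareInstance.#id AS "SoftwareInstance id"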
Key expression traversals leading to multiple nodes can be limited in the system settings so that not all target nodes are used (although by default there is no limit). When exploded, key expressions have no traversal limit. All key expressions with the same traversal specifications are handled as a group, meaning that multiple attributes can be selected, for example:
SEARCH Host SHOW name, EXPLODE #:::SoftwareInstance.name, #:::SoftwareInstance.instance
The result has one row per software instance. Even though the second key expression is not explicitly exploded, it is implicitly exploded since it shares the traversal with the exploded attribute. You should not use explode on more than one independent attribute. If you do this, all possible combinations of the attribute values are exploded. This can produce very large datasets and can impact performance considerably.
Optional parameters

In the following sections, some function parameters are shown with a value, such as min=0. This indicates that the parameters are optional, and the values are the defaults used if values are not provided. The key=value syntax is not part of the search syntax. Parameters must always be provided as plain values, or omitted to use the defaults.
Post-processing functions may be chained together in a comma-separated list, in which case they are applied in turn, each taking the output of the previous one. For example, the following query follows from all ssh processes to processes they are communicating with:
SEARCH DiscoveredProcess WHERE cmd HAS SUBWORD "ssh" PROCESSWITH communicationForProcesses, localToRemote, processesForCommunication
The syntax is identical to the existing PROCESSWITH feature, except with no PROCESSWITH functions listed. @number refers to columns in the first search, as in current refine searches.
unique(sort=0)
The unique function takes the rows of output from the search, and returns each unique row just once. If the optional sort argument is set to 1, the result rows are sorted; if set to zero or not provided, the rows are output in the same order they appeared in the original results, with duplicates removed.
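The described behaviour is equivalent to this Python sketch (an illustration only, not the server implementation):

```python
def unique(rows, sort=0):
    """Deduplicate result rows; keep first-seen order unless sort=1."""
    deduped = list(dict.fromkeys(rows))  # preserves first-occurrence order
    return sorted(deduped) if sort else deduped

rows = [("web01",), ("db01",), ("web01",), ("app01",)]
unsorted_result = unique(rows)        # first-seen order, duplicates removed
sorted_result = unique(rows, sort=1)  # same rows, sorted
```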
countUnique converts the search results into a summary where each row contains a command, arguments pair and a count of how many times that pair appears. When a single list attribute is provided in the SHOW clause, countUnique counts the individual items in the list and shows two counts. The first count is the number of nodes in which the item appears; the second count is the total number of times the item appears, which will be larger if an item appears more than once in a list. countUnique takes two optional boolean arguments to indicate which count columns appear. If sort is 1, countUnique sorts the results by total; if it is set to zero, it keeps them in the order it found them in the original result set. All boolean arguments default to 1, meaning both count columns appear and the results are sorted. If set_headings is provided, it contains a list of strings to use as the headings in the result, overriding the default headings. The list must have the correct number of items corresponding to the number of columns (2 or 3, depending on the show parameters). By default, values that are None are ignored; if ignore_none is set to zero, None values are counted in the same way as other values.
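The two counts for a list attribute can be modelled with a small Python sketch (illustrative only; the data is hypothetical):

```python
from collections import Counter

def count_unique_list(node_lists):
    """Return {item: (number of nodes containing item, total occurrences)}."""
    node_count, total_count = Counter(), Counter()
    for items in node_lists:
        node_count.update(set(items))  # count each node at most once per item
        total_count.update(items)      # count every occurrence
    return {item: (node_count[item], total_count[item]) for item in total_count}

# Three nodes; "eth0" appears twice in the first node's list.
counts = count_unique_list([["eth0", "eth0", "eth1"], ["eth0"], ["eth1"]])
```

Here "eth0" appears in two nodes but three times in total, so its total count is the larger number, as described above.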
SEARCH Host SHOW name, name, ram, processor PROCESSWITH displayHistory(parseTime("2009-01-01"), currentTime(), 1)
The three arguments are the start date, the end date and the number of attributes to leave as is. By default the end date is now and the attribute count is 1 so this invocation could be simplified as:
SEARCH Host SHOW name, name, ram, processor PROCESSWITH displayHistory(parseTime("2009-01-01"))
This example returns a result set with one line for every change in the history of the attributes name, ram and processor, detailing on each line the date it happened, and the attribute name and the attribute value before and after the change.
Provenance functions
These functions analyze provenance information:
provenanceDetails(friendly_time=0)
provenanceDetails takes all of the attributes selected in the SHOW clause and finds the provenance information for them. For each attribute it shows a row in the output containing the source node label, the attribute name, the attribute value, the time the attribute was last confirmed, and the label of the evidence node. If the optional friendly_time argument is set to 1, the times are converted to strings; if set to 0 or not provided, times are returned in the internal time format.
provenanceFailures(friendly_time=0)
provenanceFailures is currently only supported for Host nodes. For each Host, it traverses to find the most recent DiscoveryAccess. It then finds the provenance information for each of the attributes in the SHOW clause. The output consists of one row for each attribute that was not confirmed in the most recent DiscoveryAccess. The rows contain the label of the Host node, the attribute name, the attribute value, the label of the evidence node used to set the attribute, the time the value was confirmed, and the time of the most recent DiscoveryAccess. If the optional friendly_time argument is set to 1, the times are converted to strings; if set to 0 or not provided, times are returned in the internal time format.
communicationForProcesses(targets=3, show="SUMMARY")
Given an input of DiscoveredProcesses, returns a list of node sets which varies depending on the value of targets:
targets = 1: returns DiscoveredNetworkConnections
targets = 2: returns DiscoveredListeningPorts
targets = 3: returns both
Returns network connections and listening ports that tie up with the given set of processes. (For example, the network connection or listening port comes from the same discovery access as the process, and the process ids match.) The show clause determines which attributes on the nodes are returned in tabular results. The same show clause is used for both node kinds. The result of this function is useful for feeding in to the localToRemote or communicationToRemoteHost functions.
processesForCommunication(show="SUMMARY")
The input must contain DiscoveredNetworkConnections or DiscoveredListeningPorts or both. Returns a node set of DiscoveredProcesses.
Returns processes that tie up with the given network connections and listening ports. (For example, the processes come from the same discovery access as the connections and ports, and the process ids match.) The show clause determines which attributes on the nodes are returned in tabular results.
localToRemote(targets=15, show="SUMMARY")
The input must contain DiscoveredNetworkConnections or DiscoveredListeningPorts, or both. Returns nodes corresponding to the 'other end' of the input communication information. The results depend upon the targets specification. targets is a 'bit mask' formed by adding together the numbers corresponding to the required results:
1: remote DiscoveredNetworkConnections
2: remote DiscoveredListeningPorts
4: in-machine DiscoveredNetworkConnections
8: in-machine DiscoveredListeningPorts
The function distinguishes between 'remote' connections that are on different computers to the source information and 'in-machine' connections that are communication from one process to another on a single computer. DiscoveredNetworkConnection nodes can be used to find both target DiscoveredNetworkConnection and DiscoveredListeningPort nodes. DiscoveredListeningPort nodes can only be used to find target DiscoveredNetworkConnection nodes. In all cases, only network connections and listening ports found during the most recent complete DiscoveryAccess are considered, meaning that only 'current' data is used. Note that it is not an error to, for example, pass in only DiscoveredListeningPorts and set targets to 2. In this instance, an empty list results. The show clause determines which attributes on the nodes are returned in tabular results. The same show clause is used for both node kinds. The result of this function is useful for feeding in to the processesForCommunication function.
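The targets value works like any additive flag set; this Python sketch (illustrative only, not part of the product) shows how a mask value selects result kinds:

```python
# Flag values from the targets table above.
FLAGS = {
    1: "remote DiscoveredNetworkConnections",
    2: "remote DiscoveredListeningPorts",
    4: "in-machine DiscoveredNetworkConnections",
    8: "in-machine DiscoveredListeningPorts",
}

def decode_targets(targets):
    """List the result kinds selected by a localToRemote targets mask."""
    return [name for bit, name in FLAGS.items() if targets & bit]

default = decode_targets(15)     # 1+2+4+8: all four result kinds (the default)
remote_only = decode_targets(3)  # 1+2: remote connections and listening ports only
```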
hostToHostCommunication(show="SUMMARY")
Given a set of Host nodes, returns a set of Hosts that are communicating with those hosts, according to observed network connections. A host is considered to be communicating with another if there is a network connection from either host to the other, according to the remote IP address on the network connection and the IP addresses of the host. The show clause determines which attributes on the nodes are returned in tabular results.
communicationToRemoteHost(show="SUMMARY")
Given a list of node sets, which must contain DiscoveredNetworkConnections or DiscoveredListeningPorts (or both), return a set of Host nodes that are communicating. A Host is returned if one of its IPs matches one of the remote IP addresses of one of the network connections, or if it has a network connection with a remote IP address and port that matches one of the listening ports. The show clause determines which attributes on the nodes are returned in tabular results.
hostToRemoteCommunication(show="SUMMARY")
Given a set of Host nodes, returns a list of DiscoveredNetworkConnections and DiscoveredListeningPorts that are communicating with the hosts. Network connections are returned if their remote IP address corresponds to an IP address of one of the hosts, and listening ports are returned if a network connection on one of the hosts contains a remote IP address and the port matches the listening port. Only network connections and listening ports found during the most recent complete DiscoveryAccess are considered, meaning that only 'current' data is used. The show clause determines which attributes on the nodes are returned in tabular results. The result of this function is useful for feeding in to the processesForCommunication function.
communicatingSIs(show="SUMMARY")
Given a set of SoftwareInstance nodes, returns a set of the SoftwareInstance nodes with which they are communicating. This function is equivalent to a traversal from SoftwareInstance to DiscoveredProcess, then a chain of communicationForProcesses, localToRemote, processesForCommunication, followed by a traversal from DiscoveredProcess back to SoftwareInstance. In BMC Atrium Discovery version 8.2.01 and later, communicatingSIs returns SoftwareInstance nodes corresponding to both remote and in-machine communication links. Earlier versions only returned SoftwareInstance nodes on remote computers.
connectionsToUnseen
Takes a set of DiscoveredNetworkConnection nodes, and filters it to include only those ones that represent connections to addresses that have not been seen on NetworkInterface nodes present in the system. That is, it returns network connections to devices that have not been scanned.
networkConnectionInfo
Takes a set of DiscoveredNetworkConnection nodes and produces a summary of information about each one. Each row contains the name of the "local" Host, the associated local process command and arguments, port and IP address information for the connection, the remote command and arguments, and the remote host name. Not all of the information is always available (for example, if the remote side of the connection had not been scanned, or if insufficient discovery permission meant ports were not associated with processes). networkConnectionInfo takes four optional list parameters to modify the attributes shown for Hosts and DiscoveredProcesses. The first two parameters specify the attributes to show on Host nodes and the headings to use for those columns; the second two parameters specify the attributes to show on DiscoveredProcess nodes and the headings to use for those. The number of headings must match the number of attributes shown for each node kind. In both cases, the attributes and headings are used twice, once for the local end of the network connection, and once for the remote end. The headings are prefixed "Local" and "Remote" as appropriate. The default settings are equivalent to networkConnectionInfo(["name"], ["host name"], ["cmd", "args"], ["command", "arguments"]) . It is valid to use key expressions in the attribute lists, so attributes from nodes related to the hosts or processes can be displayed.
siCommunicationSummary
Takes a set of SoftwareInstance nodes and produces a summary with one row for each observed network connection, treating the input SoftwareInstance nodes as the local end. Each row contains the local Host and SoftwareInstance, local and remote IP address and port, remote SoftwareInstance and remote Host. Values for the remote end may not be available if the remote host was not scanned, or if the connection could not be associated with a particular SoftwareInstance. Like networkConnectionInfo, siCommunicationSummary takes four optional list parameters to modify the attributes shown for Hosts and SoftwareInstances. The default is equivalent to siCommunicationSummary(["name"], ["host name"], ["name"], ["SoftwareInstance"]). Key expressions are valid in the attribute lists, so attributes from nodes related to the Hosts or SoftwareInstances can be shown.
Chart-specific functions
These functions are only useful as input to charts:
A Note on Performance
There are often several ways to write a query that produces a required output report. Writing queries in different ways can sometimes cause enormous differences in query performance, so this section provides some hints on how to write queries so they are as efficient as possible.
Use of indexes
Wherever possible, use HAS SUBWORD rather than MATCHES/LIKE or HAS SUBSTRING so the datastore indexes are used. The full text index is very good at finding words and phrases.
This query starts by building a set of all Host nodes, then checks the location of each one, which is much slower:
SEARCH Host WHERE NODECOUNT(TRAVERSE Location:Location:ElementInLocation:Host WHERE name = "London")
When interactively browsing, it is quite common for the Query Builder to construct queries like this. This version also builds a set of all Hosts and checks the location of each one, but in this case it fails when encountering Hosts in more than one location, as well as being inefficient. Queries like this should never be used:
SEARCH Host WHERE #Location:Location:ElementInLocation:Host.name = "London"
Default ordering
By default, queries with no SHOW clause are ordered by the label value defined in the taxonomy. If the data is to be exported to an external system that does not care about order, the time spent retrieving data to sort can be avoided by specifying ORDER BY "".
Using Export
This section describes exporting data from BMC Atrium Discovery:
Introduction to BMC Atrium Discovery Export
Setting up and configuring an export
Running an export
The export process
Export APIs
Exporters
An exporter is the combination of a mapping set and an adapter configuration. The mapping set is a set of mapping files. These allow you to determine which data to extract from BMC Atrium Discovery and how to transform it according to the target data model. The adapter configuration is used to determine where to publish the data.
4. Ensure that they are also members of the public group.
Setting up an export
1. Specify where the exporter should send data to by creating a new configuration for the appropriate adapter. Configure it with the connection settings for the server you want to export to. See Creating a New Adapter Configuration. 2. Specify which data to export by creating a new mapping set and uploading the mapping files you want to use. You can reuse one of the existing mapping sets. See Creating Mapping Sets. 3. Create an exporter, selecting the adapter configuration and mapping set you created in the previous steps. See Creating a New Exporter.
Starting an export
1. From the Model section of the Administration page, click the Export icon. The Integrations page is displayed, which has three tabs. These are listed below with links to the tasks that you perform from them.
Exporters tab: Creating a New Exporter, Deleting an Exporter, Running an export
Adapter Configurations tab: Creating a New Adapter Configuration, Editing Adapter Configurations, Deleting Adapter Configurations
Mapping Sets tab: Creating Mapping Sets, Editing Mapping Sets, Deleting Mapping Sets
Export configurations backed up by appliance backup
The Appliance Backup feature now backs up export configurations.
Configuring the CSV file adapter for the CSV file configuration. Configuring the RDB adapter for the RDB adapter configuration. 4. To save the new adapter configuration, click Apply Only. 5. To save the changes and verify that the adapter can connect to the target system with the details supplied, click Apply and Test (see below for more details). If a validation error occurs on any of the details that you have entered, an error message is displayed. If the connection test fails, a page is displayed with a message and a button which enables you to edit the adapter configuration. You cannot click Back from this page in order to edit and correct the configuration details. Your adapter configuration has already been created. Clicking Back, editing the options and clicking Apply results in an attempt to create another configuration with the same name. This causes an error. 6. To edit the adapter configuration that resulted in a failed connection, click Edit This Adapter.
SMB export over IPv6 not supported on RHEL 5 appliances
If you upgraded from BMC Atrium Discovery 8.3.x rather than migrating, your appliance is still running on RHEL 5 and you cannot export to SMB shares over IPv6.
To determine whether your appliance is running on RHEL 5, check the footer of any UI page. The last line displays the release and build number; if the appliance is running on RHEL 5, this is stated between the release number and the build number.
3. If you want to use the default port for the protocol, leave the Specify TCP Port check box unchecked. Otherwise, check it and enter the TCP port.
4. The following are the default ports for the various protocols. Even though the BMC Atrium Discovery appliance firewall is already configured to allow these connections, you may need to open these ports on your firewall(s) to enable file publishing.
SMB (Windows networking): TCP 445
SCP and SFTP: TCP 22
FTP: TCP 21
HTTP PUT: TCP 80
5. In the Username field, enter the user name that BMC Atrium Discovery Export should use to log in to the remote system.
6. Enter the password for the specified user into the Password field. The password defaults to blank. If you are editing an adapter configuration, select the check box and leave the field blank to set a blank password; to leave the password unchanged, do not select the check box.
7. Specify the path to copy the CSV files to. Note that for Windows networking publishing, the path delimiter is \ (backslash), whereas all the other publishers use / (forward slash). For example:
To publish the CSV files to a Windows UNC path of \\myserver\myshare\mysubdir, set the Destination Server field to myserver (no slashes) and the Path field to myshare\mysubdir (the relative path is indicated by the absence of a leading slash).
To publish the CSV files to the directory /tmp/ over SCP, set the Path to /tmp (the absolute path is indicated by the leading forward slash).
To publish the CSV files to the home directory of your user's account over SCP, leave the Path blank.
To publish the CSV files to a directory beneath your user's home directory over SCP, such as ~/mydir/mysubdir, specify the Path as mydir/mysubdir (the path is made relative to the user's home directory by omitting the leading forward slash).
To publish the CSV files over HTTP PUT to the URL http://mywebserver/mypath/mysubpath, specify the Destination Server as mywebserver and the Path as mypath/mysubpath.
By default, the CSV adapter outputs data files that are compatible with Microsoft Excel and other popular spreadsheet applications. Changing the following settings could prevent these products from reading the files successfully.
1. Select the separator character from the drop-down list. The available separator characters are comma, semicolon, or tab. The default character is the same as the option set in the Application Preference page.
2. Uncheck Write Header Row if you do not want each CSV file's column names to be written to the first line of the file.
By default the CSV adapter copies a manifest file with the generated CSV files. The manifest records the date of the export and the files that form part of it.
3. Uncheck Include a Manifest File to disable the inclusion of the manifest.
When you test the connection, BMC Atrium Discovery Export attempts to contact the host, log in using the credentials provided, and write a test file (called "connection.test") to the specified location. If there is a problem, an error message is displayed so that you can correct it.
Output files
When generating files, the exporter produces one CSV file for every CI type in the mappings. Information about CI types can be found in The export process. Each CSV file is named as follows:
<mapping file name>-<ci name>.csv
For example, if you have a mapping file named hosts.xml containing three CIs (host, ip_endpoint and sc), three files will be generated:
hosts-host.csv
hosts-ip_endpoint.csv
hosts-sc.csv
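The naming rule can be expressed in a few lines of Python (a sketch for illustration, not part of the product):

```python
def csv_file_names(mapping_file, ci_names):
    """Derive the exporter's output names: <mapping file name>-<ci name>.csv."""
    # Strip the .xml extension to get the mapping file name stem.
    stem = mapping_file[:-len(".xml")] if mapping_file.endswith(".xml") else mapping_file
    return ["{0}-{1}.csv".format(stem, ci) for ci in ci_names]

names = csv_file_names("hosts.xml", ["host", "ip_endpoint", "sc"])
```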
5. Enter the RDB user name into the Username field. 6. Select the Set Password check box (only available if you are editing an existing adapter) and enter the password in the Password field. The JDBC adapter can write export statistics and dates to a manifest table after each mapping is complete. This feature is enabled by default on each adapter configuration. The manifest records the date of the export and the files that form part of it. 7. To disable the inclusion of the manifest, uncheck Write export statistics to a manifest table.
RDB Adapter: Determining the table structure for your mapping files
The RDB adapter exports data to tables in the database that are determined by the names used in the mapping files. This section describes how to determine the table structure that will be required for your mapping files. See The Mapping File Format for more information about mapping files. For your reference, there is generic SQL at the beginning of each mapping file to delete and create each of the required staging tables for that file. The RDB adapter exports to a set of staging tables in a manner similar to a CSV export. It requires that the destination tables are empty when the export begins, as it performs no reconciliation and will attempt to INSERT records only. The tables and fields that it exports to are determined by a simple set of rules:
1. Each CI declaration in the mapping file exports to a different table.
2. The table name is formed by concatenating the mapping file's name (without the .xml extension), an underscore (_) and the cmdb-name attribute of the CI.
3. The fields of the table exported to by the main CI are taken from the fields declared in the main CI as-is.
4. The fields of the table exported to by all sub-CIs are taken from two places: the first fields in the table are the main CI's identity fields; the remaining fields are the sub-CI's fields.
Thus, the main CIs are exported to a table of their own. The sub-CIs are exported to a table of their own, but each sub-CI's table contains the identity fields of the main CI that it was related to when it was exported. For example, the following are the CI declarations from host.xml, a mapping file shipped in the RDB mapping set:
<ci cmdb-name="host" main="true">
    <field src="hostname" dest="host_HostName" identity="true"/>
    <field src="name" dest="host_Name" identity="true"/>
    <field src="description" dest="Description"/>
    <field src="domain" dest="Domain"/>
    <field src="model" dest="Model"/>
    <field src="serial" dest="Serial"/>
    <field src="ram" dest="RAM"/>
    <field src="workgroup" dest="Workgroup"/>
    <field src="vendor" dest="Vendor"/>
    <field src="os" dest="OS"/>
    <field src="os_version" dest="OsVersion"/>
    <field src="processor" dest="Processor"/>
</ci>
<ci cmdb-name="si" collection="true">
    <field src="si_name" dest="SI_Name" identity="true"/>
    <relationship cmdb-name="hostedsoftware" direction="main-to-sub"/>
</ci>
<ci cmdb-name="ipendpoint" collection="true" parent-is-identifier="true">
    <field src="netmask" dest="ipendpoint_Netmask" identity="true"/>
    <field src="ip_addr" dest="ipendpoint_IPAddress" identity="true"/>
    <relationship cmdb-name="deviceinterface" direction="main-to-sub"/>
</ci>
There are three CI declarations. The first is the main CI, and the following two are both sub-CIs. Therefore, this mapping file exports data to three tables:
1. host_host (the "host" main CI)
2. host_si (the "si" sub-CI)
3. host_ipendpoint (the "ipendpoint" sub-CI)
Each table name is formed by concatenating the mapping file name, an underscore and the CI name. The following is the SQL required for each CI's table. Note that this SQL, while generic, may need slight modifications to run in your database:
drop table host_host;
create table host_host (
    host_HostName varchar(100) NOT NULL,
    host_Name varchar(100) NOT NULL,
    Description varchar(100),
    Domain varchar(100),
    Model varchar(100),
    Serial varchar(100),
    RAM integer,
    Workgroup varchar(100),
    Vendor varchar(100),
    OS varchar(100),
    OsVersion varchar(100),
    Processor varchar(100),
    primary key (host_HostName, host_Name)
);
The fields for the host_host table were taken directly from the "host" CI declaration above.
drop table host_si;
create table host_si (
    host_HostName varchar(100) NOT NULL,
    host_Name varchar(100) NOT NULL,
    SI_Name varchar(100) NOT NULL,
    primary key (host_HostName, host_Name, SI_Name)
);
The first two fields in the host_si table are the "host" CI's identity fields, and the remainder are the fields from the "SI" sub-CI.
drop table host_ipendpoint;
create table host_ipendpoint (
    host_HostName varchar(100) NOT NULL,
    host_Name varchar(100) NOT NULL,
    ipendpoint_Netmask varchar(100) NOT NULL,
    ipendpoint_IPAddress varchar(100) NOT NULL,
    primary key (host_HostName, host_Name, ipendpoint_Netmask, ipendpoint_IPAddress)
);
Note that, as with the "SI" sub-CI, the "ipendpoint" CI's fields are the host's identity fields and the ipendpoint fields. You should create primary keys in the tables that match the identity fields declared in the mapping files. This will prevent the exporter from inserting duplicate rows, or at least highlight that duplicate rows could exist.
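The table-naming rule (mapping file stem, underscore, cmdb-name) can be sketched in Python (illustrative only, not part of the product):

```python
def staging_table_name(mapping_file, cmdb_name):
    """Form the RDB staging table name: <mapping file stem>_<cmdb-name>."""
    # Strip the .xml extension to get the mapping file name stem.
    stem = mapping_file[:-len(".xml")] if mapping_file.endswith(".xml") else mapping_file
    return "{0}_{1}".format(stem, cmdb_name)

tables = [staging_table_name("host.xml", ci) for ci in ("host", "si", "ipendpoint")]
```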
tideway service restart required
You must restart the tideway services before using the newly uploaded JDBC driver.
The available statuses for the drivers are:
No JAR Uploaded
JAR Uploaded (Deactivated)
Activated and in use. This means the driver is being used by an SQL credential.
Activated but not in use
Error
BMC Atrium Discovery is shipped with a default set of properties files which define the databases that you can connect to. If you want to connect to a different database, you must write a properties file and download an appropriate driver. This is described in Adding new JDBC drivers. The following provides an example JDBC URL for each database target, with documentation and download URLs where available. When selecting the appropriate driver, note that BMC Atrium Discovery uses JDK 1.6. Consult the database vendor's documentation for further information.
Informix
jdbc:informix-sqli://host[:port]/database:INFORMIXSERVER=servername [;property=value][;property=value]
The following example URL connects to an Informix server running on a host at IP address 192.168.0.100, port 1533, database name ADDM_IMPORT, INFORMIXSERVER ADDMserver, user fred, and password password.
jdbc:informix-sqli://192.168.0.100:1533/ADDM_IMPORT:INFORMIXSERVER=ADDMserver;user=fred;password=password
Download and documentation: see the IBM website.
MySQL
jdbc:mysql://host[,failoverhost...][:port]/[database][?property=value[&property=value]]
The following example URL connects to a MySQL server running on a host on IP address 192.168.0.100, port 3306, database name ADDM_IMPORT, user fred, and password password. jdbc:mysql://192.168.0.100:3306/ADDM_IMPORT?user=fred&password=password Download: http://www.mysql.com/products/connector/ Documentation: http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-configuration-properties.html Postgres
jdbc:postgresql://host:[port]/database [?propertyName1][=propertyValue1][&propertyName2][=propertyValue2]
The following example URL connects to a Postgres server running on a host on IP address 192.168.0.100, port 5432, and database name ADDMdatabase. jdbc:postgresql://192.168.0.100:5432/ADDMdatabase Download: http://jdbc.postgresql.org/download.html Documentation: http://jdbc.postgresql.org/documentation/83/connect.html
Oracle
For Oracle there are two possible connection styles.
Connection using service
jdbc:oracle:<drivertype>:[username/password]@//host[:port]/service
The following example URL connects, using a thin driver, to an Oracle server running on a host on IP address 192.168.0.100, port 1521, and service ADDM_DB. jdbc:oracle:thin:@//192.168.0.100:1521/ADDM_DB Connection using SID
jdbc:oracle:<drivertype>:[username/password]@host[:port]:SID
The following example URL connects, using a thin driver, to an Oracle server running on a host on IP address 192.168.0.100, port 1521, and SID ADDM100. jdbc:oracle:thin:[username/password]@192.168.0.100:1521:ADDM100 Download: http://www.oracle.com/technetwork/database/features/jdbc/index-091264.html Documentation: http://www.oracle.com/technetwork/database/enterprise-edition/jdbc-faq-090281.html See here for more information on setting up both styles in BMC Atrium Discovery. Ingres
jdbc:ingres://host:[port]/database [;property=value][;property=value]
The following example URL connects to an Ingres server running on a host on IP address 192.168.0.100, port mnemonic II7, database name ADDMdatabase, user fred, and password password. jdbc:ingres://192.168.0.100:II7/ADDMdatabase;user=fred;password=password Download: http://community.ingres.com/wiki/JDBC_Driver Documentation: http://community.ingres.com/wiki/Open_Office_How_To#Ingres_JDBC_URL_Connection_Information Sybase
jdbc:sybase:Tds:host[:port][/databasename]
The following example URL connects to a Sybase server running on a host on IP address 192.168.0.100, port 6689, and database name ADDM_DB. jdbc:sybase:Tds:192.168.0.100:6689/ADDM_DB Documentation: http://www.sybase.com/detail?id=1009876#sec2q2 MS SQL
jdbc:sqlserver://serverName[\instanceName][:portNumber] [;property=value][;property=value]
The following example URL connects to an instance of MS SQL Server called TDA running on a host at IP address 192.168.0.100, port 1433, user fred, and password password. The username and password correspond to a user configured on the database rather than a Windows AD user. See this Microsoft article for more information.
jdbc:sqlserver://192.168.0.100\TDA:1433;User=fred;Password=password
Download: http://msdn.microsoft.com/en-us/data/aa937724.aspx
Documentation: http://msdn.microsoft.com/en-us/library/ms378428.aspx
The downloaded file (a gzipped tar archive, current version sqljdbc_3.0.1301.101_enu.tar.gz) contains two JAR files. Use sqljdbc4.jar; the other does not work.
JTDS
jdbc:jtds:servertype://server[:port][/database] [;property=value][;property=value]
The following example URL connects using JTDS to an MS SQL Server running on a host at IP address 192.168.0.100, port 1433, database name ADDM_IMPORT, instance TDA, user fred, and password password.
jdbc:jtds:sqlserver://192.168.0.100:1433/ADDM_IMPORT;instance=TDA;user=fred;password=password
When using a domain credential (Windows Authentication) of the form DOMAINNAME\username, enter the username in the URL as described, and the domain information in the Additional JDBC parameters dialog box in the following form: domain="DOMAINNAME". If the domain controller requires NTLM v2, also add the parameter useNTLMv2=true.
The following example URL connects using JTDS to an MS SQL Server running on a host at IP address 192.168.0.100, port 1433, database name ADDM_IMPORT, instance TDA, Windows user fred in the domain DOM1, and password password.
jdbc:jtds:sqlserver://192.168.0.100:1433/ADDM_IMPORT;instance=TDA;domain=DOM1;useNTLMv2=true;user=fred;password=password
Documentation: http://jtds.sourceforge.net/faq.html#urlFormat
To edit a mapping set:
1. Click its Edit link. The name is not modifiable after the mapping set is created.
2. To delete mapping files from the set, check the files that you wish to delete.
3. To upload new mapping files to the mapping set, click Choose file and select the files to upload.
4. To abandon your changes and return to the Mapping Sets tab, click Cancel.
5. To save your changes (including deleting any checked files and uploading any added files) and return to the Mapping Sets tab, click Save.
If you attempt to upload an invalid mapping file or omit the description, an error is displayed. You must re-add the new mapping files after correcting the error.
Export Hosts
Export NIC-Switch Dependencies
Export SI-Host Dependencies
Export Switches
For further information about the Compuware mapping set see The Compuware mapping set.
For further information about the extended RDB mapping set see The extended RDB mapping set.
Managing exporters
To delete an exporter
To delete an exporter, click its Delete link. Once an exporter has been deleted there is no way to retrieve it. Deleting an exporter does not delete the mapping set or adapter configuration.
Running an export
Only run exports from BMC Atrium Discovery during time periods when no discovery runs are occurring. This ensures that the state of the datastore does not change during the export. Such a change could result in inconsistent data being exported.
There are two ways to run an export: from the user interface, or from the command line.
Log files
There are two types of log files produced by Export: Export log files and Other log files.
Key steps
There are four key steps in the process:
1. The query string is run. You can find more information about the Query section in a mapping file in The Mapping File Format.
2. The result of running that query is produced.
3. The records are transformed into a set of CIs. This is described in more detail in Transforming a BMC Atrium Discovery Dataset using a Mapping File.
4. The set of CIs is inserted into the database.
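The shape of these four steps can be sketched in Python. This is an illustrative outline only, not BMC Atrium Discovery code; all function names, the mapping dictionary, and the canned record are hypothetical.

```python
# Illustrative sketch of the four export steps. All names here are
# hypothetical - this shows the pipeline's shape, not the product's code.

def run_query(query):
    # Steps 1-2: run the query string and produce its result records.
    # Stand-in implementation returning one canned record.
    return [{"name": "Payroll", "description": "The payroll application"}]

def transform(record, mapping):
    # Step 3: transform a record into a set of CIs using a src->dest map.
    return [{dest: record[src] for src, dest in mapping.items()}]

def insert_cis(database, cis):
    # Step 4: insert the CIs into the target database (a plain list here).
    database.extend(cis)

def run_export(query, mapping, database):
    for record in run_query(query):
        insert_cis(database, transform(record, mapping))

db = []
run_export("search BusinessApplicationInstance show name, description",
           {"name": "Name", "description": "Description"}, db)
print(db)  # [{'Name': 'Payroll', 'Description': 'The payroll application'}]
```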
No data deletion
When BMC Atrium Discovery data is exported, the appliance does not delete any previously exported data on the target system. For example, when exporting to an RDBMS, you must manually truncate the tables to remove old data.
search BusinessApplicationInstance
    where parseTime("${lastExportFinished}") < modified(#)
    show name, description,
         #RunningSoftware:HostedSoftware:Host:Host.hostname as host_hostname,
         #RunningSoftware:HostedSoftware:Host:Host.local_fqdn as host_fqdn
This query returns the following result set:

   name               description                        host_hostname  host_fqdn
1  Payroll            The payroll application            webserv01      webserv01.mycompany.com
                                                         london_orcl    london_orcl.mycompany.com
                                                         sap_01         sap_01.mycompany.com
2  Website            Our company website                webserv01      webserv01.mycompany.com
                                                         webserv02      webserv02.mycompany.com
                                                         webserv03      webserv03.mycompany.com
                                                         london_orcl    london_orcl.mycompany.com
3  Employee Expenses  The employee expenses application  webserv01      webserv01.mycompany.com
                                                         london_orcl    london_orcl.mycompany.com
The first two fields (Name and Description) have returned one value per record. The next two fields, on the other hand, are the result of key expression traversals over a relationship. They each return a sequence of values: one value per relationship that was traversed. They are the result of traversing all relationships of the type RunningSoftware:HostedSoftware:Host from the BusinessApplicationInstance (BAI). (Note that there is no need to use explode to cause the key expressions to be treated as separate rows in the output.) The first BAI (Payroll) had three such relationships, and so the host_hostname and host_fqdn fields returned three values each for that BAI's record. The second BAI (Website) had four such relationships, while the last BAI (Employee Expenses) had two. Note how both of the fields that returned sequences (host_hostname and host_fqdn) returned sequences that correspond. The first entry in the Payroll's host_hostname field (webserv01) corresponds to the first entry in Payroll's host_fqdn field. The second and third entries in each field also match. Using these corresponding sequences, we can compile a list of Hosts that are related to each application. In our example, the Payroll application could be described as follows:
Name: Payroll
Description: The payroll application
Hosts:
    Host 1
        Hostname: webserv01
        FQDN: webserv01.mycompany.com
    Host 2
        Hostname: london_orcl
        FQDN: london_orcl.mycompany.com
    Host 3
        Hostname: sap_01
        FQDN: sap_01.mycompany.com
We have taken one record from the result set and pivoted it, generating a BusinessApplicationInstance CI and three Host CIs from the record. This is how the transformation process works. Consider the following CI declarations from a mapping file (this is described in more detail in A Closer look at Mapping Files).
<ci cmdb-name="bai" main="true">
    <field src="name" dest="Name" identity="true"/>
    <field src="description" dest="Description"/>
</ci>
<ci cmdb-name="host" collection="true">
    <field src="host_hostname" dest="HostHostName" identity="true"/>
    <field src="host_fqdn" dest="HostFQDN" identity="true"/>
    <relationship cmdb-name="hostedsoftware" direction="main-to-sub"/>
</ci>
The first CI (the one declared "main") is the principal CI that this mapping file is concerned with; it is typically the node from which the various traversals start. The sub-CI ("host") is generated from other fields in the result set. If its fields return sequences, you must set "collection='true'"; if you expect only one value per field, you can leave that declaration out. The "relationship" element in the sub-CI tells the exporter how your main CI and sub-CI are related. It is used when exporting to systems where the relationship has a name, such as Atrium CMDB. For the simpler adapters (such as CSV and RDB) it is ignored; if you intend to use the mapping file for these adapters only, you must still specify the relationship's name and direction, but you can use any values. In order for the exporter to validate mapping files, at least one field in each CI must be given the attribute "identity='true'".
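The pivoting of one record into a main CI plus sub-CIs, as described above, can be sketched in Python. This is an illustrative reconstruction using the Payroll record from the earlier example; it is not the exporter's actual implementation.

```python
# Illustrative: pivot one result-set record (with sequence-valued
# fields) into a main CI plus one sub-CI per sequence entry.
record = {
    "name": "Payroll",
    "description": "The payroll application",
    "host_hostname": ["webserv01", "london_orcl", "sap_01"],
    "host_fqdn": ["webserv01.mycompany.com", "london_orcl.mycompany.com",
                  "sap_01.mycompany.com"],
}

# Main CI: one value per field.
main_ci = {"Name": record["name"], "Description": record["description"]}

# Sub-CIs: the sequence fields correspond entry-by-entry, so zip them.
sub_cis = [{"HostHostName": hostname, "HostFQDN": fqdn}
           for hostname, fqdn in zip(record["host_hostname"],
                                     record["host_fqdn"])]

print(main_ci)
print(len(sub_cis))  # 3 - one Host CI per traversed relationship
```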
The Query section is built up of search service functions. For more information on how to build search queries please see the Using the Search and Reporting service.
The following search service functions are not supported by BMC Atrium Discovery Export: dq dq_band dq_metric
In the query section, the exporter makes a variable available that contains the time at which the exporter was last run. This variable is called "lastExportFinished" and is used with the function parseTime as follows:
parseTime("${lastExportFinished}")
This generates a timestamp that the datastore can recognize. When this variable is encountered, the exporter substitutes it with the date on which the export was last run, and then sends the search query to the datastore. If you clear the "Export changed items only" check box, the exporter sets lastExportFinished to 1 Jan 1980, resulting in a full export.

Example: Using the variable as part of a where clause
This variable can be used as part of a where clause. The following example returns items that have changed since the last time this exporter was run:
This variable can also be used with search service functions inside mapping file queries. For example, it can be used to filter on changes in the dependencies between a BAI and a software collection. If you have other conditions to place in the query's where clause, it is generally best to put those conditions before the modified check, to avoid comparing the modified times of many nodes that do not match the condition. For example:
search Host where os_type = "Windows" and parseTime("${lastExportFinished}") < modified(#) show hostname
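The substitution step can be sketched with Python's string.Template. This is illustrative only: the timestamp format and the helper are assumptions, and the exporter's internals are not exposed.

```python
import string

# Illustrative sketch of how ${lastExportFinished} is filled in before
# the query is sent to the datastore. The 1980 sentinel forces a full
# export; the timestamp format shown here is an assumption.
FULL_EXPORT_SENTINEL = "1980-01-01 00:00:00"

def prepare_query(query, last_export_finished, changed_only=True):
    value = last_export_finished if changed_only else FULL_EXPORT_SENTINEL
    return string.Template(query).substitute(lastExportFinished=value)

q = 'search Host where parseTime("${lastExportFinished}") < modified(#) show hostname'
print(prepare_query(q, "2012-06-01 12:00:00"))
print(prepare_query(q, "2012-06-01 12:00:00", changed_only=False))
```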
Transformation section
The transformation section is made up of a number of CIs. Each CI has a name (cmdb-name) and a number of field elements. There is one main CI and zero or more sub CIs. There can only be one main element (it has the attribute "main" set to true).
Main CI transformation
In this section of the mapping file, the main attribute is set to "true", indicating that this is the main CI.
<ci cmdb-name="BMC_Application" main="true"> <field src="name" dest="Name" identity="true"/> <field src="description" dest="shortDescription"/> </ci>
The name of the CI on the remote computer is BMC_Application. The set of fields with identity = 'true' together uniquely identify this CI. Identity tags can be set on one or more fields.
If more than one field has "identity='true'" set then the exporter will only overwrite an existing item if it has identical values in all of the identity fields. In other words, multiple identity fields cause an AND operation, not an OR.
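The AND behaviour can be sketched as follows. The lookup helper is hypothetical, not exporter code; it only illustrates that all identity fields must match for an item to be treated as existing.

```python
# Illustrative: an existing CI only matches - and would be overwritten -
# if ALL identity fields are equal. A partial match is not a match.

def find_existing(existing_cis, new_ci, identity_fields):
    for ci in existing_cis:
        if all(ci.get(f) == new_ci.get(f) for f in identity_fields):
            return ci
    return None  # no match: the new CI would be inserted instead

existing = [{"HostHostName": "webserv01",
             "HostFQDN": "webserv01.mycompany.com"}]

# Only one of the two identity fields matches, so no overwrite occurs:
partial = {"HostHostName": "webserv01", "HostFQDN": "other.example.com"}
print(find_existing(existing, partial, ["HostHostName", "HostFQDN"]))  # None
```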
The following validation errors can occur:

The first field of a collection CI cannot be const.
    The first field of a CI with "collection='true'" cannot use "const" as a source. If you only need one data field in your CI, and it is const, export at least one other field of data with the CI.
The main CI cannot be a collection.
    The main CI (i.e. the CI with "main='true'") cannot be a collection CI. Remove the "collection='true'" attribute.
The main CI cannot have a relationship configured.
    The main CI (i.e. the CI with "main='true'") is related to all the other CIs in the set by the relationships configured with those CIs; it cannot have a relationship of its own configured. Remove the "relationship" element.
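Checks of this kind are easy to illustrate with Python's xml.etree. The following is a sketch of two of these rules, not the exporter's actual validator; the sample mapping fragment is deliberately invalid.

```python
import xml.etree.ElementTree as ET

# Illustrative sketch: flag a main CI that is declared as a collection
# or that carries a relationship element. Not the product's validator.

def check_mapping(xml_text):
    errors = []
    for ci in ET.fromstring(xml_text).findall("ci"):
        if ci.get("main") == "true":
            if ci.get("collection") == "true":
                errors.append("The main CI cannot be a collection.")
            if ci.find("relationship") is not None:
                errors.append("The main CI cannot have a relationship configured.")
    return errors

bad = """<mapping>
  <ci cmdb-name="bai" main="true" collection="true">
    <field src="name" dest="Name" identity="true"/>
    <relationship cmdb-name="hostedsoftware" direction="main-to-sub"/>
  </ci>
</mapping>"""
print(check_mapping(bad))  # both error messages are reported
```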
<?xml version="1.0" encoding="utf-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">

    <!-- The direction of a relationship between the main and a sub CI -->
    <xs:simpleType name="relationshipDirection">
        <xs:restriction base="xs:string">
            <xs:enumeration value="main-to-sub"/>
            <xs:enumeration value="sub-to-main"/>
        </xs:restriction>
    </xs:simpleType>

    <!-- A field in the CMDB, i.e. a CI attribute. -->
    <xs:complexType name="Field">
        <!-- Set a field's "const" attribute to specify a fixed value to
             write to a field in the CMDB. A Field can have either
             "const" or "src" set, not both. -->
        <xs:attribute name="const" type="xs:string"/>
        <!-- The field in the query's result set to take this field's
             value from. You can use aliases for long fields. For
             example, use "name" as the src attribute in the following
             query:
                 query host show name
             Or use the alias name "ip_address" as the src attribute in
             the following:
                 query host show #HostWithAddress:HostAddress:AddressOfHost:IpAddress.address as ip_address
        -->
        <xs:attribute name="src" type="xs:string"/>
        <!-- The CMDB class' attribute to write the value to. -->
        <xs:attribute name="dest" type="xs:string" use="required"/>
        <!-- If this field is required in the CMDB, set this value to
             "true". If you set this to "true" and a null value is
             returned from the data store, the sub-CI will be ignored,
             that is, the exporter will not attempt to insert it into the
             CMDB. Failure to do this could cause the CMDB to throw an
             exception when committing the transaction for the main CI.
             You would lose the main CI and all of its sub-CIs instead of
             just the sub-CI with the null field. -->
        <xs:attribute name="required" type="xs:boolean" default="false"/>
        <!-- Does this field form part of the CI's identity? "false" by
             default. If any field is set to true then the transform
             engine will do a lookup in this CMDB class to see if an
             object with the same values for the identity fields exists,
             and use that if one is found. If none is found, this item
             will be inserted. If no fields are marked as identity
             fields, all items will be inserted as new, even if they're
             identical. -->
        <xs:attribute name="identity" type="xs:boolean" default="false"/>
    </xs:complexType>

    <!-- A relationship between the main CI and a sub-CI. -->
    <xs:complexType name="Relationship">
        <xs:sequence>
            <!-- A relationship can have fields in the same way as a CI can. -->
            <xs:element name="field" type="Field" minOccurs="0" maxOccurs="unbounded"/>
        </xs:sequence>
        <!-- The CMDB class name for this relationship. -->
        <xs:attribute name="cmdb-name" type="xs:string" use="required"/>
        <!-- The direction the relationship goes in. For example, if this
             is the dependency relationship from an application to a
             host, then the application is dependent on the host. The
             direction is thus "main-to-sub" if the main CI is the
             application. -->
        <xs:attribute name="direction" type="relationshipDirection" use="required"/>
    </xs:complexType>
    <!-- A mapping for one CI class -->
    <xs:complexType name="CI">
        <xs:sequence>
            <!-- The fields for this CI -->
            <xs:element name="field" type="Field" minOccurs="1" maxOccurs="unbounded"/>
            <!-- How this CI relates to the main CI -->
            <xs:element name="relationship" type="Relationship" minOccurs="0" maxOccurs="1"/>
        </xs:sequence>
        <!-- CI class name in the CMDB -->
        <xs:attribute name="cmdb-name" type="xs:string" use="required"/>
        <!-- Is this the "main" CI? There is one main CI and (optionally)
             many sub-CIs. -->
        <xs:attribute name="main" type="xs:boolean" default="false"/>
        <!-- If true (the default), any existing item found in the CMDB
             based on the identity fields will have its non-identity
             fields overwritten by the fields from the imported item. If
             false, the existing object will be left untouched. This
             value should possibly be set per-attribute. -->
        <xs:attribute name="overwrite-non-id-fields" type="xs:boolean" default="true"/>
        <!-- If true, then all of the fields that make up this CI have to
             be collection fields, and all the collections have to have
             the same length. The transform engine will generate one
             sub-CI per set of values in the collections. If this is
             false and any of the fields for this CI are returned as a
             collection, an error will occur. This is only applicable to
             sub-CIs. Defaults to "false". -->
        <xs:attribute name="collection" type="xs:boolean" default="false"/>
        <!-- Some items exist only in the context of their main CIs. A
             good example is an IP address - the same IP address may
             exist many times on the network, but will only exist once
             per Host. Thus, the IP address is identified by its address
             and its relationship to its main CI, namely the Host. Set
             this attribute to true to tell the lookup to treat the
             relationship to the main CI as part of the identity. -->
        <xs:attribute name="parent-is-identifier" type="xs:boolean" default="false"/>
        <!-- Specifies which field contains a reference to the BMC Atrium
             Discovery node that this CI is an export of. The exporter
             uses this reference to store and retrieve the ID given to
             this CI in the remote system. This ID is then used to
             reconcile the CI against the remote system before resorting
             to lookups based on identity fields.
             In BMC Atrium Discovery QL, you specify a node reference by
             "#". For example, you retrieve some fields and the node
             reference from a Host by:
                 search Host show name, fqdn, # as host_ref
             In this example, you would set the "node-reference-field"
             to "host_ref". -->
        <xs:attribute name="node-reference-field" type="xs:string"/>
    </xs:complexType>

    <!-- The main document element. -->
    <xs:element name="mapping">
        <xs:complexType>
            <xs:sequence>
                <!-- The query to run against the datastore -->
                <xs:element name="query" type="xs:string" minOccurs="1" maxOccurs="1"/>
                <!-- The CIs in this mapping -->
                <xs:element name="ci" type="CI" minOccurs="0" maxOccurs="unbounded"/>
            </xs:sequence>
            <xs:attribute name="description" type="xs:string" default="No description."/>
            <xs:attribute name="delete-kind" type="xs:string"/>
        </xs:complexType>
    </xs:element>
</xs:schema>
Additions_NIC.xml
Host_nodeID, NIC_nodeID
Additions_NIC-15-TW_Additions_NIC.csv
Additions_SI.xml
BAI_nodeID, SI_nodeID
Additions_SI-13-TW_Additions_SI.csv
Additions_Switch.xml
NIC_nodeID, Switch_nodeID
Additions_Switch-16-TW_Additions_Switch.csv
BAI-SI.xml
BAI_nodeID, Application, Location, ExportDateFriendly, ExportDate, bai_LastModifiedFriendly, bai_LastModified SI_nodeID, si_type, si_name, si_version, ExportDateFriendly, ExportDate, si_LastModified, TW_ContainedSI SI_nodeID, ExportDateFriendly, ExportDate, bai_LastModifiedFriendly, bai_LastModified, TW_DEP_SI Host_nodeID, Event Time, Attribute, Value Before, Value After NIC_nodeID, Event Time, Attribute, Value Before, Value After Switch_nodeID, Event Date, Attribute, Value Before, Value After BAI_nodeID, Parent
Changes_Host.xml
Export Host Attribute Changes. Export NIC Attribute Changes. Export Switch Attribute Changes. Export Destroyed BAIs. Export Destroyed Hosts. Export Destroyed NICs. Export Destroyed SIs.
Changes_Host-10-TW_Changes_Host.csv
Changes_NIC.xml
Changes_NIC-11-TW_Changes_NIC.csv
Changes_Switch.xml
Changes_Switch-12-TW_Changes_Switch.csv
Deleted_BAI.xml
Deleted_BAI-17-TW_Deleted_BAI.csv
Deleted_Host.xml
SI_nodeID, Host_nodeID
Deleted_Host-TW_Deleted_Host.csv
Deleted_NIC.xml
Host_nodeID, NIC_nodeID
Deleted_NIC-20-TW_Deleted_NIC.csv
Deleted_SI.xml
BAI_nodeID, SI_nodeID
Deleted_SI-18-TW_Deleted_SI.csv
Deleted_Switch.xml
NIC_nodeID, Switch_nodeID
Deleted_Switch-21-TW_Deleted_Switch.csv
Host-NIC.xml
Host_nodeID, NIC_nodeID, ExportDateFriendly, ExportDate, nic_LastModified NIC_nodeID, NICName, NICIP_Address, NICNetmask, NICBroadcast, NICmac_addr, NICSpeed, NICDuplex, NICNegotiation, NICAdapter, SwitchSpeed, SwitchDuplex, SwitchNegotiation, ExportDateFriendly, ExportDate, nic_LastModifiedFriendly, nic_LastModified, TW_HostInterface Host_nodeID, hostname, fqdn, vendor, model, serial, ram, os, processor NIC_nodeID, Switch_nodeID, ExportDateFriendly, ExportDate, ExportDateFriendly, ExportDate, host_LastModifiedFriendly, host_LastModified SI_nodeID, Host_nodeID, ExportDateFriendly, ExportDate, si_LastModified Switch_nodeID, SwitchName, SwitchDescription, SwitchModel, SwitchOS_Type, SwitchOS_Version, ExportDateFriendly, ExportDate, switch_LastModifiedFriendly, switch_LastModified
Host-NIC-08-TW_Dependancies_HOST-NIC.csv Host-NIC-04-TW_NIC_Node.csv
Host.xml
Export Hosts.
Host-03-TW_Host_Node.csv
Nic-Switch.xml
Nic-Switch-09-TW_Dependancies_NIC-Switch.csv
SI-Host.xml
SI-Host-07-TW_Dependancies_SI.csv
Switch.xml
Switch-05-TW_Switch_Node.csv
The integration exports dependency maps to the Compuware BSM tool, including information about software, hardware, and dependency changes.
Best Practice
This mapping set was developed to serve as the basis of a best practice for exporting BMC Atrium Discovery data to an RDBMS. It is not intended to be the only best practice, nor does it cover all use cases.
The extended RDB mapping set is designed so that it can be used to form a modular export: you can pick any combination of the mapping files in the set to decide what to export. Each mapping file contains a main node and relevant relationships to other nodes. The exception (no relationships) is the Host mapping, as this is expected to form the basis of any export set. As an example, the BAI mapping file exports to a BAI table, as well as to a BAI-to-Host relationships table. If you include only the Host and BAI mapping files, they export all data related to both nodes and the relationship links. If you also include the SI mapping file, it exports the SIs and their relationships to BAIs and Hosts. The idea is to allow flexibility in the export schema while maintaining consistency; if a new node is added to the export, the same process should be applied. All mapping files export only data that has been modified since the last export date. This behaviour can be overridden by clearing the Export Changed Items flag when defining the exporter. The extended RDB mapping set is made up of the mapping files described below.
bai.xml
    Relationships: BAIs; relationships to Hosts; cyclic dependencies to Contained BAIs and DependedOn BAIs.
    Fields: BAI: bai_key, name, description, type, version. Host: key. Cyclic dependency BAIs: related BAI key.
    Output tables: bai_ci, bai_contained_bai_rel, bai_dependedon_bai_rel, bai_host_rel.

host.xml
    Relationships: Hosts, Network Interfaces and related Subnet.
    Fields: Host: host_key, host_hostname, host_name, description, domain, model, serial, ram, max_ram, workgroup, vendor, os, os_edition, os_version, os_eos, os_eoes, num_processors, num_logical_processors, processor_type, processor_speed, cores_per_processor, power_watts, btu_h, u_size, hw_eos, hw_eoes, host_created_date, host_modified_date, last_update_success. Network Interface: interface_key, interface_name, interface_netmask, interface_ipaddress, interface_subnet, interface_speed, interface_duplex, interface_negotiation.
    Output tables: host_ci, host_networkinterface_ci.

switch.xml
    Fields: Switch: switch_key, switch_name, description, status, model, ostype, osversion. Switch Port: switch_port, interface_key, connected_ip_addr, connected_mac_addr, port_speed, port_duplex, port_negotiation, port_description, port_state, port_domain, port_vlan, port_vland_id. Host: key.

si.xml
    Relationships: Software Instances; relationships to Hosts and BAIs; cyclic dependencies to peer-to-peer communicating SIs, client-server communicating SIs, Contained SIs, and DependedOn SIs.
    Fields: Software Instance: si_key, si_name, si_type, si_version, product_version, prod_release, edition, service_pack, build, patch, revision, si_eos, si_eoes. BAI: bai_key. Host: host_key. Cyclic dependency SIs: related SI key.

package.xml
    Relationships: Packages and Hosts with those packages installed.
    Fields: Package: package_key, package_name, package_version, package_revision, package_os, package_created_date, package_modified_date. Host: host_key.
    Output tables: package_ci, package_host_rel.

patch.xml
    Relationships: Patches and Hosts with those patches installed.
    Fields: Patch: patch_key, patch_name, patch_os, patch_created_date, patch_modified_date. Host: host_key.
    Output tables: patch_ci, patch_host_rel.

file.xml
    Relationships: Files and relationships to SIs and Hosts.
    Fields: File: file_key, file_path, file_size, file_md5sum, file_last_modified, file_created_date. Host: host_key. SI: si_key.

cluster.xml
    Relationships: Clusters and relationships to member Hosts.
    Fields: Cluster: cluster_key, cluster_name, cluster_type, cluster_created, cluster_modified. Host: host_key.
    Output tables: cluster_ci, cluster_host_rel.

virtualcontainer.xml
    Relationships: Virtual Hosts and related Physical Hosts.
    Fields: SI running virtualisation software: v_instance_key, v_instance_name, v_product_name, v_product_version, v_product_last_update_success, v_product_created, v_product_modified. Running Host: v_host_key.
    Output tables: virtualcontainer_ci, virtualcontainer_vhost_rel.

hostcontainer.xml
    Fields: Host container: host_container_key, host_container_name, host_container_type, host_container_created, host_container_modified. Running Host: contained_host_key.
    Output tables: hostcontainer_ci, hostcontainer_host_rel.
MySQL
In the MySQL command line client:
    use <databasename>
    source <file>
Oracle
In a command window:
    sqlplus <user>/<password>@<SID> -d <databasename>
    @<filename>
SQL Server
In a command window:
    osql -S <hostname>[,<port>] -U <user> -P <password> -i <file>
Export APIs
The BMC Atrium Discovery Export APIs enable users to interrogate the datastore using a script or program, and receive data back as a stream of text, an empty string, or a return code. The Export APIs provide the following benefits: the ability to automate processes, and the ability to extract data for processing in third-party applications. Two Export APIs are provided: the CSV API and the XML API.
CSV API
The CSV API enables users to interrogate the datastore using a script or program, and receive data back as a stream of CSV (comma separated values) text, an empty string, or a return code.
Usage
The CSV API is intended to be used by a script or program. It can also be accessed using a browser, or using the wget command, though these would generally be used for testing. When using a URL you should ensure that the entire URL is properly encoded. For example, special characters in the password may require encoding. The following section describes the parameters required and values returned when using the CSV API. The query is specified as a URL of the form:
http://appliance/ui/api/CsvApi?query=search_string&username=username&password=password
If you are using an appliance with https redirection enabled, you must use an https URL in your scripts. The redirection script changes the http request and the script will not work. To enable or disable https redirection, contact Customer Support.
https://appliance/ui/api/CsvApi?query=search_string&username=username&password=password
The URL is entered on a single line. In these examples, appliance is the resolvable hostname or IP address of the appliance, search_string is a query string as recognized by the data store, username and password are the user name and password of the BMC Atrium Discovery user.
Security Considerations
During a typical HTTP request to the CSV API, the username and password are visible in plain text and can be recovered in order to gain unauthorised access to the BMC Atrium Discovery user interface. To mitigate this, follow two security best practices: 1. Always use a dedicated, read-only account for queries, and never use a high level account such as 'system'. 2. Always use https access so that the username and password are not in plain text.
HTML URL Encoding When constructing a URL string, care must be taken to properly encode characters that are not allowed in a URL, for example spaces: https://appliance/ui/api/CsvApi?query=Search%20Host%20show%20name&username=username&password=password A reference of encodings can be found here
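For example, Python 3's urllib.parse can build a correctly encoded URL (this uses Python 3, whereas the documented example script uses Python 2; the credentials shown are placeholders):

```python
from urllib.parse import urlencode, quote

# Illustrative: build a correctly percent-encoded CSV API URL. Passing
# quote as the quoting function encodes spaces as %20, matching the
# example above; the default (quote_plus) would use "+" instead.
params = {
    "query": "Search Host show name",
    "username": "username",
    "password": "password",
}
url = "https://appliance/ui/api/CsvApi?" + urlencode(params, quote_via=quote)
print(url)
```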
XML API
The XML API enables users to interrogate the datastore using a script or program, and receive data back as a stream of XML, an empty string, or a return code.
Usage
The XML API is intended to be used by a script or program. It can also be accessed using a browser, or using the wget command, though these would generally be used for testing. The following section describes the parameters required and values returned when using the XML API. The query is specified as a URL of the form:
http://appliance/ui/api/XmlApi?query=search_string&username=username&password=password
If you are using an appliance with https redirection enabled, you must use an https URL in your scripts. The redirection script changes the http request and the script will not work. To enable or disable https redirection, contact Customer Support.
https://appliance/ui/api/XmlApi?query=search_string&username=username&password=password
The URL is entered on a single line. In these examples, appliance is the resolvable hostname or IP address of the appliance, search_string is a query string as recognized by the data store, username and password are the user name and password of the BMC Atrium Discovery user.
Security Considerations
During a typical HTTP request to the XML API, the username and password are visible in plain text and can be recovered in order to gain unauthorised access to the BMC Atrium Discovery user interface. To mitigate this, follow two security best practices: 1. Always use a dedicated, read-only account for queries, and never use a high level account such as 'system'. 2. Always use https access so that the username and password are not in plain text.
HTML URL Encoding When constructing a URL string, care must be taken to properly encode characters that are not allowed in a URL, for example spaces: https://appliance/ui/api/XmlApi?query=Search%20Host%20show%20name&username=username&password=password A reference of encodings can be found here
Parameters
The following parameters are required for each script that interrogates the data store:
1. A valid query string as recognized by the datastore. See the BMC Atrium Discovery Searching and Reporting Service Technical Note for details of the syntax of datastore queries.
2. A valid user name and password. If you do not supply the correct credentials, the query will fail. Note that unless you are using https, user names and passwords sent from a script are transmitted unencrypted.
3. The CSV API accepts an optional Boolean parameter enabling you to specify whether column headings are returned with the CSV data. This parameter is ignored by the XML API. To specify that headings are returned, use headings=1 (the default). To specify that headings are not returned, use headings=0. For example:
http://appliance/ui/api/CsvApi?query=SEARCH%20Host&username=system&password=system&headings=0
Return values
The API returns data and error codes as follows:
- If the query is valid and successful, a block of CSV-formatted text or XML is returned.
- If the query is valid but unsuccessful, that is, the datastore does not contain any matching data, an empty string is returned.
- If the query is empty, an error code of 1 is returned.
- If the CSV or XML formatter is not found, an error code of 2 is returned. If you receive this error, contact Customer Support.
- If there is a problem running the query, an error code of 3 is returned. Check the syntax of the query by referring to the BMC Atrium Discovery Searching and Reporting Service Technical Note.
- If the login fails, an HTTP server response code of 401 (Unauthorised) is returned.
If the query takes longer to run than the appliance web server timeout, then no output is received by the script. By default, the timeout is ten minutes. If you need to increase the appliance web server timeout, contact Customer Support. You may also experience client-side timeouts. See No client timeout for further information.
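A calling script can branch on these values. The dispatcher below is an illustrative sketch; the bodies "1", "2", and "3" are the documented error codes, which the API returns as the response body.

```python
# Illustrative: interpret the body returned by the CSV or XML API
# according to the documented return values.

def interpret(body):
    if body == "":
        return "valid query, no matching data"
    if body == "1":
        return "empty query"
    if body == "2":
        return "formatter not found - contact Customer Support"
    if body == "3":
        return "problem running the query - check its syntax"
    return "data"

print(interpret("hostname\nwebserv01"))  # data
print(interpret("3"))                    # problem running the query - check its syntax
```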
Testing
Browser-Based Testing
You can test the BMC Atrium Discovery Export APIs using a browser. Enter the following URL in a browser, replacing appliance with the resolvable hostname or IP address, using the required CsvApi or XmlApi label, and using the appropriate user name and password for the appliance that you are connecting to. For example using the CSV API:
http://appliance/ui/api/CsvApi?query=SEARCH%20Host&username=system&password=system
A dialog is displayed. Click the Open button to view that data in the application registered for CSV files. Click the Save button to save the data. The default file name is CsvApi. For example using the XML API:
http://appliance/ui/api/XmlApi?query=SEARCH%20Host&username=system&password=system
The results are displayed in the browser. Use the File > Save menus to save the output as an XML file.
The wget command returns the HTTP status code; for a successful query this is 200 OK. The data returned by the query above is written to a file in the current working directory, which for the CSV API is called:
CsvApi?query=SEARCH%20Host\&username=system\&password=system
import urllib

def runQuery():
    # Successful output
    query = "http://appliance/ui/api/CsvApi?query=SEARCH%20Host&username=system&password=system"
    f = urllib.urlopen(query)
    print f.read()

    # Empty query - output will be 1.
    query1 = "http://appliance/ui/api/CsvApi?username=system&password=system"
    f = urllib.urlopen(query1)
    print f.read()

    # Invalid query - output will be 3.
    query3 = "http://appliance/ui/api/CsvApi?query=SEARCHX%20Host&username=system&password=system"
    f = urllib.urlopen(query3)
    print f.read()

    # Authorisation failure - output server code of 401.
    query4 = "http://appliance/ui/api/CsvApi?query=SEARCH%20Host&username=system&password=system1"
    f = urllib.urlopen(query4)
    print f.read()

if __name__ == '__main__':
    runQuery()
With no exception handler around Query4, a 401 Unauthorised IOError traceback is produced.
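In Python 3 (the example above uses the legacy Python 2 urllib), the 401 can be caught explicitly rather than allowed to raise. This sketch is illustrative; the injectable opener parameter is an addition to keep it self-contained and testable, and the commented URL is a placeholder.

```python
from urllib.error import HTTPError
from urllib.request import urlopen

# Illustrative: handle the 401 Unauthorised response instead of letting
# the traceback propagate. "opener" is injectable purely for testing.

def fetch(url, opener=urlopen):
    try:
        with opener(url) as response:
            return response.read().decode()
    except HTTPError as e:
        if e.code == 401:
            return "authorisation failure (401)"
        raise

# fetch("https://appliance/ui/api/CsvApi?query=SEARCH%20Host"
#       "&username=system&password=system1")
```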
Example queries
Empty query
The following example returns an error code of 1; this is the result of an empty query. The text to be appended to the command lines for this example is:
username=system\&password=system
In this case, the query=SEARCH%20Host string is missing. Use the cat command to view the output file; it contains only the returned error code: 1.
No client timeout
You may experience client-side timeouts; in this case you will encounter an unexpected end of file in the output. To avoid this, increase the client timeout to a value in excess of the web server timeout, or disable it entirely. The following example shows how to disable the wget timeout. If the query takes longer to run than the appliance web server timeout, no output will be received.
$ wget --timeout=0 http://appliance/ui/api/...
Invalid query
The following example returns an error code of 3; this is the result of an invalid query. The text to be appended to the command lines for this example is:
query=SEARCHX%20Host\&username=system\&password=system
The query string now uses SEARCHX instead of SEARCH. This query fails because there is no SEARCHX command defined in the query language. Use the cat command to view the output file; it contains only the returned error code: 3.
Authorisation failure
The following example shows an authorisation failure; this is the result of an invalid password. The text to be appended to the command lines for this example is:
query=SEARCH%20Host\&username=system\&password=system1
The query string now uses a password of system1 instead of system. This causes an authorisation failure, returning a server response code of 401. This can be seen on the terminal from which the command was issued.
Legal Information
Copyright 2012 BMC Software, Inc. BMC, BMC Software, and the BMC Software logo are the exclusive properties of BMC Software, Inc., are registered with the U.S. Patent and Trademark Office, and may be registered or pending registration in other countries. All other BMC trademarks, service marks, and logos may be registered or pending registration in the U.S. or in other countries. All other trademarks or registered trademarks are the property of their respective owners. The information included in this documentation is the proprietary and confidential information of BMC Software, Inc., its affiliates, or licensors. Your use of this information is subject to the terms and conditions of the applicable End User License agreement for the product and to the proprietary and restricted rights notices included in the product documentation.
BMC Software, Inc., 5th Floor, 157-197 Buckingham Palace Road, London SW1W 9SP, United Kingdom. Tel: +44 (0) 800 279 8925. Email: customer_support@bmc.com. Web: http://www.bmc.com/support
Table of Contents
1 BMC Atrium Discovery Configuration Guide 2 Table of Contents 3 Configuration guide 3.1.1 Download a PDF version of the BMC Atrium Discovery Configuration Guide 3.2 Navigating the administration interface 3.2.1 Common administration operations 3.3 Configuring after installation 3.3.1 Configuring appliance settings 3.4 Usage data submission 3.5 Anonymity 3.6 FAQ 3.6.1 Why? 3.6.2 What is and what can be sent to BMC? 3.6.3 Isn't this a half open backdoor that could be exploited further? 3.6.4 How can I ensure that this functionality is not switched on by mistake? 3.7 Managing usage data collection 3.8 Activating and deactivating the Usage Data Collection feature 3.8.1 Initial activation 3.8.2 Subsequent activation and deactivation of the Usage Data Collection 3.9 Setting the keyboard layout 3.10 Setting the system timezone 3.11 Setting the system time
3.11.1 Configuring appliance networking 3.12 To set the hostname locally 3.13 To set the hostname using DHCP/DNS 3.14 Diagnosing hostname problems 3.14.1 Visualizations and export do not display 3.14.2 Cannot access the UI - 500 internal server error 3.15 Obtaining networking information 3.16 Stopping the Tideway services 3.17 Editing the network configuration files 3.18 Restarting networking 3.19 Restarting the Tideway services 3.20 Troubleshooting 3.21 Installing and configuring the Tectia SSH client 3.21.1 Configuring discovery to use the Tectia SSH client 3.21.2 Creating a bond point called eth0 3.22 Managing standard data 3.22.1 Setting up standard data categories 3.23 Viewing the standard categories 3.23.1 Accessing data in the standard categories 3.23.2 Managing attachment categories 3.23.3 Managing product families 3.23.4 Managing lifecycle status values 3.23.5 Managing locations 3.23.6 Managing organizational units 3.23.7 Managing Recovery Time Values 3.23.8 Purging history and destroyed nodes 3.24 History 3.25 Destroyed nodes 3.25.1 Viewing the system taxonomy 3.25.2 Viewing the system taxonomy 3.25.3 Storage and modification of the taxonomy 3.26 Managing users and security 3.26.1 Managing system users 3.26.2 Enabling other users 3.26.3 To enable other users 3.26.4 Creating a new user 3.26.5 Amending a user's details 3.26.6 Changing a user's password 3.26.7 Reactivating a user account 3.26.8 Deleting a user 3.26.9 User permissions 3.26.10 Managing groups 3.26.11 Listing all current groups 3.26.12 To create a new group 3.26.13 To amend group details 3.26.14 To delete a group 3.26.15 Group permissions 3.26.16 System group permissions by category 3.26.17 Managing security policies 3.26.18 Accounts and passwords 3.26.19 Login page 3.26.20 UI security page 3.26.21 Configuring HTTPS settings 3.26.22 Generating a server key 3.26.23 Uploading a server certificate 3.26.24 Self signing a server certificate 3.26.25 Uploading or downloading a CA certificate bundle 
3.26.26 Enabling or disabling HTTP and HTTPS access to the appliance 3.26.27 Managing LDAP 3.27 LDAP Terms 3.27.1 The login procedure 3.27.2 The Global Catalog 3.28 Configuring LDAP 3.28.1 Changing from LDAPS to LDAP 3.29 LDAP group mapping 3.30 Troubleshooting 3.30.1 Configuring Web authentication settings 3.30.2 Configuring SSL client certificate verification 3.30.3 SSL certificate lookup 3.30.4 Standard Atrium Discovery web authentication 3.30.5 Viewing active sessions 3.30.6 Auditing the appliance 3.30.7 Reporting on audit events 3.30.8 Event groups 3.30.9 Purging the audit Log
3.31 Maintaining the appliance 3.31.1 Using maintenance mode 3.31.2 Putting the appliance into maintenance mode 3.31.3 Rebooting the appliance 3.31.4 Shutting down the appliance 3.31.5 Rebooting or shutting down the appliance 3.32 Controlling the appliance 3.32.1 Restarting the services 3.32.2 Rebooting the appliance 3.32.3 Shutting down the appliance 3.32.4 Putting the appliance into maintenance mode 3.32.5 Baseline configuration 3.33 To view the appliance status 3.33.1 Configuring appliance status options 3.33.2 Configuring actions on changing appliance status 3.33.3 Checks performed 3.33.4 Tripwire commissioning and configuration 3.33.5 Tripwire maintenance 3.33.6 Configuring dependency visualizations 3.33.7 Configuring model maintenance settings 3.33.8 Modifying DDD, host, and software instance aging limits 3.33.9 When should DDD removal blackout windows be used? 3.33.10 Dark space DiscoveryAccess nodes 3.33.11 When should DDD removal blackout windows be used? 3.33.12 Configuring audit and application options 3.33.13 Visualization cache 3.33.14 Configuring the NTP client 3.33.15 Rebaseline the appliance after configuring the NTP client 3.33.16 Using command line utilities 3.34 Duplicate or enhanced functionality in user interface 3.35 Common options in the utilities 3.35.1 User example 3.35.2 User examples 3.35.3 User example 3.35.4 User example 3.35.5 User example 3.35.6 Cron overview 3.35.7 Managing cron entries 3.35.8 User example 3.35.9 User example 3.35.10 User example 3.35.11 User example 3.35.12 User example 3.35.13 User example 3.35.14 User examples 3.35.15 User examples 3.35.16 User examples 3.35.17 User example 3.35.18 User example 3.36 Effect on CMDB synchronization 3.36.1 User example 3.36.2 User examples 3.36.3 User example 3.36.4 Background 3.36.5 To use the utility 3.36.6 User example 3.36.7 User examples 3.36.8 User examples 3.36.9 User example 3.36.10 User example 3.36.11 User example 3.36.12 User example 3.36.13 User example 3.36.14 User 
example 3.36.15 User example 3.36.16 User example 3.36.17 Backing up and restoring the appliance 3.36.18 To create a backup of the appliance 3.36.19 To restore a backup to the appliance 3.36.20 Managing disks and swap space 3.36.21 To move data to a new disk 3.37 Configuring discovery 3.37.1 Configuring discovery settings 3.38 To set Discovery settings 3.38.1 Port settings
3.38.2 Device identification settings 3.38.3 Session settings 3.38.4 Scanning settings 3.38.5 SQL integration settings 3.38.6 Mainframe discovery 3.38.7 Other discovery settings 3.38.8 Masking sensitive data 3.39 Managing Sensitive Data Filters 3.40 Creating a Sensitive Data Filter 3.41 Notes on Sensitive Data Filters 3.41.1 Managing the discovery platform scripts 3.42 Viewing the existing scripts 3.43 Amending the existing scripts 3.43.1 Additional discovery 3.43.2 Returning command exit status data 3.43.3 Adding privileged execution to commands 3.43.4 Password prompt in privileged command execution 3.43.5 Privileged commands in Solaris 3.43.6 AIX 3.43.7 FreeBSD 3.43.8 HPUX 3.43.9 IRIX 3.43.10 Linux 3.43.11 Mac OS X 3.43.12 NetBSD 3.43.13 OpenBSD 3.43.14 OpenVMS 3.43.15 POWER HMC 3.43.16 Solaris 3.43.17 Tru64 3.43.18 UnixWare 3.43.19 VMware ESX 3.43.20 VMware ESXi 3.43.21 Setting up ports for OS fingerprinting 3.44 To set up port settings 3.44.1 Using scanner files 3.44.2 Process for using scanner files 3.44.3 Creating a scanner file 3.44.4 Loading the scanner file onto the appliance 3.44.5 Considerations when using scanner files 3.44.6 Accessing and downloading the tool 3.44.7 Using the tool 3.44.8 Uploading the data to the appliance 3.44.9 Improving discovery 3.45 Viewing all current Discovery Conditions 3.46 Viewing all current Discovery Conditions from a host view 3.47 Viewing hosts with Discovery Conditions logged 3.47.1 Discovery Condition Detailed List report 3.47.2 Hosts impacted by Discovery Conditions report 3.48 Understanding Discovery Conditions 3.48.1 Integration points 3.49 What is an integration point? 3.50 Viewing integration points 3.50.1 Viewing a connection 3.50.2 Viewing a query 3.50.3 Creating a connection 3.50.4 Testing a query 3.51 To configure a connection to a DB2 database 3.51.1 Technology Knowledge Update (TKU) 3.51.2 Consolidation 3.51.3 What is consolidated? 
3.51.4 Configuring consolidation 3.51.5 When consolidation is running 3.51.6 Cancelling consolidating discovery runs 3.51.7 Extending discovery 3.52 Configuring mainframe discovery 3.52.1 Mainframe discovery limitations 3.52.2 Required z/OS Discovery Agent version 3.52.3 Optional Additional Discovery Tools 3.52.4 Credentials 3.52.5 Enabling additional mainframe methods 3.53 Mainframe discovery and CMDB synchronization 3.54 Discovering Tomcat 3.54.1 Requirements for a full discovery 3.54.2 Tomcat Discovery results
3.54.3 Improvements over extended discovery using JMX 3.55 Discovering WebSphere 3.55.1 Requirements for a full discovery 3.55.2 Database nodes and relationships 3.55.3 WebSphere Discovery results 3.56 Discovering WebLogic 3.56.1 Requirements for a full discovery 3.56.2 Database nodes and relationships 3.56.3 Supported product versions 3.56.4 Extended WebLogic discovery results 3.57 Configuring extended WebLogic discovery 3.57.1 To configure extended WebLogic discovery 3.58 Requirements 3.59 Configuration in BMC Atrium Orchestrator 3.60 IBM Virtual I/O (VIO) Server discovery 3.61 Hardware Management Console (HMC) discovery 3.62 Example visualization 3.63 VMware ESX and ESXi 3.63.1 VMware vCenter 3.63.2 VMware vSphere API 3.63.3 Discovered versions 3.64 Discovering VMware ESX and ESXi hosts 3.65 Discovery of vCenter 3.66 Discovery of VMware ESX and ESXi hosts via vCenter 3.66.1 Intermittent retrieval of serial number (ServiceTag) 3.67 VMware vSphere based discovery 3.67.1 VMware ESX and ESXi ssh discovery must be done as the root user 3.67.2 VMware ESX and ESXi discovery limitation 3.67.3 Credentials 3.67.4 About credential storage 3.67.5 User accounts on the target system 3.68 To view login credentials 3.69 To add host login credentials 3.70 To test host login credentials 3.71 To edit host login credentials 3.71.1 Fixing invalid credentials 3.72 Windows proxy 3.73 Windows proxy manager 3.74 Windows proxy pool 3.75 Operating System compatibility for IPv6 discovery 3.76 Steps to configure Windows discovery 3.77 Windows discovery utilities no longer shipped in BMC Atrium Discovery 3.78 Before you begin 3.78.1 Permissions required 3.78.2 Windows proxy version and operating system compatibility 3.78.3 Downloading a Windows proxy installer 3.78.4 Installing the Windows proxy manager and proxies 3.79 Post installation settings 3.79.1 To modify the Windows proxy host firewall 3.79.2 To specify the domain account used to run the Windows proxy 3.79.3 To start or stop the 
Windows proxy 3.79.4 To test Windows credentials and communication 3.79.5 Upgrading a Windows proxy 3.80 Windows proxy downgrade 3.80.1 To run the Windows proxy manager 3.80.2 Managing proxies 3.80.3 Managing known appliances 3.80.4 Additional tasks 3.81 Adding a Windows proxy pool 3.82 Viewing Windows proxy pools 3.83 Editing Windows proxy pools 3.84 Managing proxies using the Windows proxy manager 3.85 Managing Windows proxies from the UI 3.86 Adding Windows proxies 3.87 Viewing Windows proxies 3.88 Editing and managing Windows proxies 3.89 To change the default port of the proxy 3.89.1 To change the default port of an Active Directory Windows proxy 3.89.2 To change the default port of a Credential Windows proxy 3.90 Configuring the Appliance for SSL Communications 3.91 To stop a Windows proxy from the appliance command line 3.92 Windows proxy platform minimum specification 3.93 Windows proxies on Windows XP SP2 3.93.1 Adapters section 3.93.2 Search expression
3.93.3 Search parameter 3.93.4 Speed Duplex and Negotiation Determination 3.93.5 Example Mapping 3.93.6 Access methods 3.93.7 Optional commands 3.93.8 Commands to run 3.93.9 Checksum 3.93.10 Example Windows proxy configuration file part 3.94 Viewing SNMP credentials 3.95 To add or edit SNMP credentials 3.95.1 SNMP v3 permissions 3.96 To test SNMP credentials 3.97 To view vCenter credentials 3.98 To set up vCenter credentials 3.99 Testing vCenter credentials 3.100 To view vSphere credentials 3.101 To set up vSphere credentials 3.101.1 Required vSphere privileges 3.102 To test vSphere credentials 3.102.1 Testing credentials for an arbitrary IP Address 3.102.2 Testing existing login credentials from the Host page 3.103 Windows discovery notes 3.103.1 Potential user lock out 3.103.2 Firewalls 3.103.3 Windows Vista and Windows 7 3.103.4 Windows Server 2008 and Windows Server 2008 R2 3.103.5 Windows 2012 Server 3.103.6 Windows Server 2003 Standard Edition Domain Controller 3.103.7 Windows 2000 and Windows NT 3.103.8 Windows discovery using IPv6 3.104 Windows discovery commands 3.104.1 WMI 3.104.2 RemQuery 3.104.3 SNMP 3.104.4 WMI Access and discovery behavior 3.104.5 RemQuery access and discovery behavior 3.104.6 Granting permissions 3.105 AIX 3.105.1 Shell scripts 3.105.2 SNMP 3.106 FreeBSD 3.106.1 Shell scripts 3.106.2 SNMP 3.107 HPUX 3.107.1 Shell scripts 3.108 SNMP 3.109 IRIX 3.109.1 Shell scripts 3.109.2 SNMP 3.110 Linux 3.110.1 Shell scripts 3.110.2 SNMP 3.111 Mac OS X 3.111.1 Shell scripts 3.111.2 SNMP 3.112 NetBSD 3.112.1 Shell scripts 3.112.2 SNMP 3.113 OpenBSD 3.113.1 Shell scripts 3.113.2 SNMP 3.114 OpenVMS 3.114.1 Shell scripts 3.114.2 SNMP 3.115 POWER HMC 3.115.1 Shell scripts 3.115.2 SNMP 3.116 Solaris 3.116.1 Shell scripts 3.116.2 SNMP 3.117 Tru64 3.117.1 Shell scripts 3.117.2 SNMP 3.118 UnixWare 3.118.1 Shell scripts
3.118.2 SNMP 3.119 VMware ESX 3.119.1 vSphere 3.119.2 Shell scripts 3.120 VMware ESXi 3.120.1 vSphere 3.120.2 Shell scripts 3.121 To identify matching credentials 3.122 What is a Database Credential Group? 3.122.1 To configure a Database Credential Group 3.123 What is a Middleware Credential Group? 3.123.1 Configuring a Middleware Credential Group 3.123.2 Viewing vCenter credentials 3.123.3 Viewing vCenter credential details 3.123.4 Adding vCenter credentials 3.123.5 Updating vCenter credentials 3.123.6 Testing vCenter credentials 3.123.7 To view vSphere credentials 3.123.8 To view VSphere credential details 3.123.9 To add vSphere credentials 3.123.10 To update vSphere credentials 3.123.11 To test vSphere credentials 3.123.12 Viewing mainframe credentials 3.123.13 To configure mainframe credentials 3.123.14 The mainframe credential test page 3.123.15 Testing mainframe credentials for an arbitrary IP address 3.124 To manage the credential vault 3.124.1 Setting a passphrase 3.124.2 Changing a passphrase 3.124.3 Clearing a passphrase 3.124.4 Opening the credential vault 3.124.5 Closing the credential vault 3.124.6 Verifying the copied credentials vault 3.125 To view a DiscoveryAccess page 3.125.1 From a host node 3.125.2 From the Discovery Recent Runs page 3.125.3 Troubleshooting using session results 3.125.4 Checking credentials after a failure 3.125.5 Pattern management 3.126 Viewing the pattern module 3.126.1 Pattern module information 3.126.2 Pattern configuration 3.126.3 Editing a pattern module source 3.127 To select hosts or other nodes 3.127.1 Node types against which patterns can be run 3.128 To run a pattern 3.129 Importing data 3.129.1 Importing network device data 3.130 Importing data from CiscoWorks 3.131 Importing a CiscoWorks data file 3.132 Generating CiscoWorks data 3.132.1 Generating CSV files 3.132.2 Generating XML import files 3.132.3 Importing Hardware Reference Data 3.132.4 Importing a Hardware Reference Data file 3.132.5 Importing CSV data 3.133 
To import a CSV file 3.134 Notes on the CSV importer 3.134.1 File format 3.134.2 Keys 3.134.3 Relationships 3.134.4 Type conversion 3.135 Example 1 3.136 Example 2 3.137 Context-sensitive reporting and linking 3.137.1 Configuring additional links 3.137.2 Configuration files 3.137.3 Link tags 3.137.4 Tags available 3.137.5 Example 00additional_links.xml file 3.137.6 Custom reporting 3.137.7 Configuration files 3.137.8 <report> elements 3.137.9 <chart> elements 3.137.10 <chart-channel> elements
3.137.11 <chart-multi-channel> elements 3.137.12 <report-channel> elements 3.137.13 <rss-channel> elements 3.137.14 <summary-channel> elements 3.137.15 <time-series-channel> elements 3.137.16 <video-channel> elements 3.137.17 <web-channel> elements 3.137.18 <page> elements 3.137.19 Built-in channels 3.137.20 Built-in page names 4 Legal Information
Configuration guide
The BMC Atrium Discovery Configuration Guide is intended for IT operations staff and managers who are using or setting up and managing BMC Atrium Discovery. It describes, for example, how to view details of your IT systems, both from an infrastructure and from a business perspective, how to add to and amend information in the database, and how to attach existing files to a data object (for example, for disaster recovery purposes).
Goal: Configure and update the BMC Atrium Discovery appliance. View standard data structures.
Related information: Setting up, Managing standard data, Users and security, Maintaining the appliance, Discovery.
Benefit: Be able to configure the appliance the first time you use it. Set up data specific to your organization (for example, add business information to the data automatically discovered by BMC Atrium Discovery). Be able to quickly configure users and security settings to use in BMC Atrium Discovery. Be able to configure the way the system is monitored and audited through the user interface or by using command line utilities. Understand how to set up, run, and monitor the discovery process to optimize BMC Atrium Discovery in your environment. Understand the import process, and how to export BMC Atrium Discovery data to other applications (for example, Microsoft Excel). Understand the reporting features of BMC Atrium Discovery and how to search for specific data to include in reports.
Set up and configure system users, including creating new users, amending and deleting users, and setting up passwords and security settings. Set up maintenance mode, take appliance backups, or perform command-line configuration. Set up initial data and run the discovery process.
Importing data
Platforms: configure the commands used for each target operating system. This is described in Managing the discovery platform scripts.
SNMP Devices: view the network devices which can be discovered by BMC Atrium Discovery. This is described in Viewing SNMP devices.
Sensitive Data Filters: mask any sensitive data which may otherwise be seen in the command output. See Masking sensitive data.
Discovery Configuration: configure the Discovery process. This is described in Configuring discovery.
Discovery Consolidation: configure consolidation of discovery data between appliances. This is described in Consolidation.
Ciscoworks Import: import switch data from CiscoWorks. This is described in Importing network device data.
Vault Management: manage the Credential Vault. This is described in Managing the credential vault.
Credential Migration: migrate UNIX, Windows and SNMP credentials from a BMC Atrium Discovery 7.5 system to an 8.3 system. This is described in Migrating credentials.
Device Capture: capture an SNMP device using BMC Atrium Discovery to dump the MIB of an SNMP agent, which is then used to request that support be included in BMC Atrium Discovery for that SNMP device. This is described in Capturing SNMP devices.
Security: provides links for the pages used to configure the security of the appliance and users. For more information, see Managing users and security.
Users: add, remove and modify users. This is described in Managing system users.
Groups: add, remove and modify groups. This is described in Managing groups.
Security Policy: configure the appliance security options. These are described in Managing security policies.
HTTPS: configure the HTTPS settings for the appliance. This is described in Configuring HTTPS settings.
LDAP: view information about integrating BMC Atrium Discovery with LDAP. This is described in Managing LDAP.
Web Authentication: use Web Authentication plugins to authenticate user logins. This is described in Configuring Web authentication settings.
Active Sessions: view all users who are currently logged in. This is described in Viewing active sessions.
Audit: configure the appliance audit feature. This is described in Auditing the appliance.
Appliance: provides links for the configuration pages for the appliance. The pages related to the initial set up of the appliance are described later in this section. These are:
Configuration: configure identification and network settings.
Control: restart services, reboot or shut down the appliance, and put the appliance into maintenance mode. This is described in Maintaining the appliance.
Logs: manage logs and log levels. This is described in Log files.
Backup & Restore: operate the appliance backup feature that enables you to back up the appliance. This is described in Backing up and restoring the appliance.
Baseline status: check and configure the appliance baseline. This is described in Baseline configuration.
Performance: view charts showing the appliance performance over the last 30 days. This is described in Appliance performance.
Support Services: create an archive of diagnostic information for customer support. This is described in Collecting additional data for support cases.
Miscellaneous: configure miscellaneous settings.
JDBC Drivers: configure JDBC drivers that are available for Database Discovery and for Export. This is described in Uploading new JDBC drivers.
Model: provides links to the pages used to configure and maintain the model, and import and export data.
View Taxonomy: view the system taxonomy. This is described in Viewing the system taxonomy.
Model Maintenance: configure model maintenance settings. This is described in Configuring model maintenance settings.
Custom Categories: set up data categories. This is described in Setting Up Standard Data Categories.
CMDB Sync: configure scheduled export synchronizations with BMC Atrium CMDB. This is described in Preparing BMC Atrium CMDB for synchronization.
Export: perform export functions. This is described in Introduction to BMC Atrium Discovery Export.
HRD Import: import Hardware Reference Data. This is described in HRD Import.
Knowledge Update: upload and install the latest Knowledge Update. This is described in Uploading and activating a TKU.
CSV Import: import generic data in CSV format. This is described in CSV Import.
Application Mapping Import: import application mapping definitions. This is described in Importing application mapping definitions.
Search Management: view any search in progress, and depending on your privileges, cancel searches. This is described in Using the Search service.
Channels: manage existing channels and create new channels.
Viewing network interface and routing settings Configuring name resolution settings Configuring mail settings Modifying JVM settings Configuring usage data collection Localizing the appliance
3. In the Name field, type the name that you want displayed over the colored banner. The selected banner color and name persist so that all pages that you display in the user interface use the same scheme. The banner color also displays on the login page.
Consolidation and Scanning Appliances
You cannot change the name of an appliance if it is configured as a consolidation or scanning appliance. See Consolidation for more information.
To view or edit the appliance identification:
1. Click Administration.
2. From the Appliance section, click Configuration > Identification. The Identification page displays details of the appliance identity, support information, software and hardware.
3. Edit the fields that you want to change. The following fields can be edited:
Appliance Name: The name of the appliance.
Description: A description of the appliance.
Admin Email: An e-mail address for the person or group responsible for the administration of the appliance.
Banner color: The color for the top banner in the user interface. Setting a banner color makes it easy for users in the field to identify the appliance they are using for various purposes (for example, development, test, and production environments). The appliance name is displayed on the colored banner. See Configuring the banner color for more information.
Support
Support URL: The URL to use in the help drop down.
Support URL Title: The title for the support URL.
Support URL Label: The label for the support URL.
Support URL Description: The descriptive text label for the support URL.
If the appliance does not have sufficient resources for one or more classes of appliance deployment, the following types of messages are displayed: Message at the top of the application specification table: Specifies which resources do not meet the minimum resource specification. For example, if sufficient resource is not available for DISK, the warning message Below minimum specification (Disk Capacity) is displayed. Warning message at the bottom of the application specification table: Specifies that the appliance has insufficient resources and points to the documentation link for information about hardware specifications. In the application specification table, the classes of the appliance deployment that do not meet the minimum hardware specification are highlighted in red. The classes of appliance deployment that meet or exceed the respective minimum hardware specification are highlighted in green. See the Configuring the Virtual Appliance section for more information about classes of appliance deployment and related topics.
5. default is 5 seconds. Larger values can affect Discovery performance and are not recommended. 6. From the Retries drop-down, select the required number of retries after DNS look up failure. This can be any value between 1 and 5. The default is 2. Larger values can affect Discovery performance and are not recommended. 7. Click Apply to save the changes.
If you configure the appliance mail server settings for an invalid mail server, using mail.send() in a pattern introduces a delay of approximately three minutes while the appliance attempts to contact the SMTP server.
To postpone the submission for a day, click not today. Click more info to pause the countdown and show the data to be submitted.
Anonymity
The data sent to BMC Software is totally anonymous. An appliance ID is used, but the appliance cannot be identified from the appliance ID; it is only used to group submissions from the same appliance. The webserver to which the data is sent is configured not to record the source IP address of submitted data.
FAQ
Why?
BMC Atrium Discovery is a high risk system that has access to some very sensitive data in my data center - why does BMC have functionality to share information from BMC Atrium Discovery across the internet? The critical point to notice here is that BMC Atrium Discovery is not sharing any sensitive data: there are no details in the data about your environment or IT, and there is nothing in the data that can identify its source. Even the web servers at BMC that receive the submissions of the data are configured to discard (that is, to not log) the originating IP addresses, so that we at BMC would be unable to tie a submission to the customer that submitted it even if we wanted to. The reason that we have this functionality is so that we can see how our customers use the software, and how best to make improvements to it - this is to your direct benefit, as we can ensure we cater to environments of your size.
Submission Data
Displays the data submitted by the appliance. The submitted data includes the following:
Request Type: The type of the request. This field is displayed as ADDM Usage Data.
Request Id: The appliance ID. The appliance cannot be identified from the ID; it is only used to group submissions from the same appliance.
Product: Name of the product used. The product name is displayed as Atrium Discovery and Dependency Mapping.
Edition: The edition of BMC Atrium Discovery used (for example, Enterprise edition or Community edition).
Version: The version number of the BMC Atrium Discovery product.
Release: The corresponding release number.
Operating System: The operating system in use.
Timestamp: The time stamp of the request.
Kickstart Version: The kickstart version number.
OS Revision: The OS revision number.
Kickstart Foundation Version: The kickstart foundation version number.
TKU Packages: Active TKU packages.
Node Counts: The number and type of nodes.
Relationship Counts: The number and type of relationships.
Unknown Devices: Reported SNMP SysOID and SysDescr of any unknown devices. In version 9.0 SP1 and later, the count of unknown devices is also reported.
LDAP Enabled: Whether LDAP is enabled.
SI Count: Count of SIs in the datastore.
Versioned SI Count: Count of versioned SIs in the datastore.
Report Usage: The reports that have been used. The actual data in the reports is not submitted.
Disk Sizes: A list of disk and filesystem sizes. The output of the df -h command.
Datastore size: The size of the datastore on disk. The output of du -hLc on the tideway.db directory.
Credential Counts: The types and numbers of configured credentials.
Scheduled Runs: The number of scheduled discovery runs configured.
Weekly Logins: The total number of logins for the week and the number of unique users.
Local Users Count: The number of locally configured (non-LDAP) users.
CMDB Sync Enabled: Whether CMDB Sync is enabled.
CMDB Sync Data Model: The CMDB data model used for CMDB Sync.
CMDB Sync Counts: The total number of CIs and relationships pushed to CMDB.
SNMP Recognition Rules: If enabled (see below), the number of SNMP recognition rules and the actual rules configured.
Consolidation Role: Whether the appliance is in a consolidating or scanning role.
Consolidation ID: If it is part of a consolidating system (a scanning or consolidation appliance), the consolidation ID.
Consolidating To: If it is a scanning appliance, the IDs of the consolidation appliances it is sending data to.
Consolidating From: If it is a consolidating appliance, the appliance IDs of the scanning appliance or appliances.
Include SNMP Recognition Rules: Select this checkbox if you want to submit data on SNMP recognition rules.
The page also shows the previous ten data submissions with links to the corresponding audit reports:
Submission Time: The submission time in the Day Month Date Hour Minutes Seconds Year format. To view the audit report of a submission, click the corresponding submission time entry.
User: The user who was logged in when the data was submitted. To view the audit report of a submission, click the corresponding user entry.
To activate usage data collection, click Activate. To deactivate usage data collection, click Deactivate. To go back to the main Administration tab, click Cancel.
After you have determined that the layout works correctly, you should make the change permanent. To do so, change the KEYTABLE, MODEL, and LAYOUT variables in the /etc/sysconfig/keyboard file. For example, to change the keyboard layout to a US layout, use the following:
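The file contents announced above were not reproduced here, so the following is a sketch of what /etc/sysconfig/keyboard typically contains for a US layout. The MODEL value shown is an assumption for a standard 105-key PC keyboard; check the values already present in your file before editing:

```shell
# Sketch of /etc/sysconfig/keyboard for a US layout.
# The MODEL value is an assumption; keep whatever your file already has.
KEYTABLE="us"
MODEL="pc105+inet"
LAYOUT="us"
```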
The keyboard mapping files can be found in /lib/kbd/keymaps/i386/ but usually you can use the 2-letter ISO Country Code. See the ISO website to find the code for the country you require. For example, us (United States), uk (United Kingdom), de (Germany), and no (Norway).
You can set the time using the date command. For example, to set the current date to ten past twelve on 13 October 2009, enter the following command:
[root@london01 ~]$ date -s "12:10:00 20091013"
Tue Oct 13 12:10:00 BST 2009
[root@london01 ~]$
The format for the date string is HH:MM:SS YYYYMMDD. You can also configure the appliance to synchronize its internal clock to an NTP server. See Configuring the NTP client for more information.
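To check the string format without changing the clock, GNU date (as used on the appliance) can render a fixed date in the same HH:MM:SS YYYYMMDD shape; the date value below is just an example:

```shell
# Format a fixed reference date in the HH:MM:SS YYYYMMDD form expected by
# `date -s`. -u works in UTC; -d parses a date without setting the clock.
date -u -d "2009-10-13 12:10:00" +"%H:%M:%S %Y%m%d"
# prints: 12:10:00 20091013
```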
Do not change the appliance time to an earlier setting
After BMC Atrium Discovery has been running and has created nodes in the datastore, you must not change the time to an earlier setting. The transaction scheme in the datastore is based on timestamps; setting an earlier time makes data appear out of date, causing many transactions to fail.
This section describes how to set the appliance hostname and to ensure that it is locally resolved even if the IP address of the appliance changes. You can set the hostname either locally, or using DHCP/DNS.
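Setting the hostname locally can be sketched as follows. The london01 name and the /etc/sysconfig/network path follow the examples in this section; the sketch edits a temporary copy of the file so that it is safe to run without root privileges:

```shell
# On a live appliance (as root) you would run:
#   hostname london01
# and persist the name in /etc/sysconfig/network. Demonstrated here on a copy:
cfg=$(mktemp)
printf 'NETWORKING=yes\nHOSTNAME=localhost.localdomain\n' > "$cfg"
sed -i 's/^HOSTNAME=.*/HOSTNAME=london01/' "$cfg"
grep '^HOSTNAME=' "$cfg"
# prints: HOSTNAME=london01
```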
where london01 is the new hostname. If you do not have a DNS entry for the host, or you require a failsafe for when DNS is unavailable, enter the hostname and FQDN in the /etc/hosts file. Use the appliance IP address if it is unlikely to change, or a loopback IP address other than 127.0.0.1 (which can interfere with the resolution of localhost and localdomain). The following example uses london01 as the hostname, london.com as the domain name, and 192.168.0.100 as the IP address. Add only one of the example entries.
127.0.0.1 ::1
### Enter either this line if the IP address is known
192.168.0.100 london01.london.com london01
### Or this line to use a loopback address
127.0.0.2 london01.london.com london01
The following example uses the same host and domain names as above and shows commands to enter to ensure that the hostname is set correctly:
[tideway@london01 ~]$ hostname
london01
[tideway@london01 ~]$ hostname --domain
london.com
[tideway@london01 ~]$ hostname --fqdn
london01.london.com
[tideway@london01 ~]$ ping `hostname`
PING london01 (192.168.0.100) 56(84) bytes of data.
64 bytes from london01 (192.168.0.100): icmp_seq=1 ttl=64 time=0.050 ms
64 bytes from london01 (192.168.0.100): icmp_seq=2 ttl=64 time=0.047 ms
64 bytes from london01 (192.168.0.100): icmp_seq=3 ttl=64 time=0.042 ms
64 bytes from london01 (192.168.0.100): icmp_seq=4 ttl=64 time=0.041 ms

--- london01 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.041/0.045/0.050/0.003 ms
[tideway@london01 ~]$
DHCP_HOSTNAME=london01.london.com
When the appliance renews its IP address with the DHCP server, the DHCP server will also update the DNS server with the hostname and IP address of the appliance.
where local_hostname is the hostname set on the computer. To resolve this problem, see the previous section, Setting the hostname locally.
[current date/time] [error] couldn't resolve WKServer address [current date/time] [error] Couldn't resolve hostname for WebKit Server
Setting a static IP address for the first interface (eth0) requires the following steps:
1. Obtain networking information.
2. Stop the tideway services.
3. Edit the network configuration files.
4. Restart networking.
5. Restart the tideway services.
[tideway@london01 ~]$ sudo /sbin/service tideway stop
[tideway@london01 ~]$ sudo /sbin/service omniNames stop
[tideway@london01 ~]$ sudo /sbin/service appliance stop
The appliance is configured by default to use DHCP. To configure it to use a static IP address, you must edit the /etc/sysconfig/network-scripts/ifcfg-eth0 file as root. In addition to the network settings obtained previously, you need to determine the MAC address of the network card. To do this: 1. Enter the following command:
[tideway@london01 ~]$ sudo /sbin/ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:0C:29:BB:34:6D
...
[tideway@london01 ~]$
The MAC address is given after the heading HWaddr. In this example, the MAC address is 00:0C:29:BB:34:6D. The following example shows the default ifcfg-eth0 file:
2. Edit the /etc/sysconfig/network-scripts/ifcfg-eth0 and add the values obtained above. The file will look like this:
DEVICE=eth0
BOOTPROTO=static
HWADDR=00:0C:29:BB:34:6D
TYPE=Ethernet
ONBOOT=yes
IPADDR=192.168.0.100
NETMASK=255.255.255.0
3. Add the gateway address to the /etc/sysconfig/network file. You can edit the appliance hostname, but you must ensure that it still resolves correctly. See Changing the appliance hostname for more information. The file will look like this:
NETWORKING=yes
NETWORKING_IPV6=yes
HOSTNAME=london01
GATEWAY=192.168.0.1
Restarting networking
You must restart networking to make sure that the changes have been applied correctly. To do this, enter:
[tideway@london01 ~]$ sudo /sbin/service network restart
[tideway@london01 ~]$
When networking starts, enter the following commands to ensure that the new networking information is showing correctly and that the appliance can resolve its own hostname correctly:
[tideway@london01 ~]$ /sbin/ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:0C:29:BB:34:6D
     inet addr:192.168.0.100 Bcast:192.168.0.255 Mask:255.255.255.0
...
[tideway@london01 ~]$ ping `hostname`
PING london01 (192.168.0.100) 56(84) bytes of data.
64 bytes from london01 (192.168.0.100): icmp_seq=1 ttl=64 time=0.050 ms
64 bytes from london01 (192.168.0.100): icmp_seq=2 ttl=64 time=0.047 ms
64 bytes from london01 (192.168.0.100): icmp_seq=3 ttl=64 time=0.042 ms
64 bytes from london01 (192.168.0.100): icmp_seq=4 ttl=64 time=0.041 ms

--- london01 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.041/0.045/0.050/0.003 ms
[tideway@london01 ~]$
If the appliance cannot resolve its own hostname (there is no response from the ping command), see Changing the appliance hostname for information on changing the hostname.
Troubleshooting
If you cannot log in through a web browser, try stopping and restarting the services:
[tideway@london01 ~]$ sudo /sbin/service tideway stop
[tideway@london01 ~]$ sudo /sbin/service omniNames stop
[tideway@london01 ~]$ sudo /sbin/service appliance stop
[tideway@london01 ~]$ sudo /sbin/service appliance start
[tideway@london01 ~]$ sudo /sbin/service omniNames start
[tideway@london01 ~]$ sudo /sbin/service tideway start
Try to ping the gateway or use traceroute to the gateway. In this example, 192.168.0.1 is the gateway address provided by your system administrator. Enter the following:
[tideway@london01 ~]$ ping 192.168.0.1
PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data.
64 bytes from 192.168.0.1: icmp_seq=1 ttl=127 time=0.324 ms
64 bytes from 192.168.0.1: icmp_seq=2 ttl=127 time=0.450 ms
64 bytes from 192.168.0.1: icmp_seq=3 ttl=127 time=0.342 ms
64 bytes from 192.168.0.1: icmp_seq=4 ttl=127 time=0.401 ms

--- 192.168.0.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3001ms
rtt min/avg/max/mdev = 0.324/0.379/0.450/0.051 ms
[tideway@london01 ~]$
[tideway@london01 ~]$ traceroute 192.168.0.1
traceroute to 192.168.0.1 (192.168.0.1), 30 hops max, 40 byte packets
 1 192.168.1.4 (192.168.1.4) 4.225 ms 4.377 ms 4.532 ms
 2 192.168.0.1 (192.168.0.1) 0.265 ms 0.262 ms 0.252 ms
[tideway@london01 ~]$
To configure Discovery to use the Tectia SSH client: 1. Create a .tideway.py file in /usr/tideway and add the following entry:
SSH_CMD="/opt/tectia/bin/sshg3"
2. Restart the Tideway services. You will see deprecation warnings in the logs about use of the .tideway.py file but you can ignore them. Discovery will now use Tectia SSH instead of OpenSSH for connections to remote systems.
Network configuration When changing network configuration on your appliance, you should always be able to access the system console in case the new configuration does not work correctly.
Netadmin does not work with bonded NICs If you configure your appliance to use bonded NICs, you can no longer use the netadmin user to perform any networking configuration.
The BMC Atrium Discovery appliance relies on eth0 having an IP address. When the appliance starts, it requires an IP address on eth0 for configuration and start-up services. In a default bonded NIC configuration, the IP address would be assigned to the bond point bond0, and the BMC Atrium Discovery services would be unable to start without additional configuration. The ifconfig output for bonded NIC cards is shown in the following example. Note that bond0 has an IP address but eth0 and eth1 do not.
bond0   Link encap:Ethernet  HWaddr 00:11:43:FD:9A:B1
        inet addr:192.168.0.25  Bcast:192.168.0.255  Mask:255.255.255.0
        UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
        RX packets:58071 errors:0 dropped:0 overruns:0 frame:0
        TX packets:1465 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:4315472 (4.1 Mb)  TX bytes:120360 (117.5 Kb)

eth0    Link encap:Ethernet  HWaddr 00:11:43:FD:9A:B1
        UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
        RX packets:26447 errors:0 dropped:0 overruns:0 frame:0
        TX packets:1262 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:1992430 (1.9 Mb)  TX bytes:95078 (92.8 Kb)
        Interrupt:16

eth1    Link encap:Ethernet  HWaddr 00:11:43:FD:9A:B1
        UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
        RX packets:31624 errors:0 dropped:0 overruns:0 frame:0
        TX packets:203 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:2323042 (2.2 Mb)  TX bytes:25282 (24.6 Kb)
        Interrupt:17
3. Configure the boot process to include the bonding module and to use eth0 as the bond point, with the various options associated with that bond. To do this, log in as the tideway user and perform the following steps: 1. Stop the tideway, omniNames and appliance services. Enter:
[tideway@localhost] $ sudo /sbin/service tideway stop
[tideway@localhost] $ sudo /sbin/service omniNames stop
[tideway@localhost] $ sudo /sbin/service appliance stop
2. Change to the /etc/sysconfig/network-scripts directory and determine the MAC addresses (HWaddr) of the network cards that you want to bond. Enter:
[tideway@localhost] $ cd /etc/sysconfig/network-scripts
[tideway@localhost network-scripts] $ /sbin/ifconfig -a | grep HWaddr
eth0 Link encap:Ethernet HWaddr 00:11:43:FD:9A:B1
eth1 Link encap:Ethernet HWaddr 00:11:43:FD:9A:B2
[tideway@localhost network-scripts] $
In this example the MAC addresses are as follows:
eth0: 00:11:43:FD:9A:B1
eth1: 00:11:43:FD:9A:B2
3. Change to the root user and, using a text editor, edit (or create and edit) ifcfg-eth1 to contain the following:
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
HWADDR=00:11:43:FD:9A:B1  # The MAC address as determined in the previous step.
TYPE=Ethernet
MASTER=eth0
SLAVE=yes
4. Using a text editor, edit or create and edit ifcfg-eth2 to contain the following:
DEVICE=eth2
BOOTPROTO=none
ONBOOT=yes
HWADDR=00:11:43:FD:9A:B2  # The MAC address as determined in the previous step.
TYPE=Ethernet
MASTER=eth0
SLAVE=yes
5. Using a text editor, edit or create and edit ifcfg-eth0 to contain the following. Complete the IPADDR, NETMASK, and GATEWAY information.
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=<required IP address>
NETMASK=<required netmask>
GATEWAY=<required gateway>
Note that options such as BROADCAST and NETWORK must be defined in ifcfg-eth0 if they are required. 6. Modify /etc/modprobe.conf. The lines that must be changed contain eth0 and possibly eth1. These need to be modified to specify that eth0 is now a bonded interface. For example, if the file looks like this:
alias eth0 e1000
alias eth1 e1000
alias scsi_hostadapter ata_piix
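After the modification described in step 6, the file would look something like the following sketch; the bonding options shown are the defaults discussed in the note below, and whether the eth1 alias line is removed depends on your hardware:

```
alias eth0 bonding
options eth0 miimon=100 mode=balance-rr
alias scsi_hostadapter ata_piix
```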
Note
The entries after options eth0 are the bonding options. They might not be required, or might be device specific, but the ones shown are good defaults. miimon=100 means check the link state every 100 milliseconds, and mode=balance-rr provides load balancing and fault tolerance. The balance-rr mode transmits packets in sequential order. Other mode options are available, some of which require changes to other items.
7. Reboot the system.
8. Log in as the tideway user and run the following command:
[tideway@localhost ] $ /sbin/ifconfig -a
The output should look like the following; note the bond0 interface:
bond0   Link encap:Ethernet  HWaddr 00:00:00:00:00:00
        BROADCAST MASTER MULTICAST  MTU:1500  Metric:1
        RX packets:0 errors:0 dropped:0 overruns:0 frame:0
        TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

eth0    Link encap:Ethernet  HWaddr 00:11:43:FD:9A:B1
        inet addr:192.168.0.5  Bcast:192.168.0.255  Mask:255.255.255.0
        inet6 addr: fe80::250:56ff:fea7:362/64 Scope:Link
        UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
        RX packets:21026 errors:0 dropped:0 overruns:0 frame:0
        TX packets:175 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:0
        RX bytes:1405048 (1.3 MiB)  TX bytes:61541 (60.0 KiB)

eth1    Link encap:Ethernet  HWaddr 00:11:43:FD:9A:B1
        UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
        RX packets:10529 errors:0 dropped:0 overruns:0 frame:0
        TX packets:87 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:703475 (686.9 KiB)  TX bytes:30280 (29.5 KiB)

eth2    Link encap:Ethernet  HWaddr 00:11:43:FD:9A:B1
        UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
        RX packets:10498 errors:0 dropped:0 overruns:0 frame:0
        TX packets:90 errors:0 dropped:0 overruns:0 carrier:0
        collisions:0 txqueuelen:1000
        RX bytes:701633 (685.1 KiB)  TX bytes:31793 (31.0 KiB)
Nov 18 15:47:18 localhost kernel: Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008)
Nov 18 15:47:18 localhost kernel: bonding: MII link monitoring set to 100 ms
Nov 18 15:47:18 localhost kernel: bonding: eth0 is being created...
Nov 18 15:47:18 localhost kernel: ADDRCONF(NETDEV_UP): eth0: link is not ready
Nov 18 15:47:18 localhost kernel: bonding: eth0: Adding slave eth1.
Nov 18 15:47:18 localhost kernel: bonding: eth0: enslaving eth1 as an active interface with a down link.
Nov 18 15:47:18 localhost kernel: e1000: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Nov 18 15:47:18 localhost kernel: bonding: eth0: link status definitely up for interface eth1.
Nov 18 15:47:18 localhost kernel: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Nov 18 15:47:18 localhost kernel: bonding: eth0: Adding slave eth2.
Nov 18 15:47:18 localhost kernel: e1000: eth2 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: None
Nov 18 15:47:18 localhost kernel: bonding: eth0: enslaving eth2 as an active interface with an up link.
10. Run the following command; the list of running kernel modules should include the bonding module:
[tideway@localhost ] $ /sbin/lsmod | sort
ac                38729  0
acpi_memhotplug   40517  0
asus_acpi         50917  0
ata_piix          57541  0
autofs4           63049  3
backlight         39873  1 video
battery           43849  0
bonding          142537  0
button            40545  0
cdrom             68713  1 ide_cd
crypto_api        42945  1 xfrm_nalgo
...
Each tab for the standard data shows the data types configured on the appliance, along with abbreviations, descriptions, whether each type is active, arrow keys to reorder the types, whether the type is standard in BMC Atrium Discovery or user defined, and a Delete button to delete that data type. The page also has a New button, which enables you to create a new type, depending on the page you are viewing.
Attachments
Relationship that defines the actual attachment objects in this category. This information is normally set up when the documents are attached to the objects, but you can search for and select one or more Attachment objects.
1. From the Custom Categories page, select the Lifecycle Status tab. Existing lifecycle status values are displayed.
2. Click the New Lifecycle Status button. The Create Lifecycle Status page is displayed.
3. Complete the attributes of the lifecycle status:
Name: Name of this lifecycle status value.
Abbreviation: Abbreviated lifecycle status name.
Description: Free-text description of the lifecycle status.
Active: Yes if this lifecycle status is currently active, No if it is not active. Defaults to Yes.
Withdrawn State?: Yes if this lifecycle status value is Withdrawn, No if it is not. Defaults to No.
4. Complete the Relationships of the lifecycle status. 5. Click Apply to save the lifecycle status details.
Managing locations
A Location object defines a physical location in your organization, which can be associated with a managed element or a person. You can create various levels of locations; for example, you can model buildings in sites, or rooms on floors. You would normally set up all the location values appropriate to your organization when you first set up the system; however, you can update them at any time. If you need to link locations to other nodes, you can do this by means of patterns; a template pattern is available for this purpose. For information on template patterns, see Template patterns. For information on modeling your business applications using patterns, see the Tideway web site: click the Community tab and follow the Documentation Resources link. The Location page lists all locations. From this page you can create a new location or display an existing one in detail.
Defines instances of Business Applications in this physical location. Search for and select one or more applications.
3. Complete the following fields:
Name: Name of this recovery time value.
Abbreviation: Abbreviation of recovery time value.
Description: Free-text description of recovery time value.
Active: Yes if this recovery time value is currently active, No if it is not active. Defaults to Yes.
History
History information about each node and relationship is recorded in the datastore. This audit trail includes details of the changes made, the date and time of the changes and the user who made the changes. You can view the history of each object in the datastore and you can search for and display details of destroyed objects. In previous releases a purge history script was provided enabling you to purge the history manually. From BMC Atrium Discovery 9.0 this can be done automatically. By default, history is not purged. For information on how to enable this and specify an age above which history data is purged, see configuring model maintenance settings. To ensure that the historical information is self-consistent, the creation time record for each node is maintained, and all state changes that occurred between the creation time and the purge time are rolled up into a single state change entry.
Destroyed nodes
When a node is destroyed, it is marked as destroyed but remains in the datastore for a fixed period of time, after which it is automatically purged. For information on how to specify a time after which destroyed nodes are purged, see configuring model maintenance settings.
None of this information can be changed from the BMC Atrium Discovery user interface.
The relationships displayed in the taxonomy are not exhaustive. That is, all possible relationships are not displayed in the taxonomy. For example, the following containment relationships are valid: Group:Container:Containment:ContainedItem:SoftwareInstance Group:Container:Containment:ContainedItem:Host
The diagrams below show the BMC Atrium Discovery Default Data Model, with its nodes, attributes, and the relationships between the different parts of BMC Atrium Discovery. It is split across three diagrams: the Main diagram contains all the core entities used in the modeling and discovery of Hosts; the Network and Printer diagram contains entities used in the modeling and discovery of NetworkDevices and Printers; and the Mainframe diagram contains entities used in the modeling and discovery of Mainframes. It is important to note that the model itself is not in separate sections; the three diagrams exist only to convey the information more clearly.
Color-Code Key
Inferred View Nodes and connection links: Blue.
Directly Discovered Data Nodes and connection links: Green.
Knowledge (Pattern Management) View Nodes and connection links: Pink.
Provenance Relationship connection links: Red.
Auxiliary Nodes and connection links: Brown (used internally by BMC Atrium Discovery).
The PNG images are static and represent the latest taxonomy. They do not reflect any on-site changes that may have been made to the taxonomy on your appliance.
Managing security policies Configuring HTTPS settings Managing LDAP Configuring Web authentication settings Viewing active sessions Auditing the appliance
Recommendation Each user has an initial password in BMC Atrium Discovery that is the same as the user name. BMC recommends that you change the password when you first log in as the system user. Not doing so will make it easier for someone to access the BMC Atrium Discovery user interface. Use the system user only for configuration tasks which require system privileges.
Enter the password to be allocated to this user. Repeat the password for security reasons. A read only display of the rules which are used to validate the password strength. Select one or more groups that this user will be a member of. By default, all new users are members of the public group.
User names are case sensitive; user names with the same spelling but different case are permitted. For example, Johnson and JOHNSON are not recognized as duplicates.
To reset the BMC Atrium Discovery user password at the command line
The tw_passwd utility enables you to change the password of a specified user interface user. To use the utility, enter the following command at the command prompt:
tw_passwd username
Changing passwords for command line users
The tw_passwd utility is for changing UI users' passwords. To change the passwords for command line users, as the root user, use the Linux command passwd. This is described in Changing the root and user passwords.
Deleting a user
You can delete any existing user except for yourself or the default system-created users.
User permissions
User permissions in BMC Atrium Discovery are additive. When you grant a user an additional permission (through adding the user to another group), that permission is added to the user's existing permissions. For example, if you grant appmodel permissions to a user with discovery permissions, the user gains no additional permissions because all of the appmodel permissions were already granted in the discovery permission set. Similarly, you cannot add readonly permissions to a system user in the hope of achieving a read only system user.
Managing groups
All users of the BMC Atrium Discovery system must be a member of one or more groups. Membership of groups defines the various BMC Atrium Discovery modules that a user is entitled to access. For example, users defined as members of the System group are able to create and edit user details, while members of the Public group cannot access these areas. To log in, a user must be in a group that has the permissions security/user/passwd, appserver/login, and appserver/module/home. Only four default groups have these permissions: readonly, public, system, and admin. Every user must be a member of one of these four groups, or a member of a custom group that has at least these permissions. For example, a user who is only in the discovery group cannot log in; you should put a user that requires access to discovery commands into both the discovery and public groups. The BMC Atrium Discovery Administrator is responsible for setting up details of all the user groups in the BMC Atrium Discovery system. Each group is a collection of permissions. Permissions control granular access to BMC Atrium Discovery modules and are described in Group Permissions.
Security groups
The default user groups and their security access rights are as follows:
admin: These users have the highest level of customer access to the system.
appmodel: These users can write and edit patterns, and create nodes to model business applications. They cannot view credentials but can run discovery (in order to test patterns).
discovery: These users have access to all of the discovery-related data. They can start and stop discovery, add and remove credentials, and enable or disable audit logging.
cmdb-export-administrator: These users have access to all of the export-related data. They can build, modify, delete and run Exporters.
public: These users have read/write access to all of the system although they cannot access the discovery credentials.
readonly: These users have read only access to the system. They cannot view the credentials for logging into target hosts.
system: These users have full access to the system.
unlocker: These users are able to unlock and unblock user accounts which have been locked or blocked after exceeding the number of permitted authentication failures. See Managing security policies for more information.
To delete a group
You can delete any group that you created. You cannot delete either the public or the system groups.
1. From the Groups page, click Delete next to the group to be deleted. The group is deleted and the system does not display any confirmation.
Group permissions
The following table shows the permissions assigned by default to each group in BMC Atrium Discovery. The individual permissions are described in System Group Permissions by Category.

admin: *

appmodel: admin/category/createmodify, admin/log/info, admin/log/read, model/audit/purge, reasoning/events/read, reasoning/events/write, reasoning/pattern/config, reasoning/pattern/execute, reasoning/pattern/quickload, reasoning/pattern/write, reasoning/ranges/once, reasoning/ranges/read, reasoning/ranges/write, reasoning/start, reasoning/startstop, reasoning/stop, ui/report/admin, vault/open

cmdb-export-administrator: admin/cmdb-exporter, vault/close, vault/credential_types/read, vault/credential_types/write, vault/credentials/read, vault/credentials/write, vault/open

discovery: consolidation/consolidation/write, consolidation/discovery/write, consolidation/read, discovery/credentials/test, discovery/filters/read, discovery/filters/write, discovery/kslave/read, discovery/kslave/write, discovery/options/read, discovery/options/write, discovery/platforms/read, discovery/platforms/write, discovery/port/settings, model/audit/purge, reasoning/events/read, reasoning/events/write, reasoning/pattern/config, reasoning/pattern/write, reasoning/ranges/once, reasoning/ranges/read, reasoning/ranges/write, reasoning/start, reasoning/startstop, reasoning/stop, vault/credential_types/read, vault/credential_types/write, vault/credentials/read, vault/credentials/write, vault/open

public: appserver/login, appserver/module/*, model/audit/read, model/audit/write, model/datastore/main/read, model/datastore/main/write, model/datastore/partition/Audit/read, model/datastore/partition/Conjecture/read, model/datastore/partition/Conjecture/write, model/datastore/partition/DDD/read, model/datastore/partition/DDD/write, model/datastore/partition/Default/read, model/datastore/partition/Default/write, model/datastore/partition/Taxonomy/read, model/datastore/partition/_System/read, model/datastore/partition/_System/write, model/taxonomy/nodekind/read, model/taxonomy/relkind/read, model/taxonomy/rolekind/read, reasoning/events/write, reasoning/status, reports/read, reports/write, security/user/passwd, ui/dashboard/admin, ui/report/admin, ui/taxonomy/admin

readonly: appserver/login, appserver/module/*, model/audit/read, model/audit/write, model/datastore/main/read, model/datastore/partition/Audit/read, model/datastore/partition/Conjecture/read, model/datastore/partition/DDD/read, model/datastore/partition/Default/read, model/datastore/partition/Taxonomy/read, model/datastore/partition/_System/read, model/datastore/partition/_System/write, model/taxonomy/nodekind/read, model/taxonomy/relkind/read, model/taxonomy/rolekind/read, reasoning/status, security/user/passwd

system: *

unlocker: security/group/read, security/options/read, security/user/activate, security/user/read
There are no permissions that restrict access to patterns. All logged in users can view patterns.
Security permissions
The following table shows the current group permissions relating to security operations.
security/group/read security/group/write
Enables the user to view and configure groups and group membership.
security/group/read: View the user group names and the corresponding permissions.
security/group/write: View, edit, delete, and create new groups.
You can manage user groups from the Groups page of the UI. To navigate to the page:
1. Click Administration.
2. From the Security section, click Groups.
security/https/admin
Enables the user to configure the HTTPS settings, which include:
generating server keys and certificate signing requests
uploading and signing server certificates
uploading or downloading a CA certificate bundle
enabling and disabling HTTP or HTTPS web access to the appliance
You can manage the HTTPS configuration from the HTTPS Configuration page of the UI. To navigate to the page:
1. Click Administration.
2. From the Security section, click HTTPS.
For more information, see Configuring HTTPS settings.
security/options/read security/options/write
Enables the user to view and configure the security options, which include accounts and passwords, login page, and UI security page.
security/options/read: View the security options for the appliance.
security/options/write: Configure the security options settings for the appliance.
You can manage the options from the Security Policy page of the UI. To navigate to the page:
1. Click Administration.
2. From the Security section, click Security Policy.
For more information, see the Managing security policies page.
security/user/activate
Enables the user to unlock and re-activate accounts for other users from the Users page of the UI. To navigate to the page:
1. Click Administration.
2. From the Security section, click Users.
For more information, see Accounts and passwords.
security/user/passwd
Enables the user to change his or her own BMC Atrium Discovery password from the UI. For more information, see Changing your password.

security/user/read security/user/write

Enables the user to view and configure the user security information related to system users, groups, security policies, HTTPS settings, LDAP, Web authentication settings, active sessions, appliance audit, and so on.
security/user/read: View the user security information.
security/user/write: Configure the user security information.
For more information, see the Managing users and security page.
vault/credential_types/read vault/credential_types/write
Enables the user to manage the following types of credentials, which are based on the system to access:
Device: To log on to hosts running a UNIX, Linux, or Windows operating system, or any SNMP-enabled device such as routers and switches.
Database: To query databases.
Middleware: To query middleware such as web and application servers.
Management System: For vCenter, vSphere, and mainframe credentials.
You can view and manage the credential types from the Credentials page of the UI. To navigate to the page:
1. Click Discovery.
2. Click Credentials.
For more information, see the Credentials page.
vault/credentials/read vault/credentials/write
Enables the user to view and manage credentials (for example, Windows proxies, vSphere credentials, and so on).
vault/credentials/read: View the credentials.
vault/credentials/write: Manage the credentials.
You can view and manage credentials from the Credentials page of the UI:
1. Click Discovery.
2. Click Credentials.
For more information, see the Credentials page.
Discovery permissions
The following table shows the current group permissions relating to discovery operations.

discovery/network/probe
Obsolete permission.

discovery/options/read discovery/options/write
Enables the user to read the discovery options. These are separate from the main system settings.

discovery/credentials/test
Enables the user to test discovery credentials. For example, from the UI, you can test:
Discovery credentials from the Credential Tests tab. For more information, see Testing credentials.
Mainframe credentials from the Mainframe Credential Tests tab. For more information, see Testing mainframe credentials.

discovery/platforms/read discovery/platforms/write
Enables the user to view and amend the platform discovery commands.
discovery/platforms/read: View the platform discovery commands.
discovery/platforms/write: Amend the platform discovery commands.
You can view and amend the platform discovery commands from the Discovery Platforms page of the UI. To navigate to the page:
1. Click Administration.
2. From the Discovery section, click Discovery Platforms.
For more information, see Managing the discovery platform scripts.

discovery/host/access
Enables the user to query a host on the network. For more information, see Query Builder.

discovery/filters/read discovery/filters/write
Enables the user to view and modify sensitive data filters from the Sensitive Data Filters page of the UI. To navigate to the page:
1. Click Administration.
2. From the Discovery section, click Sensitive Data Filters.
For more information, see Masking sensitive data.
discovery/kslave/read discovery/kslave/write
Enables the user to view and modify the Windows proxies. You can manage Windows proxies from the Windows proxy management page of the UI. To navigate to the page: 1. Click Discovery. 2. Click Credentials. 3. Click Windows Proxies. The Device Credentials page for Windows proxies (Windows proxy management page) is displayed. For more information, see Managing Windows proxies.
discovery/port/settings
Enables the user to configure the port settings that Discovery uses. You can manage the port settings from the Discovery Configuration page of the UI. To navigate to the page: 1. Click Administration. 2. From the Discovery section, click Discovery Configuration. For more information, see Port settings.
ssh_key/write
Obsolete permission.
Consolidation permissions
The following table shows the current group permissions relating to configuring consolidation and scanning appliances.

consolidation/consolidation/write
Enables the user to change the configuration on the consolidation appliance (set as consolidation appliance and approve scanning appliances). You can manage consolidation from the Discovery Consolidation page of the UI. To navigate to the page: 1. Click Administration. 2. From the Discovery section, select Discovery Consolidation. For more information about consolidation, see the Consolidation page.

consolidation/discovery/write
Enables the user to add new consolidation targets to a scanning appliance from the Discovery Consolidation page of the UI.

consolidation/read
Enables the user to view the consolidation setup page from the Discovery Consolidation page of the UI.
Datastore permissions
These permissions are a subsystem of the model. The following table shows the current group permissions relating to datastore operations.

model/datastore/main/read
model/datastore/main/write
Enables the user to read or write the datastore through the main user interface (UI).

model/datastore/partition/*/read
model/datastore/partition/*/write
Enables the user to read or write to any partition which supports user interaction. For more information, see the Partitions and History page.

model/datastore/partition/name/read
model/datastore/partition/name/write
Enables the user to read or write to the given partition. The name is one of: Audit, Conjecture, DDD, Default, Taxonomy, _System. For more information, see the Partitions and History page.
Audit permissions
These permissions are a subsystem of the model. The following table shows the current group permissions relating to audit operations.

model/audit/read
Enables the user to read the audit log. Audit logs are stored in /usr/tideway/log. The files can be accessed directly from the appliance command line or in the log viewer in the UI. Logs can be downloaded from the appliance through the Support Services administration page.

model/audit/write
Enables the user to write to the audit log.

model/audit/purge
Enables the user to purge the audit log. You can purge the audit log of all events that are over one month old (events less than one month old cannot be deleted) from the Audit Purge page. To navigate to the Audit Purge page: 1. Click Administration. 2. From the Security section, click Audit. 3. Click Purge. For more information, see Purging audit logs.

model/audit/admin
Enables the user to administer the audit service. You can configure the reporting on audit events from the Audit Logs page. To navigate to the Audit Logs page: 1. Click Administration. 2. From the Security section, click Audit. 3. Click Audit Logs. For more information, see the Auditing the appliance page.
Reasoning permissions
These permissions are a subsystem of the model. The following table shows the current group permissions relating to reasoning operations.

reasoning/start
Enables the user to start reasoning. You can start reasoning by using the tw_scan_control utility. For more information, see tw_scan_control.

reasoning/startstop
Enables the user to start and stop reasoning. You can start and stop reasoning by using the tw_scan_control utility. For more information, see tw_scan_control.

reasoning/stop
Enables the user to stop reasoning. You can stop reasoning by using the tw_scan_control utility. For more information, see tw_scan_control.

reasoning/status
Enables the user to view the reasoning status information. You can view the reasoning status by using the tw_reasoningstatus utility. Typically this utility is used by Customer Support as a troubleshooting tool for investigating possible problems. For more information, see tw_reasoningstatus.

reasoning/ranges/read
Enables the user to view the Discovery Status page. The status of the discovery process displays on the Home tab in the Discovery Status summary. This page also displays the current status of the reasoning process. For more information, see the Viewing discovery status page.

reasoning/ranges/write
Enables the user to cancel consolidations or local scans. You can cancel consolidation or local scans from the Discovery Status page. To navigate to the Discovery Status page: 1. Click Discovery. 2. Click Discovery Status. For more information, see Viewing discovery status.
reasoning/ranges/once
Enables the user to add or remove snapshot discovery ranges from the Discovery Status page. To navigate to the Discovery Status page: 1. Click Discovery. 2. Click Discovery Status. To learn how to add discovery ranges, see Scanning IP addresses or ranges. To learn how to exclude discovery ranges, see Excluding IP addresses and ranges from scanning.
An internal permission. Do not use this.

Enables the user to configure patterns. For more information, see Pattern Configuration.

reasoning/pattern/edit
Enables the user to edit patterns. For more information, see Pattern configuration and editing.

reasoning/pattern/execute
Enables the user to execute patterns. For more information, see Manual pattern execution.

reasoning/pattern/quickload
Enables the user to upload and activate patterns from the Pattern Management page: 1. Click Discovery. 2. Click Pattern Management. For more information, see Pattern management.

reasoning/pattern/write
Enables the user to write patterns using pattern templates from the appliance. The pattern templates can be downloaded from the Pattern Management page: 1. Click Discovery. 2. Click Pattern Management. For more information, see Using pattern templates.
Search permissions
These permissions relate to listing and cancelling searches using the Search Management page. To navigate to the Search Management page: 1. Click Administration. 2. From the Model section, click Search Management. For more information on viewing and cancelling searches, see Using the Search service.

model/search/list
Enables the user to view searches submitted by all users.

model/search/cancel
Enables the user to cancel searches submitted by all users.
Taxonomy permissions
These permissions are a subsystem of the model. The following table shows the current group permissions relating to taxonomy operations.

model/taxonomy/nodekind/read
Enables the user to read node kind information.

model/taxonomy/nodekind/write
Enables the user to write node kind information.

model/taxonomy/relkind/read
Enables the user to read relationship kind information.

model/taxonomy/relkind/write
Enables the user to write relationship kind information.
model/taxonomy/rolekind/read
Enables the user to read role kind information.

model/taxonomy/rolekind/write
Enables the user to write role kind information.
appserver/module/*
appserver/sessionaccess
Specific UI permissions
The following table shows the current group permissions relating to specific user interface operations.

ui/dashboard/admin
Enables the user to administer the dashboard. You can use the tw_config_dashboards utility to configure and customize dashboards in BMC Atrium Discovery. For more information, see tw_config_dashboards.

Enables the user to administer the datastore.

Enables the user to administer the taxonomy.

Enables the user to access the Generic Search Query page and enter search queries. To navigate to the Generic Search Query page: 1. Click the Search icon to the left of the Search box at the top right of the user interface. The Search Options drop-down panel is displayed. 2. Click the Generic Search Query link. For more information, see Using the Search service. Note that, by default, all admin users get this permission.
appliance/info/read
Enables the user to view appliance information (identity, support information, read-only information about the appliance software and hardware configuration, and so on) from the Appliance Configuration page. To navigate to the Appliance Configuration page: 1. Click Administration. 2. From the Appliance section, click Configuration. For more information, see Configuring appliance settings.
appliance/info/write
Enables the user to configure appliance information from the Appliance Configuration page. To navigate to the Appliance Configuration page: 1. Click Administration. 2. From the Appliance section, click Configuration. For more information, see Configuring appliance settings.
appliance/maintenance
Enables the user to put the appliance into maintenance mode from the Appliance Control page. To navigate to the Appliance Control page: 1. Click Administration. 2. From the Appliance section, click Control. For more information, see Using maintenance mode.
appliance/reboot
Enables the user to reboot the appliance from the Appliance Control page. To navigate to the Appliance Control page: 1. Click Administration. 2. From the Appliance section, click Control. For more information, see Rebooting or shutting down the appliance.
appliance/reportsusage/reset
Enables the user to reset the report usage statistics.

appliance/restart
Enables the user to restart the appliance from the Appliance Control page. To navigate to the Appliance Control page: 1. Click Administration. 2. From the Appliance section, click Control. For more information, see Rebooting or shutting down the appliance.
appliance/shutdown
Enables the user to shut down the appliance from the Appliance Control page. To navigate to the Appliance Control page: 1. Click Administration. 2. From the Appliance section, click Control. For more information, see Rebooting or shutting down the appliance.
Baseline

baseline/admin
Enables the user to change the baseline configuration (such as the recipients of automatic emails, and the messages to be included) from the Appliance Baseline page. To navigate to the Appliance Baseline page: 1. Click Administration. 2. From the Appliance section, click Baseline Status. For more information, see Baseline configuration.
baseline/read
Enables the user to view the baseline configuration from the Appliance Baseline page. To navigate to the Appliance Baseline page: 1. Click Administration. 2. From the Appliance section, click Baseline Status. For more information, see Baseline configuration.
baseline/update
Enables the user to update the baseline configuration after changes have been seen from the Appliance Baseline page. To navigate to the Appliance Baseline page: 1. Click Administration. 2. From the Appliance section, click Baseline Status. For more information, see Baseline configuration.
Logging

admin/log/info
Enables the user to view log information. As each BMC Atrium Discovery component and script runs, it outputs logging information. Logs are all stored in /usr/tideway/log. You can access the files directly from the appliance command line or in the log viewer in the UI.

admin/log/read
Enables the user to read log files. Logs are all stored in /usr/tideway/log. You can access the files directly from the appliance command line or in the log viewer in the UI.

admin/log/delete
Enables the user to delete log files. You can delete the log files from the Logs page. To navigate to the Logs page: 1. Click Administration. 2. From the Appliance section, click Logs. For more information, see Viewing and deleting logs.

admin/loglevel/read
Enables the user to read the appliance log level from the Logs page. To navigate to the Logs page: 1. Click Administration. 2. From the Appliance section, click Logs. For more information, see Viewing log levels.

admin/loglevel/write
Enables the user to change the service log levels at runtime from the Logs page. To navigate to the Logs page: 1. Click Administration. 2. From the Appliance section, click Logs. For more information, see Changing log levels at runtime.

Import

admin/import/ciscoworks
Enables the user to import data using the CiscoWorks importer. You can use the tw_imp_ciscoworks utility to import CiscoWorks network device data from the command line. For more information, see tw_imp_ciscoworks.

admin/import/csv
Enables the user to import CSV data from the Import CSV Data page. To navigate to the Import CSV Data page: 1. Click Administration. 2. From the Model section, click CSV Import. For more information, see Importing CSV data.
admin/import/hrd
Enables the user to import Hardware Reference Data (HRD) from the Import Hardware Reference Data page. To navigate to the Import Hardware Reference Data page: 1. Click Administration. 2. From the Model section, click HRD Import. For more information, see Importing Hardware Reference Data.
Interface

admin/interface/read
Enables the user to view interface information from the Appliance Configuration page for network interfaces. To navigate to the Appliance Configuration page for network interfaces: 1. Click Administration. 2. From the Appliance section, click Configuration. 3. Click Network Interfaces. For more information, see Viewing network interface and routing settings.

admin/interface/write

Routing

admin/routing/read
admin/routing/write

DNS

admin/dns/read
Enables the user to read DNS information. You can view the DNS (Name Resolution) information from the Appliance Configuration page for name resolution. To navigate to the page: 1. Click Administration. 2. From the Appliance section, click Configuration. 3. Click Name Resolution. For more information, see Configuring name resolution settings.

admin/dns/write
Enables the user to write DNS information. You can configure the DNS (Name Resolution) information from the Appliance Configuration page for name resolution. To navigate to the page: 1. Click Administration. 2. From the Appliance section, click Configuration. 3. Click Name Resolution. For more information, see Configuring name resolution settings.

Email Configuration

admin/mail/read
Enables the user to view email configuration information from the Appliance Configuration page for mail settings. To navigate to the page: 1. Click Administration. 2. From the Appliance section, click Configuration. 3. Click Mail Settings. For more information, see Configuring mail settings.

admin/mail/write
Enables the user to configure the mail settings from the Appliance Configuration page for mail settings. To navigate to the page: 1. Click Administration. 2. From the Appliance section, click Configuration. 3. Click Mail Settings. For more information, see Configuring mail settings.

System

Obsolete permission. Obsolete permission. This permission is not used.
system/settings/read
Enables the user to read system settings from the command line utilities and from the UI.

system/settings/write
Enables the user to write system settings from the command line utilities and from the UI.
Enables the user to download and install Windows proxies onto the local Windows host. You can download the Windows proxies as installation files from the Discovery Tools page. To navigate to the Discovery Tools page: 1. Click Discovery. 2. Click Tools. For more information, see Installing Windows proxies.
The 'all' permission (*) allows the user to perform any task in BMC Atrium Discovery. Each user has a token assigned by the security system; whenever a user requests a privilege, the security service checks the database to see whether that particular user has permission to carry out that particular task.
However, the first check that BMC Atrium Discovery carries out is to see if the user has the * permission. If the answer is yes, no further privilege checks will be carried out.
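The check order described above can be sketched as a small shell function. This is a hypothetical illustration only: the real security service checks a database, and the function and variable names here are invented.

```shell
# Hypothetical sketch of the privilege check order: the 'all' permission (*)
# is tested first and short-circuits every other lookup.
has_permission() {
    requested="$1"; shift
    for perm in "$@"; do
        # '*' grants everything; no further checks are carried out
        [ "$perm" = "*" ] && return 0
        [ "$perm" = "$requested" ] && return 0
    done
    return 1
}

# A user holding '*' is granted any request:
has_permission "appliance/reboot" "*" && echo "granted"
# A user holding only a read permission is refused a write:
has_permission "vault/credentials/write" "vault/credentials/read" || echo "refused"
```

The short-circuit on * is why granting it should be reserved for administrators: no finer-grained rule is ever consulted once it is found.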
Account Deactivation
Unused user accounts can be deactivated after a specified period of time. Select the period from the drop-down list: 15, 30, 45, 60, 75, 90, 105, or 120 days. If you do not want accounts to be deactivated, select Never.

Disabled Accounts can be reactivated
By default, disabled accounts cannot be reactivated. Select Yes or No to allow user accounts to be reactivated. An administrator is needed to reactivate an account.

Minimum Password Length
You can specify a minimum length for passwords. Select a minimum length from the drop-down list, from 1 to 32 characters. Select None to enforce no minimum length. The default is 6 characters.

Password History
You can specify a password history length to prevent users from recycling passwords too quickly. Select the password history length from the drop-down list: 3, 5, 10, or 20. Select None to enforce no restrictions on password reuse. The default is 10.

Password constraints
Select from the following check boxes to apply constraints to the password contents. In general, password quality improves with more selected check boxes:
Must contain uppercase characters, for example AIV. The default is true.
Must contain lowercase characters, for example aiv. The default is true.
Must contain numeric characters, for example 174. The default is true.
Must contain special characters, for example ^). The default is true.
Must not contain sequences, for example AAA, ppp, or 222. The default is true.

You can specify a maximum length of time for passwords before they are automatically expired. Select an expiry period from the drop-down list: 30, 45, 60, 75, 90, 105, or 120 days. Select None to enforce no expiry period. The default is 90 days.

Users can be warned that their password will expire soon when logging into the user interface. The warning icon is displayed in the Dynamic Toolbox. Select a warning period from the drop-down list: 5, 10, or 15 days. Select Never to give users no warning of an expiring password. The default is 10 days. The expiry warning cannot be set to more than the expiry period.
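The default constraints listed above can be expressed as a small shell checker. This is a minimal sketch for illustration: the function name is invented, the appliance's actual validation logic is internal, and only the default settings (minimum length 6, all constraint check boxes selected) are modelled.

```shell
# Hypothetical sketch of the default password constraints, using POSIX grep.
check_password() {
    pw="$1"
    [ "${#pw}" -ge 6 ]                         || { echo "shorter than 6 characters"; return 1; }
    printf '%s' "$pw" | grep -q '[A-Z]'        || { echo "no uppercase character"; return 1; }
    printf '%s' "$pw" | grep -q '[a-z]'        || { echo "no lowercase character"; return 1; }
    printf '%s' "$pw" | grep -q '[0-9]'        || { echo "no numeric character"; return 1; }
    printf '%s' "$pw" | grep -q '[^A-Za-z0-9]' || { echo "no special character"; return 1; }
    # Sequences such as AAA, ppp, or 222 are a repeated character, caught
    # with a basic-regex backreference:
    printf '%s' "$pw" | grep -q '\(.\)\1\1'    && { echo "contains a sequence like AAA"; return 1; }
    echo "acceptable"
}

check_password 'Secr3t^x'      # acceptable
check_password 'aaaPass^1'     # contains a sequence like AAA
```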
Login page
You can configure the appearance of the login page and add a legal notice to it. To configure the login page: 1. From the Security section of the Administration tab, select Login Page Options. The options on the Security Options: Login page are described below:

Plain login page
Where security is a concern, you can choose to remove all banners and logos from the login page. Doing so reduces the risk of attack by hiding the nature of the system from a would-be attacker. Select Yes to do this. This option is not available in the BMC Atrium Discovery Community Edition. The BMC favicon (shown in browser tabs) remains visible when you use the plain login page. If you want to remove the favicon, rename the /var/www/html/favico.ico file, for example to favico.ico.hidden.
Legal Notice
You can specify whether to allow data stored in browsers to be used to autocomplete fields in the UI. Select Yes or No.
UI security page
You can prevent cross-site framing to defend against possible "clickjacking" attacks. To configure the UI security page: 1. From the Security section of the Administration tab, select UI Security.

Prevent Cross Site Framing
You can specify whether to allow the BMC Atrium Discovery UI to be incorporated as part of an umbrella UI. Select Yes or No.
Country Code
State or Province
Locality
Company Name
Department
Email Address
The values in the Server Key tab must match those used by the certificate authority.
2. When you have entered the required information, click Generate New Server Key. The new server key is saved as $TIDEWAY/etc/https/server.key on the appliance's file system. A certificate signing request, server.csr, is also generated and saved in the same location. Once you have a key and a signing request, the request must be signed before it can be used. You can do this using one of the following methods: Use a certificate authority: continue with this procedure. Sign the certificate yourself: see Self Signing a Server Certificate.
Copyright 2003-2013, BMC Software
3. To download the certificate signing request, click Download CSR. Use the download dialog to choose the location on your local filesystem in which to save the file. 4. Send the certificate signing request file to your certificate signing authority for signing. When the certificate signing authority has approved the request, they will generate the corresponding certificate and return it as a .crt file.
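Before uploading the returned .crt, you can check from any machine with OpenSSL that it actually matches the appliance's key by comparing public-key moduli, a common OpenSSL technique. In this sketch a throwaway key stands in for the appliance's $TIDEWAY/etc/https/server.key, and the certificate is self-signed as a stand-in for the one your CA returns; the subject fields mirror the Server Key tab (Country Code, State or Province, Locality, Company Name, Department, Email Address) with example values.

```shell
# Throwaway key and CSR; on the appliance, the key is generated for you
# as $TIDEWAY/etc/https/server.key and the CSR as server.csr.
openssl genrsa -out server.key 2048 2>/dev/null
openssl req -new -key server.key -out server.csr \
    -subj "/C=US/ST=Texas/L=Houston/O=Example Corp/OU=IT/CN=discovery.example.com/emailAddress=admin@example.com"

# Stand-in for the CA-signed certificate: self-sign the CSR with the same key.
openssl x509 -req -in server.csr -signkey server.key -out server.crt -days 365 2>/dev/null

# The certificate matches the key when the public-key moduli are identical.
key_mod=$(openssl rsa  -in server.key -noout -modulus)
crt_mod=$(openssl x509 -in server.crt -noout -modulus)
[ "$key_mod" = "$crt_mod" ] && echo "certificate matches key"
```

If the moduli differ, the CA signed a request generated from a different key, and uploading the certificate would leave HTTPS broken.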
If the server certificate is signed by your organization's CA, you need to ensure that the CA public key is loaded into the browser in order for it to trust the BMC Atrium Discovery server certificate.
If you do not have a CA bundle, either the default supplied with the appliance, or one supplied by your organization, you will be unable to use HTTPS.
The default CA bundle is stored on the appliance in the following directory: /etc/pki/tls/certs/ca-bundle.crt When the certificate signing authority has approved the request, they will generate the corresponding certificate bundle and return it as a .crt file. To replace the certificate bundle with one from a certificate authority used by your organization: 1. On the HTTPS Configuration page, click CA Certificates. 2. Click Browse next to CA Certificate Bundle File and select the server certificate you saved in Step 1 of this procedure. 3. Click Upload New CA Certificate Bundle. The new certificate bundle is uploaded. To download the existing CA certificate bundle: 1. Click Download CA Certificate Bundle. 2. Use the download dialog to choose the location on your local filesystem in which to save the file.
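A CA bundle is simply the PEM certificates concatenated into a single file, which is how multiple CA certificates are combined before upload. In this sketch the certificate filenames are hypothetical, and self-signed stand-ins are generated so the snippet runs anywhere; real certificates would come from your organization's PKI administrator.

```shell
# Stand-ins for your organization's root and intermediate CA certificates
# (hypothetical filenames; real ones come from your PKI administrator).
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key \
    -out corp-root-ca.crt -days 365 -subj "/CN=Example Root CA" 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -keyout int.key \
    -out corp-intermediate-ca.crt -days 365 -subj "/CN=Example Intermediate CA" 2>/dev/null

# A bundle is plain concatenation of the PEM files.
cat corp-root-ca.crt corp-intermediate-ca.crt > ca-bundle.crt

# Sanity check before upload: one PEM block per certificate.
grep -c 'BEGIN CERTIFICATE' ca-bundle.crt    # 2
```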
If HTTPS is not configured correctly and you enable redirect to HTTPS, you could be locked out of the appliance. By default, users can access BMC Atrium Discovery over HTTP. You can enable HTTPS connections on this page and specify that attempts to connect over HTTP should be redirected to HTTPS. By default, HTTP access is enabled and HTTPS access is disabled. 1. On the HTTPS Configuration page, click the HTTPS tab.
The following example configuration has HTTPS enabled and HTTP redirected to HTTPS. To enable HTTPS access, from the HTTPS list, select Enabled. To disable HTTPS access, from the HTTPS list, select Disabled. To enable HTTP access, from the HTTP list, select Enabled. To redirect HTTP access attempts to HTTPS, from the HTTP list, select Redirect to HTTPS.
Managing LDAP
Lightweight Directory Access Protocol (LDAP) is commonly used to access user or group information in a corporate directory. Using your corporate LDAP infrastructure to authenticate users can reduce the number of administrative tasks that you need to perform in BMC Atrium Discovery. LDAP groups can be mapped to BMC Atrium Discovery groups and hence assigned permissions on the system. The way in which BMC Atrium Discovery integrates with your LDAP infrastructure depends on the schema that is implemented in your organization.
If you are using LDAP authentication there is no need to set up local user accounts for LDAP users on BMC Atrium Discovery.
LDAP Terms
The following terms are used in the sections describing BMC Atrium Discovery LDAP configuration: Directory Information Tree (DIT): The overall tree structure of the data directory queried using the LDAP protocol. The structure is defined by the schema. Each entry in a directory is an object of one of two types: Containers: A container is like a folder: it contains other containers or leaves. Leaves: A leaf is an object at the end of a tree. Leaves cannot contain other objects. Domain Component (dc): Each element of the company's Internet domain name, given individually. Organizational Unit (ou): An organizational grouping within the company. Common Name (cn): The name of a person. Distinguished Name (dn): The complete name for a person, including the domain components, organizational unit, and common name. An example Directory Information Tree is shown below.
dc=tideway,dc=com
  ou=engineering
    cn=Timothy Taylor
      telephoneNumber=1234
      email=t.taylor@bmc.com
  ou=test
    cn=Sam Smith
      telephoneNumber=2345
      email=s.smith@bmc.com
  ou=product management
    cn=John Smith
      telephoneNumber=3456
      email=j.smith@bmc.com
Configuring LDAP
To configure the LDAP settings: 1. From the Security section of the Administration tab, select LDAP. The LDAP page is displayed showing the LDAP tab.
The options on this page are described below:

LDAP Support
Select Enabled or Disabled to enable or disable LDAP support for this appliance.

Connection Status
Displays a message regarding the status of the connection to the LDAP server. For example: No LDAP operations performed (last update: timestamp); Invalid credentials (last update: timestamp); Connection established (last update: timestamp); Can't contact LDAP server (last update: timestamp).

Server URI
The address of the LDAP server to connect to. For example: ldap://engineering.bmc.com:3268 or ldaps://engineering.bmc.com. The default LDAP port is 389 and the default LDAPS port is 636. For multiple (failover) LDAP servers, enter a space-separated list of LDAP server URIs. When using the Microsoft Active Directory group mode for LDAP, you can also use port 3268 to reference the Global Catalog. Check with your LDAP administrator to ensure that you use the correct port.

LDAPS
Displays a message regarding the CA certificate and provides controls enabling you to upload, remove, or replace a certificate. Many large enterprises have their own CAs that will provide a root CA certificate, which allows the appliance to trust the LDAP server's certificate it receives over the network. To upload a certificate, click the Browse button, select the new certificate using the file upload dialog, then click Apply. To replace an existing CA certificate, select the Remove CA Certificate check box and click Apply. When the page is refreshed, add the new certificate as above. Contact your organization's LDAP administrator to obtain a CA certificate. Multiple CA certificates can be uploaded by concatenating them into a single file prior to upload. Note that you cannot delete a CA certificate while LDAPS is enabled; likewise, you cannot enable LDAPS without a CA certificate loaded. In both cases you will encounter a Cannot use LDAPS without a CA Certificate warning.

Bind Username
The user name with which to connect to the LDAP server. For example, user01@bmc.com.

The password that corresponds to the user name entered in the Bind Username field. Check the box to modify the password.

The length of time that the appliance will wait before the login is assumed to have failed.

The location in the directory from which the LDAP search begins. For example: dc=tideway,dc=com. This restricts the search to the tideway container in the directory information tree. When you are not using group mapping (see LDAP Group Mapping), any BMC Atrium Discovery group you enter must be entered in lower case.
Search Template
Specifies the template to use to search for the user name in the LDAP database. For example: (userPrincipalName=%(username)s). This queries the LDAP database for the userPrincipalName attribute equal to %(username)s, the user name string entered at the login prompt.

If no response is received from the server in this length of time, the query times out. Select a timeout value from the drop-down list.

Defines how deep to search within the search base. "Base", or zero level, indicates a search of the base object only. "One level" indicates a search of objects immediately subordinate to the base object, but does not include the base object itself; this is typically used to search for objects immediately contained in the search base level. "Sub Tree" indicates a search of the base object and the entire subtree of which the base object's distinguished name is the topmost object. Select the required scope from the drop-down list.

The appliance queries the LDAP server for user information and caches the results to avoid overloading the LDAP server. Select a timeout value from the drop-down list. Values less than 10 minutes are not recommended. You can also clear the cache manually using the Flush Cache button.

The appliance queries the LDAP server for group information and caches the results to avoid overloading the LDAP server. Select a timeout value from the drop-down list. Values less than 1 hour are not recommended. You can also clear the user and group cache manually using the Flush Cache button.

The group mode determines the way that the LDAP server is queried for group information; it should match the LDAP server used by your organization. Select one of the following LDAP server types from the drop-down list: Microsoft Active Directory, SunONE Directory Server, Other.

The LDAP attribute name to search for when running a group query. The attribute is on the User node, and provides a list of distinguished names of groups that the user belongs to. For example, the attribute might be called "memberOf" and contain data such as "cn=sales,ou=groups,dc=bmc,dc=com". This field is user editable when the Other group mode is selected from the Group Mode drop-down (if the User node does not contain such an attribute, this field should be empty so the Membership Attribute on the Group node is used instead). When any other mode is selected, the field is automatically populated.

Group Query
The LDAP query that is used to find Group objects. It is usual to match the nodes' Object Class, for example: (objectclass=group). This field is user editable when the Other group mode is selected from the Group Mode drop-down. When any other mode is selected, the field is automatically populated.

The LDAP attribute name to search for to determine whether an individual is a member of a group. The attribute is on the Group nodes, and provides a list of names of users. For example, the attribute might be called "member". This field is user editable when the Other group mode is selected from the Group Mode drop-down. When any other mode is selected, the field is automatically populated.
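The %(username)s placeholder in the search template is a substitution slot: the name typed at the login prompt is spliced into the template before the query is sent to the server. The expansion can be previewed in the shell with printf (using %s in place of %(username)s), and the resulting filter can then be tried against your directory with the standard ldapsearch client; the server address and bind user in the comment are hypothetical.

```shell
# Preview the filter the appliance would send for a given login name.
# The appliance's template syntax is %(username)s; printf's %s stands in here.
template='(userPrincipalName=%s)'
filter=$(printf "$template" 'jsmith@bmc.com')
echo "$filter"    # (userPrincipalName=jsmith@bmc.com)

# To try the same filter against your directory (hypothetical server/user):
#   ldapsearch -x -H ldap://engineering.bmc.com:3268 \
#       -D 'user01@bmc.com' -W \
#       -b 'dc=tideway,dc=com' -s sub "$filter"
```

If ldapsearch returns exactly one entry for a test user, the template, search base, and scope are consistent with your directory layout.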
members of, that is, in LDAP form dc=tideway,dc=com,ou=engineering.... To configure the LDAP group mapping settings:
1. From the LDAP page, select the Group Mapping tab.
2. The LDAP Group Mapping page lists the LDAP groups that are assigned to BMC Atrium Discovery security groups. For each LDAP group, the appliance security groups to which it is assigned are listed, along with links for each action that you can perform on the group.
3. Select Enabled or Disabled from the drop-down list to enable or disable LDAP group mapping.
4. For each LDAP group listed, an edit link and a delete link are provided. To remove the LDAP group mapping, click Delete. To edit the security groups mapped to an LDAP group, click Edit.
5. Add enables you to add a new group mapping. To do this: Click Add. On the Add LDAP Group Mapping page, enter a search term for the common name into the LDAP Group field and click the Search button. A list of matches is displayed; if more than ten entries match, the first ten are shown and a label at the bottom of the list shows how many additional matches there are. Select the matching LDAP group from the list. The LDAP Group field is not case sensitive: if you use any upper case characters, their case is ignored, and all LDAP groups returned from the LDAP server are displayed in lower case. Select the appliance security groups to which you want to assign the LDAP group.
6. To save the mapping, click Apply.
Troubleshooting
If you receive a Can't Contact LDAP Server error in the Connection Status field, this may be caused by certificate problems rather than simple connectivity issues (wrong URI, port, and so forth). Check that the certificate you are using is the one you received from your LDAP administrator. If the login fails when attempting LDAP authentication, set the security log (/usr/tideway/log/tw_svc_security.log) level to debug. Where the account used to bind to the directory fails to authenticate, look for messages similar to the following:
-1285350512: 2010-08-13 10:00:46,843: security.authenticator.ldap: DEBUG: Attempt to auth bind as username "administrator" -1285350512: 2010-08-13 10:00:47,117: security.authenticator.ldap: DEBUG: LDAP passwd for "CN=Administrator,CN=Users,DC=generic,DC=com" not valid
If you are using group mapping and are experiencing login failures, check that the groups have been correctly defined. The security log shows the list of groups that the user is assigned to; if the list is empty, no groups have been mapped and no login is possible.
-1306330224: 2010-08-13 10:01:54,253: security.validator.ldap: DEBUG: LDAP.permitted(): Final mapped group list for user Administrator is [ ]
login). For each authentication module (except for the Standard Atrium Discovery Web Authentication module), the following controls are provided:
Disable link: click this link to disable the module. When a module is disabled, the link is replaced with an Enable link.
Configure link: click this link to open a dialog box to configure the module. These dialogs are described in the following sections.
Ordering controls: click the up or down arrow to move the module up or down. Click the barred up or down arrow to move the module to the top or bottom.
The page also provides links to the configuration pages for HTTPS and LDAP.
Note A colon is assumed to delimit fields in the subjectAltName value so the string will not be split correctly if a value contains a colon.
These are the Apache mod_ssl variables. See the Apache website for more information. 3. Enter the LDAP Attribute against which to check the user name.
Copyright 2003-2013, BMC Software
Cannot use system and other standard users You cannot log in as the system user or the other standard users unless they have an exactly corresponding RSA/LDAP user. You must create an RSA/LDAP user with permissions exactly corresponding to any default users that you use.
To configure RSA SecurID authentication:
1. Log in to the BMC Atrium Discovery UI using an LDAP account with permissions equivalent to the system user. Ensure you can access the Administration -> Web authentication page while logged in as this user.
2. Click Configure in the RSA SecurID Authentication row. There is a single editable field on the configure page: the Logout URL, which is required to log out via the web authentication framework. The default is /webauthentication?logoff?referrer=/ui.
3. Log out of the BMC Atrium Discovery UI.
4. Install and configure the RSA Authentication Manager according to the instructions in the documentation contained in the download. During the configuration of RSA SecurID, "Use RSA Token for Cross-Site Request Forgery Protection" must be set to disabled; otherwise, logging out from the BMC Atrium Discovery UI will fail. The installation requires that some environment variables are configured. These variables should be appended to /etc/sysconfig/httpd. A typical entry looks like this:
# RSA enablement export VAR_ACE=/var/ace export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/etc/httpd/rsawebagent
If the appliance is a virtual machine and you use VMware snapshots, you should ensure that you update the snapshot after configuring the RSA Authentication Manager. Rolling back to an earlier snapshot removes the shared secret and prevents subsequent logins. See the RSA Authentication Manager documentation for more information.
5. Navigate to the BMC Atrium Discovery URL. You are presented with the RSA SecurID login page.
6. Log in using the same LDAP account with permissions equivalent to the system user. You are now presented with the standard BMC Atrium Discovery login page.
7. Log in to BMC Atrium Discovery with the same LDAP account as you used in the previous step.
8. Navigate to the Administration -> Web authentication page and enable the RSA SecurID integration. If you cannot access the Administration -> Web authentication page, you must log out of BMC Atrium Discovery, log back in as the system user, and grant sufficient permissions to the RSA/LDAP user to access that page.
Once RSA SecurID Authentication is enabled in BMC Atrium Discovery, the BMC Atrium Discovery login screen is no longer displayed. To log in, enter your username, password, and the code from your SecurID token in the RSA SecurID login screens. You are authenticated against the RSA Authentication Manager, and once authenticated you are logged into BMC Atrium Discovery using the same username. If RSA SecurID Authentication is not enabled, the normal BMC Atrium Discovery login page is displayed, even after successfully logging in using the RSA Authentication Agent. If RSA SecurID Authentication is enabled but the RSA Authentication Agent is not installed or is installed incorrectly, the normal BMC Atrium Discovery login page is also displayed.
Configuring user authentication using HTTP Header (9.0 SP2 and later only)
This section contains instructions on how to integrate BMC Atrium Discovery with single sign-on (SSO) technologies that provide authentication using custom HTTP headers, such as CA SiteMinder. The HTTP header plugin scans each HTTP request for a specific HTTP header. If the header is present and contains a valid user ID, the user is authenticated; if not, the user is not authenticated. The header is assumed to contain the username or user ID, which is used in an LDAP query to obtain authorization. The LDAP query uses LDAP group mapping.
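The header check itself is simple, as the paragraph above describes. A minimal sketch follows, assuming a WSGI-style environ dictionary and a hypothetical header name; the actual header name is whatever your SSO product is configured to inject, and this is not the product's implementation.

```python
# Hypothetical header name; depends entirely on the SSO configuration.
# (A header "SM-USER" arrives in a WSGI environ as "HTTP_SM_USER".)
HEADER = "HTTP_SM_USER"

def authenticated_user(environ):
    """Return the user ID carried by the SSO header, or None if the
    header is absent or empty (i.e. not authenticated)."""
    user_id = environ.get(HEADER, "").strip()
    return user_id or None
```

In the real integration the returned ID would then be looked up via LDAP group mapping to determine the user's authorization.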
Warning HTTP header authentication is a simple authentication mechanism which requires additional protection:
HTTPS must be enabled with HTTP redirection.
LDAP support must be enabled.
A reverse proxy must be used, and BMC Atrium Discovery configured to accept HTTP requests only from the IP address or addresses of the proxy.
Enabling HTTP header authentication without securing the appliance in this manner leaves the appliance vulnerable to attack.
From the Security section of the Administration tab, select Audit. To search for events, enter search criteria in all or some of the following fields:
From: the start date and time of the search. The default for this field is 24 hours before the page was loaded.
To: the end date and time of the search. By default the To fields display the text Day Month Year hh mm, meaning that the logs are searched up to the current time.
User ID: a filter to search only for events logged for a particular user, for example, the reporter user.
Event group: a drop-down filter to search only for events belonging to a particular event group or category. The event group provides a means for viewing related event types. See Event groups for a list of event groups.
Events: a drop-down filter to search only for events of a specified type.
When you have entered the search criteria, click Run to start the search. The page is refreshed to show a results table below the search panel. You can only search the logs through the user interface (UI) using the fields on the Search audit records page. However, if you export the results list by clicking Export as CSV, you can use a spreadsheet or text editor to perform detailed searches on the data; for example, you can search for events on a specific host. Click Export as CSV and choose a location to save the file. Each item in the result row is a hyperlink to the detailed record of the event. The record data is divided into two sections: Standard Details and Additional Details.
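Once exported, the CSV can be filtered with any tooling. A sketch using the standard csv module follows; the column names ("Event", "User", "Summary") are illustrative assumptions, so check the header row of your own export.

```python
# Filter an exported audit CSV for rows mentioning a specific host.
# The column names here are hypothetical examples, not guaranteed headers.
import csv
import io

sample_export = io.StringIO(
    "Event,User,Summary\n"
    "login,reporter,Login from host1\n"
    "scan,system,Scan of host2\n"
)

rows = [r for r in csv.DictReader(sample_export) if "host1" in r["Summary"]]
```

The same pattern works for any criterion the UI search fields do not cover.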
Standard details
The standard details that are recorded for every event are described below:
Event: the type of event.
Event Group: the event group to which this event belongs. The purpose of the event group is to provide a filter for viewing related event types.
User: the ID of the user who initiated the event.
Full Name: the full name of the user who initiated the event.
User Groups: the names of the groups to which the user who initiated the event belongs.
When: when the event was logged.
Summary: a summary description of the event.
Additional details
The details shown in the Additional Details section vary from event to event. For example, the following information is provided for a Windows proxy that has been pinged: IP Address, Port, Windows proxy Name, and Windows proxy Type. When logging in to the user interface over an IPv6 connection, the client may use a temporary IPv6 address, and it is this temporary address that is reported in the appliance's audit log. Where temporary addresses are shown, tracing the particular computer from which the login came is difficult. To avoid this, you can disable temporary IPv6 addresses on client computers.
Event groups
Audited events are collected into the following groups:
Appliance Config
Audit Log
Consolidation
DIP
Datastore Edit
Discovery Config
Discovery Ruleset
ECA
Reasoning
Search
Security
Windows proxy
UI Access
cmdb-export
The events that belong to these groups are shown on the Audit page in the user interface.
There is no automatic purge of the audit information. When the audit information on the appliance becomes very large, you can use the appliance backup feature to create an archive.
A typical number of auditable events is approximately 1000 per day, which equates to approximately 90,000 events in three months. When deleting events, you can typically remove 500 events per second. Deleting 60,000 or more events causes the browser to time out; however, the deletion process continues.
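The arithmetic behind those figures is worth making explicit; a quick check under the stated rates:

```python
# Accumulation: ~1000 events/day over a three-month quarter (~90 days).
events_per_day = 1000
events_per_quarter = events_per_day * 90      # ~90,000 events

# Deletion: ~500 events/second, so 60,000 events take about two minutes --
# long enough for a browser request to time out while the work continues.
delete_rate = 500                             # events per second
seconds_to_delete = 60_000 // delete_rate     # ~120 seconds
```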
Maintenance mode is not a single-user mode. If you are performing any tasks that could affect other users (such as appliance backup) you should ensure that you are the only user logged in. Use the Administration -> Security -> Active Sessions window to verify this.
In maintenance mode, a flashing banner is displayed at the top of all pages. The flashing banner is a link to the Appliance Control page. 3. To leave maintenance mode, click Quit Maintenance Mode.
When non-system users are logged out, the login banner is displayed with an "under maintenance" message. When logging into an appliance that is in maintenance mode, you should ensure that your work does not interfere with that of other logged in users. For example, the appliance may be in maintenance mode so that an appliance backup can be taken.
If you choose to shut down the appliance, you will require full security access rights to restart the appliance.
3. Confirm the reboot action. If you choose to reboot the appliance, a reboot countdown page is displayed and after 600 seconds, you will be returned to the login page.
Baseline configuration
The appliance status is displayed in the Appliance Status icon in the dynamic toolbox. It is displayed as a Status link next to a traffic light symbol which shows the overall status of the appliance. See Appliance status for more information.
To view the appliance status, click Appliance Status in the dynamic toolbox. The appliance status drop-down shows the following information:
Appliance Name: the name of the appliance.
Appliance Time: the time read from the appliance's internal clock.
ECA Engines: the number of ECA engines running. The number of ECA engines affects the maximum number of concurrent discovery requests. For more information, see Configuring discovery.
Summary link: describes the overall status of the appliance. The summary link description is one of the following:
No Problems Detected: the status is green. No problems have been detected.
Status Information Available: the status is green, but at least one potential problem has been detected which has an information level message.
Minor Problems Detected: at least one minor problem has been detected with your appliance.
Major Problems Detected: at least one major problem has been detected with your appliance.
Critical Problems Detected: at least one critical problem has been detected with your appliance.
One or more of the following actions can be configured to occur if the appliance status is reported as critical, major, or minor:
Send Email
Restrict Network Access
Stop Discovery
You can configure the precise levels at which states change. This is described in the following section.
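The overall status follows a worst-severity-wins pattern: the summary reflects the most severe problem detected by any individual check. A sketch of that idea, with a severity ordering assumed from the levels listed above (this is an illustration, not the product's actual algorithm):

```python
# Assumed severity ordering, from the levels named in the status summaries.
SEVERITY = {"ok": 0, "info": 1, "minor": 2, "major": 3, "critical": 4}

def overall_status(check_results):
    """Return the worst severity among the individual check results;
    'ok' when no checks reported anything."""
    return max(check_results, key=lambda s: SEVERITY[s], default="ok")
```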
Services To Allow
Select one or more of the following services to remain open if network access is restricted according to the actions configured. Note that blocking services such as DHCP or ICMPv6 may prevent the appliance from obtaining an address, and it could become unreachable. Also, blocking HTTP, HTTPS, and SSH blocks all remote access to the appliance; in this instance you will need to use the system console to fix the problems.
DHCP
DHCPv6
DNS
HTTP
HTTPS
ICMPv6
LDAP
SMTP
SSH
For example, where a critical problem is detected, you may choose to limit network access to and from the appliance to HTTP or HTTPS only. To do this, select HTTP and HTTPS, and ensure that the other services are deselected. When a failure occurs that you have configured to restrict network access, the firewall is raised. The firewall is not lowered automatically when the problem is fixed; to lower it, you must restart the firewall.
If the appliance mail server settings point to an invalid mail server, configuring baseline to send email introduces a delay of approximately three minutes each time baseline is run, while the appliance attempts to contact the SMTP server. Baseline is run hourly by cron, and may be run manually by a user.
Checks performed
The checks that are performed for each item on the Appliance Baseline page are described in the following table:
Apache Configuration: checks to ensure that the Apache configuration has not been changed since the last baseline.
HTTPS Configuration: checks that the HTTPS configuration which allows secure web access (enabled/disabled) on the appliance is the same as that configured in the baseline.
Appliance Configuration Tripwire: checks the tripwire logs to ensure that no appliance configuration files have been added, deleted, or edited since the last baseline.
Eth0 Configuration: checks that the eth0 configuration on the appliance is the same as that configured in the baseline. The following items are checked: speed, duplex, and autonegotiation.
Appliance Firewall (Critical): checks that the firewall (iptables) configuration matches that recorded in the baseline.
Appliance Firewall (IPv6) (Major): checks that the IPv6 firewall (iptables) configuration matches that recorded in the baseline.
Appliance HTML Files Tripwire (Major): checks the tripwire logs to ensure that no HTML files have been added, deleted, or edited since the last baseline.
Appliance Run Level (Minor): checks whether the appliance run level matches that recorded in the baseline.
Appliance Specification (Major): checks whether the appliance specification matches that recorded in the baseline.
Appliance System Files Tripwire (Major): checks the tripwire logs to ensure that no system files have been added, deleted, or edited since the last baseline.
Application Configuration (Minor): checks to ensure that the application configuration has not been changed since the last baseline.
Application Server (Critical): checks that the UI service is alive.
AppServer Configuration (Minor): checks to ensure that the application server configuration has not been changed since the last baseline.
AppServer Start Script (Minor): checks to ensure that the application server start script has not been edited since the last baseline.
Atrium Credentials (Major): checks to ensure that the Atrium credentials have not been changed since the last baseline.
Atrium Discovery OS RPM (Minor): checks that the BMC Atrium Discovery Operating System (OS) RPM version number matches that in the baseline.
Atrium Discovery RPM (Critical): checks that the BMC Atrium Discovery RPM version number matches that in the baseline.
Audit Settings (Minor): checks to ensure that the audit settings have not been changed since the last baseline.
CMDB Sync Blackout Windows (Major): checks to ensure that the CMDB blackout windows settings have not been changed since the last baseline.
CMDB Sync (Exporter) Service (Major): checks to ensure that the CMDB Synchronization Export settings have not been changed since the last baseline.
CMDB Sync (Transformer) Service (Major): checks to ensure that the CMDB Synchronization Transformer settings have not been changed since the last baseline.
Consolidation (Major): checks to ensure that the consolidation settings (scanning or consolidation appliance and configured connections including status) have not been changed since the last baseline.
Coordinator Service (Major): checks to ensure that the Coordinator service settings have not been changed since the last baseline.
Coordinator start script (Minor): checks that the settings in the Coordinator start script match those in the baseline.
Crontab (Minor): checks that the cron tab setting on the appliance is the same as that configured in the baseline.
DataStore SoftLimit (Minor): checks that the datastore soft limit matches that in the baseline.
DDD Removal Blackout Windows (Major): checks to ensure that the DDD removal blackout windows settings have not been changed since the last baseline.
Default Scan Level (Minor): checks that the default scan level matches that in the baseline.
Discovery Configuration (Major): checks that the Discovery configuration matches that in the baseline.
Discovery File Content Filters (Major): checks that the file content filters configured on the appliance match those in the baseline.
Discovery Mode (Major): checks that the Discovery mode (Record/Playback/Normal) matches that in the baseline.
Discovery Scripts (Major): checks that the Discovery commands match those in the baseline.
Discovery Service (Critical): checks that the Discovery service is alive.
Discovery Start Script (Minor): checks that the following settings in the Discovery start script match those in the baseline: mode (record or playback), log level, and pool data expiry time.
DNS Configuration (Minor): checks that the following DNS settings match those in the baseline: name servers and domain.
Exclusion Ranges: checks to ensure that the exclude ranges have not been changed since the last baseline.
Export Adapter: checks that the Export Adapter matches that in the baseline.
Export Adapter Configurations: checks that the Export Adapter configurations match those in the baseline.
Export Exporter Configurations: checks that the Export Exporter configurations match those in the baseline.
Export Mapping Sets: checks that the Export mapping sets match those in the baseline.
File Export Credentials: checks to ensure that the file export credentials have not been changed since the last baseline.
Free_space_monitor start script: checks that the settings in the Disk space monitor start script match those in the baseline.
Installed Devices: checks whether the installed devices match those recorded in the baseline.
Integrations Configuration: checks to ensure that integration points and software credential groups, and their constituent queries and connections/credentials, have not been changed since the last baseline.
Integrations start script: checks that the settings in the Integration start script match those in the baseline.
JDBC Credentials: checks to ensure that the JDBC credentials have not been changed since the last baseline.
JDBC Drivers: checks that the JDBC drivers match those in the baseline.
Login Credentials: checks that the Discovery login credentials match those in the baseline.
Mainframe Credentials: checks to ensure that the Mainframe credentials have not been changed since the last baseline.
Miscellaneous Settings: checks to ensure that the miscellaneous settings have not been changed since the last baseline.
Model Service: checks that the model service is alive.
Model Start Script: checks to ensure that the model start script has not been edited since the last baseline.
NTP Configuration: checks whether the NTP configuration matches that recorded in the baseline.
NTP Running: checks whether the NTP status (enabled/disabled) matches that in the baseline. When ntpd is running, the message "ntpd is not configured to run at run level 5" is displayed; this is incorrect and can be ignored.
Operating System: checks whether the operating system version matches that in the baseline.
Options Service: checks to ensure that the options service settings have not been changed since the last baseline.
Options start script: checks that the settings in the options start script match those in the baseline.
Pattern Configuration Modification: checks that the pattern configuration matches that in the baseline.
Pattern Definition Modifications: checks that pattern definitions match those in the baseline.
Pattern Modification: checks that the patterns match those in the baseline.
Port Scan Settings: checks that the port scan settings match those in the baseline. The check is performed for each port that is enabled for TCP, UDP, or both.
Reasoning Service: checks that the Reasoning service is alive.
Reasoning Start Script: checks that the log level for Reasoning matches that in the baseline.
Reports Service: checks to ensure that the Reports service settings have not been changed since the last baseline.
Reports start script: checks that the settings in the Reports start script match those in the baseline.
Security Options: checks that the security service options match those in the baseline.
Security Service: checks that the security service matches that in the baseline.
Security Start Script: checks to ensure that the security start script has not been edited since the last baseline.
SQL Credentials: checks to ensure that the SQL credentials have not been changed since the last baseline.
Windows Proxy Availability: checks that all of the Windows proxies respond when pinged.
Windows Proxy Configuration: checks that the Windows proxy configuration on the appliance (not the external Windows proxies) matches that recorded in the baseline. This includes checking the type, version, and position in the Windows proxy order.
Windows Proxy Configuration File: checks to ensure that the winproxy.conf file on each connected Windows proxy has not been edited since the last baseline.
Windows Proxy Pool Configuration: checks that the Windows proxy pool configuration on the appliance (not the Windows proxies) matches that recorded in the baseline.
SNMP Credentials: checks that the Discovery SNMP credentials match those in the baseline.
SSL Appliance Key: checks that the appliance SSL key file MD5 checksums match those in the baseline.
SSL Certificate Authority: checks that the appliance certificate authority file MD5 checksums match those in the baseline.
Usage Data Collection: checks that the usage data collection configuration matches that recorded in the baseline.
Vault Service: checks to ensure that the Vault service settings have not been changed since the last baseline.
VMware Tools: checks that VMware Tools is installed and running. If not running on a VMware platform, the test is skipped. If the platform cannot be determined, VMware is assumed; in this case, if VMware Tools is not required, the test can be disabled.
vSphere Credentials: checks to ensure that the vSphere credentials have not been changed since the last baseline.
WebLogic Credentials: checks to ensure that the WebLogic credentials have not been changed since the last baseline.
mkdir -p /usr/tideway/var/tripwire/report
The tripwire directory is archived into the backup directory in a file called addm_tripwire_etc.tgz. The archive is not restored when the backup is restored but can be copied manually onto the restored appliance and recommissioned using the Commissioning tripwire passkeys procedure.
@@section GLOBAL TWROOT="/usr/tideway/tripwire/sbin"; TWBIN="/usr/tideway/tripwire/sbin"; TWPOL="/usr/tideway/tripwire/etc"; TWDB="/usr/tideway/tripwire/var/lib"; TWSKEY="/usr/tideway/tripwire/etc"; TWLKEY="/usr/tideway/tripwire/etc"; TWREPORT="/usr/tideway/var/tripwire/report"; ARCH="x86_64"; HOSTNAME="localhost";
3. If you want to monitor any additional files, add the full path to each file to the policy file.
4. If you want to monitor any additional directories, add the full path to each directory to the policy file.
5. Copy the /usr/tideway/etc/twpol.txt file to /usr/tideway/tripwire/etc/twpol.txt, overwriting the existing file.
6. Run the following command, which sets up the initial database and passwords allowing changes to the Tripwire configuration: /usr/tideway/tripwire/sbin/tripwire-setup-keyfiles
7. You are prompted to create a site and a local password. Record these passwords or you will need to reinstall the Tripwire database. The local password is required to remove Tripwire violations. The site password is required to update the Tripwire policy file.
8. You are prompted to sign the configuration file twcfg.txt and the policy file twpol.txt.
9. Change the ownership and permissions of the /usr/tideway/tripwire/etc/twpol.txt and /usr/tideway/tripwire/etc/twcfg.txt files to the tideway user. Enter the following commands:
cd /usr/tideway/tripwire/ chown tideway:tideway etc chmod 750 etc cd etc chown tideway:tideway twcfg.txt twpol.txt chmod 640 twcfg.txt twpol.txt
An error is reported as a database backup file is created. 4. Run the following command again to rebaseline the Tripwire database:
/usr/tideway/bin/tw_tripwire_rebaseline
This time, no errors are reported as no files have been added. The tripwire database is now initialised and baselined.
Warning This will cause all of the appliance baseline checks to be reset. Make sure that all existing baseline failures are addressed.
1. Run /usr/tideway/bin/tw_baseline or click Check Baseline Now in the user interface to execute all the baseline tests.
2. Verify that only tripwire-related tests are failing. Tripwire test names end with "tripwire". 3. Update the tripwire report and then update the appliance baseline as follows:
sudo /usr/tideway/tripwire/sbin/tripwire --check > /usr/tideway/var/tw_tripwire.txt /usr/tideway/bin/tw_baseline --rebaseline
Tripwire maintenance
Updating after a violation
When you use the tw_tripwire_rebaseline utility to rebaseline the Tripwire database, you accept that all files that are being monitored are correct. This procedure should be performed as the tideway user. To update the Tripwire database after an error: 1. Check the items that are reported in the violation report and ensure that the reported changes are what you expected. 2. Run the following command:
/usr/tideway/bin/tw_tripwire_rebaseline
2. Run the following command (on one line) to update the Tripwire policy file:
cd /usr/tideway/tripwire/etc/ sudo /usr/tideway/tripwire/sbin/tripwire --update-policy twpol.txt
You will need both the local and site password for this operation. 3. Check that the update has been performed correctly. Enter:
sudo /usr/tideway/tripwire/sbin/tripwire --check
The appliance status is updated. For more information about the tw_baseline utility, see tw_baseline.
Notes:
1. Configuration file location: the default configuration file can be found at /usr/tideway/data/default/graph-definition.txt. This default configuration file must not be changed.
2. Configuration file customization: to customize a configuration file, create /usr/tideway/data/customer/graph-definition.txt. This overlays the default configuration file.
Visualization definitions
The visualization definition section contains a number of definitions for visualizations. Each definition takes the form:
visualization_name (tab=tab_name) viewed_node_type type_of_dependency dependency_target_node_type type_of_dependency dependency_target_node_type ... type_of_dependency dependency_target_node_type
visualization_name: the name of the visualization. This is displayed in the visualizations list when viewing a node of the type defined in viewed_node_type.
tab_name: the tab under which to display the visualization. Valid values are: Home, Application, Infrastructure, Discovery, Reports, and Setup. If no tab is specified, no tab is selected when the visualization is displayed.
viewed_node_type: the type of node for which this visualization is available.
type_of_dependency: the type of dependency, for example, depended_upon or dependant.
dependency_target_node_type: the node type that is the target for the dependency. For example, in a dependency visualization describing the relationship between a business application running on a host and the host itself, the target is Host.
For example:
Dependency (tab=Infrastructure) SoftwareInstance depended_upon SoftwareInstance dependant SoftwareInstance running_on Host BusinessApplicationInstance depended_upon BusinessApplicationInstance dependant BusinessApplicationInstance running_on Host
This defines a visualization called Dependency which will be available in the visualizations list when viewing a SoftwareInstance or a BusinessApplicationInstance. When displayed, the Infrastructure tab will be selected. When you pick the SoftwareInstance visualization, you see the SoftwareInstance node as the starting point of the visualization, and then all SoftwareInstance and Host nodes related to that SoftwareInstance. Exactly which SoftwareInstance and Host nodes are shown depends on the definition of the depended_upon, dependant, and running_on dependency definitions, which are defined in the dependency definitions part of this file. The indentation is important and defines the structure of the visualization. An indentation must be composed of spaces, not tabs, and be a multiple of two spaces.
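The indentation rule above (spaces only, multiples of two) can be checked mechanically. A small sketch of such a validator, written for illustration; the definition strings are toy examples:

```python
# Validate the graph-definition indentation rule: every line must be
# indented with spaces only, and the indent width must be a multiple of two.

def valid_indentation(definition):
    for line in definition.splitlines():
        indent = line[:len(line) - len(line.lstrip())]
        if "\t" in indent or len(indent) % 2 != 0:
            return False
    return True

good = "Dependency (tab=Infrastructure)\n  SoftwareInstance\n    running_on Host"
bad = "Dependency\n   SoftwareInstance"   # three-space indent: invalid
```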
Dependency definitions
The dependency definition (type_of_dependency) defines the route from the viewed_node_type to the dependency_target_node_type. For example, the following visualization definition, from the first section of the file, shows all Hosts that a SoftwareInstance is running on:
Dependency SoftwareInstance running_on Host
The running_on Host relationship defines the route from the SoftwareInstance to the Host. One of these definitions must exist in the dependency definitions. For example:
SoftwareInstance running_on Host RunningSoftware:HostedSoftware:Host:Host
Here, SoftwareInstance is defined as the source node kind and Host is defined as the destination node kind, and the system will use the traversal RunningSoftware:HostedSoftware:Host:Host to get from any SoftwareInstance nodes currently in the visualization to any related Host nodes. A dependency definition can have multiple traversals chained one after the other in order to get to a node kind that is distantly related:
Host connected_to Subnet DeviceWithInterface:DeviceInterface:InterfaceOfDevice:NetworkInterface DeviceOnSubnet:DeviceSubnet:Subnet:Subnet
The visualization will show only Host and Subnet nodes, but in order to get to Subnet nodes from Host nodes, it will traverse through NetworkInterface nodes. You can also define a number of alternative paths by listing them on separate lines:
SoftwareInstance communication SoftwareInstance Server:Communication:Client:SoftwareInstance Client:Communication:Server:SoftwareInstance
Each path will be tried in turn, and all resulting nodes connected to the source node will be returned. Relationships can be wildcarded in the same way as in a generic search using ::
Traversal steps can have an optional '+' after them, for example:
SoftwareContainer:SoftwareContainment:ContainedSoftware:SoftwareInstance+
This causes the step to be followed repeatedly, adding nodes to the set of nodes found so far, until further traversals add no further nodes. The step is always evaluated at least once. A good use of this function is to follow a relationship that forms an arbitrarily deep hierarchy: for example,
the SoftwareInstances making up a BAI. Here it is possible to have a first-order SoftwareInstance with a second-order SoftwareInstance as its container, before reaching the BAI. Although not recommended and not often seen, a third-order SoftwareInstance can be inserted between the second-order SoftwareInstance and the BAI, and so on. This function navigates such a structure.
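The fixed-point behaviour of a '+' step can be sketched in Python. This is an illustrative model, not product code, and the node names in the example are hypothetical:

```python
def expand_plus(start_nodes, follow):
    """Model of a '+' traversal step: apply the step repeatedly,
    accumulating nodes, until a pass adds no new nodes. The step is
    always evaluated at least once."""
    found = set(start_nodes)
    frontier = set(start_nodes)
    while frontier:
        next_frontier = set()
        for node in frontier:
            for target in follow(node):
                if target not in found:
                    found.add(target)
                    next_frontier.add(target)
        frontier = next_frontier
    return found

# Hypothetical containment chain: a first-order SoftwareInstance (SI1)
# contained by a second-order SoftwareInstance (SI2), contained by the BAI.
edges = {"SI1": ["SI2"], "SI2": ["BAI"]}
print(sorted(expand_plus(["SI1"], lambda n: edges.get(n, []))))
# prints ['BAI', 'SI1', 'SI2']
```

However deep the containment hierarchy, the loop terminates once a pass over the frontier discovers nothing new, which mirrors the "until further traversals add no further nodes" behaviour described above.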
Attributes
Attributes can be set on a dependency definition that affect how it is rendered. For example:
SoftwareInstance communication.server SoftwareInstance (color=0f0,left_arrow) Server:Communication:Client:SoftwareInstance communication.client SoftwareInstance (color=0f0,right_arrow) Client:Communication:Server:SoftwareInstance
This displays lines between SoftwareInstance nodes representing client/server communications in green, and draws an arrow pointing at the server. Attributes appear in brackets after the definition of the dependency type and destination node kind. No spaces are allowed within the brackets, and the attributes are a comma-separated list of either flags or key=value pairs. The defined attributes are:
color=rgb: Sets the color of the line in the graph. It takes a three-character argument that is the color as an RGB tuple; each character is the value of that component as a hexadecimal digit (0-f).
left_arrow: Sets arrow heads on the line, pointing to the source node.
right_arrow: Sets arrow heads on the line, pointing to the target node. Both arrow attributes can be set, in which case both ends of the line have an arrow; if neither is set, the line is plain.
left_box: Traverses to the destination nodes using the given traversal, represents the source node as a box, and adds the destination nodes in the box.
right_box: Traverses to the destination node using the given traversal, represents the destination node as a box, and adds the source node in that box. Note that there must be only a single destination node when using right_box, because it is impossible to turn several destination nodes into separate boxes and put the source node inside each of them. It should be used when it is known there will always be at most one destination node, for example, the HostContainer for a Host.
Boxes only appear at the top level; they cannot be nested. If a later definition conflicts with an earlier one, for example by attempting to nest boxes or to put a node inside two different boxes, the later definition overrides the earlier one. The order depends on how the graph is constructed and is not predictable. The left_box and right_box attributes override the color, left_arrow, and right_arrow attributes.
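As a sketch of the right_box case described above, a definition along these lines could place a Host inside its HostContainer box. The dependency name and the relationship and role names in the traversal are illustrative, not taken from this document:

```
Host host_container.box HostContainer (right_box) ContainedHost:HostContainment:HostContainer:HostContainer
```

Because a Host has at most one HostContainer, the single-destination restriction on right_box is satisfied.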
Special traversals
The :NetworkConnections:: traversal, which can be used with Hosts and DiscoveredProcesses, connects nodes based on observed communication information. Observed communication information is directly discovered, rather than built by patterns as specific communication relationships. DiscoveredNetworkConnections and ListeningPorts are used to find this information. These types of traversal are quite slow.
given edge in the visualization means. It is also recommended that, when defining a pair of traversals for moving between nodes in either direction (e.g. from Host to SoftwareInstance and from SoftwareInstance to Host, both using the "RunningSoftware" relationship), both traversals are given the same name. In other words, instead of defining the two traversals like this:
Host has_running_on_it SoftwareInstance (right_arrow) Host:HostedSoftware:RunningSoftware:SoftwareInstance SoftwareInstance is_running_on Host (right_arrow) RunningSoftware:HostedSoftware:Host:Host
The latter form ensures that the edges all point towards the depended-upon node (the Host), and also keeps the name the same. If the name and attributes (color and dashed) are the same, and the source and destination nodes and arrow direction are opposites, the two edges are merged. Without this, the visualization can become ugly, as many edges will be double edges. There is one case where a pair of opposite traversals cannot have the same name: when the source and destination nodes are the same node kind. For example:
SoftwareInstance depended_upon SoftwareInstance (color=00f,right_arrow) Dependant:Dependency:DependedUpon:SoftwareInstance dependant SoftwareInstance (color=00f,right_arrow) DependedUpon:Dependency:Dependant:SoftwareInstance
Ideally the traversals should be called the same thing, as they are opposites of each other. But because the source and destination nodes are the same, there is no way to distinguish them other than to use a different traversal name. In this case, the visualization might end up with two edges between a pair of SoftwareInstances, one pointing one way and labelled "depended_upon" and the other pointing the other way and labelled "dependant". It is a convention to use the name "xxxxx.box" for "box" traversals, i.e. those that put the source or destination nodes in a box by using left_box or right_box. This is because the non-box case is often also required in some visualizations. With this naming scheme, it is clear which form is being used.
Tooltip definitions
By default, the attributes shown as a tooltip when a user hovers over a node are read from the summary list defined in the taxonomy. The tooltip section allows that to be overridden. For example:
Host name os_class
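Following the same pattern, a tooltip override could be defined for another node kind. The attribute names below are illustrative and must exist in the taxonomy for the node kind:

```
SoftwareInstance name type version
```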
The options on the page are described below. The field names for some options were not preserved in this extract.

Directly Discovered Data removal: Specify the age past which Discovery Accesses and all the related DDD nodes are removed (the default is 28 days). Choose from: Quarterly (90 days), Monthly (28 days), Fortnightly, Weekly. If the setting does not match one of these options, the value is shown as "--custom settings--". In a demonstration appliance only, the default setting for aging is Never, meaning that demonstration data never ages out of the system. This setting is not available for any other type of appliance, and demonstration appliances should never be used for anything other than demonstrations. For guidelines, see Modifying DDD, host, and software instance aging limits.

Specify the time elapsed before destroyed nodes are purged from the datastore. The default is one year. To change this, choose one of the following periods from the drop-down list: Never, 30 days, 90 days, 180 days, One year, Two years.

Specify the time elapsed before history entries are purged from the datastore. The default is never. To change this, choose one of the following periods from the drop-down list: Never, 30 days, 90 days, 180 days, One year, Two years.

When an IP address is designated the preferred IP address for a host, Discovery does not perform scans on other known IP addresses for that host. This is referred to as scan optimization. After a specified Scan optimization timeout period, Discovery scans other known IP addresses for that host to ensure that they are still non-preferred and on the same host. Specify one of the following timeout periods from the drop-down list: 1 - 10 days, 14 days, 21 days.

Specify the host/network device/printer/SNMP managed device/mainframe aging time limit (the default is 10 days): 2 - 10 days, 14 days, 21 days, 28 days. For guidelines, see Modifying DDD, host, and software instance aging limits.

Specify the number of host/network device/printer/SNMP managed device/mainframe access failures: 1 - 10 failures.
Time to elapse before a software instance/runtime environment/storage can be aged: Specify the time to elapse before a software instance/runtime environment/storage node can be aged (the default is 10 days): 1 - 20 days, then steps of 5 up to 100 days. For guidelines, see Modifying DDD, host, and software instance/runtime environment aging limits.

Minimum number of failed accesses before a software instance/runtime environment/storage is aged: Specify the number of failed accesses before a software instance/runtime environment/storage node is aged: 1 - 20 failures, then steps of 5 up to 45 failures.

Dark space suppression scheme: The scheme to use for dark space suppression. Select either Keep Most Recent or Remove All from the list. See Dark space DiscoveryAccess nodes for a description of this setting.

Cache Size: The size of the disk cache. The default is to set it automatically, displayed in the drop-down list as Automatic (N.n GB). Do not change the value of this setting unless you are advised to by Customer Support. You must restart the appliance for a change to this setting to take effect.

Size Warning: The maximum size of the datastore. When this limit is reached, a warning is displayed in the Appliance Status page. This is a soft limit and does not prevent the datastore growing beyond the specified limit. You can select any value from 10 to 100 GB in steps of 5 GB from the list. The default is 30 GB.

Checkpoint Interval: The interval between datastore checkpoints. The larger the interval, the more data has to be written to the checkpoint files; the smaller the interval, the less data needs to be written, though the writing takes place more often. The default is 15 minutes. You must contact Customer Support before you make any changes to this setting.
Best Practice: BMC recommends that you do not attempt to fine-tune the model maintenance parameters. Doing so can make BMC Atrium Discovery highly reactive and have negative consequences, such as a dramatic increase in the size of the datastore and additional load on your target estate. If you do alter the model maintenance parameters, BMC recommends that you vary them by no more than half or twice the standard settings detailed in the preceding table.
However, there might be occasions when modifying the defaults is necessary, particularly if you scan your estate at different intervals and need to keep close control of disk consumption. The following list shows recommended settings based on your scanning frequency.

Every 2 days: Maintain DDD at 28 days, and decrease the Host and Software Instance access failures to 4 (to account for half as many expected scans).
2 times per day: Decrease DDD to 14 days (to cap how much space it consumes, unless you increase the disk space to more than the sizing guidelines), and increase the Host and Software Instance access failures to 12-14 (to account for scanning at twice the expected rate).
For more information about aging and how it fits in the node removal process, see How nodes are removed.
When you view the DDD Removal Blackout Windows tab, the active blackout window is highlighted. If multiple blackout windows are active, the one with the longest time remaining before it ends is highlighted. This is not automatically refreshed. DDD nodes are removed in batches which are not interrupted. Once removal starts, it continues to completion. Therefore, if a batch removal is in progress at the beginning of a DDD removal blackout window, it will continue into the blackout until completion. In normal operation this should take no more than a few minutes.
2. Complete the following fields:
PDF generated from online docs at http://discovery.bmc.com BMC Atrium Discovery 9.0
Comment: Enter a descriptive comment for the blackout window.
Frequency: Select a frequency for the window to operate, for example Daily, Weekly, or Monthly. For a weekly blackout window, you are provided with buttons for each day; select the day or days that you want the window to operate. For a monthly blackout window, you are provided with buttons for each day in the month; select the day or days that you want the window to operate. Alternatively, select the No Removal On The radio button and choose one of First, Second, Third, Fourth, or Last, and the day that you want the window to operate. In this way you can select the second Tuesday of the month, and so forth.
Start Time: Select a time for the window to start.
Duration: Select a duration in hours. This is the length of time that the blackout window operates and can be from 1 to 24 hours. For a daily blackout window the maximum number of hours you can select is 23. This prevents you from inadvertently scheduling a 24/7 blackout.
3. Click OK.
Visualization cache
Complex visualizations that need to traverse many relationships to build their data, such as the Application Dependencies - Software View, can be slow to load. Caching of generated visualizations has been introduced to minimize this delay. Note that cached data is not live, so using this feature can result in visualizations that are not consistent with the current state of the datastore. This is particularly important when using the forever setting. Changing either visualization cache setting requires a restart of the tideway service to take effect. Restarting the tideway service clears the visualizations from the cache.
NTP and VMware Tools

If you are using NTP on a BMC Atrium Discovery Virtual Appliance, you should disable VMware Tools time syncing. This is explained in VMware's timekeeping best practices for Linux-based virtual machines, under the heading VMware Tools time synchronization configuration.
To configure the NTP client you must be logged in to the command line as the root user. 1. Edit the /etc/ntp.conf file. 2. Search for the lines beginning server.
server 0.rhel.pool.ntp.org server 1.rhel.pool.ntp.org server 2.rhel.pool.ntp.org
3. Replace the server entries with the IP address or hostname of the NTP server or servers with which you want to synchronize. For example:
server ntp.tideway.com server 172.17.1.24
4. Save the file. 5. Configure the NTP client service to start at run level 3 when the appliance boots. Enter:
[root@localhost] # /sbin/chkconfig --levels 3 ntpd on [root@localhost] #
6. Check to ensure that this change has been made correctly. Enter the following command and ensure that the output is the same as that shown:
[root@localhost] # /sbin/chkconfig --list ntpd
ntpd    0:off   1:off   2:off   3:on    4:off   5:off   6:off
[root@localhost] #
/usr/tideway/bin/tw_tripwire_rebaseline
For more information about tripwire and baselining the appliance, see Baseline configuration.
Message - ntpd is not configured to run at run level 5

The message "ntpd is not configured to run at run level 5", which is displayed in the appliance baseline window, is erroneous. It is displayed even when ntpd is running.
Supported utilities

All command line utilities that are supported in BMC Atrium Discovery are documented in this section. Any utility that is not documented in this section is explicitly unsupported. The supported command line utilities are: tw_adduser tw_backup tw_baseline tw_check_reports_model tw_config_dashboards tw_convert_reports tw_cron_update tw_deluser tw_disco_control tw_disco_export_platforms tw_disco_import_platforms tw_ds_offline_compact tw_excluderanges tw_imp_ciscoworks tw_imp_csv tw_injectip tw_listusers tw_model_init tw_passwd tw_pattern_management tw_query tw_reasoningstatus tw_remove_darkspace tw_restore tw_scan_control tw_sign_winproxy_config tw_tax_deprecated tw_tax_export tw_tax_import tw_terminate_winproxy tw_tripwire_rebaseline tw_upduser
Note Although command line utilities offer a potentially faster or more convenient way to perform a specific function, they may also cause unintended consequences that might compromise your environment if not used carefully.
When using command line utilities, particularly long-running processes such as tw_ds_offline_compact, it is recommended that you use the screen utility to prevent any problems that may arise from dropped connections. An example of using screen is shown in the tw_ds_offline_compact documentation.
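As a sketch of that approach, a long-running utility might be wrapped in a named screen session (the session name here is arbitrary, and the compaction command is shown as a comment rather than run directly):

```sh
# Start a named screen session
screen -S compact

# Inside the session, run the long-running utility, for example:
#   /usr/tideway/bin/tw_ds_offline_compact

# Detach with Ctrl-A then D; the utility keeps running even if the
# SSH connection drops. Later, reattach to check on progress:
screen -r compact
```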
The common command line options are described below. The option names for the first three entries were not preserved in this extract.

Specifies the number of backup log files that are preserved.
Specifies the log file that log messages are written to.
Specifies the logging level as one of the following values: debug (logs all messages), info (logs critical, error, warning, and information messages), warn (logs critical, error, and warning messages), error (logs critical and error messages), crit (logs only critical messages).
--passwordfile=FILE: Specifies a file from which the password is to be read. This is only relevant for utilities that have the --username option. Note that this file is not encrypted, though you can set the file permissions to owner-only (chmod 600 passwordfile.txt) to restrict access to the file. For more information about password policies, see Managing security policies.
-p, --password=PASSWD: Specifies the password of the BMC Atrium Discovery user. If no password is specified, you are prompted for one. This is only relevant for utilities that have the --username option. For more information about password policies, see Managing security policies.
-u, --username: The user to run the utility as. This must be a valid BMC Atrium Discovery UI user such as the system user, not a username used to access the command line via SSH.
-v, --version: Displays the BMC Atrium Discovery version number, copyright information, and the name of the utility.
Warning You should not use any service utilities (those named tw_svc_name), because the improper use of a service-based command could have potentially adverse results on your system.
tw_adduser
The tw_adduser utility enables you to add a new user to the system and assign the user to one or more groups.
Recommendation Use the BMC Atrium Discovery user interface to perform the functionality provided by the tw_adduser command line utility (see Enabling other users). If you choose to run the utility, read the documentation in this section to learn its usage and to understand the risks and potential impact on your environment.
To use the utility, type the following command at the $TIDEWAY/bin/ directory:
where:
username is the name of the new user. If you do not specify a user name, BMC Atrium Discovery will use the default, system.
options are any of the options described in the following table and the common command line options described in Using command line utilities.

-f, --fullname=ARG: (Optional) Specifies the full name of the user.
-g, --groups=ARG: (Optional) Specifies additional groups to add a user to (the default value is public). The value public is also the default when you select the Groups option from the Add User page on the Administration tab.
User example
In the following example, you add a new administrative user to multiple groups by using the utility with the -g option and a comma-separated list.
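The example command itself was not preserved in this extract. As a hedged sketch using only the options documented above (the username, full name, and group names are illustrative):

```sh
$TIDEWAY/bin/tw_adduser --fullname="Jane Admin" -g admins,appmodel jane
```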
tw_backup
The tw_backup utility enables you to create a backup of a BMC Atrium Discovery system on a local directory on the appliance, an SSH enabled remote system, or a Windows share. To use the utility, type the following command:
tw_backup [options]
where: options are any of the options described in the following table and the common command line options described in Using command line utilities.
Common password options are described in this table, to make it clear whether the password referred to is for the local or the remote system.
Backing up the system when a baseline run is in progress results in critical baseline alerts. Baseline checks are run periodically by the BMC Atrium Discovery system. If a baseline check is running when a backup is started, critical baseline alerts may be reported. The backup is unaffected by these alerts. The automatic baseline check schedule can be modified using the tw_cron_update utility.
The options are described below. The option names for some entries were not preserved in this extract.

--backup-dir: Specifies a remote backup directory relative to the user's home directory on SSH servers, and the top-level share directory in Windows shares. This option is required for SSH and Windows shares, unless the alternative --unc-path is specified for Windows shares.
--backup-local: Create a local backup in the $TIDEWAY/var/backup directory. Only one local backup can be stored.
Create a backup on a Windows share.
Create a backup on a remote SSH server.
Specify an email address to receive notification when the backup completes. This depends on email being correctly configured on the appliance.
--exclude-vault: Exclude the credential vault from the backup. Appliance UI users are always backed up and restored, regardless of this setting.
--fix-interrupted: Unlock the system after a previous backup has failed to complete, starting services if possible.
--force: Force the creation of a backup when the services are down. This may omit some auditing of the action.
Display help on standard options and exit.
The hostname or IP address of the remote host.
Do not verify (md5) the backup after it has been created.
Enter some free text notes about the backup.
Overwrite the existing local backup.
The user to run the utility as. This must be a valid BMC Atrium Discovery UI user such as the system user, not a username used to access the command line via SSH.
-p, --password=PASSWD: Specifies the password of the BMC Atrium Discovery user. If no password is specified, you are prompted for one. This is only relevant for utilities that have the --username option. For more information about password policies, see Managing security policies.
--passwordfile=FILE: Specifies a file from which the password is to be read. This is only relevant for utilities that have the --username option. Note that this file is not encrypted, though you can set the file permissions to owner-only (chmod 600 passwordfile.txt) to restrict access to the file. For more information about password policies, see Managing security policies.
-P, --port=PORT: The port on the remote SSH server. Only required if the default SSH port (22) is not used.
--remote-password=PASSWORD: The password for the remote user on the remote system. For SSH servers or Windows shares.
--remote-user=USER: The username for the remote user on the remote system. For SSH servers or Windows shares. To specify the domain for Windows shares, use the following syntax: --remote-user=user@domain
-S, --share=SHARE: The share name of the remote Windows share, for example sharename. This option is required for Windows shares, unless the alternative --unc-path is specified.
--stop-services: Stop services to create the backup. Required when creating a backup.
--unc-path=PATH: The UNC path for the Windows share, for example \\sharename\foldername. You can specify this as an alternative to --share and --backup-dir.
-v, --version: Display version information and exit.
User examples
In the following examples, the local username is system and the password is not specified on the command line. The remote SSH username is tideway and the password is password. The remote Windows share is called remshare, the corresponding username is remdom/tideway and the password is password. The utility prompts for the local password after you enter the command. Type the commands on a single line; line breaks are provided in the examples to make them easier to read.
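The example commands themselves were not preserved in this extract. As a hedged sketch, a local backup using only the options documented above might look like this:

```sh
# Stop services and create a local backup, running as the UI user "system"
# (per the common options, you are prompted for the local password)
$TIDEWAY/bin/tw_backup --backup-local --stop-services -u system
```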
tw_baseline
The tw_baseline utility enables you to audit your appliance for existing state, inventory, and relationships between configuration items.
Recommendation Use the BMC Atrium Discovery user interface to perform the functionality provided by the tw_baseline command line utility (see Baseline configuration). If you choose to run the utility, read the documentation in this section to learn its usage and to understand the risks and potential impact on your environment.
Options to the utility specify how the baseline checks are performed or displayed, or determine what action to take. To use the utility, type the following command at the $TIDEWAY/bin/ directory:
tw_baseline [options]
where options are any of the options described in the following table and the common command line options described in Using command line utilities. The option names for some entries were not preserved in this extract.

--conditional: Specifies to check for missing data and to baseline only that data.
--config=FILE: Specifies to baseline a configuration file, where FILE is the name of the file.
--csvfile=FILE: Specifies to baseline a CSV output file, where FILE is the name of the file.
--interactive: Specifies to run a baseline check in interactive mode, whereby you are prompted to respond Yes (Y) or No (N) to run specific checks until there are no further checks available. For checks that you request to run, results are displayed after you have responded to all checks. Checks that you have declined to run are skipped, and no further actions are run.
--no-actions: Specifies to run all baseline checks and perform no subsequent action after the checks are complete.
Specifies to run the baseline and perform no subsequent checks.
Specifies to run the baseline with no terminal highlighting.
Specifies to rebaseline the check data.
Specifies to rebaseline a status output file, where FILE is the name of the file.
Specifies to run the baseline and display information messages for checked data.
User example
In the following example, type the commands on a single line. Line breaks are provided to make the example easier to read.
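The example command itself was not preserved in this extract. As a hedged sketch using only the options documented above:

```sh
# Check for missing data and baseline only that data
$TIDEWAY/bin/tw_baseline --conditional
```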
tw_tripwire.txt file is updated with the details. The monitor that sets the appliance status in the user interface checks the tw_tripwire.txt file hourly and sets certain restrictions if configured. If you have rebaselined the Tripwire database, run the following commands (as the tideway user) to ensure that the correct status is shown in the user interface.
sudo /usr/tideway/tripwire/sbin/tripwire --check > /usr/tideway/var/tw_tripwire.txt /usr/tideway/bin/tw_baseline
The appliance status is updated. For more information about using Tripwire and baseline configuration, see Baseline configuration.
tw_config_dashboards
The tw_config_dashboards utility enables you to configure and customize dashboards in BMC Atrium Discovery.
Recommendation Use the BMC Atrium Discovery user interface to perform the functionality provided by the tw_config_dashboards command line utility (see Using and customizing dashboards). If you choose to run the utility, read the documentation in this section to learn its usage and to understand the risks and potential impact on your environment.
To use the utility, type the following command at the $TIDEWAY/bin/ directory:
where:
title is the title of the dashboard. If a title is not specified, an error is displayed.
options are any of the options described in the following table and the common command line options described in Using command line utilities.

--lock: Specifies to lock the dashboard so that it is read-only (the default).
--ls: Specifies to display a list of the current dashboards and whether they are locked or unlocked. For this option only, a title does not need to be specified.
--unlock: Specifies to unlock the dashboard so that it can be edited.
User example
In the following example, type the commands on a single line. Line breaks are provided to make the example easier to read.
Changing a dashboard
By default, dashboards are configured to be read-only. If you want to customize a dashboard, you first need to make it accessible using the --unlock option:
$TIDEWAY/bin/tw_config_dashboards --unlock Default
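When you have finished editing, the dashboard can presumably be returned to read-only using the documented --lock option; a sketch:

```sh
$TIDEWAY/bin/tw_config_dashboards --lock Default
```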
tw_convert_reports
The tw_convert_reports utility enables you to manually customize reports and channels on the appliance. This utility can serve as a standalone tool to manually convert reports from the old format to the new format (introduced in Tideway Foundation 7.3) after you have upgraded BMC Atrium Discovery and started the system.
Note The new report format was introduced in Tideway Foundation 7.3. Therefore, the utility is required only for report files that have been manually copied from an appliance version earlier than 7.3 to an appliance version later than 7.3. If you upgrade to a BMC Atrium Discovery version later than 7.3, the utility runs automatically.
To use the utility, type the following command at the $TIDEWAY/bin/ directory:
where:
reports_file is the name of the XML reports file to be converted. If a file name is not specified, an error is displayed.
options are any of the options described in the following table and the common command line options described in Using command line utilities.

-d, --debug: Specifies to display debug messages.
-r, --rename: Specifies to rename the input file reports_file.old and save the converted file as reports_file. If there are errors, they are written to the terminal and the file is not converted.
-v, --verbose: Specifies to display errors and warning messages. If no errors are reported, the reports and charts work the same way they did in the previous version. These messages are also written in the comments section of the converted file.
User example
In the following example, type the commands on a single line. Line breaks are provided to make the example easier to read.
$ cd /usr/tideway/data/custom/reports
$ ls
30ListenerReports.xml
$ $TIDEWAY/bin/tw_convert_reports 30ListenerReports.xml
Generated converted_30ListenerReports.xml (0 errors, 2 warnings)
$ ls
30ListenerReports.xml  converted_30ListenerReports.xml
$
The file is converted and saved as a new file prepended with converted_. The original file is not changed.
tw_cron_update
The tw_cron_update utility is used to manage the tideway user's cron entries.
Warning Do not use crontab -e to edit the tideway user cron directly on the appliance.
To use the utility, type the following command at the $TIDEWAY/bin/ directory:
tw_cron_update
Cron overview
BMC Atrium Discovery makes use of cron on the appliance to run various housekeeping tasks. This means that the crontab must not be edited directly on the appliance, but must instead be managed using tw_cron_update. This way, BMC Atrium Discovery can manage cron entries across upgrades without affecting, or being affected by, any local customizations. Cron entries are stored in $TIDEWAY/etc/cron. Each cron entry has its own file, identified by the .cron file extension. A cron entry is created by adding a new file ending in .cron containing standard cron formatted commands. Similarly, entries can be edited or removed by editing or removing .cron files. Every time it is run, $TIDEWAY/bin/tw_cron_update checks that the crontab has not been edited since tw_cron_update was last run. A note on the format of a cron entry is provided in the tw_cron.header file. For full details, see the Red Hat Enterprise Linux version 6 cron documentation, or the version 5 cron documentation, depending on your operating system. To determine whether your appliance is running on RHEL5, check the footer of any UI page: the last line displays the release and build number, and if the appliance is running on RHEL5 this is stated between the release number and the build number.
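For example, a custom entry might be added as a new file such as $TIDEWAY/etc/cron/my_export.cron. The file name, schedule, and command below are purely illustrative; see the tw_cron.header file for the exact format expected:

```
# Run a custom export script at 02:30 every day (illustrative command)
30 2 * * * /usr/tideway/custom/run_export.sh
```

After adding or changing a .cron file, run $TIDEWAY/bin/tw_cron_update so that the change is applied to the live crontab.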
Note If crontab has been directly edited, tw_cron_update can no longer manage cron, and you must manually resolve any differences before continuing. A copy of the expected cron configuration is stored as $TIDEWAY/etc/cron/tw_cron.previous whenever tw_cron_update is run; this can be compared to the live configuration. If this file does not exist, the utility has never been used and the live crontab should be treated as the default.
tw_deluser
The tw_deluser utility enables you to delete a BMC Atrium Discovery user. To use the utility, type the following command at the $TIDEWAY/bin/ directory:
where: username is the name of the user to delete options are any of the common arguments described in Using command line utilities.
User example
In the following example, a user deletes another user named Joe with a specific logging level of info.
tw_disco_control
The tw_disco_control utility enables you to manually control specific functions of the BMC Atrium Discovery process. To use the utility, type the following command at the $TIDEWAY/bin/ directory:
where: opt=val ... is a list of options to be set. options are any of the options described in the following table and the common command line options described in Using command line utilities.
Warning Use caution when using the --emergency, --start, and --stop options to start or stop BMC Atrium Discovery. Stopping discovery does not stop the Reasoning process. You must restart the discovery process before you can restart any scheduled scans.
Specifies to display current devices.
Specifies to stop BMC Atrium Discovery immediately. All scheduled discovery scans are stopped.
Specifies the passphrase of the vault.
Specifies to set BMC Atrium Discovery into playback mode.
Specifies that the user does not receive any informational messages.
Specifies to set BMC Atrium Discovery into recording mode.
Specifies to set BMC Atrium Discovery into standard operating mode. It cannot be set to playback or record mode.
Specifies to start BMC Atrium Discovery. All scheduled discovery scans are started.
Specifies to stop BMC Atrium Discovery. All scheduled discovery scans are stopped.
Specifies to cancel a specified test.
Specifies to remove a specified test.
Specifies to display current tests.
Specifies the name of the BMC Atrium Discovery user. If no name is specified, BMC Atrium Discovery uses the default, system.
User example
In the following example, you can stop a discovery scan in progress.
Any regular scans and snapshot scans currently in progress will stop, and no subsequent scans can be started until you restart the discovery process.
tw_disco_export_platforms
The tw_disco_export_platforms utility enables you to export the customizable platform scripts used by BMC Atrium Discovery so that they can be copied to another appliance, where they can subsequently be imported using tw_disco_import_platforms. After the scripts are imported, the corresponding platforms are displayed on the Administration => Discovery Platforms page. To use the utility, type the following command at the $TIDEWAY/bin/ directory:
tw_disco_export_platforms [options]
where options are any of the options described in the following table and the common command line options described in Using command line utilities.
Specifies to only export default platform scripts (not current scripts).
Specifies the name of the output file. The default name is platforms.xml.
Specifies the name of the BMC Atrium Discovery user. If no name is specified, BMC Atrium Discovery uses the default, system.
User example
In the following example, type the commands on a single line. Line breaks are provided to make the example easier to read.
tw_disco_import_platforms
The tw_disco_import_platforms utility enables you to import the customizable platform scripts used by BMC Atrium Discovery after they have been exported from another appliance using tw_disco_export_platforms. After the scripts are imported, the corresponding platforms are displayed on the Administration => Discovery Platforms page. To use the utility, type the following command at the $TIDEWAY/bin/ directory:
where: files is a list of files to be imported (except when you use the --reset option). options are any of the options described in the following table and the common command line options described in Using command line utilities.
--reset: Resets all commands to the default value. This is equivalent to clicking Reset All on the Administration => Platforms page (where only commands that have been customized are displayed).
-u, --username=NAME: Specifies the name of the BMC Atrium Discovery user. If no name is specified, BMC Atrium Discovery uses the default, system.
User example
In the following example, type the commands on a single line. Line breaks are provided to make the example easier to read.
tw_ds_offline_compact
Over time, the database files within the datastore can become fragmented, meaning that the data within them is structured inefficiently, the files take up an unnecessarily large amount of disk space, and data access speed can suffer. The tw_ds_offline_compact utility enables you to compact the datastore by writing new copies of the database files. As it creates the new files, the data is packed more efficiently, helping to reclaim disk space lost to fragmentation.

The tool operates on an offline system, meaning that the tideway services must be stopped when you use it. Compacting the datastore may take a long time; for a large datastore, this may be many hours. Once a compaction starts, you should not interrupt it. To prevent a lost terminal connection from interrupting the compaction, run tw_ds_offline_compact inside the screen utility, which is installed on the appliance. The user example below shows how to do this.

To use the utility, enter the following command:
tw_ds_offline_compact [options]
where options are any of the options described in the following table and the common command line options described in Using command line utilities.
Warning Because the utility touches all of the data, you must create a backup of the datastore before running it.
--fix-interrupted: Fix the databases following an interrupted compaction. See recovering from an interrupted compaction.
--no-questions: Proceed immediately with no questions.
--help: Displays help messages and exits.
User example
The following example shows backing up a datastore and then compacting it using the tw_ds_offline_compact utility.
The utility checks that the tideway services have been stopped and that there is sufficient disk space to continue. It then reports the data directory, largest database file, and the free space available. Before proceeding you must confirm that you have made a backup of the datastore.
5. Enter yes to confirm that you have made a backup of the datastore. You are then prompted to confirm that you want to start the compaction.
6. Enter yes to start the compaction. The utility starts and reports progress until completion. Do not interrupt the process.
7. Start the tideway services. Enter the following command:
[tideway@appliance01 ~]$ sudo /sbin/service tideway start
If there is only one screen listed, you may re-attach to it with a simple command:
[tideway@appliance01 ~]$ screen -r
The virtual terminal is recovered and you can see how the compaction is progressing until completion.
3. Start the tideway services. Enter the following command:
sudo /sbin/service tideway start
To recover from an interrupted compaction, do one of the following:
Perform the compaction using the tw_ds_offline_compact utility again and allow it to complete without interruption. See the procedure above.
Run the tw_ds_offline_compact utility again with the --fix-interrupted option. This fixes the datastore but does not perform any more compaction.
tw_excluderanges
The tw_excluderanges utility enables you to exclude IP ranges from discovery. To use the utility, type the following command at the $TIDEWAY/bin/ directory:
where: args is one of the following arguments:
--add specifies a list of files or IP ranges to exclude
--replace specifies a list of files or IP ranges to replace the current list
--remove specifies a list of range IDs to remove
Note If you do not select an argument in the command, a list of the currently excluded ranges is displayed, which includes the exclude range ID and additional information about that range. You could redirect this output to a file and then clean it up in a text editor to serve as a file which could then be imported.
options are any of the options described in the following table and the common command line options described in Using command line utilities.
-a, --add: Specifies to add a new exclude range.
-d, --description=ARG: Specifies a description for the exclude range. Use double quotes around strings with spaces.
-f, --file: Specifies a file or a list of files to read IP ranges from. This is useful for importing large numbers of exclude ranges.
Specifies the name of the exclude range. Use double quotes around strings with spaces.
Specifies to remove an exclude range.
Specifies to add any addresses that have been located, and then clear the current list of exclude ranges.
Specifies to turn off all informational messages.
Specifies the name of the BMC Atrium Discovery user. If a name is not specified, BMC Atrium Discovery uses the default, system.
User example
In the following examples, type the commands on a single line. Line breaks are provided to make the example easier to read.
The exclude ranges can be specified in the following formats:
IPv6 address: for example fe80::655d:69d7:4bfa:d768.
IPv4 range: for example 192.168.1.100-105, 192.168.1.100/24, or 192.168.1.*.
An example file called excludes1.txt:
192.168.1.100 192.168.1.110-120
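The import command below also reads a second file, excludes2.txt. Its contents were not shown, but judging from the ranges echoed in the command output, it would contain something like the following (a hypothetical reconstruction):

```shell
# Hypothetical reconstruction of excludes2.txt from the import output shown below;
# one exclude range per line, mixing IPv4 ranges, wildcards, CIDR, and IPv6.
cat > /tmp/excludes2.txt <<'EOF'
192.168.2.100-105
192.168.2.*
192.168.3.0/24
2001:500:100:1187:203:baff:fe44:91a0
EOF
```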
Import the exclude ranges from the two files using the following command:
[tideway@appliance01 ~]$ tw_excluderanges --username=system --add --name="Imported Ranges" --file excludes1.txt excludes2.txt Password: Feeding file excludes1.txt Feeding file excludes2.txt Add excluded range: 192.168.1.100,192.168.1.110-120,192.168.2.100-105, 192.168.2.*,192.168.3.0/24,2001:500:100:1187:203:baff:fe44:91a0 Imported Ranges [tideway@appliance01 ~]$
tw_imp_ciscoworks
The tw_imp_ciscoworks utility enables you to import CiscoWorks network device data from the command line.
Recommendation Use the BMC Atrium Discovery user interface to perform the functionality provided by the tw_imp_ciscoworks command line utility (see Importing network device data). If you choose to run the utility, read the documentation in this section to learn its usage and to understand the risks and potential impact on your environment.
To use the utility, type the following command at the $TIDEWAY/bin/ directory:
where:
--username name is the BMC Atrium Discovery user name to use to import data.
--file filename is the name of the CiscoWorks data file to import.
options are any of the options described in the following table and the common command line options described in Using command line utilities.
--delimiter 'delimiter': Specifies the field delimiter for input files. The default is the tab character.
--destroy-missing: Removes network devices and associated ports that do not exist in the imported data.
--has-header: Specifies that the CiscoWorks data file to import has a header row.
--no-header: Specifies that the CiscoWorks data file to import has no header row.
--xml: Specifies that the data file being imported is in XML format, not CSV.
User examples
In the following examples, enter the commands on a single line. Line breaks are provided to make the examples easier to read.
This command produces a Java stack trace. This is a known issue and can be ignored. The file that is produced can be imported by running the following command on the BMC Atrium Discovery appliance:
$TIDEWAY/bin/tw_imp_ciscoworks --delimiter=',' --username name --password password --file ~/tmp/data.csv
The file is written into the following directory: C:\PROGRAM FILES\CSCOpx\files\cmexport\ut The file can be imported onto the BMC Atrium Discovery appliance with the following command:
$TIDEWAY/bin/tw_imp_ciscoworks --xml --username name --password password --file ~/tmp/2006516154526ut.xml
tw_imp_csv
The tw_imp_csv utility enables you to search the datastore for nodes of a specified kind that have keys matching rows in the supplied csv data. Where the keys match, the node is updated, or deleted and recreated (depending on the options selected).
Recommendation Use the BMC Atrium Discovery user interface to perform the functionality provided by the tw_imp_csv command line utility (see Importing CSV data). If you choose to run the utility, read the documentation in this section to learn its usage and to understand the risks and potential impact on your environment.
Incorrect usage may result in data loss Before using the tw_imp_csv tool you should fully understand the system taxonomy and the changes that you are going to make to your data. Using the tw_imp_csv tool incorrectly can cause irreparable damage to your data. The data you submit using this tool is applied directly to the production data without any validation. Always back up your datastore before using this tool.
Do not import the following node kinds You must never import DDD nodes. You should avoid importing Host nodes and other system maintained nodes. If in doubt, contact Customer Support.
To use the utility, type the following command at the $TIDEWAY/bin/ directory:
where: files is a list of csv files to import. options are any of the options described in the following table and the common command line options described in Using command line utilities.
--create: Specifies that only new nodes will be created; existing nodes are not updated.
--datastore=name: (Do Not Use) Specifies the name of the datastore service. The default is tideway.
--delete: Specifies that matching nodes will be deleted; no other nodes will be affected.
--delimeter=ARG: Specifies a field delimiter character.
--escape-char=ARG: Specifies an escape character.
--force: Disables the validation checks that are performed against the taxonomy to check the specified node-kind and attribute/relationship keys. When you use this option, all keys, attributes and relationship links will be left as strings.
Specifies a comma-separated list of key columns to use for the data key.
Specifies the kind of node to create, update, or clear.
Specifies a line termination string.
Specifies the name of the partition to query, where name can be set to All for all partitions (the default).
Specifies a quote character.
(Do Not Use) Specifies the name of the search service.
(Do Not Use) Specifies the name of the taxonomy service.
Specifies that only existing nodes will be updated; new nodes will not be created.
Specifies to hide the name of the uploaded file.
Specifies the BMC Atrium Discovery user name to use to import data. If no name is specified, you are prompted for one.
Specifies to display informational messages while processing. This is useful for diagnosing errors.
User examples
In the following examples, enter the commands on a single line. Line breaks are provided to make the examples easier to read.
BMC Atrium Discovery processes the CSV file with the following utility at the command line:
The utility reads the file called firehouse_move.csv line by line. It uses the first line to name the columns. The first column is called 'name', which doesn't begin with a '#' character, so it is treated as an attribute name. The second column does begin with a '#' character, so it is treated as a specification for some relationships. No explicit key has been specified, so the first (and in this case, only) attribute name is taken to be the key.

Next, the utility reads the second line. The first field, egon, is in the name column, which was selected as the key earlier. So the script uses the search service to find a node of kind Host (from the --kind command line option) that has a name attribute equal to egon. It finds exactly one node matching that search. If it had not found that node, the node would have been created. If it had found multiple nodes, an error would have been reported and processing would continue with the next line; NO nodes would have been updated.

Having found the node, the utility updates it using the other fields on the row. Were there any other attribute columns in the file, the utility would have used these to update the node before looking at the relationships. The file has only one relationship column. The name is #HostInLocation:HostLocation:LocationOfHost:Location.name. The utility searches for a Location node with a name attribute equal to Firehouse, this row's value for the column. Having found the Location node, the utility creates a HostLocation relationship to it, with the Host node playing the HostInLocation role and the Location playing the LocationOfHost role. Then the utility processes the second and third data rows, updating the ray and peter nodes with the new location.
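Putting the walkthrough together: the firehouse_move.csv file itself was not shown, but based on the description it would look something like this (a hypothetical reconstruction; the column names and values are taken from the walkthrough above):

```shell
# Hypothetical reconstruction of firehouse_move.csv:
# one attribute column (name, the key) and one relationship column.
cat > /tmp/firehouse_move.csv <<'EOF'
name,#HostInLocation:HostLocation:LocationOfHost:Location.name
egon,Firehouse
ray,Firehouse
peter,Firehouse
EOF
```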
name,fqdn,#HostInLocation:HostLocation:LocationOfHost:Location.name,#ITOwnedItem:ITOwner:ITOwner:Person.name,ip_addrs,#
BMC Atrium Discovery processes the CSV file with the following utility at the command line:
$ tw_imp_csv --username=system --password=system --kind=Host --create new_host.csv
The syntax is nearly the same as in the previous example. The differences are that the filename has changed because we are processing a different file, and we are using Create Only mode. As in the previous example, the utility searches for Host nodes with a name attribute equal to winston. This time, because it is in create mode, the utility checks that there are no matching nodes. If there were, the utility would report an error and skip the row.

Next, the utility creates a new node. It populates the attributes of the node from the non-relationship fields in the data. The ip_addrs field is a list and the value starts with a '[' character, so it is converted into a list. The new node has attributes:
name = 'winston'
fqdn = 'winston.example.com'
ip_addrs = [ '10.3.4.1', '192.168.101.45' ]

Then the utility adds the relationships. The location relationship column is processed in the same way as in Example 1. The column called #ITOwnedItem:ITOwner:ITOwner:Person.name creates an ITOwnedItem:ITOwner:ITOwner relationship to a Person node with a name equal to Melnitz,J. The quotes around that field are needed because the field contains a comma. After the quote, the #HostOnSubnet:HostSubnet:Subnet.name field begins with a '[' character. This field is converted into a list. Then, for each item in the list, a HostOnSubnet:HostSubnet:Subnet relationship is created to a Subnet node with a name attribute equal to that item. So the new host has one relationship to the 10.0.0.0/8 subnet and one to the 192.168.101.0/24 subnet.
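The new_host.csv file described above was not shown in full. Assuming the header fragment shown earlier and the values from the walkthrough, it might look like the following hypothetical reconstruction (the Firehouse location value is illustrative, since the walkthrough does not state it):

```shell
# Hypothetical reconstruction of new_host.csv: attribute columns (name, fqdn,
# ip_addrs) plus three relationship columns; list values start with '[' and
# fields containing commas are quoted.
cat > /tmp/new_host.csv <<'EOF'
name,fqdn,#HostInLocation:HostLocation:LocationOfHost:Location.name,#ITOwnedItem:ITOwner:ITOwner:Person.name,ip_addrs,#HostOnSubnet:HostSubnet:Subnet.name
winston,winston.example.com,Firehouse,"Melnitz,J","[10.3.4.1, 192.168.101.45]","[10.0.0.0/8, 192.168.101.0/24]"
EOF
```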
tw_injectip
Deprecated utility The tw_injectip utility is now deprecated and may be removed in future releases. Its functionality is available in the tw_scan_control utility which has all of the same options.
The tw_injectip utility enables you to scan IP ranges using the command line.
Recommendation Use the BMC Atrium Discovery user interface to perform the functionality provided by the tw_injectip command line utility (see Controlling Discovery). If you choose to run the utility, read the documentation in this section to learn its usage and to understand the risks and potential impact on your environment.
To use the utility, type the following command at the $TIDEWAY/bin/ directory:
where: range is a single entry or a space separated list of IP address information in the following formats: IPv4 address (for example 192.168.1.100). IPv6 address (for example 2001:500:100:1187:203:baff:fe44:91a0). IPv4 range (for example 192.168.1.100-105, 192.168.1.100/24, or 192.168.1.*). If you use the --file option, a range refers to a file that contains IP addresses. options are any of the options described in the following table and the common command line options described in Using command line utilities.
Scanning the following address types is not supported: IPv6 link local addresses (prefix fe80::/64) IPv6 multicast addresses (prefix ff00::/8) IPv6 network prefix (for example fda8:7554:2721:a8b3::/64) IPv4 multicast addresses (224.0.0.0 to 239.255.255.255)
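A scan list file for the --file option (like the ~/scanlist used in the examples below) is simply a plain text file with one entry per line, using any of the supported formats above. For example:

```shell
# Hypothetical scan list file; the addresses are taken from the supported
# format examples above, one entry per line.
cat > /tmp/scanlist <<'EOF'
192.168.1.100
2001:500:100:1187:203:baff:fe44:91a0
192.168.1.100-105
EOF
```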
Removes all the recurrent ranges that are not currently being scanned from the Reasoning pipeline.
Specifies a company name to use for a scan in a multitenant deployment.
--file: Specifies a file or a list of files of IP addresses. They must be plain text files with a newline-delimited list of IP addresses.
Specifies a label for the scan.
Specifies a vault passphrase to use.
Specifies to scan the IP addresses (located in a file or listed at the command prompt) in random order.
--recur-daily: Specifies to add a daily recurrent range. This option specifies a recurrent range scan that must be modified with the --recurrence-duration and/or --recurrence-start options.
--recurrence-duration: Specifies the duration that the recurring scan lasts (in hours).
--recurrence-start: Specifies the start time for recurrent ranges (in hours after midnight).
--replace: Replaces (edits) the specified scheduled discovery run. The discovery run is specified using its ID, which can be determined using a search query like the following: search IPRange where scan_type='Scheduled' show range_id,label. See below for examples.
Specifies the scan level to use. This may be one of the following: Sweep Scan: performs a sweep scan, trying to determine what is at each endpoint in the scan range, attempting to log in to a device to determine the device type. Full Discovery: retrieves all the default information for hosts and completes full inference. Default: uses the current default level.
-s, --start: Specifies that Reasoning will start. This is equivalent to clicking the START ALL SCANS button.
-x, --stop: Specifies that Reasoning will stop. This is equivalent to clicking the STOP ALL SCANS button.
--topology-cancel: Cancels the in-progress topology run. If no topology run is in progress, a message to this effect is displayed.
Applies a label to a topology run. Used in conjunction with --topology-start to change the label from the default "Topology Run".
--topology-start: Starts a topology run.
Specifies the name of the BMC Atrium Discovery user. If a user is not specified, BMC Atrium Discovery uses the default, system.
User examples
In the following examples, the user name is system and the password is not specified on the command line. The utility prompts for the password after you enter the command. Type the commands on a single line; line breaks are provided in the examples to make them easier to read.
Note The utility is designed to handle only snapshot and daily scans; no weekly or monthly schedules are available within this tool. For this functionality, use the user interface.
Specifying a two-hour scheduled scan of IP addresses listed in a file and labeling it TEST
$TIDEWAY/bin/tw_injectip --username system --recur-daily --recurrence-duration=2 --scheduled-label=TEST --file ~/scanlist
2. Copy the ID from the range that you want to replace and use it to specify the ID of the scan that you want to replace.
3. Enter the command. This example illustrates how a system user replaces a specified scan with a daily six-hour scan of the range 192.168.0.1-10 and a label of UPDATED.
tw_injectip -u system -p system --recur-daily --recurrence-duration=6 --label=UPDATED --replace 85be3f2d9ef810d84c1089485c704129 192.168.0.1-10
tw_listusers
The tw_listusers utility enables you to display BMC Atrium Discovery user information, and lets you filter the list to specific usernames. To use the utility, type the following command at the $TIDEWAY/bin/ directory:
where: --filter=ARG specifies a filter (regular expression) for listing users. Only users whose usernames match this regular expression are listed. options are the common, inherited commands described in Using command line utilities.
User example
In the following example, type the commands on a single line. Line breaks are provided to make the example easier to read. Filtering the list of users to just those whose username contains the string "joe"
$TIDEWAY/bin/tw_listusers --filter=joe
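Conceptually, the --filter regular expression selects usernames in the same way that grep -E selects lines. The user names below are illustrative:

```shell
# Simulate filtering a user list with the regular expression 'joe':
# any username containing the pattern is kept.
printf 'joe\njoe.bloggs\nsystem\nadmin\n' | grep -E 'joe'
# prints:
# joe
# joe.bloggs
```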
tw_model_init
The tw_model_init utility enables you to initialize the model; this consists of deleting any data that you have in the datastore and reapplying the taxonomy. The utility can also install and activate a TKU after deleting the datastore. The TKU package must be stored on the appliance filesystem. The tw_model_init utility does not delete configuration data held outside the datastore. However, some configuration information is held in the datastore and is lost when running tw_model_init. This is:
Scan ranges
Scheduled scan ranges
Exclude ranges
Scan levels (the default scan level and those which can be selected when adding a new scan)
CAM (saved queries, groups and subgroups, named values, functional component definitions, and generated patterns)
Warning The tw_model_init utility deletes all data in your datastore. Before you use tw_model_init to resolve an issue, it is recommended that you contact BMC Customer Support.
Important You must stop all tideway services before running the utility. Restart the services after you run the utility.
To use the utility, type the following command at the $TIDEWAY/bin/ directory:
tw_model_init [options]
where options are any of the options described in the following table and the common command line options described in Using command line utilities.
--force: Specifies to remove any existing tideway.db datastore. Warning: Use this argument with caution! It deletes all data stored in your datastore!
--no-tku: Specifies to not automatically install and activate a TKU package. This improves the speed of the utility.
--tku-dir: Specifies to install a TKU from a location other than $TIDEWAY/data/installed/tpl.
User example
By default, the tw_model_init utility activates the TKU package that came with the version of the BMC Atrium Discovery appliance that you have installed. If you are not planning to use that TKU, you can set the --no-tku option to save time when running the utility. However, if you use this option and then forget to subsequently load a TKU, your BMC Atrium Discovery instance will collect significantly less data.
tw_passwd
The tw_passwd utility enables you to change the password of a specified user interface user. To use the utility, type the following command at the $TIDEWAY/bin/ directory:
where:
username is the name of the user whose password you want to change. options are the common, inherited commands described in Using command line utilities.
User example
In the following example, type the commands on a single line. Line breaks are provided to make the example easier to read.
Changing passwords for command line users The tw_passwd utility is for changing UI users' passwords. To change the passwords for command line users, as the root user, use the Linux command passwd. This is described in Changing the root and user passwords.
tw_pattern_management
The tw_pattern_management utility enables you to upload patterns to the appliance, activate or deactivate patterns on the appliance, and delete patterns that are no longer required. It does not enable you to manage entire Technology Knowledge Updates (TKUs), because they contain a number of file types. You should always manage TKUs through the user interface (UI).
Recommendation Use the BMC Atrium Discovery user interface to perform the functionality provided by the tw_pattern_management command line utility (see Pattern management). If you choose to run the utility, read the documentation in this section to learn its usage and to understand the risks and potential impact on your environment. Be aware that informational messages for packages may be displayed in the UI but are not available using this tool at the command line.
To use the utility, type the following command at the $TIDEWAY/bin/ directory:
where:
package/module file is the name of the package. This argument is also used by the install-package and replace-module options.
package description is a description of the package. This argument is also used by the install-package and replace-module options.
options are any of the options described in the following table and the common command line options described in Using command line utilities.
--activate-all: Specifies to activate all installed packages.
-a, --activate-package=PKG_NAME: Specifies to activate the pattern package and enable associated patterns.
-d, --deactivate-package=PKG_NAME: Specifies to deactivate the specified pattern package.
-e, --erase-packages: Specifies to remove all pattern packages.
-f, --force: Specifies to deactivate patterns before they are removed.
-i, --install-package=PKG_FILE: Specifies to upload the specified pattern package.
-l, --list-packages: Specifies to list uploaded pattern packages.
-P, --partition=NAME: Specifies the name of the datastore partition.
Specifies to recompile all active packages.
Specifies to remove the specified pattern package.
--replace-module: Specifies to replace the specified pattern package with another pattern module.
-u, --username=NAME: Specifies the name of the BMC Atrium Discovery user. If no name is specified, BMC Atrium Discovery uses the default, system.
--verbose: Specifies to display informational messages.
Recommendation To understand TPL patterns and how they might function in your environment, reference Configipedia, BMC's community website that facilitates knowledge sharing around TPL patterns. Configipedia also provides visibility of the Technology Knowledge Update release schedule and contents. A link to Configipedia is provided for the Pattern Module lists in the Pattern Management section of the Discovery Page. There is also a link to Configipedia provided from the Software Instance nodes and Business Application Instance nodes that are maintained by TPL software patterns.
User examples
In the following examples, type the commands on a single line. Line breaks are provided to make the example easier to read.
The changed pattern module is contained in a package with the suffix editn applied to the original name. The number n is incremented by one after each subsequent edit. The package and pattern module that you edit are noted in the description column of the Pattern Module Source window. The replacement package is made active if the one you edited was active.
2. Use the utility to upload the zipped package (providing your password to execute the command):
[tideway@app01 tpl]$ tw_pattern_management -i "my package" mypackage.zip Password: [tideway@app01 tpl]$
tw_query
The tw_query utility enables you to extract data using a query. The information can be output in CSV or XML format using one of the available options.
where: query is the search query you wish to perform. options are any of the options described in the following table and the common command line options described in Using command line utilities.
--csv: Specifies that the output of the query will be saved in .CSV format.
--delimeter=CHAR: Specifies the delimiter character used in .CSV files.
--file=FILE: Specifies the name of the .CSV output file.
--no-headings: Do not output column headings.
--partition=NAME: Specifies the name of the partition to query.
--time: Reports the time taken to perform the query.
--xml: Specifies that the output of the query will be saved in .XML format.
tw_reasoningstatus
The tw_reasoningstatus utility enables you to view the status of the Reasoning service. Typically this utility is used by Customer Support as a troubleshooting tool for investigating possible problems.
Note Reasoning runs the same status check automatically every 15 minutes and outputs the results in the tw_svc_reasoning.log file.
To use the utility, type the following command at the $TIDEWAY/bin/ directory:
tw_reasoningstatus [options]
where options are any of the options described in the following table and the common command line options described in Using command line utilities.
--waiting, -w: Lists information for all endpoints which are on hold, waiting for information from the discovery of a different endpoint.
--waiting-full: Expands the information provided by --waiting to include information about all endpoints being held waiting for discovery. This option is ignored if --waiting is not specified.
-u, --username=NAME: Specifies the name of the BMC Atrium Discovery user. If no name is specified, BMC Atrium Discovery uses the default, system.
User example
In the following example, you view the reasoning status as the user joe.
If you do not provide a password, you are prompted for one. After providing a password, a status is displayed that includes information about engine status, pool state, queue length, and so forth. The output is saved in the tw_svc_reasoning.log file.
tw_remove_darkspace
The tw_remove_darkspace utility enables you to remove any previously-discovered Dark Space DiscoveryAccess nodes from the datastore.
Important This utility should only be used when you initially change the Dark Space Suppression scheme from Keep All to Keep Most Recent or Remove All. Otherwise, you should rely on the selected suppression scheme and avoid using this utility entirely. Running tw_remove_darkspace multiple times serves no useful purpose and can take a long time; the duration depends entirely on the amount of dark space in your environment, and runs of more than a day are not unknown.
Background
Originally, BMC Atrium Discovery kept all nodes, including dark space nodes. This Dark Space Suppression scheme was called Keep All and is now deprecated. The tw_remove_darkspace utility removes these extra nodes immediately, although they would eventually be removed during DDD aging.
When an endpoint is discovered and there is no response, it is considered NoResponse. The system traverses back along the complete DiscoveryAccess chain; if it finds any DiscoveryAccess with an associated device, the endpoint is still considered NoResponse. It may be, for example, that the computer is temporarily down. Where no associated device is found, the endpoint is considered dark space, and the subsequent behavior depends on the dark space removal setting. This may be either:
Keep Most Recent: the system removes all but the current DiscoveryAccess.
Remove All: the system removes all DiscoveryAccesses.
An effect of DDD aging is that an endpoint which at one point in time had an associated device can become a chain of no-response DiscoveryAccesses. So tw_remove_darkspace may find new dark space that it can remove; however, this would have been removed automatically when the endpoint is next scanned. If an endpoint was associated with an inferred device (Host, MFPart, NetworkDevice, Printer) at any time, it remains connected via an AccessFailure until the inferred device is removed, even if that device has changed endpoints.
tw_remove_darkspace [options]
where options are any of the options described in the following table and the common command line options described in Using command line utilities.
--quiet: Specifies that informational messages are not printed.
-u, --username=NAME: Specifies the name of the BMC Atrium Discovery user. If no name is specified, BMC Atrium Discovery uses the default, system.
User example
In the following example, the utility is run with no options to remove dark space DA nodes from the datastore.
$TIDEWAY/bin/tw_remove_darkspace
Password: system
------------------------------------------------------------------------------
WARNING: Dark space removal may take considerable time to complete.
WARNING: Please be patient.
------------------------------------------------------------------------------
Stopping Reasoning processing events
Checking for dark space
Found 113 IPs to check
Checking IPs 0 to 112
113 of which are dark space
Completed removal of dark space DiscoveryAccess nodes
Check for unconnected DiscoveryRuns
No unconnected DiscoveryRuns
Restarting Reasoning processing events
------------------------------------------------------------------------------
Dark space removal complete.
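As an illustration only (not the product's actual code), the classification and removal rules described in the Background section can be sketched with a simplified model of a DiscoveryAccess chain:

```python
def classify_endpoint(chain):
    """Classify an endpoint from its DiscoveryAccess chain.

    Simplified model: `chain` holds one boolean per DiscoveryAccess,
    True if that access had an associated inferred device.
    """
    if any(chain):
        return "NoResponse"   # a device was seen: host may just be down
    return "dark space"       # no access ever found a device

def accesses_to_keep(chain, scheme):
    """How many DiscoveryAccess nodes remain under each removal scheme."""
    if classify_endpoint(chain) != "dark space":
        return len(chain)     # NoResponse endpoints are not dark space
    if scheme == "Keep Most Recent":
        return 1              # all but the current DiscoveryAccess removed
    if scheme == "Remove All":
        return 0              # all DiscoveryAccesses removed
    return len(chain)         # deprecated Keep All: nothing removed
```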
tw_restore
The tw_restore utility enables you to restore a BMC Atrium Discovery backup from a local directory on the appliance, an SSH enabled remote system, or a Windows share. The utility can be used on an appliance where the UI cannot be started. To use the utility, type the following command:
tw_restore [options]
where: options are any of the options described in the following table and the common command line options described in Using command line utilities.
Common password options are described in this table to make it clear whether the password referred to is for the local or the remote system.
Restoring backed-up system data while a baseline run is in progress results in critical baseline alerts. Baseline runs periodically on the BMC Atrium Discovery system. If baseline is running when a restore is started, critical baseline alerts may be reported; the restore is unaffected by these alerts. The automatic baseline check schedule can be modified using the tw_cron_update utility.
--backup-dir: Specifies a remote backup directory relative to the user's home directory on SSH servers, and the top-level share directory in Windows shares. This option is required for SSH and Windows shares, unless the alternative --unc-path is specified for Windows shares.
Options are also available to restore from a local backup in the $TIDEWAY/var/backup directory, from a backup on a Windows share, or from a backup on a remote SSH server, and to specify an email address to receive notification when the restore completes (this depends on email being correctly configured on the appliance).
--fix-interrupted: This option may be used where BMC Atrium Discovery has been interrupted by an unscheduled reboot or power failure. When you run tw_restore, you are prompted to use this option if it is required.
--force: Force backup restore when services are down. This may omit some auditing of the action.
-h, --help: Display help on standard options and exit.
Further options specify the hostname or IP address of the remote host, the port on the remote SSH server, and whether to preserve the identity and the HTTPS configuration of the appliance rather than take on the identity from the restored backup.
-u, --username=NAME: The user to run the utility as. This has to be a valid BMC Atrium Discovery UI user such as the system user, not a username used to access the command line via SSH.
-p, --password=PASSWD: Specifies the password of the BMC Atrium Discovery user. If no password is specified, you are prompted for one. This is only relevant for utilities that have the --username option. For more information about password policies, see Managing security policies.
--passwordfile=FILE: Specifies a file from which the password is to be read. This is only relevant for utilities that have the --username option. Note that this file is not encrypted, though you can set the file permissions to owner-only (chmod 600 passwordfile.txt) to restrict access to the file. For more information about password policies, see Managing security policies.
--remote-password=PASSWORD: The password for the remote user on the remote system. For SSH servers or Windows shares.
--remote-user=USER: The username for the remote user on the remote system. For SSH servers or Windows shares.
-S, --share=SHARE: The share name of the remote Windows share, for example sharename. This option is required for Windows shares, unless the alternative --unc-path is specified.
--stop-services: Stop the services to perform the restore. Required when restoring from a backup.
--unc-path=PATH: The UNC path for the Windows share, for example \\sharename\foldername. You can specify this as an alternative to --share and --backup-dir.
--verify-only: Verify the backup without performing a restore operation.
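The --passwordfile note recommends restricting the file to owner-only permissions. A minimal sketch of creating such a file safely; the file name and content are illustrative, and the mode check assumes a POSIX system:

```python
import os
import stat
import tempfile

def write_password_file(path, password):
    """Write a password file readable only by its owner (mode 600)."""
    # Create with restrictive permissions from the start, rather than
    # chmod-ing after the sensitive content has already been written.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(password)

path = os.path.join(tempfile.mkdtemp(), "passwordfile.txt")
write_password_file(path, "s3cret")
mode = stat.S_IMODE(os.stat(path).st_mode)
# On POSIX systems, mode is now 0o600 (owner read/write only).
```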
User examples
In the following examples, the local username is system and the password is not specified on the command line. The remote SSH username is tideway and the password is password. The remote Windows share is called remshare, the corresponding username is remdom/tideway and the password is password. The utility prompts for the local password after you enter the command. Type the commands on a single line; line breaks are provided in the examples to make them easier to read.
tw_scan_control
The tw_scan_control utility enables you to scan IP ranges using the command line.
Recommendation Use the BMC Atrium Discovery user interface to perform the functionality provided by the tw_scan_control command line utility (see Scanning IP addresses or ranges). If you choose to run the utility, read the documentation in this section to learn its usage and to understand the risks and potential impact on your environment.
To use the utility, type the following command at the $TIDEWAY/bin/ directory:
where:
range is a single entry or a space-separated list of IP address information in the following formats:
IPv4 address (for example, 192.168.1.100)
IPv6 address (for example, 2001:500:100:1187:203:baff:fe44:91a0)
IPv4 range (for example, 192.168.1.100-105, 192.168.1.100/24, or 192.168.1.*)
If you use the --file option, range refers to a file that contains IP addresses.
options are any of the options described in the following table and the common command line options described in Using command line utilities.
Scanning the following address types is not supported:
IPv6 link local addresses (prefix fe80::/64)
IPv6 multicast addresses (prefix ff00::/8)
IPv6 network prefixes (for example, fda8:7554:2721:a8b3::/64)
IPv4 multicast addresses (224.0.0.0 to 239.255.255.255)
Options are available to:
Remove all the recurrent ranges that are not currently being scanned from the Reasoning pipeline.
Specify a company name to use for a scan in a multitenant deployment.
Specify a vault passphrase to use.
--file: Specifies a file, or a list of files, as arguments. They must be plain text files with a newline-delimited list of IP addresses.
--label: Specifies a label for the scan.
-r, --random: Specifies to scan the IP addresses (located in a file or listed at the command prompt) in random order.
--recur-daily: Specifies to add a daily recurrent range. This option specifies a recurrent range scan that must be modified with the --recurrence-duration and/or --recurrence-start options.
--recurrence-duration: Specifies the duration that the recurring scan lasts (in hours).
--recurrence-start: Specifies the start time for recurrent ranges (in hours after midnight).
--replace: Replace (edit) the specified scheduled discovery run. The discovery run is specified using its ID, which can be determined using a search query like the following: search IPRange where scan_type='Scheduled' show range_id, label. See below for examples.
The scan level to use can also be specified. This may be one of the following:
Sweep Scan: Does a sweep scan, trying to determine what is at each endpoint in the scan range. It attempts to log in to a device to determine the device type.
Full Discovery: Retrieves all the default information for hosts and completes full inference. This is the default if the option is not specified.
-s, --start: Specifies that Reasoning will start. This is equivalent to clicking START ALL SCANS.
-x, --stop: Specifies that Reasoning will stop. This is equivalent to clicking STOP ALL SCANS.
-u, --username=NAME: Specifies the name of the BMC Atrium Discovery user. If a user is not specified, BMC Atrium Discovery uses the default, system.
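The interaction of --recurrence-start (hours after midnight) and --recurrence-duration (hours) can be illustrated with a small sketch; this is not product code, only a model of the stated semantics:

```python
from datetime import datetime, timedelta

def scan_window(day, start_hours, duration_hours):
    """Return (start, end) datetimes for a daily recurrent scan.

    start_hours: hours after midnight (--recurrence-start)
    duration_hours: how long the scan runs (--recurrence-duration)
    """
    midnight = day.replace(hour=0, minute=0, second=0, microsecond=0)
    start = midnight + timedelta(hours=start_hours)
    return start, start + timedelta(hours=duration_hours)

# A scan starting 22 hours after midnight and lasting 6 hours
# runs into the next day.
start, end = scan_window(datetime(2024, 5, 1, 12, 34), 22, 6)
```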
User examples
In the following examples, the user name is system and the password is not specified on the command line. The utility prompts for the password after you enter the command. Type the commands on a single line; line breaks are provided in the examples to make them easier to read.
Note The utility is designed to handle only snapshot and daily scans; no weekly or monthly schedules are available within this tool. For this functionality, use the user interface.
Specifying a two-hour scheduled scan of IP addresses listed in a file, labeled TEST
$TIDEWAY/bin/tw_scan_control --username system --recur-daily --recurrence-duration=2 --label=TEST --file ~/scanlist
2. Click Run Query.
3. Copy the ID of the range that you want to replace.
4. Enter the command, using the copied ID to specify the scan that you want to replace. This example illustrates how the system user replaces a specified scan with a daily six-hour scan of the range 192.168.0.1-10 and a label of UPDATED.
tw_scan_control -u system -p system --recur-daily --recurrence-duration=6 --label=UPDATED --replace 85be3f2d9ef810d84c1089485c704129 192.168.0.1-10
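As a sketch, the range format and the unsupported address types listed earlier can be checked with Python's ipaddress module. The function names are illustrative, not part of the product, and only the dash form of IPv4 ranges is handled here:

```python
import ipaddress

def is_scannable(addr):
    """Reject the address types the scanner does not support:
    IPv4/IPv6 multicast and IPv6 link-local (fe80::/64)."""
    ip = ipaddress.ip_address(addr)
    if ip.is_multicast:
        return False
    if ip.version == 6 and ip.is_link_local:
        return False
    return True

def expand_dash_range(spec):
    """Expand a range like '192.168.1.100-105' into addresses."""
    base, _, end = spec.partition("-")
    prefix, last = base.rsplit(".", 1)
    return [f"{prefix}.{i}" for i in range(int(last), int(end) + 1)]
```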
tw_sign_winproxy_config
The tw_sign_winproxy_config utility enables you to add a checksum to a Windows proxy configuration file without uploading the file. When you run this utility, it modifies the specified file but also saves a copy named <original filename>.orig before signing the file.
Recommendation Use the BMC Atrium Discovery user interface to perform the functionality provided by the tw_sign_winproxy_config command line utility (see The Windows proxy configuration file). If you choose to run the utility, read the documentation in this section to learn its usage and to understand the risks and potential impact on your environment.
To use the utility, type the following command at the $TIDEWAY/bin/ directory:
tw_sign_winproxy_config [options]
where options are any of the options described in the following table and the common command line options described in Using command line utilities.
--config-file=PATHNAME: Specifies the name of the configuration file to sign.
-u, --username=NAME: Specifies the name of the BMC Atrium Discovery user. If a name is not specified, BMC Atrium Discovery uses the default, system.
User example
In the following example, type the commands on a single line. Line breaks are provided to make the example easier to read.
You can then copy the signed file to multiple appliances using ftp or similar.
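A sketch of the save-then-sign behavior described above. The checksum format shown (an MD5 digest appended as a comment line) is an assumption for illustration only; the real utility defines its own signing format. The file name and content are hypothetical:

```python
import hashlib
import os
import shutil
import tempfile

def sign_config(path):
    """Save `<path>.orig`, then append a checksum line to `path`.

    Illustrative only: the checksum line format is an assumption,
    not the product's actual signature.
    """
    shutil.copy2(path, path + ".orig")   # keep the unsigned copy
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    with open(path, "a") as f:
        f.write(f"\n# checksum: {digest}\n")
    return digest

cfg = os.path.join(tempfile.mkdtemp(), "winproxy.conf")
with open(cfg, "w") as f:
    f.write("loglevel=info\n")
digest = sign_config(cfg)
# winproxy.conf now ends with the checksum line, and
# winproxy.conf.orig holds the original, unsigned content.
```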
tw_tax_deprecated
The tw_tax_deprecated utility enables you to check patterns for references to deprecated attributes. The utility is primarily intended for the upgrade to BMC Atrium Discovery 9.0, but it can also be used outside the upgrade. The username and password to supply for this utility are those of a UI user, such as the system user. To use the utility, type the following command:
tw_tax_deprecated [options]
where options are any of the common arguments described in Using command line utilities, or the option described in the following table:
--show-tku: Specifies to check for deprecated attributes in all installed TKU pattern modules.
User example
The following example checks patterns for references to deprecated attributes as the system user and includes all installed TKU pattern modules. No password is entered so the utility prompts for a password which is not echoed when entered. In this example, deprecated attributes are found.
Searching for deprecated MFPart node attribute 'part_type' ... [tideway@fredapp bin]$
tw_tax_export
The tw_tax_export utility enables you to export taxonomy files so that they can be stored in another location. This utility can be used as an effective troubleshooting tool to check if all your taxonomy overlays worked as you expected, especially if you left old files in place or ordered them incorrectly. Taxonomy definitions are configured using .xml files that are stored in the following directories:
/usr/tideway/data/installed/taxonomy/
/usr/tideway/data/custom/taxonomy/
The directories are parsed in the order shown, and the files contained in these directories are parsed in alphabetical order (numbers before letters). This order is important because any taxonomy definitions that are subsequently added override any previously loaded definitions, so custom files take precedence over installed files. The standard base taxonomy file is contained in /usr/tideway/data/installed/taxonomy/00taxonomy.xml. To use the utility, type the following command at the $TIDEWAY/bin/ directory:
tw_tax_export [options]
where options are any of the options described in the following table and the common command line options described in Using command line utilities.
-D, --datastore=NAME: (Do Not Use) Specifies the name of the datastore service
-P, --partition=NAME: Specifies the name of the datastore partition
-r, --relkinds=REGEXP: Specifies the RelationshipKinds to export
-l, --rolekinds=REGEXP: Specifies the RoleKinds to export
--sort: Specifies to sort names
-t, --taxonomy=LOC: (Do Not Use) Specifies the name of the taxonomy service
-u, --username=NAME: Specifies the name of the BMC Atrium Discovery user. If no user is specified, BMC Atrium Discovery uses the default, system.
User example
In the following example, type the commands on a single line. Line breaks are provided to make the example easier to read.
tw_tax_import
The tw_tax_import utility enables you to import custom taxonomy files into the current taxonomy.
Recommendation After you run a taxonomy import using this utility, you must restart the tideway service. Failure to do so results in a malfunctioning user interface.
Taxonomy definitions are configured using .xml files that are stored in the following directories:
/usr/tideway/data/installed/taxonomy/
/usr/tideway/data/custom/taxonomy/
The directories are parsed in the order shown, and the files contained in these directories are parsed in alphabetical order (numbers before letters). This order is important because any taxonomy definitions that are subsequently added override any previously loaded definitions, so custom files take precedence over installed files. The standard base taxonomy file is contained in /usr/tideway/data/installed/taxonomy/00taxonomy.xml. To use the utility, type the following command at the $TIDEWAY/bin/ directory:
where:
files is a list of files to be imported. This argument is optional if you use the --clear option, but cannot be used with the --handle-broken-extensions option.
options are any of the options described in the following table and the common command line options described in Using command line utilities.
--clear: Specifies to clear any existing taxonomy before importing
--handle-broken-extensions: Specifies to ignore any broken extensions. If a taxonomy extension is invalid, the importer ignores it and restarts, attempting to load all other extensions. It repeats this until it has loaded the base taxonomy and has loaded, or attempted to load, all extensions. In this way you always finish with a valid taxonomy. Where the taxonomy importer fails to load an extension, it logs a message to stdout. This option cannot be used with the files argument. It is primarily intended for upgrading, but it can be used from the command line.
-D, --datastore=NAME: (Do Not Use) Specifies the name of the datastore service
--merge: Specifies to merge import data with any existing taxonomy
-P, --partition=NAME: Specifies the name of the datastore partition
--replace: Specifies to replace data in any existing taxonomy with imported data
--strict: Specifies that there is no backwards compatibility for the previous format
-t, --taxonomy=ARG: (Do Not Use) Specifies the name of the taxonomy service
-u, --username=NAME: Specifies the name of the BMC Atrium Discovery user. If no user is specified, BMC Atrium Discovery uses the default, system
--verbose: Specifies to display informational messages
--verify: Specifies to verify XML data only
User example
In the following example, type the commands on a single line. Line breaks are provided to make the example easier to read.
The standard base taxonomy file named /usr/tideway/data/installed/taxonomy/00taxonomy.xml is supplemented with the imported data, and you can view the updated installed .xml file on the Administration > Taxonomy page.
2. Restart the tideway service.
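The taxonomy load order (directories in the order shown, files alphabetically within each directory, later definitions overriding earlier ones) behaves like an ordered dictionary merge. A sketch, with hypothetical file names and definitions:

```python
def effective_taxonomy(dirs):
    """Merge taxonomy files in load order.

    `dirs` maps directory name -> {filename: definitions}; directories
    are taken in the order given, files alphabetically within each,
    and definitions loaded later override earlier ones.
    """
    merged = {}
    for _directory, files in dirs.items():
        for name in sorted(files):   # '00taxonomy.xml' sorts first
            merged.update(files[name])
    return merged

loaded = effective_taxonomy({
    "installed": {"00taxonomy.xml": {"Host": {"label": "Host"}}},
    "custom":    {"10custom.xml":   {"Host": {"label": "Server"}}},
})
# The custom overlay, loaded later, overrides the base definition.
```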
tw_terminate_winproxy
The tw_terminate_winproxy utility enables you to send a request to the Windows proxy to terminate operation. To use the utility, type the following command at the $TIDEWAY/bin/ directory:
tw_terminate_winproxy [options]
where options are any of the options described in the following table and the common command line options described in Using command line utilities.
Important You must have the discovery/kslave/write permission to use the utility.
(Required) Specifies the name of the Active Directory, Workgroup, or Credential Windows proxy that you are logging in to.
Specifies the name of the BMC Atrium Discovery user. If no name is specified, BMC Atrium Discovery uses the default, system.
User example
When the utility successfully sends a terminate request to a Windows proxy, an audit event is logged. The audit event, named windows_proxy_process_terminate, contains the name of the Windows proxy that the terminate request is sent to.
Important If a Windows proxy is not running as either the Local System account or as a member of the Administrators group, tw_terminate_winproxy fails to stop the Windows proxy, and the following error is logged in the Windows proxy log file:
ERROR: Failed to terminate slave service: [(5, 'OpenSCManager', 'Access is denied.')]
Workaround: Allow the user that the Windows proxy is running as to stop the Windows proxy service. This is documented on the Microsoft Support site.
For more information about Windows proxy configuration, see Additional Windows proxy configuration.
tw_tripwire_rebaseline
The tw_tripwire_rebaseline utility enables you to rebaseline a Tripwire database. Tripwire is a third-party software tool that monitors a specific set of configuration, system, and source files on an appliance. When you use the utility to rebaseline the Tripwire database, you accept that all files that are being monitored are correct. To use the utility, type the following command at the $TIDEWAY/bin/ directory:
tw_tripwire_rebaseline
You might use the utility to initialize the Tripwire database, commission and configure Tripwire, or run a reconfiguration procedure after the rebaselining process returns errors. For more information about rebaselining Tripwire and baseline configuration, see Baseline configuration.
User example
In the following example, a user updates the Tripwire database after an error. This procedure should be performed as the tideway user.
tw_upduser
The tw_upduser utility enables you to update properties of a specific user, such as the current state, permissions, and passwords. To use the utility, type the following command at the $TIDEWAY/bin/ directory:
where:
username is the name of the user to update.
options are any of the options described in the following table and the common command line options described in Using command line utilities.
--active: Sets the user to an ACTIVE state
--blocked: Sets the user to a BLOCKED state
--disabled: Sets the user to a DISABLED state
-f, --fullname=ARG: Sets the full name of the user
-G, --groups=GROUP,...: Sets the user's group membership
--locked: Sets the user to a LOCKED state
--password-must-change: Specifies that the password must be changed
--password-ok: Specifies that the password is okay as currently stated
--password-should-change: Specifies that the password should be changed
-u, --username=NAME: Specifies the name of the BMC Atrium Discovery user. If no name is specified, BMC Atrium Discovery uses the default, system.
User example
In the following example, the tw_upduser utility sets the user Joe to a locked state to protect that account from unauthorized changes.
To unlock the user, run the utility again with the --active option.
tw_check_reports_model
The tw_check_reports_model utility enables you to check custom reports for the following:
Attributes marked as deprecated in the taxonomy.
Key expressions and traversals between Subnet and NetworkInterface. Subnet is now related to IPAddress, so these no longer exist.
Usage of ip_addr, netmask, and broadcast on NetworkInterface. These attributes have been moved to IPAddress.
The utility is primarily intended for the upgrade to BMC Atrium Discovery 9.0, but it can also be used outside the upgrade. The username and password to supply for this utility are those of a UI user, such as the system user. To use the utility, type the following command:
tw_check_reports_model [options]
where options are any of the common arguments described in Using command line utilities.
User example
The following example checks custom reports as the system user. No password is entered so the utility prompts for a password which is not echoed when entered.
Appliance shutdown and restart
The appliance is shut down for the backup or restore, and restarted when the backup or restore is complete.
Backing up or restoring the system while a baseline run is in progress results in critical baseline alerts. Baseline runs periodically on the BMC Atrium Discovery system. If baseline is running when a backup or restore is started, critical baseline alerts may be reported; the backup or restore is unaffected by these alerts. The automatic baseline check schedule can be modified using the tw_cron_update utility.
The following items are backed up where present:
Appliance Configuration
CMDB Sync Configuration
Consolidation Configuration
Cron Jobs
Custom Device Definitions
Custom JDBC Drivers
Custom Startup Scripts
Custom OS Data
Custom Reports
Custom Taxonomy
Customer Data
Datastore Data
Datastore Transaction Logs
Export Adapters
Export Exporters
Export Mappings
HTTPS Configuration
Installed Device Definitions
LDAP Configuration
Vault (optional)
Destination system time must not be earlier than source
You must ensure that the current system time on the destination appliance is no earlier than that of the appliance on which the backup was created. If the modification times on the files contained in the backup are later than the system time when they are restored, the backup will hang. To recover at this point, you must kill the backup process, correct the time, run tw_restore --fix-interrupted, and then repeat the restore using the tw_restore command line utility.
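The underlying condition is that a restore can hang when backup file modification times are ahead of the destination clock. A sketch of a pre-restore sanity check; the helper is illustrative, not part of the product:

```python
import os
import tempfile
import time

def backup_is_from_future(paths, now=None):
    """Return True if any backup file's modification time is later
    than the destination system clock: the condition that makes a
    restore hang."""
    now = time.time() if now is None else now
    return any(os.path.getmtime(p) > now for p in paths)

# Demonstrate with a freshly created temporary file.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
mtime = os.path.getmtime(tmp.name)
# With a destination clock one hour behind the file's mtime, the
# check flags the problem; one hour ahead, it does not.
```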
Backing up and CMDB synchronization
The backup contains the CMDB synchronization configuration. When a host has changed so significantly that its key has also changed, problems can be caused if a backup is restored before the changed host is rediscovered. In this case, on the next CMDB synchronization, duplicate hosts are created in the CMDB representing the changed host, and the CIs representing the original hosts are never deleted. To ensure that no duplicate hosts are created, you can delete and then recreate the BMC.ADDM dataset.
The backup contains the LDAP configuration. If the destination appliance cannot access the LDAP server, you must ensure that a local (non-LDAP) user belonging to the system and public groups is activated and successfully tested on the source appliance before making a backup.
If you choose to exclude the credential vault when backing up an appliance on which CMDB synchronization has been configured, the CMDB Sync page on the restored appliance displays the "This appliance has not been set up for synchronization with the Atrium CMDB" message. Once the Setup form is complete, filter and blackout window settings are restored.
The appliance backup feature replaces the appliance snapshot that was available in previous releases. On upgraded appliances only, if snapshots are still held in the filesystem, a banner and a Remove Snapshots button are provided so you can remove the snapshots and release disk space.
2. Enter the details for the backup destination. The fields that can be completed are displayed or hidden depending on the backup type selection. Required fields are indicated with a red asterisk.
Backup Type: Select the destination type from the drop-down list. This can be one of the following:
Local: The backup is written to the $TIDEWAY/var/backup directory. No other configuration is required. Only one local backup can be stored.
SSH: The backup is written to a remote server over SSH.
Windows Share: The backup is written to a Windows share. You cannot create a backup on a Windows share from a FIPS-enabled appliance; the mount operation fails and an error message is written to syslog.
Notes: A free text area in which you can write notes about the backup.
Directory: The directory into which to write the backup on the remote SSH server. Backups are written into a subdirectory called YYYY-MM-DD_hhmmss_addm_backup inside the specified directory. (SSH only)
Path: The share name and directory name into which to write the backup on a Windows share. Backups are written into a subdirectory called YYYY-MM-DD_hhmmss_addm_backup inside the specified directory. Path syntax is \\sharename\directoryname, where sharename is the name of the Windows share, and directoryname is the name of the directory into which to write the backup. (Windows share only)
The remaining fields specify:
The hostname or IP address of the remote server onto which to write the backup (SSH).
The port to which to connect (SSH).
The username to use to connect to the remote server (SSH and Windows share). To specify the domain for Windows shares, use the following syntax: --remote-user=user@domain
The corresponding password (SSH and Windows share).
Select Verify backup to verify (md5) that the files in the backup archive are the same as those on the appliance. Shown only after successfully testing the connection for SSH and Windows share backups.
Select Exclude vault to exclude the credential vault from the backup. Appliance UI users are always backed up and restored, regardless of this setting; so, for example, after a restore the password in effect for the system user will be the one from the source appliance. Shown only after successfully testing the connection for SSH and Windows share backups.
Select Email when complete and enter an email address if you want an email to be sent automatically when the backup task is completed. Shown only after successfully testing the connection for SSH and Windows share backups.
Click Test Connection to ensure that the remote host can be contacted and that the credentials are valid. Shown only for SSH and Windows share backups.
3. Click Shutdown & Backup to start the backup operation. You are requested for confirmation. 4. Click No to return to the Appliance Backup page. Click Yes to continue and backup the appliance. All services are shut down before the backup occurs and a progress screen is displayed.
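Backups are written into a subdirectory named YYYY-MM-DD_hhmmss_addm_backup. A sketch of constructing and recognizing that name; this is an illustration of the stated convention, not product code:

```python
import re
from datetime import datetime

def backup_subdir(when):
    """Build the backup subdirectory name: YYYY-MM-DD_hhmmss_addm_backup."""
    return when.strftime("%Y-%m-%d_%H%M%S") + "_addm_backup"

# A pattern for recognizing such subdirectories, e.g. when scripting
# cleanup of old backups on the remote server.
BACKUP_DIR_RE = re.compile(r"^\d{4}-\d{2}-\d{2}_\d{6}_addm_backup$")

name = backup_subdir(datetime(2024, 5, 1, 13, 45, 30))
# name == "2024-05-01_134530_addm_backup"
```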
1. Make sure that the time setting on the destination appliance is not earlier than that on the source appliance. Failing to do so results in a failed restore and a time-consuming process to repair it. See this warning for more information.
2. From the Appliance section of the Administration tab, click Backup & Restore. The Appliance Backup page is displayed.
3. Click the Restore Backup tab. The Restore Backup tab has a panel in which you can choose the source of the backup, and provides details of the size and contents of the existing local backup.
4. Enter the details for the backup source. The fields that can be completed are displayed or hidden depending on the backup type selection. Required fields are indicated with a red asterisk.
Backup Type: Select the source type from the drop-down list. This can be one of the following:
Local: The backup is read from the $TIDEWAY/var/backup directory. No other configuration is required.
SSH: The backup is read from a remote server over SSH.
Windows Share: The backup is read from a Windows share.
Directory: The directory from which to read the backup on the remote server. Backups are written into a subdirectory called YYYY-MM-DD_hhmmss_addm_backup inside the specified directory; ensure you specify the subdirectory name too. (SSH only)
Path: The share name and directory name from which to read the backup on the remote server. Backups are written into a subdirectory called YYYY-MM-DD_hhmmss_addm_backup inside the specified directory. Path syntax is \\sharename\directoryname, where sharename is the name of the Windows share, and directoryname is the name of the directory containing the backup. (Windows share only)
The remaining fields specify:
The hostname of the remote server from which to read the backup (SSH).
The port to which to connect (SSH).
The username to use to connect to the remote server (SSH and Windows share). To specify the domain for Windows shares, use the following syntax: --remote-user=user@domain
The corresponding password (SSH and Windows share).
Select the check box to preserve the appliance's identity rather than take on the identity from the restored backup. This consists of the appliance identity, HTTPS configuration, and consolidation configuration. Clear the check box to use the identity from the backup.
Select the check box and enter an email address if you want an email to be sent automatically when restoring the backup is complete.
When you enter valid connection information, the Test Connection button is enabled. Click this to test the connection to the remote backup server. When the test is successful, and a backup is present, the Remote Backup Details pane displays information on the remote backup.
5. Click Shutdown & Restore to start the restore operation. You are asked to confirm.
6. Click No to return to the Appliance Backup page, or click Yes to continue and restore the appliance. All services are shut down before the restore occurs, and a progress screen is displayed.
Device package error on restore
Network device definitions are automatically installed with a TKU. The source and destination appliances must have the same version of the network device definitions. If you see the following error when restoring, update the package as described in the error message.
An error was reported while installing the device package. Please correct the error and install the device package manually by running (as root): tw_device_inport --force -U --oldpackage <rpm file>
Back up your data before changing disk configurations
The disk management utility performs disk operations such as partitioning and moving data. All data on the new disks is deleted. When data is moved from the system disk, it is copied to the new disk and then deleted from the system disk. Always back up your appliance before using the utility.
LVM is not supported If you try to add an LVM disk, the operation fails and the following message is displayed: Unexpected system configuration: LVM-based configuration detected
The following disk types are defined:
- System Disk: The disk on which BMC Atrium Discovery is installed.
- Additional Disk: Any additional disk which contains BMC Atrium Discovery data.
- New Disk: Any additional disk with no mounted partitions. There must be 4000 MB free space on the disk after the copy operation or the operation is not permitted. If you are only creating swap space, the disk must be larger than the suggested swap size.
- Non ADDM Disk: Any additional disk with mounted partitions. These partitions must be unmounted before the disk can be configured using the disk management utility.
The Disk Configuration page has a panel at the top which shows any pending operations, and a panel which shows the current configuration of any disks in the appliance and the suggested configuration of any new disks. A comprehensive key to the data types and their representation is also provided.
On any new disk, a Change Usage drop-down list is displayed, populated with the data partitions on the system disk. Where there is insufficient space on the new disk to copy the data from a particular partition, that partition's label is disabled. The label is also disabled if the partition has already been selected to have its data moved to another disk. You can also choose to add a swap partition to the disk by selecting the check box. The check box is only displayed if you have insufficient swap space configured. The disk configuration utility uses the following calculation to determine the best swap size:
- Where the amount of memory is less than 16 GB, swap size is set to double the memory size.
- Where the amount of memory is between 16 and 32 GB, swap size is set to 32 GB.
- Where the amount of memory exceeds 32 GB, swap size is set to equal the memory.
The add swap check box is not displayed if the new disk does not have enough space to store the additional swap needed to reach the optimal swap size. For example, if the system has 8 GB swap by default, and the optimal swap size is 24 GB, then 16 GB of additional swap is required. If the new disk has only 12 GB available, the check box is not displayed. When you apply the changes you have chosen, or accept the changes suggested by the system, the services are shut down and a progress indicator is displayed.
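The swap-size calculation above can be sketched as a small shell function. This is illustrative only; the function name is hypothetical and not part of the product, and sizes are in GB.

```shell
# Illustrative sketch of the suggested-swap calculation described above.
# "suggested_swap" is a hypothetical name; sizes are whole GB.
suggested_swap() {
    mem="$1"
    if [ "$mem" -lt 16 ]; then
        echo $((mem * 2))        # under 16 GB: double the memory size
    elif [ "$mem" -le 32 ]; then
        echo 32                  # 16 to 32 GB: fixed 32 GB
    else
        echo "$mem"              # over 32 GB: equal to the memory size
    fi
}
suggested_swap 8     # prints 16
```

For example, an appliance with 12 GB of memory would get a suggested swap size of 24 GB, matching the worked example in the text.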
4. If you do not want to accept the suggested configuration, select Do not move any data here from Change Usage.
5. Select the data to move to the new disk from Change Usage. If the disk is of the type "Non ADDM Disk", you must unmount all partitions on the disk before proceeding.
6. Select the Add swap check box if you want to add additional swap space.
7. Review the pending changes list. If it is what you want, click Apply changes.
8. You are asked to confirm. The BMC Atrium Discovery services are stopped, and all of the contents of the new disks are deleted before data is moved or swap partitions are created. Click OK to proceed.
9. A status page is displayed showing a progress indicator and details of the individual operations. Once the operation has completed, the services are started and the login page is displayed.
If the only operation you perform is to add swap to a new disk, the services are not stopped and restarted, and you do not need to log in.
When data is moved from an additional disk that contains only a data partition, at the end of processing the disk is identified as a "New Disk" and is displayed with a suggested configuration. The existing data is not deleted, though the partition is not mounted. When data is moved from an additional disk that contains a data partition and a swap partition, at the end of processing the disk is identified as a "Non ADDM Disk". The existing data is not deleted, though the partition is not mounted. To reuse this disk, disable the swap partition as shown here.
Configuring discovery
This section describes how BMC Atrium Discovery operates and how to configure and run the product. The Discovery Engine is designed to locate systems in the network and obtain relevant information from them as quickly as possible, using a variety of different tools and techniques to communicate with devices. The Reasoning Engine works on the raw data obtained by the Discovery Engine to infer the maximum amount of information about hosts and programs and populate the datastore intelligently. The Reasoning Engine uses patterns that identify running software based on the network ports, processes found, packages installed, protocols used, and so on. It also intelligently searches the discovered data to resolve relationships between items of software.
- Configuring discovery settings
- Masking sensitive data
- Managing the discovery platform scripts
- Adding privileged execution to commands
- Setting up ports for OS fingerprinting
- Using scanner files
- Improving discovery
- Integration points
- Technology Knowledge Update (TKU)
- Consolidation
- Extending discovery
- Credentials
- Pattern management
Port settings
This section contains settings related to the ports that discovery uses.
- TCP ports to use for initial scan: Enter the TCP ports that will be scanned on a first scan. Use this setting to prevent scanning of any ports that you want to avoid. The default is to use ports 21, 22, 23, 80, 135, 443, 513, 902, and 3940. See the notes on Order of Operations and TCP and UDP ports to use for initial scan below. Older versions included port 514, which can now be removed from upgraded systems.
- UDP ports to use for initial scan: Enter the UDP ports that will be scanned on a first scan. Use this setting to prevent scanning of any ports that you want to avoid. The default is port 161. See the notes on Order of Operations and TCP and UDP ports to use for initial scan below.
- SSH Ports: The default is port 22. Enter any custom ports to scan in a comma-separated list in the Change To column.
- RLogin Ports: The default is port 513. Enter any custom ports to scan in a comma-separated list in the Change To column.
- Windows Ports: The default is port 135. Enter any custom ports to scan in a comma-separated list in the Change To column.
- Telnet Ports: The default is port 23. Enter any custom ports to scan in a comma-separated list in the Change To column.
- FTP Ports: The default is port 21. Enter any custom ports to scan in a comma-separated list in the Change To column.
- SNMP Ports: The default is port 161. Enter any custom ports to scan in a comma-separated list in the Change To column.
- HTTP Ports: The default is port 80. Enter any custom ports to scan in a comma-separated list in the Change To column.
- HTTPS Ports: The default is port 443. Enter any custom ports to scan in a comma-separated list in the Change To column.
- VMware Authentication Daemon Ports: The only supported port is 902. Enter any custom ports to scan in a comma-separated list in the Change To column.
- Mainframe Host Server Ports: The default is port 3940. Enter any custom ports to scan in a comma-separated list in the Change To column.
- Valid Port States: When nmap runs port scans, it returns a result of open, closed, or filtered. Using the check boxes you can choose which states are valid to investigate further:
  - open|filtered: discover a device that should be accessible but is not. It might be open, or filtered.
  - filtered: the port is open but you still cannot connect to it; it must be filtered.
  A port for which a result of open is returned is always considered valid.
A further check box controls whether port 135 is checked before Windows discovery. Port 135 is usually open on Windows computers. Selecting Yes for this option means that nmap checks whether port 135 is open before a Windows proxy is used to discover an IP device. This is the default. Select No in firewalled environments where a ping may be filtered by a firewall, but the Windows proxy may be able to connect to the target (for example, because it is part of the same Workgroup).
- Always try public SNMP community: Discovery attempts to use the public SNMP community to query the host's SNMP service if no credential is available for that host. In this case, only device classification is possible.
- Use Host Server to Identify Mainframes: Discovery attempts to connect to the host server port to determine whether the discovery target is a mainframe computer.
- Use Telnet Banner to Identify OS: Discovery makes a telnet connection to a host and uses the telnet "welcome" banner to determine host and operating system information.
- Use HTTP(S) HEAD Request to Identify OS: Discovery attempts to connect to port 80 or 443 of the host and perform an HTTP or HTTPS HEAD request to determine the host and operating system.
- Use FTP Banner to Identify OS: Discovery starts an FTP session with the host and uses the FTP "welcome" banner to determine host and operating system information.
- Use vSphere API to Identify OS: Discovery makes a TCP connection to examine the header and ensure that the VMware authentication daemon really is on port 902 (or the specified port). When this is confirmed, discovery makes a web services request. This requires an open VMware Authentication Daemon port and HTTPS port, and a valid vSphere credential.
- Use IP Fingerprinting to detect OS: This option controls whether discovery uses IP fingerprinting to determine the operating system if the previous methods have been unsuccessful. The network ports scanned during this phase of discovery can be configured; see Setting up ports for OS fingerprinting. The process of IP fingerprinting can cause instability in some legacy systems, and may trigger intrusion detection systems. This option is enabled by default.
A final option controls whether open ports are used to identify the operating system.
Session settings
This section contains settings related to the way in which discovery uses sessions to log in and run commands.
- Session Line Delay: A delay of 10 ms is introduced between each line sent by Discovery. This avoids problems where remote shells are unable to cope with rapid command sequences. Select one of the following from the list: 1, 2, 5, 10 (the default), 15, 20, 25, 50, or 100 milliseconds.
- Session Login Timeout: The length of time for the discovery script to wait for a login prompt. If this is exceeded, the attempt is abandoned. Select Use default timeout, or 10, 20, 30, 60, 90, 120, or 180 seconds.
- Maximum search window size: The amount of data to examine when detecting the shell command prompt. The default value is 512 bytes. Changing the default value may cause significant degradation of appliance performance. Do not change this value unless directed by Customer Support.
- Authorised Prompt: Certain systems require an authorization step after logging in. At the command line you are prompted to enter session details; the required response is usually a user name and some other information. In the Authorised Prompt field, enter the text of the prompt.
- Authorised Response: Where an Authorised Prompt has been entered, you must enter the expected response (that you would enter at the command line) in the Authorised Response field.
Scanning settings
This section contains settings related to any scanning that discovery undertakes.
- Authorized Scanning Levels: Select the scanning levels that you want to permit. Choose one or more levels from the following:
  - Sweep Scan: Performs a sweep scan, trying to determine what is at each endpoint in the scan range. Discovery attempts to log in to a device to determine the device type. In particular, it executes the getDeviceInfo discovery script on each platform. See Managing the discovery platform scripts for more information about platform scripts.
  - Full Discovery: Retrieves all the default information for hosts, and completes full inference.
  Scanning levels are described in Scanning and Data Processing Levels.
- Default scan level: Select the default scan level for this appliance from the list. The choices available are those selected in the Authorized Scanning Levels row above.
- Ping hosts before scanning: If this option is disabled, all hosts are discovered, but discovery of empty IP ranges is slower. The default is to allow discovery to ping the host first. If you enable this option, Discovery pings hosts before it starts a scan. Discovery scans empty IP ranges more quickly, although hosts might be missed if firewalls are configured to reject pings. In this situation you should specify IP ranges behind firewalls that you do not want to ping; see the Exclude ranges from ping option below. Note that this option only affects scanning of networks other than the one on which the appliance is physically located. If you are using ICMP filtering, set this option to No. See the note on Order of Operations below.
- Use TCP ACK "ping" before scanning: Ping addresses with TCP ACK packets to determine which hosts are actually up. Use this option when scanning networks that do not permit ping packets. You can specify multiple ports in a comma-separated list. This option is only available if the Ping hosts before scanning option is set to Yes. See the note on Order of Operations below.
- Use TCP SYN "ping" before scanning: Ping addresses with TCP SYN packets to determine which hosts are actually up. Use this option when scanning networks that do not permit ping packets. You can specify multiple ports in a comma-separated list. This option is only available if the Ping hosts before scanning option is set to Yes. Note that TCP SYN pings are likely to trigger IDS and firewall blocks, as they are often regarded as ping floods. See also the note on Order of Operations below.
- Exclude ranges from ping: Enter a list of IP addresses or IP ranges that you do not want to ping. For example, you may want to scan IPs which are behind a firewall that blocks ICMP packets. If BMC Atrium Discovery pings an IP address and receives no response, it makes no further attempt to scan that IP address; excluding a range from pinging enables you to scan IPs behind such firewalls. Note that this option only affects scanning of networks other than the one on which the appliance is physically located.
- Scan retries: The number of retries to be attempted on each host. The system only retries machines on which the operating system cannot be determined. The Scan retries and Default OS options work together in sequence to help locate host machines.
- Scan timeout: The timeout (in minutes) that applies when BMC Atrium Discovery uses nmap to determine open ports or performs OS fingerprinting. It is not used to limit the time to scan devices. See also the credential timeout for the sessions.
- Minimum time before end of window to avoid starting new scheduled discovery operations: A discovery run may take some time to complete. If it is started too close to the end of a discovery window, it does not complete before the end of the window. To prevent this, you can specify a period in which discovery runs will not be started. The default is 30 minutes, meaning that no discovery runs will be started within 30 minutes of the end of a discovery window. Select the period from the following values in the list: 5, 10, 15, 20, 25, 30, 35, 40, and 45 minutes.
- Allow scans even if no window defined: Enables you to permit scanning outside permitted discovery windows. The default is No. If you change this option you must restart the tideway service.
- Discover neighbor information when scanning network devices: Causes discovery to retrieve MAC and port information from neighboring scanned network devices. The default is Yes. Only select No if you do not want to collect any edge connectivity information.
This section contains settings related to SQL integrations.
- Timeout to establish a connection: The timeout for establishing a connection to the database. Select the timeout period in seconds from the following values in the list: 5, 10, 30 (the default), 60, 90, 120, and 180.
- Maximum connections held open: Specifies the number of connections to databases that can be held open after they would otherwise be closed. Higher values can reduce connection delays but will consume extra resources. The default is Unlimited. If you change this option you must restart the tideway service. Select the number of connections from the following values in the list: 0, 10, 20, 30, 40, 50, and Unlimited.
- A further setting specifies the maximum time to hold an unused database connection open. Higher values can reduce connection delays but will consume extra resources. The default is 2 minutes. If you change this option you must restart the tideway service. Select the timeout period in minutes from the following values in the list: 2, 4, 6, 8, and 10.
Mainframe discovery
This section contains settings related to mainframe discovery.
- Timeout to establish a connection: The timeout for establishing a connection to the database. Select the timeout period in seconds from the following values in the list: 30, 60, 90, 120 (the default), 180, and 300.
- Minimum Windows proxy version: The minimum version of the Windows proxy that the appliance will use for Windows discovery. You can enter a new minimum Windows proxy version in this field. Ensure that you do not include any whitespace in the version number. The version number of a Windows proxy corresponds to the version number of BMC Atrium Discovery that the Windows proxy was released with (for example, 8.1). Note: You must stop scanning to change the minimum Windows proxy version or release.
- Enable running of arbitrary commands: This option controls whether arbitrary commands can be run. Disabling this option prevents many patterns from retrieving information needed to build SIs and BAIs.
- Enable Automatic Grouping: Automatic grouping is the automatic grouping of hosts into logical groups called Automatic Groups. This is primarily intended to help in baselining. By default it is enabled. Select this option to enable Automatic Grouping. Disabling Automatic Grouping may improve scanning performance.
- Scanner File Polling Interval: Scanner files are used to simulate discovery of inaccessible hosts. Discovery polls for new scanner files periodically. Select the polling interval from the following values in the list: Every minute, Every hour, and Every day. If you change this option you must restart the tideway service. When set to Every day, the polling time is midnight UTC; daylight saving time is not considered.
- Discover desktop hosts: Use this option to permit or prevent discovery of desktop hosts. The default is No, that is, do not discover desktop hosts. When the option is set to No, if a Windows or Mac OS host is determined to be a desktop, the host is skipped. A Windows host is determined to be a desktop or server depending on the OS version or edition string; if this cannot be determined, the host is assumed to be a server. This can only be determined after logging in to the target system. A Mac OS host is always considered a desktop. When a host is skipped, the device_type attribute on the Device Info node is set to Desktop, no inferred host is created, and the corresponding DiscoveryAccess result is shown as "Skipped (Desktop host discovery has been disabled)".
Order of operations
Where selected, these groups of operations are carried out in the following order:
1. Ping hosts before scanning
2. Use TCP ACK "ping" before scanning
3. Use TCP SYN "ping" before scanning
4. TCP ports to use for initial scan
5. UDP ports to use for initial scan
6. Use IP Fingerprinting to Identify OS
If a host has previously been discovered using telnet, that host is still discovered using telnet, because telnet is listed as the last login method for that host.
2. To view or edit filters for files, click the Files tab.
3. To edit an existing filter, click Edit.
4. To delete an existing filter, click Delete.
5. To add a new filter, click Add .... A new field is added into which you can enter a regular expression.
6. To create the filter, click Apply.
To reorder sensitive data filters, click the up or down arrow in the ordering column for the filter you want to move. You can also move a filter to the top or bottom of the list using the top or bottom arrow buttons.
2. This regular expression adds "\S+" to identify a sequence of one or more non-whitespace characters, making a regular expression of "--password \S+". Brackets are then added to define the portion that requires masking, making "--password (\S+)".
3. After rediscovery, the new process node has the password portion replaced with an md5 hash.
./pfg_serv -h -Hj -g lob -l full --user gussie --password 4343ab718997a9570ab20c0c1b5e18ad --dominion emea
4. For more resilience against extra white space, the single space in the regular expression should be replaced with \s+, which matches any whitespace characters, making "--password\s+(\S+)"; this is the form that most sensitive data filters take.
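The effect of such a filter can be sketched in shell. This is a hedged illustration only: the example command line matches the one above, but the use of sed and md5sum here is a stand-in for the product's internal masking, not its implementation.

```shell
# Hedged sketch of sensitive-data masking: apply the filter "--password\s+(\S+)"
# and replace the captured group with its md5 hash. sed/md5sum are stand-ins
# for the product's internal masking, shown only to illustrate the behaviour.
cmdline='./pfg_serv -h --user gussie --password s3cret --dominion emea'
secret=$(printf '%s' "$cmdline" | sed -E 's/.*--password[[:space:]]+([^[:space:]]+).*/\1/')
hash=$(printf '%s' "$secret" | md5sum | cut -d' ' -f1)
masked=$(printf '%s' "$cmdline" | sed -E "s/(--password[[:space:]]+)[^[:space:]]+/\1${hash}/")
echo "$masked"
```

The output keeps the command line intact except for the password value, which appears as a 32-character hexadecimal hash, as in the rediscovered process node shown above.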
The discovery system runs various scripts through the login session to obtain information from the host systems. From the BMC Atrium Discovery user interface, you can:
- View the scripts
- Amend scripts (edit script parameters, or enable or disable scripts)
However, the BMC Atrium Discovery user interface does not enable you to add new scripts. Where this page refers to "default scripts", it refers to the discovery scripts that were shipped with the release. For example, in a version 8.3.02 appliance, the default scripts are those that shipped with 8.3.02. If you upgrade the appliance to version 9.0, the default scripts are those that shipped with version 9.0. If you have just upgraded your installation and have not updated the discovery scripts, you can use the Discovery Platforms page to view the script differences and choose to upgrade to the current version by clicking Reset to Default.
2. Where the discovery scripts have been changed from the default, a red change bar is shown to the left of the platform name.
3. To manage the changed scripts in the UNIX, Linux and Related Discovery section, you can use the following controls:
- Reset All: click this to reset all scripts to the default.
- View Script Differences: this link opens a page showing a list of changed discovery methods, sorted by platform.
Click the discovery method header to reveal the detailed view of the differences between the default method and the current method.
4. Click the operating system link whose commands you want to edit. You cannot edit the SNMP methods; you can only view them.
To edit the command path that is set when a shell is opened on the target, click the corresponding Edit link. Only the command path displayed is set; it is not added to any existing path in the user environment. To display a view-only version of all the scripts, click the corresponding link. The discovery method used is displayed; * indicates methods that will successfully create a host. To display a view-only version of a single script, click the link for the corresponding script. Some scripts might be marked as Privileges. This means that a normal login does not have sufficient access rights to get all of the information required; in this case, root or another user with higher access rights is used.
In the Actions column, the following options are available:
- Edit: To edit the script, click this link. You can edit the discovery script and notes, and enable or disable the script.
- Disable or Enable: To change the enabled status of a script, click this link.
- Download: To download a copy of the script, click this link. You can then edit it in a text editor and save it locally.
- Upload: To upload a script to replace the existing one, click this link.
Where you edit or replace an existing script, a red change bar shows that the script has been changed and an additional Reset To Default link is displayed. Click this link to reset the script to the default.
Maximum Line Length
The maximum line length of an interactive script is OS and shell dependent, and exceeding the limit causes commands to fail. Ensuring that all lines in discovery scripts contain fewer than 128 characters prevents this from occurring.
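A quick way to check a script against this limit is a one-line awk filter. This is a hypothetical helper, not part of the product; "myscript.sh" is an example file created here purely for the demonstration.

```shell
# Hypothetical helper: list lines in a discovery script that reach 128 characters.
# "myscript.sh" is an example file created here just for the demonstration.
printf 'short line\n%0130d\n' 0 > myscript.sh   # second line is 130 characters long
awk 'length($0) >= 128 { printf "line %d: %d chars\n", NR, length($0) }' myscript.sh
# prints: line 2: 130 chars
```

Running this over an edited script before uploading it flags any line that would exceed the interactive shell limit.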
Additional discovery
In addition to the standard discovery commands described above, additional discovery methods can be called by patterns. See manual pattern execution and the DiscoveryAccess page. Methods such as the following are used:
- getFileInfo: Used where a pattern uses the discovery.fileGet function to return a file. The result is stored in a DiscoveredFile node.
- getFileMetadata: Used where a pattern uses the discovery.fileInfo function to return the metadata on a file. The result is stored in a DiscoveredFile node with no content attribute.
- runCommand: Used where a pattern uses the discovery.runCommand function to run a command on a host during its execution. The result is stored in a DiscoveredCommandResult node.
- getDirectoryListing: Used where a pattern uses the discovery.listDirectory function to return a list of the directory contents. Returns a list of DiscoveredDirectoryEntry nodes.
In order to be able to capture output using tw_capture, /tmp on the target host needs to be writable by the discovering account.
You can record an exit status from commands used in a pipeline, for example:
tw_capture foo dpkg -l '*' | egrep '^ii '
It can also be used in backticks, for example when running a command and assigning the result to a shell variable:
dns_domain=`tw_capture hostname -d | sed -e 's/(none)//'`
The following screen shows two example tw_capture commands, one which will succeed, and one which will fail. These have been added to the start of the getDeviceInfo method:
2. If it is a link to a single DiscoveryAccess, then that DiscoveryAccess page is shown. If there are multiple DiscoveryAccesses, then a list page is displayed. 3. Click the method link for the method which you edited to add the return status.
5. Use the find function in your browser to search for the PRIV section (search for PRIV_XXX to find the beginning of the PRIV section). 6. In the PRIV function (in this example PRIV_LSOF), add the command required (such as sudo, pbrun, or dzdo) to run the commands as a privileged user. For example:
...
PRIV_LSOF() {
    sudo "$@"
}
...
7. Click Apply to apply the changes. The screen is refreshed and the initialise method is highlighted to show that it has changed from the default.
8. Click Show Differences to show the differences between the default script and the current script.
The $@ represents the command that BMC Atrium Discovery issues. Adding sudo (or a similar privileged command) tells it how to escalate the privilege for that command. Now when a script needs to call lsof, it calls the PRIV_LSOF() function with the full command it needs to run, which then runs lsof with the correct privilege.
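The forwarding behaviour of "$@" can be seen with a stand-in wrapper. In this sketch, echo replaces sudo so the example runs without root; the function name mirrors the example above.

```shell
# Demonstration of how a PRIV_ wrapper forwards its arguments via "$@".
# echo stands in for sudo here so the sketch runs without privileges.
PRIV_LSOF() {
    echo sudo "$@"
}
PRIV_LSOF lsof -i TCP    # prints: sudo lsof -i TCP
```

The wrapper receives the full command line as its arguments and simply prefixes the escalation command, which is exactly what the real sudo-based wrapper does.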
If the path is specified, it affects all discovery commands that use that function, and the privileged command may not be in the same place on all discovery targets. If the path is not specified, the privileged command is found using the path from the user profile and the BMC Atrium Discovery path environment variable. You can check the path environment variable, which is displayed at the top of the Platforms page.
You must add a privileged execution method to whichever commands you require in order to gain the fullest possible discovery. The available commands, their impact on discovery, and the platforms on which they are available are described on the Privileged commands page.
Privileged commands
This page describes the available privileged commands, their impact on discovery, and the platforms on which they are available. By default, each command is left unprivileged (for example, PRIV_LSOF() { "$@"; }). The user or administrator must modify the script to insert the relevant command to allow discovery to run the privileged commands. Examples are provided in adding privileged execution to commands.
AIX
- PRIV_CAT: The cat command requires superuser privileges to display the contents of files not readable by the current user, for example, configuration files owned by the root user.
- PRIV_LSOF: The lsof command requires superuser privileges to display information on processes other than those running as the current user.
- PRIV_RUNCMD: This function supports running privileged commands from patterns.
- PRIV_LSLPP: The lslpp command requires superuser privileges to list all installed packages.
- PRIV_TEST: This function supports privilege testing of attributes of files.
- PRIV_LS: This function supports privilege listing of files and directories.
FreeBSD
- PRIV_CAT: The cat command requires superuser privileges to display the contents of files not readable by the current user, for example, configuration files owned by the root user.
- PRIV_LSOF: The lsof command requires superuser privileges to display information on processes other than those running as the current user.
- PRIV_RUNCMD: This function supports running privileged commands from patterns.
- PRIV_DMIDECODE: The dmidecode command requires superuser privileges to read data from the system BIOS.
- PRIV_TEST: This function supports privilege testing of attributes of files.
- PRIV_LS: This function supports privilege listing of files and directories.
HPUX
- PRIV_CAT: The cat command requires superuser privileges to display the contents of files not readable by the current user, for example, configuration files owned by the root user.
- PRIV_LSOF: The lsof command requires superuser privileges to display information on processes other than those running as the current user.
- PRIV_RUNCMD: This function supports running privileged commands from patterns.
- PRIV_LANADMIN: The lanadmin command requires superuser privileges to display any interface speed and negotiation settings.
- PRIV_SWLIST: The swlist command requires superuser privileges to list all installed packages.
- PRIV_TEST: This function supports privilege testing of attributes of files.
- PRIV_LS: This function supports privilege listing of files and directories.
- PRIV_FCMSUTIL: This command requires superuser privileges to list attributes of HBA devices.
IRIX
- PRIV_CAT: The cat command requires superuser privileges to display the contents of files not readable by the current user, for example, configuration files owned by the root user.
- PRIV_LSOF: The lsof command requires superuser privileges to display information on processes other than those running as the current user.
- PRIV_RUNCMD: This function supports running privileged commands from patterns.
- PRIV_TEST: This function supports privilege testing of attributes of files.
- PRIV_LS: This function supports privilege listing of files and directories.
Linux
- PRIV_CAT: The cat command requires superuser privileges to display the contents of files not readable by the current user, for example, configuration files owned by the root user.
- PRIV_LSOF: The lsof command requires superuser privileges to display information on processes other than those running as the current user.
- PRIV_RUNCMD: This function supports running privileged commands from patterns.
- PRIV_TEST: This function supports privilege testing of attributes of files.
- PRIV_LS: This function supports privilege listing of files and directories.
- PRIV_DMIDECODE: The dmidecode command requires superuser privileges to read data from the system BIOS.
- PRIV_HWINFO: The hwinfo command requires superuser privileges to read data from the system BIOS.
- PRIV_MIITOOL: The mii-tool command requires superuser privileges to display any interface speed and negotiation settings.
- PRIV_ETHTOOL: The ethtool command requires superuser privileges to display any interface speed and negotiation settings.
- PRIV_NETSTAT: The netstat command requires superuser privileges to display process identifiers (PIDs) for ports opened by processes not running as the current user.
- PRIV_LPUTIL: The lputil command requires superuser privileges to display any HBA information.
- PRIV_HBACMD: The hbacmd command requires superuser privileges to display any HBA information.
- PRIV_XE: The xe command requires superuser privileges to report CPU information on Xen platforms.
- PRIV_ESXCFG: The esxcfg-info command requires superuser privileges to report hardware information on a VMware ESX controller.
Mac OS X
PRIV_CAT: The cat command requires superuser privileges to display the contents of files not readable by the current user, for example, configuration files owned by the root user.
PRIV_LSOF: The lsof command requires superuser privileges to display information on processes other than those running as the current user.
PRIV_RUNCMD: This function supports running privileged commands from patterns.
PRIV_TEST: This function supports privileged testing of file attributes.
PRIV_LS: This function supports privileged listing of files and directories.
NetBSD
PRIV_CAT: The cat command requires superuser privileges to display the contents of files not readable by the current user, for example, configuration files owned by the root user.
PRIV_LSOF: The lsof command requires superuser privileges to display information on processes other than those running as the current user.
PRIV_RUNCMD: This function supports running privileged commands from patterns.
PRIV_TEST: This function supports privileged testing of file attributes.
PRIV_LS: This function supports privileged listing of files and directories.
PRIV_DMIDECODE: The dmidecode command requires superuser privileges to read data from the system BIOS.
OpenBSD
PRIV_CAT: The cat command requires superuser privileges to display the contents of files not readable by the current user, for example, configuration files owned by the root user.
PRIV_LSOF: The lsof command requires superuser privileges to display information on processes other than those running as the current user.
PRIV_RUNCMD: This function supports running privileged commands from patterns.
PRIV_TEST: This function supports privileged testing of file attributes.
PRIV_LS: This function supports privileged listing of files and directories.
PRIV_DMIDECODE: The dmidecode command requires superuser privileges to read data from the system BIOS.
OpenVMS
Not applicable to this platform. The Normal privilege category is sufficient to run the commands in the discovery script.
POWER HMC
Not applicable to this platform.
Solaris
Solaris versions 9 and later no longer use sudo as the preferred method of privilege escalation; instead, they use a more sophisticated Role Based Access Control (RBAC) privilege mechanism. One way of granting a user escalated privileges is to assign a role, which can be either system or user defined. The preferred way to provide escalated privileges for BMC Atrium Discovery is to grant the proc_owner role to the discovery user. This enables the discovery user to obtain information on processes that belong to other users. An alternative method is to use elevated profiles with the pfexec command. This prompts for a password, which is handled by the discovery scripts in the same way as sudo.
PRIV_CAT: The cat command requires superuser privileges to display the contents of files not readable by the current user, for example, configuration files owned by the root user.
PRIV_LSOF: The lsof command requires superuser privileges to display information on processes other than those running as the current user.
PRIV_RUNCMD: This function supports running privileged commands from patterns.
PRIV_TEST: This function supports privileged testing of file attributes.
PRIV_LS: This function supports privileged listing of files and directories.
PRIV_EMLXADM: The emlxadm command requires superuser privileges to display any HBA information.
PRIV_FCINFO: The fcinfo command requires superuser privileges to display any HBA information.
PRIV_DMIDECODE: The dmidecode command requires superuser privileges to read data from the system BIOS, on Solaris x86 platforms only.
PRIV_IFCONFIG: The ifconfig command requires superuser privileges to display the MAC address of each interface.
PRIV_NDD: The ndd command requires superuser privileges to display any interface speed and negotiation settings.
PRIV_PS: The /usr/ucb/ps command requires superuser privileges to display full command line information (without this, command lines are limited to 80 characters). This affects Solaris 10 and later, and Solaris 8 and 9 with certain patches.
PRIV_LPUTIL: The lputil command requires superuser privileges to display any HBA information.
PRIV_HBACMD: The hbacmd command requires superuser privileges to display any HBA information.
PRIV_PFILES: The pfiles command requires superuser privileges to display open port information for processes not running as the current user.
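As an illustrative sketch only (the account name tideway is an assumption), proc_owner can be granted to the discovery user as a default privilege on Solaris 10 and later with usermod, run as root:

```
# usermod -K defaultpriv=basic,proc_owner tideway
```

After this change, new logins for that user can read information on processes owned by other users without needing sudo or pfexec. Your site may prefer to deliver the same privilege through a role or rights profile instead.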
Tru64
PRIV_CAT: The cat command requires superuser privileges to display the contents of files not readable by the current user, for example, configuration files owned by the root user.
PRIV_LSOF: The lsof command requires superuser privileges to display information on processes other than those running as the current user.
PRIV_RUNCMD: This function supports running privileged commands from patterns.
PRIV_SETLD: The setld command requires superuser privileges to display information on installed packages.
PRIV_TEST: This function supports privileged testing of file attributes.
PRIV_LS: This function supports privileged listing of files and directories.
UnixWare
PRIV_CAT: The cat command requires superuser privileges to display the contents of files not readable by the current user, for example, configuration files owned by the root user.
PRIV_LSOF: The lsof command requires superuser privileges to display information on processes other than those running as the current user.
PRIV_RUNCMD: This function supports running privileged commands from patterns.
PRIV_TEST: This function supports privileged testing of file attributes.
PRIV_LS: This function supports privileged listing of files and directories.
VMware ESX
This refers to ssh discovery rather than discovery via the vSphere API.
PRIV_LSOF: The lsof command requires superuser privileges to display information on processes other than those running as the current user.
PRIV_RUNCMD: This function supports running privileged commands from patterns.
PRIV_CAT: The cat command requires superuser privileges to display the contents of files not readable by the current user, for example, configuration files owned by the root user.
PRIV_TEST: This function supports privileged testing of file attributes.
PRIV_LS: This function supports privileged listing of files and directories.
PRIV_DMIDECODE: The dmidecode command requires superuser privileges to read data from the system BIOS.
PRIV_HWINFO: The hwinfo command requires superuser privileges to read data from the system BIOS.
PRIV_MIITOOL: The mii-tool command requires superuser privileges to display any interface speed and negotiation settings.
PRIV_ETHTOOL: The ethtool command requires superuser privileges to display any interface speed and negotiation settings.
PRIV_NETSTAT: The netstat command requires superuser privileges to display process identifiers (PIDs) for ports opened by processes not running as the current user.
PRIV_LPUTIL: The lputil command requires superuser privileges to display any HBA information.
PRIV_HBACMD: The hbacmd command requires superuser privileges to display any HBA information.
PRIV_XE: The Xen xe command requires superuser privileges.
PRIV_ESXCFG: The esxcfg-info command requires superuser privileges.
VMware ESXi
This refers to ssh discovery rather than discovery via the vSphere API.
PRIV_LSOF: The lsof command requires superuser privileges to display information on processes other than those running as the current user.
PRIV_RUNCMD: This function supports running privileged commands from patterns.
PRIV_CAT: The cat command requires superuser privileges to display the contents of files not readable by the current user, for example, configuration files owned by the root user.
PRIV_TEST: This function supports privileged testing of file attributes.
PRIV_LS: This function supports privileged listing of files and directories.
6. Click OK to save the new or changed details. 7. Click Apply to remove the lock and allow other users to edit the settings. If you click Cancel at this point, any changes you have made are discarded.
Scanning Windows targets
For Windows targets, you cannot download discovery commands. To discover Windows targets you must use the Standalone Windows scanning tool.
3. Click Download host script (named linux.sh in this example) and save the file to the local computer, making it executable. 4. Copy the file to the remote host. In the following example, the scp utility is used to copy the file between the local host teabag and the remote host teaspoon:
tideway@teabag:~$ scp linux.sh tideway@teaspoon:linux.sh
tideway@teaspoon's password:
linux.sh                                     100%   19KB  18.9KB/s   00:00
tideway@teabag:~$
5. Log on to the remote host and run the script, piping the output into a text file:
tideway@teabag:~$ ssh teaspoon
tideway@teaspoon's password:
Linux teaspoon 2.6.18-6-686 #1 SMP Sat Dec 27 09:31:05 UTC 2008 i686
...
tideway@teaspoon:~$ ./linux.sh > teaspoon.txt
tideway@teaspoon:~$ more teaspoon.txt
FORMAT Linux
--- START device_info
hostname: teaspoon
fqdn: teaspoon
dns_domain: tideway.com
domain:
os: Debian Linux lenny/sid
--- END device_info
--- START host_info
kernel: 2.6.25-2-amd64
num_logical_processors: 2
cores_per_processor: 2
...
tideway@teaspoon:~$
2. Copy the output file to the appliance. In this example, the SCP utility is used:
dtweed@teabag:~$ scp -p teaspoon.txt upload@appliance:~/linux.teaspoon.txt
upload@appliance's password:
teaspoon.txt                                 100%  265KB 262.7KB/s   00:00
dtweed@teabag:~$
When you load a scanner file onto the appliance, its name must be unique; otherwise, it might get overwritten by another scanner file being uploaded at the same time. For this reason, it is helpful to use a naming scheme that enables you to correlate scanner files and created hosts. Do not use a name starting with . or ending with .ignore. If you do, that file will be ignored. File names are used only for internal purposes.
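For example, a simple naming scheme might embed the scanned host name and the scan date in the file name; the scheme below is only a suggestion:

```shell
# Build a unique, correlatable scanner-file name from host name and date.
host=teaspoon
scanfile="scan-${host}-$(date +%Y%m%d).txt"
echo "$scanfile"
```

A file named this way (for example scan-teaspoon-20130501.txt) is easy to match to the host it came from, and avoids the leading . and trailing .ignore forms that the appliance ignores.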
Using scanner files with consolidating appliances
If you are using scanner files with consolidating appliances, upload the scanner files to the consolidating appliance, rather than to the scanning appliance. Doing so correctly identifies the hosts as "Read from scanner file", rather than as "Retrieved by scanning appliance".
After a scanner file is loaded, you can look at the results of the discovery and view the host, as shown in the following illustration.
On the DiscoveryAccess page, the Data Source field of the Discovery Details section displays "Read from scanner file". This is shown in the following illustration.
Note
Hosts discovered by scanner files never age. BMC Atrium Discovery does not handle overlapping IP ranges. Data provided in scanner files must not use IP addresses that overlap with other scanner files, or with addresses scanned directly by BMC Atrium Discovery. Using the Windows scanning tool might increase the likelihood of ranges overlapping; however, the tool enables you to select a specific IP address, so that you can avoid one you have already scanned. For more information, see Standalone Windows scanning tool.
Warning
The standalone Windows scanner can be used to manually collect a limited set of information from a Windows host. The scanner is designed to be used solely on isolated systems or networks. It is NOT equivalent to a Windows proxy, because it will only collect basic host, process and package information that can be obtained by WMI queries, not additional data such as NIC registry information (for NIC discovery).
3. Extract the .zip file to a directory on a writable USB flash drive or similar removable media.
prevent these occurrences, it might be necessary to explicitly specify the IP address you want to use. To do so, run the tool with the --target IPADDR option to set the target system. It is also possible to scan other systems using this option, as long as your user account has the required privileges (typically Administrator privileges). Setting the IP address to a specific target is especially useful for scanning an isolated subnet, because you need to insert the removable media into only one computer to collect data from them all.
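For example, from a command prompt on the machine running the tool (the executable name shown is illustrative; use the actual name of the scanner binary from the extracted zip):

```
C:\scanner> scanner.exe --target 192.168.1.50
```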
Improving discovery
BMC Atrium Discovery provides as much information as it can about your environment. To maximize this ability, the product includes features to help you improve the level and quality of access to the environment:
Monitoring credential usage: Discovery relies on access credentials. The credential management tools provide an overview of the success rate of the credentials, to assist you in monitoring the rollout and currency of the credentials in use. This covers login credentials, Windows proxies, and SNMP credentials.
Troubleshooting via SessionResults: Discovery records useful metadata about each access to the environment. This page describes some of the information available and how to use the results to troubleshoot access issues.
Discovery Conditions: There are many scenarios where the system can detect that data could be improved. The Discovery Condition tools enable you to see advice on what actions you can take, and which hosts are affected by those actions.
Discovery Conditions
Discovery Conditions are hints to administrators. They describe configuration issues in an estate that, if resolved, can help BMC Atrium Discovery retrieve more detailed or complete data. Doing this improves the view of the estate, so that users can act on the knowledge it gathers with greater confidence. The following Discovery Conditions are described in Configipedia:
Discovery Condition AIX Netstat truncates IPv6 Addresses
Discovery Condition Red Hat Enterprise Linux RHBA_2012_0188
Discovery Condition Solaris 11 Command Line Truncation
Discovery Condition Solaris 10 Command Line Truncation
Discovery Condition Solaris 9 Command Line Truncation
Discovery Condition Solaris 8 Command Line Truncation
Discovery Condition vSphere Socket Exhaustion
Discovery Condition Windows 2003 KB932370
Discovery Condition Windows XP KB936235
Discovery Conditions Windows NIC
Discovery Conditions are not logged until a Host with the issue is scanned.
Integration points
Copyright 2003-2013, BMC Software
Viewing a connection
The Connections tab on the Integration Point window shows a list of connections configured for this integration point. To view a connection: 1. From the Connections tab on the Integration Point window, click the connection name.
Viewing a query
The Queries tab on the Integration Point window shows a list of queries configured for this integration point. To view a query: 1. From the Queries tab on the Integration Point window, click the query name. The Query window is displayed.
Creating a connection
An SQL discovery connection provides the information needed to create a connection from BMC Atrium Discovery to the target database. Once a connection has been made, the database can be queried. A connection can be represented as a URL, which should be immediately familiar to a database administrator. This is shown in the preview section of the window. An example TPL call using the connection is also shown. To create a connection in an integration point:
1. Click the Connections tab on the Integration Point screen.
2. Click Create at the top right hand side of the Integration Point screen.
2. Enter details of the connection. Which fields are displayed and whether they are required depends on the database driver selected.
Name: Enter a name for the connection. You can only use numbers, letters, or the underscore character (_). This name is used in TPL to call the connection. An example TPL call using the connection name is shown in the Useful Information section.
Description: A free text description of the connection.
Username: The database user name which will be used to connect to the database server.
Password: The password corresponding to the database user name.
Database Driver: The driver to use to connect to the database server. Select the appropriate driver from the drop down list. BMC Atrium Discovery ships with a limited number of database drivers due to licensing restrictions. See adding new drivers for details of how to add further drivers.
Database IP Address: The IP address of the database server.
Port: The port to use to connect to the database server.
Database: The database to connect to in the database server.
Additional Parameters: Any additional parameters that you want to supply to the database driver. These are specified as key=value pairs in a semicolon separated list. For example, you may want to specify that a driver which supports it should connect using SSL and with a non-default timeout: useSSL=true;timeout=60 For information about the parameters that can be specified for each driver, consult the driver documentation on the internet. Links are provided on the JDBC drivers page.
3. To save the connection, click Apply. The Connection window is displayed showing the details of the connection including the example URL and TPL call, and an empty result section. The results section is updated when the connection is used.
Testing a connection
Once you have created a connection, you should test it. To test a connection: 1. From the Connection window, click Test. 2. The screen is refreshed to show a Test Results section below the Connection Details section. The following screenshot shows a success:
Testing a query
Once you have created a query, you should test it. To test a query:
1. From the Query window, click Test.
2. From the Test Query dialog, select a connection from the list to use to test the query.
3. Click Next.
4. Enter the parameters to be passed with the query. The example above shows a query requiring the hostname parameter.
5. To perform the test, click Test. The screen is refreshed to show a Test Results section below the Query Details section.
The following properties files are shipped with BMC Atrium Discovery and are located in the $TIDEWAY/data/installed/jdbcdrivers directory:
Informix
Ingres
Microsoft SQL Server
MySQL
Oracle
PostgreSQL
Sybase
JTDS
Note: IPv6 access using JTDS is not currently possible.
Additional driver support
While it is possible to add new JDBC drivers not specified on the preceding list, these will not have been tested by BMC, and as such no guarantees can be made that they will work with BMC Atrium Discovery as expected. For further information, please contact BMC Support.
The $TIDEWAY/data/custom/jdbcdrivers and $TIDEWAY/data/installed/jdbcdrivers directories are both checked for properties files. When you save a driver to the $TIDEWAY/data/custom/jdbcdrivers directory, you can create a new properties file in the same directory, or you can edit one of the supplied properties files in the $TIDEWAY/data/installed/jdbcdrivers directory.
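For example, to base a new definition on a supplied file, you might copy it into the custom directory and edit the copy there (the file name mysql.properties is an assumption; substitute the supplied file you want to start from):

```
tideway@appliance:~$ cp $TIDEWAY/data/installed/jdbcdrivers/mysql.properties \
                        $TIDEWAY/data/custom/jdbcdrivers/
```

Keeping edited copies under data/custom leaves the shipped files in data/installed untouched.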
driver.class
driver.url
Optional Entries
driver.default=true
Specifies that this driver is the default driver and will be displayed first in lists when creating objects that use JDBC drivers, for example, SQLIntegrationConnections. If no default is specified across all the drivers then the order in which they are displayed is random. If more than one driver is specified as the default then the last driver to be loaded will be marked as the default.
driver.exceptionProcessor
Specifies a Java class that can format SQL exceptions (java.sql.SQLException) that bubble up from the JDBC driver. This class must implement the ExceptionProcessorIF interface. When an SQL exception occurs, the specified class will be called and then the result of that call will be published to the relevant source, which may be a log file or the UI. For example, driver.exceptionProcessor=com.tideway.integrations.service.servants.MySqlExceptionProcessor
translation.variablename
Translates a variable that is specified in the JDBC URL to a user visible label. See the table below for a list of variable names. For example, translation.extra_parameters=Additional Parameters
validationregex.variablename
Used only with Integration Points and Software Credential Groups. Validates a given variable specified in the JDBC URL. When the user tries to save a connection, each of the properties is checked to see if it has an associated validation regex. If so, and if the Match on this check box is NOT selected, or the connection is static, then the value entered by the user is matched against the value specified here. If they do not match, the user is prompted to re-enter the value. Endpoint is always validated, for static credentials, to ensure that it is a valid IPv4 address.
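Putting the required and optional entries together, a complete driver properties file might look like the following sketch. The class name, URL template, variable placeholder syntax, and regex shown are illustrative assumptions, not copied from a shipped file:

```
driver.class=com.mysql.jdbc.Driver
driver.url=jdbc:mysql://{endpoint}:{port}/{database}
driver.default=true
translation.extra_parameters=Additional Parameters
validationregex.port=^[0-9]+$
```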
The following table shows the parameters used by each driver: Field Informix MySQL Postgres Oracle (service) Oracle (SID) Ingres Sybase MS SQL Server JTDS
servertype endpoint port database INFORMIXSERVER extra_parameters instance_name service sid Req Req Req Opt Req Req Opt Opt Opt Opt Opt Opt Req Opt Opt Req Opt Req Req Opt Req Opt Req Opt Req Req Opt Opt Req Opt Opt
When you have added a new driver and written a properties file for it, you must restart the Tideway services. To do this, enter the following command:
$ sudo /sbin/service tideway restart
This creates a symbolic link called oracleService.jar for the oracleService.properties properties file and one called oracleSID.jar for the oracleSID.properties properties file.
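The links themselves can be created with ln -s. The sketch below uses a demo directory and an assumed driver JAR name (ojdbc6.jar); on an appliance the links would be created in the $TIDEWAY/data/custom/jdbcdrivers directory against the real downloaded JAR:

```shell
# Give one Oracle driver JAR the two names expected by the properties files.
dir=/tmp/jdbcdrivers-demo
mkdir -p "$dir"
cd "$dir"
: > ojdbc6.jar                       # stand-in for the downloaded driver JAR
ln -sf ojdbc6.jar oracleService.jar  # matches oracleService.properties
ln -sf ojdbc6.jar oracleSID.jar      # matches oracleSID.properties
```

Linking rather than copying means both properties files always pick up the same driver version.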
tideway service restart required
You must restart the tideway services before using the newly uploaded JDBC driver.
The available statuses for the drivers are:
No JAR Uploaded
JAR Uploaded (Deactivated)
Activated and in use. This means the driver is being used by an SQL credential.
Activated but not in use
Error
BMC Atrium Discovery is shipped with a default set of properties files which define the databases that you can connect to. If you want to connect to a different database, you must write a properties file and download an appropriate driver. This is described in Adding new JDBC drivers.
JDBC drivers
The following table provides an example JDBC URL for database targets. It also provides documentation and download URLs where available. When selecting the appropriate driver, note that BMC Atrium Discovery uses JDK 1.6. Consult the database vendor's documentation for further information.
Informix
jdbc:informix-sqli://host[:port]/database:INFORMIXSERVER=servername [;property=value][;property=value]
The following example URL connects to an Informix server running on a host on IP address 192.168.0.100, port 1533, database name ADDM_IMPORT, INFORMIXSERVER ADDMserver, user fred, and password password.
jdbc:informix-sqli://192.168.0.100:1533/ADDM_IMPORT:INFORMIXSERVER=ADDMserver;user=fred;password=password
Download and documentation: See the IBM website.
MySQL
jdbc:mysql://host[,failoverhost...][:port]/[database] [;property=value][;property=value]
The following example URL connects to a MySQL server running on a host on IP address 192.168.0.100, port 3306, database name ADDM_IMPORT, user fred, and password password.
jdbc:mysql://192.168.0.100:3306/ADDM_IMPORT?user=fred&password=password
Download: http://www.mysql.com/products/connector/
Documentation: http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-configuration-properties.html
Postgres
jdbc:postgresql://host:[port]/database [?propertyName1][=propertyValue1][&propertyName2][=propertyValue2]
The following example URL connects to a Postgres server running on a host on IP address 192.168.0.100, port 5432, and database name ADDMdatabase.
jdbc:postgresql://192.168.0.100:5432/ADDMdatabase
Download: http://jdbc.postgresql.org/download.html
Documentation: http://jdbc.postgresql.org/documentation/83/connect.html
Oracle
For Oracle there are two possible connection styles:
Connection using service
jdbc:oracle:<drivertype>:[username/password]@//host[:port]/service
The following example URL connects, using a thin driver, to an Oracle server running on a host on IP address 192.168.0.100, port 1521, and service ADDM_DB.
jdbc:oracle:thin:@//192.168.0.100:1521/ADDM_DB
Connection using SID
jdbc:oracle:<drivertype>:@host[:port]:SID
The following example URL connects, using a thin driver, to an Oracle server running on a host on IP address 192.168.0.100, port 1521, and SID ADDM100.
jdbc:oracle:thin:[username/password]@192.168.0.100:1521:ADDM100
Download: http://www.oracle.com/technetwork/database/features/jdbc/index-091264.html
Documentation: http://www.oracle.com/technetwork/database/enterprise-edition/jdbc-faq-090281.html
See here for more information on setting up both styles in BMC Atrium Discovery.
Ingres
jdbc:ingres://host:[port]/database [;property=value][;property=value]
The following example URL connects to an Ingres server running on a host on IP address 192.168.0.100, port mnemonic II7, database name ADDMdatabase, user fred, and password password.
jdbc:ingres://192.168.0.100:II7/ADDMdatabase;user=fred;password=password
Download: http://community.ingres.com/wiki/JDBC_Driver
Documentation: http://community.ingres.com/wiki/Open_Office_How_To#Ingres_JDBC_URL_Connection_Information
Sybase
jdbc:sybase:Tds:host[:port][/databasename]
The following example URL connects to a Sybase server running on a host on IP address 192.168.0.100, port 6689, and database name ADDM_DB.
jdbc:sybase:Tds:192.168.0.100:6689/ADDM_DB
Documentation: http://www.sybase.com/detail?id=1009876#sec2q2
MS SQL
jdbc:sqlserver://serverName[\instanceName][:portNumber] [;property=value][;property=value]
The following example URL connects to an instance of MS SQL server called TDA running on a host on IP address 192.168.0.100, port 1433, user fred, and password password. The username and password correspond to a user configured on the database rather than a Windows AD user. See this Microsoft article for more information.
jdbc:sqlserver://192.168.0.100\TDA:1433;User=fred;Password=password
Download: http://msdn.microsoft.com/en-us/data/aa937724.aspx
Documentation: http://msdn.microsoft.com/en-us/library/ms378428.aspx
The downloaded file (a gzipped tar archive, current version sqljdbc_3.0.1301.101_enu.tar.gz) contains two JAR files. Use sqljdbc4.jar; the other JAR does not work.
JTDS
jdbc:jtds:servertype://server[:port][/database] [;property=value][;property=value]
The following example URL connects using JTDS to an MS SQL Server running on a host on IP address 192.168.0.100, port 1433, database name ADDM_IMPORT, instance TDA, user fred, and password password.
jdbc:jtds:sqlserver://192.168.0.100:1433/ADDM_IMPORT;instance=TDA;user=fred;password=password
When using a domain credential (Windows Authentication) of the form DOMAINNAME\username, enter the username in the URL as described, and the domain information in the Additional JDBC parameters dialog box in the following form: domain="DOMAINNAME". Also, if the domain controller requires NTLM v2, add the parameter useNTLMv2=true.
The following example URL connects using JTDS to an MS SQL Server running on a host on IP address 192.168.0.100, port 1433, database name ADDM_IMPORT, instance TDA, Windows user fred in the domain DOM1, and password password.
jdbc:jtds:sqlserver://192.168.0.100:1433/ADDM_IMPORT;instance=TDA;domain=DOM1;useNTLMv2=true;user=fred;password=password
Documentation: http://jtds.sourceforge.net/faq.html#urlFormat
1. Create the properties file and save it as db2jcc.properties.
2. Copy the db2jcc.properties file to the /usr/tideway/data/custom/jdbcdrivers directory on the appliance.
3. Copy the DB2 JAR file db2jcc.jar to the /usr/tideway/data/custom/jdbcdrivers directory.
4. Copy the DB2 license JAR file, for example, db2jcc_license_cu.jar, to the /usr/tideway/java/integrations/lib directory.
5. Restart the tideway service.
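On the appliance command line, steps 2 to 5 might look like the following transcript (the prompt and working directory are illustrative; the files are assumed to be in the current directory):

```
tideway@appliance:~$ cp db2jcc.properties /usr/tideway/data/custom/jdbcdrivers/
tideway@appliance:~$ cp db2jcc.jar /usr/tideway/data/custom/jdbcdrivers/
tideway@appliance:~$ cp db2jcc_license_cu.jar /usr/tideway/java/integrations/lib/
tideway@appliance:~$ sudo /sbin/service tideway restart
```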
Technology Knowledge Update patterns are distributed as a zip file which is uploaded to the appliance. See Uploading and activating a TKU for more information.
Changes in the pattern configuration block revision number may reset the pattern configuration changes to the default values
If you install a Technology Knowledge Update (TKU) which contains an update to a pattern configuration block that requires its major revision number to be incremented, any end-user configuration changes to that pattern (made through the pattern configuration block) are lost and the default pattern values are restored.
A TKU release contains the following components. You may download one or both of these depending on your license agreement:
Technology-Knowledge-Update-YYYY-MM-N-RELEASE.zip
Core: Patterns that model various software products.
Extended Database discovery: Patterns that extend some of the core database patterns to obtain additional detailed information about certain databases.
Middleware discovery: Patterns that obtain detailed information about certain J2EE Application Servers and Web servers.
Network Device: Integration Module with associated Network Device definition files (compatible with BMC Atrium Discovery 8.2.01 and above) and Printer definition files (compatible with BMC Atrium Discovery 8.3 and above).
System: Patterns that detect configuration issues and populate Discovery Conditions, and patterns used by the CMDB sync system.
Extended-Data-Pack-YYYY-MM-N-RELEASE.zip
Product Lifecycle: Patterns that populate BMC Atrium Discovery with information about End-of-Life, End-of-Support, and End-of-Extended-Support for versions of certain host Operating Systems as well as versions of certain software products.
Hardware Reference Data: Patterns and a data import CSV file that populate BMC Atrium Discovery with power and heat consumption figures for various hosts.
Where a TKU upgrade makes changes to syncmapping files (see Default CDM Mapping and Syncmapping block), the initial CMDB syncs after upgrade may result in longer reconciliation times. Examples of such changes are key changes or attribute changes on a CMDB CI.
Hardware Reference Data
Hardware reference data is uploaded using the Create and Update option available when importing using the Import Hardware Reference Data page.
Lifecycle reports
The Extended Data Pack provides a means of populating many of the Lifecycle reports that are shown in the OS Product Lifecycle Analysis dashboard.
These supplied zip archives can be uploaded and their content activated without any further processing by following the procedure below.
3. Click New Update to display the Knowledge Update dialogs. 4. Enter the filename into the File To Upload dialog box, or click Browse... to locate a file. 5. To upload and perform the update, click Apply. This task may take a long time. You can navigate away from this page while the task is running. When the task is finished, the page is updated with the results which remain until cleared.
Updating a TKU
When updating a TKU, follow the same procedure as for uploading a TKU. The patterns in the previous TKU are automatically deactivated. Where the major version of TKU patterns changes, the new patterns do not become the maintainer of the inferred model created by the patterns from the previous TKU. Where this happens, the inferred model created by the previous TKU and that created by the new TKU may coexist in parallel until the older one is aged out. You can avoid this by deleting the previous TKU, which also removes the inferred model created by its patterns.
Consolidation
Consolidation refers to the centralization of discovery data from scheduled or snapshot scans on multiple scanning appliances to one or more consolidation appliances. You might want to use consolidation in the following scenarios:
Firewalled environments: When an environment is divided by firewalls so that a single appliance is unable to reach all parts, a scanning appliance can be situated on each section of the network blocked by a firewall. The scanning appliances can all feed back data to a central consolidation appliance.
Restricted (policy) networks: Certain lines of business might enforce policies on the control of IT infrastructure in their environments. Where such policies limit or prohibit access, scanning appliances can be deployed which all feed back data to a central consolidation appliance.
Restricted (time) scanning windows: Where a discovery window is short, a single appliance may be unable to complete a scan of a large range of IP addresses during the permitted time. Sharing the IP addresses between multiple scanning appliances means each smaller scan can be completed in less time, and the results can be consolidated and viewed on the consolidation appliance.
In each of these situations, multiple scanning appliances can be deployed, and their data consolidated into a central consolidation appliance. The consolidation appliance is then used for reporting and provides a coherent view of the entire scanned network. A consolidation appliance must be set as one which accepts connections or feeds from scanning appliances. Scanning appliances must in turn register with a consolidation appliance.
IP address ranges
Although consolidation can be used to scan a firewalled environment, it is essential that the IP address ranges scanned by each scanner belong to the same IP address space. That is, if two scanning appliances scan the same address, they must both reach the same device. If the IP address spaces are not consistent across all the scanners, information on the consolidation appliance can be missing or incomplete. This restriction applies only to the addresses scanned by the scanning appliances; if discovery targets possess other IP addresses, there is no need for those addresses to belong to a consistent IP address space.
Consolidation Appliance: The main purpose of the consolidation appliance is to report on data consolidated from a number of scanning appliances. It can also perform normal discovery, although this is usually not recommended.

Scanning Appliance: The scanning appliance also operates as a normal standalone appliance. The only difference is that it constantly sends discovery data to the consolidation appliance. After setup, this process is transparent to the user. A scanning appliance must request, and be approved for, a connection on a consolidation appliance before it can send any consolidation data to that appliance. This is described in Approving or rejecting a scanning appliance request. On the consolidation appliance user interface, the Discovery Currently Processing Runs tab shows any local scans and any consolidation runs in progress. The Currently Processing Runs tab is described in The Discovery Status page.
What is consolidated?
The consolidated data is the Directly Discovered Data (DDD) nodes, including data collected by the patterns. The data inferred by the scanning appliances (for example, Software Instance nodes) is not consolidated; instead, the consolidation appliance infers it again, based on its own pattern configuration.
TKU release, patterns, CSV imports and consolidation
The TKU release package and any custom patterns loaded on the scanning and consolidation appliances must be the same in order to infer the same data (for example, Software Instance nodes). This is not enforced in any way by the system. Any data imported via CSV on a scanning appliance is not consolidated; it must be imported on the consolidation appliance too.
Restrictions
Normal version restrictions
The minor release of the consolidation appliance must be the same as, or greater than, that of the scanning appliance.
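This compatibility rule amounts to an ordered (major, minor) comparison; a minimal sketch (the function name and tuple representation are illustrative, not part of the product):

```python
def consolidation_compatible(consolidator, scanner):
    """Return True if a consolidation appliance at version `consolidator`
    can accept feeds from a scanning appliance at version `scanner`.

    Versions are (major, minor) tuples; the consolidation appliance's
    release must be the same as, or greater than, the scanner's.
    """
    # Tuple comparison orders by major version first, then minor.
    return consolidator >= scanner
```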
Configuring consolidation
Configuring consolidation is a two-step procedure: first, the appliance which is to act as the consolidation appliance must be set as a consolidation appliance; then one or more scanning appliances register with it. To configure consolidation you need the permissions detailed in Consolidation Permissions.
Firewalls and consolidation
Appliances in a consolidation network communicate using port 25032. The scanning appliance must be able to connect to port 25032 on the consolidation appliance, so you must configure any firewalls between scanning appliances and consolidation appliances to allow this traffic.
Consolidation appliances communicate using port 25032, and the port is open whether or not an appliance is configured as a consolidation appliance. Therefore you cannot, for example, telnet to the appliance IP address and port 25032 to determine whether it is a consolidation appliance.
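Because port 25032 is open regardless of the appliance's role, a simple connection test can confirm only that firewalls permit the traffic, not that the target is configured for consolidation. A minimal reachability check, sketched in Python (the helper name is illustrative):

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within
    timeout seconds. Note: for port 25032 this only proves the firewall
    path is open, not that the target is a consolidation appliance."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `can_connect("consolidator.example.com", 25032)` (a hypothetical hostname) would verify that intervening firewalls allow the consolidation traffic.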
PDF generated from online docs at http://discovery.bmc.com BMC Atrium Discovery 9.0
Name: The name of the scanning appliance. Names must be unique in the consolidation network, and you cannot consolidate a scanning appliance with the default name, Discovery_Appliance. The name is taken from the Administration => Appliance Configuration => Identification page; see [Initial configuration]. A change link is provided which displays the Identification page, where you can change the name of the appliance.

Consolidation Appliance: The address of the consolidation appliance. This may be specified as a hostname or FQDN, or an IPv4 or IPv6 address. You can supply credentials for the consolidation appliance in this dialog; if you supply valid credentials here, the scanning appliance is approved automatically.

Username: The user name for a user on the consolidation appliance. This user must have appropriate permissions to approve the connection of the scanning appliance to the consolidation appliance.

Password: The password for the user on the consolidation appliance.
If the target consolidation appliance is an earlier version than the scanning appliance, you are warned that the consolidation appliance version is too old.
If you supplied valid credentials for automatic approval on the consolidation appliance, then the scanning appliance is now configured and working as a scanning appliance.
To accept the appliance connection, click Approve. To reject the request, click Reject; the connection is then deleted from the consolidation appliance, and when no connections remain, the scanning appliance reverts to a non-consolidated appliance.
Canceling a consolidation run on the consolidation appliance stops the consolidation, though the scan continues on the scanning appliance. This leads to inconsistencies between the data on the two appliances. Where possible, you should always stop the scan on the scanning appliance and allow the consolidation to run to completion.
If you must cancel a consolidation run from the consolidation appliance, you can do so by selecting the discovery run on the Discovery Status page of the consolidation appliance and clicking Cancel Runs.
Extending discovery
This section details how to extend discovery coverage in BMC Atrium Discovery:
Discovering mainframe computers
Extended discovery of IBMi
Extended discovery of Tomcat
Extended discovery of WebSphere
Extended discovery of WebLogic
Integrating with BMC Atrium Orchestrator
Power System discovery
Discovering VMware ESX and ESXi hosts
BMC Atrium Discovery for z/OS agent
BMC Atrium Discovery cannot natively discover z/OS systems; it requires BMC Atrium Discovery for z/OS, a separately licensed product, to be installed on the z/OS systems. Unless the support ID you use is registered as licensed for this additional product, you may be unable to see the references linked to on this page. Contact your BMC sales representative if you need to discuss this further.
Discovering all mainframes and LPARs To discover all mainframes and LPARs, a BMC Discovery for z/OS agent must be installed on every LPAR, and at least one MainView Explorer (host server) must be running on each CEC (mainframe). You must then scan the IP addresses for each of the MainView Explorer instances.
For information on installing and configuring the BMC Discovery for z/OS Agent, see the BMC Discovery for z/OS Agent Installation Guide available from the BMC support site.
A separate agent must be configured for each additional mainframe to be discovered, even if agents for the mainframes are communicating with each other and each agent is able to gather details of more than one mainframe. The ability to infer multiple Mainframe nodes depends on the BPA1401 PTF being applied on the BMC Discovery for z/OS agent. If the PTF is not applied, BMC Atrium Discovery cannot infer multiple mainframes (though the orphan nodes previously described are no longer created); this is the behavior that existed in version 8.2.01 of the product.
Some of the detail data that the z/OS Discovery Agent collects depends on additional MainView products being installed. The discovery methods are getMainframeInfo*, getMFPart*, getApplication, getCouplingFacility, getDatabase, getDatabaseDetail, and getTransactionProgram (* indicates methods that must succeed for a Mainframe node to be created). The script views these methods request have the following MainView product requirements:

View Z, views 0Z through 7Z (requested as View 01234567Z), and View -@: no additional product requirement noted.
Views 49Z, 4HZ and 4WZ (requested as View 49HWZ): require the MainView for WebSphere Application Server and MainView Transaction Analyzer products to be installed.
Views 1IZ and 1XZ (requested as View 1IXZ): require the MainView for DB2 and MainView for IMS products to be installed.
Views 1IJZ, 1ILZ and 1IKZ (requested as View 1IJKLZ): require the MainView for DB2 product to be installed.
Views 5UZ, 5VZ and 5NZ (requested as View 5NUVZ): require the MainView for WebSphere MQ product to be installed.
View 0YZ (version 9.0 SP1): requires the MainView for CICS and MainView for IMS products to be installed.

Though the DB2/IMS/MQ/CICS servers will still be discovered, this detailed data is not available in a z/OS Discovery Agent-only installation. See the BMC Discovery for z/OS documentation for the full mapping of methods to views.
Credentials
To discover a mainframe computer you must have a z/OS user ID and password pair that is also defined in RACF, ACF2, Top Secret, or another external security manager. You configure the credential in the Management Systems tab of the Discovery Credentials page.
1. From the mainframe credentials page, perform one of the following actions:
   a. To add a new credential, click Add.
   b. To amend an existing credential, click Actions => Edit.
2. Enter the following information:
Matching Criteria
The IP address or addresses against which to use this mainframe credential. Select "Match All" to match all endpoints. Deselect it to enter IP address information which determines whether this credential is suitable for a particular endpoint. Enter IP address information in one of the following formats: IPv4 address (for example 192.168.1.100). Labelled v4. IPv4 range (for example 192.168.1.100-105, 192.168.1.100/24, or 192.168.1.*). Labelled v4.
The following address type cannot be specified: IPv4 multicast addresses (224.0.0.0 to 239.255.255.255).
As you enter text, the user interface (UI) divides it into pills, discrete editable units, when you enter a space or a comma. According to the text entered, the pill is formatted to represent one of the previous types or presented as invalid. Click here for more information on using the pill UI. Invalid pills are labelled with a question mark. You can also paste a list of IP addresses or ranges into this field. There is no paste option on the context sensitive (right click) menu. You cannot paste a comma-separated list of IP address information into the Range field in Firefox. This may crash the browser. You can use a space separated list without any problems.
To edit a pill, click the pill body and edit the text. To delete a pill, click the X icon to the right of the pill, or click to edit and delete all of the text. To view the unformatted source text, click the source toggle switch. The source view is useful for copying to a text editor or spreadsheet. Click the source toggle switch again to see the formatted pill view. You can also sort the pill view by value or type. Value sorts by ascending numerical value. Type sorts by type, placing invalid pills first. Underneath the entry field is a filter box. Enter text in the filter box to only show matching pills.
Pills are not currently supported in Opera.

Enabled: A check box that enables or disables the credential.
Username: Enter the z/OS username with which to log in to the mainframe computer.
Set password: When updating a credential, the password is shown as a series of asterisks in this field and cannot be edited. To enter a new password, select the check box; the password entry field is cleared. Enter the password into the password entry field; the password text is not echoed to the screen.
Description: Enter a free text description of the credential.
Port: The port to use to connect to the mainframe. By default this is 3940. To use a different port, select the Enable custom mainview port? check box and choose a port number from the list. The list is populated with port numbers specified on the Discovery Configuration page.
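The accepted range formats described above can be illustrated with a small classifier built on Python's ipaddress module; the function and its category labels are a sketch, not the product's pill parser:

```python
import ipaddress
import re

def classify_pill(text):
    """Classify a single 'pill' roughly as the UI might: 'v4' for a
    plain IPv4 address, 'v4 range' for dashed, CIDR, or wildcard
    ranges, otherwise 'invalid'.  Category names are illustrative."""
    # Plain IPv4 address, e.g. 192.168.1.100 (multicast is rejected).
    try:
        addr = ipaddress.IPv4Address(text)
        return 'invalid' if addr.is_multicast else 'v4'
    except ValueError:
        pass
    # Dashed range, e.g. 192.168.1.100-105.
    if re.fullmatch(r'\d{1,3}(\.\d{1,3}){3}-\d{1,3}', text):
        return 'v4 range'
    # CIDR, e.g. 192.168.1.100/24 (strict=False allows host bits set).
    try:
        ipaddress.IPv4Network(text, strict=False)
        return 'v4 range'
    except ValueError:
        pass
    # Wildcard, e.g. 192.168.1.*
    if re.fullmatch(r'(\d{1,3}\.){1,3}\*', text):
        return 'v4 range'
    return 'invalid'
```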
To test an existing mainframe credential:
1. From the mainframe credential list, select Actions => Test.
2. Enter the IP address for which you want to test credentials.
3. Click Test.

When the test completes, the Management System Credentials page is refreshed with the result: either success or failure. By clicking through the state link, you can view the results of the credential test in more detail.
The credential test result page shows the discovery methods enabled in BMC Atrium Discovery, whether corresponding views are detected in the Discovery for z/OS agent, and the z/OS agent version. For information on configuring security for the BMC Discovery for z/OS Agent, see the BMC Discovery for z/OS Agent Installation Guide.
Using getDiskDrive, getTapeDrive, and getTransactionProgram with CMDB Synchronization When enabling getDiskDrive, getTapeDrive, and getTransactionProgram in a system where CMDB synchronization is used, you should be aware that these methods cause a large number of CIs to be created.
You can view and download an example pattern module created from the templates here.
// This module contains example patterns based on the Mainframe pattern
// templates.
//
// THIS IS FOR EXAMPLE USE ONLY
//
tpl 1.5 module Mainframe.Examples;

metadata
    origin := "ONLINE_DOCS";
end metadata;

from SupportingFiles.Mainframe.Support import MainframeModel 1.0;

pattern Mainframe_TapeDrive 1.0
    '''
    This pattern creates a Storage node for all TapeDrives.
    '''
    overview
        tags IBM, TapeDrive;
    end overview;
    triggers
        on dtd := DiscoveredTapeDrive;
    end triggers;
    body
        MainframeModel.createTapeStorage(dtd);
    end body;
end pattern;

pattern Mainframe_DASD 1.0
    '''
    This pattern creates a Storage node for DASD Drives with a
    volume_id beginning with AL.
    '''
    overview
        tags IBM, DASD;
    end overview;
    triggers
        on dasd := DiscoveredDiskDrive where volume_id matches regex '^AL';
    end triggers;
    body
        MainframeModel.createDiskStorage(dasd);
    end body;
end pattern;

pattern Mainframe_Transaction 1.0
    '''
    This pattern creates a Detail node for all Transactions with a
    name beginning with CICSTRAN-C.
    '''
    overview
        tags IBM, Transaction;
    end overview;
    triggers
        on trn := DiscoveredTransaction where name matches regex '^CICSTRAN-C';
    end triggers;
    body
        MainframeModel.createTransaction(trn);
    end body;
end pattern;

pattern Mainframe_MQ 1.0
    '''
    This pattern creates a Detail node for all MQDetails with a
    name beginning with MQCHNL-.
    '''
    overview
        tags IBM, MQ;
    end overview;
    triggers
        on dmq := DiscoveredMQDetail where name matches regex '^MQCHNL-';
    end triggers;
    body
        MainframeModel.createMQDetail(dmq);
    end body;
end pattern;
Discovering Tomcat
BMC Atrium Discovery uses the ApacheFoundation.Tomcat.Tomcat pattern to discover the Tomcat instance, identify the home directory (catalina.home) and the base directory (catalina.base), and determine the Tomcat version. The pattern extracts information from the Tomcat configuration files to create and populate a Tomcat SI and its attributes. Extended Tomcat discovery is enabled by activating the Tomcat.ExtendedDiscovery.DiscoverTomcat pattern. This pattern is activated by default in a new installation of BMC Atrium Discovery; in an upgraded appliance, it must be activated manually. The Tomcat.ExtendedDiscovery.DiscoverTomcat pattern triggers on the creation or update of a Tomcat SI, and is fully described in Configipedia. Creation of a "JDBC Resource" Detail node triggers the CreateJDBCToDatabaseSI pattern, which searches the BMC Atrium Discovery model for an SI representing that database. If the database SI is found, the pattern creates relationships between the Tomcat Application Server SI and the nodes representing the database (see below). If the host that the database runs on has not been scanned, no further action is taken. Creation of a "J2EEApplication" SoftwareComponent node does not trigger any additional operations. See the Tomcat documentation for additional information about Tomcat datasources.
The attributes section of the Tomcat SI contains sections for Components, and Details. Components are the SoftwareComponent nodes representing J2EE applications on Tomcat.
Details are detail nodes representing resources on Tomcat. These may be application specific or globally defined, and may be one of the following:
Custom Resources: a custom resource can be any kind of JavaBean declared as a resource.
User Database Resource: the default Tomcat user management database.
Java Mail Resources
JDBC Resources
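Extended discovery of this kind boils down to reading resource definitions out of Tomcat's XML configuration. The sketch below extracts JDBC datasource Resources from a context.xml-style fragment; the XML content, function name, and attribute choices are illustrative examples, not the pattern's actual implementation (which is written in TPL):

```python
import xml.etree.ElementTree as ET

# Hypothetical Tomcat context fragment declaring one JDBC datasource.
CONTEXT_XML = """<Context>
  <Resource name="jdbc/inventory" auth="Container"
            type="javax.sql.DataSource"
            url="jdbc:mysql://dbhost:3306/inventory"/>
</Context>"""

def jdbc_resources(xml_text):
    """Return (name, url) pairs for every javax.sql.DataSource Resource
    declared in the given Tomcat configuration fragment."""
    root = ET.fromstring(xml_text)
    return [(r.get('name'), r.get('url'))
            for r in root.iter('Resource')
            if r.get('type') == 'javax.sql.DataSource']
```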
Discovering WebSphere
BMC Atrium Discovery uses the IBM.WebSphereApplicationServer pattern to discover the WebSphere instance, identify the installation directory, and determine the WebSphere version. The pattern extracts information from the WebSphere configuration files to create and populate a WebSphere SI and its attributes.

Extended WebSphere discovery is enabled by activating the WebSphere.ExtendedDiscovery.DiscoverWebSphere pattern. This pattern is activated by default in a new installation of BMC Atrium Discovery, but in an upgraded appliance it must be activated manually. It triggers on the creation or update of a WebSphere SI, performs a deep discovery of WebSphere instances via their configuration files, and then parses the results to build a J2EE inferred model of the applications and resources. The pattern attempts to:
Get the application names from serverindex.xml and create the application SoftwareComponent nodes.
Get the JNDI names of all the resources used by each application from the <appname.ear>/WEB-INF/web.xml file.
Read the resources.xml files for the server, node, and cell, and get database and mail resource information.
Create the database and mail Detail nodes.
Link each resource with the applications that use it.

Creation of a JDBC Resource Detail node triggers the CreateJDBCToDatabaseSI pattern, which searches the BMC Atrium Discovery model for an SI representing that database. If the database SI is found, the pattern creates relationships between the WebSphere Application Server SI and the nodes representing the database (see below). If the host that the database runs on has not been scanned, no further work is undertaken. Creation of a J2EEApplication Component node does not trigger any further operations. The WebSphere.ExtendedDiscovery.DiscoverWebSphere pattern is fully described in Configipedia.
MainView for WebSphere Application Server product version 2.5
Attempting to discover a host server running version 2.5 of the optional MainView for WebSphere Application Server product results in the script failure message "View 49Z not available", and no WebSphere Discovered Application Components are created, preventing extended discovery.
The relationships created depend on the way that the database type is represented. (For example, an Oracle database is represented by an SI whereas a MySQL database server is represented as an SI and the individual databases by Detail nodes with contained by relationships to the database server SI).
The attributes section of the WebSphere SI contains sections for Components and Details. Components are J2EE applications on WebSphere. Details are resources on WebSphere; these may be application specific or globally defined, and may be one of the following:
User Database Resource: the default WebSphere user management database.
Java Mail Resources
JDBC Resources
Discovering WebLogic
BMC Atrium Discovery uses the BEA.WebLogicApplicationServer pattern to discover a WebLogic application server instance and its version, to identify the JMX port, and, most importantly for extended discovery, to determine whether JMX access is enabled.

Extended WebLogic discovery attempts to discover detailed information related to the WebLogic application server, such as the J2EE domain, the J2EE applications running on the server, and the JDBC resources that the server is using. To do this, an additional pattern, WebLogic.ExtendedDiscovery, initially determines whether JMX access is enabled and whether the JMX port has been identified. It then attempts to determine whether the WebLogic application server version is supported (see Supported product versions). If the WebLogic server version cannot be determined using a JMX query, pattern execution ends. If the JMX port has not been identified, extended WebLogic discovery uses port 7001 by default.

The WebLogic.ExtendedDiscovery pattern then queries the WebLogic Administration Server's JMX monitoring agent for details about the J2EE applications, application servers, databases, database servers, mail servers, web servers, J2EE domain, and J2EE clusters. The information returned is stored in J2EEApplication Component, J2EEDomain Collection, JDBCResource Detail, or JavaMailResource Detail nodes. Creation of a JDBCResource Detail node triggers the CreateJDBCToDatabaseSI pattern, which searches the BMC Atrium Discovery model for a software instance (SI) representing that database. If the database SI is found, the pattern creates relationships between the WebLogic Application Server SI and the nodes representing the database (see Database nodes and relationships). If the host that the database runs on has not been scanned, no further work is undertaken. The WebLogic.ExtendedDiscovery.DiscoverWebLogic pattern is fully described in Configipedia.
Creation of a J2EEApplication Component node does not trigger any further operations. If extended discovery fails, it falls back to using a host login and extracts information from the WebLogic configuration files to create and populate a WebLogic Application Server SI and its attributes.
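Matching a JDBC resource to a database SI depends on identifying the server behind the resource's connection URL. As a rough, hypothetical sketch (not the CreateJDBCToDatabaseSI implementation), this parses the vendor, host, port, and database name out of the common jdbc:<vendor>://host:port/database URL shape:

```python
import re

def parse_jdbc_url(url):
    """Split a jdbc:<vendor>://host[:port]/database URL into its parts.
    Returns a dict, or None if the URL does not match this simple form
    (some vendors, such as Oracle thin URLs, use other shapes)."""
    m = re.match(
        r'jdbc:(?P<vendor>\w+)://(?P<host>[^:/]+)(?::(?P<port>\d+))?/(?P<db>\w+)',
        url)
    if not m:
        return None
    parts = m.groupdict()
    parts['port'] = int(parts['port']) if parts['port'] else None
    return parts
```

The extracted host and database name are the kind of keys a pattern could use to search the model for a matching database SI.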
For WebLogic application server login credentials (JMX), see Configuring extended WebLogic discovery.
The attributes section of the WebLogic SI contains sections for Components, Details, and Collections. Components are deployed EAR and WAR modules on WebLogic. Details are resources on WebLogic; these may be application specific or globally defined, and may be Java Mail Resources or JDBC Resources. Collections are J2EE Domains on WebLogic.
Middleware credential always required You must set up a middleware credential to perform extended discovery of WebLogic even if the target system requires no authentication. In this case, any values may be entered in the username and password fields.
Name: Enter a name for the credential (for example, ExtendedWebLogic).
Description: Enter a free text description of the credential.
Username: The logon ID with which to connect to the WebLogic application server. You must specify a logon ID that has a security role of Monitor, Operator, Deployer, or Admin.
Password: The password corresponding to the username above.
IP address: The IP address of the WebLogic host. This can be one of the following:
an IP address (10.10.10.3)
a range specification (10.10.10.* or 10.10.1-5.* or 10.10.10.0/24)
a regular expression matching an IP address (.* or 10.10.10.(23|25))
Requirements
To detect when a running virtual machine is moved from one physical host to another:
1. Discovery must be running.
2. The physical hosts and the virtual host must already have been scanned by BMC Atrium Discovery.
If these requirements are not fulfilled, the VMotion event is not stored for BMC Atrium Discovery to catch up on later; the change is instead picked up as part of routine discovery.
Non-matching hostnames
BMC Atrium Orchestrator derives hostnames from the VMware API, and BMC Atrium Discovery derives hostnames using its discovery techniques. The derived hostname can be either a Fully Qualified Domain Name (FQDN) or a plain host name. For successful integration between BMC Atrium Orchestrator and BMC Atrium Discovery, the derived hostnames must match; that is, in both products, the derived hostname for a specific host must be either an FQDN or a plain host name. If there is a hostname mismatch, a Discovery rescan is not triggered for that host.
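The matching rule can be sketched as a strict string comparison, so a bare host name never matches an FQDN even when both name the same machine. The helper below is illustrative and assumes case-insensitive comparison, which DNS names permit:

```python
def rescan_triggered(orchestrator_name, discovery_name):
    """Sketch of the hostname match: a rescan is triggered only when
    both products derived the same form of the name (both FQDN, or
    both plain host names). 'web01' vs 'web01.example.com' does NOT
    match, even though they may refer to the same machine."""
    return orchestrator_name.lower() == discovery_name.lower()
```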
For example:
7. Enter values for the following items appropriate to your system.

HTTP_POST_URL
a. Click the HTTP_POST_URL link.
b. For the Value (string) field, enter https://appliancename/ui/aoint, where appliancename is the name of your BMC Atrium Discovery appliance. This URL must use https.
c. To save the changes, click OK.

password
a. Click the password link.
b. For the Value (string) field, enter the password corresponding to the username specified below.
c. To save the changes, click OK.

username
a. Click the username field.
b. For the Value (string) field, enter the username on the BMC Atrium Discovery system with permissions to launch scans, that is, a member of the Discovery group. For more information, see Group Permissions.
c. To save the changes, click OK.
In some early versions of HMC, the hmcbash shell does not exist. HMC hosts which do not have hmcbash installed cannot correctly interpret the Discovery commands and are therefore not supported by BMC Atrium Discovery.
After you have root access, you can add the discovery user using the useradd command as you would for any other AIX host:
# useradd discovery
Before you can use this account, you need to configure a password.
# passwd discovery
You are prompted twice to enter a new password. After the user is configured and a corresponding credential added to BMC Atrium Discovery, VIO Servers are discovered using the standard AIX discovery scripts. A discovered VIO server is shown in the following screen.
It is also possible to avoid adding the local UNIX user account by escaping to the OEM environment (using the oem_setup_env command) in the standard AIX discovery script. However, this would give BMC Atrium Discovery unrestricted root access to the operating system. It would also affect normal AIX hosts, where the oem_setup_env command does not exist; in that case you might only receive a runtime error message that you can ignore. It is not possible to detect the presence of oem_setup_env on the VIO server due to the shell restrictions.
Example visualization
The following screen shows a visualization of an HMC management software SI with links to the Host Containers (frames) managed by the HMC. Contained within each of the frames is a single VIO server and a number of LPARS. To view this visualization, select Management from the
VMware vCenter
VMware vCenter Server provides centralized management of VMware vSphere (ESX and ESXi) virtual machines. BMC Atrium Discovery uses the VMware vSphere API to communicate with VMware vCenter to discover VMware ESX and ESXi hosts.
Discovered versions
VMware ESX and ESXi discovery uses version 2.5 of the vSphere API. This supports discovery of the following versions and later: ESX 3.5 ESXi 3.5 vCenter Server 4.0 VirtualCenter 2.5
Discovery of vCenter
Discovery of a vCenter instance itself is simply a matter of standard host discovery and the creation of a vCenter SI. It does not rely on vSphere API calls, or ports 902 or 443 being open. However, to use vCenter to discover ESX/ESXi hosts, discovery uses port 443 to communicate with the vCenter server.
Unpatched VMware vSphere known problems
Unpatched versions of VMware vSphere have known problems when scanned by various tools. We strongly recommend that you apply the appropriate patches to affected systems. There is more information about this issue in Configipedia.
Discovery using vSphere API only supported over IPv4 Discovery of VMware ESX and ESXi hosts using the VMware vSphere API is only supported over IPv4. However, discovery using ssh over IPv6 is supported.
The API calls described on the vSphere API Support page at Administration => Discovery Platforms => VMware ESX or VMware ESXi are used to discover the system.
If the API cannot be contacted, discovery falls back to an ssh login, as used in previous BMC Atrium Discovery versions.
VMware ESX and ESXi ssh discovery must be done as the root user
VMware ESX and ESXi ssh discovery must be done as the root user; you should log in directly as root. It is possible to log in as a non-root user, but such a user cannot close its sessions properly, resulting in hanging sessions and inactive sessions building up on the ESXi host.
Credentials
During discovery, the BMC Atrium Discovery system attempts to access host systems to obtain details of the processes running on them. Credentials (IDs and passwords) and credential-like entities (Windows proxies and SNMP credentials) for different access methods can be stored on the system to allow the required level of access. You can set up the following:
The login credentials (user IDs and passwords) for interactive login to different host systems.
The Windows proxies used to discover Windows systems.
The SNMP credentials used on particular host systems. SNMP queries are only tried if an attempted login fails and if the SNMP port (UDP 161) is open on the target host.
The vSphere credentials used to discover VMware ESX and ESXi hosts.
The vCenter credentials used to discover VMware ESX and ESXi hosts by querying the vCenter server.
Database credentials used to query databases.
Middleware credentials used to query middleware such as web and application servers.
Management Systems credentials used to query management systems such as VMware Virtual Center.
You can also run and view the progress of Device Credential Tests.
Shell support
BMC Atrium Discovery is tested to work with Bourne and Bourne-compatible shells. Support for other shells, such as the Korn shell, is best effort only: the product has been tested sporadically with these shells and may work, but known issues exist, and bugs that affect only these shells may not be fixed.
Device credentials
Device credentials are those credentials which provide a means of logging in to hosts running a UNIX or Linux operating system, a Windows operating system, or any SNMP-enabled device such as routers and switches. The configuration of device credentials is described in the following sections:
Configuring host login credentials
Configuring Windows discovery
Configuring SNMP credentials
Configuring vCenter credentials
Configuring vSphere credentials
Testing credentials
User privileges and information access
If an access login method is disabled (for example, telnet) and that method is recorded as the last successful login method, it is tried again on a subsequent scan. If it fails on that scan then that method will not be tried again until it is re-enabled. An access method is only attempted if it is seen to be available (for example, SSH access will only be attempted if the SSH port is open). The following pages describe how to configure and view host login credentials: Viewing host login credentials Adding host login credentials Using SSH keys
At the top of the host login credentials page, the system displays the total number of host login credentials and the number of host login credentials by each access method.
The credentials are checked in sequence, and the first matching entry is used. After a working credential is found, further credentials are not checked. To reorder login credentials, drag a credential to the required position in the list.

The credentials are shown in color-coded boxes. The colors represent the level of login success achieved with that credential:
Green: 100% success rate.
Yellow: partial success.
Blue: the credential has never been used.
Red: 0% success rate.

Where a credential is shown in a filled red box, it is an invalid credential from a previous BMC Atrium Discovery version. Previous versions permitted range specifications using regular expressions; before the credential can be used, the range must be corrected in the Matching Criteria field of the Edit Credential dialog.

The following information is shown for each credential:
Credential link: The first part of the heading link for the credential; the range of IP addresses on which this credential is intended to be used.
Username: The second part of the heading link for the credential; the user name used for this credential.
Description: A free text description of the credential supplied by the user who created the credential.
Usage: A summary of the success rate when the credential has been used, information about failures, and links to DiscoveryAccesses, credential lists, and other useful diagnostic pages.
Options: Additional options used with this credential. With the exception of No Password (use ssh key exchange), the options are those selected from the Options section when the credential is set up. The No Password (use ssh key exchange) option is selected by not entering a password. For information about these, see the Field Name-Details table for Adding host login credentials.
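The first-match behavior can be sketched as follows; the credential records and the set-based range test are simplified illustrations (the product matches against IP ranges, not literal address sets):

```python
def pick_credential(credentials, endpoint):
    """Walk the ordered credential list and return the first enabled
    credential whose matching criteria include the endpoint. Later
    credentials are never checked once a match is found, which is why
    list order matters."""
    for cred in credentials:
        if cred['enabled'] and endpoint in cred['ranges']:
            return cred
    return None

# Hypothetical ordered credential list; both entries match 10.0.0.5,
# so only the first is ever used for that endpoint.
creds = [
    {'name': 'unix-estate', 'enabled': True,
     'ranges': {'10.0.0.5', '10.0.0.6'}},
    {'name': 'fallback-root', 'enabled': True,
     'ranges': {'10.0.0.5', '10.0.0.9'}},
]
```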
Actions
A list with the following options:
- Edit: To edit the credential, select Edit. See Adding host login credentials for information about the fields and settings available from the Edit Login Credential page.
- Disable: To disable a credential, select Disable. The credential is marked as disabled in the credential list. When a credential is disabled, this option is replaced with an Enable option. To enable the credential, click Enable.
- Delete: Delete the credential.
- Test: Select this to test the credential. See Adding host login credentials and To test host login credentials for more information.
- Move to top: Moves the credential to the top of the list.
- Move to bottom: Moves the credential to the bottom of the list.
The following address types cannot be specified:
- IPv6 link local addresses (prefix fe80::/64)
- IPv6 multicast addresses (prefix ff00::/8)
- IPv4 multicast addresses (224.0.0.0 to 239.255.255.255)
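The exclusion rules above can be checked with Python's standard ipaddress module. This is a minimal sketch of the validation, not the product's actual implementation; the function name is hypothetical, and the fe80::/64 prefix follows the text above.

```python
import ipaddress

# Address ranges the documentation says cannot be specified
EXCLUDED = [
    ipaddress.ip_network("fe80::/64"),    # IPv6 link-local (per the list above)
    ipaddress.ip_network("ff00::/8"),     # IPv6 multicast
    ipaddress.ip_network("224.0.0.0/4"),  # IPv4 multicast: 224.0.0.0-239.255.255.255
]

def is_specifiable(address: str) -> bool:
    """Return False for address types that cannot be entered as a range."""
    addr = ipaddress.ip_address(address)
    # Membership tests across IP versions simply return False, so mixing
    # IPv4 and IPv6 networks in one list is safe here.
    return not any(addr in net for net in EXCLUDED)
```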
As you enter text, the user interface (UI) divides it into pills (discrete editable units) when you enter a space or a comma. According to the text entered, the pill is formatted to represent one of the previous types, or is presented as invalid. Invalid pills are labelled with a question mark. You can also paste a list of IP addresses or ranges into this field, though there is no paste option on the context sensitive (right click) menu. In Firefox, do not paste a comma-separated list of IP address information into the Range field, as this may crash the browser; a space separated list can be pasted without any problems.
To edit a pill, click the pill body and edit the text. To delete a pill, click the X icon to the right of the pill, or click to edit and delete all of the text. To view the unformatted source text, click the source toggle switch. The source view is useful for copying to a text editor or spreadsheet. Click the source toggle switch again to see the formatted pill view. You can also sort the pill view by value or type. Value sorts by ascending numerical value. Type sorts by type, placing invalid pills first. Underneath the entry field is a filter box. Enter text in the filter box to only show matching pills.
Pills are not currently supported in Opera.

Enabled: A checkbox to define whether or not the credential is enabled. You can also enable or disable existing credentials on the Credentials page.
Username: The user name used to log in to hosts identified by the key. If this is a Windows credential that will be used by a pre-8.2 Windows credential proxy, ensure that you add a localhost prefix to the user name (for example, localhost\Administrator).

Password: Enter the password into the password entry field; the password text is not echoed to the screen. In the Edit Login Credential page, this field is displayed as Set Password. The existing password is shown as a series of asterisks and cannot be edited. To enter a new password, select the check box; the password entry field is cleared, and you can enter the new password.
Description: A free-text description of this login credential.

Access Methods: Choose the access methods to be attempted for any host identified by the key by selecting them and moving them to the right-hand (enabled) list box using the right arrow button. By default, all access methods are placed in this box, that is, they are all enabled. You can also change the order in which the access methods are attempted by selecting them and moving them up or down with the up or down arrow buttons.

Session Logging: If you want to create a session log, select Enabled. This logs all communication between the BMC Atrium Discovery appliance and a host and should only be used for diagnosing discovery problems with that host. There is currently no option for recording a session log for Windows hosts.

Prompt: A regular expression to define the valid prompt characters expected.

SU: To use the su command to change to the root or any other user, select Switch User. Enter the user to change to, and the corresponding password. The password text is not echoed to the screen.

Timeout: Enter a timeout period (in seconds) for a session. This timeout includes the credential handshaking (see also the Session Login Timeout). This timeout is used to control sessions; the default is 180 seconds. In general, it is not used to limit the time to scan devices. Note that more than one session can be used to scan one device, so a scan can take more time than this timeout. A typical consequence of this timeout (for example, when the execution of the platform script for getInterfaceList takes longer than this timeout) is that the scan fails with a script failure (error message Connection timed out).

Force Subshell: To force the session to open a Bourne (/bin/sh) subshell if the default login shell is a C shell (/bin/csh or /bin/tcsh), select Yes. This enables you to cater for machines using non-standard shells.

SSH Key: Specify an existing SSH key which you already have deployed in your organization. Click Browse to locate the private key and click Open to select it. Enter the passphrase in the passphrase field. When you click Apply to save the credential, the key and passphrase are validated. When you upload the private key to the appliance, it is strongly recommended that you protect the key with a passphrase. See SSH keys below for more detailed information.

Key or Password: To use an SSH key or password, select Key or Password. If you have not configured an SSH key, Key is dimmed.

Custom SSH Port: If the host for which this credential is intended is configured to listen for SSH connections on a non-standard port, enter it here. To do this, select the Enable custom ssh port? check box and enter the port number in the entry field. If you add a port here, it is automatically added to the TCP ports to use for initial scan. For more information, see TCP and UDP ports to use for initial scan.

Custom Telnet Port: If the host for which this credential is intended is configured to listen for Telnet connections on a non-standard port, enter it here. To do this, select the Enable custom telnet port? check box and enter the port number in the entry field. If you add a port here, it is automatically added to the TCP ports to use for initial scan. For more information, see TCP and UDP ports to use for initial scan.
3. To add the credentials, click Apply.
4. Repeat Steps 1 through 3 for all the credentials you want to add.
After the key is stored in the credential vault, it is encrypted and cannot be recovered from the vault. It is strongly recommended that you keep copies of private keys in secure storage according to your local security guidelines.
Windows proxy
The Windows proxy scans Windows hosts on behalf of the discovery service on the BMC Atrium Discovery appliance.
Proxy changes in 8.3 SP2 and later versions In BMC Atrium Discovery 8.3 SP2 and later versions, installation and management of proxies has been improved with the introduction of a new Windows proxy manager tool. It is also possible to install multiple proxies of each type on a single host. For information about the changes, see Windows proxy manager.
You can download the Windows proxies and Windows proxy manager as installation files from the appliance and install them onto the local Windows host. For more information, see Installing Windows proxies. Windows discovery is handled in one of the following ways:
- Credential Windows proxy: A BMC Atrium Discovery service that runs on a customer-provided Windows host. To perform discovery, it uses credentials supplied by the BMC Atrium Discovery appliance from the credentials vault.
- Active Directory Windows proxy: A BMC Atrium Discovery service that runs on a customer-provided Windows host. To perform discovery, it logs in as an Active Directory user. While installing the proxy, you must configure it as a user on the Active Directory domain that can log in and run discovery commands on the hosts to discover. The Active Directory proxies do not use the credentials supplied by the BMC Atrium Discovery appliance from the credentials vault.

When you install the Active Directory Windows proxy (as the Windows domain administrator), the appliance uses it to discover the Windows hosts in that domain. The proxy can only discover Windows hosts on the domain it is a member of, or other domains trusted by that domain. To discover domains which are not trusted, you must configure another Windows proxy with the appropriate domain permissions.
From BMC Atrium Discovery version 8.2, the Workgroup Windows proxy is no longer supplied with the appliance. All of its functionality has been moved into the Active Directory Windows proxy.
Potential user lock out By default, AD accounts have a limited number of login attempts, for example three attempts in fifteen minutes. Access Denied errors from WMI, DCOM, and local commands such as systeminfo are counted as unsuccessful login attempts. Where target hosts are incorrectly configured, this limit can be exceeded and the account locked out. To avoid this, configure the Discovery account to accept unlimited login attempts.
Windows proxies and firewalls Windows proxies are no longer restricted to the default ports of previous releases. For a new installation or an upgrade, the default ports are used; installations of additional proxies use incremental ports. You must modify the proxy host firewall, and any other firewalls between the proxy host and the appliance, to permit communication on the necessary ports.
Permissions required
You can download the Windows proxies and Windows proxy manager as installation files from the appliance and install them onto the local Windows host. To install Windows proxies:
- You must be logged in as an administrator. If the software is not installed as this user, you need to grant permissions to write to C:\Program Files\BMC Software\ADDM Proxy.
- The user that runs the Windows proxy must have the necessary permissions to read from and write to the etc, log, and record directories.
- As a user on the appliance, you must have been granted the admin/software/slave/download permission to download the Windows proxy installers.
getFileSystems
The getFileSystems discovery method was introduced with BMC Atrium Discovery version 8.2. Windows proxies earlier than version 8.2 do not support this method, although they are supported in all other respects.
To avoid any impact during resource-intensive periods of discovery, it is strongly recommended that you do not install the Windows proxy on any host supporting other business services. This is true even if the minimum Windows proxy specification is exceeded, because the Windows proxy attempts to use whatever resources are available in order to optimize scan throughput.
failed to run, including information about what credential or Windows proxy was used.
To install the Windows proxy manager and Windows proxies:
1. Run the installer by double-clicking the downloaded installer file. A welcome screen is displayed.
2. Click Next.
3. Click Browse... to select an installation directory, or click Next to accept the default installation directory (C:\Program Files\BMC Software\ADDM Proxy). The bottom of the Select Destination Location screen displays the minimum free disk space required in MB.
4. To create the Windows proxy application's shortcuts, click Browse to select a different folder, or click Next to accept the default folder (BMC Software\ADDM Proxy). If you choose Don't create a Start Menu Folder here, ensure that you clear all the start menu option check boxes in the next step.
5. On the Select Additional Tasks screen, select the options that will be available in the Start menu, and then click Next.
6. To install an Active Directory Proxy, select the Install Active Directory Proxy check box.
   a. Enter the credentials for the user account that will run the Windows proxy. If you do not enter the credentials at this point, you can do so later; see Specifying the Account Used to Run the Windows proxy. The Windows proxy will run as the Local System user if credentials are not entered. However, an Active Directory Proxy running as a Local System user will not have the necessary domain credentials to perform any discovery.
   b. Click Next.
7. To install a Credential Proxy, select the Install Credential Proxy check box.
   a. Enter the credentials for the user account that will run the Windows proxy. You must prefix the user name with localhost (for example, localhost\Administrator). If you do not enter the credentials at this point, you can do so later. The Windows proxy will run as the Local System user if credentials are not entered.
Credential Windows proxy user
Do not run the Credential Windows proxy as the Local System user; run it as a valid user account, which should be in the Administrators group.
   b. Click Next.
8. Review the details in the Ready to Install window. If the details are incorrect, click Back and navigate through the installer to correct the error. If they are correct, click Install to install the selected components.
9. The Completing the BMC Atrium Discovery Proxy Setup Wizard screen is displayed.
   a. (If you have already installed the Active Directory proxies) To register the proxies with the appliance, select Register Active Directory Proxy with ADDM Appliance.
   b. (If you have already installed the Credential proxies) To register the proxies with the appliance, select Register Credential Proxy with ADDM Appliance.
   c. To run the Windows proxy manager immediately after installation, select Run Proxy Manager. The BMC Atrium Discovery UI Create Windows proxy page, pre-populated with details of this Windows proxy, is displayed when this part of the setup is complete. You may see a dialog box regarding File Download; accept this to go to the pre-populated Create Windows proxy page.
10. To exit the installer, click Finish.
Service startup failure
Sometimes Windows may refuse the installer permission to start the Windows proxy service, resulting in a dialog box along the lines of "service installed but could not be started". This is remedied by manually supplying the credentials directly to the service using the Windows Services control panel. See Specifying the Account Used to Run the Windows proxy.
For information on post installation settings and modifications that may be required for Windows proxies, see Post installation settings.
You can also enter the startup options described below in the registry key appropriate for the Windows proxy type and host architecture.

On a 32 bit system this is one of:
- Active Directory Windows proxy: HKEY_LOCAL_MACHINE\SOFTWARE\BMC Software\Atrium Discovery\ADPROXY\CommandLine
- Credential Windows proxy: HKEY_LOCAL_MACHINE\SOFTWARE\BMC Software\Atrium Discovery\PROXY\CommandLine

On a 64 bit system this is one of:
- Active Directory Windows proxy: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\BMC Software\Atrium Discovery\ADPROXY\CommandLine
- Credential Windows proxy: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\BMC Software\Atrium Discovery\PROXY\CommandLine

You can configure the Windows proxy to automatically purge its log and record data directories. The default behavior is not to purge.

- --auto-purge-all: Specifies that log and record data directories will be purged. Set via UI.
- --auto-purge-logs: Specifies that only log directories will be purged. Set via UI.
- --auto-purge-record: Specifies that only record data directories will be purged. Set via UI.
- --auto-purge-max-data-age value: Specify an age above which data is automatically purged. This is set in days and the default is seven. Set via UI.
- --auto-purge-period value: The frequency at which the automatic purge occurs. This is set in hours and the default is 24 (daily). Set via UI.
- --log-soft-limit value: A size limit (in MB) for the log directories. If this limit is exceeded, the oldest records will be deleted. The default behavior is not to specify a limit (zero).
- --record-soft-limit value: A size limit (in MB) for the record data directories. If this limit is exceeded, the oldest data will be deleted. The default behavior is not to specify a limit (zero).
- --enable-config-upload / --disable-config-upload: Enable or disable uploading configuration, overriding the setting specified in the configuration file.
- --config-file-limit value: The number of backup configuration files to keep. The default is none. If this is exceeded, the oldest file is deleted.
- --conf <config file>: Specify a configuration file to use.
- --openports / --no-openports: Enable or disable OpenPorts, overriding the setting specified in the configuration file.
- --tcpvcon / --no-tcpvcon: Enable or disable Tcpvcon, overriding the setting specified in the configuration file.
- --dont-resolve-hostnames: The getInfo method retrieves patch, device, and host information. If no hostname is found, a reverse DNS lookup is performed to determine the hostname. Specify --dont-resolve-hostnames to prevent this.
- --remquery / --no-remquery: Enable or disable RemQuery, overriding the setting specified in the configuration file. Set via UI.
- --remquery-timeout value: Specify a timeout value (in seconds) for RemQuery calls. The default is 60 seconds. Set via UI.
- --wmi / --no-wmi: Enable or disable WMI, overriding the setting specified in the configuration file. Set via UI.
- --wmi-timeout value: Specify a timeout value (in seconds) for WMI queries. The default is 120 seconds. Set via UI.
Replace DOMAIN with the domain name (for example, TIDEWAY), username with the user name (for example, discovery), and TARGET with the resolvable hostname or IP address.
Proxy username/password and upgrading During the upgrade process you need to enter the Active Directory credentials. Usernames are preserved during the upgrade, but passwords are not.
Silent installation
The Windows proxy manager and proxy installer uses Inno Setup, which provides silent installation capabilities at the command line. To invoke the installer at the command line:
1. Using a command prompt, change directory to the directory into which you downloaded the installer file. Enter:
C:\>cd "Documents and Settings\username\My Documents\Download\"
2. Run the installer using the Inno Setup options and the additional Windows proxy manager installer options. Enter:
addmproxy_installer_8.3.2_xxxxxx.exe
The Inno Setup options are described on their website. Additional Windows proxy manager and proxy installer options are described below:
- /ADCREATE=Y|N: Create an AD proxy during the install. The default is Y, that is, create an AD proxy.
- /CREDCREATE=Y|N: Create a Credential proxy during the install. The default is N, that is, do not create a Credential proxy.
Further options set the username with which to run an AD proxy, the corresponding password, the username to run a Credential proxy, and its corresponding password; the default for each is "".
Managing proxies
You can perform the following proxy management tasks from the Windows proxy manager:
- Create (install a new proxy service)
- Edit the port and the user account that the proxy uses
- Delete (uninstall a proxy service)
- View the log directory for a proxy
- Start a selected proxy
- Stop a selected proxy
- Restart a selected proxy
These are described in the following table:

Create a new proxy: Click the Create button, or choose Proxy > Create from the menu.
1. Enter the following information in the create proxy dialog:
   - Name: A name for the proxy.
   - Type: Select either Active Directory or Credential.
   - Port: Enter a port if you need to use a specific unused port. Otherwise, use the port which is automatically populated.
   - Log on as: Select either Local System or This account. If you select Local System, the Windows proxy will run as the Local System user. An Active Directory Proxy running as a Local System user will not have the necessary domain credentials to perform any discovery. If you select This account, you must enter the domain and username for the user account that will run the Windows proxy in the domain\username format, and the corresponding password.
   - Options: Select the check box to run the proxy immediately after installation.
   - Register With: Select the appliance with which to register the proxy.
2. To create the new proxy, click OK. You are prompted to log in to the Create Windows proxy page of the UI with pre-populated proxy details.
Proxy registration using IPv6 When registering a proxy with an appliance using IPv6, the proxy IP address that is automatically populated in the Create Windows Proxy page may be a temporary address. The appliance can communicate while the temporary address is valid, but when no longer valid, communication is not possible. This behavior has only been observed in Internet Explorer.
Edit the selected proxy:
1. Click the Edit button. You can edit the following:
   - Port that the proxy uses
   - User account that the proxy runs as
2. When prompted, confirm the changes by clicking OK. Changing the user account automatically updates the file permissions and restarts the proxy service.

Delete the selected proxy:
1. Click the Delete button.
2. When prompted, click Yes.

Open Log Directory: Click the Open Log Directory button. The log directory for the currently selected proxy is displayed in an Explorer window (an example pathname is C:\Program Files\BMC Software\ADDM Proxy\runtime\AD4\log).

Start: Click the start button. When the proxy starts, a green check icon is displayed next to its name.

Stop: Click the stop button. When the proxy stops, a red cross icon is displayed next to its name.

Restart: Click the restart button to stop and start the proxy.

Refresh proxy list: Click the refresh button to refresh the list. Sometimes the list does not pick up changes in the status of services, so it is good practice to refresh the list before starting or stopping a proxy.
Editing proxy configuration files is only undertaken through the main user interface. You cannot edit the proxy configuration file locally; it is handled centrally from the appliance.
following tasks:
- View the known appliances
- Add new known appliances
- Edit known appliances
- Delete known appliances

These are described in the following table:

View the known appliances: The Known Appliances dialog displays the available known appliances.

Add new known appliances:
1. From the Known Appliances dialog, click the Add a new Appliance address button.
2. Enter the following information in the Add New Appliance dialog:
   - Appliance Name: The name of the new appliance to be added.
   - IP Address: IP address of the new appliance to be added.
3. To add the new known appliance, click OK.

Edit known appliances:
1. From the Known Appliances dialog, click the appliance to edit.
2. Click the Edit the selected Appliance address button. You can edit the following:
   - Name of the known appliance
   - IP address of the known appliance
3. To save the changes, click OK.

Delete known appliances:
1. From the Known Appliances dialog, click the appliance to delete.
2. Click the Delete the selected Appliance address button.
3. When prompted, confirm the action by clicking Yes.
Additional tasks
You can perform the following additional tasks from the Windows proxy manager:
- Refresh Windows proxy list (View > Refresh): Refreshes the Windows proxy list displayed on the Windows proxy manager.
- Open the Service Control Manager (View > Service Control Manager): Opens the Service Control Manager (SCM).
- View documentation (Help > Documentation): Displays the Windows proxy manager online documentation.
- View Windows proxy manager version (Help > About): Displays the version of the Windows proxy manager.
- Exit the Windows proxy manager (Proxy > Exit): Exits the Windows proxy manager.
The Windows proxy manager enables you to perform only the tasks listed on this page. You can perform tasks like adding a Windows proxy to a pool, or configuring advanced proxy settings (for example, configuring the log level, recording mode, discovery methods, purging, and so on) only from the appliance's UI. To learn how to manage the Windows proxies from the appliance's UI, see Managing Windows proxies.
Based on the proxy version and the version of the Operating System (OS) the proxy runs on, the pool capability is one of the following:
- Fully IPv6 capable: Can scan IPv6 addresses and retrieve IPv6 data. Applies where BMC Atrium Discovery version 9.0 or later proxies are running on Windows 2008 or later.
- Cannot scan IPv6 addresses: Can retrieve IPv6 data, but the Windows version does not support scanning IPv6 addresses. Applies where BMC Atrium Discovery version 9.0 or later proxies are running on versions of Windows older than Windows 2008.
- Not IPv6 capable: Cannot scan IPv6 addresses and cannot retrieve IPv6 data. Applies to BMC Atrium Discovery proxies from versions older than 9.0.

It is possible to have proxies of different versions (BMC Atrium Discovery version 9.0 or earlier versions) in a single proxy pool, though this is not recommended. However, if a proxy does not have all the capabilities of the pool, it is disabled. For example, if you add a pre-version 9.0 proxy to an IPv6 capable pool, it is disabled. You can re-enable it once the proxy is upgraded. This enables you to upgrade all the proxies in a pool over a period of time rather than having to upgrade them all at the same time. The Windows proxy management page can contain multiple Windows proxy pools. For more information about the Windows proxy pool list, see Viewing Windows proxy pools. The minimum version and release options can be configured in the Discovery configuration settings.
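The capability rules above reduce to two checks: the proxy release and the host OS generation. The sketch below is an illustration of that decision only; the float version encoding and the boolean OS flag are simplifying assumptions, not the product's data model.

```python
def pool_capability(proxy_version: float, windows_2008_or_later: bool) -> str:
    """Classify a proxy's IPv6 capability per the rules described above."""
    if proxy_version < 9.0:
        # Pre-9.0 proxies can neither scan IPv6 addresses nor return IPv6 data
        return "Not IPv6 capable"
    if not windows_2008_or_later:
        # 9.0+ proxy, but the OS cannot scan IPv6 addresses
        return "Cannot scan IPv6 addresses"
    return "Fully IPv6 capable"
```

A pool takes the capability of its most capable members, which is why adding a pre-9.0 proxy to a fully IPv6 capable pool disables that proxy rather than downgrading the pool.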
Matching Criteria
Select Match All to match all endpoints. Deselect it to enter values that will be used to determine whether this credential is suitable for a particular endpoint. They can be one or more of the following, separated by commas:
- IPv4 address: for example 192.168.1.100.
- IPv4 range: for example 192.168.1.100-105, 192.168.1.100/24, or 192.168.1.*.
- IPv6 address: for example fda8:7554:2721:a8b3::3.
- IPv6 network prefix: for example fda8:7554:2721:a8b3::/64.
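The accepted formats above can be sketched as a small classifier using Python's standard ipaddress module. This is a simplified assumption about how the UI parses pills, not the actual implementation; in particular the wildcard and dash-range handling here is a minimal approximation.

```python
import ipaddress
import re

def classify_pill(text: str) -> str:
    """Classify one matching-criteria entry into the formats listed above."""
    try:
        addr = ipaddress.ip_address(text)  # plain address, v4 or v6
        return "IPv6 address" if addr.version == 6 else "IPv4 address"
    except ValueError:
        pass
    try:
        # CIDR forms such as 192.168.1.100/24 or fda8:7554:2721:a8b3::/64;
        # strict=False accepts host bits set in the prefix
        net = ipaddress.ip_network(text, strict=False)
        return "IPv6 network prefix" if net.version == 6 else "IPv4 range"
    except ValueError:
        pass
    if re.fullmatch(r"(\d{1,3}\.){3}\*", text):               # e.g. 192.168.1.*
        return "IPv4 range"
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}-\d{1,3}", text):  # e.g. 192.168.1.100-105
        return "IPv4 range"
    return "invalid"
```

Entries that classify as "invalid" correspond to the question-mark pills described below.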
The following address types cannot be specified:
- IPv6 link local addresses (prefix fe80::/64)
- IPv6 multicast addresses (prefix ff00::/8)
- IPv4 multicast addresses (224.0.0.0 to 239.255.255.255)
As you enter text, the user interface (UI) divides it into pills (discrete editable units) when you enter a space or a comma. According to the text entered, the pill is formatted to represent one of the previous types, or is presented as invalid. Invalid pills are labelled with a question mark. You can also paste a list of IP addresses or ranges into this field, though there is no paste option on the context sensitive (right click) menu. In Firefox, do not paste a comma-separated list of IP address information into the Range field, as this may crash the browser; a space separated list can be pasted without any problems.
To edit a pill, click the pill body and edit the text. To delete a pill, click the X icon to the right of the pill, or click to edit and delete all of the text. To view the unformatted source text, click the source toggle switch. The source view is useful for copying to a text editor or spreadsheet. Click the source toggle switch again to see the formatted pill view. You can also sort the pill view by value or type. Value sorts by ascending numerical value. Type sorts by type, placing invalid pills first. Underneath the entry field is a filter box. Enter text in the filter box to only show matching pills.
Pills are not currently supported in Opera.

Type: Select the type of the Windows proxy pool. A Windows proxy pool can contain only one type of Windows proxy. The available types are:
- Windows Active Directory Proxies
- Windows Credential Proxies
- Windows Workgroup Proxies

The Workgroup Windows proxy has not been supplied since BMC Atrium Discovery version 8.2; all of its functionality has been moved into the Active Directory Windows proxy.
Domains: This field is enabled only for Active Directory proxy pools. The domain, or a space-separated list of domains, that the Windows proxy pool will discover. This specifies that the pool is preferred for the discovery of these domains.

Description: A free text description of the Windows proxy pool.
3. Click Apply. The Windows proxy pool is added to the appliance. You can add multiple Windows proxy pools to the appliance.
You can change the order of the Windows proxy pools in the following ways:
- Using the colored margin at the left of the row, drag and drop the row to the required position in the user interface.
- Click the Actions list for the Windows proxy pool and select Move up or Move down.
- To move a Windows proxy pool to the top or bottom of the list, click the Actions list and select Move to top or Move to bottom.
Reordering of Windows proxies in a Windows proxy pool is not supported, as the proxy used is selected automatically.
For each Windows proxy pool, the following fields are displayed:

Name: The name of the Windows proxy pool.

Type: The type of the Windows proxy pool (for example, Active Directory).

State: Based on the version of the proxies you have added to a pool and the version of the Operating System (OS) the proxies run on, a proxy pool displays one of the following states in brackets next to the Type field:
- Fully IPv6 capable: Contains proxies of version 9.0 or later running on Windows 2008 or a later Operating System.
- Cannot scan IPv6 addresses: Contains proxies of version 9.0 or later running on an Operating System earlier than Windows 2008.
- Not IPv6 capable: Contains proxies of versions earlier than 9.0 running on an Operating System earlier than Windows 2008.

From a Fully IPv6 capable pool, if you delete the latest version proxies and add proxies of a version earlier than 9.0, the pool state (Fully IPv6 capable) does not change. However, the newly added proxies in the pool are disabled.
Domains: This field is available for Active Directory Windows proxy pools only. It displays the applicable domains.

Description: A free text description of the proxy pool.

Actions: A list with the following options:
- Add Windows proxy: Enables you to add Windows proxies to the Windows proxy pool. For more information, see Managing Windows proxies.
- Manage: Opens the Edit Windows Proxy Pool page. For more information, see Editing Windows proxy pools.
- Delete: Enables you to delete the Windows proxy pool. If you delete a Windows proxy pool, the Windows proxies in the pool are also deleted.
- Move to top: Moves the Windows proxy pool to the top of the list.
- Move to bottom: Moves the Windows proxy pool to the bottom of the list.
The page contains two sections: Edit Windows Proxy Pool and Pool contents.
2. The Edit Windows Proxy Pool section enables you to change the IP address range and domains associated with the Windows proxy pool as follows:

Matching Criteria: To enable the proxies to scan all the IP addresses associated with the pool, select Match All. By default, Match All is selected. To enable the proxies to scan specific IP addresses or an IP range, clear Match All and enter the IP address information in one of the following formats:
- IPv4 address (for example 192.168.1.100).
- IPv6 address (for example 2001:500:100:1187:203:baff:fe44:91a0).
- IPv4 range (for example 192.168.1.100-105, 192.168.1.100/24, or 192.168.1.*).
- IPv6 network prefix (for example fda8:7554:2721:a8b3::/64).

As you enter text, the user interface (UI) divides it into pills (discrete editable units) when you enter a space or a comma. According to the text entered, the pill is formatted to represent one of the previous types, or is presented as invalid. Invalid pills are labelled with a question mark. You can also paste a list of IP addresses or ranges into this field, though there is no paste option on the context sensitive (right click) menu. In Firefox, do not paste a comma-separated list of IP address information into the Range field, as this may crash the browser; a space separated list can be pasted without any problems.
Type: Displays the type of the Windows proxy pool. The field is enabled only for Active Directory proxy pools and is dimmed for Credential or Workgroup proxy pools.
Domains: The domain, or a space-separated list of domains, that the Windows proxy pool will discover. This specifies that the pool is preferred for the discovery of these domains.
Description: A free text description of the proxy pool.
3. To apply any changes that you have made in the Edit Windows Proxy Pool section, click Apply.
4. The Pool contents section displays the following information about the contents of the Windows proxy pool:
Name: Name of the Windows proxy.
Status: Displays whether the proxy is active or not, and the IPv6 capability of the proxy. The IPv6 capability is displayed as one of the following:
Can get IPv6 data and scan IPv6 addresses: BMC Atrium Discovery version 9.0 or later proxies that run on Windows 2008 or later.
Can get IPv6 data but Windows version does not support scanning IPv6 addresses: BMC Atrium Discovery version 9.0 or later proxies that run on a version of Windows earlier than Windows 2008.
IPv6 not supported: Proxies earlier than BMC Atrium Discovery version 9.0.
Version: Displays the version number of the Windows proxy.
Actions: To delete the Windows proxy, click Delete. To ping the Windows proxy, click Ping.
If you want to add a new Windows proxy to this pool, click Add. For more information about adding a Windows proxy to a pool, see Managing Windows proxies.
The Windows proxy manager is a simple tool which enables you to install and manage proxies on the Windows host on which the manager is installed. You can perform the following tasks from the Windows proxy manager: Install a new proxy service Register a proxy with an appliance Edit the port that the proxy uses and the user account that the proxy runs as Uninstall a proxy service Start a selected proxy Stop a selected proxy Restart a selected proxy The Windows proxy manager is installed when you install a proxy. For more information, see Windows proxy manager.
Proxy Address: The address of the Windows proxy. This may be specified as one of the following:
Hostname or FQDN
IPv4 or IPv6 address
Port: The port on which to communicate with the Windows proxy. For an AD Windows proxy, the default port number is 4321. For a Workgroup Windows proxy, the default port number is 4322. For a Credential Windows proxy, the default port number is 4323.
Enabled: To enable the Windows proxy, select the check box.
When you install a Windows proxy from the Windows proxy manager, the Proxy Address and Port fields are pre-populated and Enabled is selected by default.
Version Notes
Usage
Actions
A list with the following options:
Manage: Opens the Manage Windows proxy page. See Editing and managing Windows proxies.
Disable or Enable: Enables or disables the proxy.
Ping: Pings the Windows proxy to ensure that it can be contacted. This is available only if the proxy is enabled. On a successful ping, the information is refreshed. If the ping is unsuccessful, no version number is displayed.
Test: Tests connectivity between the Windows proxy, if it is enabled, and a supplied IP address. See Testing connectivity.
Delete: Deletes the entry.
Testing connectivity
To test the connectivity between a Windows proxy and a supplied IP address: 1. On the Windows proxy management page, select Actions > Test for the Windows proxy. 2. Enter the IP address to use for the test and click Test. The credential tests page is displayed with the test in progress. For more information about the credential tests page, see Testing credentials.
General Details
This section lists the details used to create the Windows proxy. You can manage the following details about the Windows proxy:
Proxy Pool: The Windows proxy pool for the Windows proxy. To change the Windows proxy pool for the Windows proxy, select from the corresponding proxy pool list. The list is populated only with pools that contain the same proxy type.
Proxy Address: The Windows proxy IP address. To change the current IP address, enter the new IP address in the IP Address field.
Port: The port number associated with the Windows proxy. To change the current port number, enter the new port number in the Port field.
Enabled: Displays whether the Windows proxy is enabled or not. Select the check box to enable the Windows proxy, or clear it to disable it.
If you have modified any of the general details and want to apply them, click Apply General Details.
To reset the current general details, click Reset General Details.
Proxy Settings
You can manage the following settings of the Windows proxy:
Log Level: You can change the log level of the Windows proxy at runtime here, in the same way as you can change the service log levels from the main Logging page in BMC Atrium Discovery. For more details, see Changing log levels at runtime. The Log Level list has the following levels:
Debug: Fine-grained informational events that are most useful for debugging an application.
Info: Informational messages that highlight the progress of the application at a coarse-grained level.
Warning: Potentially harmful situations.
Error: Error events that might still allow the application to continue running.
Critical: Severe error events that may cause the application to abort.
Recording Mode: (8.3 and later proxies only) Displays the recording mode of the Windows proxy. The recording mode can be Record, Playback, or None.
Discovery Methods: (8.3 and later proxies only) Select the discovery methods and their timeouts for the Windows proxy. The available methods are WMI and RemQuery. If you disable RemQuery, network connection information is not discovered, and consequently CAM is unable to model communication.
Overload Notification:
If you select this option, the Windows proxy informs the main Discovery service when it is overloaded (that is, when the Windows proxy has a large number of requests in progress). In such a situation, Discovery tries another Windows proxy from the same Windows proxy pool, if possible. If you do not select this option, Discovery waits for all the requests in progress to complete, which can take a long time if the Windows proxy is overloaded.
Displays the disk usage of the log directory.
Enables you to automatically purge stored log files and record data after a specified time period. A selection list enables you to select the purge period. After you have selected the time period, select the Log Files check box, the Record Data check box, or both, and click Apply. The selected data is then automatically deleted after the specified duration. To purge data immediately, select the data to delete (the Log Files check box, the Record Data check box, or both) and click Purge Files Now.
If you have modified any of the proxy settings and want to apply them, click Apply Proxy Settings. To reset the current proxy settings, click Reset Proxy Settings.
Proxy Information
The following information about the proxy is displayed:
Operating System: The name and version of the Windows operating system.
Processor(s): Displays the number of processors and the processor type.
Memory: Displays the memory of the appliance and the percentage of memory used.
Disk Free: The available disk space in MB.
Notes: Based on the proxy version, the OS version on which the proxy runs, and IPv6 capability, one of the following is displayed:
Can get IPv6 data and scan IPv6 addresses: The proxy version is 9.0 or later and runs on Windows 2008 or later.
Can get IPv6 data but Windows version does not support scanning IPv6 addresses: The proxy version is 9.0 or later and runs on a version of Windows earlier than Windows 2008.
IPv6 not supported: The proxy version is earlier than 9.0.
IPv6 functionalities are not enabled until the proxies of versions earlier than 9.0 are upgraded to 9.0 or later, or are removed.
Proxy Configuration
You can view, download, and apply the Windows proxy configuration data from this section.
Retrieve Current Configuration: To view the current Windows proxy configuration in XML format, click View winproxy.conf. The current Windows proxy configuration is displayed in XML format. To download the Windows proxy configuration in XML format, click (download). You can either open the configuration file with an XML editor, or save the file and open it later. To apply a new Windows proxy configuration file:
1. Click Choose File and navigate to the new Windows proxy configuration XML file.
2. Select the file in the File Upload box and click Open.
3. Click Upload Proxy Configuration.
Proxy port changes in 8.3 SP2: In BMC Atrium Discovery 8.3 SP2, the default port of a proxy can be changed using the Windows proxy manager.
To change the default port (4321) for an Active Directory Windows proxy: 1. Change the registry to configure the proxy to listen to a different port based on your computer type: a. For 32-bit computers: HKEY_LOCAL_MACHINE\SOFTWARE\BMC Software\Atrium Discovery\ADProxy\CommandLine b. For 64-bit computers: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\BMC Software\Atrium Discovery\ADProxy\CommandLine 2. Edit the CommandLine value by changing '--port 4321' to the required port.
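The edit in step 2 is a plain text substitution on the CommandLine value. The sketch below shows only the string manipulation (the hypothetical helper `set_proxy_port` does not touch the registry; you still apply the result with regedit as described above):

```python
import re

def set_proxy_port(command_line, new_port):
    """Rewrite the --port argument in an ADProxy CommandLine value.

    Illustrative only: replaces an existing '--port NNNN' flag, or
    appends one if the flag is absent.
    """
    updated, count = re.subn(r"--port\s+\d+", "--port %d" % new_port, command_line)
    if count == 0:
        # No existing --port flag: append one.
        updated = command_line + " --port %d" % new_port
    return updated
```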
Note: If a Windows proxy is not running as either the Local System account or as a member of the Administrators group, tw_terminate_winproxy does not stop the Windows proxy. The following error is logged in the Windows proxy log file:
ERROR: Failed to terminate slave service: [(5, 'OpenSCManager', 'Access is denied.')]
Workaround: Allow the user account that the Windows proxy runs as to stop the service. This is documented on the Microsoft Support site.
For more information about the utility and the command line options, see tw_terminate_winproxy.
TCP/IP has reached the security limit imposed on the number of concurrent TCP connect attempts. This means there were more than 10 un-ACKed TCP SYNs in a second (normally from attempting to connect to an invalid address). The patched version of Windows XP interprets this as a potential virus and starts to queue connections. This can cause other network activity on the host to be very slow. More information about the warning can be found on the Microsoft Support Site. This feature has also been included in Windows Vista and Service Packs for Windows 2003 Server.
Adapters section
The adapters section begins with the <adapters> statement:
<adapters>
Search expression
The next statement is a regular expression to search for in the adapter key in the registry. This is specified in the <adapter> statement.
<!-- regex to find the adapter type eg. "^Intel(R) PRO/100" --> <adapter match="^Intel(R) PRO/100">
The search is performed on the hive "HKLM" and the key SYSTEM\CurrentControlSet\Control\Class\ {4D36E972-E325-11CE-BFC1-08002BE10318}
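The default registry key template (shown in the configuration listing later in this document) ends with a %(index)04d placeholder, which follows Python-style mapping substitution, so adapter subkeys are enumerated as zero-padded names such as 0000 and 0001. A minimal sketch, assuming that substitution style:

```python
# Default key template from the winproxy.conf adapter-key documentation.
KEY_TEMPLATE = (
    "SYSTEM\\CurrentControlSet\\Control\\Class\\"
    "{4D36E972-E325-11CE-BFC1-08002BE10318}\\%(index)04d"
)

def adapter_subkey(index):
    """Expand the %(index)04d placeholder into a zero-padded subkey path."""
    return KEY_TEMPLATE % {"index": index}
```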
Search parameter
You specify the parameter name which in this case is SpeedDuplex.
<adapter-key name="SpeedDuplex"/> <!-- name in registry key to search for info eg. SpeedDuplex -->
The names of some network cards may contain special characters (for example, *). For these cards, you must specify the actual attribute. For example:
<adapter-key name="SpeedDuplex" actual="*SpeedDuplex"/>
While defining the network card name, make sure that the adapter-key name matches the adapter-map name. In this example, it is SpeedDuplex. With other network cards, you may need to search for different parameter names in the registry key, for example:
<adapter match="^Realtek RTL8139 Family PCI Fast Ethernet NIC"> <adapter-key name="DuplexMode"/>
The SpeedDuplex parameter takes the following values:
0: autonegotiation
1: 10 Mbit/s, half duplex
2: 10 Mbit/s, full duplex
3: 100 Mbit/s, half duplex
4: 100 Mbit/s, full duplex
An <adapter-map ParameterName="value"> statement block is entered for each value expected for the parameter name. Contained in this statement are adapter-value statements with name-value pairs for, in this case, speed, duplex, and negotiation. For example:
<adapter-map SpeedDuplex="1"> <adapter-value name="speed" value="10"/> <adapter-value name="duplex" value="HALF"/> <adapter-value name="negotiation" value="FORCED"/> </adapter-map>
Example Mapping
An example for the Intel PRO/100 network card is shown below:
<adapter-key name="SpeedDuplex" actual="*SpeedDuplex"/> <adapter-map SpeedDuplex="0"> <adapter-value name="speed" value=""/> <adapter-value name="duplex" value=""/> <adapter-value name="negotiation" value="AUTO"/> </adapter-map> <adapter-map SpeedDuplex="1"> <adapter-value name="speed" value="10"/> <adapter-value name="duplex" value="HALF"/> <adapter-value name="negotiation" value="FORCED"/> </adapter-map> <adapter-map SpeedDuplex="2"> <adapter-value name="speed" value="10"/> <adapter-value name="duplex" value="FULL"/> <adapter-value name="negotiation" value="FORCED"/> </adapter-map> <adapter-map SpeedDuplex="3"> <adapter-value name="speed" value="100"/> <adapter-value name="duplex" value="HALF"/> <adapter-value name="negotiation" value="FORCED"/> </adapter-map> <adapter-map SpeedDuplex="4"> <adapter-value name="speed" value="100"/> <adapter-value name="duplex" value="FULL"/> <adapter-value name="negotiation" value="FORCED"/> </adapter-map> <adapter-map SpeedDuplex="6"> <adapter-value name="speed" value="1000"/> <adapter-value name="duplex" value="FULL"/> <adapter-value name="negotiation" value="AUTO"/>
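A mapping block like the one above can be read with a standard XML parser. The sketch below is illustrative only (`load_adapter_maps` is a hypothetical helper, and the fragment adds the closing tags the documentation snippet omits):

```python
import xml.etree.ElementTree as ET

# A well-formed fragment based on the mapping above.
FRAGMENT = """
<adapter match="^Intel(R) PRO/100">
  <adapter-key name="SpeedDuplex" actual="*SpeedDuplex"/>
  <adapter-map SpeedDuplex="1">
    <adapter-value name="speed" value="10"/>
    <adapter-value name="duplex" value="HALF"/>
    <adapter-value name="negotiation" value="FORCED"/>
  </adapter-map>
</adapter>
"""

def load_adapter_maps(xml_text):
    """Return {parameter value: {name: value}} for each <adapter-map>."""
    root = ET.fromstring(xml_text)
    key_name = root.find("adapter-key").get("name")  # e.g. "SpeedDuplex"
    maps = {}
    for amap in root.findall("adapter-map"):
        values = {v.get("name"): v.get("value")
                  for v in amap.findall("adapter-value")}
        maps[amap.get(key_name)] = values
    return maps
```

For the SpeedDuplex="1" entry above, this yields speed 10, duplex HALF, and negotiation FORCED.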
The Windows proxy is configured using a configuration file (winproxy.conf), which is located at:
C:\Program Files\BMC Software\ADDM Proxy\runtime\<Proxy_Name>\etc\winproxy.conf
where <Proxy_Name> is the type of the Windows proxy (AD Proxy or Credential). You can download winproxy.conf files from Windows proxies that are connected to your appliance, modify them, and then upload them to one or more Windows proxies, which simplifies the management of your Windows proxies. See Windows proxy configuration for more information. The file is divided into the following sections:
Access methods
Optional commands
Commands to run
Checksum
Access methods
This section is a list of access methods, in the order in which they should be attempted. In this section you can enable or disable an access method. For example:
<setting name="wmi" enabled="True"/>
or
<setting name="wmi" enabled="False"/>
The access methods available are: wmi - WMI commands. remquery - RemQuery commands.
Optional commands
This section lists the additional executables which are available on the Windows proxy. These are generally tools which are not standard on all versions of Windows, or are third-party supplied tools such as the PSINFO utility described in Windows operating systems. In this section you can enable or disable the optional commands. For example:
<setting name="tcpvcon" enabled="True"/>
or
<setting name="tcpvcon" enabled="False"/>
Commands to run
This section lists the commands and provides the actual command string that is executed. The commands are referred to by the setting name defined in the previous section. For example:
<command name="tcpvcon">tcpvcon -anc</command>
or
<command name="tcpvcon">tcpvcon -anc 160</command>
Checksum
In this section a checksum is written to ensure that the file has not been tampered with since being copied or uploaded from the appliance.
You can add the checksum without uploading the file using the tw_sign_winproxy_config utility. You can then copy the signed file to multiple appliances using ftp or similar.
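The signing step can be pictured as hashing the configuration body and embedding the digest, so that any later edit is detected on load. The sketch below illustrates only the tamper-detection idea; the actual format written by tw_sign_winproxy_config is not documented here, and the <checksum> element shown is hypothetical:

```python
import hashlib
import re

def sign(config_text):
    """Append a hypothetical <checksum> element holding the SHA-256 of the body.

    Illustrative only: the real tw_sign_winproxy_config format will differ.
    """
    digest = hashlib.sha256(config_text.encode("utf-8")).hexdigest()
    return config_text + "\n<checksum>%s</checksum>\n" % digest

def verify(signed_text):
    """Recompute the digest over the body and compare with the stored one."""
    m = re.search(r"\n<checksum>([0-9a-f]{64})</checksum>\n$", signed_text)
    if not m:
        return False
    body = signed_text[: m.start()]
    return hashlib.sha256(body.encode("utf-8")).hexdigest() == m.group(1)
```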
<slave>
    <!-- These settings control the various access methods -->
    <setting name="wmi" enabled="True"/>
    <setting name="remquery" enabled="True"/>

    <!-- These settings control the use of optional commands -->
    <setting name="wmi-query-creation" enabled="True"/>
    <setting name="openports" enabled="False"/>
    <setting name="tcpvcon" enabled="True"/>

    <!-- These are the commands that will be run -->
    <command name="systeminfo">echo begin wmic_values: && wmic bios get serialnumber 2>&1 & wmic csproduct get uuid 2>&1 & echo end wmic_values: && systeminfo /fo csv /nh</command>
    <command name="infocmds">hostname && ver</command>
    <command name="ipconfig">ipconfig /all</command>
    <command name="tasklist">tasklist /fo csv /nh /v</command>
    <command name="netstat">netstat -ano</command>
    <command name="netstat">netstat -an</command>
    <command name="openports">openports.exe -netstat</command>
    <command name="tcpvcon">tcpvcon.exe -anc</command>

    <!-- lputil commands for Emulex HBAs -->
    <command name="lputil-listhbas">lputil listhbas</command>
    <command name="lputil-count">lputil count</command>
    <!-- %s is replaced with the board index number: -->
    <command name="lputil-fwlist">lputil fwlist %s</command>

    <!-- hbacmd commands for Emulex HBAs -->
    <command name="hbacmd-listhbas">hbacmd listhbas</command>
    <command name="hbacmd-listhbas">"%ProgramFiles%\HBAnyware\hbacmd" listhbas</command>
    <!-- %s is replaced with the WWPN: -->
    <command name="hbacmd-hbaattr">hbacmd hbaattrib %s</command>
    <!-- the % character needs to be doubled up, since the string undergoes text substitution first -->
    <command name="hbacmd-hbaattr">"%%ProgramFiles%%\HBAnyware\hbacmd" hbaattrib %s</command>

    <adapters>
        <!-- <adapter> may have an optional key attribute (default: name) -->
        <adapter match="^(AMD PCNET Family|VMware Accelerated AMD PCNet)">
            <!--
            <adapter-key> supports several optional attributes:
              hive - Registry hive name (Default: "HKLM")
              key  - Registry key (Default: "SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\%(index)04d")
            -->
            <adapter-key name="EXTPHY"/>
            <adapter-map EXTPHY="0">
                <adapter-value name="speed" value=""/>
                <adapter-value name="duplex" value=""/>
                <adapter-value name="negotiation" value="AUTO"/>
            </adapter-map>
            ...
Discovery using SNMP is supported for hosts (see the Discovery Platforms page for a complete list) if only an SNMP credential is available for the host's IP address. However, SNMP only provides basic host information, running processes, network connections and installed packages. It does not support interrogating files, HBAs or running operating system commands. If a host is discovered using SNMP, Reasoning always checks to see whether a login credential is available for that host as discovered data is richer when a login is achieved. If a login credential is found and used successfully, the host node created using SNMP discovery is updated. In rare cases, duplicate nodes could be created when the host is subsequently discovered using a login credential (for example, this can happen when the IP configuration changes).
The SNMP credentials are checked in sequence, and the first matching entry is used. After a working SNMP credential is found, further credentials are not checked. To reorder SNMP credentials, drag the credential to the required position in the list. The SNMP credentials are shown in color-coded boxes. The colors represent the level of login success achieved with that credential: Green: 100% success rate. Yellow: partial success. Blue: the credential has never been used. Red: 0% success rate.
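The first-match rule described above amounts to an ordered scan of the credential list, which is why the ordering in the UI matters. A minimal sketch; the credential dictionary shape (`enabled`, `match_all`, `ranges` keys) is hypothetical:

```python
import ipaddress

def first_matching_credential(credentials, endpoint):
    """Return the first enabled credential whose range matches the endpoint.

    Mirrors the documented behaviour: credentials are tried in list order
    and the first match wins; later credentials are never checked.
    """
    addr = ipaddress.ip_address(endpoint)
    for cred in credentials:
        if not cred.get("enabled", True):
            continue
        if cred.get("match_all"):
            return cred
        for rng in cred.get("ranges", []):
            if addr in ipaddress.ip_network(rng, strict=False):
                return cred
    return None
```

Dragging a credential higher in the list is equivalent to moving it earlier in this scan.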
Matching criteria
Select "Match All" to match all endpoints. Deselect it to enter values that will be used to determine if this credential is suitable for a particular endpoint. They can be one or more of the following, separated by commas: IPv4 address: for example 192.168.1.100. IPv4 range: for example 192.168.1.100-105, 192.168.1.100/24, or 192.168.1.* . IPv6 address: for example fda8:7554:2721:a8b3::3. IPv6 network prefix: for example fda8:7554:2721:a8b3::/64.
The following address types cannot be specified:
IPv6 link local addresses (prefix fe80::/64)
IPv6 multicast addresses (prefix ff00::/8)
IPv4 multicast addresses (224.0.0.0 to 239.255.255.255)
As you enter text, the user interface (UI) divides it into pills, discrete editable units, when you enter a space or a comma. According to the text entered, the pill is formatted to represent one of the previous types or presented as invalid. Click here for more information on using the pill UI. Invalid pills are labelled with a question mark. You can also paste a list of IP addresses or ranges into this field. There is no paste option on the context sensitive (right click) menu. You cannot paste a comma-separated list of IP address information into the Range field in Firefox. This may crash the browser. You can use a space separated list without any problems.
To edit a pill, click the pill body and edit the text. To delete a pill, click the X icon to the right of the pill, or click to edit and delete all of the text. To view the unformatted source text, click the source toggle switch. The source view is useful for copying to a text editor or spreadsheet. Click the source toggle switch again to see the formatted pill view. You can also sort the pill view by value or type. Value sorts by ascending numerical value. Type sorts by type, placing invalid pills first. Underneath the entry field is a filter box. Enter text in the filter box to only show matching pills.
Pills are not currently supported in Opera.
Enabled: A check box to define whether or not the credential is enabled.
SNMP Version: The SNMP version to use. From the SNMP version list, select one of the following: 1, 2c, or 3. The default is version 2c. Note that if you are setting up credentials for discovering Netware, you must select version 1 from the SNMP version list.
SNMP v1/v2c Community: Community used for SNMP read access to the defined host(s). For SNMP v1 and v2c credentials only.
SNMP v3 Username: For SNMP v3 credentials only.
Security Level: For SNMP v3 credentials only. Shows the security level selected using the authentication and privacy protocols:
noAuthNoPriv: no authentication and no privacy.
authNoPriv: authentication, no privacy.
authPriv: authentication and privacy.
Note that there is no setting for privacy without authentication.
Authentication Protocol: The protocol used to encrypt the authentication with the client. For SNMP v3 credentials only. Select one of the following from the drop-down list:
None: no encryption used. Operates in the same way as v1 and v2.
MD5: an authentication passphrase is entered and MD5 hashed. The MD5 hashed passphrase is used to access the target system.
SHA: an authentication passphrase is entered and SHA hashed. The SHA hashed passphrase is used to access the target system.
Authentication Key: The key (passphrase) used to encrypt the credentials. For SNMP v3 credentials only, and only if you have chosen an authentication protocol. Must be at least 8 characters.
Privacy Protocol: The protocol used to encrypt data retrieved from the target. Encrypting the data retrieved from a discovery target causes performance degradation compared with no encryption. For SNMP v3 credentials only, and only if you have chosen an authentication protocol; that is, you cannot have privacy without authentication. Select one of the following from the drop-down list:
None: no data encryption is used. Operates in the same way as v1 and v2.
DES: uses a privacy key to encrypt data using the DES algorithm.
AES CFB128: uses a privacy key to encrypt data using the AES algorithm.
Privacy Key: The key (passphrase) used to encrypt the data. For SNMP v3 credentials only, and only if you have chosen a privacy protocol. Must be at least 8 characters.
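The relationship between the protocol choices and the resulting security level can be expressed directly. This sketch (with a hypothetical helper name) encodes the rule stated above that privacy is only available together with authentication:

```python
def security_level(auth_protocol, priv_protocol):
    """Derive the SNMPv3 security level from the chosen protocols.

    There is deliberately no "privacy without authentication" level.
    """
    has_auth = auth_protocol in ("MD5", "SHA")
    has_priv = priv_protocol in ("DES", "AES CFB128")
    if has_priv and not has_auth:
        raise ValueError("privacy requires authentication")
    if has_auth and has_priv:
        return "authPriv"
    if has_auth:
        return "authNoPriv"
    return "noAuthNoPriv"
```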
Description: A free-text description of this SNMP credential.
Retries: The number of attempts made if no response is received. The default is five.
Timeout: The time (in seconds) in which a response is expected. The default is one second.
Custom SNMP Port: To choose a custom SNMP port, select the check box and choose from the ports in the list. You must already have configured a custom SNMP port in the Discovery Configuration window.
3. Click Apply. The SNMP Credentials page is refreshed to show details of the new credentials.
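The retry and timeout fields above amount to a simple retry loop, sketched here with the documented defaults (up to five attempts, each waiting one second for a response); `send_request` is a hypothetical stand-in for a real SNMP GET:

```python
def snmp_get_with_retries(send_request, retries=5, timeout=1.0):
    """Retry loop matching the documented defaults.

    send_request should return a response, or raise TimeoutError when no
    response arrives within the timeout.
    """
    last_error = None
    for _ in range(retries):
        try:
            return send_request(timeout=timeout)
        except TimeoutError as exc:
            last_error = exc
    raise TimeoutError("no response after %d attempts" % retries) from last_error
```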
SNMP v3 permissions
When SNMP v3 is used to discover a device that uses different security contexts for different instances of a MIB (in the same way that community string indexing is used for v1 or v2), the SNMP v3 user may not have access to the different security contexts. If a device is discovered where access to different contexts is required, but access has not been granted to the user, discovery will gather less information and topology discovery may not be complete. A ScriptFailure node will be associated with the DeviceInfo for the DiscoveryAccess, with a message of the type, Failed to access vlan-1 (AuthorizationError), where vlan-1 is the name of the security context that discovery attempted to access. To ensure discovery has full access, the user should be granted access to all of the contexts on the network device. For example, to grant access to all contexts to the group privgroup on a Cisco device with a recent version of IOS, you can use this configuration command:
You should consult your device's documentation or manufacturer for more details.
VMware vCenter Server provides centralized management of VMware vSphere (ESX and ESXi) virtual machines. BMC Atrium Discovery can discover ESX and ESXi hosts through the vSphere web services API, or by falling back to an ssh login. However, if a host managed by vCenter is put into lockdown mode, these discovery techniques are disabled and access is available only through the vCenter server managing it.
vCenter credentials are not host credentials: The vCenter credentials that you enter on this page are used only for the vCenter server. Unmanaged ESX and ESXi instances, or instances that have not been put into lockdown mode, can be discovered using VMware vSphere credentials.
The vCenter credentials are checked in sequence, and the first matching entry is used. After a working vCenter credential is found, further credentials are not checked. To reorder vCenter credentials, drag the credential to the required position in the list. The vCenter credentials are shown in color-coded boxes. The colors represent the level of login success achieved with that credential: Green: 100% success rate. Yellow: partial success. Blue: the credential has never been used. Red: 0% success rate.
vCenter credentials entered using the Devices tab: A vCenter credential button is also available on the Management System tab. This is a holder for credentials provided by the vCenter DIP provider. These credentials are populated when patterns containing vSphere queries are activated. You can manually add vCenter credentials from the Management System tab; however, the recommended procedure to add or edit vCenter credentials is from the Devices tab, as it is simpler. If you edit a vCenter credential on the Devices tab, the same changes are reflected in that credential on the Management System tab.
To add or edit a vCenter credential:
1. From the vCenter credentials page, perform one of the following actions:
a. To add a new credential, click Add.
b. To amend an existing credential, click Actions > Edit.
2. Enter the vCenter credential details as follows:
Matching Criteria: The IP address or addresses of the vSphere (ESX and ESXi) targets that may be managed by the vCenter server. Select "Match All" to match all endpoints. Deselect it to enter IP address information that determines whether this credential is suitable for a particular endpoint. Enter IP address information in one of the following formats:
IPv4 address (for example 192.168.1.100). Labelled v4.
IPv4 range (for example 192.168.1.100-105, 192.168.1.100/24, or 192.168.1.*). Labelled v4.
The following address type cannot be specified:
IPv4 multicast addresses (224.0.0.0 to 239.255.255.255)
As you enter text, the user interface (UI) divides it into pills, discrete editable units, when you enter a space or a comma. According to the text entered, the pill is formatted to represent one of the previous types or presented as invalid. Click here for more information on using the pill UI. Invalid pills are labelled with a question mark. You can also paste a list of IP addresses or ranges into this field. There is no paste option on the context sensitive (right click) menu. You cannot paste a comma-separated list of IP address information into the Range field in Firefox. This may crash the browser. You can use a space separated list without any problems.
To edit a pill, click the pill body and edit the text. To delete a pill, click the X icon to the right of the pill, or click to edit and delete all of the text. To view the unformatted source text, click the source toggle switch. The source view is useful for copying to a text editor or spreadsheet. Click the source toggle switch again to see the formatted pill view. You can also sort the pill view by value or type. Value sorts by ascending numerical value. Type sorts by type, placing invalid pills first. Underneath the entry field is a filter box. Enter text in the filter box to only show matching pills.
Pills are not currently supported in Opera.
Enabled: A check box enabling you to enable or disable the credential.
Name: The name used to identify the connection. It may contain only letters, numbers, and underscores, and must begin with a letter or underscore.
Username: Username used to access the vCenter server. Note that this is not a credential for the target host, but for the vCenter server. The permission level required is read-only.
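The naming rule for the credential name (letters, numbers, and underscores only, starting with a letter or underscore) is equivalent to a simple regular expression check; `is_valid_credential_name` is a hypothetical helper:

```python
import re

# Letters, numbers, and underscores; must begin with a letter or underscore.
NAME_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def is_valid_credential_name(name):
    """Check the documented naming rule for the connection name."""
    return bool(NAME_RE.match(name))
```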
Password: The corresponding password. In the Edit vCenter Credential page, this field is displayed as Set Password. When editing a credential, the password is shown as a series of asterisks and cannot be edited. To enter a new password, select the check box; the password entry field is cleared. Enter the password into the field; the password text is not echoed to the screen.
vCenter Address: The address of the vCenter server managing the vSphere (ESX and ESXi) targets. This may be specified as one of the following:
Hostname or FQDN
IPv4 address
Port: To choose a custom HTTPS port, choose from the ports in the list. You must already have configured a custom HTTPS port in the Discovery Configuration window.
Timeout: The time (in milliseconds) in which a response is expected. The default is 60 seconds.
Description: A free-text description of this login credential.
3. Click Apply. The vCenter credential page is refreshed to show details of the new credentials.
The VMware vSphere API is used to discover VMware ESX and ESXi systems. When the vSphere API cannot be accessed, discovery falls back to an ssh login to access the underlying Linux operating system.
vSphere credentials are not host credentials The vSphere credentials that you enter in this page are only used for the vSphere API. If this is not accessible and ssh discovery is attempted, a separate host login credential for the endpoint is required.
Description
Usage: A summary of the success rate when the credential has been used, information on failures, and links to DiscoveryAccesses, credential lists, and other useful diagnostic pages.
Actions: A drop-down menu with the following options:
Edit: Select this to edit the credential. See Setting up vSphere credentials for information on the fields and settings available from this page.
Disable: To disable a credential, select Disable. The credential is marked as disabled in the credential list. When a credential is disabled, this option is replaced with an Enable option. To enable the credential, click Enable.
Test: Select this to test the credential. For more information, see Testing vSphere credentials.
Move to top: Moves the credential to the top of the list.
Move to bottom: Moves the credential to the bottom of the list.
The credentials are checked in sequence, and the first matching entry is used. After a working credential is found, no additional credentials are checked. To reorder login credentials, drag the credential to the required position in the list. The vSphere credentials are shown in color-coded boxes. The colors represent the level of login success achieved with that credential: Green: 100% success rate. Yellow: partial success. Blue: the credential has never been used. Red: 0% success rate.
To add or edit a vSphere credential:
1. From the vSphere credentials page, perform one of the following actions:
a. To add a new credential, click Add.
b. To amend an existing credential, click Actions > Edit.
2. Enter the vSphere credential details as follows:
Matching Criteria: The IP address or addresses of the vCenter server. Select "Match All" to match all endpoints. Deselect it to enter IP address information which determines whether this credential is suitable for a particular endpoint. Enter IP address information in one of the following formats:
IPv4 address (for example 192.168.1.100). Labelled v4.
IPv4 range (for example 192.168.1.100-105, 192.168.1.100/24, or 192.168.1.*). Labelled v4.
The following address type cannot be specified: IPv4 multicast addresses (224.0.0.0 to 239.255.255.255).
As you enter text, the user interface (UI) divides it into pills (discrete editable units) when you enter a space or a comma. According to the text entered, each pill is formatted to represent one of the previous types, or is presented as invalid; invalid pills are labelled with a question mark. You can also paste a list of IP addresses or ranges into this field, although there is no paste option on the context-sensitive (right-click) menu. Do not paste a comma-separated list of IP address information into the Range field in Firefox, as this may crash the browser; a space-separated list can be pasted without any problems.
To edit a pill, click the pill body and edit the text. To delete a pill, click the X icon to the right of the pill, or click to edit and delete all of the text. To view the unformatted source text, click the source toggle switch; the source view is useful for copying to a text editor or spreadsheet. Click the source toggle switch again to return to the formatted pill view. You can also sort the pill view by value or type: Value sorts by ascending numerical value, and Type sorts by type, placing invalid pills first. Underneath the entry field is a filter box; enter text in the filter box to show only matching pills.
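The pill classification described above (plain IPv4 address, IPv4 range in dash, CIDR, or wildcard form, otherwise invalid) can be sketched like this. This is a minimal illustrative sketch, not the product's parser, and it does not validate octet ranges.

```python
# Illustrative sketch (not the product's parser): classify a pill the way the
# UI labels entries as an IPv4 address, an IPv4 range, or invalid.
import re

IPV4 = r"(\d{1,3})(\.\d{1,3}){3}"  # note: octet values are not range-checked

def classify_pill(text):
    if re.fullmatch(IPV4, text):
        return "v4"                      # plain address, e.g. 192.168.1.100
    if re.fullmatch(IPV4 + r"-\d{1,3}", text):
        return "v4"                      # dash range, e.g. 192.168.1.100-105
    if re.fullmatch(IPV4 + r"/\d{1,2}", text):
        return "v4"                      # CIDR range, e.g. 192.168.1.100/24
    if re.fullmatch(r"(\d{1,3}\.){3}\*", text):
        return "v4"                      # wildcard range, e.g. 192.168.1.*
    return "invalid"                     # shown with a question mark in the UI
```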
Pills are not currently supported in Opera.
Enabled: A checkbox to define whether or not the credential is enabled.
Name: The name used to identify the connection. It may contain only letters, numbers, and underscores, and must begin with a letter or underscore.
Username: The username used to access the vSphere API. See Required vSphere privileges for information on the privileges required.
Password: The corresponding password. When editing a credential, the password is shown as a series of asterisks and cannot be edited. To enter a new password, select the check box; the password entry field is cleared. Enter the password into the field; the password text is not echoed to the screen.
Port: To choose a custom HTTPS port, choose from the ports in the list. You must already have configured a custom HTTPS port in the Discovery Configuration window.
Timeout: The time (in milliseconds) in which a response is expected. The default is 60 seconds.
Description: A free-text description of this login credential.
3. Click Apply. The vSphere credential page is refreshed to show details of the new credentials.
The VMwareVM.VMwareVSphereLicenseDetail pattern requires additional privileges to gain complete information: it requires access using a credential with the Global.Licenses privilege. Without this, the license key information is either partially (if discovered via vCenter) or fully (if discovered via vSphere) redacted. The key information returned for each privilege and discovery method is:
With Global.Licenses privilege: 30DCK-DUMMY-KEY!!-DUMMY-911F0
vCenter with System.View privilege: 30DCK-#####-#####-#####-911F0
Direct (vSphere) with System.View privilege: XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
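The redaction behaviour in the table above can be sketched as follows. This is an illustrative sketch only; the helper name and the assumption that keys are hyphen-separated groups masked group-by-group are mine, not product code.

```python
# Illustrative sketch (assumptions, not product code) of the redaction shown
# in the table: full key with Global.Licenses, '#'-masked middle groups via
# vCenter with only System.View, fully 'X'-masked via direct vSphere access.

def returned_key(key, has_global_licenses, via_vcenter):
    groups = key.split("-")
    if has_global_licenses:
        return key                       # full key returned
    if via_vcenter:
        # vCenter with only System.View: middle groups masked with '#'
        middle = ["#" * len(g) for g in groups[1:-1]]
        return "-".join([groups[0]] + middle + [groups[-1]])
    # direct vSphere with only System.View: every group masked with 'X'
    return "-".join("X" * len(g) for g in groups)
```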
The same information is returned by the VMware client when accessing the target using the same methods and privileges.
Testing credentials
You can test discovery credentials using the Credential Tests page. To access this page: 1. From the secondary navigation bar on the Discovery tab, click Credentials. 2. Click Devices. 3. Click Credential Tests. The Credential Tests page is displayed.
The Credential Tests page displays the credential tests that are currently in progress and the tests run in the last ten minutes. You can delete completed tests, or run them again. However, you cannot cancel tests that are currently in progress. You can also test whether credentials exist for any IP address on your network. To delete a credential test, in the Actions list of that test, click Delete. To run a credential test again, in the Actions list of that test, click Retry.
5. For ssh credentials, a verbose session log is displayed. To view the session log, click the show session log link. The credential test results page shows you all the credentials, access methods, and Windows proxies that were tried during the test, and which failed (if any). As would happen during a normal discovery, BMC Atrium Discovery continues until either a successful access is obtained, or it runs out of credentials to try.
2. Click Test Discovery Credentials. The results are displayed; this may take a few minutes. If you do not have the correct credentials, a message will be displayed to this effect. If this occurs you must exit the test and select another host or configure an appropriate credential.
Firewalls
Some versions of Windows have a default firewall configuration that does not permit discovery. You should configure the firewall to permit access; otherwise you will be unable to discover your Windows hosts. See Discovery communications for information on the ports that should be open.
Credential Windows proxy user You should not run the Credential Windows proxy as the Local System user, but as a valid user account, which should be in the Administrators group. Using a local system account on a domain network may result in system problems.
The account being used to discover the target host must be one of the following: A domain user with Administrator privileges on the target host. A non-domain user with Administrator privileges and with remote UAC disabled on the target host.
WMI
Method (WMI namespace): WMI query and notes

getDeviceInfo*: Handled by getHostInfo call.

getDirectoryListing (root\CIMV2):
ASSOCIATORS OF {Win32_Directory='%path%'} WHERE ResultClass = CIM_LogicalFile

getFileSystems (root\CIMV2):
SELECT * FROM Win32_LogicalDisk WHERE DriveType = 3 or DriveType = 4
SELECT * FROM Win32_LogicalDiskToPartition
SELECT * FROM Win32_Share

getHBAInfo (root\WMI):
SELECT * FROM MSFC_FCAdapterHBAAttributes
SELECT * FROM MSFC_FibrePortHBAAttributes

getHostInfo* (root\CIMV2; registry reads use root\default: StdRegProv):
SELECT Name, Manufacturer, Model, Domain, SystemType FROM Win32_ComputerSystem (this query must succeed)
SELECT Workgroup FROM Win32_ComputerSystem (optional; this query may fail)
SELECT DNSDomain FROM Win32_NetworkAdapterConfiguration WHERE IPEnabled = 1
SELECT * FROM Win32_OperatingSystem
SELECT SystemUpTime FROM Win32_PerfFormattedData_PerfOS_System
SELECT Capacity FROM Win32_PhysicalMemory
SELECT SerialNumber FROM Win32_BIOS
SELECT Vendor, IdentifyingNumber, Name, UUID FROM Win32_ComputerSystemProduct
SELECT * FROM Win32_Processor
SELECT HotFixID, ServicePackInEffect FROM Win32_QuickFixEngineering
HKLM\HARDWARE\DESCRIPTION\System\CentralProcessor\0~MHz

getIPAddresses (root\CIMV2):
SELECT * FROM Win32_NetworkAdapterConfiguration
SELECT * FROM Win32_NetworkAdapter

getMACAddresses* (root\CIMV2):
SELECT * FROM Win32_NetworkAdapterConfiguration
SELECT * FROM Win32_NetworkAdapter

getNetworkInterfaces (root\CIMV2 and root\WMI):
SELECT * FROM Win32_NetworkAdapterConfiguration
SELECT * FROM Win32_NetworkAdapter
SELECT * FROM MSNdis_EnumerateAdapter (optional; this query may fail)
SELECT * FROM MSNdis_LinkSpeed (optional; this query may fail)

getPackageList (root\default: StdRegProv; see notes below):
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall*\DisplayName
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall*\QuietDisplayName
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall*\HiddenDisplayName
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall*\DisplayVersion
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall*\Publisher

getPatchList: Handled by getHostInfo call, specifically: SELECT HotFixID, ServicePackInEffect FROM Win32_QuickFixEngineering

getProcessList (root\CIMV2): Calls getOwner() on each WMI object returned.

getRegistryListing (%key%): Registry keys are passed directly to the standard registry provider.

getRegistryValue (%key%): Registry values are passed directly to the standard registry provider.

getServices (root\CIMV2)
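The queries above are plain WQL strings executed against a namespace, with placeholders such as %path% substituted before execution. A minimal sketch (the pairing and helper name are illustrative, not product code):

```python
# Illustrative sketch: WMI queries from the table expressed as
# (namespace, WQL) pairs, plus %placeholder% substitution as used in
# queries such as the getDirectoryListing ASSOCIATORS query.

FILESYSTEM_QUERIES = [
    ("root\\CIMV2",
     "SELECT * FROM Win32_LogicalDisk WHERE DriveType = 3 or DriveType = 4"),
    ("root\\CIMV2", "SELECT * FROM Win32_LogicalDiskToPartition"),
    ("root\\CIMV2", "SELECT * FROM Win32_Share"),
]

def substitute(query_template, **params):
    # Fill in %name% placeholders, e.g. %path% in the ASSOCIATORS query.
    for name, value in params.items():
        query_template = query_template.replace("%" + name + "%", value)
    return query_template
```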
getPackageList
Package information is obtained by walking the registry keys described in the previous table, rather than by using Win32_Product, because this provides more reliable data. To speed up the process, a temporary WMI class is created on the remote computer to query the registry locally. This temporary class is given a unique name and is removed once the registry data has been retrieved.
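Conceptually, walking the Uninstall keys produces one package entry per subkey. The sketch below illustrates this with a nested dict standing in for the registry; the fallback order across the DisplayName variants is my assumption, not documented behaviour.

```python
# Illustrative sketch (not product code): derive a package list by walking
# Uninstall-style registry keys. A nested dict stands in for
# HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall; the fallback
# order DisplayName -> QuietDisplayName -> HiddenDisplayName is an assumption.

def package_list(uninstall_keys):
    packages = []
    for subkey, values in uninstall_keys.items():
        name = (values.get("DisplayName")
                or values.get("QuietDisplayName")
                or values.get("HiddenDisplayName"))
        if not name:
            continue  # no usable display name; skip this subkey
        packages.append({
            "name": name,
            "version": values.get("DisplayVersion"),
            "vendor": values.get("Publisher"),
        })
    return packages
```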
getHBAInfo
WMI support for gathering HBA information uses the following queries to populate the HBA information if it is safe to do so:
SELECT * FROM MSFC_FCAdapterHBAAttributes SELECT * FROM MSFC_FibrePortHBAAttributes
The OS version and patch list are checked to determine whether HBA queries are safe. On Microsoft Windows Server 2003, Vista, and Server 2008, the HBAAPI.DLL module used by WMI leaks handles unless patched with KB957052. If this patch is not installed, no WMI HBA requests are made. By inspection, no current version of Windows 2003 (5.2.x) or Windows 2008 (6.0.x) includes this patch (current versions including service packs), but Windows 2008 R2 (6.1.x) does. It is not clear whether the problem exists on Windows 2000, and no patch is available for it, so the following assumptions are made: HBA queries via WMI are safe on Windows 2000, and newer versions of Windows do not have the bug. This check is unnecessary when running FCINFO.EXE; it also uses HBAAPI.DLL and could experience the same handle leak, but it is a short-lived process and the handles are cleared on exit. The Microsoft FCINFO.EXE command line tool is also used by RemQuery, where WMI is deemed unsafe (or has failed for some reason). It provides equivalent information about HBAs, since it uses the same API as the WMI provider.
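The version-and-patch check described above can be sketched as follows. This is an illustrative sketch; the function name is mine, and the version mapping follows the text (5.2.x = Windows 2003, 6.0.x = Vista/Windows 2008, 6.1.x = Windows 2008 R2).

```python
# Illustrative sketch of the HBA safety check described above.
# 5.2.x and 6.0.x leak handles in HBAAPI.DLL unless KB957052 is installed;
# Windows 2000 is assumed safe, and 6.1.x and later include the fix.

def wmi_hba_queries_safe(major, minor, hotfixes):
    if (major, minor) in ((5, 2), (6, 0)):
        # Windows 2003 / Vista / 2008: only safe with the handle-leak patch.
        return "KB957052" in hotfixes
    # Windows 2000 assumed safe; newer versions assumed not to have the bug.
    return True
```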
RemQuery
Method: Script and notes

getDeviceInfo: Handled by getHostInfo call.
getDirectoryListing: REMQUERY DIR /-C /TW /4 %path%
getFileContent: Handled by getFileInfo call.
getFileInfo:
REMQUERY CMD /C DIR /-C /TW /4 %path%
REMQUERY CMD /C TYPE %path%
getFileMetadata: REMQUERY CMD /C DIR /-C /TW /4 %path%
getHBAInfo:
REMQUERY FCINFO /DETAILS (requires Microsoft FCINFO.EXE to be installed on the target system)
REMQUERY HBACMD LISTHBAS (requires Emulex HBAnywhere to be installed on the target system)
REMQUERY HBACMD HBAATTRIB %wwpn% (requires Emulex HBAnywhere to be installed on the target system)
REMQUERY LPUTIL LISTHBAS (requires Emulex LPUTIL.EXE to be installed on the target system)
REMQUERY LPUTIL COUNT (requires Emulex LPUTIL.EXE to be installed on the target system)
REMQUERY LPUTIL FWLIST %board_id% (requires Emulex LPUTIL.EXE to be installed on the target system)
getHostInfo*:
REMQUERY WMIC BIOS GET SERIALNUMBER
REMQUERY WMIC CSPRODUCT GET UUID
REMQUERY SYSTEMINFO /fo csv /nh
REMQUERY "HOSTNAME && VER"
getIPAddresses:
REMQUERY
REMQUERY IPCONFIG /ALL
getMACAddresses*:
REMQUERY (uses the Windows API to query MAC addresses)
REMQUERY IPCONFIG /ALL
getNetworkConnectionList:
REMQUERY NETSTAT -ano
REMQUERY NETSTAT -an
getNetworkInterfaces:
REMQUERY (uses the Windows API to query interface details)
REMQUERY IPCONFIG /ALL
getPackageList: REMQUERY (uses the Windows API to request the same registry keys as the WMI queries)
getPatchList: Handled by getHostInfo call.
getProcessList:
REMQUERY (uses the Windows API to query process information)
REMQUERY TASKLIST /fo csv /nh /v
getProcessToConnectionMapping: REMQUERY TCPVCON -ano (optional; must be enabled in the Proxy configuration; requires TCPVCON.EXE, or alternatively OPENPORTS.EXE, to be installed on the target system)
getRegistryListing
getRegistryValue: REMQUERY REG QUERY %hive%%key% /v %value%
getServices: REMQUERY (uses the Windows API)
SNMP
Method: getDeviceInfo*
SNMPv2-MIB::sysDescr.0 (OID 1.3.6.1.2.1.1.1.0)
SNMPv2-MIB::sysName.0 (OID 1.3.6.1.2.1.1.5.0)
LanMgr-Mib-II-MIB::domPrimaryDomain.0 (OID 1.3.6.1.4.1.77.1.4.1.0)

Method: getHostInfo*
HOST-RESOURCES-MIB::hrSystemUptime.0 (OID 1.3.6.1.2.1.25.1.1.0)
HOST-RESOURCES-MIB::hrMemorySize.0 (OID 1.3.6.1.2.1.25.2.2.0)

Method: getIPAddresses
IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ] (OID 1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ])
IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ] (OID 1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ])
IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ] (OID 1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ])
IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ] (OID 1.3.6.1.2.1.55.1.8.1 [ .1, .2 ])

Method: getMACAddresses*
IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ] (OID 1.3.6.1.2.1.2.2.1 [ .2, .3, .6, .8 ])
IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] (OID 1.3.6.1.2.1.4.35.1 [ .4, .6 ])
IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ] (OID 1.3.6.1.2.1.4.22.1 [ .2, .4 ])

Method: getNetworkConnectionList
TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ] (OID 1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ])
TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ] (OID 1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ])
UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ] (OID 1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ])
TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ] (OID 1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ])
IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ] (OID 1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ])
UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ] (OID 1.3.6.1.2.1.7.5.1 [ .1, .2 ])
IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ] (OID 1.3.6.1.2.1.7.6.1 [ .1, .2 ])

Method: getNetworkInterfaces
IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ] (OID 1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ])
IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ] (OID 1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ])
MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ] (OID 1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ])
EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ] (OID 1.3.6.1.2.1.10.7.2.1 [ .19 ])
IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] (OID 1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ])
IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ] (OID 1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ])

Method: getPackageList
HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ]

Method: getProcessList
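The symbolic MIB names above resolve to numeric OIDs; columnar values combine a table entry OID with a column suffix. A minimal sketch (the names and values are taken from the table above; the helper is illustrative, not product code):

```python
# Sketch: scalar OIDs used by getDeviceInfo/getHostInfo, as numeric strings
# suitable for an snmpget-style request. Values are from the table above.
SCALAR_OIDS = {
    "SNMPv2-MIB::sysDescr.0": "1.3.6.1.2.1.1.1.0",
    "SNMPv2-MIB::sysName.0": "1.3.6.1.2.1.1.5.0",
    "LanMgr-Mib-II-MIB::domPrimaryDomain.0": "1.3.6.1.4.1.77.1.4.1.0",
    "HOST-RESOURCES-MIB::hrSystemUptime.0": "1.3.6.1.2.1.25.1.1.0",
    "HOST-RESOURCES-MIB::hrMemorySize.0": "1.3.6.1.2.1.25.2.2.0",
}

def column_oid(table_entry_oid, column_suffix):
    # Columnar values combine the table entry OID with a column suffix,
    # e.g. ifEntry 1.3.6.1.2.1.2.2.1 with .2 giving ifDescr.
    return table_entry_oid + column_suffix
```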
* indicates methods that must succeed for a Host to be created.

Windows proxy permissions

This section describes the user permissions required by the Windows proxy to obtain information from target Windows hosts for each discovery method available to the Windows proxy. The discovery methods are WMI and RemQuery. For each of the discovery methods, different levels of permissions are defined and the corresponding effect on the discovery function calls is described. The permissions are defined in sets, with the first set being the minimal set of permissions required by a normal (non-administrator) user for the discovery method to return any data. Subsequent permission sets build on the minimal permission set so that more data can be retrieved. The last permission set is for accessing the target machine as an administrator user. In each case, OK means that all possible information for that function is retrieved, and Not available means that no information can be retrieved for that function. The permissions defined in the permission sets are:
DCOM: Configuring DCOM for remote access
WMI: Granting permissions on a WMI namespace
User Rights: Granting system privileges
Remote Registry: Granting access to the Remote Registry service
Windows NT 4.0 discovery targets have not been considered in this section.
1 (minimal)
Not available
No speed/negotiation No msndis_link_speed No manufacturer No msndis_link_speed No manufacturer No manufacturer No manufacturer OK getRegistryListing getRegistryValue Not available getServices
Not available
Not available Not available Not available OK getProcessList No arguments No command path No user name No arguments No command path No user name No arguments No command path No user name No user name OK
OK OK OK OK
OK
OK
4 5 (administrator)
OK OK
Notes
getNetworkConnectionList is not available using WMI.
The NIC manufacturer cannot be retrieved by a non-administrator, because the Plug and Play Manager is queried and there is no way to grant a non-administrator access to it.
The user name for a process cannot be retrieved by a non-administrator user.
A user assigned the Debug programs user right can take control of the operating system, which is a security risk.
An error is written to the Windows proxy's log when discovering a Windows 2003 machine as a non-administrator. For example:
ERROR: Query [performanceData] failed: Invalid query [SELECT SystemUpTime FROM Win32_PerfFormattedData_PerfOS_System] on [10.10.10.55]: [-2147217405:Access denied ]
This does not lead to any missing information because a different method is then used to retrieve the system's uptime. If the error is a problem, the user can be assigned to the "Performance Monitor Users" group, which allows this WMI query to succeed.
The RemQuery utility cannot be run as a non-administrator user. This is because only an administrator can create a service, which RemQuery needs to do after copying its service to the ADMIN$ share on the remote machine.
Granting permissions
The following sections list possible ways to grant the various permissions required to a user. This should be seen as a guide only.
Method 1
Add the user to the Distributed COM Users group. This group was made available in Windows 2003 SP1.
Method 2
Use Group Policy Objects in an Active Directory environment to grant the permission; this is described in this Microsoft article.
Method 3
Use the following steps to configure DCOM permissions on a machine:
1. Select Start => Run, enter dcomcnfg and click OK. This launches the Component Services configuration GUI.
2. Expand Console Root => Component Services => Computers => My Computer.
3. Right-click My Computer and select Properties.
4. Go to the COM Security tab.
5. In the Launch and Activation Permissions section, click Edit Limits....
6. Click Add.
7. Enter your domain user name or group name in the text entry field and click Check Names.
8. Click OK.
9. Set the permissions for the user to Allow for Local Launch, Remote Launch, Local Activation, and Remote Activation.
10. Click OK to close the permissions dialog, then OK again on My Computer Properties.
The user should now be able to remotely access DCOM applications, including WMI.
AIX
Shell scripts
Method: Script(s)

init: init
getDeviceInfo: device_info
getDirectoryListing: ls
getFileContent: file_content
getFileInfo: Handled by the getFileMetadata and getFileContent calls.
getFileMetadata: file_metadata
getFileSystems: df
getHBAList: hbainfo
getHostInfo*: host_info
getIPAddresses: ifconfig_ip
getMACAddresses*: lsdev_lscfg_mac, netstat_link_mac
getNetworkConnectionList: netstat
getNetworkInterfaces: ifdetails, netstat_link_if
getPackageList: lslpp
getProcessList: ps
getProcessToConnectionMapping: lsof-i
SNMP
Method: getDeviceInfo*
SNMPv2-MIB::sysDescr.0 (OID 1.3.6.1.2.1.1.1.0)
SNMPv2-MIB::sysName.0 (OID 1.3.6.1.2.1.1.5.0)

Method: getHostInfo*
HOST-RESOURCES-MIB::hrSystemUptime.0 (OID 1.3.6.1.2.1.25.1.1.0)
HOST-RESOURCES-MIB::hrMemorySize.0 (OID 1.3.6.1.2.1.25.2.2.0)

Method: getIPAddresses
IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ] (OID 1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ])
IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ] (OID 1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ])
IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ] (OID 1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ])
IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ] (OID 1.3.6.1.2.1.55.1.8.1 [ .1, .2 ])

Method: getMACAddresses*
IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ] (OID 1.3.6.1.2.1.2.2.1 [ .2, .3, .6, .8 ])
IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] (OID 1.3.6.1.2.1.4.35.1 [ .4, .6 ])
IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ] (OID 1.3.6.1.2.1.4.22.1 [ .2, .4 ])

Method: getNetworkConnectionList
TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ] (OID 1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ])
TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ] (OID 1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ])
UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ] (OID 1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ])
TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ] (OID 1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ])
IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ] (OID 1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ])
UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ] (OID 1.3.6.1.2.1.7.5.1 [ .1, .2 ])
IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ] (OID 1.3.6.1.2.1.7.6.1 [ .1, .2 ])

Method: getNetworkInterfaces
IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ] (OID 1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ])
IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ] (OID 1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ])
MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ] (OID 1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ])
EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ] (OID 1.3.6.1.2.1.10.7.2.1 [ .19 ])
IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] (OID 1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ])
IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ] (OID 1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ])

Method: getPackageList
HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ]

Method: getProcessList
HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]
IBM-AIX-MIB::aixProcEntry [ aixProcPID, aixProcUID, aixProcPPID, aixProcCMD ]
FreeBSD
Shell scripts
Method: Script(s)

init: init
getDeviceInfo: device_info
getDirectoryListing: ls
getFileContent: file_content
getFileInfo: Handled by the getFileMetadata and getFileContent calls.
getFileMetadata: file_metadata
getFileSystems: df
getHostInfo*: host_info
getIPAddresses: ifconfig_ip
getMACAddresses*: ifconfig_mac
getNetworkConnectionList: netstat
getPackageList: pkg_info
getProcessList: ps
getProcessToConnectionMapping: lsof-i
SNMP
Method: getDeviceInfo*
SNMPv2-MIB::sysDescr.0 (OID 1.3.6.1.2.1.1.1.0)
SNMPv2-MIB::sysName.0 (OID 1.3.6.1.2.1.1.5.0)

Method: getHostInfo*
HOST-RESOURCES-MIB::hrSystemUptime.0 (OID 1.3.6.1.2.1.25.1.1.0)
HOST-RESOURCES-MIB::hrMemorySize.0 (OID 1.3.6.1.2.1.25.2.2.0)

Method: getIPAddresses
IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ] (OID 1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ])
IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ] (OID 1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ])
IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ] (OID 1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ])
IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ] (OID 1.3.6.1.2.1.55.1.8.1 [ .1, .2 ])

Method: getMACAddresses*
IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ] (OID 1.3.6.1.2.1.2.2.1 [ .2, .3, .6, .8 ])
IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] (OID 1.3.6.1.2.1.4.35.1 [ .4, .6 ])
IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ] (OID 1.3.6.1.2.1.4.22.1 [ .2, .4 ])

Method: getNetworkConnectionList
TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ] (OID 1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ])
TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ] (OID 1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ])
UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ] (OID 1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ])
TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ] (OID 1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ])
IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ] (OID 1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ])
UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ] (OID 1.3.6.1.2.1.7.5.1 [ .1, .2 ])
IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ] (OID 1.3.6.1.2.1.7.6.1 [ .1, .2 ])

Method: getNetworkInterfaces
IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ] (OID 1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ])
IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ] (OID 1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ])
MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ] (OID 1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ])
EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ] (OID 1.3.6.1.2.1.10.7.2.1 [ .19 ])
IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] (OID 1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ])
IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ] (OID 1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ])

Method: getPackageList
HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ] (OID 1.3.6.1.2.1.25.6.3.1 [ .2 ])

Method: getProcessList
HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]
HPUX
Shell scripts
Method: Script(s)

init: init
getDeviceInfo: device_info
getDirectoryListing: ls
getFileContent: file_content
getFileInfo: Handled by the getFileMetadata and getFileContent calls.
getFileMetadata: file_metadata
getFileSystems: bdf
getHBAList: fcmsutil
getHostInfo*: host_info
getIPAddresses: ifconfig_ip
getMACAddresses*: lanscan_mac
getNetworkConnectionList: netstat
getNetworkInterfaces: interface_commands
getPackageList: swlist
getPatchList: patch_list
getProcessList: ps
getProcessToConnectionMapping: lsof-i
SNMP
Method: getDeviceInfo*
SNMPv2-MIB::sysDescr.0 (OID 1.3.6.1.2.1.1.1.0)
SNMPv2-MIB::sysName.0 (OID 1.3.6.1.2.1.1.5.0)

Method: getHostInfo*
HOST-RESOURCES-MIB::hrSystemUptime.0 (OID 1.3.6.1.2.1.25.1.1.0)
HOST-RESOURCES-MIB::hrMemorySize.0 (OID 1.3.6.1.2.1.25.2.2.0)

Method: getIPAddresses
IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ] (OID 1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ])
IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ] (OID 1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ])
IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ] (OID 1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ])
IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ] (OID 1.3.6.1.2.1.55.1.8.1 [ .1, .2 ])

Method: getMACAddresses*
IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ] (OID 1.3.6.1.2.1.2.2.1 [ .2, .3, .6, .8 ])
IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] (OID 1.3.6.1.2.1.4.35.1 [ .4, .6 ])
IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ] (OID 1.3.6.1.2.1.4.22.1 [ .2, .4 ])

Method: getNetworkConnectionList
TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ] (OID 1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ])
TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ] (OID 1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ])
UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ] (OID 1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ])
TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ] (OID 1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ])
IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ] (OID 1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ])
UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ] (OID 1.3.6.1.2.1.7.5.1 [ .1, .2 ])
IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ] (OID 1.3.6.1.2.1.7.6.1 [ .1, .2 ])

Method: getNetworkInterfaces
IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ] (OID 1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ])
IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ] (OID 1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ])
MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ] (OID 1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ])
EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ] (OID 1.3.6.1.2.1.10.7.2.1 [ .19 ])
IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] (OID 1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ])
IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ] (OID 1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ])

Method: getPackageList
HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ] (OID 1.3.6.1.2.1.25.6.3.1 [ .2 ])

Method: getProcessList
HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]
IRIX
Shell scripts
Method: Script(s)

init: init
getDeviceInfo: device_info
getDirectoryListing: ls
getFileContent: file_content
getFileInfo: Handled by the getFileMetadata and getFileContent calls.
getFileMetadata: file_metadata
getFileSystems: df
getHostInfo*: host_info
getIPAddresses: ifconfig_ip
getMACAddresses*: netstat_mac
getNetworkConnectionList: netstat
getNetworkInterfaces: ifconfig_if
SNMP
Method / MIB values (OID):

getDeviceInfo*, getHostInfo*:
    SNMPv2-MIB::sysDescr.0  (1.3.6.1.2.1.1.1.0)
    SNMPv2-MIB::sysName.0  (1.3.6.1.2.1.1.5.0)
    HOST-RESOURCES-MIB::hrSystemUptime.0  (1.3.6.1.2.1.25.1.1.0)
    HOST-RESOURCES-MIB::hrMemorySize.0  (1.3.6.1.2.1.25.2.2.0)

getIPAddresses:
    IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ])
    IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ]  (1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ])
    IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ]  (1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ])
    IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ]  (1.3.6.1.2.1.55.1.8.1 [ .1, .2 ])

getMACAddresses*:
    IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .6, .8 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .2, .4 ])

getNetworkConnectionList:
    TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ]  (1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ])
    TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ]  (1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ])
    UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ]  (1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ])
    TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ]  (1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ])
    IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ]  (1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ])
    UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ]  (1.3.6.1.2.1.7.5.1 [ .1, .2 ])
    IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ]  (1.3.6.1.2.1.7.6.1 [ .1, .2 ])

getNetworkInterfaces:
    IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ])
    IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ]  (1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ])
    MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ]  (1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ])
    EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ]  (1.3.6.1.2.1.10.7.2.1 [ .19 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ])

getPackageList:
    HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ]  (1.3.6.1.2.1.25.6.3.1 [ .2 ])

getProcessList:
    HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]  (1.3.6.1.2.1.25.4.2.1 [ .1, .2, .4, .5 ])
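Throughout these tables the OID column uses a compact notation: the table-entry OID followed by the column suffixes that are polled from it. A minimal sketch of expanding that notation into one full OID per column (the helper name is illustrative, not part of the product):

```python
def expand_oids(entry_oid: str, columns: list[str]) -> list[str]:
    """Expand compact notation like "1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ]"
    into one full OID per polled column."""
    return [entry_oid + column for column in columns]

# IF-MIB::ifEntry with ifDescr (.2), ifType (.3) and ifOperStatus (.8)
print(expand_oids("1.3.6.1.2.1.2.2.1", [".2", ".3", ".8"]))
# ['1.3.6.1.2.1.2.2.1.2', '1.3.6.1.2.1.2.2.1.3', '1.3.6.1.2.1.2.2.1.8']
```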
Linux
Shell scripts
Method / Script:
initialise: init
getDeviceInfo *: device_info
getDirectoryListing: ls
getFileContent: file_content
getFileInfo: Handled by the getFileMetadata and getFileContent calls.
getFileMetadata: file_metadata
getFileSystems: df
getHBAList: hba_sysfs, hba_procfs, hba_hbacmd, hba_lputil
getHostInfo *: host_info
getIPAddresses: ip_addr_ip, ifconfig_ip
getMACAddresses *: ip_link_mac, ifconfig_mac
getNetworkConnectionList: netstat
getNetworkInterfaces: ip_link_if, ifconfig_if
getPackageList: rpmx, rpm
SNMP
Method / MIB values (OID):

getDeviceInfo*, getHostInfo*:
    SNMPv2-MIB::sysDescr.0  (1.3.6.1.2.1.1.1.0)
    SNMPv2-MIB::sysName.0  (1.3.6.1.2.1.1.5.0)
    HOST-RESOURCES-MIB::hrSystemUptime.0  (1.3.6.1.2.1.25.1.1.0)
    HOST-RESOURCES-MIB::hrMemorySize.0  (1.3.6.1.2.1.25.2.2.0)

getIPAddresses:
    IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ])
    IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ]  (1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ])
    IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ]  (1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ])
    IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ]  (1.3.6.1.2.1.55.1.8.1 [ .1, .2 ])

getMACAddresses*:
    IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .6, .8 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .2, .4 ])

getNetworkConnectionList:
    TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ]  (1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ])
    TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ]  (1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ])
    UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ]  (1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ])
    TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ]  (1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ])
    IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ]  (1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ])
    UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ]  (1.3.6.1.2.1.7.5.1 [ .1, .2 ])
    IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ]  (1.3.6.1.2.1.7.6.1 [ .1, .2 ])

getNetworkInterfaces:
    IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ])
    IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ]  (1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ])
    MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ]  (1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ])
    EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ]  (1.3.6.1.2.1.10.7.2.1 [ .19 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ])

getPackageList:
    HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ]  (1.3.6.1.2.1.25.6.3.1 [ .2 ])

getProcessList:
    HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]  (1.3.6.1.2.1.25.4.2.1 [ .1, .2, .4, .5 ])
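getMACAddresses retrieves ifPhysAddress, which an SNMP agent returns as raw octets rather than text; a small sketch (assuming a six-octet Ethernet address) of rendering those octets in the usual colon-separated form:

```python
def format_mac(phys_address: bytes) -> str:
    """Render ifPhysAddress octets as a colon-separated MAC string."""
    return ":".join(f"{octet:02x}" for octet in phys_address)

print(format_mac(b"\x00\x1a\x2b\x3c\x4d\x5e"))  # 00:1a:2b:3c:4d:5e
```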
Mac OS X
Shell scripts
Method / Script:
initialise: init
getDeviceInfo *: device_info
getDirectoryListing: ls
getFileContent: file_content
getFileInfo: Handled by the getFileMetadata and getFileContent calls.
getFileMetadata: file_metadata
getFileSystems: df
getHostInfo *: host_info
getIPAddresses: ifconfig_ip
getMACAddresses *: ifconfig_mac
getNetworkConnectionList: netstat
getNetworkInterfaces: ifconfig_if
getPackageList: system_profiler-apps
getProcessList: ps
getProcessToConnectionMapping: lsof-i
SNMP
Method / MIB values (OID):

getDeviceInfo*, getHostInfo*:
    SNMPv2-MIB::sysDescr.0  (1.3.6.1.2.1.1.1.0)
    SNMPv2-MIB::sysName.0  (1.3.6.1.2.1.1.5.0)
    HOST-RESOURCES-MIB::hrSystemUptime.0  (1.3.6.1.2.1.25.1.1.0)
    HOST-RESOURCES-MIB::hrMemorySize.0  (1.3.6.1.2.1.25.2.2.0)

getIPAddresses:
    IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ])
    IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ]  (1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ])
    IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ]  (1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ])
    IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ]  (1.3.6.1.2.1.55.1.8.1 [ .1, .2 ])

getMACAddresses*:
    IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .6, .8 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .2, .4 ])

getNetworkConnectionList:
    TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ]  (1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ])
    TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ]  (1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ])
    UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ]  (1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ])
    TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ]  (1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ])
    IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ]  (1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ])
    UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ]  (1.3.6.1.2.1.7.5.1 [ .1, .2 ])
    IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ]  (1.3.6.1.2.1.7.6.1 [ .1, .2 ])

getNetworkInterfaces:
    IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ])
    IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ]  (1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ])
    MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ]  (1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ])
    EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ]  (1.3.6.1.2.1.10.7.2.1 [ .19 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ])

getPackageList:
    HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ]  (1.3.6.1.2.1.25.6.3.1 [ .2 ])

getProcessList:
    HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]  (1.3.6.1.2.1.25.4.2.1 [ .1, .2, .4, .5 ])
NetBSD
Shell scripts
Method / Script:
initialise: init
getDeviceInfo *: device_info
getDirectoryListing: ls
getFileContent: file_content
getFileInfo: Handled by the getFileMetadata and getFileContent calls.
getFileMetadata: file_metadata
getFileSystems: df
getHostInfo *: host_info
getIPAddresses: ifconfig_ip
getMACAddresses *: ifconfig_mac
getNetworkConnectionList: netstat
getNetworkInterfaces: ifconfig_if
getPackageList: pkg_info
getProcessList: ps
getProcessToConnectionMapping: lsof-i
SNMP
Method / MIB values (OID):

getDeviceInfo*, getHostInfo*:
    SNMPv2-MIB::sysDescr.0  (1.3.6.1.2.1.1.1.0)
    SNMPv2-MIB::sysName.0  (1.3.6.1.2.1.1.5.0)
    HOST-RESOURCES-MIB::hrSystemUptime.0  (1.3.6.1.2.1.25.1.1.0)
    HOST-RESOURCES-MIB::hrMemorySize.0  (1.3.6.1.2.1.25.2.2.0)

getIPAddresses:
    IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ])
    IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ]  (1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ])
    IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ]  (1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ])
    IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ]  (1.3.6.1.2.1.55.1.8.1 [ .1, .2 ])

getMACAddresses*:
    IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .6, .8 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .2, .4 ])

getNetworkConnectionList:
    TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ]  (1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ])
    TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ]  (1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ])
    UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ]  (1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ])
    TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ]  (1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ])
    IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ]  (1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ])
    UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ]  (1.3.6.1.2.1.7.5.1 [ .1, .2 ])
    IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ]  (1.3.6.1.2.1.7.6.1 [ .1, .2 ])

getNetworkInterfaces:
    IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ])
    IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ]  (1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ])
    MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ]  (1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ])
    EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ]  (1.3.6.1.2.1.10.7.2.1 [ .19 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ])

getPackageList:
    HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ]  (1.3.6.1.2.1.25.6.3.1 [ .2 ])

getProcessList:
    HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]  (1.3.6.1.2.1.25.4.2.1 [ .1, .2, .4, .5 ])
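getIPAddresses can obtain a subnet either as a prefix length (ipAddressPrefix, ipv6AddrPfxLength) or as a dotted-quad mask (ipAdEntNetMask); converting the mask form to a prefix length is a one-liner with the standard library:

```python
import ipaddress

def netmask_to_prefix(netmask: str) -> int:
    """Convert a dotted-quad netmask, as ipAdEntNetMask returns it,
    into a prefix length."""
    return ipaddress.IPv4Network(f"0.0.0.0/{netmask}").prefixlen

print(netmask_to_prefix("255.255.255.0"))  # 24
```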
OpenBSD
Shell scripts
Method / Script:
getFileContent: file_content
getFileInfo: Handled by the getFileMetadata and getFileContent calls.
getFileMetadata: file_metadata
getFileSystems: df
getHostInfo *: host_info
getIPAddresses: ifconfig_ip
getMACAddresses *: ifconfig_mac
getNetworkConnectionList: netstat
getNetworkInterfaces: ifconfig_if
getPackageList: pkg_info
getProcessList: ps
getProcessToConnectionMapping: lsof-i
SNMP
Method / MIB values (OID):

getDeviceInfo*, getHostInfo*:
    SNMPv2-MIB::sysDescr.0  (1.3.6.1.2.1.1.1.0)
    SNMPv2-MIB::sysName.0  (1.3.6.1.2.1.1.5.0)
    HOST-RESOURCES-MIB::hrSystemUptime.0  (1.3.6.1.2.1.25.1.1.0)
    HOST-RESOURCES-MIB::hrMemorySize.0  (1.3.6.1.2.1.25.2.2.0)

getIPAddresses:
    IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ])
    IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ]  (1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ])
    IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ]  (1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ])
    IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ]  (1.3.6.1.2.1.55.1.8.1 [ .1, .2 ])

getMACAddresses*:
    IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .6, .8 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .2, .4 ])

getNetworkConnectionList:
    TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ]  (1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ])
    TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ]  (1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ])
    UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ]  (1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ])
    TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ]  (1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ])
    IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ]  (1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ])
    UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ]  (1.3.6.1.2.1.7.5.1 [ .1, .2 ])
    IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ]  (1.3.6.1.2.1.7.6.1 [ .1, .2 ])

getNetworkInterfaces:
    IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ])
    IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ]  (1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ])
    MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ]  (1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ])
    EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ]  (1.3.6.1.2.1.10.7.2.1 [ .19 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ])

getPackageList:
    HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ]  (1.3.6.1.2.1.25.6.3.1 [ .2 ])

getProcessList:
    HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]  (1.3.6.1.2.1.25.4.2.1 [ .1, .2, .4, .5 ])
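The tcpConnState and tcpConnectionState values returned by getNetworkConnectionList are integers; the TCP-MIB enumeration maps them to connection states, shown here for reference:

```python
# tcpConnState / tcpConnectionState enumeration from TCP-MIB
TCP_CONN_STATE = {
    1: "closed", 2: "listen", 3: "synSent", 4: "synReceived",
    5: "established", 6: "finWait1", 7: "finWait2", 8: "closeWait",
    9: "lastAck", 10: "closing", 11: "timeWait", 12: "deleteTCB",
}

print(TCP_CONN_STATE[5])  # established
```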
OpenVMS
Shell scripts
Method / Script:
initialise: init
getDeviceInfo *: deviceinfo
getFileSystems: show-mount
getHostInfo *: hostinfo
getIPAddresses: ifconfig_ip, tcpip_show_ip
getMACAddresses *: tcpip_show_mac
getNetworkInterfaces: show-network, ifdetails
getPackageList: show-product
getProcessList: show-system
SNMP
Method / MIB values (OID):

getDeviceInfo*, getHostInfo*:
    SNMPv2-MIB::sysDescr.0  (1.3.6.1.2.1.1.1.0)
    SNMPv2-MIB::sysName.0  (1.3.6.1.2.1.1.5.0)
    HOST-RESOURCES-MIB::hrSystemUptime.0  (1.3.6.1.2.1.25.1.1.0)
    HOST-RESOURCES-MIB::hrMemorySize.0  (1.3.6.1.2.1.25.2.2.0)

getIPAddresses:
    IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ])
    IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ]  (1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ])
    IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ]  (1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ])
    IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ]  (1.3.6.1.2.1.55.1.8.1 [ .1, .2 ])

getMACAddresses*:
    IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .6, .8 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .2, .4 ])

getNetworkConnectionList:
    TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ]  (1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ])
    TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ]  (1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ])
    UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ]  (1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ])
    TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ]  (1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ])
    IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ]  (1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ])
    UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ]  (1.3.6.1.2.1.7.5.1 [ .1, .2 ])
    IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ]  (1.3.6.1.2.1.7.6.1 [ .1, .2 ])

getNetworkInterfaces:
    IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ])
    IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ]  (1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ])
    MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ]  (1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ])
    EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ]  (1.3.6.1.2.1.10.7.2.1 [ .19 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ])

getPackageList:
    HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ]  (1.3.6.1.2.1.25.6.3.1 [ .2 ])

getProcessList:
    HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]  (1.3.6.1.2.1.25.4.2.1 [ .1, .2, .4, .5 ])
POWER HMC
Shell scripts
Method / Script:
initialise: init
getDeviceInfo *: device_info
getFileSystems: monhmc-rdisk
getHostInfo *: host_info
getIPAddresses: lshmc-n
getMACAddresses *: netstat-ine
getNetworkConnectionList: netstat
getNetworkInterfaces: lshmc_netstat
SNMP
Method / MIB values (OID):

getDeviceInfo*, getHostInfo*:
    SNMPv2-MIB::sysDescr.0  (1.3.6.1.2.1.1.1.0)
    SNMPv2-MIB::sysName.0  (1.3.6.1.2.1.1.5.0)
    HOST-RESOURCES-MIB::hrSystemUptime.0  (1.3.6.1.2.1.25.1.1.0)
    HOST-RESOURCES-MIB::hrMemorySize.0  (1.3.6.1.2.1.25.2.2.0)

getIPAddresses:
    IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ])
    IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ]  (1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ])
    IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ]  (1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ])
    IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ]  (1.3.6.1.2.1.55.1.8.1 [ .1, .2 ])

getMACAddresses*:
    IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .6, .8 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .2, .4 ])

getNetworkConnectionList:
    TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ]  (1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ])
    TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ]  (1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ])
    UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ]  (1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ])
    TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ]  (1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ])
    IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ]  (1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ])
    UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ]  (1.3.6.1.2.1.7.5.1 [ .1, .2 ])
    IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ]  (1.3.6.1.2.1.7.6.1 [ .1, .2 ])

getNetworkInterfaces:
    IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ])
    IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ]  (1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ])
    MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ]  (1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ])
    EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ]  (1.3.6.1.2.1.10.7.2.1 [ .19 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ])

getPackageList:
    HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ]  (1.3.6.1.2.1.25.6.3.1 [ .2 ])

getProcessList:
    HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]  (1.3.6.1.2.1.25.4.2.1 [ .1, .2, .4, .5 ])
Solaris
Shell scripts
Method / Script:
initialise: init
getDeviceInfo *: device_info
getDirectoryListing: ls
getFileContent: file_content
getFileInfo: Handled by the getFileMetadata and getFileContent calls.
getFileMetadata: file_metadata
getFileSystems
getHBAList
getHostInfo *: host_info
getIPAddresses: ifconfig_ip
getMACAddresses *: netstat_mac, ifconfig_mac
getNetworkConnectionList: netstat
getNetworkInterfaces: interface_commands
getPackageList: pkginfo
getPatchList: patch_list
getProcessList: ps
getProcessToConnectionMapping: lsof-i, pfiles
SNMP
Method / MIB values (OID):

getDeviceInfo*, getHostInfo*:
    SNMPv2-MIB::sysDescr.0  (1.3.6.1.2.1.1.1.0)
    SNMPv2-MIB::sysName.0  (1.3.6.1.2.1.1.5.0)
    HOST-RESOURCES-MIB::hrSystemUptime.0  (1.3.6.1.2.1.25.1.1.0)
    HOST-RESOURCES-MIB::hrMemorySize.0  (1.3.6.1.2.1.25.2.2.0)

getIPAddresses:
    IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ])
    IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ]  (1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ])
    IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ]  (1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ])
    IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ]  (1.3.6.1.2.1.55.1.8.1 [ .1, .2 ])

getMACAddresses*:
    IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .6, .8 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .2, .4 ])

getNetworkConnectionList:
    TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ]  (1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ])
    TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ]  (1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ])
    UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ]  (1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ])
    TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ]  (1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ])
    IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ]  (1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ])
    UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ]  (1.3.6.1.2.1.7.5.1 [ .1, .2 ])
    IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ]  (1.3.6.1.2.1.7.6.1 [ .1, .2 ])

getNetworkInterfaces:
    IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ])
    IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ]  (1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ])
    MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ]  (1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ])
    EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ]  (1.3.6.1.2.1.10.7.2.1 [ .19 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ])

getPackageList:
    HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ]  (1.3.6.1.2.1.25.6.3.1 [ .2 ])

getProcessList:
    HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]  (1.3.6.1.2.1.25.4.2.1 [ .1, .2, .4, .5 ])
    SUN-SNMP-MIB::psEntry [ psProcessID, psParentProcessID, psProcessUserName, psProcessUserID, psProcessName ]  (1.3.6.1.4.1.42.3.12.1 [ .1, .2, .8, .9, .10 ])
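A walk of hrSWRunTable returns one varbind per (column, row) pair, with hrSWRunIndex (the process ID) as the row index, so reassembling per-process records means grouping by that index. A sketch with hypothetical sample data:

```python
# (column suffix, hrSWRunIndex) -> value, as a column-by-column walk returns them
varbinds = {
    (".2", 101): "sshd", (".4", 101): "/usr/sbin/sshd", (".5", 101): "-D",
    (".2", 212): "java", (".4", 212): "/usr/bin/java",  (".5", 212): "-Xmx1g Main",
}

def group_rows(varbinds: dict) -> dict:
    """Regroup per-column varbinds into one record per process index."""
    rows: dict = {}
    for (column, index), value in varbinds.items():
        rows.setdefault(index, {})[column] = value
    return rows

print(group_rows(varbinds)[101][".2"])  # sshd
```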
Tru64
Shell scripts
Method / Script:
initialise: init
getDeviceInfo *: device_info
getDirectoryListing: ls
getFileContent: file_content
getFileInfo: Handled by the getFileMetadata and getFileContent calls.
getFileMetadata: file_metadata
getFileSystems: df
getHostInfo *: host_info
getIPAddresses: ifconfig_ip
getMACAddresses *: ifconfig_mac
SNMP
Method / MIB values (OID):

getDeviceInfo*, getHostInfo*:
    SNMPv2-MIB::sysDescr.0  (1.3.6.1.2.1.1.1.0)
    SNMPv2-MIB::sysName.0  (1.3.6.1.2.1.1.5.0)
    HOST-RESOURCES-MIB::hrSystemUptime.0  (1.3.6.1.2.1.25.1.1.0)
    HOST-RESOURCES-MIB::hrMemorySize.0  (1.3.6.1.2.1.25.2.2.0)

getIPAddresses:
    IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ])
    IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ]  (1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ])
    IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ]  (1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ])
    IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ]  (1.3.6.1.2.1.55.1.8.1 [ .1, .2 ])

getMACAddresses*:
    IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .6, .8 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .2, .4 ])

getNetworkConnectionList:
    TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ]  (1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ])
    TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ]  (1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ])
    UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ]  (1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ])
    TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ]  (1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ])
    IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ]  (1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ])
    UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ]  (1.3.6.1.2.1.7.5.1 [ .1, .2 ])
    IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ]  (1.3.6.1.2.1.7.6.1 [ .1, .2 ])

getNetworkInterfaces:
    IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ])
    IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ]  (1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ])
    MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ]  (1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ])
    EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ]  (1.3.6.1.2.1.10.7.2.1 [ .19 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ])

getPackageList:
    HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ]  (1.3.6.1.2.1.25.6.3.1 [ .2 ])

getProcessList:
    HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]  (1.3.6.1.2.1.25.4.2.1 [ .1, .2, .4, .5 ])
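ifOperStatus, polled by getNetworkInterfaces on every platform above, is likewise an integer enumeration; the IF-MIB values, shown here for reference:

```python
# ifOperStatus enumeration from IF-MIB
IF_OPER_STATUS = {
    1: "up", 2: "down", 3: "testing", 4: "unknown",
    5: "dormant", 6: "notPresent", 7: "lowerLayerDown",
}

print(IF_OPER_STATUS[1])  # up
```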
UnixWare
Shell scripts
Method / Script:
initialise: init
getDeviceInfo *: device_info
getDirectoryListing: ls
getFileContent: file_content
getFileInfo: Handled by the getFileMetadata and getFileContent calls.
getFileMetadata: file_metadata
getFileSystems: df
getHostInfo *: host_info
getIPAddresses: ifconfig_ip
getMACAddresses *: ifconfig_mac
getNetworkConnectionList: netstat
getNetworkInterfaces: ifconfig_if
getPackageList: pkginfo
getProcessList: ps
getProcessToConnectionMapping: lsof-i
SNMP
Method / MIB values (OID):

getDeviceInfo*, getHostInfo*:
    SNMPv2-MIB::sysDescr.0  (1.3.6.1.2.1.1.1.0)
    SNMPv2-MIB::sysName.0  (1.3.6.1.2.1.1.5.0)
    HOST-RESOURCES-MIB::hrSystemUptime.0  (1.3.6.1.2.1.25.1.1.0)
    HOST-RESOURCES-MIB::hrMemorySize.0  (1.3.6.1.2.1.25.2.2.0)

getIPAddresses:
    IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ])
    IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ]  (1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ])
    IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ]  (1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ])
    IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ]  (1.3.6.1.2.1.55.1.8.1 [ .1, .2 ])

getMACAddresses*:
    IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .2, .3, .6, .8 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .2, .4 ])

getNetworkConnectionList:
    TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ]  (1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ])
    TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ]  (1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ])
    UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ]  (1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ])
    TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ]  (1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ])
    IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ]  (1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ])
    UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ]  (1.3.6.1.2.1.7.5.1 [ .1, .2 ])
    IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ]  (1.3.6.1.2.1.7.6.1 [ .1, .2 ])

getNetworkInterfaces:
    IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ]  (1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ])
    IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ]  (1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ])
    MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ]  (1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ])
    EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ]  (1.3.6.1.2.1.10.7.2.1 [ .19 ])
    IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ]  (1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ])
    IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ]  (1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ])

getPackageList:
    HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ]  (1.3.6.1.2.1.25.6.3.1 [ .2 ])

getProcessList:
    HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]  (1.3.6.1.2.1.25.4.2.1 [ .1, .2, .4, .5 ])
VMware ESX
vSphere
Method Managed Object Reference HostSystem ServiceInstance getFileSystems Datastore Properties Notes
getDeviceInfo*
name content.about summary.name summary.type summary.capacity summary.freeSpace summary.url info.vmfs.uuid info.nas.remoteHost info.nas.remotePath info.nas.userName info.path VMFS volumes only NFS & CIFS volumes only NFS & CIFS volumes only CIFS volumes only LOCAL volumes only
getHBAInfo
HostSystem
getHostInfo*
HostSystem
hardware.memorySize hardware.systemInfo.model hardware.systemInfo.vendor hardware.systemInfo.uuid hardware.systemInfo.otherIdentifyingInfo[].identifierType.key hardware.systemInfo.otherIdentifyingInfo[].identifierValue hardware.cpuPkg.threadId hardware.cpuPkg.numCpuCores hardware.cpuInfo.threadId Not used if unset Not used if unset or "unknown" Not used if unset Not used if unset
getHostInfo * (continued):
config.hyperThread.active
config.network.dnsConfig.dnsDomainName
config.network.dnsConfig.hostName

getIPAddresses | HostSystem:
config.network.pnic[].device
config.network.pnic[].spec.ip.ipAddress
config.network.pnic[].spec.ip.subnetMask
config.network.pnic[].spec.ip.ipV6Config.ipV6Address[].ipAddress
config.network.pnic[].spec.ip.ipV6Config.ipV6Address[].prefixLength
config.network.vnic[].device
config.network.vnic[].spec.ip.ipAddress
config.network.vnic[].spec.ip.subnetMask
config.network.vnic[].spec.ip.ipV6Config.ipV6Address[].ipAddress
config.network.vnic[].spec.ip.ipV6Config.ipV6Address[].prefixLength
config.network.consoleVnic[].device
config.network.consoleVnic[].spec.ip.ipAddress
config.network.consoleVnic[].spec.ip.subnetMask
config.network.consoleVnic[].spec.ip.ipV6Config.ipV6Address[].ipAddress
config.network.consoleVnic[].spec.ip.ipV6Config.ipV6Address[].prefixLength

getPatchList | HostSystem:
configManager.patchManager (uses queryHostPatch() task)

Notes: several of these properties are not used if unset.
getNetworkInterfaces
HostSystem
getMACAddresses *
HostSystem
getVirtualMachines
VirtualMachine
Shell scripts
Method | Script
initialise | init
getDeviceInfo * | device_info
getDirectoryListing | ls
getFileContent | file_content
Handled by the getFileMetadata and getFileContent calls.
getFileMetadata | file_metadata
getFileSystems | df
getHBAInfo | hba_sysfs, hba_procfs, hba_hbacmd, hba_lputil
getHostInfo * | host_info
getIPAddresses | ip_addr_ip, ifconfig_ip
getMACAddresses * | ip_link_mac, ifconfig_mac
getNetworkConnectionList | netstat
getNetworkInterfaces | ip_link_if, ifconfig_if
getPackageList | rpmx, rpm, dpkg
getProcessList | ps
getProcessToConnectionMapping | lsof-i
VMware ESXi
vSphere
Method Managed Object Reference Properties Notes
getDeviceInfo *
HostSystem, ServiceInstance
name
content.about

getFileSystems
Datastore
summary.name
summary.type
summary.capacity
summary.freeSpace
summary.url
info.vmfs.uuid (VMFS volumes only)
info.nas.remoteHost (NFS & CIFS volumes only)
info.nas.remotePath (NFS & CIFS volumes only)
info.nas.userName (CIFS volumes only)
info.path (LOCAL volumes only)
getHBAInfo
HostSystem
getHostInfo*
HostSystem
hardware.memorySize
hardware.systemInfo.model
hardware.systemInfo.vendor
hardware.systemInfo.uuid
hardware.systemInfo.otherIdentifyingInfo[].identifierType.key
hardware.systemInfo.otherIdentifyingInfo[].identifierValue
hardware.cpuPkg.threadId
hardware.cpuPkg.numCpuCores
hardware.cpuInfo.threadId
config.hyperThread.active
config.network.dnsConfig.dnsDomainName
config.network.dnsConfig.hostName

Notes: several of these properties are not used if unset (or, in one case, if set to "unknown").
getIPAddresses
HostSystem
config.network.pnic[].device
config.network.pnic[].spec.ip.ipAddress
config.network.pnic[].spec.ip.subnetMask
config.network.pnic[].spec.ip.ipV6Config.ipV6Address[].ipAddress
config.network.pnic[].spec.ip.ipV6Config.ipV6Address[].prefixLength
config.network.vnic[].device
config.network.vnic[].spec.ip.ipAddress

Notes: several of these properties are not used if unset.
config.network.vnic[].spec.ip.subnetMask
config.network.vnic[].spec.ip.ipV6Config.ipV6Address[].ipAddress
config.network.vnic[].spec.ip.ipV6Config.ipV6Address[].prefixLength
config.network.consoleVnic[].device
config.network.consoleVnic[].spec.ip.ipAddress
config.network.consoleVnic[].spec.ip.subnetMask
config.network.consoleVnic[].spec.ip.ipV6Config.ipV6Address[].ipAddress
config.network.consoleVnic[].spec.ip.ipV6Config.ipV6Address[].prefixLength

getNetworkInterfaces | HostSystem:
config.network.pnic[].device
config.network.pnic[].driver
config.network.pnic[].mac
config.network.pnic[].spec.linkSpeed.speedMb
config.network.pnic[].spec.linkSpeed.duplex
config.network.vnic[].device
config.network.vnic[].spec.mac
config.network.consoleVnic[].device
config.network.consoleVnic[].spec.mac

getMACAddresses * | HostSystem:
config.network.pnic[].mac
config.network.vnic[].spec.mac
config.network.consoleVnic[].spec.mac

getPatchList | HostSystem:
configManager.patchManager (uses queryHostPatch() task)

Notes: several of these properties are not used if unset.
getVirtualMachines
VirtualMachine
Shell scripts
Method | Script
initialise | init
getDeviceInfo * | device_info
getDirectoryListing | ls
getFileContent | file_content
Handled by the getFileMetadata and getFileContent calls.
getFileMetadata | file_metadata
getFileSystems | df
getHostInfo * | host_info
getIPAddresses | vim-cmd-addresses
getMACAddresses * | vim-cmd-mac
getNetworkInterfaces | vim-cmd-if-details
getProcessList | ps
Mainframe
Some of the detail data that the z/OS Discovery Agent collects is dependent on additional MainView products being installed. The following table shows the MainView products used to obtain additional detail information. See the BMC Discovery for Z/OS documentation for further information. Method getMainframeInfo * getMFPart * getApplication Script Views View Z View Z Views 49Z, 4HZ and 4WZ (requested as View 49HWZ) Requires MainView for WebSphere, Application Server, and MainView Transaction Analyzer products to be installed. View Z Views 1IZ and 1XZ (requested as View 1IXZ) Requires MainView for DB2 and MainView for IMS products to be installed. Views 1IJZ, 1ILZ and 1IKZ (requested as View 1IJKLZ) Requires MainView for DB2 product to be installed. View Z View #Views 5UZ, 5VZ and 5NZ (requested as View 5NUVZ) Requires MainView for WebSphere MQ product to be installed. Views 0Z, 1Z, 2Z, 3Z, 4Z, 5Z, 6Z and 7Z (requested as View 01234567Z) View View Z View -@ View 0YZ Requires MainView for CICS and MainView for IMS products to be installed. Version 9.0 SP1 View 0YZ Requires MainView for CICS and MainView for IMS products to be installed.
getCouplingFacility getDatabase
getDatabaseDetail
getTransactionProgram
IBM i
IBM i (AS/400) discovery is undertaken using SNMP. SNMP discovery is supported for all devices with an accessible SNMP agent. Discovery supports SNMP v1, v2c, and v3. For some older platforms (for example, Netware) the use of SNMP v1 may be required. This is defined on a per-credential basis. Note that only read (GET, GETNEXT, GETBULK) access is required. This page shows the methods, MIB values, and OIDs used.

Method | MIB Values | OID
SNMPv2-MIB::sysDescr.0 | 1.3.6.1.2.1.1.1.0
SNMPv2-MIB::sysName.0 | 1.3.6.1.2.1.1.5.0
HOST-RESOURCES-MIB::hrSystemUptime.0 | 1.3.6.1.2.1.25.1.1.0
HOST-RESOURCES-MIB::hrMemorySize.0 | 1.3.6.1.2.1.25.2.2.0
IP-MIB::ipAddrTable [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ] | 1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ]
IF-MIB::ifTable [ ifIndex, ifDescr, ifSpeed, ifPhysAddress ] | 1.3.6.1.2.1.2.2.1 [ .1, .2, .5, .6 ]

getNetworkConnectionList
TCP-MIB::tcpConnTable [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ] | 1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ]
UDP-MIB::udpConnTable [ udpLocalAddress, udpLocalPort ] | 1.3.6.1.2.1.7.5.1 [ .1, .2 ]

getPackageList
HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ] | 1.3.6.1.2.1.25.6.3.1 [ .2 ]

getProcessList
HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ] | 1.3.6.1.2.1.25.4.2.1 [ .1, .2, .4, .5 ]
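The symbolic MIB names in the table above correspond one-to-one with numeric OIDs under the standard mib-2 subtree (1.3.6.1.2.1). As a minimal illustration (not BMC code; Discovery resolves these names internally), the mapping can be expressed as a plain lookup table:

```python
# Illustrative lookup of a few symbolic-name-to-OID mappings used by
# IBM i SNMP discovery. This is a sketch for explanation only, not an
# API of BMC Atrium Discovery.
MIB2 = "1.3.6.1.2.1"  # the standard mib-2 subtree

OIDS = {
    "SNMPv2-MIB::sysDescr.0": f"{MIB2}.1.1.0",
    "SNMPv2-MIB::sysName.0": f"{MIB2}.1.5.0",
    "HOST-RESOURCES-MIB::hrSystemUptime.0": f"{MIB2}.25.1.1.0",
    "HOST-RESOURCES-MIB::hrMemorySize.0": f"{MIB2}.25.2.2.0",
    "HOST-RESOURCES-MIB::hrSWInstalledTable": f"{MIB2}.25.6.3.1",
    "HOST-RESOURCES-MIB::hrSWRunTable": f"{MIB2}.25.4.2.1",
}

def in_mib2(oid: str) -> bool:
    """Return True if the OID lives under the mib-2 subtree."""
    return oid == MIB2 or oid.startswith(MIB2 + ".")

for name, oid in OIDS.items():
    assert in_mib2(oid), name

print(OIDS["SNMPv2-MIB::sysDescr.0"])  # 1.3.6.1.2.1.1.1.0
```

Every OID that Discovery requests via SNMP for IBM i sits under this subtree, which is why read-only (GET, GETNEXT, GETBULK) access to mib-2 is sufficient.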
If no matching credentials are found, this is stated in the banner. The Show matching credentials button is available on the following credential pages:
Hosts
SNMP
vCenter
vSphere
Mainframe
Database credentials
Credential: Contains the information (for example, database credentials, driver, IP address) to create a connection from BMC Atrium Discovery to the target database.
Query: The SQL query that is passed to the target database to extract the required information. The query is supplied by the pattern.
A Database Credential Group is created by the activation of a Database Pattern. A Database Pattern has its type defined as sql_discovery.
Deep database discovery

Any database driver listed on the JDBC management page can be used to create database connections for querying from patterns. The TKU shipped with BMC Atrium Discovery provides patterns for deep discovery of the following databases:
IBM DB2 (mainframe only)
IBM Information Management System (IMS) (mainframe only)
Microsoft SQL Server
MySQL
Oracle
Sybase
definitions MySQLDetails 1.0
    """Queries for MySQL to recover detailed content information"""  // provides the description text for the Credential Group

    type := sql_discovery;  // sql_discovery places the credential group in the Databases
    group := "MySQL";       // tab with the heading MySQL. The group also populates the
                            // Name field.

    define showDatabases    // defines a query called showDatabases
        """Return a list of defined databases"""  // provides the description text for the showDatabases query
        query := "SHOW DATABASES";                // the query
    end define;

    define showTables       // defines a query called showTables
        """Return a list of defined tables within the given database"""  // provides the description text for the showTables query
        query := "SELECT table_name FROM information_schema.tables WHERE table_schema = %db_name%";  // the query
        parameters := db_name;
    end define;

end definitions;
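The showTables query takes a db_name parameter, referenced in the SQL as the %db_name% placeholder. The substitution is performed by Discovery before the query is sent to the target database; as a rough sketch of that behavior (the substitution mechanics shown here are an assumption for illustration, not BMC's implementation):

```python
# A minimal sketch of filling a %name% placeholder in a pattern-supplied
# query. How BMC Atrium Discovery actually performs this substitution is
# internal; this only illustrates the placeholder syntax shown above.
import re

def fill_query(query: str, **params: str) -> str:
    """Replace each %name% placeholder with the matching parameter value."""
    def repl(m: re.Match) -> str:
        name = m.group(1)
        if name not in params:
            raise KeyError(f"missing parameter: {name}")
        # Quote as a SQL string literal, doubling embedded single quotes.
        return "'" + params[name].replace("'", "''") + "'"
    return re.sub(r"%(\w+)%", repl, query)

query = ("SELECT table_name FROM information_schema.tables "
         "WHERE table_schema = %db_name%")
print(fill_query(query, db_name="orders"))
# SELECT table_name FROM information_schema.tables WHERE table_schema = 'orders'
```

The db_name value itself is supplied at runtime by the pattern that triggers the query.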
1. Ensure that the MySQL_AB.MySQL_RDBMS_Extended.DatabasesAndTables pattern is activated.
2. From the Discovery home page, click Credentials.
3. Click the Databases tab.
4. Click the MySQL credential group heading.
5. Click the Credentials tab.
6. Select Create New Credential.
7. Enter the following information:

Field | Description
Name | Enter a name for the credential (for example, ExtendedMySQL).
Enabled | A checkbox to define whether or not the credential is enabled.
Description | Enter a free text description of the credential.
Username | The username used to log in to the target database.
Password | The password corresponding to the username.
Driver | Select an appropriate database driver from the drop-down list. When one is selected, additional fields are added to the form depending on the driver selected.
IP Range | Select "Match All" to match all endpoints. Deselect it to enter values that will be used to determine if this credential is suitable for a particular endpoint. They can be one or more of the following, separated by commas:
IPv4 address: for example, 192.168.1.100.
IPv4 range: for example, 192.168.1.100-105, 192.168.1.100/24, or 192.168.1.*.
IPv6 address: for example, fda8:7554:2721:a8b3::3.
IPv6 network prefix: for example, fda8:7554:2721:a8b3::/64.
The following address types cannot be specified:
IPv6 link local addresses (prefix fe80::/64)
IPv6 multicast addresses (prefix ff00::/8)
IPv4 multicast addresses (224.0.0.0 to 239.255.255.255)
As you enter text, the user interface (UI) divides it into pills (discrete editable units) when you enter a space or a comma. According to the text entered, the pill is formatted to represent one of the previous types, or presented as invalid. Invalid pills are labelled with a question mark. You can also paste a list of IP addresses or ranges into this field, although there is no paste option on the context-sensitive (right-click) menu. In Firefox, pasting a comma-separated list of IP address information into the Range field may crash the browser; a space-separated list can be used without any problems.
To edit a pill, click the pill body and edit the text. To delete a pill, click the X icon to the right of the pill, or click to edit and delete all of the text. To view the unformatted source text, click the source toggle switch. The source view is useful for copying to a text editor or spreadsheet. Click the source toggle switch again to see the formatted pill view. You can also sort the pill view by value or type. Value sorts by ascending numerical value. Type sorts by type, placing invalid pills first. Underneath the entry field is a filter box. Enter text in the filter box to only show matching pills.
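The validation rules described above (pills classified as v4, v6, or invalid, with multicast and link-local addresses rejected) can be approximated with the Python standard library. This is a sketch for illustration only; the labels are assumptions, BMC Atrium Discovery performs this validation in its own UI, and the dash and wildcard range forms (192.168.1.100-105, 192.168.1.*) are not handled here:

```python
# Sketch of classifying one "pill" of endpoint-matching text, using only
# the stdlib ipaddress module. Not BMC code; dash/wildcard ranges are
# deliberately left out and fall through to "invalid".
import ipaddress

def classify(entry: str) -> str:
    """Classify a single entry as 'v4', 'v6', or 'invalid'."""
    try:
        ip = ipaddress.ip_address(entry)
        if ip.version == 4:
            # IPv4 multicast (224.0.0.0-239.255.255.255) cannot be specified.
            return "invalid" if ip.is_multicast else "v4"
        # IPv6 multicast (ff00::/8) and link-local (fe80::/64) are rejected.
        return "invalid" if (ip.is_multicast or ip.is_link_local) else "v6"
    except ValueError:
        pass
    try:
        # CIDR forms such as 192.168.1.100/24 or fda8:7554:2721:a8b3::/64.
        net = ipaddress.ip_network(entry, strict=False)
        return "invalid" if net.is_multicast else f"v{net.version}"
    except ValueError:
        return "invalid"

print(classify("192.168.1.100"))           # v4
print(classify("fda8:7554:2721:a8b3::3"))  # v6
print(classify("224.0.0.1"))               # invalid (IPv4 multicast)
print(classify("fe80::1"))                 # invalid (IPv6 link-local)
```

Invalid entries correspond to the pills the UI labels with a question mark.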
Oracle Database Server

Query
Obtaining a list of Schemas | SELECT username FROM all_users ORDER BY username
Obtaining a list of Tables within a Schema | SELECT table_name FROM all_tables WHERE owner = %schema%
Obtaining a list of Tablespaces | SELECT * FROM dba_tablespaces
Obtaining a list of DataFiles within a Tablespace | SELECT * FROM DBA_DATA_FILES WHERE TABLESPACE_NAME = %tablespace_name%
The following table describes the additional fields:

Field | Description
Database | The database name. If you leave the Match Regular Expression check box clear, the connection will be made to any database whose name is supplied by the pattern. If the check box is selected, the connection will be made to any database whose name matches the regular expression. Select the default value check box and specify a value to be used if none is specified by a pattern.
Database Name | The database name (Informix). If you leave the Match Regular Expression check box clear, the connection will be made to any database whose name is supplied by the pattern. If the check box is selected, the connection will be made to any database whose name matches the regular expression. Select the default value check box and specify a value to be used if none is specified by a pattern.
Instance | The database instance name (Microsoft SQL Server). If you leave the Match Regular Expression check box clear, the connection will be made to any database whose name is supplied by the pattern. If the check box is selected, the connection will be made to any database whose name matches the regular expression. Select the default value check box and specify a value to be used if none is specified by a pattern.
Server Name | The name of the Informix server as it appears in the sqlhosts file. If you leave the Match Regular Expression check box clear, the connection will be made to any server name supplied by the pattern. If the check box is selected, the connection will be made to any database whose name matches the regular expression. Select the default value check box and specify a value to be used if none is specified by a pattern.
System ID (SID) | The Oracle System ID, or instance running on the Oracle host. If you leave the Match Regular Expression check box clear, the connection will be made to any database whose SID is supplied by the pattern. If the check box is selected, the connection will be made to any database whose SID matches the regular expression. Select the default value check box and specify a value to be used if none is specified by a pattern.
Service name | The Oracle service name, an alias to an instance or to multiple instances in, for example, a clustered environment. If you leave the Match Regular Expression check box clear, the connection will be made to any database whose service name is supplied by the pattern. If the check box is selected, the connection will be made to any database whose service name matches the regular expression. Select the default value check box and specify a value to be used if none is specified by a pattern.
Port | The port number to use to connect to the database. If you leave the Match Regular Expression check box clear, any port supplied by the pattern will be used to connect to the database. If the check box is selected, any port supplied which matches the regular expression will be used to connect to the database. Select the default value check box and specify a value to be used if none is specified by a pattern.
Additional properties | Any additional JDBC parameters to use when making the connection, specified as key=value pairs in a semicolon-separated list. Select the default value check box and specify a value to be used if none is specified by a pattern.
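The additional JDBC parameters are specified as key=value pairs in a semicolon-separated list. A small sketch of parsing that format (illustration only; the parameter names used in the example are hypothetical, and the parsing actually done by BMC Atrium Discovery is not documented here):

```python
# Sketch of parsing a semicolon-separated key=value list, the stated
# format of the additional JDBC parameters field. Example keys below are
# hypothetical placeholders, not documented BMC parameter names.
def parse_jdbc_properties(text: str) -> dict:
    props = {}
    for pair in text.split(";"):
        pair = pair.strip()
        if not pair:
            continue  # tolerate a trailing semicolon
        key, sep, value = pair.partition("=")
        if not sep:
            raise ValueError(f"not a key=value pair: {pair!r}")
        props[key.strip()] = value.strip()
    return props

print(parse_jdbc_properties("loginTimeout=10;ssl=true;"))
# {'loginTimeout': '10', 'ssl': 'true'}
```

Which keys are meaningful depends entirely on the JDBC driver selected for the credential.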
1. To save the details, click Apply. 2. Click the Details tab to return to the main summary page for the Credential Group.
Middleware credentials
A Middleware Credential Group is a container for information used to query middleware such as web and application servers and similar tools that support application development and delivery. They contain one or more credentials and one or more queries. Credential: contains the login credentials and IP address required to create a connection from BMC Atrium Discovery to the middleware target. Query: the query which is passed to the Expert Discovery Module (EDM). Only one query is supported by EDMs, that query is scan which triggers a discovery scan of the target. A Middleware Credential Group is created by the activation of a Middleware Pattern. A Middleware Pattern has its type defined as one of the following: tomcat_discovery weblogic_discovery
definitions EDMRequest 1.0
    """Queries to recover detailed Tomcat information"""  // provides the description text for the Credential Group

    type := tomcat_discovery;  // tomcat_discovery places the credential group in
    group := "Apache Tomcat";  // the Middleware tab with the heading Apache Tomcat.
                               // The group also populates the Name field.

    define scan                // defines a query called scan
        """EDMs support only one query, 'scan'."""  // provides the description text for the query
        query := "scan";       // the query
    end define;

end definitions;
Typically, the recommended procedure to add or edit vCenter credentials is from the Devices tab. For more information about adding or editing vCenter credentials from the Devices tab, see Configuring vCenter credentials.
Type Credentials
Queries Actions
Details
This tab has the following sections:
Details:
- Name: Name of the credential.
- Description: A description of what the credential does and how (for example, Queries for getting licensing information for ESX/ESXi hosts via vCenter).
- Type: The type of credential. The type is displayed as vCenter.
- Connection setup: Describes whether the connection setup is dynamic or static.
Related Patterns:
- Pattern: The list of related patterns. Each listed pattern links to the view pattern page.
- Details: The list of queries related to the patterns. Each listed query links to the view query page.
Related Integration Results by Pattern:
- Pattern: The pattern that triggered this result. Each listed pattern links to the view pattern page.
- State: Describes whether the pattern is active or not.
- Results: Displays the integration result and links to the Integration Result page.
Credentials

Name: Displays the name of the credentials. The credential name links to the credential details tab.
TPL: Displays the pattern that triggered this result.
Create: To create a new credential, select this option. The Create New Credential page is displayed. However, the recommended procedure to add new vCenter credentials is from the Devices tab. For more information about adding vCenter credentials from the Devices tab, see Configuring vCenter credentials.
Actions: This list contains the following options:
- Edit: Select this to update existing vCenter credentials. However, the recommended procedure to edit vCenter credentials is from the Devices tab. For more information about editing vCenter credentials from the Devices tab, see Configuring vCenter credentials.
- Test: Select this to test the credential. For more information, see Testing vCenter credentials.
- Copy: To copy the credential, select this option.
- Delete: To delete the credential, select this option.
Queries

Name: Displays the name of the queries. Each query is a link to the view query details page. It also displays whether the query was successfully used or not.
TPL: Displays the pattern that triggered this result.
Query: Displays the query name.
Actions: To see the query details, select View.
Show: From the list, select the number of items to view from this page. The available options are 5, 10, 20, 30, and 50.
1. On the vCenter Credentials page, click the credential to test. The vCenter details page is displayed.
2. Click the Credentials tab.
3. From the Actions list, click Test. A dialog box is displayed with the credential values, and a field in which you enter the IP address against which to test the credential.
4. Enter the target IP address to test.
5. Click Test. The page is refreshed to show that the test is in progress. When complete, the results are shown on the Device Credentials page; this may take a few minutes.
The VMware vSphere API is used to discover VMware ESX and ESXi systems. When the vSphere API cannot be accessed, discovery falls back to an ssh login to access the underlying Linux operating system.
vSphere credentials are not host credentials The vSphere credentials that you enter in this page are only used for the vSphere API. If this is not accessible and ssh discovery is attempted, a separate host login credential for the endpoint is required.
From the Management System tab, you can manually add new vSphere credentials and update existing vSphere credentials. However, it is much easier to add or update vSphere credentials from the Devices tab. When you edit a vSphere credential on the Devices tab, the same changes are reflected for that credential on the Management System tab.
Typically, the recommended procedure to configure the vSphere credentials is from the Devices tab. For more information about configuring vSphere credentials from the Devices tab, see Configuring vSphere credentials.
Type Credentials
Queries
Actions
A list with the following options: View: Select this to view the details of the credential. For more information, see Viewing vSphere credential details. Edit: You can select this to update existing vSphere credentials. However, the recommended procedure to edit vSphere credentials is from the Devices tab. For more information about editing vSphere credentials from the Devices tab, see Configuring vSphere credentials. Delete: This is a dimmed field.
The vSphere page contains the following tabs:
Details: Contains details about the vSphere credential, related patterns, and integration results.
Credentials: Displays the available vSphere credentials. The tab also displays the available number of credentials in brackets.
Queries: Displays the list of queries related to the patterns.
To see the vSphere credential details, click a tab. The following table describes the fields for each tab:

Details

This tab has the following sections:
Details:
- Name: Name of the credential.
- Description: A description of what the credential does and how (for example, vSphere queries for getting licensing information from ESX/ESXi hosts).
- Type: The type of credential. The type is displayed as vSphere.
- Connection setup: Describes whether the connection setup is dynamic or static.
Related Patterns:
- Pattern: The list of related patterns. Each listed pattern links to the view pattern page.
- Details: The list of queries related to the patterns. Each listed query links to the view query page.
Related Integration Results by Pattern:
- Pattern: The pattern that triggered this result. Each listed pattern links to the view pattern page.
- State: Describes whether the pattern is active or not.
- Results: Displays the integration result and links to the Integration Result page.
Credentials

Name: Displays the name of the credentials. The credential name links to the credential details tab.
TPL: Displays the pattern that triggered this result.
Create: To create a new credential, select this option. However, the recommended procedure to add new vSphere credentials is from the Devices tab. For more information about adding vSphere credentials from the Devices tab, see Configuring vSphere credentials.
Actions: This list contains the following options:
- Edit: Select this to update existing vSphere credentials. However, the recommended procedure to edit vSphere credentials is from the Devices tab. For more information about editing vSphere credentials from the Devices tab, see Configuring vSphere credentials.
- Test: Select this to test the credential. For more information, see Testing vSphere credentials.
- Copy: To copy the credential, select this option.
- Delete: To delete the credential, select this option.
Queries

Name: Displays the name of the queries. Each query is a link to the view query details page. It also displays whether the query was successfully used or not.
TPL: Displays the pattern that triggered this result.
Query: Displays the query name.
Actions: To see the query details, select View.
Show: From the list, select the number of items to view from this page. The available options are 5, 10, 20, 30, and 50.
The following information is shown for each credential:

Credential link: This is the first part of the heading link for the credential. The range of IP addresses on which this credential is intended to be used. A link is also provided showing the last successful use of the credential. This links to the Discovery Access for that use.
Description: A free text description of the credential supplied by the user who created the credential.
Usage: A summary of the success rate when the credential has been used, information about failures, and links to DiscoveryAccesses, credential lists and other useful diagnostic pages.
Actions: A list with the following options:
- Edit: To edit the credential, select Edit. See To configure mainframe credentials below.
- Disable: To disable a credential, select Disable. The credential is marked as disabled in the credential list. When a credential is disabled, this option is replaced with an Enable option. To enable the credential, click Enable.
- Delete: Delete the credential.
- Test: Select this to test the credential. See Testing mainframe credentials below for more information.
- Move to top: Moves the credential to the top of the list.
- Move to bottom: Moves the credential to the bottom of the list.
Matching Criteria
The IP address or addresses against which to use this mainframe credential. Select "Match All" to match all endpoints. Deselect it to enter IP address information which determines whether this credential is suitable for a particular endpoint. Enter IP address information in one of the following formats: IPv4 address (for example 192.168.1.100). Labelled v4. IPv4 range (for example 192.168.1.100-105, 192.168.1.100/24, or 192.168.1.*). Labelled v4.
The following address type cannot be specified:
IPv4 multicast addresses (224.0.0.0 to 239.255.255.255)
As you enter text, the user interface (UI) divides it into pills (discrete editable units) when you enter a space or a comma. According to the text entered, the pill is formatted to represent one of the previous types, or presented as invalid. Invalid pills are labelled with a question mark. You can also paste a list of IP addresses or ranges into this field, although there is no paste option on the context-sensitive (right-click) menu. In Firefox, pasting a comma-separated list of IP address information into the Range field may crash the browser; a space-separated list can be used without any problems.
To edit a pill, click the pill body and edit the text. To delete a pill, click the X icon to the right of the pill, or click to edit and delete all of the text. To view the unformatted source text, click the source toggle switch. The source view is useful for copying to a text editor or spreadsheet. Click the source toggle switch again to see the formatted pill view. You can also sort the pill view by value or type. Value sorts by ascending numerical value. Type sorts by type, placing invalid pills first. Underneath the entry field is a filter box. Enter text in the filter box to only show matching pills.
Pills are not currently supported in Opera.

Enabled: A checkbox enabling you to enable or disable the credential.
Username: Enter the z/OS username with which to log in to the mainframe computer.
Set password: When updating a credential, the password is shown as a series of asterisks in this field and it cannot be edited. To enter a new password, select the check box. The password entry field is cleared. Enter the password into the password entry field; the password text is not echoed to the screen.
Description: Enter a free text description of the credential.
Custom Mainframe Host Server Port: The port to use to connect to the mainframe. By default this is 3940. To use a different port, select the Enable custom mainview port? check box and choose a port number from the list. The list is populated with port numbers specified on the Discovery Configuration page.
To test an existing mainframe credential:
1. From the mainframe credential list, select Actions => Test.
2. Enter the IP address for which you want to test credentials.
3. Click Test. When the test completes, the Management System Credentials page is refreshed with the result of either success or failure. By clicking through the state link, you can view the results of the credential test in more detail.
The credential test result page shows the discovery methods enabled in BMC Atrium Discovery, whether corresponding views are detected in the Discovery for z/OS agent, and the z/OS agent version.
The mainframe credential test page displays the credential tests that are currently in progress and the tests run in the last ten minutes. You can delete completed tests, or run them again. However, you cannot cancel tests that are currently in progress. You can also test whether credentials exist for any IP address on your network. To delete a completed credential test, in the Actions column, click Delete. To run a credential test again, in the Actions column, click Retry.
From the Discovery section of the Administration tab, select Vault Management. From this page you can open or close the credential vault and specify a passphrase to secure it. You can also change the passphrase or remove it.
Setting a passphrase
To set a passphrase: 1. Enter the new passphrase in the New Passphrase field. 2. Repeat it in the Verify New Passphrase field. 3. Click Set Passphrase. The passphrase is now set.
Changing a passphrase
To change a passphrase: 1. Enter the new passphrase in the New Passphrase field. 2. Repeat it in the Verify New Passphrase field. 3. Click Change Passphrase. The passphrase is now changed.
Setting or changing a passphrase does not change whether the vault is open or closed.
Clearing a passphrase
To clear a passphrase: 1. Enter the current passphrase in the Current Passphrase field. 2. Click Clear Passphrase. The passphrase is now cleared.
Where a credential has been used successfully, a link showing the number of successful uses is displayed. If there is a single use then it links to the DiscoveryAccess page for that credential use. If there are multiple uses, it links to a list of DiscoveryAccess pages. Where a credential has failed periodically but has been used successfully, a link is provided to the last successful Discovery Access, and links are also provided to a list of Session Results for failure cases. Where a credential has never been used successfully there is a link to the SessionResult page for that attempted access.
Credential Windows proxy note

The Credential Windows proxy uses credentials defined on the Login Credentials page, so the usage statistics are displayed on the Login Credentials page, rather than the Windows proxy Management page.
DiscoveryAccess page
A DiscoveryAccess is a single access to a Discovery Endpoint. When an endpoint is scanned, a DiscoveryAccess node is created which records information on the interaction that BMC Atrium Discovery has with that endpoint. When BMC Atrium Discovery is unable to access a host, the DiscoveryAccess is a good starting point for troubleshooting. If a DiscoveryAccess is about to be deleted as part of DDD aging, a red banner stating "This node will be removed shortly as part of DDD aging" is displayed.
Example DiscoveryAccess
The following screens show DiscoveryAccess pages for a Unix host, a Windows host, a network device, and a mainframe computer.
Endpoint Section

Attribute | Description
Endpoint | The endpoint (IP address) scanned in this discovery access.
Start Time | The time at which the scan started.
End Time | The time at which the scan finished.
Total Duration | The time it took to discover and process the data (Start Time to End Time).
Discovery Run | A link to the Discovery Run that this DiscoveryAccess is part of.
Previous Discovery Access | A link to the previous DiscoveryAccess with the same endpoint. It is not displayed if it is the first in a list.
Next Discovery Access | A link to the next DiscoveryAccess with the same endpoint. It is not displayed if it is the last in a list.
Device Summary | A read-only summary showing the Host Type, Discovered OS Type, and Discovered OS Version (for example, Unix Server, Debian Linux, 4.0).
Host: A link to the host that was created or updated as part of the scan. It is not displayed if no host was created or updated.
State: The current state of the DiscoveryAccess. This may be Started or Finished.
End State: The end state of the discovery run. Successful discovery results in GoodAccess; during processing you may see first DeviceIdentified and later HostInferred. For optimised discovery, that is, discovery started but stopped for a reason, you may see Opt1stScan, OptNotBestIP, OptRemote, or OptAlreadyProcessing. For unsuccessful discovery you may see NoResponse, UnsupportedDevice, NoAccess, Excluded, or Error. See the table below for information on how the end state and result relate to the discovery scenario.
Result: The result of the DiscoveryAccess. This may be Skipped, NoResponse, Success, NoAccess, or Error. See the table below for information on how the end state and result relate to the discovery scenario.
Errors: If there were any errors detected by the ECA engine during discovery, this links to those errors. Examples are:
  - Error detected by the ECA engine: This is typically an internal rule error or pattern error, usually triggered when the data returned does not match what is expected.
  - Unable to get the deviceInfo: ExecutionFailure: Discovery attempted to run a command but a failure was reported.
  - Unable to get the deviceInfo: NoAccessMethod: No access method; this is frequently because it is not meaningful, such as getting patches on Linux.
  - Unable to get the deviceInfo: NoSuchDevice: There is a NoResponse end state; that is, nothing was detected on the IP address.
  - Unable to get the deviceInfo: TRANSIENT
  - Unable to get the deviceInfo: TRANSIENT_CallTimedout: Probably caused by the reasoning timeout; the call to discovery is taking too long to complete.
  - Unable to get the deviceInfo: UNKNOWN: Some other CORBA error. Contact Customer Support.
Session Results: If there were any failures attempting to get a session on the endpoint, this links to a list of failures and successes. See below for details.
The page also shows the following:

- Whether this Discovery Access originated from this appliance, came from a scan file, or was consolidated from a scanning appliance.
- A link to the Windows proxy or Credential used in this Discovery Access. The link name is a hash of details of the credential; it does not provide the credential itself. You are not shown the Credential pages if you do not have permission to view them. This field is not displayed as a link on the consolidation appliance for scans which have been consolidated from a scanning appliance.
- Discovery Start Time: The time at which discovery started on the scanning appliance. This field is only displayed on the consolidation appliance for scans which have been consolidated from a scanning appliance.
- Discovery End Time: The time at which discovery completed on the scanning appliance. This field is only displayed on the consolidation appliance for scans which have been consolidated from a scanning appliance.
- Session Establishment Duration: The time it took to establish the session, that is, to log on to the host.
- Total Discovery Duration: The total time taken by discovery for this endpoint.
- On Hold Since: If the discovery has been paused, the elapsed time since it was paused.
The discovery method used. The methods available on each platform are shown on the following pages: UNIX and related operating systems Windows operating systems Mainframe IBM i
The status of the discovery access for the method. This is OK or the failure reason. The name of the script used, if any. The access method used to connect to the endpoint (for example, ssh, telnet, rlogin, and so on). A link to the node or nodes created by this discovery method.
The additional discovery method used. These are discovery methods called by patterns, for example: getFileInfo getFileMetadata runCommand getDirectoryListing
The status of the discovery access for the method. This is OK or the failure reason summarised into links. The name of the script used. The access method used to connect to the endpoint. For example: ssh, telnet, rlogin, and so forth. A link to the node or nodes created by this discovery method.
DiscoveryAccess state
The following table shows the possible discovery scenarios and the resulting end_state and result attributes of the DiscoveryAccess node. In addition to the end_state and result attributes on the DiscoveryAccess there is also a reason attribute that contains further details. There is no fixed set of values for reason. The following table may be of use in understanding the results of an attempted access.
- IP Injected, In Exclude list. Example: The user has requested a scan of an IP that is in an exclude range. end_state=Excluded, result=Skipped.
- IP Injected, Already Processing this IP. Example: The user has requested a scan of an IP that is currently being processed as part of an earlier scan. end_state=OptAlreadyProcessing, result=Skipped.
- IP Injected, Second Scan Optimization (Best IP). Example: The user has requested a scan of an IP on a host that has already been scanned using another IP address that is considered to be the best IP to use to scan that host. end_state=OptNotBestIP, result=Skipped.
- IP Injected, No IP Response. Example: No response was received from the scanned IP address. end_state=NoResponse, result=NoResponse.
- IP Injected, IP Response. Example: After a sweep scan of an IP address, Discovery has received a response. No identification has taken place other than that there is a device of some description that has responded to the sweep scan. If the device is subsequently not recognized, the end_state is set to UnsupportedDevice as below.
- IP Injected, IP Response, Device Type not supported. UnsupportedDevice is the default end_state for any device that BMC Atrium Discovery can detect but does not have any knowledge of in Reasoning to build an inferred node. We may still be able to identify what the device is and some key properties, or we may know nothing more than the fact that it returned a ping. Example: Discovery has received a response from an endpoint but has not been able to identify the device further. This may mean that the OS is unrecognized, garbage has been returned, or there is a shell error. This can be caused by a firewall. For more information, see the IP Fingerprinting information in the Base Device Discovery section. end_state=UnsupportedDevice, result=Skipped.
- IP Injected, IP Response, No HostInfo recovered. end_state=NoAccess, result=NoAccess.
- IP Injected, IP Response, HostInfo and InterfaceList recovered, First Scan Optimization. end_state=Opt1stScan, result=Skipped.
- IP Injected, IP Response, HostInfo and InterfaceList recovered, First Scan Optimization not needed. end_state=GoodAccess, result=Success (end_state=HostInferred, result=Success may be seen during processing).
- IP Injected, Traceback captured. end_state=Error|ExistingState, result=Error.
- Optimised discovery stopped because the endpoint is handled remotely. end_state=OptRemote, result=Skipped.
Pattern management
The Pattern Management user interface (UI) enables you to upload patterns to the appliance, activate or deactivate patterns on the appliance, and delete patterns which are no longer required. You can also download a zip archive of all patterns that are on the appliance. The following terms are used in the Pattern Management UI:

Pattern: A sequence of commands written in the Pattern Language (TPL) containing instructions that identify scanned entities, which are then used to create the BMC Atrium Discovery data model.
Package: A text file written in TPL which contains one or more patterns, or a zip archive of TPL files which contain one or more patterns.
Module: The logical grouping to which patterns belong. For example, the patterns VMware, VMwareConsolidateFromSI, and VMwareConsolidateFromVirtualHost are all in the VMWareVM module. This is an example package name: VMWareVM.VMwareConsolidateFromSI

Activating a pattern means that it is loaded and can be used to identify software instances. You cannot activate a pattern with a lower version number than one that has previously been activated; delete the pattern with the higher version number first. All patterns specify the TPL version number that they use. If a pattern uses TPL language features from a later TPL version than it specifies, you cannot activate that pattern. This page describes the management of packages through the UI. Configuring and editing the actual packages and patterns is described in Pattern configuration and editing.
TKU packages contain multiple components, not simply a zip archive of TPL patterns. They should be uploaded and activated using the Knowledge Update page.
1. Click Upload New Package.
2. Enter the pattern filename into the Upload New Package dialog box, or click Browse to locate a file. The Name field is automatically populated with the file name without the .tpl suffix. You can enter a description of the pattern into the Description field; this description is displayed in the UI.
3. To upload the file to the appliance, click Upload.
4. The pattern is uploaded to the appliance and is placed in the Inactive state. Before you can use the pattern, you must activate it.
To view the pattern templates that are available, click Pattern Templates. The Pattern Templates window contains a table with each row corresponding to a template pattern. For each pattern, the following information is displayed:

Name: The name of the pattern. This is a link to the View Pattern Template window, which is described in The View Pattern Template Window.
Description: A read-only description of the pattern.
Options: A download link. Clicking this enables you to download a copy of that pattern to your local filesystem.

There is also a Download All Patterns (Zip) link, which enables you to download a zip archive of all of the template patterns.
To view one of the packages, click the corresponding row in the Package table. The following buttons are provided on the Package Information page. Deactivate: deactivates the package.
History: shows a history comparison of the package. The comparison UI is the same as the node comparison, which is described in Comparing the history of nodes.

The Package Information page shows the following information on the package:

Package: The name of the package.
Active: Whether the package is active or inactive. This is shown by a Yes or No.
Submitting user: The user name of the person submitting the package.
Submission date: The date that the package was submitted.
Pattern Module: A link showing the number of modules, which links to a page showing all of the modules included in the package.
Click Pattern Module. The Pattern Module List window, which lists the pattern modules in the package, is displayed. Each row in the table is a link to the Pattern Module Source window where you can view the pattern module source. If only one pattern module is in the package, the Pattern Module link leads to the Pattern Module Source window.
Pattern configuration
Patterns can have configuration blocks defined in the TPL source. These provide a number of items that can be configured through the user interface. For information on TPL configuration blocks, see Pattern Configuration. Pattern Configuration is shown on the Pattern Module page for modules that contain configuration. The Pattern Configuration section provides a table with the following columns:

Description: The description of the configuration item that can be edited in this row. For example, Default Port.
Value: The current value of the configuration item that can be edited in this row.
Default Value: The default value for the configuration item.
Variable Name: The name of the variable in the TPL pattern that the configuration item controls.
The Pattern Configuration section provides the following buttons: Reset Configuration: resets the configuration items in the pattern to the default values specified in the pattern. Edit Configuration: shows the values of configuration items in editable fields. Depending on the variable type being edited, radio buttons, text input fields, or multiple selection boxes are provided.
The replacement package is made active if the one you edited was active.
What is Configipedia?
Configipedia is BMC's community website that facilitates knowledge sharing around TPL patterns, how they function, and the real-world issues they help address. Configipedia also provides visibility of the Technology Knowledge Update release schedule and contents. A link to Configipedia is provided for the Pattern Module lists in the Pattern Management section of the Discovery page. There is also a link to Configipedia provided from the Software Instance nodes and Business Application Instance nodes that are maintained by TPL software patterns. See Managing your IT infrastructure and Managing your business applications for more information.
To run a pattern
Copyright 2003-2013, BMC Software
1. From the Discovery tab, click Pattern Management.
2. From the package list, click the Package containing the pattern you want to run.
3. Click the Pattern Modules link.
4. Select the Pattern Module containing the pattern that you want to run.
5. From this page you can edit the pattern source or configuration if necessary. Editing the pattern is described in Pattern configuration and editing. After the pattern is edited, the Pattern Management: Browse Packages page is displayed:
   a. Select the Package containing the pattern you want to run from the package list.
   b. Click the Pattern Modules link.
   c. Click the Pattern Package link.
6. Click the Pattern link in the heading table.
7. From the Actions list, select Run Pattern.
8. Select the Group that you want to run the pattern against using the Run against Group list, then choose the settings for the pattern run: Expand, Execution Logging, and Additional Discovery. The settings are described below.

Run against Group: Provides the list to select the group to run the pattern against. If you do not have any Working Sets, the check box for showing only Working Sets is disabled. If you have at least one working set, clearing this check box enables you to choose Groups that are not in your working set. The text beneath shows the number of nodes in the group that are the correct node kind to match the pattern's trigger.
Expand: If the group contains a host node, select the Expand check box to check the host for additional nodes that match the pattern's trigger. For example, the ApacheBasedWebserver pattern triggers on DiscoveredProcess nodes. If the group contains one DiscoveredProcess node and one host node (containing 162 DiscoveredProcess nodes), this field shows 1 Discovered Process node if Expand is not checked, and 163 Discovered Process nodes (including 162 via 1 Host node) if it is checked.
Execution Logging: Select the logging level for this pattern run. This is one of Debug, Info, Warning, Error, or Critical.
Additional Discovery: Choose whether discovery commands that perform additional discovery should perform live discovery of the host. For example, the runCommand method performs additional discovery by calling remote commands from patterns. The options are:
  - Do not get extra data: Use any existing data that is available on the appliance.
  - Get data as needed: Use any existing data that is available on the appliance. If additional data is required, perform discovery on the target to obtain it. Get data as needed only makes a request if that request has not been made before.
  - Get all new discovery data: Always perform a new discovery. Do not use any previously discovered data.
Template patterns
BMC Atrium Discovery includes patterns that model commonly deployed software. Through monthly Technology Knowledge Updates (TKUs), the Atrium content team updates and creates new patterns to increase coverage of the software found in our customers' data centers. BMC Atrium Discovery also includes many pattern templates that can be downloaded and modified to add support for software that isn't supported through TKU. These templates are written to support the most common scenarios where a custom pattern is required. The templates are available from the Pattern Templates section of the Pattern Management page. The following patterns are included:

template_simple_si: Maintain a simple Software Instance based on identifying a process.
template_si_instances: Maintain an identifiable Software Instance triggered on a process with its identity in its command line.
template_si_version_path: Maintain a grouped Software Instance triggered on a process, with the Software Instance version found in the path.
template_si_version_path_table: Maintain a grouped Software Instance triggered on a process, with the Software Instance version found in the path and looked up in a table.
template_si_version_command: Maintain a grouped Software Instance triggered on a process, with the Software Instance version found by running a command.
template_si_version_registry: Maintain a grouped Software Instance triggered on a process, with the Software Instance version found in the Windows registry.
template_si_version_xml_file: Maintain a grouped Software Instance triggered on a process, with the Software Instance version found in an XML file.
template_si_version_package: Maintain a grouped Software Instance triggered on a process, with the Software Instance version found in package information.
template_si_collect_children: Maintain a Software Instance based on identifying a process, which collects the process's children into a single SI.
template_bai_search: Maintain a Business Application Instance based on identifying a Software Instance and searching for some other components.
template_bai_add: Maintain a Business Application Instance based on identifying each of several Software Instances that contribute to it.
template_host_location: Maintain relationships linking Hosts to Locations based on hostname.
template_sql_asset_integration: Use business contextual information stored in a SQL database to annotate Hosts with ownership and location information.
template_sql_deep_discovery: Issue SQL queries to a discovered database to extract further information about it or other software.
template_mainframe_mq: Maintain an identifiable Mainframe MQDetail and relationships to the parent Message Server SoftwareInstance.
template_mainframe_storage: Maintain an identifiable item of Mainframe Storage based on a TapeDrive or DASDDrive, and relationships to the containing StorageCollection and MFParts.
template_mainframe_transaction: Maintain an identifiable Mainframe Transaction and relationships to the parent Transaction Server Software Instance.
template_cmdb_cs_augment: Augment BMC_ComputerSystem CIs synchronized to the CMDB with additional information taken from the BMC Atrium Discovery Host node.
template_cmdb_hostname_override: Override the HostName attribute of BMC_ComputerSystem in the CMDB so it takes only the first component of compound dot-separated hostnames.
template_cmdb_location: Relate BMC_ComputerSystem CIs to BMC_PhysicalLocation CIs in the CMDB.
template_cmdb_ss_augment: Augment BMC_SoftwareServer CIs synchronized to the CMDB with additional attributes taken from the BMC Atrium Discovery SoftwareInstance node.
template_cmdb_ss_cti: Override the default Category, Type, and Item values for BMC_SoftwareServer CIs mapped from particular BMC Atrium Discovery SoftwareInstance nodes.
Importing data
This section describes how to import data into BMC Atrium Discovery:
- Importing network device data
- Importing Hardware Reference Data
- Importing CSV data
Use of imported data and direct discovery is not supported If you use the CiscoWorks import tool to import network devices, and also scan the same network devices, this will potentially generate duplicate network devices. BMC recommends that you use either direct discovery or import, not both.
1. From the Discovery section of the Administration tab, select Ciscoworks Import. The options on this page are described below:

File To Upload: The file to upload. Click Browse... to display a file browser window, select the file to import, and click Open.
File Type: Choose the type of file to import, either XML or delimited, from the File Type list. If you select XML, the lists for File Delimiter and File Has Header Line are dimmed.
File Delimiter: For a delimited file only, select the delimiter type from the File Delimiter list. This may be one of Comma, Tab, or Space.
File Has Header Line: Choose Yes or No from the File Has Header Line list, depending on whether the delimited file has a heading line.
Remove existing NetworkDevices: Choose Yes or No to specify whether to remove existing network devices from the BMC Atrium Discovery datastore that are not present in the file you are importing.
Logging Level: Choose the logging level from the Logging Level list. This may be Debug, Info, Warning, Error, or Critical.
2. To import the CiscoWorks data file, click Apply. There is also a command line utility available for importing CiscoWorks data. It is described in tw_imp_ciscoworks.
a. Specify the name of the layout as StandardTideway and enter a description. b. To create the StandardTideway layout, click Add.
This command produces a Java stack trace. This is a known issue and can be ignored. The file that is produced can be imported by running the following command on the BMC Atrium Discovery appliance:
$TIDEWAY/bin/tw_imp_ciscoworks --delimiter=',' --username name --password password --file ~/tmp/data.csv
The file is written into the following directory: C:\PROGRAM FILES\CSCOpx\files\cmexport\ut The file that is produced can be imported with the following command on the BMC Atrium Discovery appliance:
$TIDEWAY/bin/tw_imp_ciscoworks --xml --username name --password password --file ~/tmp/2006516154526ut.xml
The following attributes are imported into the data store, where they exist in the input file:

MacAddress: MAC address of the port on the connected device. This is not the MAC address of the network device. Example: 01-23-45-67-89-ab. Must be a valid MAC address.
IPAddress: IP address of the port on the connected device. This is not the IP address of the network device. Example: 192.168.0.1. Must be a valid IP address.
DeviceName: Name of the network device.
Device: IP address of the network device.
Port attributes: Example values are 3/19 (port), full-duplex (duplex), and 100M (speed). The speed must be a power of ten with a K/M/G suffix.
PortNegotiation: Example: AUTO. Must be AUTO or FORCED.
Hardware Reference Data (HRD) includes attributes such as Power W, Power VA, and Heat Output BTU. Importing HRD into BMC Atrium Discovery enables customers to associate discovered systems with physical assets and report on both directly from BMC Atrium Discovery.
2. Choose the required Logging Level, then click Apply.
File format
You can import CSV files, which must have a heading row specifying the HRD node attributes. For example:

vendor,model,known_model_names,config_num_processors,processor_type,max_ram,u_size,power_va,power_watts,btu_h,va_ru,watts_ru,btu_ru
Sun Microsystems,Sun Fire V440,"['Sun Fire V440','SUNW,Sun Fire V440']",4,UltraSPARC-IIIi,32768,4,908,636,2172,227,159,543
Sun Microsystems,Sun Fire 280R,['Sun Fire 280R'],2,UltraSPARC-III+,4096,4,454,318,1086,114,80,272
Incorrect usage may result in data loss Before using the Import CSV Data page you should fully understand the system taxonomy and the changes that you are going to make to your data. Using Import CSV Data incorrectly can cause irreparable damage to your data. The data you submit using this tool is applied directly to the production data without any validation. Always back up your datastore before using this tool.
Do not import the following node kinds You must never import DDD nodes. You should avoid importing Host nodes and other system maintained nodes. If in doubt, contact Customer Support.
Note The behavior differs from the tw_imp_csv utility, which uses the first column of the input file as the key unless others are specified. When controlling an import using the UI Import page, only the keys that are selected are used.
Choose the required delimiter from the list. This may be Comma, Semicolon, or Tab. Choose the logging level from the list. This may be Debug, Info, Warning, Error, or Critical.
2. To import the CSV file, click Apply. See CSV import examples for examples of using the CSV importer.
Keys
The CSV importer searches the datastore for nodes of a specified kind which have a key or keys matching rows in the supplied CSV data. Where the keys match, the node is updated, or deleted and recreated, depending on the options selected. The first attribute column is used as a key by default. This can be overridden with the Keys option described in the table above.
Relationships
Relationship columns describe a relationship between the node being modified and one or more other nodes. Relationships are defined using a full key expression. The first example shows a link to a single node:
#role:relationship:role:kind.attribute
This example shows a link to multiple target nodes. Notice that you will need to surround the relationship definition and data in double quotes so that the subsequent attributes do not get treated as local nodes:
"#role:relationship:role:kind.attribute,attribute,attribute"
The first role part is the kind of the role that the node being created or updated plays in the relationship. The relationship part is the kind of the relationship being created. The second role part is the kind of role that the other node plays in the relationship. The kind part is the node kind of the other node. Finally, the attribute or attribute list is the name of the attribute used to find the other node. See the Node Lifecycle Document for information on relationships. Only a single traversal is supported.
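The five parts described above can be illustrated with a small parsing sketch. This is a hypothetical helper for illustration, not a product API; the column heading it parses is taken from the examples above:

```python
def parse_rel_column(header):
    """Split a '#role:relationship:role:kind.attribute[,attribute...]'
    CSV column heading into its parts (illustrative helper only)."""
    # Strip the leading '#', then separate the colon-delimited specification
    # from the dot-delimited attribute list.
    spec, _, attrs = header.lstrip("#").partition(".")
    role, relationship, other_role, kind = spec.split(":")
    return role, relationship, other_role, kind, attrs.split(",")

parse_rel_column("#ElementInLocation:Location:Location:Location.name")
# -> ('ElementInLocation', 'Location', 'Location', 'Location', ['name'])
```

For the multiple-target form, the attribute list after the dot simply contains more than one name, which is why the whole field must be quoted in the CSV heading row.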
Type conversion
By default, the CSV importer attempts to convert attribute values to the type defined in the taxonomy. If an attribute is not defined, it is left as a string. If a conversion fails, it is also left as a string.
Note: For list types, the value in the CSV file must be in Python list syntax.
When the tw_imp_csv utility is used with the --force option, all attributes are left as strings.
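The list-conversion rule can be sketched with Python's own literal parser. This is an illustration of the documented behaviour (values typed as lists must use Python list syntax; failed conversions are left as strings), not the actual importer code:

```python
import ast

def convert(value):
    """Sketch of the CSV importer's list handling: parse Python list
    syntax where possible, otherwise leave the value as a string."""
    try:
        parsed = ast.literal_eval(value)
        return parsed if isinstance(parsed, list) else value
    except (ValueError, SyntaxError):
        return value  # conversion failed: left as a string

convert("['10.3.4.1', '192.168.101.45']")  # parsed into a two-element list
convert("not a list")                      # left as the original string
```

Note that with tw_imp_csv --force even well-formed list values would stay as strings, since all conversion is skipped.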
Example 1
To free up rack space for other applications, some hosts have been moved from the Campus data centre to the newly acquired Firehouse data centre. Discovery and Reasoning have handled the IP address and subnet changes but the Host nodes are still linked to the wrong location. Here is the CSV file to process, called firehouse_move.csv:
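The file's contents are not reproduced here; a minimal sketch of what it could contain, inferred from the worked description that follows (the host names and the Firehouse location come from the example, the exact layout is an assumption), would be:

```csv
name,#ElementInLocation:Location:Location:Location.name
egon,Firehouse
ray,Firehouse
peter,Firehouse
```

The first column holds the key attribute, and the second is a relationship column linking each Host to its Location.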
The script reads the file called firehouse_move.csv line by line. It uses the first line to name the columns. The first column is called name, which does not begin with a # character, so it is treated as an attribute name. The second column does begin with a # character, so it is treated as a specification for some relationships. No explicit key has been specified, so the first (and in this case, only) attribute name is taken to be the key. Now the script reads the second line. The first field, egon, is in the name column which was selected as the key earlier. So the script uses the search service to find a node of kind Host (from the --kind command line option) that has a name attribute equal to egon. It finds exactly one node matching that search. If it had not found that node, the node would have been created. If it had found multiple nodes, an error would have been reported and processing would continue with the next line; no nodes would have been updated. Having found the node, it updates it using the other fields on the row. Were there any other attribute columns in the file, the script would have
used these to update the node before looking at the relationships. The file has only one relationship column. Its name is #ElementInLocation:Location:Location:Location.name. The script searches for a Location node with a name attribute equal to Firehouse, this row's value for the column. Having found the Location node, the script creates a HostLocation relationship to it, with the Host node playing the HostInLocation role and the Location playing the LocationOfHost role. The script then processes the second and third data rows, updating the ray and peter nodes with the new location.
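The key-matching rules walked through in Examples 1 and 2 can be condensed into a short sketch. This is not the tw_imp_csv implementation; the datastore is modelled as a plain list of dictionaries, and relationship handling is omitted:

```python
def import_row(datastore, kind, key_attr, row, create_only=False):
    """Apply one CSV row: find nodes of `kind` whose key attribute matches,
    then create, update, or report an error, per the documented rules."""
    matches = [n for n in datastore
               if n["kind"] == kind and n.get(key_attr) == row[key_attr]]
    if create_only:
        if matches:
            return "error: node already exists"  # row is skipped
        node = {"kind": kind}
        node.update(row)
        datastore.append(node)
        return "created"
    if len(matches) > 1:
        return "error: multiple matches"  # row is skipped, no nodes updated
    if not matches:
        node = {"kind": kind}
        node.update(row)
        datastore.append(node)
        return "created"
    matches[0].update(row)  # update remaining attributes on the found node
    return "updated"
```

For instance, updating the egon node's location attribute returns "updated", while a row for an unknown host creates a new node, and Create Only mode rejects rows whose key already exists.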
Example 2
A new host has been installed in the Firehouse data centre. Due to pressure from the organisation's E-services Protection Adviser, there is now a firewall preventing discovery of hosts on that site. Until a remote Windows proxy can be installed, the Firehouse sysadmins have been sending us spreadsheets with the changes. Here is a CSV file, called new_host.csv:
name,fqdn,#ElementInLocation:Location:Location:Location.name,#OwnedItem:Ownership:ITOwner:Person.name,ip_addrs
winston,winston.example.com,Firehouse,"Melnitz,J","['10.3.4.1', '192.168.101.45']"
The command line is nearly the same as in Example 1. The differences are that the filename has changed because we are processing a different file and we are using Create Only mode. As in Example 1, the script searches for Host nodes with a name attribute equal to winston. This time, because it is in create mode, the script checks that there are no matching nodes. If there were, the script would report an error and skip the row. Now the script creates a new node. It populates the attributes of the node from the non-relationship fields in the data. The ip_addrs field is a list and the value starts with a '[' character so it is converted into a list. The new node has attributes: name = 'winston' fqdn = 'winston.example.com' ip_addrs = [ '10.3.4.1', '192.168.101.45' ] Then the script adds the relationships. The location relationship column is processed in the same way as in Example 1. The column called #OwnedItem:Ownership:ITOwner:Person.name creates an OwnedItem:Ownership:ITOwner relationship to a Person node with a name equal to Melnitz,J. The quotes around that field are needed because the field contains a comma.
Configuration files
Additional link configuration has changed in version 7.3 to become a form of Custom reporting. The XML configuration files are held in the following directories:

1. /usr/tideway/data/installed/reports/
2. /usr/tideway/data/custom/reports/

The directories are parsed in the order given (installed before custom), and the files contained in these directories are parsed in alphabetical order, with numbers before letters. This order is important, as later definitions for a named link override those loaded earlier. The standard additional links are contained in /usr/tideway/data/installed/reports/00additional_links.xml.

Upgrades from previous versions remove /usr/tideway/data/default/additional_links.xml and add the new file /usr/tideway/data/installed/reports/00additional_links.xml. If you are upgrading, you must copy /usr/tideway/data/customer/additional_links.xml to /usr/tideway/data/custom/reports/00additional_links.xml and manually convert the file to the new format, which is described on this page.

Links configured in the installed subdirectory are displayed beneath the default links; those from the custom subdirectory follow them. However, if a custom link overrides an installed link, it appears in place of the installed link.
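The "alphabetical order, with numbers before letters" rule described above is plain lexicographic ordering, in which digit characters sort before letters. A minimal sketch (all filenames except 00additional_links.xml are hypothetical):

```python
# Files within a reports directory are parsed in lexicographic order;
# ASCII digits ('0'-'9') sort before letters, so numeric prefixes load first
# and later files can override links defined by earlier ones.
files = ["99zz_links.xml", "00additional_links.xml", "10custom_links.xml", "aa_links.xml"]
ordered = sorted(files)
# 00additional_links.xml is parsed first, aa_links.xml last
```

This is why the standard links file carries the 00 prefix: any custom file sorts after it and can override its link definitions.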
Notes:
1. The link must be specified in valid XML. The XML equivalents of certain characters must be used (for example, &amp;amp; instead of &amp;, &amp;gt; instead of &gt;, and &amp;lt; instead of &lt;).
2. The XML files are read when the Application server starts. Usually you would restart the whole tideway service.
3. Parse errors in the XML do not prevent the Application server from starting, but may cause one or all external link definitions to be ignored.
4. Failure to return a value for a requested attribute is caught, resulting in the link being discarded for this particular node, on this request. For future requests, an attempt to build the link is made again.
5. You can substitute any attribute in a tag. Use %(attribute_name)s to do this. If the attribute contains a list value, there is also a string representation containing a comma-separated list of values in %(comma_separated_attribute_name)s.
6. The following keys are provided:
   - id: the node ID as set in the URL, for example nodeID=%(id)s
   - kind: the node kind.
   - uiModule: the UI module that normally handles display of the node kind.
   - urlPrefix: the URL prefix of the appserver. This is usually ui.
7. In addition, <link> tags with <attribute> entries contain:
   - translated: the locale-translated attribute name.
   - otherNodeID: if the attribute represents a link, the node ID of the related node.
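The %(attribute_name)s substitution syntax is standard Python named %-formatting, so its behaviour can be illustrated directly. The URL and attribute values here are hypothetical:

```python
# A link URL template as it might appear in a <url> tag, expanded against a
# dictionary of node attribute values using Python named %-formatting.
template = "http://tickets.example.com/?host=%(hostname)s&id=%(id)s"
node = {"hostname": "egon", "id": "1234"}
url = template % node
# -> "http://tickets.example.com/?host=egon&id=1234"
```

If a requested attribute is missing from the node, the expansion raises an error, which matches the documented behaviour of the link being discarded for that node on that request.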
Link tags
The link tag specifies the type of link being created. It may be one of the following:
Normal Link: specified with a <link> tag.
Report Link: specified with a <node-report> tag. A report that is run against a single node; the link is visible when viewing a node of the correct node kind.
Chart Link: specified with a <node-chart> tag. A chart that is run against a single node; the link is visible when viewing a node of the correct node kind.
Set Report Link: specified with a <refine-report> tag. A report that is run against a node set; the link is visible when viewing a set of nodes of the correct node kind.
Set Chart Link: specified with a <refine-chart> tag. A chart that is run against a node set; the link is visible when viewing a set of nodes of the correct node kind.
Each tag must specify a name using the name attribute. Note that each link tag is in a distinct namespace.
Tags available
The following XML tags are available in all forms of link:
Title: Provides the main page title for the link. This tag is mandatory. <title>View trouble ticket for %(hostname)s</title>
Description: Populates the title attribute of the HTML link, presented as a tool tip, and is also shown as a subtitle below the main page title. This tag is mandatory. <description>Shows the distribution of OS type on all hosts</description>
Kind: The link is shown against this node kind. This tag is mandatory. <kind>Host</kind>
Image: Specifies an image to display. The image must already exist on the appliance, as the specification is relative to the appliance image root (http://appliance/styles/default/images/image_name). This tag is mandatory. <image>icon.gif</image>
Permissions: Specifies a list of permissions that the user must possess in order to display the link. <permissions> contains a list of <permission> entries, one per required permission. <permissions><permission>PERM_REPORTS_WRITE</permission></permissions>
The following XML tags are available in <link>:
URL: The URL that the link points to. This tag is mandatory. <url>http://tickets.tideway.com</url>
Attribute: Displays the link in the UI next to the named attribute. If the attribute is not displayed, or is not present on the node, the link is not displayed. <attribute>hostname</attribute> Relationships can be specified by concatenating role, relationship, role, and destination node kind with _ characters. <attribute>Previous_EndpointIdentity_Next_Host</attribute>
Target: Populates the HTML anchor target for the link, pointing to the named window or frame in which the results are displayed. Frequently this is _new so that the link opens in a new window. <target>_new</target>
The following XML tags are available in the remaining link types:
Lookup: Supported in <node-chart> and <node-report>. Indicates whether the query is relative to the node. Defaults to true, which generates a search of the form LOOKUP '%(id)s'. If you need a general search, use this tag to override the behaviour. <lookup>False</lookup>
Flags: Any flags to include in the search query. <flags>include_destroyed</flags>
With: Any with functions to evaluate in the search query. <with>value(NODECOUNT(TRAVERSE Host:HostedSoftware:RunningSoftware:SoftwareInstance)) AS si_count</with>
Where: Any conditions in the search query. This tag takes an optional keyword attribute which, when given the value False, prevents the addition of the WHERE keyword when constructing the query. For example, TRAVERSE :::Host must not be prefixed by WHERE. <where>version = ''</where>
Order-by: Sort the results of the query in the specified order. <order-by>name</order-by>
Show: List of attributes to display. For <node-chart> and <refine-chart> this represents the list of attributes shown when clicking through the chart. <show>SUMMARY</show>
Split: Supported in <node-chart> and <refine-chart>. Indicates the column to chart. This tag takes an optional columns attribute which specifies the number of columns generated by the split. For example, <split columns="2">endtime PROCESS WITH bucket(3600, 3, 48)</split> retrieves the endtime and puts the values into a list of buckets, producing 2 columns: the time and the count. The title of the column is used as the axis title. A maximum of 2 <split> tags may be defined, the first for the x-axis, the second for the y-axis. <split>os AS 'Detailed OS'</split>
Y-axis-title: Supported in <node-chart> and <refine-chart>. Indicates the title of the y-axis column. This defaults to 'Count' unless the <split> provides a second column. <y-axis-title>Hosts</y-axis-title>
The default type of chart can be specified by adding the default attribute to <node-chart> or <refine-chart>. The supported charts depend upon the number of columns specified by <split>.
For single columns the supported charts are:
pie (default)
bar
column
For two columns the supported charts are:
line (default)
column
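The rule above can be sketched as a small helper. The function name and the error handling are hypothetical; only the type lists and defaults come from the text:

```python
# Supported chart types and defaults, keyed by number of split columns.
SUPPORTED_CHARTS = {1: ("pie", "bar", "column"), 2: ("line", "column")}
DEFAULTS = {1: "pie", 2: "line"}

def resolve_chart_type(split_columns, default=None):
    """Return the chart type to use for a given number of split columns,
    falling back to the per-column-count default when none is given."""
    supported = SUPPORTED_CHARTS[split_columns]
    if default is None:
        return DEFAULTS[split_columns]
    if default not in supported:
        raise ValueError("unsupported chart type: %s" % default)
    return default
```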
External links
Each external link is defined under a <link> tag. The following example shows a label populated with the text View trouble ticket for hostname, where hostname is the name of the current host. The URL in the example points to a trouble-ticketing system with a web interface. The example assumes that there is a host attribute called ticketNum which contains the trouble ticket number.
<link name="Link.Host.Ticket">
    <title>View trouble ticket for %(hostname)s</title>
    <description>Browse trouble ticket system for this host.</description>
    <kind>Host</kind>
    <image>kind/gif/Location_16.gif</image>
    <url>http://tickets.tideway.com/tickets/%(ticketNum)s.cgi</url>
</link>
Report links
Each report link is defined under a <node-report> tag. The following example retrieves a list of hosts with the same operating system type as the currently viewed host.
<node-report name="Node.Report.Host.SameOS">
    <title>Hosts with the same OS type</title>
    <description>List of hosts with the same OS type</description>
    <kind>Host</kind>
    <image>system/gif/report_16.gif</image>
    <lookup>False</lookup>
    <where>os_type = %(os_type)r</where>
</node-report>
Chart links
Each chart link is defined under a <node-chart> tag. The following example shows a chart of access methods for the current host node.
<node-chart name="Node.Chart.Host.AccessMethod" default="column">
    <title>Host Login Summary Chart</title>
    <description>Host Login Methods on a column chart</description>
    <kind>Host</kind>
    <image>system/gif/column_chart_16.gif</image>
    <where keyword="False">TRAVERSE :::DeviceInfo</where>
    <split>last_access_method as '_|Method|_'</split>
    <show>SUMMARY</show>
</node-chart>
SetReport links
Each set report link is defined under a <refine-report> tag. The following example shows the number of hosts on which each package in the currently viewed set of packages is installed.
<refine-report name="Refine.Report.Host.Density">
    <title>Package Host Density</title>
    <description>How many Hosts are these Packages installed on</description>
    <kind>Host</kind>
    <image>system/gif/report_16.gif</image>
    <show>
        name, version,
        NODECOUNT(TRAVERSE InstalledSoftware:HostedSoftware:Host:Host) AS '_|Number of Hosts|_'
    </show>
</refine-report>
SetChart links
Each set chart link is defined under a <refine-chart> tag. The following example summarises the currently viewed set of Software Instance nodes in a pie chart segmented by type.
<refine-chart name="Refine.Chart.Host.SISummary" default="pie">
    <title>Software Instance Type Summary</title>
    <description>Show a Summary of the Software Instance Types</description>
    <kind>Host</kind>
    <image>system/gif/pie_chart_16.gif</image>
    <order-by>type</order-by>
    <split>type</split>
    <show>SUMMARY</show>
</refine-chart>
Completion
Finally, the XML file is completed with the closing tag.
</reports>
Custom reporting
The custom reporting feature enables you to customize the reports and channels on the appliance. Channels may be created more simply using the Channels page.
Configuration files
The reporting facilities and the selection of UI channels shown on each page are configured using XML configuration files, which are held in the following directories:
1. /usr/tideway/data/installed/reports/
2. /usr/tideway/data/custom/reports/
The directories are parsed in the order given (installed before custom), and the files contained in these directories are parsed in alphabetical order, with numbers before letters. This order is important because later definitions for a named report override those loaded earlier. The standard reports are contained in /usr/tideway/data/installed/reports/00reports.xml.
The configuration is read when the UI process starts, so changes to the report configuration files require a UI restart. Errors in the configuration files do not prevent the Application server from starting, but may cause various reports and channels to be missing.
You cannot add context sensitive reports to the discovery status page.
The root element of each reports.xml file must be <reports version="2.0">. The <reports> element can contain several types of element:
Standard reports:
<chart>: a chart. See <chart> elements.
<report>: a report. See <report> elements.
Channels:
<chart-channel>: a simple chart channel. See <chart-channel> elements.
<chart-multi-channel>: a chart channel with multiple values per column. See <chart-multi-channel> elements.
<report-channel>: a channel containing a list of charts and reports. See <report-channel> elements.
<rss-channel>: an RSS feed channel. See <rss-channel> elements.
<summary-channel>: a channel containing node kind counts. See <summary-channel> elements.
<time-series-channel>: a chart channel containing values over time. See <time-series-channel> elements.
<video-channel>: a video channel. See <video-channel> elements.
<web-channel>: a channel containing a web feed. See <web-channel> elements.
Page information:
<page>: the selection of channels on one page. See <page> elements.
<report> elements
The <report> element was extended in version 7.3 by merging in functionality from <extrareport>, which no longer exists. A <report> element requires a name attribute giving a unique name to the report.
Title: Provides the main page title for the report. This tag is mandatory. <title>Hosts ordered by Host Owner</title>
Description: Populates the title attribute of the HTML link, presented as a tool tip, and is also shown as a subtitle below the main page title. This tag is mandatory. <description>Shows a list of all Discovery Hosts, ordered by their Owner</description>
Kind: Kind to use in the search. Multiple kinds can be specified in a comma separated list or by using the * wildcard. <kind>Host</kind>
Flags: Any flags to include in the search query. This tag takes an optional keyword attribute which, when given the value False, prevents the addition of the FLAGS( prefix and ) suffix. The attribute is normally used if parameters are used to control the flags. <flags>include_destroyed</flags>
With: Any with functions to evaluate in the search query. <with>value(NODECOUNT(TRAVERSE Host:HostedSoftware:RunningSoftware:SoftwareInstance)) AS si_count</with>
Where: Any conditions in the search query. This tag takes two optional attributes: keyword which, when given the value False, prevents the addition of the WHERE keyword when constructing the query (for example, TRAVERSE :::Host must not be prefixed by WHERE); and start which, when given the value True, places the parameter conditions at the start of the query rather than the end. Normally parameter queries are added after the <where> text; however, if there is a need to perform tests before a traversal they can be placed at the beginning. <where>version = ''</where>
Order-by: Sort the results of the query in the specified order. <order-by>name</order-by>
Show: List of attributes to display. This tag takes an optional keyword attribute which, when given the value False, prevents the addition of the SHOW keyword. <show>name, os, #InferredElement:Inference:Primary:HostInfo.hostid</show>
Url: Redirect the browser when the report is selected. <url>NetworkMismatchSummaryReport</url>
Imports: List of <import> entries, each of which contains a Python module to import. These are used by the <parameters> for <convert>, <default> and <eval>, described below. <imports><import>common.timeutil</import></imports>
Parameters: List of <parameter> entries, each of which defines an interactive parameter filled in by the user. These are described in detail below.
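As a rough sketch, the tags above combine into a single search query. The clause order here is inferred from the example queries shown on this page; the helper is illustrative, not the actual report engine:

```python
def build_query(kind, flags=None, where=None, where_keyword=True,
                order_by=None, show=None, show_keyword=True):
    """Assemble a search query from <report> tag values (illustrative).

    where_keyword/show_keyword model the keyword="False" attribute,
    which suppresses the WHERE/SHOW keyword (e.g. for TRAVERSE clauses).
    """
    parts = ["SEARCH"]
    if flags:
        parts.append("FLAGS(%s)" % flags)  # position assumed
    parts.append(kind)
    if where:
        parts.append(("WHERE " + where) if where_keyword else where)
    if order_by:
        parts.append("ORDER BY " + order_by)
    if show:
        parts.append(("SHOW " + show) if show_keyword else show)
    return " ".join(parts)
```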
<parameter> element
Interactive parameters are defined using <parameter> elements. Each must have a name, representing the name of the parameter, that is unique in the report.
Title: Label of the parameter shown when it is rendered in a form. This tag is mandatory. <title>Start date</title>
Type: Type of control widget to use for the parameter in the form, detailed below. This tag is mandatory.
Key: References a replacement construct in the constructed query. <key>from</key>
Where: Query fragment to add to the where clause of the parent element's search string. The reports system inserts the value of the parameter using string substitution with a key of value. <where>name HAS SUBSTRING %(value)r</where>
When the query is built from parameters, all the parameters are evaluated as specified by <type>. If a parameter has a where clause, the parameter value is substituted into it with a key of value. All parameters without a key are then joined together using AND and inserted either at the beginning (<where start="True"> specified) or the end of the where clause in the report. The complete query is then constructed. If there are any parameters with a key, the completed query has those parameters inserted using string substitution with the specified key of the parameter.
The <type> tag must have a name attribute which specifies the type of the parameter. It should have one of the following values:
TextField: a text entry field.
SelectField: a list.
RelationshipSearchField: a popover node-selection field.
DateTimeField: a time field.
The tags that can be used within <type> depend upon these values:
All: If the type is SelectField, adds an All option to the select list (the default). When given the value False, prevents the addition of the All option. <all>False</all>
Convert: Function used to convert the default value and the user-typed value for use in the query. For example, the function could be common.timeutil.convertFromUnix, which converts the value to Unix form so that it can be added to the query. Referenced functions should be fully specified, and the packages and modules required should appear in <imports> elements, described above. Note that functions are referenced rather than called, so do not need parentheses. The function can also be a lambda function taking one parameter. <convert>common.timeutil.convertFromUnix</convert>
Default: Expression that defines a default value for the parameter. Referenced functions should be fully specified, and the packages and modules required should appear in <imports> elements, described above. <default>common.timeutil.convertToUnix(common.timeutil.currentTime())</default>
Escape: Escape the string value. By default string values are escaped; when given the value False, prevents the escaping. <escape>False</escape>
Kind: If the type is RelationshipSearchField, determines the source kind used when finding the list of relationships defined in the taxonomy. <kind>Host</kind>
Size: If the type is TextField, specifies the size of the text field; otherwise specifies the number of visible options in the list. <size>2</size>
Options: If the type is SelectField, there are several ways to specify the selection options: eval, which evaluates the expression (referenced functions should be fully specified, and the packages and modules required should appear in <imports> elements, described above), for example <eval>api.audit.getEvents()</eval>; option, which contains the option text (the <option> tag takes an optional value attribute which is used when the option is selected; if no value attribute is present the option text is used; multiple <option> values can be provided), for example <option value="21">FTP</option>; and query, which performs a search query whose first column is used as the selection values, for example <query>SEARCH Location SHOW name</query>. Only one of the selection option forms may be specified.
Validate: How to validate the parameter. Currently supported values are: integer (value is an integer), number (value is a number), boolean (value is a boolean), and NotEmpty (value is not empty). If the value does not validate, the form presents an error to the user. <validate>NotEmpty</validate>
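The parameter-to-query assembly described above can be sketched as follows. The data shapes are assumptions for illustration, not the real reports engine:

```python
def apply_parameters(report_where, params):
    """params: list of dicts with 'value' and optional 'where' and 'key'.

    Parameters with a where fragment have their value substituted with a
    key of 'value'. Keyless fragments are AND-joined onto the report's
    where clause; keyed parameters are substituted into the completed
    query last, by their own key.
    """
    keyless, keyed = [], {}
    for p in params:
        fragment = None
        if "where" in p:
            fragment = p["where"] % {"value": p["value"]}
        if "key" in p:
            keyed[p["key"]] = fragment if fragment is not None else p["value"]
        elif fragment:
            keyless.append(fragment)
    where = report_where
    if keyless:
        joined = " AND ".join(keyless)
        where = ("%s AND %s" % (where, joined)) if where else joined
    return where % keyed if keyed else where
```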
Bringing all this together, the following element defines a parameter which appears in the UI labelled Type, as a list with three specified values and no All entry. The <where> tag means that should the user choose Host, then kind(#)='Host' is added to the report query.
<parameter name="DQ_Report_Global_type">
    <title>Type</title>
    <type name="SelectField">
        <all>False</all>
        <options>
            <option value="Host">Host</option>
            <option value="SoftwareProductVersion">Software Product Version</option>
            <option value="BusinessApplicationInstance">Application Instance</option>
        </options>
    </type>
    <where>kind(#)='%(value)s'</where>
</parameter>
The following report counts instances of software, optionally limiting the software instances to a particular type chosen by the user from a list.
<report name="Software.Report.InstanceSummary">
    <title>Software Inventory</title>
    <description>Shows summary of distribution of a piece of software</description>
    <kind>SoftwareInstance</kind>
    <show>
        type AS "_|Type|_",
        product_version AS "_|Product Version|_"
        PROCESS WITH countUnique(1,0)
    </show>
    <parameters>
        <parameter name="Software_Report_InstanceSummary_type">
            <title>Product type</title>
            <type name="SelectField">
                <options>
                    <query>
                        SEARCH SoftwareInstance ORDER BY type
                        SHOW type PROCESS WITH unique()
                    </query>
                </options>
            </type>
            <where>type = %(value)s</where>
        </parameter>
    </parameters>
</report>
<chart> elements
The <chart> element is fundamentally the same as the <report> element, with the following changes: the default type of chart can be specified by adding the default attribute to <chart>, and the <show> tag is used to determine the click-through columns of the chart. Additional tags are:
Split: Indicates the column to chart. This tag takes an optional columns attribute which specifies the number of columns generated by the split. For example, <split columns="2">endtime PROCESS WITH bucket(3600, 3, 48)</split> retrieves the endtime and puts the values into a list of buckets, producing 2 columns: the time and the count. The title of the column is used as the axis title. A maximum of 2 <split> tags may be defined, the first for the x-axis, the second for the y-axis. <split>os AS 'Detailed OS'</split>
Y-axis-title: Indicates the title of the y-axis column. This defaults to Count unless the <split> provides a second column. <y-axis-title>Hosts</y-axis-title>
For single columns the supported charts are:
pie (default)
bar
column
For two columns the supported charts are:
line (default)
column
The following chart shows a count of different OS classifications. By default a pie chart is shown, and clicking through displays a summary report for the appropriate hosts. When displaying bar and column charts, the y-axis is labelled Hosts.
<chart name="Infrastructure.Chart.OSClassification" default="pie">
    <title>Host OS Classification</title>
    <description>Show count of Hosts for each OS Classification</description>
    <kind>Host</kind>
    <order-by>os_class</order-by>
    <split>os_class AS "_|OS Class|_"</split>
    <y-axis-title>_|Hosts|_</y-axis-title>
    <show>SUMMARY</show>
</chart>
<chart-channel> elements
The <chart-channel> element requires a name attribute giving a unique name to the chart channel. An optional default attribute gives the initial chart type.
Title: Channel title. This tag is mandatory. <title>Operating Systems</title>
Description: Shown when editing the channels. This tag is mandatory. <description>Operating System Reports and Charts</description>
Kind: Kind to use in the search. Multiple kinds can be specified in a comma separated list or by using the * wildcard. <kind>Host</kind>
With: Any with functions to evaluate in the search query. <with>value(NODECOUNT(TRAVERSE Host:HostedSoftware:RunningSoftware:SoftwareInstance)) AS si_count</with>
Where: Any conditions in the search query. <where>version = ''</where>
Order-by: Sort the results of the query in the specified order. <order-by>name</order-by>
Split: Indicates the column to chart. The title of the column is used as the axis title. <split>os AS 'Detailed OS'</split>
Show: List of attributes to display when clicking through the chart. <show>name, os, #InferredElement:Inference:Primary:HostInfo.hostid</show>
<chart-multi-channel> elements
The <chart-multi-channel> element requires a name attribute giving a unique name to the chart multi channel. An optional default attribute gives the initial chart type.
Title: Channel title. This tag is mandatory. <title>Operating Systems</title>
Description: Shown when editing the channels. This tag is mandatory. <description>Operating System Reports and Charts</description>
Kind: Kind to use in the search. Multiple kinds can be specified in a comma separated list or by using the * wildcard. <kind>Host</kind>
With: Any with functions to evaluate in the search query. <with>value(NODECOUNT(TRAVERSE Host:HostedSoftware:RunningSoftware:SoftwareInstance)) AS si_count</with>
Where: Any conditions in the search query. <where>version = ''</where>
Order-by: Sort the results of the query in the specified order. <order-by>name</order-by>
Split: Indicates the column to chart. The title of the column is used as the axis title. There must be 2 <split> tags. <split>os AS 'Detailed OS'</split>
Show: List of attributes to display when clicking through the chart. <show>name, os, #InferredElement:Inference:Primary:HostInfo.hostid</show>
The following channel shows a column chart of database versions with a column for each product type. Each column contains a list of found versions.
<chart-multi-channel name="Channel.SWDBVersion" default="column_nolegend">
    <title>Database versions</title>
    <description>Number and version of all the Database instances</description>
    <split>type</split>
    <split>(product_version or 'Unknown')</split>
    <kind>Pattern</kind>
    <where>
        'Relational Database Management Systems' in categories
        TRAVERSE Pattern:Maintainer:Element:SoftwareInstance
    </where>
    <show>SUMMARY</show>
</chart-multi-channel>
<report-channel> elements
The <report-channel> element requires a name attribute giving a unique name to the report channel.
Title: Channel title. This tag is mandatory. <title>Operating Systems</title>
Description: Shown when editing the channels. This tag is mandatory. <description>Operating System Reports and Charts</description>
Image: Image to show for the channel. <image>discovery</image>
Chart: Chart name. The chart definition must appear before this reference. Multiple <chart> tags may be specified. <chart>Infrastructure.Chart.OSClassificationPie</chart>
Report: Report name. The report definition must appear before this reference. Multiple <report> tags may be specified. <report>Virtualization.Reports.ContainedHosts</report>
<report-channel name="Infrastructure.Channel.NetworkPolicy">
    <title>Network Policies</title>
    <description>Shows a list of Network Infrastructure Reports</description>
    <report>Infrastructure.Report.NetworkMismatchSummary</report>
    <chart>Infrastructure.Chart.IPAddressDistribution</chart>
</report-channel>
<rss-channel> elements
The <rss-channel> element requires a name attribute giving a unique name to the RSS channel.
Title: Channel title. This tag is mandatory. <title>Operating Systems</title>
Description: Shown when editing the channels. This tag is mandatory. <description>Operating System Reports and Charts</description>
Render-description: Flag indicating whether to show the RSS feed description; defaults to False. <render-description>True</render-description>
Url: URL for the RSS feed. This tag is mandatory. <url>http://www.tideway.com/confluence/createrssfeed.action?types=page</url>
<summary-channel> elements
The <summary-channel> element requires a name attribute giving a unique name to the summary channel.
Title: Channel title. This tag is mandatory. <title>Operating Systems</title>
Description: Shown when editing the channels. This tag is mandatory. <description>Operating System Reports and Charts</description>
Image: Image to show for the channel. <image>discovery</image>
Kind-count: Node kind for which to show a count. Multiple <kind-count> entries can be specified. <kind-count>Host</kind-count>
The following channel shows the number of BusinessApplicationInstance and SoftwareInstance nodes.
<summary-channel name="Applications.Channel.Summary">
    <title>Application Summary</title>
    <description>_|Shows a summary of Applications|_</description>
    <image>applications</image>
    <kind-count>BusinessApplicationInstance</kind-count>
    <kind-count>SoftwareInstance</kind-count>
</summary-channel>
<time-series-channel> elements
The <time-series-channel> element requires a name attribute giving a unique name to the time series channel. It is identical to <chart-channel> with two exceptions:
It performs a historical search and partitions the result into a number of buckets. By default this is 5, but it can be overridden by specifying the time-series attribute on the <time-series-channel> element.
<split> is optional; if it is not present, the chart is based on the count of the specified node kind rather than the value of the specified attribute.
The following channel shows the number of Hosts for each UNIX OS over time.
<time-series-channel name="Channel.OSUNIX" time-series="7" default="line">
    <title>UNIX Operating Systems</title>
    <description>Shows a count of Hosts for each UNIX OS</description>
    <split>os_type as "OS Version"</split>
    <kind>Host</kind>
    <where>os_class = "UNIX"</where>
    <order-by>os_type</order-by>
    <show>SUMMARY</show>
</time-series-channel>
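The time-bucket partitioning used by time series channels can be sketched like this. The function is illustrative only; the real channel buckets historical search results, not raw timestamps:

```python
def partition(timestamps, buckets=5):
    """Count samples falling into each of `buckets` equal time buckets
    (the default of 5 matches the time-series attribute default)."""
    if not timestamps:
        return []
    lo, hi = min(timestamps), max(timestamps)
    width = (hi - lo) / buckets or 1  # guard against a zero-width range
    counts = [0] * buckets
    for t in timestamps:
        i = min(int((t - lo) / width), buckets - 1)
        counts[i] += 1
    return counts
```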
<video-channel> elements
The <video-channel> element requires a name attribute giving a unique name to the video channel.
Title: Channel title. This tag is mandatory. <title>Operating Systems</title>
Description: Shown when editing the channels. This tag is mandatory. <description>Operating System Reports and Charts</description>
Image: Image to show for the channel. <image>discovery</image>
Src: Video source. This tag is mandatory. <src>/videos/Prerelease3.swf</src>
The following channel shows a video feed for creating host profiles.
<video-channel name="Channel.Video.HostProfiles">
    <title>Feature Tutorial: Host Profiles</title>
    <description>Video Tutorial that explains the Host Profiles feature</description>
    <src>/videos/HostProfiles.swf</src>
</video-channel>
<web-channel> elements
The <web-channel> element requires a name attribute giving a unique name to the web channel.
Title: Channel title. This tag is mandatory. <title>Google</title>
Description: Shown when editing the channels. This tag is mandatory. <description>Shows the Google search engine.</description>
URL: URL for the web feed. This tag is mandatory. <url>http://www.google.com</url>
The following channel shows a web feed for the BMC Atrium Discovery Community forum.
<web-channel name="Channel.Web.Community">
    <title>Tideway Community Update</title>
    <description>Shows the Tideway Community update.</description>
    <url>http://www.tideway.com/widgets/foundation-forum/</url>
</web-channel>
<page> elements
The <page> element requires a name attribute giving a unique name to the page.
Channel: References a previously defined channel. There is an optional builtin attribute which must be specified if the channel is built in. Multiple <channel> tags can be specified; the order in which the channels appear on a page is governed by the order of the <channel> tags. <channel builtin="True">Discovery.Channel.Status</channel>
Built-in channels
The following built-in channels are defined:
Discovery.Channel.Status: Summary for Discovery status.
Infrastructure.Channel.Explore: Summary for the Automatic Grouping.
Legal Information
Copyright 2012 BMC Software, Inc. BMC, BMC Software, and the BMC Software logo are the exclusive properties of BMC Software, Inc., are registered with the U.S. Patent and Trademark Office, and may be registered or pending registration in other countries. All other BMC trademarks, service marks, and logos may be registered or pending registration in the U.S. or in other countries. All other trademarks or registered trademarks are the property of their respective owners. The information included in this documentation is the proprietary and confidential information of BMC Software, Inc., its affiliates, or licensors. Your use of this information is subject to the terms and conditions of the applicable End User License agreement for the product and to the proprietary and restricted rights notices included in the product documentation.
BMC Software, Inc. 5th Floor 157-197 Buckingham Palace Road London SW1W 9SP United Kingdom Tel: +44 (0) 800 279 8925 Email: customer_support@bmc.com Web: http://www.bmc.com/support
Table of Contents
1 Table of Contents 2 Deployment guide 2.1 Deployment prerequisites 2.1.1 Appliance Installation 2.1.2 Windows proxy deployment 2.1.3 Credential Deployment - UNIX 2.1.4 Credential Deployment - Windows 2.1.5 Credential Deployment - SNMP 2.1.6 Identify Target Environment 2.1.7 Approval to Perform Discovery 2.1.8 Access 2.1.9 Network Management (Ciscoworks) data 2.1.10 Service Transition 2.2 Deployment requirements 2.3 Performance data 2.3.1 Factors affecting performance 2.3.2 Network usage 2.4 Appliance specification 2.4.1 Virtual Appliance 2.4.2 Appliance hardware platforms 2.4.3 Disk IO Performance Guidelines 3 Legal Information
Deployment guide
This section provides the recommended customer deployment prerequisites for an efficient deployment of BMC Atrium Discovery and a regularly implemented deployment approach. It also provides sections on performance data and specifications of physical and virtual appliances.
Goals:
Complete the prerequisites necessary before deploying BMC Atrium Discovery.
Implement the requirements for deployment of BMC Atrium Discovery.
Review performance data.
Review appliance specifications for all physical and virtual appliances.
Related information: Deployment prerequisites, Deployment requirements, Performance data, Appliance specification.
Benefits:
Know and implement the recommended customer deployment prerequisites for an efficient deployment of BMC Atrium Discovery.
Know and implement the deployment requirements for a regularly implemented approach.
Understand the performance data that helps you plan physical and virtual appliance deployments.
Plan appliance deployment based on specifications relevant to your environment.
No additional software is supported on the appliance BMC Atrium Discovery is built as an appliance that is not intended to have any additional software installed on it, with the single exception of BMC PATROL. If you have an urgent business need to install additional software on the appliance, please contact BMC Customer Support before doing so.
No OS customizations are supported on the appliance The BMC Atrium Discovery software is delivered as an appliance model (either virtual or physical), and includes the entire software stack from the Linux operating system to the BMC application software. The operating system must not be treated as a general-purpose system, but rather as a tightly integrated part of the BMC Atrium Discovery solution. As such, customizations to the operating system should only be made at the command line if explicitly described in this online documentation, or under the guidance of BMC Customer Support. If you have an urgent business need to change any operating system configuration, please contact BMC Customer Support before doing so.
Deployment prerequisites
The deployment prerequisites are listed below in a suggested sequence. Completing all of them should result in a successful deployment.
1. Appliance Installation describes the information required by BMC and provides specification information for physical and virtual appliances.
2. Security provides an overview of the security aspects of BMC Atrium Discovery.
3. Windows proxy deployment describes the requirements for a Windows proxy deployment.
4. Credential Deployment - UNIX describes the requirements for UNIX credential deployment.
5. Credential Deployment - Windows describes the requirements for Windows credential deployment.
6. Credential Deployment - SNMP describes the requirements for SNMP credential deployment.
7. Identify Target Environment requests identification of the subnets and IP addresses to be scanned, and those to be excluded.
8. Approval to Perform Discovery suggests the functional area from which scanning approval should be sought.
9. Access details the methods of access that the BMC Atrium Discovery administrator requires.
10. Network Management (Ciscoworks) data describes importing CiscoWorks data.
11. Service Transition suggests training that may be appropriate for staff after the deployment has completed.
Kickstart DVD
The kickstart DVD can be used to install BMC Atrium Discovery on customer-supplied hardware. See Installing BMC Atrium Discovery for more information. See also hardware platforms for more information about the hardware onto which you can install BMC Atrium Discovery using the kickstart DVD.
Physical Appliance
The BMC Atrium Discovery solution was previously supplied as a physical appliance with all software pre-installed, including the BMC Atrium Discovery application.
Virtual Appliance
Ensure that you have entitlement with BMC to download the Virtual Appliance (VA) from the BMC site http://www.bmc.com/support/downloads-patches. To check your entitlement, contact customer_care@bmc.com. Raise an internal service request to allocate ESX/VMware Server space and to install the Virtual Appliance.
Packaging
The Virtual Appliances are packaged as a compressed zip file that contains a single VMware virtual machine and a PDF of the Getting Started Guide. The VAs are built using VMware ESX version 4.1, and are compatible with current versions of VMware ESX, ESXi, Workstation, and Player. However, deploying the Virtual Appliance into older versions, or into VMware Server, requires an additional conversion process.
Support
For information on how the Virtual Appliance is supported, see supported virtualization platforms.
Community Edition
Users of the Community Edition can get support from the BMC Community; this is best achieved by reading and posting to the public forums. In addition to the products listed above, the BMC Atrium Discovery Virtual Appliance also runs in the following tools:
VMware Player 4: https://my.vmware.com/web/vmware/free#desktop_end_user_computing/vmware_player/4_0
VMware Workstation 8: https://my.vmware.com/web/vmware/info/slug/desktop_end_user_computing/vmware_workstation/8_0
See also: VMware support
Specifications
The following sections give the default specifications of the Virtual Appliances supplied by BMC.
Production Appliance
CPU: 1
RAM: 2048 MB
Hard Disk: 1 x SCSI 55 GB (set to grow as necessary in 2 GB file increments)
CD/DVD Drive: 1 (auto detect)
Network Interface Cards: 1 (eth0), bridged, configured to use DHCP to obtain an IP address
The network interface (eth0) is configured to use DHCP, but it can be changed to use a static IP. During the install process, the following network configuration is required:
DHCP or static IP for the eth0 interface
If static IP, provide the following:
Appliance IP address
Gateway
Subnet mask
DNS server for address lookup
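Before entering a static configuration in the installer, it can help to sanity-check that the address, gateway, and subnet mask you have been allocated are consistent with each other. The following is a minimal sketch using Python's standard ipaddress module; the addresses shown are hypothetical placeholders, not BMC defaults.

```python
import ipaddress

def check_static_config(ip: str, netmask: str, gateway: str) -> bool:
    """Return True if the gateway lies in the same subnet as the appliance IP."""
    # strict=False allows a host address (not just a network address) with a mask
    network = ipaddress.ip_network(f"{ip}/{netmask}", strict=False)
    return ipaddress.ip_address(gateway) in network

# Hypothetical values gathered from the network team:
print(check_static_config("192.168.10.25", "255.255.255.0", "192.168.10.1"))  # True
print(check_static_config("192.168.10.25", "255.255.255.0", "192.168.99.1"))  # False
```

A gateway outside the appliance's subnet usually indicates a transcription error in one of the three values, which is cheaper to catch before the install than after.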
Defaults
For initial testing at a small scale, the default configuration is sufficient. For production use, or use at scale, you must increase the RAM, CPU, and disk configuration in accordance with the sizing guidelines. A single size of Virtual Appliance is supplied, because a single size simplifies delivery and does not require you to download differently sized VMs for different roles, such as scanning and consolidation appliances. 2 GB RAM is insufficient to activate a TKU in an acceptable time; to use a TKU in testing, you must increase the amount of RAM in the system to 4 GB or more.
getFileSystems
The getFileSystems discovery method was introduced in BMC Atrium Discovery version 8.2. Windows proxies earlier than version 8.2 do not support this method, although they are supported in all other respects.
Specification
As stated in the tables above:
CPU: 2 GHz Intel Pentium 4 with 512 KB cache (or equivalent from another manufacturer)
RAM: 2 GB
Hard Disk: 60 GB
To avoid any impact during resource-intensive periods of discovery, it is strongly recommended not to install the Windows proxy on any host supporting other business services. This holds even if the minimum Windows proxy specification is exceeded, because the Windows proxy uses whatever resources are available in order to optimize scan throughput.
AIX
The following shell scripts and SNMP queries are used.
Shell scripts
Scripts used: init, device_info, ls and file_content (handled by the getFileMetadata and getFileContent calls), file_metadata, df, hbainfo, host_info, ifconfig_ip. Some of these scripts require elevated privileges.
Other methods used: getMACAddresses *, getNetworkConnectionList, getNetworkInterfaces.
SNMP
Method getDeviceInfo* getHostInfo* MIB Values SNMPv2-MIB::sysDescr.0 SNMPv2-MIB::sysName.0 HOST-RESOURCES-MIB::hrSystemUptime.0 HOST-RESOURCES-MIB::hrMemorySize.0 IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ] IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ] IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ] IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ] getMACAddresses * IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ] IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ] getNetworkConnectionList TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ] TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ] UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ] TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ] IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ] UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ] IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ] OID 1.3.6.1.2.1.1.1.0 1.3.6.1.2.1.1.5.0 1.3.6.1.2.1.25.1.1.0 1.3.6.1.2.1.25.2.2.0 1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ] 1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ] 1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ] 1.3.6.1.2.1.55.1.8.1 [ .1, .2 ] 1.3.6.1.2.1.4.20.1 [ .2, .3, .6, .8 ] 1.3.6.1.2.1.4.35.1 [ .4, .6 ] 1.3.6.1.2.1.4.22.1 [ .2, .4 ] 1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ] 1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ] 1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ]
getIPAddresses
1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ] 1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ] 1.3.6.1.2.1.7.5.1 [ .1, .2 ] 1.3.6.1.2.1.7.6.1 [ .1, .2 ]
getNetworkInterfaces
IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ] IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ] MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ] EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ] IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ] HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ] HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ] IBM-AIX-MIB::aixProcEntry [ aixProcPID, aixProcUID, aixProcPPID, aixProcCMD ]
1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ] 1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ] 1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ] 1.3.6.1.2.1.10.7.2.1 [ .19 ] 1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ] 1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ]
getPackageList
getProcessList
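In the SNMP tables above, an entry such as IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ] paired with the OID 1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ] means that each bracketed column suffix is appended to the table-entry OID to form the full column OID that is walked. A small sketch of that expansion follows; the helper name is illustrative, not part of the product.

```python
def expand_columns(entry_oid: str, suffixes: list[str]) -> list[str]:
    """Expand a table-entry OID plus column suffixes into full column OIDs."""
    return [entry_oid + s for s in suffixes]

# IF-MIB::ifEntry columns ifDescr (.2), ifType (.3), ifOperStatus (.8):
print(expand_columns("1.3.6.1.2.1.2.2.1", [".2", ".3", ".8"]))
# ['1.3.6.1.2.1.2.2.1.2', '1.3.6.1.2.1.2.2.1.3', '1.3.6.1.2.1.2.2.1.8']
```

The same notation applies to every OS-specific table in this appendix, so the listed OIDs can be fed directly to a standard SNMP walking tool once expanded.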
FreeBSD
The following shell scripts and SNMP queries are used.
Shell scripts
Scripts used: init, device_info, ls and file_content (handled by the getFileMetadata and getFileContent calls), file_metadata, df, host_info, ifconfig_ip, ifconfig_mac, netstat, pkg_info, ps, lsof-i. Some of these scripts require elevated privileges.
SNMP
MIB Values SNMPv2-MIB::sysDescr.0 SNMPv2-MIB::sysName.0 HOST-RESOURCES-MIB::hrSystemUptime.0 HOST-RESOURCES-MIB::hrMemorySize.0 IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ] IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ] IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ] IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ]
getIPAddresses
1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ] 1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ] 1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ] 1.3.6.1.2.1.55.1.8.1 [ .1, .2 ]
getMACAddresses*
IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ] IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ]
1.3.6.1.2.1.4.20.1 [ .2, .3, .6, .8 ] 1.3.6.1.2.1.4.35.1 [ .4, .6 ] 1.3.6.1.2.1.4.22.1 [ .2, .4 ] 1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ] 1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ] 1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ]
getNetworkConnectionList
TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ] TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ] UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ] TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ] IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ] UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ] IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ]
1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ] 1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ] 1.3.6.1.2.1.7.5.1 [ .1, .2 ] 1.3.6.1.2.1.7.6.1 [ .1, .2 ]
getNetworkInterfaces
IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ] IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ] MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ] EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ] IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ]
1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ] 1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ] 1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ] 1.3.6.1.2.1.10.7.2.1 [ .19 ] 1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ] 1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ]
getPackageList
1.3.6.1.2.1.25.6.3.1 [ .2 ]
getProcessList
HPUX
The following shell scripts and SNMP queries are used.
Shell scripts
Method  Script
initialise  init
getDeviceInfo *  device_info
(ls and file_content are handled by the getFileMetadata and getFileContent calls)
getFileMetadata  file_metadata
getFileSystems  bdf
getHBAList  fcmsutil
getHostInfo *  host_info
getIPAddresses  ifconfig_ip
getMACAddresses *  lanscan_mac
getNetworkConnectionList  netstat
getNetworkInterfaces  interface_commands
getPackageList  swlist
getPatchList  patch_list
getProcessList  ps
getProcessToConnectionMapping  lsof-i
Some of these scripts require elevated privileges.
SNMP
Method getDeviceInfo* getHostInfo* MIB Values SNMPv2-MIB::sysDescr.0 SNMPv2-MIB::sysName.0 HOST-RESOURCES-MIB::hrSystemUptime.0 HOST-RESOURCES-MIB::hrMemorySize.0 IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ] IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ] OID 1.3.6.1.2.1.1.1.0 1.3.6.1.2.1.1.5.0 1.3.6.1.2.1.25.1.1.0 1.3.6.1.2.1.25.2.2.0 getIPAddresses 1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ] 1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ]
IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ] IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ] getMACAddresses * IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ] IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ] getNetworkConnectionList TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ] TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ] UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ] TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ] IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ] UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ] IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ] getNetworkInterfaces IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ] IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ] MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ] EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ] IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ] HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ] HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]
1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ] 1.3.6.1.2.1.55.1.8.1 [ .1, .2 ] 1.3.6.1.2.1.4.20.1 [ .2, .3, .6, .8 ] 1.3.6.1.2.1.4.35.1 [ .4, .6 ] 1.3.6.1.2.1.4.22.1 [ .2, .4 ] 1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ] 1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ] 1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ]
1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ] 1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ] 1.3.6.1.2.1.7.5.1 [ .1, .2 ] 1.3.6.1.2.1.7.6.1 [ .1, .2 ]
1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ] 1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ] 1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ] 1.3.6.1.2.1.10.7.2.1 [ .19 ] 1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ] 1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ]
getPackageList
1.3.6.1.2.1.25.6.3.1 [ .2 ]
getProcessList
IRIX
The following shell scripts and SNMP queries are used.
Shell scripts
Method  Script
initialise  init
getDeviceInfo *  device_info
(ls and file_content are handled by the getFileMetadata and getFileContent calls)
getFileMetadata  file_metadata
getFileSystems  df
getHostInfo *  host_info
getIPAddresses  ifconfig_ip
getMACAddresses *  netstat_mac
getNetworkConnectionList  netstat
getNetworkInterfaces  ifconfig_if
getPackageList  showprods
getProcessList  ps
getProcessToConnectionMapping  lsof-i
Some of these scripts require elevated privileges.
SNMP
Method getDeviceInfo* getHostInfo* MIB Values SNMPv2-MIB::sysDescr.0 SNMPv2-MIB::sysName.0 HOST-RESOURCES-MIB::hrSystemUptime.0 HOST-RESOURCES-MIB::hrMemorySize.0 IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ] IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ] IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ] IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ] getMACAddresses * IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ] IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ] OID 1.3.6.1.2.1.1.1.0 1.3.6.1.2.1.1.5.0 1.3.6.1.2.1.25.1.1.0 1.3.6.1.2.1.25.2.2.0 getIPAddresses 1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ] 1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ] 1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ] 1.3.6.1.2.1.55.1.8.1 [ .1, .2 ] 1.3.6.1.2.1.4.20.1 [ .2, .3, .6, .8 ] 1.3.6.1.2.1.4.35.1 [ .4, .6 ] 1.3.6.1.2.1.4.22.1 [ .2, .4 ]
getNetworkConnectionList
TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ] TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ] UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ] TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ] IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ] UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ] IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ]
1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ] 1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ] 1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ]
1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ] 1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ] 1.3.6.1.2.1.7.5.1 [ .1, .2 ] 1.3.6.1.2.1.7.6.1 [ .1, .2 ]
getNetworkInterfaces
IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ] IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ] MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ] EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ] IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ] HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ] HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]
1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ] 1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ] 1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ] 1.3.6.1.2.1.10.7.2.1 [ .19 ] 1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ] 1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ]
getPackageList
getProcessList
Linux
The following shell scripts and SNMP queries are used.
Shell scripts
Method  Script
initialise  init
getDeviceInfo *  device_info
(ls and file_content are handled by the getFileMetadata and getFileContent calls)
getFileMetadata  file_metadata
getFileSystems
getHBAList
getHostInfo *  host_info
getIPAddresses  ip_addr_ip, ifconfig_ip
getMACAddresses *  ip_link_mac, ifconfig_mac
getNetworkConnectionList  netstat
getNetworkInterfaces  ip_link_if, ifconfig_if
getPackageList  rpmx, rpm, dpkg
getProcessList  ps
getProcessToConnectionMapping  lsof-i
Some of these scripts require elevated privileges.
SNMP
Method getDeviceInfo* getHostInfo* MIB Values SNMPv2-MIB::sysDescr.0 SNMPv2-MIB::sysName.0 HOST-RESOURCES-MIB::hrSystemUptime.0 HOST-RESOURCES-MIB::hrMemorySize.0 IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ] IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ] IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ] IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ] getMACAddresses * IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ] IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ] OID 1.3.6.1.2.1.1.1.0 1.3.6.1.2.1.1.5.0 1.3.6.1.2.1.25.1.1.0 1.3.6.1.2.1.25.2.2.0 getIPAddresses 1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ] 1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ] 1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ] 1.3.6.1.2.1.55.1.8.1 [ .1, .2 ] 1.3.6.1.2.1.4.20.1 [ .2, .3, .6, .8 ] 1.3.6.1.2.1.4.35.1 [ .4, .6 ] 1.3.6.1.2.1.4.22.1 [ .2, .4 ]
getNetworkConnectionList
TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ] TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ] UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ] TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ] IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ] UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ] IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ]
1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ] 1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ] 1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ]
1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ] 1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ] 1.3.6.1.2.1.7.5.1 [ .1, .2 ] 1.3.6.1.2.1.7.6.1 [ .1, .2 ]
getNetworkInterfaces
IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ] IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ] MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ] EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ] IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ] HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ] HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]
1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ] 1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ] 1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ] 1.3.6.1.2.1.10.7.2.1 [ .19 ] 1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ] 1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ]
getPackageList
1.3.6.1.2.1.25.6.3.1 [ .2 ]
getProcessList
Mac OS X
The following shell scripts and SNMP queries are used.
Shell scripts
Scripts used: init, device_info, ls and file_content (handled by the getFileMetadata and getFileContent calls), and file_metadata (used by getFileMetadata). Some of these scripts require elevated privileges.
SNMP
Method getDeviceInfo* getHostInfo* MIB Values SNMPv2-MIB::sysDescr.0 SNMPv2-MIB::sysName.0 HOST-RESOURCES-MIB::hrSystemUptime.0 HOST-RESOURCES-MIB::hrMemorySize.0 IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ] IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ] IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ] IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ] getMACAddresses* IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ] IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ] getNetworkConnectionList TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ] TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ] UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ] OID 1.3.6.1.2.1.1.1.0 1.3.6.1.2.1.1.5.0 1.3.6.1.2.1.25.1.1.0 1.3.6.1.2.1.25.2.2.0 getIPAddresses 1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ] 1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ] 1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ] 1.3.6.1.2.1.55.1.8.1 [ .1, .2 ] 1.3.6.1.2.1.4.20.1 [ .2, .3, .6, .8 ] 1.3.6.1.2.1.4.35.1 [ .4, .6 ] 1.3.6.1.2.1.4.22.1 [ .2, .4 ] 1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ] 1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ] 1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ]
TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ] IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ] UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ] IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ] getNetworkInterfaces IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ] IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ] MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ] EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ] IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ] HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ] HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]
1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ] 1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ] 1.3.6.1.2.1.7.5.1 [ .1, .2 ] 1.3.6.1.2.1.7.6.1 [ .1, .2 ]
1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ] 1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ] 1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ] 1.3.6.1.2.1.10.7.2.1 [ .19 ] 1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ] 1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ]
getPackageList
1.3.6.1.2.1.25.6.3.1 [ .2 ]
getProcessList
NetBSD
The following shell scripts and SNMP queries are used.
Shell scripts
Scripts used: init, device_info, ls and file_content (handled by the getFileMetadata and getFileContent calls), file_metadata, df, host_info, ifconfig_ip, ifconfig_mac, netstat, ifconfig_if. Some of these scripts require elevated privileges.
SNMP
Method getDeviceInfo* getHostInfo* MIB Values SNMPv2-MIB::sysDescr.0 SNMPv2-MIB::sysName.0 HOST-RESOURCES-MIB::hrSystemUptime.0 HOST-RESOURCES-MIB::hrMemorySize.0 IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ] IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ] IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ] IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ] getMACAddresses * IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ] IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ] getNetworkConnectionList TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ] TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ] UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ] TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ] IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ] UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ] IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ] OID 1.3.6.1.2.1.1.1.0 1.3.6.1.2.1.1.5.0 1.3.6.1.2.1.25.1.1.0 1.3.6.1.2.1.25.2.2.0 getIPAddresses 1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ] 1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ] 1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ] 1.3.6.1.2.1.55.1.8.1 [ .1, .2 ] 1.3.6.1.2.1.4.20.1 [ .2, .3, .6, .8 ] 1.3.6.1.2.1.4.35.1 [ .4, .6 ] 1.3.6.1.2.1.4.22.1 [ .2, .4 ] 1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ] 1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ] 1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ]
1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ] 1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ] 1.3.6.1.2.1.7.5.1 [ .1, .2 ] 1.3.6.1.2.1.7.6.1 [ .1, .2 ]
getNetworkInterfaces
IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ] IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ] MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ] EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ] IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ] IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ] HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ] HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]
1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ] 1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ] 1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ] 1.3.6.1.2.1.10.7.2.1 [ .19 ] 1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ] 1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ]
getPackageList
1.3.6.1.2.1.25.6.3.1 [ .2 ]
getProcessList
OpenBSD
The following shell scripts and SNMP queries are used.
Shell scripts
Method  Script
initialise  init
getDeviceInfo *  device_info
(ls and file_content are handled by the getFileMetadata and getFileContent calls)
getFileMetadata  file_metadata
getFileSystems  df
getHostInfo *  host_info
getIPAddresses  ifconfig_ip
getMACAddresses *  ifconfig_mac
getNetworkConnectionList  netstat
getNetworkInterfaces  ifconfig_if
getPackageList  pkg_info
getProcessList  ps
getProcessToConnectionMapping  lsof-i
Some of these scripts require elevated privileges.
SNMP
Method MIB Values OID
getDeviceInfo* getHostInfo*
SNMPv2-MIB::sysDescr.0 SNMPv2-MIB::sysName.0 HOST-RESOURCES-MIB::hrSystemUptime.0 HOST-RESOURCES-MIB::hrMemorySize.0 IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ] IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ] IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ] IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ]
1.3.6.1.2.1.1.1.0 1.3.6.1.2.1.1.5.0 1.3.6.1.2.1.25.1.1.0 1.3.6.1.2.1.25.2.2.0 1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ] 1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ] 1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ] 1.3.6.1.2.1.55.1.8.1 [ .1, .2 ] 1.3.6.1.2.1.4.20.1 [ .2, .3, .6, .8 ] 1.3.6.1.2.1.4.35.1 [ .4, .6 ] 1.3.6.1.2.1.4.22.1 [ .2, .4 ] 1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ] 1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ] 1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ]
getIPAddresses
getMACAddresses *
  MIB values: IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ], IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ], IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ]
getNetworkConnectionList
  MIB values: TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ], TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ], UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ], TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ], IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ], UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ], IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ]
  OIDs: 1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ], 1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ], 1.3.6.1.2.1.7.5.1 [ .1, .2 ], 1.3.6.1.2.1.7.6.1 [ .1, .2 ]
getNetworkInterfaces
  MIB values: IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ], IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ], MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ], EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ], IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ], IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ]
  OIDs: 1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ], 1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ], 1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ], 1.3.6.1.2.1.10.7.2.1 [ .19 ], 1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ], 1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ]
getPackageList
  MIB values: HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ]
  OIDs: 1.3.6.1.2.1.25.6.3.1 [ .2 ]
getProcessList
  MIB values: HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]
Page 779
OpenVMS
Shell scripts
Methods: initialise, getDeviceInfo *, getFileSystems, getHostInfo *, getIPAddresses
Scripts: init, deviceinfo, show-mount, hostinfo, ifconfig_ip, tcpip_show_ip, tcpip_show_mac, show-network, ifdetails, show-product, show-system
SNMP
getDeviceInfo * / getHostInfo *
  MIB values: SNMPv2-MIB::sysDescr.0, SNMPv2-MIB::sysName.0, HOST-RESOURCES-MIB::hrSystemUptime.0, HOST-RESOURCES-MIB::hrMemorySize.0
  OIDs: 1.3.6.1.2.1.1.1.0, 1.3.6.1.2.1.1.5.0, 1.3.6.1.2.1.25.1.1.0, 1.3.6.1.2.1.25.2.2.0
getIPAddresses
  MIB values: IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ], IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ], IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ], IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ]
  OIDs: 1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ], 1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ], 1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ], 1.3.6.1.2.1.55.1.8.1 [ .1, .2 ]
getMACAddresses *
  MIB values: IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ], IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ], IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ]
  OIDs: 1.3.6.1.2.1.4.20.1 [ .2, .3, .6, .8 ], 1.3.6.1.2.1.4.35.1 [ .4, .6 ], 1.3.6.1.2.1.4.22.1 [ .2, .4 ]
getNetworkConnectionList
  MIB values: TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ], TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ], UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ], TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ], IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ], UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ], IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ]
  OIDs: 1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ], 1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ], 1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ], 1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ], 1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ], 1.3.6.1.2.1.7.5.1 [ .1, .2 ], 1.3.6.1.2.1.7.6.1 [ .1, .2 ]
getNetworkInterfaces
  MIB values: IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ], IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ], MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ], EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ], IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ], IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ]
  OIDs: 1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ], 1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ], 1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ], 1.3.6.1.2.1.10.7.2.1 [ .19 ], 1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ], 1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ]
getPackageList
  MIB values: HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ]
  OIDs: 1.3.6.1.2.1.25.6.3.1 [ .2 ]
getProcessList
  MIB values: HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]
POWER HMC
Shell scripts
Method | Script
initialise | init
getDeviceInfo * | device_info
getFileSystems | monhmc-rdisk
getHostInfo * | host_info
getIPAddresses | lshmc-n
getMACAddresses * | netstat-ine
getNetworkConnectionList | netstat
getNetworkInterfaces | lshmc_netstat
SNMP
getDeviceInfo * / getHostInfo *
  MIB values: SNMPv2-MIB::sysDescr.0, SNMPv2-MIB::sysName.0, HOST-RESOURCES-MIB::hrSystemUptime.0, HOST-RESOURCES-MIB::hrMemorySize.0
  OIDs: 1.3.6.1.2.1.1.1.0, 1.3.6.1.2.1.1.5.0, 1.3.6.1.2.1.25.1.1.0, 1.3.6.1.2.1.25.2.2.0
getIPAddresses
  MIB values: IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ], IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ], IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ], IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ]
  OIDs: 1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ], 1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ], 1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ], 1.3.6.1.2.1.55.1.8.1 [ .1, .2 ]
getMACAddresses *
  MIB values: IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ], IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ], IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ]
  OIDs: 1.3.6.1.2.1.4.20.1 [ .2, .3, .6, .8 ], 1.3.6.1.2.1.4.35.1 [ .4, .6 ], 1.3.6.1.2.1.4.22.1 [ .2, .4 ]
getNetworkConnectionList
  MIB values: TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ], TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ], UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ], TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ], IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ], UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ], IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ]
  OIDs: 1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ], 1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ], 1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ], 1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ], 1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ], 1.3.6.1.2.1.7.5.1 [ .1, .2 ], 1.3.6.1.2.1.7.6.1 [ .1, .2 ]
getNetworkInterfaces
  MIB values: IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ], IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ], MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ], EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ], IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ], IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ]
  OIDs: 1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ], 1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ], 1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ], 1.3.6.1.2.1.10.7.2.1 [ .19 ], 1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ], 1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ]
getPackageList
  MIB values: HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ]
  OIDs: 1.3.6.1.2.1.25.6.3.1 [ .2 ]
getProcessList
  MIB values: HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]
Solaris
Shell scripts
Method Script Privileges required
initialise | init
getDeviceInfo * | device_info
getDirectoryListing | ls
getFileContent | file_content
getFileInfo | Handled by the getFileMetadata and getFileContent calls.
getFileMetadata | file_metadata
getFileSystems | df
getHBAInfo | hba_fcinfo, hba_emlxadm, hba_hbacmd, hba_lputil
getProcessList | ps
getProcessToConnectionMapping | lsof-i, pfiles
SNMP
getDeviceInfo * / getHostInfo *
  MIB values: SNMPv2-MIB::sysDescr.0, SNMPv2-MIB::sysName.0, HOST-RESOURCES-MIB::hrSystemUptime.0, HOST-RESOURCES-MIB::hrMemorySize.0
  OIDs: 1.3.6.1.2.1.1.1.0, 1.3.6.1.2.1.1.5.0, 1.3.6.1.2.1.25.1.1.0, 1.3.6.1.2.1.25.2.2.0
getIPAddresses
  MIB values: IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ], IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ], IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ], IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ]
  OIDs: 1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ], 1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ], 1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ], 1.3.6.1.2.1.55.1.8.1 [ .1, .2 ]
getMACAddresses *
  MIB values: IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ], IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ], IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ]
  OIDs: 1.3.6.1.2.1.4.20.1 [ .2, .3, .6, .8 ], 1.3.6.1.2.1.4.35.1 [ .4, .6 ], 1.3.6.1.2.1.4.22.1 [ .2, .4 ]
getNetworkConnectionList
  MIB values: TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ], TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ], UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ], TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ], IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ], UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ], IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ]
  OIDs: 1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ], 1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ], 1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ], 1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ], 1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ], 1.3.6.1.2.1.7.5.1 [ .1, .2 ], 1.3.6.1.2.1.7.6.1 [ .1, .2 ]
getNetworkInterfaces
  MIB values: IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ], IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ], MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ], EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ], IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ], IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ]
  OIDs: 1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ], 1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ], 1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ], 1.3.6.1.2.1.10.7.2.1 [ .19 ], 1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ], 1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ]
getPackageList
  MIB values: HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ]
  OIDs: 1.3.6.1.2.1.25.6.3.1 [ .2 ]
getProcessList
  MIB values: HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ], SUN-SNMP-MIB::psEntry [ psProcessID, psParentProcessID, psProcessUserName, psProcessUserID, psProcessName ]
  OIDs: 1.3.6.1.2.1.25.4.2.1 [ .1, .2, .4, .5 ], 1.3.6.1.4.1.42.3.12.1 [ .1, .2, .8, .9, .10 ]
Tru64
Shell scripts
Method Script Privileges required
initialise | init
getDeviceInfo * | device_info
getDirectoryListing | ls
getFileContent | file_content
getFileInfo | Handled by the getFileMetadata and getFileContent calls.
getFileMetadata | file_metadata
getFileSystems | df
getHostInfo * | host_info
getIPAddresses | ifconfig_ip
getMACAddresses * | ifconfig_mac
getNetworkConnectionList | netstat
getNetworkInterfaces | ifconfig_if
getPackageList | setld
getProcessList | ps
getProcessToConnectionMapping | lsof-i
SNMP
Copyright 2003-2013, BMC Software Page 785
getDeviceInfo * / getHostInfo *
  MIB values: SNMPv2-MIB::sysDescr.0, SNMPv2-MIB::sysName.0, HOST-RESOURCES-MIB::hrSystemUptime.0, HOST-RESOURCES-MIB::hrMemorySize.0
getIPAddresses
  MIB values: IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ], IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ], IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ], IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ]
  OIDs: 1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ], 1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ], 1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ], 1.3.6.1.2.1.55.1.8.1 [ .1, .2 ]
getMACAddresses *
  MIB values: IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ], IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ], IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ]
  OIDs: 1.3.6.1.2.1.4.20.1 [ .2, .3, .6, .8 ], 1.3.6.1.2.1.4.35.1 [ .4, .6 ], 1.3.6.1.2.1.4.22.1 [ .2, .4 ]
getNetworkConnectionList
  MIB values: TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ], TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ], UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ], TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ], IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ], UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ], IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ]
  OIDs: 1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ], 1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ], 1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ], 1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ], 1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ], 1.3.6.1.2.1.7.5.1 [ .1, .2 ], 1.3.6.1.2.1.7.6.1 [ .1, .2 ]
getNetworkInterfaces
  MIB values: IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ], IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ], MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ], EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ], IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ], IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ]
  OIDs: 1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ], 1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ], 1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ], 1.3.6.1.2.1.10.7.2.1 [ .19 ], 1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ], 1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ]
getPackageList
  MIB values: HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ]
  OIDs: 1.3.6.1.2.1.25.6.3.1 [ .2 ]
getProcessList
UnixWare
Shell scripts
Method Script Privileges required
initialise | init
getDeviceInfo * | device_info
getDirectoryListing | ls
getFileContent | file_content
getFileInfo | Handled by the getFileMetadata and getFileContent calls.
getFileMetadata | file_metadata
getFileSystems | df
getHostInfo * | host_info
getIPAddresses | ifconfig_ip
getMACAddresses * | ifconfig_mac
getNetworkConnectionList | netstat
getNetworkInterfaces | ifconfig_if
getPackageList | pkginfo
getProcessList | ps
getProcessToConnectionMapping | lsof-i
SNMP
getDeviceInfo * / getHostInfo *
  MIB values: SNMPv2-MIB::sysDescr.0, SNMPv2-MIB::sysName.0, HOST-RESOURCES-MIB::hrSystemUptime.0, HOST-RESOURCES-MIB::hrMemorySize.0
  OIDs: 1.3.6.1.2.1.1.1.0, 1.3.6.1.2.1.1.5.0, 1.3.6.1.2.1.25.1.1.0, 1.3.6.1.2.1.25.2.2.0
getIPAddresses
  MIB values: IF-MIB::ifEntry [ ifDescr, ifType, ifOperStatus ], IP-MIB::ipAddressEntry [ ipAddressAddr, ipAddressIfIndex, ipAddressType, ipAddressPrefix ], IP-MIB::ipAddrEntry [ ipAdEntAddr, ipAdEntIfIndex, ipAdEntNetMask ], IPV6-MIB::ipv6AddrEntry [ ipv6AddrAddress, ipv6AddrPfxLength ]
  OIDs: 1.3.6.1.2.1.2.2.1 [ .2, .3, .8 ], 1.3.6.1.2.1.4.34.1 [ .2, .3, .4, .5 ], 1.3.6.1.2.1.4.20.1 [ .1, .2, .3 ], 1.3.6.1.2.1.55.1.8.1 [ .1, .2 ]
getMACAddresses *
  MIB values: IF-MIB::ifEntry [ ifDescr, ifType, ifPhysAddress, ifOperStatus ], IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalPhysAddress, ipNetToPhysicalType ], IP-MIB::ipNetToMediaEntry [ ipNetToMediaPhysAddress, ipNetToMediaType ]
  OIDs: 1.3.6.1.2.1.4.20.1 [ .2, .3, .6, .8 ], 1.3.6.1.2.1.4.35.1 [ .4, .6 ], 1.3.6.1.2.1.4.22.1 [ .2, .4 ]
getNetworkConnectionList
  MIB values: TCP-MIB::tcpConnectionEntry [ tcpConnectionLocalAddress, tcpConnectionLocalPort, tcpConnectionRemAddress, tcpConnectionRemPort, tcpConnectionState, tcpConnectionProcess ], TCP-MIB::tcpListenerEntry [ tcpListenerLocalAddress, tcpListenerLocalPort, tcpListenerProcess ], UDP-MIB::udpEndpointEntry [ udpEndpointLocalAddress, udpEndpointLocalPort, udpEndpointProcess ], TCP-MIB::tcpConnEntry [ tcpConnState, tcpConnLocalAddress, tcpConnLocalPort, tcpConnRemAddress, tcpConnRemPort ], IPV6-TCP-MIB::ipv6TcpConnEntry [ ipv6TcpConnLocalAddress, ipv6TcpConnLocalPort, ipv6TcpConnRemAddress, ipv6TcpConnRemPort, ipv6TcpConnState ], UDP-MIB::udpConnEntry [ udpLocalAddress, udpLocalPort ], IPV6-UDP-MIB::ipv6UdpEntry [ ipv6UdpLocalAddress, ipv6UdpLocalPort ]
  OIDs: 1.3.6.1.2.1.6.19.1 [ .2, .3, .5, .6, .7, .8 ], 1.3.6.1.2.1.6.20.1 [ .2, .3, .4 ], 1.3.6.1.2.1.7.7.1 [ .2, .3, .8 ], 1.3.6.1.2.1.6.13.1 [ .1, .2, .3, .4, .5 ], 1.3.6.1.2.1.6.16.1 [ .1, .2, .3, .4, .6 ], 1.3.6.1.2.1.7.5.1 [ .1, .2 ], 1.3.6.1.2.1.7.6.1 [ .1, .2 ]
getNetworkInterfaces
  MIB values: IF-MIB::ifEntry [ ifIndex, ifDescr, ifType, ifSpeed, ifPhysAddress, ifOperStatus ], IF-MIB::ifXEntry [ ifAlias, ifName, ifHighSpeed ], MAU-MIB::ifMauEntry [ ifMauIfIndex, ifMauType, ifMauAutoNegSupported ], EtherLike-MIB::dot3StatsEntry [ dot3StatsDuplexStatus ], IP-MIB::ipNetToPhysicalEntry [ ipNetToPhysicalIfIndex, ipNetToPhysicalPhysAddress, ipNetToPhysicalType ], IP-MIB::ipNetToMediaEntry [ ipNetToMediaIfIndex, ipNetToMediaPhysAddress, ipNetToMediaType ]
  OIDs: 1.3.6.1.2.1.2.2.1 [ .1, .2, .3, .5, .6, .8 ], 1.3.6.1.2.1.31.1.1.1 [ .1, .15, .18 ], 1.3.6.1.2.1.26.2.1.1 [ .1, .3, .12 ], 1.3.6.1.2.1.10.7.2.1 [ .19 ], 1.3.6.1.2.1.4.35.1 [ .1, .4, .6 ], 1.3.6.1.2.1.4.22.1 [ .1, .2, .4 ]
getPackageList
  MIB values: HOST-RESOURCES-MIB::hrSWInstalledTable [ hrSWInstalledName ]
  OIDs: 1.3.6.1.2.1.25.6.3.1 [ .2 ]
getProcessList
  MIB values: HOST-RESOURCES-MIB::hrSWRunTable [ hrSWRunIndex, hrSWRunName, hrSWRunPath, hrSWRunParameters ]
VMware ESX
vSphere
Method (Managed Object Reference): Properties / Notes
getDeviceInfo * (HostSystem)
  name, content.about
getFileSystems (Datastore)
  summary.name, summary.type, summary.capacity, summary.freeSpace, summary.url, info.vmfs.uuid (VMFS volumes only), info.nas.remoteHost (NFS & CIFS volumes only), info.nas.remotePath (NFS & CIFS volumes only), info.nas.userName (CIFS volumes only), info.path (LOCAL volumes only)
getHBAInfo (HostSystem)
getHostInfo * (HostSystem)
  hardware.memorySize, hardware.systemInfo.model, hardware.systemInfo.vendor, hardware.systemInfo.uuid, hardware.systemInfo.otherIdentifyingInfo[].identifierType.key, hardware.systemInfo.otherIdentifyingInfo[].identifierValue, hardware.cpuPkg.threadId, hardware.cpuPkg.numCpuCores, hardware.cpuInfo.threadId, config.hyperThread.active, config.network.dnsConfig.dnsDomainName, config.network.dnsConfig.hostName
  Notes: Not used if unset; Not used if unset; Not used if unset or "unknown"; Not used if unset; Not used if unset
getIPAddresses (HostSystem)
  config.network.pnic[].device, config.network.pnic[].spec.ip.ipAddress, config.network.pnic[].spec.ip.subnetMask, config.network.pnic[].spec.ip.ipV6Config.ipV6Address[].ipAddress, config.network.pnic[].spec.ip.ipV6Config.ipV6Address[].prefixLength, config.network.vnic[].device, config.network.vnic[].spec.ip.ipAddress, config.network.vnic[].spec.ip.subnetMask, config.network.vnic[].spec.ip.ipV6Config.ipV6Address[].ipAddress, config.network.vnic[].spec.ip.ipV6Config.ipV6Address[].prefixLength, config.network.consoleVnic[].device, config.network.consoleVnic[].spec.ip.ipAddress, config.network.consoleVnic[].spec.ip.subnetMask, config.network.consoleVnic[].spec.ip.ipV6Config.ipV6Address[].ipAddress, config.network.consoleVnic[].spec.ip.ipV6Config.ipV6Address[].prefixLength
  Notes: Not used if unset; Not used if unset
getPatchList (HostSystem)
  configManager.patchManager
getNetworkInterfaces (HostSystem)
getMACAddresses * (HostSystem)
getVirtualMachines (VirtualMachine)
Shell scripts
Method Script Privileges required
getFileContent | file_content
getFileInfo | Handled by the getFileMetadata and getFileContent calls.
getFileMetadata | file_metadata
getFileSystems | df
getHBAInfo | hba_sysfs, hba_procfs, hba_hbacmd, hba_lputil
getHostInfo * | host_info
getIPAddresses | ip_addr_ip, ifconfig_ip
getMACAddresses * | ip_link_mac, ifconfig_mac
getNetworkConnectionList | netstat
getNetworkInterfaces | ip_link_if, ifconfig_if
getPackageList | rpmx, rpm, dpkg
getProcessList | ps
getProcessToConnectionMapping | lsof-i
VMware ESXi
vSphere
Method (Managed Object Reference): Properties / Notes
getDeviceInfo * (HostSystem, ServiceInstance)
  name, content.about
getFileSystems (Datastore)
  summary.name, summary.type, summary.capacity, summary.freeSpace, summary.url, info.vmfs.uuid (VMFS volumes only), info.nas.remoteHost (NFS & CIFS volumes only), info.nas.remotePath (NFS & CIFS volumes only), info.nas.userName (CIFS volumes only), info.path (LOCAL volumes only)
getHBAInfo (HostSystem)
  config.storageDevice.hostBusAdapter[].device, config.storageDevice.hostBusAdapter[].nodeWorldWideName, config.storageDevice.hostBusAdapter[].portWorldWideName, config.storageDevice.hostBusAdapter[].model, config.storageDevice.hostBusAdapter[].driver
getHostInfo * (HostSystem)
  hardware.memorySize, hardware.systemInfo.model, hardware.systemInfo.vendor, hardware.systemInfo.uuid, hardware.systemInfo.otherIdentifyingInfo[].identifierType.key, hardware.systemInfo.otherIdentifyingInfo[].identifierValue, hardware.cpuPkg.threadId, hardware.cpuPkg.numCpuCores, hardware.cpuInfo.threadId, config.hyperThread.active, config.network.dnsConfig.dnsDomainName, config.network.dnsConfig.hostName
getIPAddresses (HostSystem)
  config.network.pnic[].device, config.network.pnic[].spec.ip.ipAddress, config.network.pnic[].spec.ip.subnetMask, config.network.pnic[].spec.ip.ipV6Config.ipV6Address[].ipAddress, config.network.pnic[].spec.ip.ipV6Config.ipV6Address[].prefixLength, config.network.vnic[].device, config.network.vnic[].spec.ip.ipAddress, config.network.vnic[].spec.ip.subnetMask, config.network.vnic[].spec.ip.ipV6Config.ipV6Address[].ipAddress, config.network.vnic[].spec.ip.ipV6Config.ipV6Address[].prefixLength, config.network.consoleVnic[].device, config.network.consoleVnic[].spec.ip.ipAddress, config.network.consoleVnic[].spec.ip.subnetMask, config.network.consoleVnic[].spec.ip.ipV6Config.ipV6Address[].ipAddress, config.network.consoleVnic[].spec.ip.ipV6Config.ipV6Address[].prefixLength
getNetworkInterfaces (HostSystem)
  config.network.pnic[].device, config.network.pnic[].driver, config.network.pnic[].mac, config.network.pnic[].spec.linkSpeed.speedMb, config.network.pnic[].spec.linkSpeed.duplex, config.network.vnic[].device, config.network.vnic[].spec.mac, config.network.consoleVnic[].device, config.network.consoleVnic[].spec.mac
getMACAddresses * (HostSystem)
  config.network.pnic[].mac, config.network.vnic[].spec.mac, config.network.consoleVnic[].spec.mac
getPatchList (HostSystem)
  configManager.patchManager (uses the queryHostPatch() task)
getVirtualMachines (VirtualMachine)
Shell scripts
Method | Script | Privileges required
initialise | init
getDeviceInfo * | device_info
getDirectoryListing | ls
getFileContent | file_content
getFileInfo | Handled by the getFileMetadata and getFileContent calls.
getFileMetadata | file_metadata
getFileSystems | df
getHostInfo * | host_info
getIPAddresses | vim-cmd-addresses
getMACAddresses * | vim-cmd-mac
getNetworkInterfaces | vim-cmd-if-details
getProcessList | ps
When login credentials have not been deployed, SNMP provides sufficient discoverable detail for UNIX and Windows servers to create a Host record in BMC Atrium Discovery. In this case, the SNMP community strings for these estates should be provided to the BMC Atrium Discovery administrator.
Access
The BMC Atrium Discovery administrator requires a workstation with network access to the appliance through both a web browser and a terminal emulator (for example, PuTTY or F-Secure SSH). WinSCP is also helpful for moving files to or from the appliance.
Use of imported data and direct discovery together is not supported. If you use the CiscoWorks import tool to import network devices and also scan the same network devices, you can generate duplicate network devices. BMC recommends that you use either direct discovery or import, not both.
The CiscoWorks importer loads the data file exported from CiscoWorks LMS using the Campus Manager User Tracking tool, loads any existing network device data from the BMC Atrium Discovery datastore, and then compares the two sets of data. The supported versions are Campus Manager 3.x and 4.x (UT CLI 1.0, 1.1, and 1.1.1). The network device name is used as the primary key, so two network devices with the same name are treated as the same device. If a new network device is found in the exported file, it is added to the BMC Atrium Discovery datastore. See the note below regarding deletion of network devices. The ports on each network device are then updated, indexed by IP address as their primary key. Port attributes that are updated include speed, duplex, and IP address. New ports (identified by IP address) are added, and any ports removed from the CiscoWorks data are removed from the datastore.
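The comparison logic described above can be sketched as follows. This is an illustrative reconstruction only, not the importer's actual code; the record shapes and field names are assumptions.

```python
def merge_ciscoworks(imported, existing):
    """Merge CiscoWorks export records into existing datastore records.

    Both arguments map device name (the primary key) to a device dict
    whose "ports" entry maps port IP address to port attributes.
    Returns the updated device mapping (illustrative sketch only).
    """
    merged = dict(existing)
    for name, device in imported.items():
        if name not in merged:
            # New network device found in the export: add it
            merged[name] = {"ports": {}}
        ports = merged[name]["ports"]
        new_ports = device["ports"]
        # Update or add ports, indexed by IP address (speed, duplex, ...)
        for ip, attrs in new_ports.items():
            ports.setdefault(ip, {}).update(attrs)
        # Remove ports no longer present in the CiscoWorks data
        for ip in list(ports):
            if ip not in new_ports:
                del ports[ip]
    return merged

existing = {"sw1": {"ports": {"192.168.0.1": {"speed": "100M"}}}}
imported = {"sw1": {"ports": {"192.168.0.2": {"speed": "1G"}}},
            "sw2": {"ports": {}}}
result = merge_ciscoworks(imported, existing)
print(sorted(result))
```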
2. To import the CiscoWorks data file, click Apply. A command-line utility is also available for importing CiscoWorks data; it is described in tw_imp_ciscoworks.
Attribute | Column heading
UserName | User Name
MACAddress | MAC Address
HostName | Host Name
IPAddress | IP Address
Subnet | Subnet
DeviceName | Device Name
Device | Device
Port | Port
PortName | Port Name
PortState | Port State
VTPDomain | VTP Domain
LastSeen | Last Seen
Notes | Prefix
PortDuplex | Port Duplex
PortSpeed | Port Speed
a. Specify the name of the layout as StandardTideway and enter a description.
b. To create the StandardTideway layout, click Add.
This command produces a Java stack trace. This is a known issue and can be ignored. The file that is produced can be imported by running the following command on the BMC Atrium Discovery appliance:
$TIDEWAY/bin/tw_imp_ciscoworks --delimiter=',' --username name --password password --file ~/tmp/data.csv
The file is written into the following directory: C:\PROGRAM FILES\CSCOpx\files\cmexport\ut The file that is produced can be imported with the following command on the BMC Atrium Discovery appliance:
$TIDEWAY/bin/tw_imp_ciscoworks --xml --username name --password password --file ~/tmp/2006516154526ut.xml
The following attributes are imported into the datastore, where they exist in the input file:

MacAddress: MAC address of a port on the connected device (not the MAC address of the network device). Example: 01-23-45-67-89-ab. Must be a valid MAC address.
IPAddress: IP address of a port on the connected device (not the IP address of the network device). Example: 192.168.0.1.
DeviceName: Name of the network device.
Device: IP address of the network device.
Port: Example: 3/19.
PortDuplex: Example: full-duplex.
PortSpeed: Example: 100M. Must be a power of ten with a K/M/G suffix.
PortNegotiation: Example: AUTO. Must be AUTO or FORCED.
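The constraints noted for the imported attributes can be checked mechanically. The sketch below is illustrative only; the regular expressions and field handling are assumptions based on the stated notes, not the importer's actual validation.

```python
import re

# MAC must look like 01-23-45-67-89-ab (per the notes above)
MAC_RE = re.compile(r"^([0-9a-f]{2}-){5}[0-9a-f]{2}$", re.IGNORECASE)
# Speed must be a power of ten with a K/M/G suffix, e.g. 100M or 1G
SPEED_RE = re.compile(r"^10*[KMG]$")

def validate_row(row):
    """Return a list of validation errors for one imported row."""
    errors = []
    if not MAC_RE.match(row.get("MacAddress", "")):
        errors.append("MacAddress must be a valid MAC address")
    speed = row.get("PortSpeed")
    if speed and not SPEED_RE.match(speed):
        errors.append("PortSpeed must be a power of ten with K/M/G suffix")
    neg = row.get("PortNegotiation")
    if neg and neg not in ("AUTO", "FORCED"):
        errors.append("PortNegotiation must be AUTO or FORCED")
    return errors

row = {"MacAddress": "01-23-45-67-89-ab", "IPAddress": "192.168.0.1",
       "Port": "3/19", "PortDuplex": "full-duplex",
       "PortSpeed": "100M", "PortNegotiation": "AUTO"}
print(validate_row(row))
```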
Service Transition
Nominate staff with the required skill set for operational awareness sessions. Nominate staff to attend formal classroom training (if in scope).
DHCP or static IP for the eth0 interface. If static IP, then provide the following:
  Appliance IP address
  Gateway
  Subnet mask
  DNS for address lookup
Defaults
For initial testing at a small scale, the default configuration is sufficient. For production use, or use at scale, you must increase the RAM, CPU, and disk configuration in accordance with the sizing guidelines. A single size of Virtual Appliance is supplied, because one size simplifies delivery and does not require you to download differently sized VMs for different roles, such as scanning and consolidation appliances. 2GB RAM is insufficient to activate a TKU in an acceptable time; to use a TKU in testing, you must increase the amount of RAM in the system to 4GB or more.
Windows proxy
The following table provides information about the compatibility between Windows proxy types and versions, and the operating systems that the Windows proxy runs on, for BMC Atrium Discovery version 9.0.
Windows proxy type: Credential Windows proxy
Earliest Windows proxy version supported: 8.1 (see the getFileSystems note below this table)
Windows proxy available for supported operating systems: Windows 2003 SP2 (x86 and x86_64; IPv4 discovery only), Windows 2008 Service Pack 2 (x86 and x86_64), Windows 2008 R2
getFileSystems
The getFileSystems discovery method was introduced in BMC Atrium Discovery version 8.2. Windows proxies before version 8.2 do not support this method, although they are supported in all other respects.
Hard disk: 60GB
To avoid any impact during resource-intensive periods of discovery, it is strongly recommended that you do not install the Windows proxy on a host supporting other business services. This is true even if the minimum Windows proxy specification is exceeded, because the Windows proxy uses whatever resources are available in order to optimize scan throughput.
Hosting platform for ongoing consumption of baseline data after solution decommissioning
Snapshot of the baseline on a view-only Virtual Appliance running the Community Edition of BMC Atrium Discovery; alternatively, a desktop or laptop.
Additional information
Firewall access
For convenience, a summary of the ports that are potentially used is listed here. See the other references in the Security Document for full details of the use of these ports. Ports that may well be customized in your environment are written in italics.
513 | rlogin | Outbound | UNIX Discovery
Port | Service | Direction | Use | Reference
636 | LDAPS | Outbound | LDAPS UI User Authentication | System communications
902 | vSphere API | Outbound | VMware ESX/ESXi Discovery | Discovery communications
1433 | MS SQL | Outbound | MS SQL Extended Discovery | Discovery communications
1521 | Oracle SQL | Outbound | Oracle SQL Extended Discovery | Discovery communications
3306 | MySQL SQL | Outbound | MySQL SQL Extended Discovery | Discovery communications
3940 | Discovery for z/OS Agent | Outbound | Mainframe Discovery | Discovery communications
4100 | Sybase SQL | Outbound | Sybase ASE SQL Extended Discovery | Discovery communications
4321 | CORBA | Outbound | AD Windows proxy Windows Discovery | Discovery communications
4323 | CORBA | Outbound | Credential Windows proxy Windows Discovery | Discovery communications
7001 | JMX | Outbound | J2EE Extended Discovery | Discovery communications
25032 | CORBA | Outbound | Consolidation | Discovery communications
ARTCPPORT Value | AR System | Outbound | CMDB Sync (standalone appliance; scanning appliances do not sync to the CMDB, which is done from the consolidating appliance) | System communications
Proxy port changes in 8.3 SP2 In BMC Atrium Discovery 8.3 SP2 and later, proxies are not limited to the default ports. It is also possible to install multiple proxies of each type on a single host. Consequently, in BMC Atrium Discovery 8.3 SP2 and later you must check the proxy manager to determine which ports the proxies are using. The defaults are the same as previous releases, but installations of additional proxies use incremental ports. You can also use the proxy manager to modify the port that each proxy uses.
Port Number | Port assignment | Use | Reference
135 | DCE RPC Endpoint Manager, DCOM Service Control | Windows Discovery | Discovery communications
139 | NetBIOS Session Service | Windows Discovery | Discovery communications
389 | LDAP | AD User Authentication | System communications
445 | Microsoft Directory Services SMB | Windows Discovery | Discovery communications
1024-1030 | Firewall Restricted DCOM | Windows Discovery | Discovery communications
1024-65535 | Unrestricted DCOM | Windows Discovery | Discovery communications
4321 | CORBA | AD Windows proxy Windows Discovery | Discovery communications
4323 | CORBA | Credential Windows proxy Windows Discovery | Discovery communications
UNIX discovery
Discovery uses:
  Credentials
  Access Methods
  Discovery Commands
UNIX credentials
Login via: SSH (keys) or user name/password. The preferred method is SSH key authentication. This is based on public-key cryptography, where "encryption and decryption are done using separate keys, and it is not possible to derive the decryption key from the encryption key. The server knows the public key, and only the user knows the private key". The BMC Atrium Discovery appliance counts as the 'user' (or 'client') because it is trying to log in to the target host(s) (the 'server'). For this deployment you would use the private key that matches the public key already deployed in each target host's authorized_keys file. The private key is usually contained in a file named id_dsa or id_rsa and should be put in the /usr/tideway/.ssh/ directory with 600 (rw-------) permissions.
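Staging the private key with the required permissions can be sketched as follows. This is illustrative only: the appliance directory (/usr/tideway/.ssh) is simulated with a temporary directory, and the key content is a placeholder.

```python
import os
import stat
import tempfile

# Simulate the appliance's tideway home in a temp dir for illustration
ssh_dir = os.path.join(tempfile.mkdtemp(), ".ssh")
os.makedirs(ssh_dir)
key_path = os.path.join(ssh_dir, "id_rsa")
with open(key_path, "w") as f:
    f.write("-----PLACEHOLDER PRIVATE KEY-----\n")

# Restrict permissions to 600 (rw-------) as required for SSH private keys
os.chmod(key_path, 0o600)
print(oct(stat.S_IMODE(os.stat(key_path).st_mode)))
```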
UNIX commands
Standard user with non-root privileges: can only run commands that any standard user could run on the target host. sudo is used for privilege escalation. When setting up the sudo rules on the target host, BMC Software specifies the command and its arguments so that only that command with the designated arguments can be run. This prevents the risk of spawning arbitrary commands.
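A sudoers rule of this kind might look like the following fragment. This is an illustrative assumption, not a rule shipped with the product: the account name "tideway" and the command shown are hypothetical. Spelling out the full path and the exact argument means sudo refuses any other invocation.

```
# Illustrative /etc/sudoers fragment (hypothetical account and command).
# Only this exact command with this exact argument may be run via sudo,
# so the discovery account cannot escalate to arbitrary commands.
tideway ALL = NOPASSWD: /usr/sbin/dmidecode -t system
```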
Windows discovery
Windows credentials
Uses the Active Directory (AD) Windows proxy. The AD Windows proxy does not use any credentials entered using the BMC Atrium Discovery user interface. Each functional area has its own user account and dedicated Windows proxy. The AD Windows proxy is deployed on a customer-supplied standard Windows host managed by the local AD operator in each functional area. Multiple AD Windows proxies can be connected to one BMC Atrium Discovery appliance. By using this approach, BMC Software reduces the exposure in each functional area to the same access level that an AD operator in that functional area would have. The BMC Atrium Discovery appliance or operator never knows the Windows AD password.
- Standard patching and service packs
- Two distinct accounts: one runs the Windows proxy discovery service, one logs in to the Windows server running the service
- The user managing the AD Windows proxy never has access to the account that performs discovery
- The Windows proxy service account cannot be used to log in to Windows servers interactively
Appliance specification
Physical (provided by BMC Software, with BMC Atrium Discovery bundled with the Red Hat Linux OS). The appliance specification is sufficient for daily full discovery of at least 5000 OSIs, while keeping a discovery history of 100 days (a typical configuration). The physical appliance specification covers the physical specification, power specifications, and environmental specifications.
lsof
lsof(1) is a UNIX-specific diagnostic tool. The name lsof stands for "LiSt Open Files", and the tool was developed by Victor A. Abell, retired Associate Director of the Purdue University Computing Centre. lsof(1) is a command used on many UNIX systems to report a list of all open files and the processes that opened them. It runs on, and supports, several UNIX variants. Open files in the system include disk files, pipes, network sockets, and devices opened by all processes. One use for this command is when a disk cannot be unmounted because (unspecified) files are in use. The listing of open files can be consulted (suitably filtered if necessary) to identify the process that is using the files. If the lsof(1) command is not available, BMC Atrium Discovery cannot extract the communications opened (system-wide) by each process. More information on lsof is available at http://freshmeat.net/projects/lsof.
Tcpvcon EULA: Recent versions of this tool require a GUI-based license agreement to be confirmed the first time the tool is run. This causes problems if ADDM tries to run it on a server where the agreement has not yet been accepted. After confirmation, a registry key HKEY_USERS\<UID>\Software\Sysinternals\TCPView\EulaAccepted=1 is created. OpenPorts: this utility was supplied by DiamondCS but is no longer available. If Windows NT or 2000 hosts are to be discovered as the platforms for business applications, one of these tools must be deployed before scanning the target host.
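One way to avoid the first-run dialog is to pre-seed the EulaAccepted value on the target host before discovery runs. The following is a sketch only; the <UID> placeholder from the text must be replaced with the actual user SID, and you should verify the key path against your Sysinternals version.

```
REM Hypothetical one-off preparation step, run on the target Windows host.
REM Replace <UID> with the SID of the account that will run Tcpvcon.
reg add "HKEY_USERS\<UID>\Software\Sysinternals\TCPView" /v EulaAccepted /t REG_DWORD /d 1 /f
```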
Performance data
Performance figures are guidelines only and will differ between environments. Based on observations in real customer deployments, it is realistic to scan around 500 hosts per hour. Certain factors, configurable and otherwise, can affect performance; these are described in the Factors affecting performance section. When scanning with BMC Atrium Discovery, the rate at which IPs can be processed varies depending on how many of them are actually devices, as opposed to empty address space. For example, scanning a range of 100 IPs where only one is in use behaves differently from one where ninety are in use.
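As a back-of-the-envelope planning aid, the guideline rate above translates into scan hours as follows. The host count and rate here are illustrative assumptions, not measured values.

```shell
# Estimate scan duration from the ~500 hosts/hour guideline, rounded up
# to whole hours. Both numbers are assumptions for illustration only.
hosts=4000
rate=500
hours=$(( (hosts + rate - 1) / rate ))
echo "$hours"   # 4000 hosts at 500/hour -> 8 hours
```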
Consolidation
As described in Consolidation, a BMC Atrium Discovery instance can be configured as a Consolidation Appliance that receives data from multiple Scanning Appliances. A Consolidation Appliance can process scan data at the same rate as an equivalent Scanning Appliance. Therefore the time required to consolidate the data from all configured Scanning Appliances is approximately the sum of the time taken by all Scanning Appliances to perform their scans. For example, if you had five Scanning Appliances each taking one hour to scan, the corresponding Consolidation Appliance might take 5 hours to consolidate all this data. Take this into account when planning your deployment, as it is possible to end up in a situation where the Consolidation Appliance cannot keep up with the amount of data being sent to it by the Scanning Appliances.
Network usage
BMC Atrium Discovery, through its interaction with the IT infrastructure, generates some network traffic. Based on actual BMC Atrium Discovery deployments, a typical peak load of about 3 Megabits per second can be expected on the appliance. However, it is likely that differences between environments will result in peaks of network load larger than that. Some factors that could influence variations in peak load include: Custom patterns: Most of the network load is generated by BMC Atrium Discovery scanning the target environment. In addition to the core discovery required to build Host nodes, discovery requests resulting from patterns will generate some network traffic. BMC builds the patterns that are shipped in the Technology Knowledge Updates (TKU) to limit the amount of data that is requested. Similar care should be taken when developing custom patterns for your environment. For example, if a pattern were to retrieve the contents of a very large file that is common in the target environment, this could result in a larger consumption of network bandwidth. Consolidation: Consolidation results in the transfer of data between appliances. Consequently, some additional network load will be added. Moving appliance backups: It is possible to move appliance backups to or from BMC Atrium Discovery appliances; however, higher network traffic will occur during the transfer of these potentially large files.
Appliance specification
The appliance specification describes the platforms used for the BMC Atrium Discovery appliance. It provides a brief introduction to the features of the appliance, and the basic tasks that need to be performed to start the appliance for the first time on a customer site. It also provides specifications for each of the appliance models. The Appliance Specification contains the following sections: Virtual Appliance Virtual Appliance delivery Configuring the Virtual Appliance Appliance hardware platforms Disk IO Performance Guidelines This section is intended as a brief overview and is not comprehensive. It contains information taken from the documentation supplied with the appliance hardware. You are strongly advised to read the documentation to fully familiarize yourself with the appliance hardware and its operation.
Virtual Appliance
Pursuant to the terms of VMware's VMware Ready program, BMC Atrium Discovery is a qualifying partner product in that it has been approved by VMware as meeting all of the requirements of the VMware Ready program. BMC Atrium Discovery is certified in the Application Software category of the VMware Ready program and is fully supported by BMC. The product has been fully tested to ensure reliable operation under fully-loaded conditions. This certification can be validated via this article, in the Installing the virtual appliance guide, and on VMware's Solution Exchange. VMware and VMware Ready are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions.
Packaging
The Virtual Appliances are packaged as a compressed zip file that contains a single VMware virtual machine and a PDF of the Getting Started guide. The VAs are built using VMware ESX version 4.1 and are compatible with current versions of VMware ESX, ESXi, Workstation, and Player. However, to deploy the Virtual Appliance into older versions, or into VMware Server, an additional conversion process is required.
Support
For information on how the Virtual Appliance is supported, see supported virtualization platforms.
Community Edition
Users of the Community Edition can get support from the BMC Community. This is best achieved by reading and posting to the public forums. In addition to those products listed above, the BMC Atrium Discovery Virtual Appliance will also run in the following tools: VMware Player 4: Available here: https://my.vmware.com/web/vmware/free#desktop_end_user_computing/vmware_player/4_0 VMware Workstation 8: Available here: https://my.vmware.com/web/vmware/info/slug/desktop_end_user_computing/vmware_workstation/8_0 A link to the relevant VMware support site is provided below: VMware support
Specifications
The following sections give the default specifications of the Virtual Appliances supplied by BMC.
Production Appliance
- CPU: 1
- RAM: 2048 MB
- Hard Disk: 1 x SCSI 55 GB (set to grow as necessary in 2 GB file increments)
- CD/DVD Drive: 1 (auto detect)
- Network Interface Cards: 1 (eth0), bridged, configured to use DHCP to obtain an IP address
The Network Interface Card (eth0) is configured to use DHCP. However, this can be changed to use a static IP. During the install process, the following network configuration is required:
- DHCP or static IP for the eth0 interface
- If static IP, provide the following: appliance IP address, gateway, subnet mask, DNS for address lookup
Defaults
For initial testing at a small scale, the default configuration is sufficient. For production use, or use at scale, you must increase the RAM, CPU, and disk configuration in accordance with the sizing guidelines. One size of Virtual Appliance is supplied, because a single size simplifies delivery and does not require you to download differently sized VMs for different roles, such as scanning and consolidation appliances. 2 GB RAM is insufficient to activate a TKU in an acceptable time. To use a TKU in testing, you must increase the amount of RAM in the system to 4 GB or more.
Dedicated VMware Resources
It is strongly recommended that the CPU and RAM resources allocated to the ADDM appliance are reserved and not shared with other VMware guest OSes. Otherwise, performance may be inconsistent and might not meet expectations. For more details contact your VMware administrator.
Proof of Concept
The Proof of Concept class has a minimal storage allowance, as it is only intended for a limited period of scanning, such as a week-long trial. For longer periods, or for a continuously used development or UAT system, the Baseline class is the minimum recommended.
Memory and swap usage Memory and swap usage depends on the nature of the discovery being performed, with Mainframe and VMware (vCenter/vSphere) devices requiring more than basic UNIX and Windows devices.
Where the following tables refer to CPUs, full use of a logical CPU (core) is assumed. For example, if eight CPUs are required, then you may provide them in the following ways: Eight virtual CPUs in your virtualization platform, such as VMware Infrastructure. Four dual core physical CPUs. Two quad core physical CPUs.
The disk configuration utility uses the following calculation to determine the best swap size:
- Where the amount of memory is less than 16 GB, swap size is set at double the memory size.
- Where the amount of memory is between 16 and 32 GB, swap is set at 32 GB.
- Where the amount of memory exceeds 32 GB, swap is set to equal the memory.
The disk requirement with local backup is lower than in previous versions because the appliance backup feature does not use as much disk space as an appliance snapshot when creating the backup.
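The swap-size rule above can be sketched as a small function (sizes in GB; the thresholds are taken directly from the text, while the function name and sample inputs are illustrative):

```shell
# Swap size rule: <16 GB memory -> double the memory; 16-32 GB -> 32 GB;
# >32 GB -> equal to memory. All sizes in GB.
swap_size() {
  mem=$1
  if [ "$mem" -lt 16 ]; then
    echo $(( mem * 2 ))
  elif [ "$mem" -le 32 ]; then
    echo 32
  else
    echo "$mem"
  fi
}

swap_size 8    # prints 16
swap_size 24   # prints 32
swap_size 64   # prints 64
```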
Memory requirements for POC class 2GB RAM is sufficient for normal operation, but is insufficient to activate a new TKU. To activate a TKU requires 4GB of RAM. You can increase the memory for activation and then reduce it for normal operation if required.
RAM: 24 GB
Only servers can be used in a production environment. Workstations and laptops are not up to production specification.
Legal Information
Copyright 2012 BMC Software, Inc. BMC, BMC Software, and the BMC Software logo are the exclusive properties of BMC Software, Inc., are registered with the U.S. Patent and Trademark Office, and may be registered or pending registration in other countries. All other BMC trademarks, service marks, and logos may be registered or pending registration in the U.S. or in other countries. All other trademarks or registered trademarks are the property of their respective owners. The information included in this documentation is the proprietary and confidential information of BMC Software, Inc., its affiliates, or licensors. Your use of this information is subject to the terms and conditions of the applicable End User License agreement for the product and to the proprietary and restricted rights notices included in the product documentation. BMC Software, Inc. 5th Floor 157-197 Buckingham Palace Road London SW1W 9SP United Kingdom Tel: +44 (0) 800 279 8925 Email: customer_support@bmc.com Web: http://www.bmc.com/support
Application mapping
Release Date October 2012
Table of Contents
1 Table of Contents 2 Application mapping 2.1 Introduction to collaborative application mapping 2.2 Gathering seed data 2.3 Searching and investigating the datastore 2.4 Creating the prototype application map 2.4.1 Dragging and dropping items 2.5 Sharing the prototype application map 2.6 Mapping the application 2.6.1 Creating functional components 2.6.2 Using and manipulating named values 2.6.3 Generating patterns for the application model 2.6.4 Exporting and importing application mapping definitions 3 Legal Information
Application mapping
The BMC Atrium Discovery Application Mapping guide is intended for anyone who wants to use BMC Atrium Discovery to map applications in their estate. This guide describes the application mapping methodology, how to gather and organize data and collaborate on a mapping prototype, and how BMC Atrium Discovery generates and maintains the resultant application map.
Related information Introduction to collaborative application mapping Gathering seed data Searching and investigating the datastore Creating the prototype application map Sharing the prototype application map Mapping the application
Benefit Understand the workflow of Collaborative Application Mapping and the roles that support personnel play in the process.
Find out some hosts and technologies involved in the application. Use the seed data as a starting point to search the datastore for the software and hardware components that support the application. Build a static prototype application map.
With very little input from the application owner, be able to begin searching and investigating the Discovery datastore for clues about the application. Find your application without additional help from the application owner. Arrange the components found earlier into a manual application map to share with the application owner. Obtain feedback on the application prototype to be able to build it out further. Establish rules so that BMC Atrium Discovery can build and maintain the application map automatically.
Finalize and share the prototype application map with the application owner. Build the dynamic map of the application.
The CAM approach is designed to capture the rules that define where an application is running, not to simply define that information statically. This means that as you deploy the applications more widely in your estate, or you migrate them around your infrastructure, your service maps will stay current.
Video demonstrations
Each stage in the CAM workflow contains a video demonstration to illustrate the process. The following list gives each video, the stages it covers, and the corresponding goals.

Video 1 — Overview: Introduce the goal and characteristics of CAM.

Video 2 — Seed Data; Search and Investigate; Prototype:
1. Obtain enough basic information from the application owner to begin searching and investigating the Discovery datastore.
2. Use the seed data as a starting point to search the datastore for the software and hardware components that support the application.
3. Using the components found by searching and investigating, build a prototype application map. This is a manually built, static map that is used to assist in understanding the application. It is not the end goal of the process.

Video 3 — Share: Share the prototype application map with the application owner by generating a preview report in PDF. This stage might involve revisiting the Prototype stage to redefine your working map based on feedback from the report.

Video 4 — Map Application (Part 1): Configure rules by creating functional components that enable BMC Atrium Discovery to build and maintain the application map dynamically.

Video 5 — Map Application (Part 2): Divide the application into instances to enable BMC Atrium Discovery to identify the environment to build and maintain the application map dynamically.

Video 6: Generate patterns to create the model in BMC Atrium Discovery.
Business roles
The following people are involved in the CAM process.
Application owner
The application owner is usually part of the application support team, and he may not have any knowledge of BMC Atrium Discovery. He handles trouble tickets and maintains the running application. Every application has an application owner; however, he takes direction from the business owner of the application. The application owner has no stake in the mapping initiative, so getting full cooperation can be difficult. He is too busy maintaining the application and can only spare small increments of time to collaborate on the maps. Consequently, application owners are the greatest single cause of failure in the application mapping process. The goal is to create the application map without a large initial investment by the application owner. An effective application mapping process minimizes the reliance on the application owner, and any interaction must be as non-intrusive as possible. In the business examples used in this guide, the application owner is named George.
Application mapper
The application mapper is part of the team responsible for BMC Atrium Discovery rollout and maintenance. He knows BMC Atrium Discovery well, particularly how to report, search, analyze, and use the application mapping user tools. The application mapper is responsible for executing and driving the mapping process, and to do this he must be familiar with the way in which business applications are put together, such as the roles of middleware, databases, web infrastructure, and message brokers in application architectures. The application mapper's collaboration with the application owner should be as limited as possible, requiring only the basic information at the outset of the process and some feedback during the Prototype and Share stages of the process. In the business examples used in this guide, the application mapper is named Mike.
Example
George, the application owner, administers the Friends application, a web-based corporate social networking application. Mike, the application mapper, is the "discovery guy". George is always busy responding to requests from the business owners, reacting to incidents, performing software updates, rolling out new versions of Friends, and so on. Friends is not the only application that George maintains. Mike knows that George is the guy to speak to, but George does not need a map of the Friends application, because he has one in his memory. It is difficult for Mike to get George to commit much of his time to the application mapping initiative. Mike must keep George on his side, because he is going to need George's cooperation in the future.
Demonstration
Video 1 that follows provides an overview of collaborative application mapping, and highlights its approach and benefits.
The first stage in mapping the application is to gather seed data: a small sample of host names or component names that are involved in the application. The goal of gathering seed data is to obtain just a few pieces of information from the application owner (typically communicated through e-mail or instant message) that serve as clues about what to start investigating. The application owner should not need to invest any significant effort upfront to share this knowledge with the application mapper.
Note: The information should be just detailed enough to understand what infrastructure supports what application. For example, the application owner might specify that the application has two tiers, runs on several hosts, and uses a database. This is enough information for the mapper to get going. Later, in the mapping stage, this information will be coded in patterns and BMC Atrium Discovery will be able to keep the service models up to date.
Example
Mike determines that George, the application owner maintains an application named "Friends", a web-based corporate social networking application, which is the next on his list of business applications to map. He sends George the following e-mail: George, I see from my Applications and Owners report that you're the Application Support Lead for the "Friends" application. This is the application that we're mapping next for our BSM rollout initiative, which means that I'm going to be mapping this application in our discovery tool. I don't need much from you - I know you're busy. Can you drop me a quick email with whatever information you think will be relevant? Just whatever you know right now. For example: 1. Do you know any hosts that it runs on? 2. What sort of app is it? A native executable? A J2EE web application? A multi-tier application distributed across several bits of middleware? Do you know any of the module or executable names? 3. Does it have a database? Do you know anything about that database off hand (Oracle/SQL Server, Name)? I'll do some investigating in our discovery tool based on your reply and check back with you once I have something you can look at. Thanks for your help, Mike George replies: Mike, Thanks for the mail. Just offhand, Friends is a tomcat application. The web module is named "friends.war", and it's connected to a MySQL database. The test/UAT environment is running on lon416; I don't know right now where the PROD one is (our production maintenance guys handle the deployment there). George This is an example of effective seed data. It includes a host name, a module name, and a clue about what database to look for. This is more than enough to get started.
Demonstration
Video 2 demonstrates how to search and investigate seed data and start building the prototype map.
The Search and Investigate stage of the mapping process is when you use the seed data obtained from the application owner as a starting point to figure out the structure of the application. You perform searches in the datastore based on search terms from the seed data, and use the search results as clues to do more searches until you eventually have the whole structure of the application. As you go along, you add what you find to the prototype application map. See the following example for a walk-through.
Example
George has responded to Mike's e-mail. He tells Mike the following about the application: The name of the application is Friends. There is a Web component and a database, but he can't provide any further details.
Beginning with an index search, Mike starts to search for all the relevant infrastructure items, and continues his investigation until he can see how the application holds together, or until he needs to ask George more questions.
Demonstration
Video 2 demonstrates how to search and investigate seed data and start building the prototype map.
While the application mapper searches in the datastore and investigates the results, he strives to build a picture of the application out of the IT components he finds. This picture is called the prototype application map, and it is the goal of the Prototype stage in the Collaborative Application Mapping (CAM) process. A successful prototype is crucial to CAM, and fulfills two purposes: 1. It provides a logical framework for the application mapper to capture what he finds about the application, and is a vital part of his toolset. 2. It becomes something that can be shared with the application owner to solicit feedback, thereby creating a very effective communication tool. See Sharing the prototype application map for more details.
you maintain your understanding of the application as the prototype potentially becomes increasingly complicated. The questions are useful for soliciting feedback from the application owner when the time comes.
5. Click Create. You can add an empty subgroup from the home page for the group.
Example
In the following example, Mike continues where he left off in the previous stage, having gathered enough information from searching the Discovery datastore for the seed data given to him by George. After locating hosts, SIs, and DBs, Mike gathers all this information into subgroups to group application components into more refined blocks that represent the tiers in the application (one for the web tier, and one for the DB tier). 1. Create a group for the application Friends, the application name that George provided as seed data.
2. Perform an index search of the BMC Atrium Discovery datastore for Friends.
The results of the search indicate that two software instances (SIs) and two software components were located. Mike clicks through to the SI details, and notices two Tomcat application servers, likely representing the Web component of the Friends application that George mentioned previously.
Drilling into each of the servers shows:
- Software components, which are modules that run inside middleware (for example, J2EE EARs and WARs, .Net applications, IIS virtual directories, and so forth).
- Client-to-server communications to a MySQL database.
- A dependency on that database (SQL Database Schema mellon for one server, and one appended with _dev for the other).
3. Choose Manual Groups=>Friends from the Actions menu, type web in the Subgroup field, and click Create.
4. Perform an index search of the BMC Atrium Discovery datastore for Mellon (the name associated with the database). 5. Click on the Database Detail link. 6. From the Database Detail list, select all instances of the database and create a manual subgroup in the Friends group named db.
7. View the manual group to assess the details of the current prototype.
8. If necessary, add notes at the group, subgroup, or top level, or drag and drop items to rearrange them in the map to provide better
Demonstration
Video 2 that follows illustrates the first three stages in the CAM process and shows how, by starting with seed data, the application mapper can begin searching the discovery datastore and building the prototype map.
Note: You must click the four-pointed icon to grab the item to be dragged; the icon moves with the cursor during the drag. Clicking the selection itself does not work. 2. To delete an item from a group, click the four-pointed selector icon corresponding to the node you want to delete and drag it to the trash can icon.
If you have removed all items from a subgroup, a message displays in the group indicating that the subgroup is empty.
If the subgroup is empty, the trash can icon no longer displays in the subgroup.
The Share stage involves using a dynamic PDF report to make collaboration more effective and informative. The report, which contains a preview of the mapping prototype, enables you to share the data you have collected in groups and subgroups and to solicit feedback to determine any missing pieces. The application mapper provides the report to the application owner so that the owner can review the contents of the discovered applications and determine whether the prototype map should be refined further.
Recommendation: Although you can create a PDF preview of the application map at any time, typically you perform the sharing exercise at least twice: once after the initial prototype, and once after the application mapper has added rules to the prototype and has mapped the application. This enables you to gather feedback before you evaluate your initial data gathering and finalize the map.
2. Click Save. The notes are displayed in the format that you specified. The following screen illustrates an example.
b. Choose one of the following reporting levels from the Actions menu:
- Set PDF Detail High: details the names of the nodes in each group and a list of attributes for each node, and displays functional components with all relationships to other components.
- Set PDF Detail Medium: details the names of the nodes in each group and a list of important attributes for each node. This is the default reporting level.
- Set PDF Detail Low: details the names of the nodes in each group, but no further information on the nodes. You might choose this level, for example, if you have complex diagrams in your report where the links or relationships are not readable. This level displays only the functional components and not their relationships to other components.
An example of all three levels combined in one report to represent a three-tier application is shown here.
Note Reports do not show functional component section headings for functional components that have no messages to display.
3. Choose Generate PDF Report from the Actions menu. A confirmation displays indicating what types of content will be included in the report.
4. From the status link at the top of the Group page, click the Reports Tab link. 5. From the Reports tab, click PDF Reports. 6. Select the Group Report in the list and click Download.
Example
After Mike has organized groups and created a prototype, he wants to collaborate on his findings with George to ensure that the right information is being accounted for. As an important first step in this process, he uses both the drag-and-drop and notes features to tailor the PDF preview report in a way that makes the collaboration more effective. 1. In the Notes field, type a summary of the prototype in wiki markup, along with the questions you want to ask the application owner before finishing the prototype.
Recommendation Choose a low level of detail if there are complex diagrams in the report where the links or relationships are not readable. This displays only the functional components and not their relationships to other components.
4. Review the report. The following example shows the first page of the report with the Bookmarks pane open and the notes that were applied previously displayed at the top.
5. Attach the PDF to an e-mail and send it to the application owner. Based on the feedback from George, Mike can go back and refine his prototype. He can generate a PDF preview as many times as he wants until he and George are satisfied with the prototype.
Demonstration
Video 3 that follows demonstrates how the application mapper begins his collaboration with the application owner by creating and sharing the prototype map.
After you have generated a prototype and have solicited feedback through a PDF preview, you are ready to move to the next stage and map your application. In your prototype, the specifications exist only in your internal configuration. You have to ensure that they are encoded into patterns so that, during discovery, BMC Atrium Discovery can build the BAIs and push them into the CMDB. The Map Application stage transforms your static map into a dynamic mapping structure that enables BMC Atrium Discovery to recognize and maintain the application during and after your scans. The stage is divided into two steps:
1. Build rules by adding functional components.
2. Identify the environment by dividing it into instances through named values.
Demonstration
See the following videos to learn how to map the application:
Video 4 demonstrates how to configure rules by creating functional components that enable BMC Atrium Discovery to build and maintain the application map dynamically.
Video 5 illustrates how to divide the application into instances, enabling BMC Atrium Discovery to identify the environment and to build and maintain the application map dynamically.
Video 6 demonstrates how to generate patterns to create the model in BMC Atrium Discovery.
BMC Atrium Discovery can infer from the Functional Component Definition the details necessary to create the model. In this manner, BMC Atrium Discovery enables you to model anything from simple to sophisticated applications without writing TPL. There is no need to download and then upload a TPL file: BMC Atrium Discovery automatically activates the pattern and submits it to pattern management.
Non-ASCII Unicode characters in CAM
In Collaborative Application Mapping (CAM), if you create components, such as group names or functional components, that contain non-ASCII Unicode characters, the Business Application Instance (BAI) that results from running the pattern displays with unreadable characters.
The Functional Component Definition opens by default on the Settings tab.
3. Choose one of the following options from the Start with menu to define how to find the contents of the functional component:
Software Instance
Database Detail
Detail
Software Component
Database Instance
4. Choose one of the following options from the Type menu to define the functional component:
Required: the functional component must be present for the BAI to be created. If you delete this functional component, the BAI is deleted too.
Optional: the functional component may exist but is not required for the BAI to exist. The functional component is created only if the appropriate information is found.
Always Present: similar to Optional, though the functional component is always created even if the associated content is not found. For example, you know there must be a database but have failed to discover it, possibly because you do not have rights or cannot scan the endpoint. This type of functional component exists so that it is possible to build a complete functional model. It is not removed until the BAI is removed.
Working Step Only: enables you to navigate through the model without creating a functional component. You can think of this as a placeholder for named values to be used for modeling other functional components.
5. In the Query Builder section, build and refine the queries to better define how BMC Atrium Discovery can find the functional components.
6. View the results in the Results section at the bottom of the page and go back and refine your queries as necessary. You can also define traversals, helping you browse around your data. For example, you could traverse from a set of hosts to the software running on all of those hosts.
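The host-to-software traversal described above can be sketched as a simple lookup over an adjacency mapping. This is an illustrative sketch only (the host and software names are made up), not the product's query API or datastore implementation.

```python
# Hypothetical data: which software instances run on which hosts.
hosts = ["web01", "web02"]
running_on = {
    "web01": ["Apache Tomcat", "OpenSSH"],
    "web02": ["Apache Tomcat"],
}

# Traverse from the set of hosts to the software running on all of those hosts,
# collecting each software instance once.
software = sorted({si for h in hosts for si in running_on.get(h, [])})
print(software)  # ['Apache Tomcat', 'OpenSSH']
```

The same shape applies to any traversal: start from a set of nodes, follow a relationship, and collect the set of nodes at the other end.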
Recommendation Do not automatically choose the first entry in the query list. Sometimes you need to scroll down the list to get the best choice.
Note It is important to refresh the results to ensure that BMC Atrium Discovery will pick up any changes in the datastore resulting from changing the contents of groups.
SI to SI traversals
The following relationships are among those available when specifying a traversal from Software Instance to Software Instance:
Software Instances observed to be communicating with this Software Instance
Server Software Instances that this Software Instance is communicating with (Client to Server Comms)
Client Software Instances that are communicating with this Software Instance (Server to Client Comms)
Peer Software Instances that are communicating with this Software Instance (Peer to Peer Comms)
The last three are explicit relationships, meaning they are built by BMC Atrium Discovery for later use. Examples of explicit relationships are those created by patterns that parse application configuration files to find communicating hosts. The first relationship is a pseudo-traversal based on the communicatingSIs function. While the pseudo-traversal may be useful where relationships corresponding to the explicit relationships do not exist, it should be used with caution because of its potential impact on performance. These relationships can be created from the output of commands such as netstat. Note that the first relationship is available only in the Traverse to dialog, and not in the Filter dialog. You cannot filter by the first relationship because it is not a key expression.
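The idea behind communication relationships derived from netstat output can be sketched as follows. The hosts, ports, and software names here are hypothetical, and this is not how the communicatingSIs function is actually implemented; it only illustrates how observed connections can be turned into software-to-software pairs.

```python
# netstat-style connection records: (source host, source port, dest host, dest port).
connections = [
    ("web01", 45210, "db01", 1521),  # web server talking to a database listener
    ("web02", 45388, "db01", 1521),
]

# Which software instance listens on which (host, port), and which
# software instance the client processes belong to (hypothetical names).
listening = {("db01", 1521): "Oracle Database"}
client_si = {"web01": "Friends Web", "web02": "Friends Web"}

# Derive "observed to be communicating" pairs from the raw connections.
observed = {
    (client_si[src], listening[(dst, dport)])
    for src, _sport, dst, dport in connections
    if (dst, dport) in listening and src in client_si
}
print(sorted(observed))  # [('Friends Web', 'Oracle Database')]
```

Note how both raw connections collapse into a single observed pair, which is why such derived data stays compact even on busy hosts.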
Undoing actions
Creating Functional Component Definitions involves making anything from simple, discrete type changes to much more complex traversals and conditions, and sometimes during the process you might lose track of what you have done. Other times, you might want to undo a step to change your orientation and start again. The Undo feature enables you to cancel one or a series of recent actions, and it displays the result in a banner at the top of the Functional Component Definition page. To undo an action, click the Undo last change link at the right of the Functional Component Definition page.
Note You must perform an action on the page before the Undo last change link will display.
In the following example screen, the Type has been changed to Required and the link has displayed on the page.
Hovering over the Undo link displays a tooltip that describes the action that will be undone. After you click the link, an information banner displays at the top of the page indicating the change.
The Undo action applies to a single state change, or the link can be clicked multiple times to cycle back through multiple changes. The Undo sequence persists until you log off the system; therefore, you can navigate away from the page as necessary.
The Undo action is very granular: if you made a single-character change in a query, for example, a single Undo action reverts only that character (which can make it appear that nothing has actually changed). The action works for all functions on the Functional Component Definition page, including traversals and deletions.
Example
Functional components help BMC Atrium Discovery find your applications by breaking them down into more clearly defined blocks of information. From there, you define those components as rules that BMC Atrium Discovery can use to find the components and the application that uses them. You also use named values to differentiate and identify each component. Functional components can form hierarchical structures (for example, database, application server, and client tiers), but they can also form parallel blocks of data flows; the structure does not have to be tiered. They can contain software instances, business applications, and software components. For example, in the two-tier Friends application structure, the Web tier is a functional component, because it can help express redundancy to measure the impact on data. The database tier is also a functional component.
Mike, the application mapper, has completed the first round of information gathering and has collaborated with George, the application owner, on the prototype map using a PDF preview report. Mike's next step is to create functional components.
1. Create a functional component for each subgroup or tier (Web and DB) that you created in the prototype.
Next, you begin to define the rules that BMC Atrium Discovery will use to identify the applications. 2. Select the type of data to start with.
3. Select the type associated with the data. In this example, Required is selected for the Web tier because it represents a crucial Web application that must be present in the map.
4. Run queries with conditions to tailor the Results list to show only Software Components with the name Friends.
5. The two software components that include the name friends are displayed in the Results pane.
Recommendation Remember that you can click the Undo last change link to revert to an earlier action if you make a mistake at any point in the procedure.
6. Repeat steps 1 through 5 for the database component, while adhering to the following conditions for the database:
Remember that the database has a relationship, so start with Web (Software Component) and choose Required for the Type.
Traverse from Software Component to Software Instance through the Software Instance this Software Component is running on relationship.
Traverse from Software Instance to Database Detail through the Detail depended upon by this Software Instance relationship.
Demonstration
Video 4, which follows, demonstrates how the application mapper begins building rules in his prototype map by adding functional components, the first step in mapping the application.
Why can't I apply transformations? Transformations are manipulations of string attributes, so they cannot be applied to non-string attributes.
Why can't I see all named values here? Named values can be compared only with named values of the same type (for example, string with string, number with number, and so forth). Therefore, if you choose a named value of one type, not all other named values may be available for comparison.
Why can't I select named values here? Named values can be compared only with named values of the same type. If there are no other named values of that type, then no values can be selected. Also, named values from other trees do not display in the drop-down list, nor do named values defined on the Functional Component Definition that you are adding a condition for. You cannot create named values based on date or time attributes.
The following table describes the transformations that you can apply to named values.
append string [constant]: Append the specified string to the named value string.
left [index]: Use the first n characters from the named value string, where n is specified as the index.
lower: Convert any uppercase characters in the named value string to lowercase characters.
middle [index1, index2]: Use a section of the named value string. The character to start from is specified with index1, and the length of the section is specified with index2.
prepend string [constant]: Prepend, or add, the specified string to the start of the named value string.
regular expression [regex, default]: Use a regular expression to extract matching characters from the named value string. See the note below.
right [index]: Use the last n characters from the named value string, where n is specified as the index.
split on [index, string]: Extract a group of characters from the named value string using string as a delimiter. The index specifies which group to extract.
table [default, (key, value)...]: A table of key and value pairs. Where the named value string matches a key, it is converted into the corresponding value. The default value is used where the named value string does not match any of the keys.
trim: Remove leading or trailing whitespace characters from the named value string.
upper: Convert any lowercase characters in the named value string to uppercase characters.
The following examples illustrate the transformations described in the previous table (example input, transformation, and result):
append string [" instance"]: webdav becomes webdav instance
left [3]: webdav becomes web
lower: WebDav becomes webdav
middle [4, 9]: WebDav instance becomes Dav in
prepend string ["My "]: WebDav becomes My WebDav
regular expression [([A-Z]+)$, Unknown]: wlcmPROD, wlcmTEST, and wlcmUAT become PROD, TEST, and UAT (using a regular expression to extract uppercase characters from the end of the string)
right [3]: WebDav becomes Dav
split on [1, " on "]: webdav module on App Server on rhel5-tomcat55 becomes App Server
table [default, (key, value)...]: deb-tomcat55, rhel5-tomcat55, and w2k3-tomcat55 are each converted according to the key and value pairs in the table
trim: " webdav " becomes webdav
upper: webdav becomes WEBDAV
Using regular expressions in transformations
Using a complex regular expression (one that takes a long time to process) as a transformation can render the user interface unresponsive for all logged-in users. For information on regular expressions in BMC Atrium Discovery, see Writing efficient regular expressions.
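The behavior of the regular expression transformation (extract the matching group, or fall back to the default when nothing matches) can be sketched in Python. This mirrors the documented ([A-Z]+)$ / Unknown example but is an illustrative sketch, not the product's implementation.

```python
import re

def regex_transform(value, pattern, default):
    # Extract the first capture group from the named value string;
    # fall back to the default when the pattern does not match.
    m = re.search(pattern, value)
    return m.group(1) if m else default

# Uppercase suffix extracted; non-matching input falls back to the default.
print(regex_transform("wlcmPROD", r"([A-Z]+)$", "Unknown"))  # PROD
print(regex_transform("webdav", r"([A-Z]+)$", "Unknown"))    # Unknown
```

A simple anchored pattern like this is also cheap to evaluate, which matters given the note above about complex expressions slowing the user interface.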
3. Click Apply. The transformation displays in the field below the Add New Transformation menu, marked with an X.
4. In the Transformation field, type a value to see how the transformation applies to the corresponding name. In the following example, typing MYNAME in the field displays myname in the field corresponding to the selected transformation, indicating that the lowercase transformation works as expected.
If you type a value that exists in the datastore, the system fetches the values, matches the text, and displays the matches in the Editor. The following screen illustrates values for the process User Name being pulled from actual data in the datastore for each instance and the selected transformation being applied to each one (in this example, lowercasing and prepending "I am the").
5. Click the X near the transformations that you have validated and want to remove.
6. Repeat steps 1 and 2 to add transformations that better define the value, as necessary.
7. In the Make available as field, type the name of the identifier for an instance, select the Is identity check box, and click Apply.
After you have created values with names, you can also manipulate the values so that BMC Atrium Discovery can:
provide the identity in human-readable naming conventions that help you differentiate between instances
link all the components of an application instance
make the names more logical to identify the application
Example
In his questions to George in his PDF, Mike needs to know whether there are both a Production and a Development instance in the Friends application. Rather than wait for an answer and be blocked from further progress, he notes that there are two database instances (mellon and mellon_dev), which suggests that the latter is the Development instance and the former might be the Production instance.
Based on this assumption, Mike begins to define a named value for the database tier in the application to differentiate between the Production and Development instances. He notices that, in the Results section, some names assigned to the instances would require stripping to match the Friends application. So, because the database has instances, he decides to check that attribute first.
1. In the Named Value Editor, click show all in the Attributes section to expand the list of attributes.
2. Click the Instance attribute. The Sample values show the mellon names that he was looking for.
3. Transform the mellon instance to the Production instance, and mellon_dev to the Development instance.
4. Make the value available as the identifier friends_db_id.
Next, Mike wants to run a PDF preview and verify with George that the model is taking shape. The PDF displays one page per functional component and a tree view of the functional component (which can also be used as an index). You can control the level of reporting in the PDF to respond to requests for more or less information, evaluate a bigger picture of the environment, or drill down to the details of the functional components and their relationships to all other nodes.
5. From the Generate PDF Report dialog, choose the visualization detail that you want to display in the report by selecting one of the following options:
No visualization: Generates a report with no illustrative examples of the functional components and their relationships to other hosts.
Functional Components only: Generates a report with descriptive and visual information about functional components, but not about their relationships with other hosts.
Functional Components and related hosts (the default): Generates a report with descriptive and visual information about functional components, including relationships with other hosts. For an example visualization, see page 10 of the example report.
The Generate PDF Report dialog displays any guidance or warning messages associated with your application mapping exercise to this point.
6. If necessary, go back and make changes to the mapping in response to any messages.
7. Click OK.
8. Download the PDF and review it. The PDF now contains visualizations of both the database and Web tiers of the Friends application. In the following example, the functional components are indicated by the gray boxes for both the database and Web tiers, and the rules that Mike defined for each tier are labeled inside the boxes.
Checking his e-mail, Mike notices that George has responded to the questions that he noted in his earlier PDF report, confirming that the Friends application contains both a Production and a Development instance. This validates Mike's findings in the PDF report and gives him the confidence to send his latest report to George.
9. Attach the PDF to an e-mail and send it to George for his feedback.
George is impressed that Mike was able to generate such a rich report with all the details of the application. More importantly, George is happy that Mike was able to map the application accurately and efficiently with little help.
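Mike's mellon-to-Production mapping is an instance of the table transformation described earlier: exact matches are converted via key/value pairs, and anything else falls back to a default. A minimal sketch (the "Unknown" default is an illustrative assumption, not taken from Mike's configuration):

```python
def table_transform(value, default, mapping):
    # "table" transformation: convert exact matches via the key/value
    # pairs; use the default where the value matches no key.
    return mapping.get(value, default)

# Mike's instance-name mapping for the Friends database tier.
instances = {"mellon": "Production", "mellon_dev": "Development"}
print(table_transform("mellon", "Unknown", instances))      # Production
print(table_transform("mellon_dev", "Unknown", instances))  # Development
print(table_transform("mellon_qa", "Unknown", instances))   # Unknown
```

The default matters: a newly discovered instance name that Mike has not anticipated still gets a value rather than breaking the identity.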
Demonstration
Video 5, which follows, demonstrates how the application mapper begins dividing the application into instances, the second step in mapping the application.
After you have created named values, you are ready to generate TPL to encode all the details of your map into BMC Atrium Discovery. The pattern is automatically uploaded to BMC Atrium Discovery through the pattern management interface, after which it runs during discovery and builds the appropriate model whenever your application is discovered. BMC Atrium Discovery can discover and automatically build Business Application Instances (BAIs) based on the generated patterns that contain Functional Component Definitions, and it maintains the definitions in patterns so that they are ready to use in your environment.
2. From the Discovery tab, click Recent Runs.
3. Click Rescan Now to scan the architecture again.
4. Click the Applications tab.
5. In the Application Summary section, click Application Instances. The applications are displayed in the list.
6. Click one of the application instances in the list. The Application Instance page displays the complete BAI, detailing the functional components and software instances in the application.
Example
In the second step of the mapping stage, Mike was able to identify the name of the Friends application (representing the transformed values) to divide the results into different instances (PROD and DEV). Now, at discovery time, BMC Atrium Discovery can use the transformed identity to split discovered results into different instances of the Friends application being mapped. Where Mike left off, the results of the identity change are displayed in the results of the Web and DB functional components that Mike has mapped.
The final step is for Mike to encode the definitions into TPL.
1. Click Generate TPL. The Pattern Management page displays, showing the changes being applied to BMC Atrium Discovery. When the pattern has been generated, a banner displays at the top of the Pattern Management page indicating that the requested changes were successful. The Friends Application Model is listed on the Pattern Management: Browse Packages page.
Note Don't forget to scan any Mainframe instances. Also, make sure that you see the number of instances you expected, and remember that you can filter the list to get a more granular view.
When Mike performs a scan, he notices that both the Friends instances DEV and PROD have been built. Drilling down into the DEV instance, he can see the functional components, software instances, software components, and hosts that make up the application. He can also see that the new pattern (applicationmodel.Friends.build_Friends) has been generated.
Mike has successfully mapped the Friends application and created the model in BMC Atrium Discovery.
Demonstration
Video 6, which follows, demonstrates how the application mapper generates TPL to encode the application map into patterns and complete the model.
You can export a single group or a set of groups containing Functional Component Definitions. You would typically do this to back up the functional definitions and notes in the application so that they can be restored to the same appliance. The back-up can also serve as:
a working back-up during mapping
a snapshot of the state of your application mapping work for troubleshooting during the mapping stage
an application mapping back-up in case of required appliance maintenance (for example, a model reinitialization)
You can also export application mapping definitions and move them to another system. The same application mapping definitions can be deployed in different environments, such as production, development, and test. If you want to edit a definition that exists in more than one environment, you must do one of the following:
Edit the application mapping definition in each environment where it is present. This is the recommended procedure for editing definitions.
Edit the application mapping definition in one environment and export or import the edited definition to the other environments. This is not the recommended procedure because, before attempting to export or import the edited definition, you must destroy the existing definition on the destination environment. If CMDB synchronization is in continuous mode on the destination environment, destroying the existing definition causes the BMC_Application CI to be marked as deleted in the CMDB. However, after a successful export or import, the BMC_Application CI is inserted into the CMDB again.
Importing existing mapping definitions
Importing an existing mapping definition set does not overwrite the original; a new set is created. You do not need to delete the mapping definition set after exporting it.
Note CAM definitions exported from BMC Atrium Discovery version 9.0 cannot be imported into version 8.3. However, definitions exported from version 8.3 can be imported into version 9.0.
1. On the Administration tab, click Application Mapping Import in the Model section.
2. On the Application Mapping Import page, click Browse and select a file to import.
3. Click Apply to import the file.
A confirmation message and icon display at the top of the page, with a link that enables you to quickly import another file. Additionally, the Results area displays the results of the import, listing the groups that were imported and the associated functional components that were created. If the file you selected was previously imported, the application name is appended with the next available number to represent a copy of the same application saved under a different name.
4. To import another file, click the New Import link under the confirmation message.
4. Save the file to be exported and click OK.
5. To import the file, click Application Mapping Import in the Model section of the Administration tab.
Legal Information
Copyright 2012 BMC Software, Inc. BMC, BMC Software, and the BMC Software logo are the exclusive properties of BMC Software, Inc., are registered with the U.S. Patent and Trademark Office, and may be registered or pending registration in other countries. All other BMC trademarks, service marks, and logos may be registered or pending registration in the U.S. or in other countries. All other trademarks or registered trademarks are the property of their respective owners. The information included in this documentation is the proprietary and confidential information of BMC Software, Inc., its affiliates, or licensors. Your use of this information is subject to the terms and conditions of the applicable End User License agreement for the product and to the proprietary and restricted rights notices included in the product documentation.
BMC Software, Inc.
5th Floor, 157-197 Buckingham Palace Road
London SW1W 9SP, United Kingdom
Tel: +44 (0) 800 279 8925
Email: customer_support@bmc.com
Web: http://www.bmc.com/support
Table of Contents
1 Table of Contents
2 Node lifecycle
2.1 Model Summary
2.2 The default data model
2.3 How Nodes are Identified
2.4 Discovered Attributes by Platform
2.5 How nodes are removed
2.6 Relationships
2.7 Attachments
2.8 Partitions and History
2.9 Key Terminology
2.10 Inferred Nodes
2.10.1 Host Node
2.10.2 Host Container Node
2.10.3 Software Instance Node
2.10.4 Software Component Node
2.10.5 Business Application Instance Node
2.10.6 Functional Component Node
2.10.7 Mainframe Node
2.10.8 MFPart Node
2.10.9 Coupling Facility Node
2.10.10 Cluster Node
2.10.11 File Node
2.10.12 File System Node
2.10.13 Package node
2.10.14 Patch Node
2.10.15 Fiber Channel Nodes
2.10.16 Network Device Node
2.10.17 Printer Node
2.10.18 SNMP Managed Device Node
2.10.19 IPAddress node
2.10.20 Network Interface node
2.10.21 Subnet node
2.10.22 Detail Node
2.10.23 Database Detail Node
2.10.24 Storage Node
2.10.25 Storage Collection Node
2.10.26 Generic Element Node
2.10.27 Runtime Environment Node
2.11 DDD Nodes
2.11.1 Introductory Information
2.11.2 Discovery Run Node
2.11.3 Discovery Access Node
2.11.4 Device Info Node
2.11.5 Host Info Node
2.11.6 Directory Listing Node
2.11.7 Registry Listing Node
2.11.8 Service List Node
2.11.9 Discovered Directory Entry Node
2.11.10 Discovered Registry Entry Node
2.11.11 Discovered Service
2.11.12 Other DDD Nodes
2.11.13 Integration DDD Nodes
2.12 Pattern Management Nodes
2.12.1 Pattern Node
2.12.2 Pattern Module Node
2.12.3 Pattern Package Node
2.12.4 Pattern Configuration Nodes
2.12.5 Pattern Definitions Node
2.12.6 Pattern Define Nodes
2.12.7 Rule Module Node
2.13 Conjecture Nodes
2.13.1 AutomaticGroup Node
3 Legal Information
Node lifecycle
This Node Lifecycle Document (NLD) is intended for customers and partners of BMC Software, and for integrations developers, to help you understand the BMC Atrium Discovery node lifecycle. This document details the internal components of BMC Atrium Discovery, the data model (for example, how virtualization hosts are modeled), and how to extend the data model. It is a good reference guide for integrating BMC Atrium Discovery with other systems. The prerequisite for understanding the content in this document is basic familiarity with BMC Atrium Discovery, and the document should be used in conjunction with the BMC Atrium Discovery User guide, Configuration guide, and Using the Search and Reporting service.
The document covers the following topics:
Model Summary
The default data model
How Nodes are Identified
Discovered Attributes by Platform
How nodes are removed
Relationships
Attachments
Partitions and History
Key Terminology
Inferred Nodes
DDD Nodes
Pattern Management Nodes
Conjecture Nodes
For a summary description of the BMC Atrium Discovery data model, see Model Summary. For a diagram of the BMC Atrium Discovery data model and an overview of the node kinds, see The default data model. The Key and Name attributes are described in How Nodes are Identified. The different methods that can be used to remove inferred nodes are described in How nodes are removed. An overview of relationships and how they work is provided in Relationships. For information about attachments, see Attachments. For information about partitions and history, see Partitions and History. The document includes sections specific to the available node types: Inferred nodes, Directly Discovered nodes, Pattern Management nodes, Conjecture nodes, and Functional Component nodes. The nodes are described in terms of their attributes, relationships, and lifecycle. Nodes can also be created by a user; some of their attributes and relationships are populated only by the user through the User Interface. These specific instances are noted in the documentation.
Download a PDF Version of the BMC Atrium Discovery Node Lifecycle Document
PDF Download You can download a PDF version of the Node Lifecycle Document here. Be aware that the PDF document is simply a snapshot and will become outdated when updates occur on the wiki. The PDF file is generated from the wiki on each request, so it may take some time before the download dialog is displayed.
Model Summary
The BMC Atrium Discovery data model and the approach to data storage enable you to model complex IT environments in such a way that the model is a very close representation of the actual environment. This document describes the mapping between the model and the real-world environment in detail. The model is constructed from discovery information, possibly augmented with data imported from other sources, and is an approximation of the actual state of the environment. See The default data model.
The Datastore
The datastore uses a graph model to represent entities in the environment and the relationships between them. The basic elements of the storage model showing the nodes, roles and relationships are represented in the following example.
All data is stored as Nodes. Nodes have a kind, such as 'Host', and a set of key-value pair attributes representing the state of the node. Values can be of any type, including complex structures and sequences; however, in the majority of cases, the values are simple scalar types such as strings and numbers, or sequences of scalar types. Relationships represent connections between Nodes, for example, to indicate that a process is running on a particular host computer. Relationships are attached to Nodes via Roles. The Roles remove ambiguities in two main situations:
When both Nodes have the same kind. For example, to model an 'Employment' relationship between two 'Person' nodes, one Person would have the role 'Employer' and the other would have the role 'Employee'.
When the same kind of relationship can exist for different reasons. For example, a Person can have the role 'BusinessOwner' or 'SupportOwner' in an 'Ownership' relationship, while many different kinds of node have the 'OwnedItem' role at the other end of the relationship.
The datastore itself does not maintain any knowledge of what kinds of nodes, roles, and relationships are expected, or of the keys and value types expected as node and relationship state. These details are maintained by the Taxonomy subsystem. The Taxonomy defines the expected Node, Role, and Relationship kinds, and the attributes that they are expected to have. The definitions in the Taxonomy determine the model of the environment.
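The node/role/relationship structure described above can be sketched with two small data classes. This is a minimal illustration of the storage model's shape, assuming the 'Employment' example from the text; it is not the datastore's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                                   # e.g. 'Host', 'Person'
    attrs: dict = field(default_factory=dict)   # key-value state of the node

@dataclass
class Relationship:
    kind: str                                   # e.g. 'Employment'
    ends: dict = field(default_factory=dict)    # role name -> Node

alice = Node("Person", {"name": "Alice"})
acme = Node("Person", {"name": "Acme Corp"})

# Roles disambiguate the two ends of a same-kind relationship:
# both ends are 'Person' nodes, but the roles differ.
job = Relationship("Employment", {"Employer": acme, "Employee": alice})
print(job.ends["Employee"].attrs["name"])  # Alice
```

In the real product, which kinds, roles, and attributes are valid is defined by the Taxonomy rather than by the storage structures themselves.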
Types of Information
The model makes explicit distinctions between the different types of information that are stored. The core types of information are as follows:
Inferred: see Inferred Nodes.
Directly Discovered: see Directly Discovered Data Nodes.
Pattern Management: see Pattern Management Nodes.
BMC Atrium Discovery also stores provenance and imported information.
Inferred information is the type of information that is likely to be of most interest to most users and is inferred from other information using rules in the reasoning engine. This type of information is unlikely to change between scans and includes classifications of hosts, groupings of discovered information into running instances of products, and so forth. See Inferred Nodes.
In contrast, directly discovered information is obtained directly from a target host via particular discovery techniques. This includes items such as a list of running processes associated with a host, network connections, and so forth. See Directly Discovered Data Nodes (DDD Nodes).
Imported information is information about discovery targets acquired from a source other than the target itself, for example, a storage management system or an Active Directory domain manager.
Pattern Management information is any information that is included in a pattern and describes what could be discovered and inferred about an environment, as opposed to what has actually been discovered. The Technology Knowledge Network (TKN) provides some of this information, but it also includes customer-specific knowledge about the applications customers use and rules about how they are connected. See Pattern Management Nodes.
Provenance relationships are meta-information describing how the other information came to exist. Provenance is automatically generated as Reasoning builds and maintains the model. For more information, see Provenance.
The relationships displayed in the taxonomy are not exhaustive; that is, not all possible relationships are displayed in the taxonomy. For example, the following containment relationships are valid:
Group:Container:Containment:ContainedItem:SoftwareInstance
Group:Container:Containment:ContainedItem:Host
The diagrams below show the BMC Atrium Discovery Default Data Model, with its nodes, attributes, and the relationships between the different parts of BMC Atrium Discovery. It is split across three diagrams: the Main diagram contains all the core entities used in the modeling and discovery of Hosts; the Network and Printer diagram contains entities used in the modeling and discovery of NetworkDevices and Printers; the Mainframe diagram contains entities used in the modeling and discovery of Mainframes. It is important to note that the model itself is not in separate sections; the three diagrams exist only to convey the information more clearly.
Color-Code Key
Inferred View Nodes and connection links: blue.
Directly Discovered Data Nodes and connection links: green.
Knowledge (Pattern Management) View Nodes and connection links: pink.
Provenance Relationship connection links: red.
Auxiliary Nodes and connection links (used internally by BMC Atrium Discovery): brown.
Node Kinds
Inferred Nodes
For details on these nodes see Inferred Nodes. Host Node Host Container Node Software Instance Node Software Component Node Business Application Instance Node Functional Component Node Mainframe Node MFPart Node Coupling Facility Node Cluster Node File Node File System Node Package node Patch Node Fiber Channel Nodes Network Device Node Printer Node SNMP Managed Device Node IPAddress node Network Interface node Subnet node Detail Node Database Detail Node Storage Node Storage Collection Node Generic Element Node Runtime Environment Node
Other DDD Nodes
Integration DDD Nodes: Integration Point Node, Integration Result Node, SQL Result Row Node, Provider Access Node
Conjecture Nodes
For details on these nodes see Conjecture Nodes. AutomaticGroup Node
Some of my attributes are missing! Some attributes may not be discovered on a given operating system in your environment due to privilege requirements, hardware and software configurations, customizations, firewall configurations, security policies, and so forth. Additional attributes may be discovered by patterns, whether TKU provided, or your own custom patterns.
(Host attributes, continued): os_version, pdc, processor_speed, processor_type, psu_status, ram, serial, service_pack, threads_per_core, uuid, vendor, workgroup, zone_uuid, zonename
Node/Attribute: Patch — name
Node/Attribute: Package — arch, description, epoch, name, pkgname, revision, vendor, version
Node/Attribute: NetworkInterface — adapter_type, aggregated_by, aggregated_with, aggregates, aggregation_mode, bonded, default_gateway, description, dhcp_enabled, dhcp_server
(The per-operating-system support matrix from the original table, with operating system columns labeled A to R, did not survive extraction and is not reproduced here.)
Node/Attribute: NetworkInterface (continued) — dns_servers, duplex, group_name, ifindex, interface_type, is_console, mac_addr, name, negotiation, physical_adapters, physical_location, ppa, primary_wins_server, secondary_wins_server, service_name, setting_id, shared_adapters, speed, virtual_adapters
Node/Attribute: IPAddress — broadcast, ip_addr, netmask, prefix, prefix_length, temporary, zone
Node/Attribute: FibreChannelHBA — aix_manufacturer_code, aix_part_number, boot_bios, driver_name, driver_version, firmware, hba_id, model
Node/Attribute: FibreChannelHBA (continued) — option_rom_version, optrom_bios_version, optrom_efi_version, optrom_fcode_version, optrom_fw_version, serial
Node/Attribute: FibreChannelNode — firmware, wwnn
Node/Attribute: FibreChannelPort — fabric_name, port_speed, port_state, port_type, supported_classes, supported_speeds, wwpn
Authoritative
BMC Atrium Discovery has inferred the presence of a node from data such as an interface list, where there is clear evidence that the node exists. When BMC Atrium Discovery can no longer see any evidence of the node, the node no longer exists, so BMC Atrium Discovery removes it from the model automatically. Aging is not applied to this type of removal.
Non-authoritative (aging)
BMC Atrium Discovery has inferred the presence of a node from data such as a process list, where there is clear evidence that the node exists. However, when BMC Atrium Discovery can no longer see any evidence of the node, it does not necessarily follow that the node no longer exists; there might be other reasons why it appears to be gone. For example, a process that relates to a software instance (SI), such as an Adobe FrameMaker desktop application, might appear to be running during an initial scan, but not during a subsequent scan. This does not necessarily indicate that Adobe FrameMaker no longer exists; it is possible that the user had simply stopped the application at the time of the subsequent scan. Because it is the nature of IT infrastructure to have frequent minor changes to configurations, hosts, and software, discovered data becomes less relevant over time. Aging is applied to this type of removal to prevent frequent removal and recreation of SIs. When a node has been in the aging state for some time, it is then removed from the model. Only the following classes of nodes age:
- Root nodes (Host, NetworkDevice, MFPart, Printer, and SNMPManagedDevice)
- First order SoftwareInstances (ones created directly from DDD)
- RuntimeEnvironment
- Storage
The SoftwareInstance, RuntimeEnvironment, and Storage nodes mentioned above are only automatically aged if they are triggered on certain types of DDD. These are described on the individual node pages. Although there are important considerations before doing so, you can modify data aging limits in the Model Maintenance settings of the user interface.
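The aging behaviour described above can be sketched as a small state function. The node kinds listed mirror the classes that age; the thresholds, function name, and the "destroy after twice the limit" rule are illustrative assumptions, not product internals:

```python
from datetime import datetime, timedelta

# Illustrative sketch of non-authoritative (aging) removal. Only
# certain node kinds age; nodes of other kinds are removed as soon
# as their evidence disappears (authoritative removal).
AGING_KINDS = {"Host", "NetworkDevice", "MFPart", "Printer",
               "SNMPManagedDevice", "SoftwareInstance",
               "RuntimeEnvironment", "Storage"}

def lifecycle_state(kind, last_seen, now, aging_limit=timedelta(days=10)):
    """Return 'Current', 'Aging', or 'Destroyed' for a node."""
    if kind not in AGING_KINDS:
        # Authoritative removal: no evidence means the node is gone.
        return "Destroyed"
    if now - last_seen < aging_limit:
        return "Current"
    # Evidence missing longer than the limit: age the node, and
    # destroy it once it has been aging for some further time.
    return "Destroyed" if now - last_seen > 2 * aging_limit else "Aging"

now = datetime(2024, 1, 20)
print(lifecycle_state("SoftwareInstance", datetime(2024, 1, 15), now))  # Current
print(lifecycle_state("Host", datetime(2024, 1, 5), now))               # Aging
```

The point of the aging state is the buffer it provides: a Host that misses a few scans moves to Aging rather than being destroyed and recreated on every fluctuation.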
Secondary authoritative
This type of removal is authoritative; however, it is based on different criteria than the original evidence for the node's existence. An example is a batch process, where node creation (and therefore existence) is triggered by seeing the job running in the process list, but BMC Atrium Discovery may check the timestamp of a log file to determine whether it believes the job is running regularly. This type applies to any removal which involves actively seeking evidence about whether something still exists.
Containment
This type of removal applies when BMC Atrium Discovery has inferred nodes which have a containment relationship to another node. These are generally nodes which model physical items such as network interface cards. If the non-authoritative removal process (Aging) has caused BMC Atrium Discovery to remove a host, then the network interfaces that are contained in that host will also cease to exist. Therefore, Aging is not applied and the inferred node is removed.
Cascade
This type of removal applies where further inferred nodes are constructed from first order or other inferred nodes. Any aging or specific removal will already have taken place at the first order nodes, so there is no need to apply any further removal strategies. In this case the removal should cascade up.
Relationships
Node associations are represented by relationships. There are different types of relationships, which define hierarchy, hosting, containment, ownership, manufacturer, category, versions, location, dependency; in fact, any kind of relationship required to model an IT infrastructure. A relationship type is represented as a colon-separated string. A relationship comprises more than the two Node endpoints; it also contains the "route" between the two. This is made up in the following way:
Node:Role:RelationshipLink:Role:Node
For example the relationship from a Host Node to the Cluster Node that contains the host is shown as:
Host:ContainedHost:HostContainment:HostContainer:Cluster
The relationship from a Cluster Node to a Host Node that it contains is shown as:
Cluster:HostContainer:HostContainment:ContainedHost:Host
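The colon-separated form above can be unpacked mechanically. A minimal sketch (the function and type names here are ours, for illustration, not a product API):

```python
from collections import namedtuple

# A relationship "route" has exactly five colon-separated parts:
# Node:Role:RelationshipLink:Role:Node
Relationship = namedtuple(
    "Relationship", "node role link other_role other_node")

def parse_relationship(kind_string):
    """Split a Node:Role:RelationshipLink:Role:Node string."""
    parts = kind_string.split(":")
    if len(parts) != 5:
        raise ValueError("expected 5 colon-separated components")
    return Relationship(*parts)

rel = parse_relationship(
    "Host:ContainedHost:HostContainment:HostContainer:Cluster")
print(rel.link)        # HostContainment
print(rel.other_node)  # Cluster
```

Note how the two example strings in the text are the same relationship read from opposite ends: the roles swap sides while the RelationshipLink in the middle stays the same.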
A relationship is a bidirectional link between two nodes. In this section, the terms "start node" and "target node" are used to describe the way that the relationship is built. However, once the relationship has been built, the terms are redundant.
The general timeline for the creation of a relationship is:
1. Create the "start node".
2. Create the "target node".
3. Create the relationship between them.
The individual components of the relationship are described in the following sections.
Node
The Node is any kind of node that can have a relationship, such as a Host Node. All nodes can have relationships; relationships themselves can also participate in relationships.
Relationship Kinds
The relationship kind describes the nature of the relationship. For example, the DeviceSubnet relationship kind describes the link between a device and the subnet that it is on, but without any directionality implied.
This section refers to Relationship Kinds as the colon-separated lists which describe how two nodes are related, the roles that each node plays in the relationship, and the Relationship Link which describes the nature of the Relationship.
A complete list of Relationship Kinds with their descriptions is provided in the table below.
Hierarchy Relationships
Hierarchy: Relates nodes into a hierarchy.
Attachment Relationships
Attachment: Relates a node to a file attachment.
Category Relationships
ElementCategory: Relates an element to a category it is in.
People Relationships
Ownership: Relates an element to its owner.
Management: Relates an employee to their manager.
Locations
Location: Relates an element to the location it is in.
LocationContainment: Relates a location to the larger location it is in.
Hosts
HostedSoftware: Relates software to the host it is running on.
HostContainment: Relates hosts and software instances to contained virtual hosts.
EndpointIdentity: Relates Hosts that appear to have changed identity.
Applications
SoftwareContainment: Relates an item of software to an item contained in it.
SoftwareService: Relates an item of software to a service.
Communication: Relates software instances to communicating software instances.
Dependency: Relates elements to elements they depend upon.
Physical Model
DeviceSubnet: Relates an IP device to the subnet it is on.
DeviceInterface: Relates an IP device with one of its network interfaces.
NetworkLink: Relates a network interface on a host to a port interface on a switch.
HostedFile: Relates a file to its host.
RelatedFile: Relates a file to something that uses it.
FibreChannelNodeDevice: Relates a World Wide Node Number (WWNN) to an HBA.
FibreChannelNodePort: Relates a World Wide Port Number (WWPN) to a WWNN.
FileSystemMount: Relates the mounter and the FileSystem.
Directly Discovered Data
DiscoveryAccessResult: Relates a DiscoveryAccess to one of its results.
List: Relates a List of Members.
Sequential: Relates sequential nodes.
Inference: Relates an inferred node with its source node.
Status: Relationship to a status node.
AccessFailure: Access failure.
AccessOptimization: Access optimization.
Metadata: Information about a discovery request.
Pattern Relationships
PatternModuleContainment: Associates a pattern module to its contents.
PatternModule: Associates a pattern module to its pattern package.
PatternModuleDependency: Associates a pattern module to its dependents.
Maintainer: Pattern that is maintaining an inferred node.
Requirement: Associates a pattern that will need the products of another.
Override: Associates patterns that are being overridden or are overriding others.
Deprecated: Associated pattern has been deprecated or is deprecating others.
Error: Errors generated by this pattern at runtime.
PatternExecution: Associates a PatternExecution to its Pattern.
PatternTouch: Associates a PatternTrigger with things it changed.
PatternTrigger: Associates a PatternExecution to its trigger nodes.
Request: Associates a Pattern to the DDD nodes requested by it.
IntegrationImplementation: Relates the implementation of a resource.
ResourceUse: Relates the usage of a resource.
Generic Modeling Relationships
Containment: Relates an element to its container.
Detail: Relates an element to details about it.
SupportDetail: Relates Host and SoftwareInstance nodes to SupportDetail nodes.
DIP Relationships
Provides: Relates an IntegrationProvider to all its created IntegrationPoints.
IntegrationPointContainment: Containment of one node by another.
ConnectionUsage: Relates an IntegrationResult to the connection that was used to generate that result.
QueryUsage: Relates an IntegrationResult to the query that was used to generate that result.
Rule Relationships
RuleModuleDependency: Associates a rule module to its dependents.
EnablingRange: Associates a discovery access with an IP range for revisiting purposes.
Foundation
User Relationships
Favorite
InvokingUser
Hardware Reference Data Relationships
ReferenceData
Role
The role describes the part that its node plays in the relationship. For example, where a Host Node represents a host which is running a software process: a Software Instance has been created which represents the process running on the host; the Host Node is acting in the role of Host (the host for some running software); the Software Instance is acting in the role of RunningSoftware (software which is running on a host). Roles are required because a node can play one of many parts in a relationship, and clarification is needed; the role clarifies the part that the nodes are playing in the relationship. A complete alphabetical list of Role Kinds in the default taxonomy with their descriptions is provided in the table below.
Associate: Associate inference.
Attachment: Attachment.
AttachmentContainer: Node containing an attachment.
BusinessOwner: Business owner of an element.
Category: Category of elements.
Child: Child in hierarchy.
Client: Client.
CouplingFacility: Coupling Facility.
Contained: An element contained within another.
ContainedHost: Host contained within another host.
ContainedLocation: A location inside another location.
ContainedSoftware: Piece of software contained inside other software.
Container: An element containing others.
Contributor: Contributor inference.
Dependant: Entity that depends upon another.
DependedUpon: Entity depended upon by another.
Detail: A detail belonging to an element.
DeviceOnSubnet: IP device belonging to a subnet.
DeviceWithInterface: A device with an interface, e.g. a network interface.
DeviceWithAddress: A device with an IP address.
DiscoveryAccess: Discovery access.
DiscoveryResult: Discovery result.
DiscoveryRun: Discovery run.
EdgeClient: Edge client.
EdgeDevice: Edge device.
Element: Element.
ElementInCategory: Element belonging to a category.
ElementInLocation: Element in a location.
ElementUsingFile: Element using a file.
ElementWithDetail: Element with associated details.
Collection: Element containing a collection of elements.
ElementWithStatus: Element with an associated status.
ManagedElement: Element being managed.
Employee: Employee.
EndpointRange: A range used to control endpoint access.
FibreChannelDeviceWithNode: A Fibre Channel HBA with a WWNN.
FibreChannelNode: A Fibre Channel WWNN for an HBA.
FibreChannelNodeWithPort: A Fibre Channel WWNN with a WWPN.
FibreChannelPort: A Fibre Channel WWPN.
File: A File.
Hardware: Physical host to which hardware reference data will be related.
HardwareDetail: Support Detail Data is for Hardware.
HomeLocation: Home location of an element.
Host: Host for Software Instances.
HostContainer: Host containing other hosts.
HostedFile: A File on a Host.
ITOwner: IT owner of an element.
InferredElement: Inferred element.
InferredRelationship: Inferred relationship.
InstalledSoftware: Installed software.
InterfaceOfDevice: Interface of an IP device.
InterfaceWithAddress: Interface with an IP address.
IPv4Address: An IPv4 address.
IPv6Address: An IPv6 address.
List: List.
Location: Current location of an element.
LocationContainer: A location containing other locations.
Manager: Manager.
ManufacturedItem: Manufactured item.
Manufacturer: Manufacturer of an item.
Member: Member of a list or collection.
MountedFileSystem: Mounted file system.
Mounter: Mounter of a file system.
NetworkEndpoint: One end of a network connection.
New: A new node.
Old: A replaced node.
Next: Next node in sequence.
OperatingSystem: Operating system of a host.
OSDetail: Support Detail Data is for Operating System.
OwnedItem: Item owned by someone or something.
Owner: Owner of an item.
Parent: Parent in hierarchy.
Pattern: Pattern.
PatternConfiguration: Pattern configuration.
PatternDefine: Pattern definitions function.
PatternDefinitions: Pattern definitions.
PatternModule: Pattern module.
PatternPackage: Pattern package.
PatternExecution: Pattern execution.
PatternTrigger: Pattern trigger.
PatternCreated: Created node.
PatternModified: Modified node.
PatternConfirmed: Confirmed node.
PatternDestroyed: Destroyed node.
Peer: Peer.
Previous: Previous node in sequence.
Primary: Primary inference.
ReferenceData: Holds hardware reference data.
Relationship: Relationship.
Resource: Resource being used.
ResourceUser: User of resource.
RunningSoftware: Software running on a host.
AggregateSoftware: Aggregate software running on a host.
Service: Service being provided.
ServiceProvider: Provider of a service.
Server: Server.
SoftwareContainer: Piece of software containing other software.
SoftwareDetail: Support Detail Data is for Software.
Status: Status.
StorageContainer: Element containing storage.
ContainedStorage: Storage contained in an element.
Storage: Storage for an element.
Subnet: Subnet of an IP device.
SupportOwner: Person or group responsible for supporting an element.
Overridden: A pattern that is overridden by another.
Overrider: A pattern that is overriding another.
Error: An ECAError related to a pattern.
PatternWithError: A pattern with ECAErrors.
Provenance
By linking data items and their attributes to the evidence for them, BMC Atrium Discovery enables its data to be easily verified; a prerequisite for trusting it. This feature is called Provenance. Provenance data is created by the Reasoning Engine and consists of relationships between Directly Discovered Data items and entities inferred from them: a relationship back to the source from the inferred entity. For example, there is a provenance relationship between each Software Instance (SI) and the data that caused the SI to be inferred, typically one or more processes, files, or the like. Provenance information is meta-information describing how the other information came to exist. It is generated as Reasoning builds and maintains the model, and is stored on relationships in the model.
Inference Types:
Primary: indicates that the existence of the evidence node is the reason that the inferred node was created.
Contributor: indicates that the evidence node provided information used in building the inferred node.
Associate: indicates that BMC Atrium Discovery knows there is a relationship between the evidence node and the inferred node.
Relationship: indicates that BMC Atrium Discovery knows that a relationship exists because of the evidence node.
Deletion: indicates that the removal of the inferred node was due to withdrawal of the evidence node.
Maintaining Pattern: the pattern maintaining a node.
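The shape of provenance data can be illustrated with a small sketch: each inferred node keeps tagged links back to the directly discovered evidence it came from. The data structures and node labels here are assumptions for illustration, not the product's internal representation:

```python
# Illustrative sketch of provenance edges between inferred nodes and
# their directly discovered evidence, tagged with an inference type.
INFERENCE_TYPES = {"Primary", "Contributor", "Associate",
                   "Relationship", "Deletion", "MaintainingPattern"}

provenance = []  # (inferred_node, inference_type, evidence_node)

def add_provenance(inferred, inference_type, evidence):
    assert inference_type in INFERENCE_TYPES
    provenance.append((inferred, inference_type, evidence))

# A SoftwareInstance inferred primarily from a discovered process,
# with a configuration file contributing extra attributes.
add_provenance("SI:apache", "Primary", "Process:httpd")
add_provenance("SI:apache", "Contributor", "File:httpd.conf")

# Verification walks from the inferred node back to its evidence.
evidence_for = [e for n, t, e in provenance if n == "SI:apache"]
print(evidence_for)  # ['Process:httpd', 'File:httpd.conf']
```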
If you need to access the data in your model, use the UI functions provided in BMC Atrium Discovery for that purpose. Apart from specialist purposes, you should not need to understand the provenance implementation details in BMC Atrium Discovery; however, an overview is provided below.
A client-server relationship between SIs. This can be inferred based on the presence of network connections between processes. It can also be based on other evidence such as configuration files or directly querying the endpoints.
SoftwareInstance:Client:Communication:Server:SoftwareInstance
Equivalent to the client-server case, but where the endpoints in a communication are peers, rather than client and server.
SoftwareInstance:Peer:Communication:Peer:SoftwareInstance
A dependency between SIs, other than a communication link. Attributes on the relationship can indicate the kind of dependency.
SoftwareInstance:Dependant:Dependency:DependedUpon:SoftwareInstance
Represents the relationship between logical hosts and their container for Sun E15Ks and similar.
HostContainer:HostContainer:HostContainment:ContainedHost:Host
Represents the relationship between network interfaces and the subnets they are connected to.
Subnet:Subnet:DeviceSubnet:DeviceOnSubnet:NetworkInterface
In addition to these relationships, the reasoning engine also constructs relationships to the Pattern Management Nodes, for example, patterns.
Attachments
Attachments are files that can be associated with many different kinds of nodes in the datastore, but they are usually used with hosts. A typical use of attachments is for disaster recovery purposes; all necessary documents can be stored and managed by BMC Atrium Discovery. Any number of attachments can be associated with a single node, and the same file can be attached to multiple nodes. A copy of the attached file is uploaded to the BMC Atrium Discovery appliance. If the same file is attached to different nodes, multiple copies are held on the appliance. When a node with attachments is marked as destroyed, the attachments are not destroyed, and remain attached to the destroyed node.
The Default and Taxonomy partitions store a history of all changes for all nodes by default. These partitions can be purged by a user with the model/datastore/partition/partitionname/write permission (where partitionname is a named partition, or * for all). History is not recorded for attributes whose names begin with a double underscore, for example __patch_checksum.
When any changes are made to a node, they are recorded by default. The changes may be made by any of the following:
- Automatic changes made by Reasoning or import tools
- User instigated changes made through the User Interface
Changes are stored on the node that has been changed. The following items are stored regarding the change:
- Date Time: timestamp for the change.
- User: the user who made the change. Internal users, such as Reasoning, are recorded as such.
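The history-recording rule above can be sketched in a few lines: every attribute change is stored with a timestamp and user, except attributes whose names begin with a double underscore. The structures and function name are illustrative assumptions:

```python
from datetime import datetime, timezone

def record_change(history, attr, old, new, user):
    """Append a change record, skipping internal attributes."""
    if attr.startswith("__"):
        return  # no history for internal attributes like __patch_checksum
    history.append({"timestamp": datetime.now(timezone.utc).isoformat(),
                    "user": user, "attribute": attr,
                    "old": old, "new": new})

history = []
record_change(history, "ram", "8 GB", "16 GB", "Reasoning")
record_change(history, "__patch_checksum", "a1", "b2", "Reasoning")
print(len(history))  # 1: only the ram change was recorded
```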
Key Terminology
This document assumes familiarity with BMC Atrium Discovery; as a reminder, here are some of the key terms and concepts used throughout this guide:
Data Aging - Discovered data is regarded as valid at the time of its last successful scan. The nature of IT infrastructure means that frequent, minor changes to configurations, hosts, and software are common. Consequently, discovered data can be regarded as becoming less current with the passing of time. In BMC Atrium Discovery, when data passes a certain configurable aging threshold, it is destroyed.
Datastore - All data used by the BMC Atrium Discovery system is held in an object database. The datastore treats data as a set of objects and the relationships between them.
Directly Discovered Data - Data that the Discovery Engine has discovered; it has not undergone any processing beyond simple parsing. Everything that the Discovery Engine finds that may be of interest is stored, regardless of whether it is understood or not. It is stored in a structured form that can be queried and reported on, which makes it easy to construct certain kinds of discovery reports and aids in developing new patterns.
Event - The Rules Engine (ECA Engine) executes rules in response to Events.
Host - A node in the model which represents a physical or virtual computer system, including information about its operating system and its physical or virtual hardware. A host is sometimes referred to as an OSI (Operating System Instance). See Host Node.
ID - The ID of a node is a unique identifier for the node itself. If the node corresponding to an entity is destroyed, and a new node is subsequently created for it, the new node will have a different ID, but will have the same key.
Inferencing - The act of drawing conclusions based on other data.
Key - The key of a node is a unique identifier for the entity that the node represents. Unlike the node ID, the key is persistent.
Lifecycle - The lifecycle of an entity describes the conditions under which it comes into existence, and the conditions under which it ceases to exist. For Nodes in the BMC Atrium Discovery model, the lifecycle stages are:
- Current - The lifecycle stage used to describe nodes which exist in your model. BMC Atrium Discovery has evidence that they currently exist in your environment.
- Aging - The lifecycle stage used to describe nodes which also exist in your model. However, these nodes represent entities that BMC Atrium Discovery has not seen in a certain period of time and has chosen to 'age out' of the model. Note that not all entities that BMC Atrium Discovery fails to see evidence for after a certain period of time are aged.
- Destroyed - The lifecycle stage used to describe nodes which have been specifically marked as destroyed. These nodes are 'destroyed'; however, they remain in the model.
- Purged - The lifecycle stage used to describe destroyed nodes which have been specifically 'purged' from the model. Purging a node means that it no longer exists in the model and has been actually removed from the datastore.
Node - A Node is an object in the BMC Atrium Discovery datastore which represents an entity in the environment. Nodes have a kind, such as 'Host', and a number of named attributes. Nodes can be connected to other Nodes via Relationships. Most Node kinds have a 'key' which uniquely identifies the entity in the environment.
Node Kind - The type of a node, such as Host or Software Instance. The default set of nodes and their associated attributes and relationships are defined in the BMC Atrium Discovery taxonomy.
Pattern - Patterns are written in the Pattern Language (TPL). Patterns are responsible for creating and maintaining the model. Each pattern in TPL has a corresponding Pattern node in the model, which is related to the nodes that the pattern is maintaining. Patterns are used to extend the functionality of the reasoning engine.
Provenance - Meta-information describing how inferred information came to exist. It is generated as Reasoning builds and maintains the model. Provenance information is stored as relationships in the model.
Reasoning Engine - An event based engine which orchestrates and drives the population of the different parts of the data model through execution of a series of rules that make up the core functionality of the product. It is extensible through the use of patterns.
Relationship - The way in which two or more nodes are associated with each other. Like nodes, relationships have a kind, such as 'Dependency', and a number of named attributes. Relationships are linked to Nodes via Roles.
Removal - The concept of taking data out of the model using one or more of the BMC Atrium Discovery lifecycle methodologies (Aging,
Destroyed or Purged). Removal is the collective term used in this document.
Role - A node with a relationship to another node acts in a Role in the relationship, which indicates which end of the relationship it is. For example, in a 'Dependency' relationship, one node has the Role 'Dependant' and the other has the Role 'DependedUpon'.
Rules Engine - Another term for the reasoning engine. The Rules Engine processes the rules that are generated from Patterns in order to maintain the model. The Rules Engine is an Event Condition Action (ECA) engine.
Rules - Small fragments of executable code that run in the Rules Engine. Rules are generated from Patterns when they are activated. Other core rules are distributed with BMC Atrium Discovery.
Taxonomy - The template defining the nodes, attributes, and relationships used by BMC Atrium Discovery and stored in the datastore. The BMC Atrium Discovery taxonomy also defines how much of the data model is represented in the user interface.
Trigger - The Trigger for a pattern describes the conditions under which the pattern executes. Triggers correspond to the creation, modification, or destruction of nodes.
For further information on BMC Atrium Discovery terminology, see the Glossary.
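The ID and Key terms defined above can be illustrated with a small sketch; the node structure and key format are illustrative assumptions, not the product's internals:

```python
# Sketch of the ID vs key distinction: the ID identifies the node
# object itself and changes if the node is destroyed and recreated;
# the key identifies the real-world entity and persists.
import itertools

_next_id = itertools.count(1)

def create_node(key):
    """Create a node object with a fresh ID and a persistent key."""
    return {"id": next(_next_id), "key": key}

first = create_node("host/serial-XYZ")
# ...the node is destroyed, then the same host is rediscovered...
second = create_node("host/serial-XYZ")

print(first["id"] != second["id"])    # True: new node, new ID
print(first["key"] == second["key"])  # True: same entity, same key
```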
Inferred Nodes
This section describes the BMC Atrium Discovery Inferred Nodes. Inferred information is the type of information that is likely to be of most interest to most users and is inferred from other information using rules in the Reasoning Engine. Host Node Host Container Node Software Instance Node Software Component Node Business Application Instance Node Functional Component Node Mainframe Node MFPart Node Coupling Facility Node Cluster Node File Node File System Node Package node Patch Node Fiber Channel Nodes Network Device Node Printer Node SNMP Managed Device Node IPAddress node Network Interface node Subnet node Detail Node Database Detail Node Storage Node Storage Collection Node Generic Element Node Runtime Environment Node
Host Node
A Host Node represents a general purpose computer on the network: any physical or virtual computer system in your organization. A host is of a specific type, such as a server or a desktop, to which BMC Atrium Discovery gains high quality interactive access. A host is not an IP-connected infrastructure device, such as a switch or a router. The attributes of a Host node contain stable information about the host, such as processor and memory details, its primary host name, operating system information, whether it is virtual, and so forth. Hosts are considered inferred information because rules inside the reasoning engine determine which hosts exist based on the DiscoveryAccess results obtained. Hosts are only created for discovery targets that should be considered hosts, as opposed to switches, printers, or other infrastructure.
Creation/update
In order to maintain the model of hosts in the environment, BMC Atrium Discovery identifies hosts using an internal weighting algorithm, see Uniquely Identifying a Host for more information. Hosts are identified by a number of criteria using this algorithm. The creation, update and removal of a Host node are based on these identities. A Host node is created in the datastore when a new host is inferred from a successful discovery. A Host is only created when good quality access to the target computer has been established. Good quality access is necessary for the BMC Atrium Discovery internal system to establish that there is sufficient stable information available to identify a Host. Stable information is considered to be data such as hardware details, how many
A Host node will not be created unless information is available from Device Info, Host Info and the discovered Network Interfaces.
For as long as BMC Atrium Discovery's host identification algorithm can determine that a particular discovery target corresponds to a particular Host node, the Host node is updated every time the target is scanned on any of its IP addresses. Once a Host node is created, BMC Atrium Discovery will update its attributes as the host changes over time, while still retaining the original Host node and key. When a Host node is first created, BMC Atrium Discovery constructs a key attribute that uniquely identifies the host. The key is based on a combination of attribute values from the Host, but those attributes can change over time. As the host is rescanned, the Host node attributes that contributed to the key can all change, but the key itself will not be changed. If a Host node is destroyed and then the corresponding host is rescanned, a new Host node will be created. If the attribute values that were originally used to construct the key have not changed, the new Host node will be given the same key as the original one, but if any of the attributes have changed, the new Host node will have a different key to the old one.
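The key stability described above can be sketched as follows: the key is built from attribute values when the Host node is first created, then never recomputed, so later attribute changes do not change the key. The hashing scheme and attribute set here are illustrative assumptions, not BMC Atrium Discovery's actual key algorithm:

```python
import hashlib

def make_key(attrs):
    """Build a key from a fixed set of identifying attributes."""
    material = "|".join(attrs[a] for a in ("serial", "hostname", "os"))
    return hashlib.sha1(material.encode()).hexdigest()[:12]

host = {"serial": "XYZ", "hostname": "web01", "os": "Linux"}
host["key"] = make_key(host)   # computed once, at creation
original_key = host["key"]

host["hostname"] = "web01-renamed"  # attributes change on rescan...
# ...but the stored key is retained, not recomputed:
print(host["key"] == original_key)  # True
```

Only if the node were destroyed and recreated would the key be computed again, at which point changed attribute values would yield a different key, as the text describes.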
Removal
By default, a Host node is automatically destroyed when BOTH of the following conditions are met: no update has been made in ten days from any of the IP addresses known for this host, and no update has been made in the last seven scans of the IP address currently being scanned. Thus, if a host is not successfully scanned, the Host node representing it will, by default, remain in the model for at least 10 days. In this context, "seen" means anything other than no update. The table below shows the default removal criteria for the Host node.

Node Kind | Days | Scans
Host | 10 | 7
These are the default settings for the number of days and number of scans. You can change these settings on the Model Maintenance page. See Configuring model maintenance settings for more information.
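The two-condition rule can be sketched as a small predicate. The function name and parameters are illustrative, using the documented defaults of 10 days and 7 scans.

```python
def should_remove_host(days_since_update, missed_scans,
                       max_days=10, max_scans=7):
    # A Host node is destroyed only when BOTH thresholds are reached:
    # no update for max_days days from any known IP address, AND
    # no update in the last max_scans scans of the endpoint.
    return days_since_update >= max_days and missed_scans >= max_scans

assert not should_remove_host(12, 3)  # stale for 12 days, but only 3 missed scans
assert not should_remove_host(5, 9)   # 9 missed scans, but only 5 days elapsed
assert should_remove_host(11, 8)      # both conditions met: remove
```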
When a Host node is destroyed, its related nodes are destroyed too. That is, if a Host node is destroyed, all of its software instances, network interfaces, files, and HBAs are also destroyed.
When a node with attachments is destroyed, the attachments are not destroyed; they remain attached to the destroyed node. See Attachments for an introduction to attachments. If a Host node is destroyed in the datastore, all of its relationships are destroyed as well. For example, a host may have a relationship to a location; if the host is destroyed, the relationship to that location is also destroyed. See How nodes are removed.
The scores for all the attributes are then added together. Once all the candidates have been evaluated, Host nodes with a negative ranking are ignored, and the highest positively-ranked Host node is chosen. If no Host node has a positive score, or if no candidates were found, a new Host node is created. In all cases, the Host node is then updated with the new data.
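The selection step just described can be sketched as follows (illustrative Python; the real per-attribute weights are internal to the product):

```python
def choose_host(candidates):
    """Pick the best-matching candidate Host, or None if a new
    Host node should be created instead.

    `candidates` maps a candidate Host id to its list of
    per-attribute match scores (positive or negative)."""
    best, best_score = None, 0
    for host_id, attr_scores in candidates.items():
        total = sum(attr_scores)   # scores for all attributes added together
        if total > best_score:     # negative/zero totals are never chosen
            best, best_score = host_id, total
    return best

# host-a matches on two attributes but has a strong mismatch (-20),
# so its negative total is ignored and host-b wins.
assert choose_host({"host-a": [5, 5, -20], "host-b": [5, 2, 1]}) == "host-b"

# No positively-scored candidate: a new Host node would be created.
assert choose_host({"host-a": [-5]}) is None
```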
Endpoint identity
BMC Atrium Discovery scans IP addresses, or endpoints. It is the job of the host identification algorithm to match endpoints to Host nodes. When an endpoint is scanned, the host identification algorithm sometimes decides that the endpoint now corresponds to a different Host node from the one matched in the previous scan of that endpoint. This can happen for three main reasons:
1. IP addresses have been reassigned, so the endpoint really does correspond to a different host.
2. The endpoint still corresponds to the same host, but the host has changed so much since the previous scan that BMC Atrium Discovery cannot tell that it is the same one.
3. The endpoint still corresponds to the same host, but BMC Atrium Discovery's access credentials have changed substantially, meaning the data appears very different. This can happen when switching between SNMP and login access, for example.
All three situations look the same from BMC Atrium Discovery's point of view: the discovered data is significantly different from the data previously retrieved from the endpoint. To make these scenarios easier to understand, BMC Atrium Discovery creates an EndpointIdentity relationship between the Host nodes. The Host view displays the relationship and allows comparison of the new and old Host nodes. In all situations, BMC Atrium Discovery will ultimately resolve the condition. If the change is due to an IP address reassignment, the original Host may be found by scanning a different endpoint; if that happens, the Host is connected to the new endpoint and the EndpointIdentity relationship is removed. In other situations, the original Host node will age out according to the normal ageing rules.
Virtual Hosts
Virtual Hosts are modeled so that the virtual host is related through the software instance to the physical host. For example: VMware Virtual Machines and Solaris Containers.
Contained Hosts
Where a number of IP devices are discovered that are logical devices on hosts, they are modeled as shown in the diagram below. See Host Node lifecycle. For example, the following types of servers are treated in this manner: Sun Enterprise 10000 (E10K), Sun Fire 12000 (F12K), Sun Fire 15000 (F15K), Sun Fire 20000 (F20K), and Sun Fire 25000 (F25K).
Clustered Hosts
Where a number of collaborative hosts are discovered (they are seen as individual IP devices and there is no parent), they are consolidated into clusters, which are modeled as Cluster nodes; see Cluster Lifecycle. For example, the following types of cluster are treated in this manner: Microsoft Clusters and Veritas Cluster Servers.
Host Attributes
The attributes and relationships of a Host node are described in the tables below. Where hidden attributes are included, they are described in the most relevant section. Not all attributes are discovered on every platform; see the Discovered Attributes by Platform page for information on this. Fields that are displayed in the user interface but have no related attribute on a Host node (that is, they are inherited or populated by relationships) are also shown.

UI Name | Attribute (type) | Description

General Details
Not displayed in UI | name (string) | The name that the host is known by, typically the same as hostname.
Not displayed in UI | hostname (string) | The name that the host is known by to systems external to this host.
Not displayed in UI | key (string) | Globally unique key.
Not displayed in UI | description (string) | Description of the host. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI | url (string) | URL for information about the host. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI | business_continuity_critical (boolean) | If true, the host is critical to operation of the business. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI | third_party_support (boolean) | If true, the host is supported by a third party. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI | synonyms (list:string) | Other names by which this host is known. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI | host_type (string) | Type such as 'UNIX Server', 'Windows Desktop', and so on. This attribute is deprecated; use type instead.
Type | type (string) | Type such as 'UNIX Server', 'Windows Desktop', and so on.
Hardware Vendor | vendor (string) | The hardware vendor, for example, Dell Computer Corporation.
Virtual | virtual (boolean) | Flag indicating whether the host is virtual.
Partition | partition (boolean) | Flag indicating whether the host is a partition.
E10K SSP Hostname | ssphostname (string) | The E10K SSP hostname, the physical host for Sun Microsystems "Blade" servers. Available on Sun Enterprise 10000 systems only.
SunFire Domain | sunfire_domain (string) | F15K SunFire domain. Common to all blades in a SunFire server; the attribute is set for all Host nodes that are members of the server.
Primary Domain Controller | pdc (string) | Windows Primary Domain Controller.
Zonename | zonename (string) | The Solaris zonename of this host and its container. Currently only Solaris 10 supports zonenames.
LDOM Name | ldom_name (string) | Solaris LDOM name.
LDOM Role | ldom_role (string) | Solaris LDOM role.
Power LPAR Name | lpar_name (string) | IBM Power LPAR name.
Power LPAR Number | lpar_partition_number (int) | IBM Power LPAR number.

Identity
Local FQDN | local_fqdn (string) | The fully qualified domain name of the host.
DNS Domain | dns_domain (string) | The DNS domain to which the host belongs.
NIS/Windows Domain | domain (string) | The NIS/Windows domain to which the host belongs.
Windows Workgroup | workgroup (string) | The Windows workgroup to which this host belongs.

Operating System
Discovered OS | os (string) | The name and version of the operating system. For example, Mandrake Linux release 9.1 (Bamboo) for i586.
Discovered OS Class | os_class (string) | Operating system class.
Discovered OS Type | os_type (string) | Operating system type. The name part of the os attribute. For example, Mandrake Linux.
Discovered OS Version | os_version (string) | Operating system version. The version part of the os attribute. For example, 9.1.
Discovered OS Edition | os_edition (string) | Operating system edition.
Discovered OS Architecture | os_arch (string) | Operating system architecture.
Discovered OS Update Level | os_level (string) | Operating system update level.
Discovered OS Build | os_build (string) | Operating system build information.
Discovered OS Vendor | os_vendor (string) | Operating system vendor.
Service Pack | service_pack (int) | Service Pack number. Windows only.
Discovered Kernel | kernel (string) | The version of the kernel. For example, on an AIX host, 5.3.

Patches and Packages
Package Count | package_count (int)
Patch Count | patch_count (int)

Attributes used internally to short-circuit some processing
Not displayed in UI | __patch_checksum_ (string)
Not displayed in UI | __package_checksum_ (string)

Host aging attributes
Not displayed in UI | age_count (int) | The number of consecutive successful (positive) or failed (negative) accesses, from any endpoint.
Not displayed in UI | last_update_success (date) | The time at which a scan was last successfully associated with this host.
Not displayed in UI | last_update_failure (date) | The time at which a scan associated with this host failed.

Hardware and Network
Model | model (string) | The model name of the hardware.
Serial Number | serial (string) | The serial number of the hardware.
UUID | uuid (string) | The universally unique identifier (UUID) of the host.
Unique Host Id | hostid (string) | Unique host identification string. When HP-UX partitions are scanned, the partition ID is determined and appended to the original hostid; this differentiates individual hosts in an HP-UX domain.
RAM | ram (int) | The amount of RAM (in MB) on the host as reported by the host operating system. For VMware guests this is the amount of RAM exposed to the guest, which will probably be less than the total physical RAM on the container hardware. On an ESX server it is possible to overcommit; that is, the total RAM exposed to guests can be greater than the physical RAM on the physical hardware. For Solaris Zones, all global and sub zones report the same value, which is that of the physical RAM in the server.
Number of Processors | num_processors (int) | The number of physical processors on the host.
Processor Type | processor_type (string) | The type of processor used in the host. For example, Intel(R) Pentium(R) 4 CPU.
Processor Speed | processor_speed (int) | The speed of each processor in MHz.
Number of Logical Processors | num_logical_processors (int) | The number of logical processors available to the OS.
Cores per Processor | cores_per_processor (int) | The number of cores per physical processor available to the OS.
Threads per Processor Core | threads_per_core (int) | The number of threads per core in multi/hyper-threaded processors available to the OS.
CPU Threading Enabled | cpu_threading_enabled (boolean) | Whether CPU hardware threading is enabled.
Number of Processor Types | num_processor_types (int) | The number of physical processor types.
All Processor Types | _all_processor_types (list:string) | List of all processor types.
All Processor Speeds | _all_processor_speeds (list:int)
Logical Processor Counts | _all_processor_logical_counts (list:int)
Physical Processor Counts | _all_processor_physical_counts (list:int)
Not displayed in UI | _all_threads_per_core (list:int)
Not displayed in UI | _all_cores_per_processor (list:int)
Power Supply Status | psu_status (list:string) | Displays the status of each power supply. This can be either OK or Fail.

Attributes used only to facilitate search queries
Not displayed in UI | __all_ip_addrs_ (list:string) | Internal attribute to aid searching Hosts by IP address. BMC Atrium Discovery internal use only; do not use.
Not displayed in UI | __all_dns_names_ (list:string) | Internal attribute to aid searching Hosts by name. BMC Atrium Discovery internal use only; do not use.

Attributes used for grouping
Not displayed in UI | _excluded_from_grouping (boolean) | Records that a node has been marked as excluded from automatic grouping. BMC Atrium Discovery internal use only; do not use.

Attributes used for CMDB sync
CDM Virtual System Type | cdm_virtual_system_type (int) | Attribute for populating the CDM BMC_ComputerSystem.VirtualSystemType attribute.
Last successful CMDB sync | last_cmdb_sync_success (date) | The time at which this Host was last successfully synchronized into the CMDB.
Last failed CMDB sync | last_cmdb_sync_failure (date) | The time at which an attempt to synchronize this Host into the CMDB failed.
CMDB sync duration | last_cmdb_sync_duration (float) | The time in seconds spent performing the last CMDB synchronization of this Host.
CMDB CI count | last_cmdb_sync_ci_count (int) | The number of CIs corresponding to this Host at the last CMDB synchronization.
CMDB Relationship count | last_cmdb_sync_rel_count (int) | The number of relationships between CIs corresponding to this Host at the last CMDB synchronization.
Host Relationships
The relationships on a Host node are described in the table below.

UI Name | Relationship | Description
Containing AutomaticGroup | Host: ContainedItem: Containment: Container: AutomaticGroup | Owning AutomaticGroup.
Containing Group | Host: ContainedItem: Containment: Container: Group | Owning Group.
Location | Host: ElementInLocation: Location: Location: Location
Hosted Applications | Host: Host: HostedSoftware: AggregateSoftware: BusinessApplicationInstance
Software Instances | Host: Host: HostedSoftware: RunningSoftware: SoftwareInstance
Software Instances | Host: Host: HostedSoftware: AggregateSoftware: SoftwareInstance
Runtime Environments | Host: Host: HostedSoftware: RunningSoftware: RuntimeEnvironment
Host Container | Host: ContainedHost: HostContainment: HostContainer: HostContainer
Cluster | Host: ContainedHost: HostContainment: HostContainer: Cluster
Containing VM | Host: ContainedHost: HostContainment: HostContainer: SoftwareInstance
Network Interfaces | Host: DeviceWithInterface: DeviceInterface: InterfaceOfDevice: NetworkInterface
IPv4 Addresses | Host: DeviceWithAddress: DeviceAddress: IPv4Address: IPAddress
IPv6 Addresses | Host: DeviceWithAddress: DeviceAddress: IPv6Address: IPAddress
HBA Interfaces | Host: DeviceWithInterface: DeviceInterface: InterfaceOfDevice: FibreChannelHBA
Files | Host: Host: HostedFile: HostedFile: File
Device Info | Host: InferredElement: Inference: Primary: DeviceInfo
Host Info | Host: InferredElement: Inference: Primary: HostInfo
Discovery Access | Host: InferredElement: Inference: Associate: DiscoveryAccess
Patches | Host: Host: HostedSoftware: InstalledSoftware: Patch
Not displayed in UI | Host: InferredElement: Inference: Contributor: DiscoveredPatches
Packages | Host: Host: HostedSoftware: InstalledSoftware: Package
Not displayed in UI | Host: InferredElement: Inference: Contributor: DiscoveredPackages
File System | Host: Mounter: FileSystemMount: MountedFileSystem: FileSystem
Host: Next: EndpointIdentity: Previous: Host
Host: Previous: EndpointIdentity: Next: Host
Host: Next: EndpointIdentity: Previous: NetworkDevice
Host: Previous: EndpointIdentity: Next: NetworkDevice
Host: Next: EndpointIdentity: Previous: Mainframe
Host: Previous: EndpointIdentity: Next: Mainframe
Host: Next: EndpointIdentity: Previous: Printer
Host: Previous: EndpointIdentity: Next: Printer
Host: Next: EndpointIdentity: Previous: SNMPManagedDevice
Host: Previous: EndpointIdentity: Next: SNMPManagedDevice
Host: Hardware: ReferenceData: ReferenceData: HardwareReferenceData
Host: ElementWithDetail: SupportDetail: HardwareDetail: SupportDetail
OS Support Details | Host: ElementWithDetail: SupportDetail: OSDetail: SupportDetail
Details | Host: ElementWithDetail: Detail: Detail: Detail
Collections | Host: Member: Collection: Collection: Collection
Communicating With | Host: Host: ObservedCommunication: Host: Host
Host: ManagedElement: Management: Manager: SoftwareInstance
Discovery Conditions | Host: ElementWithCondition: DiscoveryCondition: DiscoveryCondition: DiscoveryCondition
Host: InferredElement: AccessFailure: DiscoveryAccess: DiscoveryAccess
Host: InferredElement: AccessOptimization: DiscoveryAccess: DiscoveryAccess
Host: AttachmentContainer: Attachment: Attachment: Attachment
Status | Host: ElementInCategory: ElementCategory: Category: LifecycleStatus
Recovery Time | Host: ElementInCategory: ElementCategory: Category: RecoveryTime
Family | Host: ElementInCategory: ElementCategory: Category: Family
Organizational Unit | Host: OwnedItem: Ownership: Owner: OrganisationalUnit
Location | Host: ElementInLocation: Location: Location: Location
Support Manager | Host: OwnedItem: Ownership: SupportOwner: Person
Business Owner | Host: OwnedItem: Ownership: BusinessOwner: Person
IT Owner | Host: OwnedItem: Ownership: ITOwner: Person
Creation
The creation of a Host Container node is under the full control of patterns. Host Containers are created when a pattern searching for a contained host detects one on a host; the pattern contains the conditions that enable it to detect a contained host. Once one is found, the pattern creates the Host Container that it needs. A relationship link to the Host node is created automatically, and a key is generated for the Host Container. The generated key for a Host Container node depends entirely on the kind of Host Container.
Update
The update procedure for a Host Container is also under full control of patterns. When the pattern has identified a contained host, a key is generated for the Host Container. If a Host Container with that key already exists, the Host Container is updated accordingly.
Removal
The default destruction of a Host Container is not under the control of patterns. A Host Container node is removed when the last remaining contained Host node is unlinked and there is no longer any evidence for that host or any of its relationships. This is a cascade removal; see Cascade Removal.
Host Container Attributes

UI Name | Attribute (type) | Description
Not displayed in UI | key (string) | Globally unique key.
Description | description (string) | Description of the Host Container. Legacy attribute not currently used. Can be used by patterns if desired.
Hardware Vendor | vendor (string) | Hardware vendor of the Host Container.
Serial Number | serial (string) | Serial number of the Host Container.
System Identifier | systemid (string) | IBM Power Systems System Identifier.
Not displayed in UI | url (string) | URL for information about the Host Container. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI | business_continuity_critical (boolean) | If true, the Host Container is critical to operation of the business. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI | third_party_support (boolean) | If true, the Host Container is supported by a third party. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI | synonyms (string) | Other names by which this Host Container is known. Legacy attribute not currently used. Can be used by patterns if desired.
Type | type (string) | Type of host container.
HostID Range | hostid_range (list:string) | Range of host ids contained in this container.
Recovery Time | HostContainer: ElementInCategory: ElementCategory: Category: RecoveryTime
Organizational Unit | HostContainer: OwnedItem: Ownership: Owner: OrganisationalUnit
Support Manager | HostContainer: OwnedItem: Ownership: SupportOwner: Person
Business Owner | HostContainer: OwnedItem: Ownership: BusinessOwner: Person
IT Owner | HostContainer: OwnedItem: Ownership: ITOwner: Person
Why an SI and not a Runtime Environment? Software Instances represent pieces of software running on a host. A Runtime Environment represents something supporting running software. For example, Weblogic is represented by an SI, and Java by a Runtime Environment Node. However, it is still important to keep track of the supporting Runtime Environments and their versions as some may require updates.
If a Software Instance node with the key specified by the pattern already exists, it is updated, even if the node was previously maintained by a different pattern; in this case, the pattern takes over as the maintainer of the Software Instance. If a node with the specified key does not exist, a new Software Instance node is created. In both cases, the Software Instance node is linked to the pattern with a maintainer relationship. If the pattern does not specify a key for the Software Instance node, the system creates or updates a group Software Instance with an automatically generated key. The key is based upon the key of the Host on which the Software Instance is running, the specified type of the Software Instance and, optionally, a key group that can be used to separate the nodes into a number of groups. The count attribute is set to the number of instances in the group identified in the collection of Directly Discovered Data. Each time the host is scanned, the count attribute is updated to reflect the number of instances seen in that scan.
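The automatically generated group key can be sketched like this. The separator and formatting are assumptions; the inputs (Host key, Software Instance type, optional key group) are those described above.

```python
def group_si_key(host_key, si_type, key_group=None):
    # Illustrative composition of a grouped Software Instance key from
    # the host's key, the SI type, and an optional key group.
    parts = [host_key, si_type]
    if key_group is not None:
        parts.append(key_group)
    return "/".join(parts)

# Same host and type, no key group: instances fall into one group SI.
assert group_si_key("HOSTKEY1", "Apache Webserver") == \
       group_si_key("HOSTKEY1", "Apache Webserver")

# A key group splits otherwise-identical instances into separate groups.
assert group_si_key("HOSTKEY1", "Apache Webserver", "v2.2") != \
       group_si_key("HOSTKEY1", "Apache Webserver", "v2.4")
```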
Removal
The age_count attribute of the (first order) Software Instance node contains information about when the Software Instance node was last confirmed by its maintaining pattern. If age_count is positive, it represents the number of consecutive scans of the Host node in which the Software Instance was confirmed; if it is negative, it represents the number of consecutive scans in which the Software Instance node was not confirmed. The last_update_success and last_update_failure attributes contain the date and time at which the Software Instance node was last confirmed and not confirmed, respectively. If the pattern does not have a removal block, Software Instance nodes are removed using an aging strategy based on the age_count and last_update_success attributes. The default aging parameters are the same as for a Host node; that is, if a Software Instance node has not been seen for at least 7 scans over a period of at least 10 days, it is destroyed. The parameters may be changed in the options; see Configuring model maintenance settings for more information. The default aging strategy only applies to Software Instance nodes created from patterns triggering on the following node kinds and maintaining the Software Instances: DiscoveredProcess, DiscoveredService, DiscoveredListeningPort, DiscoveredSoftware, and DiscoveredVirtualMachine. If the Software Instance is triggered on anything else, for example a discovered file, then aging must be implemented in the pattern using a removal block. If the pattern maintaining a node does have a removal block, the block can override the default aging scheme to destroy its nodes either earlier or later than normal. For TKU patterns, refer to the documentation accompanying each pattern for details of special removal behavior.
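The age_count bookkeeping described above can be sketched as follows (an illustrative model, assuming nodes are plain dictionaries):

```python
from datetime import datetime, timezone

def record_scan(node, confirmed):
    # Sketch of age_count semantics: positive = consecutive scans in
    # which the SI was confirmed, negative = consecutive scans in which
    # it was not. A change of outcome resets the count to +1 or -1.
    count = node.get("age_count", 0)
    now = datetime.now(timezone.utc)
    if confirmed:
        node["age_count"] = count + 1 if count > 0 else 1
        node["last_update_success"] = now
    else:
        node["age_count"] = count - 1 if count < 0 else -1
        node["last_update_failure"] = now
    return node

si = {}
record_scan(si, True)
record_scan(si, True)
assert si["age_count"] == 2    # confirmed in two consecutive scans
record_scan(si, False)
assert si["age_count"] == -1   # first unconfirmed scan resets to -1
```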
Regardless of the presence or absence of a removal block in the pattern, if the Host corresponding to a DDD-triggered Software Instance Node is destroyed, the Software Instance Node is immediately destroyed (see How nodes get removed).
Removal
Software Instance Nodes created as a result of Software Instance triggers are destroyed using the Cascade removal type; when the triggering Software Instance Node is destroyed, the destruction is cascaded to the higher-level Software Instance Node. See Cascade Removal.
It is possible for other triggers to be used. If any other trigger is used, BMC Atrium Discovery has no automatic removal behavior. Patterns must be used to explicitly destroy any Software Instance Nodes created as a result of other triggers.
Software Instance Attributes

UI Name | Attribute (type) | Description
Type | type (string) | Type of the Software Instance.
Instance Count | count (int) | Number of instances grouped together.
Not displayed in UI | key (string) | Globally unique key.
Instance | instance (string) | The product's own name for this instance.
VM Type | vm_type (string) | The type of VM technology used by this instance of a virtual machine.
Cluster Identifier | cluster_id (string) | The internal identifier of the cluster this element is contributing to.
Cluster Internal Name | cluster_name (string) | The internal name of the cluster this element is contributing to.
Publisher | publisher (string) | The publisher of the Software Instance. Only populated in cases where a pattern identifies products from more than one publisher. This information is normally found in the Pattern node's publishers attribute.
Product Name | product (string) | The product name. Only populated in cases where a pattern identifies more than one product. This information is normally found in the Pattern node's products attribute.
Full Version | version (string) | Full-resolution version.
Product Version | product_version (string) | Version publicised by the vendor.
Release | release (string) | Release number.
Edition | edition (string) | Edition.
Service Pack | service_pack (string) | Service Pack.
Build | build (string) | Build number.
Patch | patch (string) | Patch level.
Revision | revision (string) | Revision.
Not shown in the UI | age_count (int) | The number of consecutive successful (positive) or failed (negative) times the Software Instance has been seen during host scans.
Not shown in the UI | last_update_success (date) | The time at which a scan was last successfully associated with this Software Instance.
Not shown in the UI | last_update_failure (date)
Not shown in UI | _explicit_removal (string)
Software Instance Relationships

UI Name | Relationship | Description
Host(s) | SoftwareInstance: RunningSoftware: HostedSoftware: Host: Host | Host(s) on which this Software Instance is running.
Host(s) | SoftwareInstance: AggregateSoftware: HostedSoftware: Host: Host | Host(s) on which this aggregate Software Instance is running.
MFPart | SoftwareInstance: RunningSoftware: HostedSoftware: Host: MFPart | MFPart on which this Software Instance is running.
MFPart(s) | SoftwareInstance: AggregateSoftware: HostedSoftware: Host: MFPart | MFPart(s) on which this aggregate Software Instance is running.
Maintaining Pattern | SoftwareInstance: Element: Maintainer: Pattern: Pattern | Pattern that is maintaining this Software Instance.
SoftwareInstance: Client: Communication: Server: SoftwareInstance | Server Software Instances that this Software Instance is communicating with.
SoftwareInstance: Server: Communication: Client: SoftwareInstance | Client Software Instances that are communicating with this Software Instance.
SoftwareInstance: Peer: Communication: Peer: SoftwareInstance | Peer Software Instances that are communicating with this Software Instance.
Contains | SoftwareInstance: SoftwareContainer: SoftwareContainment: ContainedSoftware: SoftwareInstance | Software Instances that make up this aggregate Software Instance.
Components | SoftwareInstance: SoftwareContainer: SoftwareContainment: ContainedSoftware: SoftwareComponent | Software Components inside this Software Instance.
Contained within | SoftwareInstance: ContainedSoftware: SoftwareContainment: SoftwareContainer: SoftwareInstance | Aggregate Software Instances containing this Software Instance.
Containing Applications | SoftwareInstance: ContainedSoftware: SoftwareContainment: SoftwareContainer: BusinessApplicationInstance | Business applications containing this Software Instance.
Cluster | SoftwareInstance: ServiceProvider: SoftwareService: Service: Cluster | Cluster containing this Software Instance.
Depends On Cluster | SoftwareInstance: Dependant: Dependency: DependedUpon: Cluster | Cluster this Software Instance is dependent on.
Database Details | SoftwareInstance: ElementWithDetail: Detail: Detail: DatabaseDetail | Details of this Software Instance.
Software Support Details | SoftwareInstance: ElementWithDetail: SupportDetail: SoftwareDetail: SupportDetail
Depended Upon By Software Instances | SoftwareInstance: DependedUpon: Dependency: Dependant: SoftwareInstance
Depends On Software Instance | SoftwareInstance: Dependant: Dependency: DependedUpon: SoftwareInstance
Depended Upon By Software Components | SoftwareInstance: DependedUpon: Dependency: Dependant: SoftwareComponent
Depends On Software Component | SoftwareInstance: Dependant: Dependency: DependedUpon: SoftwareComponent
Contained Virtual Host | SoftwareInstance: HostContainer: HostContainment: ContainedHost: Host
Contained MFPart | SoftwareInstance: HostContainer: HostContainment: ContainedHost: MFPart
Files | SoftwareInstance: ElementUsingFile: RelatedFile: File: File
Depended Upon By | SoftwareInstance: DependedUpon: Dependency: Dependant: Detail
Depends On | SoftwareInstance: Dependant: Dependency: DependedUpon: Detail
Database Elements This Depends On | SoftwareInstance: Dependant: Dependency: DependedUpon: DatabaseDetail
Details | SoftwareInstance: ElementWithDetail: Detail: Detail: Detail
Collections | SoftwareInstance: Member: Collection: Collection: Collection
Managed by Software Instance | SoftwareInstance: ManagedElement: Management: Manager: SoftwareInstance
Manages Software Instances | SoftwareInstance: Manager: Management: ManagedElement: SoftwareInstance
Manages Hosts | SoftwareInstance: Manager: Management: ManagedElement: Host | Hosts that this Software Instance manages.
Manages Host Containers | SoftwareInstance: Manager: Management: ManagedElement: HostContainer | Host Containers that this Software Instance manages.
Manages Clusters | SoftwareInstance: Manager: Management: ManagedElement: Cluster | Clusters this Software Instance manages.
Primary processes | SoftwareInstance: InferredElement: Inference: Primary: DiscoveredProcess | Discovered process from which the existence of this Software Instance was inferred.
Contributor processes | SoftwareInstance: InferredElement: Inference: Contributor: DiscoveredProcess | Discovered process from which one or more attributes of this Software Instance were inferred.
Associated processes | SoftwareInstance: InferredElement: Inference: Associate: DiscoveredProcess | Discovered process related in some way to this Software Instance.
Not displayed in UI | SoftwareInstance: InferredElement: Inference: Contributor: DiscoveredNetworkConnection | Discovered network connection from which one or more attributes of this Software Instance were inferred.
Associated network connections | SoftwareInstance: InferredElement: Inference: Associate: DiscoveredNetworkConnection | Discovered network connection related in some way to this Software Instance.
Not displayed in UI | SoftwareInstance: InferredElement: Inference: Primary: DiscoveredListeningPort | Discovered listening port from which the existence of this Software Instance was inferred.
Not displayed in UI | SoftwareInstance: InferredElement: Inference: Contributor: DiscoveredListeningPort | Discovered listening port from which one or more attributes of this Software Instance were inferred.
Associated listening ports | SoftwareInstance: InferredElement: Inference: Associate: DiscoveredListeningPort | Discovered listening port related in some way to this Software Instance.
Not displayed in UI | SoftwareInstance: InferredElement: Inference: Contributor: Package | Package from which one or more attributes of this Software Instance were inferred.
Associated packages | SoftwareInstance: InferredElement: Inference: Associate: Package | Package related in some way to this Software Instance.
Not displayed in UI | SoftwareInstance: InferredElement: Inference: Contributor: Host | Host from which one or more attributes of this Software Instance were inferred.
Not displayed in UI | SoftwareInstance: InferredElement: Inference: Associate: Host | Host related in some way to this Software Instance.
Not displayed in UI | SoftwareInstance: InferredElement: Inference: Primary: DiscoveredFile | Discovered file from which the existence of this Software Instance was inferred.
Not displayed in UI | SoftwareInstance: InferredElement: Inference: Contributor: DiscoveredFile | Discovered file from which one or more attributes of this Software Instance were inferred.
Associated files | SoftwareInstance: InferredElement: Inference: Associate: DiscoveredFile | Discovered file related in some way to this Software Instance.
Primary software | SoftwareInstance: InferredElement: Inference: Primary: DiscoveredSoftware | Discovered software from which the existence of this Software Instance was inferred.
Contributor software | SoftwareInstance: InferredElement: Inference: Contributor: DiscoveredSoftware | Discovered software from which one or more attributes of this Software Instance were inferred.
SoftwareInstance: InferredElement: Inference: Associate: DiscoveredSoftware | Discovered software related in some way to this Software Instance.
SoftwareInstance: InferredElement: Inference: Contributor: DiscoveredCommandResult | Discovered command result from which one or more attributes of this Software Instance were inferred.
SoftwareInstance: InferredElement: Inference: Associate: DiscoveredCommandResult | Discovered command result related in some way to this Software Instance.
SoftwareInstance: InferredElement: Inference: Contributor: DiscoveredRegistryValue | Discovered Windows Registry value from which one or more attributes of this Software Instance were inferred.
SoftwareInstance: InferredElement: Inference: Associate: DiscoveredRegistryValue | Discovered Windows Registry value related in some way to this Software Instance.
SoftwareInstance: InferredElement: Inference: Contributor: DiscoveredWMI | Discovered WMI query result from which one or more attributes of this Software Instance were inferred.
SoftwareInstance: InferredElement: Inference: Associate: DiscoveredWMI | Discovered WMI query result related in some way to this Software Instance.
SoftwareInstance: InferredElement: Inference: Contributor: Pattern | Pattern from which one or more attributes of this Software Instance were inferred.
SoftwareInstance: Primary: Inference: InferredElement: BusinessApplicationInstance | Application whose existence was inferred from this Software Instance.
Not displayed in UI | SoftwareInstance: Contributor: Inference: InferredElement: BusinessApplicationInstance | Application whose attributes have been partly or wholly determined from this Software Instance.
Associated Applications | SoftwareInstance: Associate: Inference: InferredElement: BusinessApplicationInstance | Application related in some way to this Software Instance.
Not displayed in UI | SoftwareInstance: Primary: Inference: InferredElement: SoftwareInstance | Software whose existence was inferred from this Software Instance.
Not displayed in UI | SoftwareInstance: Contributor: Inference: InferredElement: SoftwareInstance | Software whose attributes have been partly or wholly determined from this Software Instance.
Associated to Software | SoftwareInstance: Associate: Inference: InferredElement: SoftwareInstance | Software related in some way to this Software Instance.
Not displayed in UI | SoftwareInstance: InferredElement: Inference: Primary: SoftwareInstance | Software whose existence was inferred from this Software Instance.
Not displayed in UI | SoftwareInstance: InferredElement: Inference: Contributor: SoftwareInstance | Software whose attributes have been partly or wholly determined from this Software Instance.
Associated Software | SoftwareInstance: InferredElement: Inference: Associate: SoftwareInstance | Software related in some way to this Software Instance.
Not displayed in UI | SoftwareInstance: InferredElement: Inference: Primary: MFPart | MFPart from which the existence of this Software Instance was inferred.
Not displayed in UI | SoftwareInstance: InferredElement: Inference: Contributor: MFPart | MFPart from which one or more attributes of this Software Instance were inferred.
Creation/update
This is under the full control of patterns, so there is no default SoftwareComponent Node behavior. The key for a SoftwareComponent Node is entirely dependent on the pattern that creates it. Take extra care when constructing the key attribute, as it must be unique amongst all SoftwareComponent nodes. This uniqueness is typically achieved by including the following information in the key:
The type attribute
The parent node's key attribute
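As an illustration of the guidance above, a pattern might derive the key by concatenating the type attribute with the parent node's key. The sketch below is Python rather than TPL, and the function name and separator are hypothetical; it only shows the composition idea, not the product's actual key format.

```python
# Illustrative sketch (not TPL): build a SoftwareComponent key that is unique
# across all SoftwareComponent nodes by combining the component's type with
# the parent node's key, plus an optional distinguishing name.
def software_component_key(component_type: str, parent_key: str, name: str = "") -> str:
    """Combine the type attribute and the parent node's key into one string."""
    parts = [component_type, parent_key]
    if name:
        parts.append(name)
    return "/".join(parts)

key = software_component_key("JVM Config", "SI-1234abcd", "server1")
print(key)  # JVM Config/SI-1234abcd/server1
```

Because the parent key is itself unique, two components of the same type under different parents can never collide.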
Removal
Authoritative removal by the pattern that creates/updates the SoftwareComponent node should be considered. The pattern not only needs to create the correct SoftwareComponent structure, it also needs to maintain it as the configuration changes. Built-in removal rules remove all contained SoftwareComponent nodes if an SI/BAI/Host is removed.
If true, the Software Component is supported by a third party. Legacy attribute not currently used. Can be used by patterns if desired. Other names by which this Software Component is known. Legacy attribute not currently used. Can be used by patterns if desired.
The relationships on a SoftwareComponent Node are described in the table below.
UI name | Relationship | Description
Software Instance | SoftwareComponent: ContainedSoftware: SoftwareContainment: SoftwareContainer: SoftwareInstance | Software Instance this Software Component is running in.
Containing Applications | SoftwareComponent: ContainedSoftware: SoftwareContainment: SoftwareContainer: BusinessApplicationInstance |
 | SoftwareComponent: ContainedFunctionality: FunctionalContainment: FunctionalContainer: FunctionalComponent |
Maintaining Pattern | SoftwareComponent: Element: Maintainer: Pattern: Pattern |
 | SoftwareComponent: DependedUpon: Dependency: Dependant: SoftwareInstance |
 | SoftwareComponent: Dependant: Dependency: DependedUpon: SoftwareInstance |
 | SoftwareComponent: DependedUpon: Dependency: Dependant: SoftwareComponent |
 | SoftwareComponent: Dependant: Dependency: DependedUpon: SoftwareComponent |
 | SoftwareComponent: DependedUpon: Dependency: Dependant: Detail |
Depends On Detail | SoftwareComponent: Dependant: Dependency: DependedUpon: Detail |
Primary processes | SoftwareComponent: InferredElement: Inference: Primary: DiscoveredProcess | Discovered process from which the existence of this Software Component was inferred.
Contributor processes | SoftwareComponent: InferredElement: Inference: Contributor: DiscoveredProcess | Discovered process from which one or more attributes of this Software Component were inferred.
Associated processes | SoftwareComponent: InferredElement: Inference: Associate: DiscoveredProcess |
Not displayed in UI | SoftwareComponent: InferredElement: Inference: Contributor: DiscoveredNetworkConnection | Discovered network connection from which one or more attributes of this Software Component were inferred.
Not displayed in UI | SoftwareComponent: InferredElement: Inference: Associate: DiscoveredNetworkConnection |
Not displayed in UI | SoftwareComponent: InferredElement: Inference: Primary: DiscoveredListeningPort | Discovered listening port from which the existence of this Software Component was inferred.
Not displayed in UI | SoftwareComponent: InferredElement: Inference: Contributor: DiscoveredListeningPort | Discovered listening port from which one or more attributes of this Software Component were inferred.
 | SoftwareComponent: InferredElement: Inference: Associate: DiscoveredListeningPort |
Not displayed in UI | SoftwareComponent: InferredElement: Inference: Contributor: Package | Package from which one or more attributes of this Software Component were inferred.
Associated packages | SoftwareComponent: InferredElement: Inference: Associate: Package |
Not displayed in UI | SoftwareComponent: InferredElement: Inference: Contributor: Host | Host from which one or more attributes of this Software Component were inferred.
Not displayed in UI | SoftwareComponent: InferredElement: Inference: Associate: Host |
Not displayed in UI | SoftwareComponent: InferredElement: Inference: Primary: DiscoveredFile | Discovered file from which the existence of this Software Component was inferred.
Not displayed in UI | SoftwareComponent: InferredElement: Inference: Contributor: DiscoveredFile | Discovered file from which one or more attributes of this Software Component were inferred.
Associated files | SoftwareComponent: InferredElement: Inference: Associate: DiscoveredFile |
Not displayed in UI | SoftwareComponent: InferredElement: Inference: Contributor: DiscoveredCommandResult | Discovered command result from which one or more attributes of this Software Component were inferred.
Not displayed in UI | SoftwareComponent: InferredElement: Inference: Associate: DiscoveredCommandResult |
Not displayed in UI | SoftwareComponent: InferredElement: Inference: Contributor: DiscoveredRegistryValue | Discovered Windows Registry value from which one or more attributes of this Software Component were inferred.
 | SoftwareComponent: InferredElement: Inference: Associate: DiscoveredRegistryValue | Discovered Windows Registry value related in some way to this Software Component.
Not displayed in UI | SoftwareComponent: InferredElement: Inference: Contributor: DiscoveredWMI | Discovered WMI query result from which one or more attributes of this Software Component were inferred.
 | SoftwareComponent: InferredElement: Inference: Associate: DiscoveredWMI | Discovered WMI query result related in some way to this Software Component.
Not displayed in UI | SoftwareComponent: InferredElement: Inference: Contributor: Pattern | Pattern from which one or more attributes of this Software Component were inferred.
Not displayed in UI | SoftwareComponent: Primary: Inference: InferredElement: BusinessApplicationInstance |
Not displayed in UI | SoftwareComponent: Contributor: Inference: InferredElement: BusinessApplicationInstance | Application whose attributes have been partly or wholly determined from this Software Component.
Associated Applications | SoftwareComponent: Associate: Inference: InferredElement: BusinessApplicationInstance |
Not displayed in UI | SoftwareComponent: Primary: Inference: InferredElement: SoftwareInstance |
Not displayed in UI | SoftwareComponent: Contributor: Inference: InferredElement: SoftwareInstance | Software whose attributes have been partly or wholly determined from this Software Component.
Associated to Software | SoftwareComponent: Associate: Inference: InferredElement: SoftwareInstance |
Not displayed in UI | SoftwareComponent: InferredElement: Inference: Primary: SoftwareInstance |
Not displayed in UI | SoftwareComponent: InferredElement: Inference: Contributor: SoftwareInstance | Software whose attributes have been partly or wholly determined from this Software Component.
Associated Software | SoftwareComponent: InferredElement: Inference: Associate: SoftwareInstance |
Creation/update
When a pattern declares the existence of a Business Application Instance Node, it must provide a key for it. If the key matches the key of an existing BAI Node, the node is updated, otherwise a new BAI Node is created.
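The key-matching behaviour described above is essentially a keyed upsert. The following Python sketch simulates it with a dictionary standing in for the datastore; the function name and key format are hypothetical, not a product API.

```python
# Illustrative sketch (not TPL): create-or-update of a BAI node by key.
# The datastore is simulated with a dict mapping key -> attribute dict.
datastore = {}

def declare_business_application_instance(key: str, attrs: dict) -> str:
    """If a BAI node with this key exists, update it; otherwise create it."""
    if key in datastore:
        datastore[key].update(attrs)   # matching key: update the existing node
        return "updated"
    datastore[key] = dict(attrs)       # no match: create a new BAI node
    return "created"

declare_business_application_instance("bai/payroll/prod", {"name": "Payroll"})
declare_business_application_instance("bai/payroll/prod", {"version": "2.1"})
```

Declaring the same key twice leaves a single node carrying the merged attributes, which is why a stable key is essential.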
Removal
Business Application Instance Node patterns normally trigger on the creation or modification of Software Instance or Business Application Instance Nodes. In this case, the Business Application Instance Nodes are removed using the Cascade removal type; when the triggering Software Instance Node or Business Application Instance Node is destroyed, the destruction is cascaded to the Business Application Instance. See Cascade Removal.
If a Business Application Instance Node is created by a pattern triggered on a node kind other than a Software Instance Node or Business Application Instance Node, BMC Atrium Discovery has no automatic removal behavior. Patterns must be used to explicitly destroy any such Business Application Instance Node.
If true, the Business Application is supported by a third party. Legacy attribute not currently used. Can be used by patterns if desired. Other names by which this Business Application is known. Legacy attribute not currently used. Can be used by patterns if desired.
Software Components
Hosts
MFParts
Details
Collections
Not displayed in UI
Not displayed in UI
Software instance from which the existence of this application was inferred.
Not displayed in UI
Software instance from which one or more attributes of this application were inferred.
Associated Software
Not displayed in UI
Application instance from which the existence of this application was inferred.
Not displayed in UI
Application instance from which one or more attributes of this application were inferred.
Associated Applications
Not displayed in UI
Not displayed in UI
Associated to Applications
Creation/update
Functional Component Nodes are created and updated in exactly the same way as, and in parallel with, their related BAI Nodes.
Removal
Functional Component Nodes are removed in exactly the same way as, and in parallel with, their related BAI Nodes.
Database Detail
Software Component
Software Instance
Maintaining Pattern
Mainframe Node
A Mainframe Node represents a mainframe computer Central Electronic Complex (CEC) or Central Processing Complex (CPC).
Creation/update
Mainframe Nodes are created and updated in exactly the same way as Host Nodes.
Removal
Mainframe Nodes are removed in exactly the same way as Host Nodes.
The name that the Mainframe is known by.
Globally unique key.
Description of the Mainframe. Legacy attribute not currently used. Can be used by patterns if desired.
Model of mainframe.
Vendor of mainframe.
Serial id.
Number of processors.
Available central processors.
URL for information about the Mainframe. Legacy attribute not currently used. Can be used by patterns if desired.
If true, the Mainframe is critical to operation of the business. Legacy attribute not currently used. Can be used by patterns if desired.
If true, the Mainframe is supported by a third party. Legacy attribute not currently used. Can be used by patterns if desired.
Other names by which this Mainframe is known. Legacy attribute not currently used. Can be used by patterns if desired.
The number of consecutive successful (positive) or failed (negative) times the Mainframe has been seen during host scans.
The time at which a scan was last successfully associated with this Mainframe.
The time at which a scan associated with this Mainframe failed.
Discovery Access
Not displayed in UI | Mainframe: Previous: EndpointIdentity: Next: Mainframe
Not displayed in UI | Mainframe: Next: EndpointIdentity: Previous: Printer
Not displayed in UI | Mainframe: Previous: EndpointIdentity: Next: Printer
Not displayed in UI | Mainframe: AttachmentContainer: Attachment: Attachment: Attachment
Status | Mainframe: ElementInCategory: ElementCategory: Category: LifecycleStatus
Recovery Time | Mainframe: ElementInCategory: ElementCategory: Category: RecoveryTime
Organizational Unit | Mainframe: OwnedItem: Ownership: Owner: OrganisationalUnit
Support Manager | Mainframe: OwnedItem: Ownership: SupportOwner: Person
Business Owner | Mainframe: OwnedItem: Ownership: BusinessOwner: Person
IT Owner | Mainframe: OwnedItem: Ownership: ITOwner: Person
MFPart Node
An MFPart Node represents a mainframe operating system instance (OSI), either a native LPAR running zOS, or a zVM-supported VM running zOS.
MFPart Nodes are created and updated in exactly the same way as Host Nodes.
Removal
MFPart Nodes are removed in exactly the same way as Host Nodes. An MFPart Node is also destroyed when its parent mainframe is destroyed. This is a Containment Removal type because an MFPart cannot exist without its mainframe host. See Containment Removal. For information about the destruction of Host Nodes, see Host Node lifecycle.
Name of mainframe part.
Description of mainframe part.
Model of mainframe part.
Vendor of mainframe part.
Operating system.
Operating system type.
Operating system class.
Operating system version.
Operating system name.
Flag if the MFPart is virtual.
Flag if the MFPart is a partition.
The virtual machine ID.
Mainframe part name.
Mainframe part type. This attribute is deprecated, use type instead.
Mainframe part type.
The number of consecutive successful (positive) or failed (negative) accesses, from any endpoint.
The time at which a scan was last successfully associated with this MFPart.
UI Name | Attribute Name and Type | Description
Last successful CMDB sync | last_cmdb_sync_success date | The time at which this MFPart was last successfully synchronized into the CMDB.
Last failed CMDB sync | last_cmdb_sync_failure date | The time at which an attempt to synchronize this MFPart into the CMDB failed.
CMDB sync duration | last_cmdb_sync_duration float | The time in seconds spent performing the last CMDB synchronization of this MFPart.
CMDB CI count | last_cmdb_sync_ci_count int | The number of CIs corresponding to this MFPart at the last CMDB synchronization.
CMDB Relationship count | last_cmdb_sync_rel_count int | The number of relationships between CIs corresponding to this MFPart at the last CMDB synchronization.
Hosted Applications
Mainframe
Sysplex
CouplingFacility
Storage Collection
Storage | MFPart: Host: Storage: Storage: Storage
Containing VM | MFPart: ContainedHost: HostContainment: HostContainer: SoftwareInstance
Discovery Access | MFPart: InferredElement: Inference: Associate: DiscoveryAccess
Not displayed in UI | MFPart: InferredElement: AccessFailure: DiscoveryAccess: DiscoveryAccess
Not displayed in UI | MFPart: InferredElement: Inference: Primary: DiscoveredMFPart
Not displayed in UI | MFPart: AttachmentContainer: Attachment: Attachment: Attachment
Status | MFPart: ElementInCategory: ElementCategory: Category: LifecycleStatus
Recovery Time | MFPart: ElementInCategory: ElementCategory: Category: RecoveryTime
Organizational Unit | MFPart: OwnedItem: Ownership: Owner: OrganisationalUnit
Support Manager | MFPart: OwnedItem: Ownership: SupportOwner: Person
Business Owner | MFPart: OwnedItem: Ownership: BusinessOwner: Person
IT Owner | MFPart: OwnedItem: Ownership: ITOwner: Person
A CouplingFacility Node represents the mainframe computer's coupling facility, the component of the mainframe that enables multiple LPARs to access the same data.
Removal
CouplingFacility Nodes are removed when no relationships to MFPart nodes remain. A CouplingFacility Node is also destroyed when its parent mainframe is destroyed. This is a Containment Removal type because a CouplingFacility cannot exist without its mainframe host. See Containment Removal. For information about the destruction of Host Nodes, see Host Node lifecycle.
The name that the Coupling Facility is known by.
Globally unique key.
Description of the Coupling Facility. Legacy attribute not currently used. Can be used by patterns if desired.
URL for information about the Coupling Facility. Legacy attribute not currently used. Can be used by patterns if desired.
If true, the Coupling Facility is critical to operation of the business. Legacy attribute not currently used. Can be used by patterns if desired.
If true, the Coupling Facility is supported by a third party. Legacy attribute not currently used. Can be used by patterns if desired.
Other names by which this Coupling Facility is known. Legacy attribute not currently used. Can be used by patterns if desired.
Internal Storage capacity (bytes).
Resource Manager Policy.
Node descriptor.
Sysplex | CouplingFacility: CouplingFacility: HostContainment: HostContainer: Cluster
Member MFParts | CouplingFacility: CouplingFacility: CouplingFacility: Host: MFPart
Not displayed in UI | CouplingFacility: AttachmentContainer: Attachment: Attachment: Attachment
Status | CouplingFacility: ElementInCategory: ElementCategory: Category: LifecycleStatus
Recovery Time | CouplingFacility: ElementInCategory: ElementCategory: Category: RecoveryTime
Organizational Unit | CouplingFacility: OwnedItem: Ownership: Owner: OrganisationalUnit
Support Manager | CouplingFacility: OwnedItem: Ownership: SupportOwner: Person
Business Owner | CouplingFacility: OwnedItem: Ownership: BusinessOwner: Person
IT Owner | CouplingFacility: OwnedItem: Ownership: ITOwner: Person
Cluster Node
A Cluster Node represents a group of hosts collaborating to form a cluster. The Cluster Node represents the group rather than a single physical entity. Cluster Nodes are also used to represent mainframe sysplexes. They are under the full control of patterns. For detailed information about patterns, see The Pattern Language (TPL) and Configipedia.
Cluster lifecycle
The following section describes the scenarios in which a Cluster Node is created, updated or removed.
Creation
The creation of a Cluster Node is under the full control of patterns. Clusters are created when a pattern searching for them detects one on a host. The pattern contains the conditions that enable it to detect a cluster. Once a cluster is found, the pattern creates the Cluster Node it needs; a relationship to the Host Node is created automatically and a key is generated for the Cluster.
The generated key for a Cluster Node is entirely dependent on the kind of Cluster.
Update
The update procedure for a Cluster is also under full control of patterns. When the pattern has identified a cluster, it calculates a key for the Cluster. If a Cluster with that key already exists, the Cluster is updated accordingly.
Removal
The default removal of a Cluster is not under the control of patterns. A Cluster Node is removed when the last remaining Host Node or MFPart is unlinked. Some patterns do not create a cluster until its member hosts are detected; in these cases, the pattern removes the Cluster Node when only one host remains linked to it. This is a Cascade Removal type, see Cascade Removal.
Cluster attributes
The attributes and relationships on a Cluster Node are described in the tables below.
UI Name | Attribute Name and Type | Description
Type | type string | Type of cluster.
ID | id string | Internal Identifier of cluster.
General Details
Name | name string | The name that the cluster is known by.
Not displayed in UI | key string | Globally unique key.
Description | description string | Description of the cluster. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI | url string | URL for information about the cluster. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI | business_continuity_critical boolean | If true, the cluster is critical to operation of the business. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI | third_party_support boolean | If true, the cluster is supported by a third party. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI | synonyms list:string | Other names by which this cluster is known. Legacy attribute not currently used. Can be used by patterns if desired.
Cluster relationships
The relationships on a Cluster Node are described in the table below.
UI name | Relationship | Description
Member Hosts | Cluster: HostContainer: HostContainment: ContainedHost: Host | Logical hosts that are part of this logical cluster.
Member MFParts | Cluster: HostContainer: HostContainment: ContainedHost: MFPart
 | Cluster: HostContainer: HostContainment: CouplingFacility: CouplingFacility
Service Software | Cluster: Service: SoftwareService: ServiceProvider: SoftwareInstance
Dependant Software | Cluster: DependedUpon: Dependency: Dependant: SoftwareInstance
Maintaining Pattern | Cluster: Element: Maintainer: Pattern: Pattern
Managing Software | Cluster: ManagedElement: Management: Manager: SoftwareInstance
Not displayed in UI | Cluster: AttachmentContainer: Attachment: Attachment: Attachment
Status | Cluster: ElementInCategory: ElementCategory: Category: LifecycleStatus
Recovery Time | Cluster: ElementInCategory: ElementCategory: Category: RecoveryTime
Organizational Unit | Cluster: OwnedItem: Ownership: Owner: OrganisationalUnit
Support Manager | Cluster: OwnedItem: Ownership: SupportOwner: Person
Business Owner | Cluster: OwnedItem: Ownership: BusinessOwner: Person
IT Owner | Cluster: OwnedItem: Ownership: ITOwner: Person
File Node
A File Node is most often used to track a configuration file. The files that are tracked for each operating system are specified by patterns. File Nodes are under the full control of patterns. For detailed information about patterns, see The Pattern Language (TPL) and Configipedia.
Creation/update
This is under the full control of patterns and as a result there is no default File Node behavior. The generated key for a File Node is entirely dependent on the pattern that creates the File Node.
Removal
A File Node can be destroyed in one of two ways:
Explicitly, by the pattern that created it. See Removal in the Pattern Language guide.
If the File Node is triggered from another inferred node, it is destroyed when that inferred node is destroyed. This is a Cascade Removal type, see Cascade Removal.
File Attributes
The attributes and relationships on a File Node are described in the tables below.
UI Name | Attribute Name and Type | Description
Path | path string | The absolute path to the tracked configuration file.
Size | size int | The size (in bytes) of the tracked configuration file.
MD5 Checksum | md5sum string | The md5 checksum of the file.
Last Modified | last_modified string | The time and date of the last modification to the tracked configuration file. This date and time is taken from the file's timestamp on the host machine.
Content | content string | The raw contents of the tracked configuration file. This attribute is populated by a cat command on UNIX or a type command on Windows. Binary files can be modelled, but their contents will not be included.
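The cat/type mechanism described above can be pictured with a small local simulation. This Python sketch is only an illustration of the idea (running the platform-appropriate command and capturing its standard output), not the actual discovery implementation.

```python
# Illustrative sketch: capture a file's content the way the description above
# suggests, using `cat` on UNIX and `type` (via cmd.exe) on Windows.
# This is a local simulation, not the BMC Atrium Discovery mechanism.
import subprocess
import sys

def read_file_content(path: str, windows: bool = (sys.platform == "win32")) -> str:
    if windows:
        cmd = ["cmd", "/c", "type", path]  # Windows: type <file>
    else:
        cmd = ["cat", path]                # UNIX: cat <file>
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout
```

For binary files the captured output would not be meaningful text, which is why the content attribute excludes binary contents.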
File Relationships
The relationships on a File Node are described in the table below. UI name Relationship Description
Host | File: HostedFile: HostedFile: Host: Host
Software Instances | File: File: RelatedFile: ElementUsingFile: SoftwareInstance
Depended Upon By | File: DependedUpon: Dependency: Dependant: Detail
Maintaining Pattern | File: Element: Maintainer: Pattern: Pattern
Currently File System Nodes are not synced to CMDB by the default syncmapping sets. This can be extended.
FileSystem nodes come in three types:
LOCAL: A filesystem that is LOCAL to the device, for instance a directly attached disk drive. SAN storage appears to the operating system as LUNs on a SCSI chain, so filesystems on these devices are reported as LOCAL.
REMOTE: A filesystem that is attached over the network from another device. This is the client side of a network filesystem.
EXPORTED: A filesystem (or part of a filesystem) which is available for other devices to use. This is the server side of a network filesystem.
An important aspect of this model is that the REMOTE type is the client's view of an EXPORTED filesystem. If both server and client ends are discovered, there is a single EXPORTED FileSystem node attached to the server Host node and multiple REMOTE FileSystem instances, one attached to each client Host node.
Network file systems generally export a fraction of an underlying LOCAL filesystem. On a server exporting filesystems, the LOCAL and EXPORTED FileSystem instances are therefore expected to be related by the FileSystem:Local:NetworkFileSystem:Exported:FileSystem relationship, showing which LOCAL filesystem supports the EXPORTED filesystem. Note that there may be several EXPORTED FileSystem nodes related to the same LOCAL FileSystem node; for instance, on a Windows server the two shares C$ and ADMIN$ will usually both be exported from the same C: local filesystem.
The above diagram shows a client (right side) and server (left side) Host. The server may be a UNIX server exporting a section of a local filesystem via NFS, or a Windows server exporting a shared folder via SMB. For each client mounting the NFS/SMB filesystem, the filesystem is represented by a REMOTE FileSystem.
It is important to remember this distinction if you are considering synchronizing these nodes to the CMDB. The BMC Atrium Discovery model and the Common Data Model (CDM) have very different semantics around "remote" filesystems. Currently it is recommended to simply synchronize these to BMC_FileSystem.
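The LOCAL/EXPORTED/REMOTE shape can be sketched in code: one EXPORTED node on the server backed by a LOCAL filesystem, and one REMOTE node per client that mounts it. The class and field names below are assumptions for illustration, not the product's data model API.

```python
# Minimal sketch (assumed names, not the product schema) of the FileSystem
# relationships: EXPORTED is backed by a LOCAL filesystem on the server;
# each client's mount is a separate REMOTE node pointing at the EXPORTED one.
from dataclasses import dataclass

@dataclass
class FileSystem:
    kind: str                                  # "LOCAL", "EXPORTED" or "REMOTE"
    host: str                                  # host the node is attached to
    name: str
    exported_from: "FileSystem | None" = None  # EXPORTED -> backing LOCAL
    remote_of: "FileSystem | None" = None      # REMOTE -> server's EXPORTED

local = FileSystem("LOCAL", "server1", "C:")
share = FileSystem("EXPORTED", "server1", "C$", exported_from=local)
mounts = [FileSystem("REMOTE", h, r"\\server1\C$", remote_of=share)
          for h in ("client1", "client2")]
```

Note there is one EXPORTED node however many clients mount it; only the REMOTE nodes multiply.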
Host
FileSystem: MountedFileSystem: FileSystemMount: Mounter: Host Exported: NetworkFileSystem: Remote: FileSystem Remote: NetworkFileSystem: Exported: FileSystem Exported: NetworkFileSystem: Local: FileSystem Local: NetworkFileSystem: Exported: FileSystem InferredElement: Inference: Primary: DiscoveredFileSystem
Mounted As
Mounted From
Exported From
Exported Directories
Inference
Primary inference.
Package node
A Package node represents a software package in your environment and is linked to all Host Nodes which contain it.
Creation
Package nodes are created when they are first detected on a host. The generated key of a Package node consists of the following critical properties:
name
version
revision
epoch
architecture (for example i386, x86_64)
operating system information
Package nodes are shared by all Host Nodes that have the same package installed, with the same architecture, and that are running the same operating system.
Update
A Package node is never updated because its key consists of the critical properties described above, and a change to any of these properties will result in a new Package node being created.
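The immutable identity described above can be sketched as a tuple of the critical properties: the same tuple means the same shared Package node, while changing any property yields a different key and therefore a new node. The helper name and example values below are hypothetical illustrations, not a product API.

```python
# Illustrative sketch (hypothetical helper): a Package node's identity is
# the tuple of its critical properties, so changing any one of them
# produces a different key, and hence a new Package node.
def package_key(name, version, revision, epoch, arch, os_info):
    return (name, version, revision, epoch, arch, os_info)

k1 = package_key("openssl", "1.0.2k", "19", 1, "x86_64", "Linux")
k2 = package_key("openssl", "1.0.2k", "21", 1, "x86_64", "Linux")

# Identical properties -> identical key -> hosts share one Package node.
assert k1 == package_key("openssl", "1.0.2k", "19", 1, "x86_64", "Linux")
# A revision bump -> a different key -> a new Package node is created.
assert k1 != k2
```

This is why Package nodes are never updated in place: any change to a critical property is, by construction, a different node.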
Removal
Package nodes are never removed.
The attributes and relationships of a Package node are described in the table below.
UI Name | Attribute Name and Type | Description
Name | name string | Name.
Vendor | vendor string | Package vendor.
Version | version string | Package version.
Epoch | epoch int | Package epoch.
Revision | revision string | Package revision.
Operating System | os string | Operating System this Package is for. Derived from the os_type attribute of the Host Nodes it is installed on.
Not displayed in UI | key string | Unique key.
Architecture | arch string | CPU type Package runs under.
Description | description string | Package description.
Solaris Full Name | pkgname string |
Not displayed in UI
Software whose attributes have been partly or wholly determined from this package.
Patch Node
A Patch Node represents a software patch in your network environment and is linked to any Host Node which contains it.
You will only find Patch Nodes linked to operating systems that recognize patches. Currently only three operating systems that BMC Atrium Discovery models have the concept of a patch: Solaris, HP-UX, and Windows.
Creation
Patch Nodes are created when they are first detected on a host. The key of a Patch Node consists of two critical properties: name and operating system information. Therefore, Patch Nodes are shared by all Host Nodes with the same patch installed.
Update
A Patch Node is never updated because its key consists of two critical properties and a change to either of them will result in a new Patch Node being created.
Removal
Patch Nodes are never removed.
A Fiber Channel Host Bus Adaptor (HBA) is a controller card that connects a host to a storage area network (SAN). The controller card contains logical devices (FCNs) that process data for transmission. Each FCN has a number of ports, which are the hardware attachments that allow the node to send and receive information. The following nodes are used to model connections to a SAN:
HBA Node: models the Host Bus Adaptor card
FCN Node: models the logical devices (FCNs)
FCP Node: models the ports
These nodes all have the same lifecycle; that is, they are created or removed at the same time, as the result of the same discovery data. The same data also updates the nodes, though not all nodes may change at the same time. The lifecycle for all three node types is shown in the following sections, and the attributes and links are shown for each node type individually. The three node types are referred to collectively as Fiber Channel Nodes. BMC Atrium Discovery currently models QLogic and Emulex HBA cards.
HBA Node
An HBA Node represents a Fiber Channel Host Bus Adaptor, the controller card that connects a host, usually via an optical connection, to a storage area network (SAN). The HBA Node has one or more FCN Node children, which in turn have one or more FCP Node children.
FCN Node
The FCN Node represents the logical devices that process data for transmission. The logical devices are called FCNs and each has a unique World Wide Node Number (WWNN). FCNs have firmware on board which may be updated frequently. The firmware version may differ between nodes on the same HBA card.
FCP Node
FCN Nodes have one or more ports to provide a physical interface to communicate. The port is a hardware attachment that allows the node to send and receive information via the physical interface. The ports are called FCPs and each has a unique World Wide Port Number (WWPN).
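The HBA/FCN/FCP containment hierarchy can be sketched as nested records: each FCN carries a WWNN and firmware version, and each FCP a WWPN. The sketch reuses example values from the attribute table below (hba_id, vendor, firmware); the WWNN/WWPN values and class names are hypothetical, not the product schema.

```python
# Minimal sketch (assumed names) of the HBA -> FCN -> FCP hierarchy:
# each FCN has a unique WWNN, each FCP a unique WWPN.
from dataclasses import dataclass, field

@dataclass
class FCP:
    wwpn: str                               # World Wide Port Number

@dataclass
class FCN:
    wwnn: str                               # World Wide Node Number
    firmware: str                           # may differ between FCNs on one HBA
    ports: list[FCP] = field(default_factory=list)

@dataclass
class HBA:
    hba_id: str
    vendor: str
    nodes: list[FCN] = field(default_factory=list)

# hba_id, vendor and firmware examples are taken from the attribute table;
# the WWNN/WWPN values are invented for illustration.
hba = HBA("20:00:00:00:c9:38:dc", "Emulex", [
    FCN("20:00:00:00:c9:38:dc:01", "TS1.91A0",
        [FCP("21:00:00:00:c9:38:dc:01")]),
])
```

Because the three node types share a lifecycle, creating or destroying the HBA record implies creating or destroying its FCN and FCP children with it.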
Creation
The Fiber Channel Nodes are created in the datastore when a host is scanned with sufficient credentials to run the HBA discovery commands: hbainfo on UNIX, and lputil and hbacmd on Windows. New HBA Nodes and their children are only created if an HBA Node with the same name (hba_id) does not already exist on that Host Node.
Update
The Fiber Channel Nodes are updated in the datastore when a host is scanned with sufficient credentials to run the HBA discovery commands.
Removal
The Fiber Channel Nodes are not automatically removed by default. They are only removed if their parent Host Node is destroyed. This is a Containment Removal type, see Containment Removal.
UI Name | Attribute Name and Type | Description
HBA Identifier | hba_id string | The HBA identifier. For example 20:00:00:00:c9:38:dc.
Vendor | vendor string | The HBA vendor. For example, Emulex.
Firmware | firmware list:string | A list of firmware versions present on FCNs on this HBA. For example TS1.91A0. FCNs on the same HBA may have different firmware versions.
Model | model string | Model name.
Description | description string | Model description.
Serial | serial string | Serial number.
Driver Version | driver_version string | Driver version.
Optrom Bios Version | optrom_bios_version string |
Optrom Efi Version | optrom_efi_version string |
Optrom Fcode Version | optrom_fcode_version string |
Optrom Fw Version | optrom_fw_version string | Optrom fw version.
Option ROM Version | option_rom_version string |
Driver Name | driver_name string |
Boot Bios | boot_bios string |
AIX Part Number | aix_part_number string |
AIX Manufacturer Code | aix_manufacturer_code string |
Discovered HBA
Primary inference.
Discovered HBA
Primary inference.
Port Speed (port_speed, string)
Supported Speeds (supported_speeds, string)
Fabric Name (fabric_name, string)
Supported Classes (supported_classes, string)
Discovered HBA
Primary inference.
There is an option on the Edge Switch import tool to destroy any existing Network Device Nodes that are not included in the new import.
Not displayed in UI (last_update_success, date): The time at which a scan was last successfully associated with this Network Device.
Not displayed in UI (last_update_failure, date): The time at which a scan associated with this Network Device failed.
Last successful CMDB sync (last_cmdb_sync_success, date): The time at which this NetworkDevice was last successfully synchronized into the CMDB.
Last failed CMDB sync (last_cmdb_sync_failure, date): The time at which an attempt to synchronize this NetworkDevice into the CMDB failed.
CMDB sync duration (last_cmdb_sync_duration, float): The time in seconds spent performing the last CMDB synchronization of this NetworkDevice.
CMDB CI count (last_cmdb_sync_ci_count, int): The number of CIs corresponding to this NetworkDevice at the last CMDB synchronization.
CMDB Relationship count (last_cmdb_sync_rel_count, int): The number of relationships between CIs corresponding to this NetworkDevice at the last CMDB synchronization.
Key (key, string): Globally unique key.
Name (name, string): Primary name.
Description (description, string): Description of the element.
URL (url, string): URL for information about the element.
Business Continuity Critical (business_continuity_critical, boolean): If true, element is critical to operation of the business.
Supported by 3rd Party (third_party_support, boolean): True if the element is supported by a third party.
Synonyms (synonyms, list:string): Other names for the element.
Not displayed in UI
NetworkDevice: InferredElement: Inference: Primary: DeviceInfo NetworkDevice: Next: EndpointIdentity: Previous: NetworkDevice NetworkDevice: Previous: EndpointIdentity: Next: NetworkDevice NetworkDevice: Next: EndpointIdentity: Previous: Host NetworkDevice: Previous: EndpointIdentity: Next: Host NetworkDevice: Next: EndpointIdentity: Previous: Mainframe NetworkDevice: Previous: EndpointIdentity: Next: Mainframe NetworkDevice: Next: EndpointIdentity: Previous: Printer NetworkDevice: Previous: EndpointIdentity: Next: Printer NetworkDevice: Next: EndpointIdentity: Previous: SNMPManagedDevice NetworkDevice: Previous: EndpointIdentity: Next: SNMPManagedDevice NetworkDevice: DeviceWithInterface: DeviceInterface: InterfaceOfDevice: NetworkInterface
Network Interfaces
IPv4 Addresses (NetworkDevice: DeviceWithAddress: DeviceAddress: IPv4Address: IPAddress)
IPv6 Addresses (NetworkDevice: DeviceWithAddress: DeviceAddress: IPv6Address: IPAddress)
Subnets (NetworkDevice: DeviceOnSubnet: DeviceSubnet: Subnet: Subnet)
Not displayed in UI (NetworkDevice: AttachmentContainer: Attachment: Attachment: Attachment)
Status (NetworkDevice: ElementInCategory: ElementCategory: Category: LifecycleStatus)
Recovery Time (NetworkDevice: ElementInCategory: ElementCategory: Category: RecoveryTime)
Family (NetworkDevice: ElementInCategory: ElementCategory: Category: Family)
Organizational Unit (NetworkDevice: OwnedItem: Ownership: Owner: OrganisationalUnit)
Location (NetworkDevice: ElementInLocation: Location: Location: Location)
Support Manager (NetworkDevice: OwnedItem: Ownership: SupportOwner: Person)
Business Owner (NetworkDevice: OwnedItem: Ownership: BusinessOwner: Person): The person responsible for this element from a business perspective.
IT Owner (NetworkDevice: OwnedItem: Ownership: ITOwner: Person)
Printer Node
Copyright 2003-2013, BMC Software
Creation/update
Printer Nodes are created and updated in exactly the same way as Network Device Nodes.
Removal
Printer Nodes are removed in exactly the same way as Network Device Nodes.
Name (name, string): Name of printer.
Type (type, string): Type of printer. This attribute was called device_type in BMC Atrium Discovery releases prior to version 9.0. The type attribute should now be used.
Not displayed in UI (key, string): Globally unique key.
Model (model, string): Model of printer.
Vendor (vendor, string): Vendor of printer.
System Name (sysname, string): System name.
System Object ID (sysobjectid, string): System object id.
Serial ID (serial, string): Serial id.
Discovered OS Type (os_type, string): Operating system type.
Discovered OS Class (os_class, string): Operating system class.
Discovered OS Version (os_version, string): Operating system version.
Discovered OS Vendor (os_vendor, string): Operating system vendor.
Not displayed in UI (__all_ip_addrs, list:string): Internal attribute to aid searching Devices by IP address.
Not displayed in UI (__all_dns_names, list:string): Internal attribute to aid searching Devices by name.
Not displayed in UI (age_count, int): The number of consecutive successful (positive) or failed (negative) accesses, from any endpoint.
Not displayed in UI (last_update_success, date): The time at which a scan was last successfully associated with this Printer.
Not displayed in UI (last_update_failure, date): The time at which a scan associated with this Printer failed.
Last successful CMDB sync (last_cmdb_sync_success, date): The time at which this Printer was last successfully synchronized into the CMDB.
Last failed CMDB sync (last_cmdb_sync_failure, date): The time at which an attempt to synchronize this Printer into the CMDB failed.
CMDB sync duration (last_cmdb_sync_duration, float): The time in seconds spent performing the last CMDB synchronization of this Printer.
CMDB CI count (last_cmdb_sync_ci_count, int): The number of CIs corresponding to this Printer at the last CMDB synchronization.
CMDB Relationship count (last_cmdb_sync_rel_count, int): The number of relationships between CIs corresponding to this Printer at the last CMDB synchronization.
Description (description, string): Description of the Printer. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI (url, string): URL for information about the Printer. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI (business_continuity_critical, boolean): If true, the Printer is critical to operation of the business. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI (third_party_support, boolean): If true, the Printer is supported by a third party. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI (synonyms, string): Other names by which this Printer is known. Legacy attribute not currently used. Can be used by patterns if desired.
Printer: Previous: EndpointIdentity: Next: NetworkDevice Printer: Next: EndpointIdentity: Previous: Host Printer: Previous: EndpointIdentity: Next: Host Printer: Next: EndpointIdentity: Previous: Mainframe Printer: Previous: EndpointIdentity: Next: Mainframe Printer: Next: EndpointIdentity: Previous: Printer Printer: Previous: EndpointIdentity: Next: Printer Printer: Next: EndpointIdentity: Previous: SNMPManagedDevice Printer: Previous: EndpointIdentity: Next: SNMPManagedDevice Printer: DeviceWithInterface: DeviceInterface: InterfaceOfDevice: NetworkInterface
Network Interfaces
IPv4 Addresses (Printer: DeviceWithAddress: DeviceAddress: IPv4Address: IPAddress)
IPv6 Addresses (Printer: DeviceWithAddress: DeviceAddress: IPv6Address: IPAddress)
Not displayed in UI
Status (Printer: ElementInCategory: ElementCategory: Category: LifecycleStatus)
Recovery Time (Printer: ElementInCategory: ElementCategory: Category: RecoveryTime)
Organizational Unit (Printer: OwnedItem: Ownership: Owner: OrganisationalUnit)
Support Manager (Printer: OwnedItem: Ownership: SupportOwner: Person)
Business Owner (Printer: OwnedItem: Ownership: BusinessOwner: Person)
IT Owner (Printer: OwnedItem: Ownership: ITOwner: Person)
Creation/update
SNMP Managed Device Nodes are created and updated in exactly the same way as Network Device Nodes.
Removal
SNMP Managed Device Nodes are removed in exactly the same way as Network Device Nodes.
Name (name, string): Name of device.
Type (type, string): Type of device.
Not displayed in UI (key, string): Globally unique key.
Model (model, string): Model of device.
Vendor (vendor, string): Vendor of device.
System Name (sysname, string): System name.
System Object ID (sysobjectid, string): System object id.
Serial ID (serial, string): Serial id.
Discovered OS Type (os_type, string): Operating system type.
Discovered OS Class (os_class, string): Operating system class.
Discovered OS Version (os_version, string): Operating system version.
Discovered OS Vendor (os_vendor, string): Operating system vendor.
Not displayed in UI (__all_ip_addrs, list:string): Internal attribute to aid searching Devices by IP address.
Not displayed in UI (__all_dns_names, list:string): Internal attribute to aid searching Devices by name.
Not displayed in UI (age_count, int): The number of consecutive successful (positive) or failed (negative) accesses, from any endpoint.
Not displayed in UI (last_update_success, date): The time at which a scan was last successfully associated with this device.
Not displayed in UI (last_update_failure, date): The time at which a scan associated with this device failed.
Last successful CMDB sync (last_cmdb_sync_success, date): The time at which this device was last successfully synchronized into the CMDB.
Last failed CMDB sync (last_cmdb_sync_failure, date): The time at which an attempt to synchronize this device into the CMDB failed.
CMDB sync duration (last_cmdb_sync_duration, float): The time in seconds spent performing the last CMDB synchronization of this device.
CMDB CI count (last_cmdb_sync_ci_count, int): The number of CIs corresponding to this device at the last CMDB synchronization.
CMDB Relationship count (last_cmdb_sync_rel_count, int): The number of relationships between CIs corresponding to this device at the last CMDB synchronization.
Description (description, string): Description of the device. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI (url, string): URL for information about the device. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI (business_continuity_critical, boolean): If true, the device is critical to operation of the business. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI (third_party_support, boolean): If true, the device is supported by a third party. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI (synonyms, string): Other names by which this device is known. Legacy attribute not currently used. Can be used by patterns if desired.
SNMPManagedDevice: Previous: EndpointIdentity: Next: Mainframe SNMPManagedDevice: Next: EndpointIdentity: Previous: Printer SNMPManagedDevice: Previous: EndpointIdentity: Next: Printer SNMPManagedDevice: Next: EndpointIdentity: Previous: SNMPManagedDevice SNMPManagedDevice: Previous: EndpointIdentity: Next: SNMPManagedDevice SNMPManagedDevice: DeviceWithInterface: DeviceInterface: InterfaceOfDevice: NetworkInterface
Network Interfaces
IPv4 Addresses (SNMPManagedDevice: DeviceWithAddress: DeviceAddress: IPv4Address: IPAddress)
IPv6 Addresses (SNMPManagedDevice: DeviceWithAddress: DeviceAddress: IPv6Address: IPAddress)
Not displayed in UI (SNMPManagedDevice: AttachmentContainer: Attachment: Attachment: Attachment)
Status (SNMPManagedDevice: ElementInCategory: ElementCategory: Category: LifecycleStatus)
Recovery Time (SNMPManagedDevice: ElementInCategory: ElementCategory: Category: RecoveryTime)
Organizational Unit (SNMPManagedDevice: OwnedItem: Ownership: Owner: OrganisationalUnit)
Support Manager (SNMPManagedDevice: OwnedItem: Ownership: SupportOwner: Person)
Business Owner (SNMPManagedDevice: OwnedItem: Ownership: BusinessOwner: Person)
IT Owner (SNMPManagedDevice: OwnedItem: Ownership: ITOwner: Person)
IPAddress node
An IPAddress node represents an IPv4 or IPv6 address of a host or a device.
Creation
An IPAddress node is created in the datastore when a host, network device, printer, or SNMP managed device is scanned successfully. The new IPAddress node will only be created if one with the same key does not already exist as a child of that node.
Update
An IPAddress node is updated every time the parent host or device is scanned successfully with sufficient credentials to run the discovery command.
Removal
An IPAddress node is automatically destroyed if it is not seen on a subsequent scan of its parent host or device. This is an Authoritative Removal type, see Authoritative Removal. The IPAddress node is also destroyed when its parent host or device is destroyed. This is a Containment Removal type because an IPAddress cannot exist without its parent host or device, see Containment Removal. For information on the destruction of Host Nodes, see Host Node lifecycle.
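The two removal types above can be sketched as a small decision function. This is an illustration only; the function and parameter names are invented for the sketch and are not part of the product.

```python
def surviving_addresses(previous: set, scanned: set,
                        parent_destroyed: bool = False) -> set:
    """Return the IPAddress keys that survive after a scan.

    Containment removal: if the parent host or device is destroyed,
    all of its IPAddress children are destroyed with it.
    Authoritative removal: otherwise, any address not seen on the
    latest scan of the parent is removed.
    """
    if parent_destroyed:
        return set()            # containment removal: children go too
    return previous & scanned   # authoritative removal of unseen nodes

# A previously known address that is absent from the new scan is dropped.
prev = {"10.0.0.1", "10.0.0.2"}
survivors = surviving_addresses(prev, {"10.0.0.1"})
```

The key point is that authoritative removal is driven by the scan result itself, while containment removal is driven purely by the parent's lifecycle.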
Broadcast Address (broadcast, string): IPv4 broadcast address.
Netmask (netmask, string): IPv4 subnet mask.
Prefix (prefix, string): IPv6 prefix in canonical (CIDR) form.
Prefix Length (prefix_length, int): IPv6 prefix length in bits.
Link Local (link_local, boolean): Whether it is an IPv6 link local address.
Site Local (site_local, boolean): Whether it is an IPv6 site local address.
FQDNs (fqdns, list:string): Fully Qualified Domain Names.
Interface Aliases (interface_aliases, list:string): The interface alias on platforms that report addresses in this way.
Temporary (temporary, boolean): Indicates a temporary IPv6 address where this can be detected/reported.
Zone (zone, string): The name of the zone this address is in.
Host
Network Device
Network Device
Printer
Printer
IPAddress: IPv4Address: DeviceAddress: DeviceWithAddress: SNMPManagedDevice IPAddress: IPv6Address: DeviceAddress: DeviceWithAddress: SNMPManagedDevice IPAddress: DeviceOnSubnet: DeviceSubnet: Subnet: Subnet IPAddress: IPv4Address: InterfaceAddress: InterfaceWithAddress: NetworkInterface
The SNMP managed device that this IPv4 address belongs to.
The SNMP managed device that this IPv6 address belongs to.
Subnet
Network Interface
Network Interface
Discovered IP Address
IPAddress: InferredElement: Inference: Primary: DiscoveredIPAddress IPAddress: InferredElement: Inference: Contributor: DiscoveredFQDN
Primary Inference.
Discovered FQDNs
Contributor Inference.
Creation
A NetworkInterface node is created in the datastore when a host, network device, printer, or SNMP managed device is scanned successfully. The new NetworkInterface node will only be created if one with the same key does not already exist as a child of that node.
Update
A NetworkInterface node is updated every time the parent host or device is scanned successfully with sufficient credentials to run the discovery command.
Removal
A NetworkInterface node is automatically destroyed if it is not seen on a subsequent scan of its parent host or device. This is an Authoritative Removal type, see Authoritative Removal. The NetworkInterface node is also destroyed when its parent host or device is destroyed. This is a Containment Removal type because a NetworkInterface cannot exist without its parent host or device, see Containment Removal. For information on the destruction of Host Nodes, see Host Node lifecycle.
Unique key. Full name of the network interface. Interface name of the network interface. Physical (MAC) Address. Current speed in Mbps. Current speed in bps. Duplex setting (Full/Half). Negotiation setting. A flag which indicates that this interface represents a bonded (teamed/aggregated) interface. Shows which mode the aggregation is in. The interfaces aggregated by this interface. The interface which aggregates this interface. The interfaces aggregated with this interface.
The ifIndex value. The ifName value. The ifAlias value. A flag which indicates that this interface represents the service console for an ESX host. Type of interface where it is known. The SNMP\WMI description for the interface. The name of the IPMP group this interface is in. The virtual adapters dependent on this physical adapter.
Physical Adapters (physical_adapters, list:string)
Shared Adapters (shared_adapters, list:string)
Physical Location (physical_location, string): The AIX adapter location.
Device ID (device_id, string): Windows: network device ID.
Adapter Type (adapter_type, string): The type of this network adapter.
DHCP Enabled (dhcp_enabled, boolean): Indicates if this interface is configured by DHCP.
Setting GUID (setting_id, string): Windows: Setting GUID.
Service Name (service_name, string): Windows: Service Name.
Index (index, int): Windows: Interface index, used as zoneid for IPv6.
DNS Hostname (dns_hostname, string): DNS Hostname for this interface.
Database Path (database_path, string): Windows: path to the network database files.
DNS Servers (dns_servers, list:string): DNS Servers for this interface.
Default Gateway (default_gateway, string): Default gateway.
Manufacturer (manufacturer, string): Network adapter manufacturer.
DNS Domain (dns_domain, string): DNS Domain for this interface.
DHCP Server (dhcp_server, string): DHCP server for this interface.
Primary WINS Server (primary_wins_server, string): Primary WINS server for this interface.
Secondary WINS Server (secondary_wins_server, string)
Index (ppa, string)
Network Device
NetworkInterface: InterfaceOfDevice: DeviceInterface: DeviceWithInterface: NetworkDevice NetworkInterface: InterfaceOfDevice: DeviceInterface: DeviceWithInterface: Printer NetworkInterface: InterfaceOfDevice: DeviceInterface: DeviceWithInterface: SNMPManagedDevice NetworkInterface: InterfaceWithAddress: InterfaceAddress: IPv4Address: IPAddress
Printer
The SNMP managed device that this network interface belongs to.
IP Address
IP Address
Edge Device
NetworkInterface: EdgeClient: NetworkLink: EdgeDevice: NetworkInterface NetworkInterface: EdgeDevice: NetworkLink: EdgeClient: NetworkInterface NetworkInterface: InferredElement: Inference: Primary: DiscoveredInterface
The edge device interface that this client interface is connected to.
Edge Client
The client interface that this edge device interface is connected to.
Not displayed in UI
Primary Inference.
Subnet node
A Subnet Node represents an IP subnet on which IP addresses have been scanned.
Subnet Lifecycle
The following section describes the scenarios in which a Subnet Node is created, updated or destroyed.
Creation
A new Subnet Node is created in the datastore when a host, network device, printer, or SNMP managed device is scanned and its DiscoveredIPAddress is on a subnet that is not represented in the datastore. Subnet nodes are not created for the IPv6 link local prefix. The generated key for a Subnet Node is based on the IP range.
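As an illustration of the creation rule above (a key based on the IP range, with no Subnet nodes for the IPv6 link-local prefix), the following sketch derives a subnet identifier from a discovered address. The function name and key format are assumptions for the example, not the product's actual key algorithm.

```python
import ipaddress

def subnet_key(address: str, prefix_len: int):
    """Derive an illustrative subnet key from an address and prefix length.

    Returns None for IPv6 link-local addresses, for which Subnet nodes
    are not created.
    """
    iface = ipaddress.ip_interface(f"{address}/{prefix_len}")
    network = iface.network
    if isinstance(network, ipaddress.IPv6Network) and iface.ip.is_link_local:
        return None
    # Key based on the IP range covered by the subnet.
    return str(network)

print(subnet_key("192.168.10.37", 24))  # 192.168.10.0/24
print(subnet_key("fe80::1", 64))        # None (link local, no Subnet node)
```

Because the key is derived entirely from the range, two devices scanned on the same subnet map to the same Subnet Node.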
Update
A Subnet Node's identity and its entire data content are one and the same, so Subnet Nodes are never updated.
Removal
Subnet Nodes are never removed.
Subnet Attributes
The attributes and relationships of a Subnet Node are described in the tables below.
Key (key, string): Unique key.
Type (type, string): IP address type (IPv4 or IPv6).
IP range (ip_address_range, string): Range of IP addresses in this subnet.
Subnet Relationships
The relationships of a Subnet Node are described in the table below.
IP Addresses (Subnet: Subnet: DeviceSubnet: DeviceOnSubnet: IPAddress): IP addresses on this subnet.
Triggering DiscoveredIPAddress (Subnet: InferredElement: Inference: Primary: DiscoveredIPAddress)
Location (Subnet: ElementInLocation: Location: Location: Location)
Detail Node
A Detail Node is a multipurpose node kind that, as its name suggests, is used to store additional details of other nodes. It is typically used when you need to add more than a few basic attributes to augment existing nodes in the model. If you are extending database Software Instance nodes, use a Database Detail Node instead. Detail nodes have two default relationships defined that allow for multiple levels of modeling. For tree-like structures, use the containment relationship Detail:Contained:Containment:Container:Detail. For record- or table-like structures, use the list relationship Detail:List:List:Member:Detail. By default, SoftwareInstance, BusinessApplicationInstance, and Host nodes all have relationships to Detail nodes, but this does not prevent you from relating Detail nodes to other node kinds.
Creation/update
This is under the full control of patterns and as a result there is no default Detail Node behavior.
Setting the key for a Detail Node. The key for a Detail Node is entirely dependent on the pattern that creates it. Take extra care when constructing the key attribute, because it must be unique among all Detail nodes. This uniqueness is typically achieved by including the following information in the key: the type attribute, and the parent node's key attribute.
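The key-construction advice above can be sketched as a small helper. The function name and key layout below are illustrative assumptions, not part of the pattern language; the point is only that combining the type and the parent's key keeps details on different parents from colliding.

```python
def detail_key(detail_type: str, parent_key: str, name: str) -> str:
    """Build an illustrative Detail node key.

    Including the parent key scopes the detail to its parent; including
    the type separates different kinds of detail on the same parent.
    """
    return f"{detail_type}/{parent_key}/{name}"

# Two details with the same name on different parents get distinct keys.
k1 = detail_key("LicenseDetail", "SI:abc123", "license-1")
k2 = detail_key("LicenseDetail", "SI:def456", "license-1")
```

The same reasoning applies to any delimiter or ordering, provided the combination of type, parent key, and local name remains unambiguous.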
Removal
Authoritative removal by the pattern that creates and updates the Detail node should be considered: the pattern not only needs to create the correct Detail structure, it also needs to maintain it as the configuration changes. Built-in removal rules remove all contained Detail nodes if an SI/BAI/Host is removed. Built-in removal rules also remove all child Detail nodes (over both Containment and List relationships) to allow simple deletion of partial structures.
Detail attributes
The attributes and relationships on a Detail Node are described in the tables below.
Name (name, string): Name of the detail.
Key (key, string): Globally unique key.
Type (type, string): Type of detail.
Trigger Count (count, string): Trigger count.
Detail relationships
The relationships on a Detail Node are described in the table below.
Software Instance (Detail: Detail: Detail: ElementWithDetail: SoftwareInstance): Software Instance with this detail.
Runtime Environment (Detail: Detail: Detail: ElementWithDetail: RuntimeEnvironment)
Business Application Instance (Detail: Detail: Detail: ElementWithDetail: BusinessApplicationInstance)
Host (Detail: Detail: Detail: ElementWithDetail: Host)
MFPart
Detail: Detail: Detail: ElementWithDetail: MFPart Detail: Detail: Detail: ElementWithDetail: GenericElement Detail: Member: List: List: Detail Detail: List: List: Member: Detail Detail: Contained: Containment: Container: Detail Detail: Container: Containment: Contained: Detail Detail: Dependant: Dependency: DependedUpon: SoftwareInstance Detail: DependedUpon: Dependency: Dependant: SoftwareInstance Detail: Dependant: Dependency: DependedUpon: SoftwareComponent Detail: DependedUpon: Dependency: Dependant: SoftwareComponent Detail: Dependant: Dependency: DependedUpon: File Detail: ContainedFunctionality: FunctionalContainment: FunctionalContainer: FunctionalComponent
Element
Detail List
Members
Containing Details
Contents
Depends On Files
Functional Component
Maintaining Pattern
Creation/update
This is under the full control of patterns and as a result there is no default Database Detail Node behavior.
Setting the key for a Database Detail Node. The key for a Database Detail Node is entirely dependent on the pattern that creates it. Take extra care when constructing the key attribute, because it must be unique among all Database Detail nodes. This uniqueness is typically achieved by including the following information in the key: the type attribute, and the parent node's key attribute.
Removal
Authoritative removal by the pattern that creates and updates the Database Detail node should be considered: the pattern not only needs to create the correct Database Detail structure, it also needs to maintain it as the configuration changes. Built-in removal rules remove all contained Database Detail nodes if an SI/BAI/Host is removed. Built-in removal rules also remove all child Database Detail nodes (over both Containment and List relationships) to allow simple deletion of partial structures.
DatabaseDetail: Member: List: List: DatabaseDetail DatabaseDetail: List: List: Member: DatabaseDetail DatabaseDetail: Contained: Containment: Container: DatabaseDetail DatabaseDetail: Container: Containment: Contained: DatabaseDetail DatabaseDetail: DependedUpon: Dependency: Dependant: SoftwareInstance DatabaseDetail: DependedUpon: Dependency: Dependant: BusinessApplicationInstance DatabaseDetail: Detail: Detail: ElementWithDetail: SoftwareInstance DatabaseDetail: Detail: Detail: ElementWithDetail: RuntimeEnvironment DatabaseDetail: Detail: Detail: ElementWithDetail: BusinessApplicationInstance DatabaseDetail: Detail: Detail: ElementWithDetail: Host DatabaseDetail: Detail: Detail: ElementWithDetail: MFPart DatabaseDetail: Detail: Detail: ElementWithDetail: GenericElement
Database Members
Database Contents
Software Instance
Runtime Environment
Host
MFPart
Element
Detail List
DatabaseDetail: Member: List: List: Detail DatabaseDetail: List: List: Member: Detail DatabaseDetail: Contained: Containment: Container: Detail DatabaseDetail: Container: Containment: Contained: Detail DatabaseDetail: Dependant: Dependency: DependedUpon: SoftwareInstance DatabaseDetail: DependedUpon: Dependency: Dependant: SoftwareInstance DatabaseDetail: Dependant: Dependency: DependedUpon: SoftwareComponent DatabaseDetail: DependedUpon: Dependency: Dependant: SoftwareComponent DatabaseDetail: Dependant: Dependency: DependedUpon: File DatabaseDetail: ContainedFunctionality: FunctionalContainment: FunctionalContainer: FunctionalComponent DatabaseDetail: Element: Maintainer: Pattern: Pattern
Members
Containing Details
Contents
Depends On Files
Functional Component
Maintaining Pattern
Storage Node
A Storage Node represents a unit of storage. The type of storage is not mandated by the system; it may be physical or logical.
Creation/update
If a pattern triggers on a Directly Discovered Data Node, such as a Discovered Tape Drive Node, it may choose whether to specify keys for the Storage Nodes it creates and maintains.

If a key is specified, the decision whether to create a new Storage Node or to update an existing one depends on the key. If a Storage Node with the specified key exists, that node is updated, even if it was previously maintained by a different pattern; in this case, the pattern takes over as the maintainer of the Storage Node. If a node with the specified key does not exist, a new Storage Node is created. In both cases, the Storage Node is linked to the pattern with a maintainer relationship.

If the pattern does not specify a key, the system creates or updates a group Storage Node with an automatically generated key. The key is based upon the key of the Host upon which the Storage is running, the specified type of the Storage and, optionally, a key group that can be used to separate the nodes into a number of groups. The count attribute is set to the number of instances in the group identified in the collection of Directly Discovered Data. Each time the host is scanned, the count attribute is updated to the number of instances seen in that scan.
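The automatic grouping behavior described above (no key specified, so the system groups by host key, storage type, and optional key group, and records a per-group count) can be sketched as follows. All names here are assumptions for the illustration, not the product's actual key-generation algorithm.

```python
from collections import Counter

def group_storage(host_key: str, discovered: list, key_group: str = None) -> dict:
    """Group directly discovered storage instances from one scan.

    discovered is a list of (storage_type, ...) tuples. Returns a mapping
    of generated group key -> count attribute, where the key combines the
    host key, the storage type, and the optional key group.
    """
    counts = Counter(item[0] for item in discovered)
    groups = {}
    for storage_type, count in counts.items():
        parts = [host_key, storage_type] + ([key_group] if key_group else [])
        groups["/".join(parts)] = count  # count = instances seen this scan
    return groups

# One scan that found two tape drives and one disk drive on the host.
scan = [("TapeDrive",), ("TapeDrive",), ("DiskDrive",)]
groups = group_storage("Host:xyz", scan)
```

On a later scan, rebuilding the mapping from the new scan data models how the count attribute is refreshed to the number of instances seen in that scan.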
Removal
The age_count attribute of the Storage Node records when the Storage Node was last confirmed by its maintaining pattern. If age_count is positive, it is the number of consecutive scans of the MFPart Node in which the Storage was confirmed; if negative, it is the number of consecutive scans in which the Storage Node was not confirmed. The last_update_success and last_update_failure attributes contain the date and time at which the Storage Node was last confirmed and not confirmed, respectively.

The default aging strategy only applies to Storage Nodes created from patterns triggering on the following node kinds and maintaining the Storage Nodes: DiscoveredTapeDrive and DiscoveredDiskDrive. If the Storage is triggered on anything else, aging must be implemented in the pattern using a removal block.

If the pattern does not have a removal block, Storage Nodes are removed using an aging strategy based on the age_count and last_update_success attributes. The default aging parameters are the same as for a Software Instance Node: if a Storage Node has not been seen for at least 7 scans, over a period of at least 10 days, it is destroyed. If the pattern maintaining a node does have a removal block, the block can override the default aging scheme to destroy its nodes either earlier or later than normal. For TKU patterns, refer to the documentation accompanying each pattern for details of special removal behavior.

Regardless of the presence or absence of a removal block in the pattern, if the MFPart corresponding to a DDD-triggered Storage Node is destroyed, the Storage Node is immediately destroyed (see How nodes get removed).
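The default aging rule above (destroyed after being missed on at least 7 consecutive scans over a period of at least 10 days) can be expressed as a small decision function. The helper name is an assumption for this sketch, not a product API; it only models the stated parameters.

```python
from datetime import datetime, timedelta

def should_remove(age_count: int, last_update_success: datetime,
                  now: datetime) -> bool:
    """Model the default aging decision for a Storage Node.

    age_count is negative while the node goes unconfirmed, so -age_count
    is the number of consecutive scans on which it was missed. Both the
    scan-count and elapsed-time conditions must hold before removal.
    """
    missed_scans = -age_count if age_count < 0 else 0
    stale_for = now - last_update_success
    return missed_scans >= 7 and stale_for >= timedelta(days=10)

now = datetime(2013, 6, 1)
# Missed 7 scans over 10 days: removed. Missed 7 scans in only 5 days,
# or only 3 scans in 30 days: kept.
```

Note that both thresholds are conjunctive: frequent scanning alone cannot age a node out in under 10 days, and a long quiet period alone is not enough without the missed-scan count.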
Description (description, string): Description of the Storage. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI (url, string): URL for information about the Storage. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI (business_continuity_critical, boolean): If true, the Storage is critical to operation of the business. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI (third_party_support, boolean): If true, the Storage is supported by a third party. Legacy attribute not currently used. Can be used by patterns if desired.
Not displayed in UI (synonyms, string): Other names by which this Storage is known. Legacy attribute not currently used. Can be used by patterns if desired.
Used by MFPart
Storage Collection
Not displayed in UI
Recovery Time
Organizational Unit
Support Manager
Storage: OwnedItem: Ownership: SupportOwner: Person Storage: OwnedItem: Ownership: BusinessOwner: Person Storage: OwnedItem: Ownership: ITOwner: Person
Business Owner
IT Owner
Creation/update
Storage Collection Nodes are created and updated in exactly the same way as Host Nodes.
Removal
A Storage Collection Node is removed in one of two ways:
Explicitly by the pattern that created it. See Removal in the Pattern Language guide.
If the evidence for a Storage Collection is not seen on a scan, the node is removed by reasoning. This is an Authoritative Removal type, see Authoritative Removal.
If true, the StorageCollection is supported by a third party. Legacy attribute not currently used. Can be used by patterns if desired. Other names by which this StorageCollection is known. Legacy attribute not currently used. Can be used by patterns if desired.
Used by MFPart
Storage
Not displayed in UI
Recovery Time
Organizational Unit
Support Manager
Business Owner
StorageContainer: OwnedItem: Ownership: BusinessOwner: Person StorageContainer: OwnedItem: Ownership: ITOwner: Person
IT Owner
Creation/update
This is under the full control of patterns and as a result there is no default Generic Element Node behavior. The generated key for a Generic Element Node is entirely dependent on the pattern that creates the Generic Element Node.
Removal
A Generic Element Node can be destroyed in one of two ways:
Explicitly by the pattern that created it. See Removal in the Pattern Language guide.
When the last related Inferred node linked to the Generic Element node is removed, the Generic Element node is removed as well. This is a Cascade Removal type, see Cascade Removal.
Container [GenericElement: Contained: Containment: Container: GenericElement]
Contained Elements [GenericElement: Container: Containment: Contained: GenericElement]
GenericElement: Dependant: Dependency: DependedUpon: GenericElement
GenericElement: DependedUpon: Dependency: Dependant: GenericElement
Maintaining Pattern [GenericElement: Element: Maintainer: Pattern: Pattern]
Why a Runtime Environment and not an SI? Software Instances represent pieces of software running on a host, whereas a Runtime Environment represents something that supports running software. For example, WebLogic is represented by an SI, and Java by a Runtime Environment Node. It is still important to keep track of the supporting Runtime Environments and their versions, as some may require updates.
Creation/update
If a pattern triggers on a Directly Discovered Data Node, such as a Discovered Process Node, it may choose whether to specify keys for the Runtime Environment Nodes it creates and maintains. If a key is specified, the decision whether to create a new Runtime Environment Node or to update an existing one depends on the key. If a Runtime Environment Node with the specified key exists, that node is updated, even if it was previously maintained by a different pattern; in this case, the pattern takes over as the maintainer of the Runtime Environment. If a node with the specified key does not exist, a new Runtime Environment Node is created. In both cases, the Runtime Environment Node is linked to the pattern with a maintainer relationship.

If a key for the Runtime Environment Node is not specified by the pattern, the system creates or updates a group Runtime Environment with an automatically generated key. The key is based upon the key of the Host upon which the Runtime Environment is running, the specified type of the Runtime Environment and, optionally, a key group that can be used to separate the nodes into a number of groups. The count attribute is set to the number of instances in the group identified in the collection of Directly Discovered Data. Each time the host is scanned, the count attribute is changed to represent the number of instances seen in that scan.
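The keyed and keyless create-or-update behaviour described above can be sketched in Python. The node store, helper name, and key-joining scheme are illustrative assumptions; only the decision logic follows the text:

```python
# Sketch of the create-or-update decision for Runtime Environment Nodes.
# The dict-based node store and the key format are assumptions for
# illustration, not BMC Atrium Discovery's actual API.

def upsert_runtime_environment(store, pattern, host_key, re_type,
                               key=None, key_group=None, count=1):
    """Create or update a Runtime Environment node, mirroring the
    keyed and group (keyless) behaviour described above."""
    if key is None:
        # No key specified: derive a group key from the host key, the
        # Runtime Environment type and an optional key group.
        parts = [host_key, re_type]
        if key_group is not None:
            parts.append(key_group)
        key = "/".join(parts)

    node = store.get(key)
    if node is None:
        # No node with this key exists: create a new one.
        node = {"key": key, "type": re_type, "count": count}
        store[key] = node
    else:
        # Node exists: update it; each scan refreshes the instance count.
        node["count"] = count

    # In both cases the node is linked to the pattern that maintains it,
    # even if a different pattern maintained it before.
    node["maintainer"] = pattern
    return node
```

Note how a second pattern updating the same key takes over as maintainer rather than creating a duplicate node.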
Removal
The age_count attribute of the Runtime Environment Node contains information about when the Runtime Environment Node was last confirmed by its maintaining pattern. If the age_count is positive, it represents the number of consecutive scans of the Host Node in which the Runtime Environment was confirmed. If the age_count is negative, it represents the number of consecutive scans in which the Runtime Environment Node was not confirmed. The last_update_success and last_update_failure attributes contain the date and time at which the Runtime Environment Node was last confirmed, and not confirmed, respectively.

The default aging strategy only applies to Runtime Environment Nodes created from patterns triggering on the following node kinds and maintaining the Runtime Environments:
- DiscoveredProcess
- DiscoveredService
- DiscoveredSoftware

If the Runtime Environment is triggered on anything else, for example a discovered file, then aging must be implemented in the pattern using a removal block. If the pattern does not have a removal block, Runtime Environment Nodes are removed using an aging strategy based on the age_count and last_update_success attributes. The default aging parameters are the same as for a Software Instance Node: if a Runtime Environment Node has not been seen for at least 7 scans, over a period of at least 10 days, it is destroyed. The parameters may be changed in the options; see Configuring model maintenance settings for more information.

If the pattern maintaining a node does have a removal block, the block can override the default aging scheme to destroy its nodes either earlier or later than normal. For TKU patterns, refer to the documentation accompanying each pattern for details of special removal behavior.

Regardless of the presence or absence of a removal block in the pattern, if the Host corresponding to a DDD-triggered Runtime Environment Node is destroyed, the Runtime Environment Node is immediately destroyed (see How nodes get removed).

Runtime Environment nodes currently represent Java or .NET.
The different technologies and discovery techniques lead to the following differences in their removal:
- Java is grouped by command line; it has aging behavior like any grouped SoftwareInstance.
- .NET is based on package information, so it has authoritative removal.
It is possible for other triggers to be used. If any other trigger is used, BMC Atrium Discovery has no automatic removal behavior. Patterns must be used to explicitly destroy any Runtime Environment Nodes created as a result of other triggers.
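The default aging bookkeeping described above (a positive or negative age_count, plus a last-confirmed timestamp checked against the 7-scan / 10-day thresholds) can be sketched as follows. The helper names are assumptions for illustration; the attribute names follow the table below:

```python
# Sketch of the default aging check for Runtime Environment (and Software
# Instance) nodes. Thresholds default to 7 consecutive missed scans over
# a period of at least 10 days; both are configurable in the model
# maintenance settings. The functions themselves are illustrative.

from datetime import datetime, timedelta

MAX_MISSED_SCANS = 7
MIN_AGE_DAYS = 10

def update_age_count(age_count, seen_this_scan):
    """age_count counts consecutive confirmations (positive) or
    consecutive misses (negative)."""
    if seen_this_scan:
        return age_count + 1 if age_count > 0 else 1
    return age_count - 1 if age_count < 0 else -1

def should_remove(age_count, last_update_success, now):
    """Destroy the node once it has been missed for at least 7 scans
    AND at least 10 days have passed since it was last confirmed."""
    missed_scans = -age_count if age_count < 0 else 0
    aged_out = (now - last_update_success) >= timedelta(days=MIN_AGE_DAYS)
    return missed_scans >= MAX_MISSED_SCANS and aged_out
```

A single confirmed sighting resets a negative age_count back to 1, so removal requires an unbroken run of missed scans.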
- Name of the Runtime Environment.
- Type of the Runtime Environment.
- Number of instances grouped together.
- Globally unique key.
- The publisher of the Runtime Environment. Only populated in cases where a Pattern identifies products from more than one publisher; this information is normally found in the Pattern node's publishers attribute.
- The product name. Only populated in cases where a Pattern identifies more than one product; this information is normally found in the Pattern node's products attribute.
- Full-resolution version.
- Version publicised by the vendor.
Release number.
Edition [edition string]
    Edition.
Service Pack [service_pack string]
    Service pack.
Build [build string]
    Build number.
Patch [patch string]
    Patch level.
Revision [revision string]
    Revision.
Not displayed in UI [age_count int]
    The number of consecutive successful (positive) or failed (negative) times the Runtime Environment has been seen during host scans.
Not displayed in UI [last_update_success date]
    The time at which a scan was last successfully associated with this Runtime Environment.
Not displayed in UI [last_update_failure date]
    The time at which a scan associated with this Runtime Environment failed.
Not displayed in UI [_explicit_removal string]
MFPart
Maintaining Pattern
Details
Primary processes [RuntimeEnvironment: InferredElement: Inference: Primary: DiscoveredProcess]
    Discovered process from which the existence of this Runtime Environment was inferred.
Contributor processes [RuntimeEnvironment: InferredElement: Inference: Contributor: DiscoveredProcess]
    Discovered process from which one or more attributes of this Runtime Environment were inferred.
Associated processes [RuntimeEnvironment: InferredElement: Inference: Associate: DiscoveredProcess]
Not displayed in UI [RuntimeEnvironment: InferredElement: Inference: Contributor: Package]
    Package from which one or more attributes of this Runtime Environment were inferred.
Associated packages [RuntimeEnvironment: InferredElement: Inference: Associate: Package]
Not displayed in UI [RuntimeEnvironment: InferredElement: Inference: Contributor: Host]
    Host from which one or more attributes of this Runtime Environment were inferred.
Not displayed in UI [RuntimeEnvironment: InferredElement: Inference: Associate: Host]
Not displayed in UI [RuntimeEnvironment: InferredElement: Inference: Primary: DiscoveredFile]
    Discovered file from which the existence of this Runtime Environment was inferred.
Not displayed in UI [RuntimeEnvironment: InferredElement: Inference: Contributor: DiscoveredFile]
    Discovered file from which one or more attributes of this Runtime Environment were inferred.
Associated files [RuntimeEnvironment: InferredElement: Inference: Associate: DiscoveredFile]
Primary software [RuntimeEnvironment: InferredElement: Inference: Primary: DiscoveredSoftware]
    Discovered software from which the existence of this Runtime Environment was inferred.
Contributor software [RuntimeEnvironment: InferredElement: Inference: Contributor: DiscoveredSoftware]
    Discovered software from which one or more attributes of this Runtime Environment were inferred.
Not displayed in UI [RuntimeEnvironment: InferredElement: Inference: Associate: DiscoveredSoftware]
Not displayed in UI [RuntimeEnvironment: InferredElement: Inference: Contributor: DiscoveredCommandResult]
    Discovered command result from which one or more attributes of this Runtime Environment were inferred.
Not displayed in UI [RuntimeEnvironment: InferredElement: Inference: Associate: DiscoveredCommandResult]
Not displayed in UI [RuntimeEnvironment: InferredElement: Inference: Contributor: DiscoveredRegistryValue]
    Discovered Windows Registry value from which one or more attributes of this Runtime Environment were inferred.
Not displayed in UI [RuntimeEnvironment: InferredElement: Inference: Associate: DiscoveredRegistryValue]
    Discovered Windows Registry value related in some way to this Runtime Environment.
Not displayed in UI [RuntimeEnvironment: InferredElement: Inference: Contributor: DiscoveredWMI]
    Discovered WMI query result from which one or more attributes of this Runtime Environment were inferred.
Not displayed in UI [RuntimeEnvironment: InferredElement: Inference: Associate: DiscoveredWMI]
    Discovered WMI query result related in some way to this Runtime Environment.
Not displayed in UI [RuntimeEnvironment: InferredElement: Inference: Contributor: Pattern]
    Pattern from which one or more attributes of this Runtime Environment were inferred.
Not displayed in UI [RuntimeEnvironment: InferredElement: Inference: Primary: MFPart]
    MFPart from which the existence of this Runtime Environment was inferred.
Not displayed in UI [RuntimeEnvironment: InferredElement: Inference: Contributor: MFPart]
    MFPart from which one or more attributes of this Runtime Environment were inferred.
DDD Nodes
This section describes the BMC Atrium Discovery Directly Discovered Data (DDD) Nodes. Directly discovered information is obtained directly from a target host via particular discovery techniques.
- Introductory Information provides an introduction to the BMC Atrium Discovery Directly Discovered Data (DDD) Nodes.
- Discovery Run Node describes the Discovery Run Node kind.
- Discovery Access Node describes the Discovery Access Node kind and how one is created each time a full discovery run is performed.
- Device Info Node describes the Device Info Node kind.
- Host Info Node describes the Host Info Node kind.
- Other DDD Nodes describes the remaining Directly Discovered Data Node kinds in brief.
- Integration DDD Nodes describes the nodes related to Integration and SQL Discovery.
Introductory Information
All Directly Discovered Data is stored, regardless of whether it is understood. Target devices are accessed as part of a Discovery Run, which corresponds to a range of devices to be scanned. When Reasoning starts accessing a target device, it creates a Discovery Access Node to represent the access and relates it to the corresponding Discovery Run. The Discovery Access Node contains information about the time of discovery and the discovery method and credentials used. Each time a device is rediscovered, a new Discovery Access Node is created, and the discovery information is attached to it. If a target host cannot be accessed, a Discovery Access Node is created to represent the failed access.
The Host Node
The Host Node itself is not considered directly discovered information, since a level of inference is required to decide which Host a Discovery Access corresponds to. See Host Node for more information.
Discovery Run Nodes are created first because they contain all other Directly Discovered Data Nodes. Discovery Access Nodes are then created, followed by the remaining DDD Nodes.
Creation
A Discovery Run Node is created when a Discovery Run starts: when a range that has been configured for scanning actually starts scanning or, if the appliance is a Consolidation Appliance, when a Scanning Appliance's scan starts to be consolidated. The Discovery Run remains open until all of the endpoints (IP addresses) in the range have been scanned or consolidated.
Removal
Discovery Runs are removed when all of their contained Discovery Access Nodes have been destroyed through the Aging process. When the DiscoveryRun Node is destroyed, any linked Integration Access Nodes and their associated DDD Nodes are also destroyed.
Number of IPs Scanned [done int]
Currently Processing IPs [processing list:string]
Cancelled [cancelled boolean]
Consolidation Cancelled [consolidation_cancelled boolean]
Run Type [run_type string]
    Type of run.
Scan Type [scan_type string]
    Type of scan.
Scan Level [scan_level string]
    Scan level.
Start Time [starttime date]
    Date and time the run started.
End Time [endtime date]
    Date and time the run ended.
Discovery Start Time [discovery_starttime date]
    Date and time the discovery started.
Discovery End Time
    Date and time the discovery ended.
Dark Space Suppression
    Dark space suppression scheme for this run.
ECA Error
    True if there have been any ECA Errors for this Discovery Run.
Hard Failure [hard_failure boolean]
    True if there have been hard failures for this Discovery Run.
Not displayed in UI [__inprogress boolean]
    True if Discovery Run still executing.
Not displayed in UI [__remote_inprogress boolean]
    True if Discovery Run still executing on the discovery appliance.
[consolidation_done_count int]
    Number of completed consolidation endpoints.
Consolidated From [_consolidation_source string]
    Identifier for the discovery appliance which provided the info for this run.
[_consolidation_source_name string]
    Name of the discovery appliance which provided the info for this run.
Not displayed in UI [_consolidation list:string]
    List of the consolidation appliance identifiers this run needs to be sent to.
Not displayed in UI [_consolidation_done list:string]
    List of the consolidation appliance identifiers this run has been sent to.
Multi-tenant company [cdm_company string]
Provider Accesses [DiscoveryRun: List: List: Member: ProviderAccess]
Not displayed in UI [DiscoveryRun: EndpointRange: EndpointRange: IPRange]
Topology Run [DiscoveryRun: Member: List: List: TopologyRun]
Creation
A Discovery Access Node is created when BMC Atrium Discovery scans an endpoint. By default, a Discovery Access Node is always created and stored, whether the discovery succeeded or failed. Where there is no response, and there never has been a response, from that particular IP address, two strategies exist:
- Keep Most Recent. Each time another no response is detected, a new Discovery Access Node is created and all previous no response Discovery Access Nodes are removed. This is the default behaviour.
- Remove All. Each time another no response is detected, it is immediately removed. You may choose this option in order to improve the quality of the stored data in your system.
This can be configured in the Dark space suppression scheme.
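The two suppression strategies can be sketched as follows. The record layout and function name are illustrative assumptions, not the product's data model:

```python
# Sketch of the two dark-space suppression strategies for no-response
# Discovery Access nodes. The list-of-dicts representation is an
# assumption for illustration.

def record_no_response(accesses, new_access, strategy="keep_most_recent"):
    """accesses: existing no-response Discovery Access records for one IP.
    Returns the records remaining after applying the chosen strategy."""
    if strategy == "keep_most_recent":
        # Default: the new no-response access replaces all earlier ones.
        return [new_access]
    if strategy == "remove_all":
        # Every no-response access is removed immediately; nothing kept.
        return []
    raise ValueError("unknown strategy: %s" % strategy)
```

Under Keep Most Recent exactly one no-response record survives per IP; under Remove All none do.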
Removal
Aging parameters exist for Directly Discovered Data Nodes, which have a particular cut-off time. The Aging process removes any Discovery Access Node, and all of the other DDD Nodes associated with it, once this default time period is exceeded. However, if the Discovery Access Node is the only one connected to a particular host, it is not removed. This enables BMC Atrium Discovery to keep a permanent record of every endpoint that has ever been scanned; these Discovery Access Nodes are the only ones which can outlive the default aging cut-off. The cut-off time is configurable; see Configuring model maintenance settings for details.
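The aging rule above, including the exception that keeps the last remaining Discovery Access for a host, can be sketched as follows (record layout and helper name are assumptions for illustration):

```python
# Sketch of the Discovery Access aging pass: accesses older than the
# cut-off are removed, except that a host's only remaining access is
# always kept, preserving a permanent record of every scanned endpoint.

def age_discovery_accesses(accesses, cutoff_days, now_day):
    """accesses: list of {'host': ..., 'day': ...} records.
    Returns the accesses that survive the aging pass."""
    survivors = [a for a in accesses if now_day - a["day"] <= cutoff_days]
    surviving_hosts = {a["host"] for a in survivors}
    expired = [a for a in accesses if now_day - a["day"] > cutoff_days]
    # Keep the most recent expired access for any host that would
    # otherwise lose its last Discovery Access entirely.
    for a in sorted(expired, key=lambda a: a["day"], reverse=True):
        if a["host"] not in surviving_hosts:
            survivors.append(a)
            surviving_hosts.add(a["host"])
    return survivors
```

A host with a recent access simply loses its expired ones; a host with only expired accesses keeps the most recent of them.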
State [state string]
    State of access: completed, in progress, and so forth.
Start Time [starttime date]
    Date and time access started.
End Time [endtime date]
    Date and time access ended.
Discovery Start Time [discovery_starttime date]
    Date and time access started on a scanning appliance.
Discovery End Time [discovery_endtime date]
Result [result string]
    Result of access.
End State [end_state string]
    Final state of access.
Reason [reason string]
    Reason for failure, if any.
On Hold Since [is_being_held date]
    True if access is on hold waiting for a scan window.
On Hold Duration [on_hold_duration int]
    How long the access was held waiting for a scan window.
Not shown in UI [_first_marker boolean]
    True if this is the first DiscoveryAccess for an endpoint.
Not shown in UI [_last_marker boolean]
    True if this is the last DiscoveryAccess for an endpoint.
Session Establishment Duration [discovery_duration int]
    The amount of time taken to establish a session with the endpoint (in seconds).
Total Discovery Duration [discovery_duration_sum int]
Not shown in UI [access_failure boolean]
Not shown in UI [access_success string]
    The session type and credential used.
Not shown in UI [consolidation_done list:string]
    List of the consolidation appliance identifiers this DA has been sent to.
Retrieved by scanning appliance [is_consolidation boolean]
    If present, always has the value True; indicates that the data for this DiscoveryAccess originated from a scanning appliance before being consolidated on this one.
No Response Count [no_response_count int]
    The number of no response DAs suppressed onto a single no response DA.
First No Response Count [first_no_response date]
    The date of the first no response DA suppressed onto a single no response DA.
Discovered From Scanner File [is_scanner_file boolean]
Host Info
Host info.
Interfaces
Interfaces.
Processes
Processes.
Services
Services.
Packages
Packages.
Patches
Patches.
Network Connections
Network Connections.
HBAs
HBAs.
File Systems
File systems.
FQDNs
FQDNs.
Files [DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredFile]
    Files.
Command Results [DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredCommandResult]
    Command results.
WMI Queries [DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredWMIQuery]
    WMI Queries.
Registry Queries [DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredRegistryValue]
    Registry Queries.
Registry Keys [DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: RegistryListing]
    Registry Keys.
Directories [DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DirectoryListing]
    Directory Listing.
SNMP Queries [DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredSNMPTable]
    SNMP Query.
DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredSNMP
DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredChassisList
DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredCardList
Sysplexes [DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredSysplexList]
    Sysplexes.
Coupling Facilities [DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredCouplingFacilityList]
    Coupling Facilities.
Storage Subsystems [DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredStorageSubsystemList]
    Storage Subsystems.
Storage Disks [DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredDiskDriveList]
    Storage Disks.
Storage Tapes [DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredTapeDriveList]
    Storage Tapes.
Mainframe Software [DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredSoftwareList]
    Mainframe Software.
DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredApplicationComponentList
Mainframe Transactions [DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredTransactionList]
    Mainframe Transactions.
Mainframe MQ Properties [DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredMQDetailList]
    Mainframe MQ Properties.
Mainframe Databases [DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredDatabaseList]
    Mainframe Databases.
DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredDatabaseDetailList
Not displayed in UI [DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: DiscoveredDependencyList]
Provider Accesses [DiscoveryAccess: List: List: Member: ProviderAccess]
    Provider Accesses.
Discovery Run [DiscoveryAccess: Member: List: List: DiscoveryRun]
    Discovery Run.
Virtual Machines [DiscoveryAccess: DiscoveryAccess: DiscoveryAccessResult: DiscoveryResult: VirtualMachineList]
    Virtual Machines.
DiscoveryAccess: Next: Sequential: Previous: DiscoveryAccess
    Previous DiscoveryAccess.
DiscoveryAccess: Previous: Sequential: Next: DiscoveryAccess
    Next DiscoveryAccess.
Errors [DiscoveryAccess: ElementWithStatus: Status: Status: ECAError]
Failed Host [DiscoveryAccess: DiscoveryAccess: AccessFailure: InferredElement: Host]
DiscoveryAccess: DiscoveryAccess: AccessFailure: InferredElement: NetworkDevice
Failed Printer [DiscoveryAccess: DiscoveryAccess: AccessFailure: InferredElement: Printer]
Failed MF Part [DiscoveryAccess: DiscoveryAccess: AccessFailure: InferredElement: MFPart]
Failed Mainframe [DiscoveryAccess: DiscoveryAccess: AccessFailure: InferredElement: Mainframe]
Optimized Host [DiscoveryAccess: DiscoveryAccess: AccessOptimization: InferredElement: Host]
DiscoveryAccess: DiscoveryAccess: AccessOptimization: InferredElement: NetworkDevice
Optimized Printer [DiscoveryAccess: DiscoveryAccess: AccessOptimization: InferredElement: Printer]
Optimized Mainframe [DiscoveryAccess: DiscoveryAccess: AccessOptimization: InferredElement: Mainframe]
Host [DiscoveryAccess: Associate: Inference: InferredElement: Host]
Network Device [DiscoveryAccess: Associate: Inference: InferredElement: NetworkDevice]
Printer [DiscoveryAccess: Associate: Inference: InferredElement: Printer]
Mainframes [DiscoveryAccess: Associate: Inference: InferredElement: Mainframe]
MFParts [DiscoveryAccess: Associate: Inference: InferredElement: MFPart]
Session Results [DiscoveryAccess: DiscoveryAccess: Metadata: Detail: SessionResult]
    Session details.
Creation
Whenever BMC Atrium Discovery scans an IP address, a Discovery Access Node is created. If there is any response to the scan, a Device Info Node is also created. The level of information in the Device Info Node varies depending on the discovered device.
Removal
A Device Info Node is removed when the Discovery Access Node with which it is associated has been destroyed through the Aging process.
The attributes of a Device Info Node are described below.
Discovery Method [discovery_method string]
    Discovery method.
Discovery Duration [discovery_duration int]
    Time in seconds spent in discovery.
Request Time [request_time date]
    When this request was made.
Failure Reason [failure_reason string]
    Reason for failure, if any.
Hostname [hostname string]
    Name of host.
Fully Qualified Domain Names [fqdn string]
    Fully qualified domain name of the host.
Discovered OS [os string]
    Operating system.
Discovered OS Class [os_class string]
    Operating system class.
Discovered OS Type [os_type string]
    Operating system type.
Discovered OS Version [os_version string]
    Operating system version.
Service Pack [service_pack int]
    Service pack details.
Discovered OS Architecture [os_arch string]
    Operating system architecture.
Discovered OS System Directory [os_directory string]
    Operating system directory.
Discovered OS Edition [os_edition string]
    Operating system edition.
Discovered OS Build [os_build string]
    Operating system build information.
Discovered OS Vendor [os_vendor string]
    Operating system vendor.
Vendor [vendor string]
    Vendor of device.
Model [model string]
    Model of device.
DNS Domain [dns_domain string]
    DNS domain.
NIS/Windows Domain [domain string]
    Windows domain.
Device Type [device_type string]
    Device type.
Last Access Method [last_access_method string]
    The last method used to access this device.
Login Authentication Method [authentication_method string]
    How the login session was authenticated - may be 'key' or 'password'.
Last Credential ID [last_credential string]
    The ID of the last credential used to access this device.
Last Windows Proxy Pool [last_slave_pool string]
    The name of the last Windows Proxy Pool used to access this device.
Last Windows Proxy [last_slave string]
    The name of the last Windows Proxy used to access this device.
SNMP sysname [sysname string]
    SNMP sysname.
Capability Ids [capability_ids list:int]
    Capability Ids.
Capability Types [capability_types list:string]
    Capability Types.
SNMP sysObjectId [sysobjectid string]
    SNMP sysObjectId.
SNMP sysDescr [sysdescr string]
    SNMP sysDescr.
Serial number [serial string]
    Serial number.
[ipaddress_ids list:string]
    IP Addresses Ids.
SNMP syscontact [syscontact string]
    SNMP syscontact.
SNMP syslocation [syslocation string]
    SNMP syslocation.
SNMP cdpGlobalDeviceId [cdpdeviceid string]
    SNMP cdpGlobalDeviceId.
SNMP v3 Engine Identifier [snmpv3_engine_id string]
    SNMP v3 Engine Identifier.
Operating system derived from discovery heuristics [probed_os string]
Operating system type derived from discovery heuristics [probed_os_type string]
Operating system version derived from discovery heuristics [probed_os_version string]
CVE-2011-1785 Vulnerability [_cve_2011_1785 boolean]
    CVE-2011-1785 vulnerability flag (vSphere based ESX/ESXi hosts only).
Not displayed in UI [command_status_failure boolean]
    Flag that this node has command failures linked to it.
Not displayed in UI [method_failure boolean]
    Flag that this node has script failures linked to it.
Not displayed in UI [method_success string]
    The name of the script that succeeded in getting this data.
Access Method [access_method string]
    The access method used by the script that succeeded in getting this data.
Requesting Pattern [DeviceInfo: DiscoveryResult: Request: Pattern: Pattern]
Host [DeviceInfo: Primary: Inference: InferredElement: Host]
    Inferred Host.
Mainframe [DeviceInfo: Primary: Inference: InferredElement: Mainframe]
    Inferred Mainframe.
Network Device [DeviceInfo: Primary: Inference: InferredElement: NetworkDevice]
DeviceInfo: DiscoveryResult: Metadata: Detail: CommandFailure
DeviceInfo: DiscoveryResult: Metadata: Detail: ScriptFailure
Creation
Whenever BMC Atrium Discovery scans an IP address, a Discovery Access Node is created. If there is any response to the scan, a Host Info Node is also created. The level of information in the Host Info Node varies depending on the discovered device.
Removal
A Host Info Node is removed when the Discovery Access Node with which it is associated has been destroyed through the Aging process.
Discovery Method [discovery_method string]
    Discovery method.
Discovery Duration [discovery_duration int]
    Time in seconds spent in discovery.
Request Time [request_time date]
    When this request was made.
Failure Reason [failure_reason string]
    Reason for failure, if any.
Hostid [hostid string]
    Unique Host identification string.
Uptime Days [uptime int]
    The time in days since the host was booted.
Uptime Seconds [uptimeSeconds int]
    The time in seconds since the host was booted.
DF Output [df string]
    Output from Unix df command.
Discovered Kernel [kernel string]
    Kernel details.
Not displayed in UI [probed_os string]
    Operating system derived from discovery heuristics.
Not displayed in UI [probed_os_type string]
    Operating system type derived from discovery heuristics.
Not displayed in UI [probed_os_version string]
    Operating system version derived from discovery heuristics.
Number of Processors [num_processors int]
    The number of physical processors.
Processor Type [processor_type string]
    The type of each processor.
Processor Speed [processor_speed int]
    The speed of each processor in MHz.
Number of Logical Processors [num_logical_processors int]
    The number of logical processors available to the OS.
Cores per Processor [cores_per_processor int]
    The number of cores per physical processor available to the OS.
Threads per Processor Core [threads_per_core int]
    The number of threads per core in multi/hyper threaded processors available to the OS.
CPU Threading Enabled [cpu_threading_enabled boolean]
    Whether CPU hardware threading is enabled.
Number of Processor Types [num_processor_types int]
    The number of physical processor types.
All Processor Types [all_processor_types list:string]
    List of all processor types.
All Processor Speeds [all_processor_speeds list:int]
    List of all processor speeds.
Logical Processor Counts [all_processor_logical_counts list:int]
    List of logical processor counts.
Physical Processor Counts [all_processor_physical_counts list:int]
RAM [ram int]
    RAM value in MB.
E10K SSP Hostname [ssphostname string]
    E10K SSP Hostname.
SunFire Domain [sunfire_domain string]
    F15K SunFire domain.
Zonename [zonename string]
    Solaris 10 Zonename.
Windows Workgroup [workgroup string]
    Windows workgroup.
Hardware Vendor [vendor string]
    Hardware vendor.
Model [model string]
    Hardware model.
Serial Number [serial string]
    Serial number.
Windows UUID [windows_uuid string]
    Windows UUID.
Power Supply Status [psu_status list:string]
    Power Supply Unit status.
Cluster Instance ID [cluster_instance_id string]
    Cluster instance identifier.
Cluster Name [cluster_name string]
    Cluster name.
Cluster Name Resource [cluster_name_resource string]
    Cluster name resource.
LPAR Active CPUs In Pool [lpar_active_cpus_in_pool int]
    The maximum number of CPUs available to this LPAR's shared processor pool.
LPAR Active Physical CPUs In System [lpar_active_physical_cpus_in_system int]
    The current number of active physical CPUs in the system containing this LPAR.
LPAR Capacity Increment [lpar_capacity_increment float]
    The granule at which changes to Entitled Capacity can be made. A value in whole multiples indicates a Dedicated LPAR.
LPAR Entitled Capacity [lpar_entitled_capacity float]
    The number of processing units this LPAR is entitled to receive.
LPAR Entitled Capacity Of Pool [lpar_entitled_capacity_of_pool float]
    The number of processing units that this LPAR's shared processor pool is entitled to receive.
LPAR Maximum Capacity [lpar_maximum_capacity float]
    The maximum number of processing units this LPAR was defined to ever have. Entitled capacity can be increased up to this value.
LPAR Maximum Capacity Of Pool [lpar_maximum_capacity_of_pool int]
    The maximum number of processing units available to this LPAR's shared processor pool.
LPAR Maximum Memory [lpar_maximum_memory int]
    Maximum possible amount of memory.
LPAR Maximum Physical CPUs In System [lpar_maximum_physical_cpus_in_system int]
    The maximum possible number of physical CPUs in the system containing this LPAR.
LPAR Maximum Virtual CPUs [lpar_maximum_virtual_cpus int]
LPAR Minimum Capacity [lpar_minimum_capacity float]
    The minimum number of processing units this LPAR was defined to ever have. Entitled capacity can be reduced down to this value.
LPAR Minimum Memory [lpar_minimum_memory int]
    Minimum memory this LPAR was defined to ever have.
LPAR Minimum Virtual CPUs [lpar_minimum_virtual_cpus int]
    Minimum number of virtual CPUs this LPAR was defined to ever have.
LPAR Mode [lpar_mode string]
    Indicates whether the LPAR processor capacity is capped, or if it is uncapped and allowed to consume idle cycles from the shared pool. Dedicated LPAR is capped or donating.
LPAR Node Name [lpar_node_name string]
    Hostname of the LPAR.
LPAR Online Memory [lpar_online_memory int]
    The amount of memory currently online (in MB).
LPAR Online Virtual CPUs [lpar_online_virtual_cpus int]
    Number of CPUs (virtual engines) currently online.
LPAR Group Identifier [lpar_partition_group_id int]
    LPAR group that this LPAR is a member of.
LPAR Name [lpar_partition_name string]
    Logical partition name as assigned by the HMC.
LPAR Identifier [lpar_partition_number int]
    Number of the logical partition.
LPAR Physical CPU Percentage [lpar_physical_cpu_percentage float]
    Fractional representation relative to whole physical CPUs that these LPARs virtual CPUs equate to. This is a function of Entitled Capacity / Online CPUs. Dedicated LPARs would have 100% Physical CPU Percentage. A 4-way virtual with Entitled Capacity of 2 processor units would have a 50% physical CPU Percentage.
LPAR Shared Physical CPUs In System [lpar_shared_physical_cpus_in_system int]
    The number of physical CPUs available for use by shared processor LPARs.
LPAR Shared Pool Identifier [lpar_shared_pool_id int]
    Identifier of the Shared Pool of Physical processors that this LPAR is a member of.
LPAR Type [lpar_type string]
    LPAR type: dedicated or shared, SMT enabled or not.
LPAR Unallocated Capacity [lpar_unallocated_capacity float]
    The sum of the number of processor units unallocated from shared LPARs in an LPAR group. This sum does not include the processor units unallocated from a dedicated LPAR, which can also belong to the group. The unallocated processor units can be allocated to any dedicated LPAR (if it is greater than or equal to 1.0) or shared LPAR of the group.
LPAR Unallocated Weight [lpar_unallocated_weight int]
    Number of variable processor capacity weight units currently unallocated within the LPAR group.
LPAR Variable Capacity Weight [lpar_variable_capacity_weight int]
    The priority weight assigned to this LPAR which controls how extra (idle) capacity is allocated to it. A weight of -1 indicates a soft cap is in place.
WPAR Identifier [wparid int]
    The identifier (key) of the AIX WPAR. An identifier of 0 indicates that the system is the WPAR container.
System Identifier [systemid string]
    An identifier that links all the AIX/VIO LPARs in a physical system (including WPARs).
LPAR Desired Entitled Capacity [lpar_desired_capacity float]
    Desired entitled CPU capacity for LPAR.
LPAR Desired Memory [lpar_desired_memory int]
    Desired amount of main memory for LPAR in MB.
LPAR Desired CPUs (lpar_desired_cpus, int): Desired number of virtual processors for the LPAR.
LPAR Minimum Capacity per Virtual Processor (lpar_minimum_capacity_per_vp, float): Minimum entitled capacity required per virtual processor for the LPAR.
command_status_failure (boolean, not displayed in UI): Flag that this node has command failures linked to it.
method_failure (boolean, not displayed in UI): Flag that this node has script failures linked to it.
method_success (string, not displayed in UI): The name of the script that succeeded in getting this data.
Access Method (access_method, string): The access method used by the script that succeeded in getting this data.
DiscoveryResult:Request:Pattern:Pattern
Primary:Inference:InferredElement:Host
Primary:Inference:InferredElement:FileSystem
DiscoveryResult:Metadata:Detail:CommandFailure
DiscoveryResult:Metadata:Detail:ScriptFailure
Creation
Whenever BMC Atrium Discovery scans an IP address, a Discovery Access Node is created. A directory listing can be obtained using the discovery.listDirectory function. A Directory Listing Node is created for the listing.
Removal
A Directory Listing Node is removed when the Discovery Access Node with which it is associated is destroyed through the Aging process.
- Path of the directory.
- Flag if request was made by system pattern.
- Look in both redirected and original file location on 64-bit Windows.
- Actual file location (if different to path).
- Discovery method.
- Time in seconds spent in discovery.
- When this request was made.
- Reason for failure, if any.
- Flag that this node has command failures linked to it.
- Flag that this node has script failures linked to it.
- The name of the script that succeeded in getting this data.
- The access method used by the script that succeeded in getting this data.
Discovery Access
Requesting Pattern
DirectoryListing: DiscoveryResult: Metadata: Detail: CommandFailure
DirectoryListing: DiscoveryResult: Metadata: Detail: ScriptFailure
Creation
Whenever BMC Atrium Discovery scans an IP address, a Discovery Access Node is created. A registry listing can be obtained using the discovery.listRegistry function. A Registry Listing Node is created for the listing.
Removal
A Registry Listing Node is removed when the Discovery Access Node with which it is associated is destroyed through the Aging process.
method_failure (boolean, not displayed in UI): Flag that this node has script failures linked to it.
method_success (string, not displayed in UI): The name of the script that succeeded in getting this data.
Access Method (access_method, string): The access method used by the script that succeeded in getting this data.
Discovery Access
Requesting Pattern
Creation
Whenever BMC Atrium Discovery scans an IP address, a Discovery Access Node is created. A Service List Node is created for the listing.
Removal
A Service List Node is removed when the Discovery Access Node with which it is associated is destroyed through the Aging process.
Discovery Method (discovery_method, string): The name of the discovery method used to connect to this host.
_system: Flag if request was made by system pattern.
Discovery duration (discovery_duration, int): Time spent in discovery (seconds).
Request time (request_time, date): The time that this request was made.
Failure reason (failure_reason, string): Reason for failure, if any.
command_status_failure (boolean): Flag showing whether this node has command failures linked to it.
method_failure (boolean): Flag showing whether this node has script failures linked to it.
method_success (string): The name of the script that succeeded in getting this data.
access_method (string): The access method used by the script that succeeded in getting this data.
Discovery Access
Requesting Pattern
The following section describes the scenarios in which a Discovered Directory Entry Node is created or destroyed. Directly Discovered Data Nodes are never updated.
Creation
Whenever BMC Atrium Discovery scans an IP address, a Discovery Access Node is created. A directory listing can be obtained using the discovery.listDirectory function. A Discovered Directory Entry Node is created for each file in the listing.
Removal
A Discovered Directory Entry Node is removed when the Discovery Access Node with which it is associated is destroyed through the Aging process.
File Name. Full path to the file including drive letter (Windows). Drive letter (Windows). DOS 8.3 format file name (Windows). File Type. Possible file types are:
- Directory
- File
- Symbolic Link
- Character Device
- Block Device
- Socket
- FIFO
- Solaris Door
- HP-UX Network Special
- BSD UnionFS White Out
- XENIX Semaphore
- XENIX Shared Memory
- Unknown
Links To (link, string): Path to symbolically linked file (UNIX symbolic links only).
Size (size, int): Size of the file.
Major Device Number (major, int): Major device number (UNIX devices only).
Minor Device Number (minor, int): Minor device number (UNIX devices only).
Owner (owner, string): Name of the owner of the file.
Group (group, string): Name of the group of the file (UNIX).
List of file permissions. Possible permissions are:
- OwnerRead
- OwnerWrite
- OwnerExecute (OwnerSearch for Directories)
- GroupRead
- GroupWrite
- GroupExecute (GroupSearch for Directories)
- WorldRead
- WorldWrite
- WorldExecute (WorldSearch for Directories)
- MandatoryLocking (Solaris only)
- SetUID
- SetGID
- Sticky
UNIX Permissions String (permissions_string, string): UNIX permissions string (UNIX).
Mode (mode, string): UNIX file mode (UNIX, octal format).
Last Modified (last_modified, string): Last modified date/time, in a platform-specific format.
Hidden File (boolean): Indicates a hidden file (Windows).
System File (system, boolean): Indicates a system file (Windows).
Software instance whose existence was inferred from this directory entry.
Software instance whose attributes have been partly or wholly determined from this directory entry.
Creation
Whenever BMC Atrium Discovery scans an IP address, a Discovery Access Node is created. A registry listing can be obtained using the discovery.listRegistry function. A Discovered Registry Entry Node is created for each discovered registry key.
Removal
A Discovered Registry Entry Node is removed when the Discovery Access Node with which it is associated is destroyed through the Aging process.
Software instance whose existence was inferred from this registry key.
Software instance whose attributes have been partly or wholly determined from this registry key.
Discovered Service
A Discovered Service Node stores information about an entry in a service listing.
Creation
Whenever BMC Atrium Discovery scans an IP address, a Discovery Access Node is created. A Discovered Service Node is created for each discovered service.
Removal
A Discovered Service Node is removed when the Discovery Access Node with which it is associated is destroyed through the Aging process.
Software instance whose attributes have been partly or wholly determined from this service.
Simple Identity

Discovered Process Nodes also contain one piece of information that is not Directly Discovered Data, called a simple_identity. This is an inferred attribute value. It is set by identify tables defined in the Pattern Language (TPL) to quickly and simply identify known processes. The simple_identity attribute is only intended as an initial hint about the purpose of a process; processes that are sufficiently important for complete identification have associated Software Instance Nodes created in the inferred model.
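Identify tables are written in TPL. The sketch below is illustrative only: the module name, table name, tag, and process entries are hypothetical, and the exact entry syntax should be checked against the Identify section of the TPL reference.

```tpl
tpl 1.5 module example.SimpleIdentities;

// Hypothetical identify table. Each entry sets simple_identity on
// Discovered Process nodes whose command matches the expression.
identify ExampleProcesses 1.0
    tags example;

    'Bourne shell' 1.0 := unix_cmd 'sh';
    'Secure Shell daemon' 1.0 := unix_cmd 'sshd';
end identify;
```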
Creation/update
The creation and update of Integration Point Nodes is done manually by the User using the BMC Atrium Discovery UI, or as a result of uploading pattern modules containing appropriate definitions blocks. See Integration points for more details.
Removal
Integration Points are removed manually by the user. When an Integration Point is destroyed its child SQL Integration Connection and SQL Integration Query nodes are also destroyed.
The name of the integration point. The description of what the integration point aims to achieve. The type of the provider. Whether the integration point is for dynamic (endpoint supplied) or static (endpoint fixed) connections.
Creation
An Integration Result node is created whenever a pattern makes a new request to a SQL database.
Update
Integration Result nodes are never updated.
Removal
Integration Results are removed when the related Provider Access Node or Discovery Access Node is destroyed.
Provider (provider, string): The name of the provider that handled this request to an integration.
Connection Parameters (connection_xxx, any type): The value of a given connection parameter passed from the pattern.
Query Parameters (query_xxx, any type): The value of a given query parameter passed from the pattern.
SQLIntegrationQuery
Pattern
IntegrationAccess
DiscoveryAccess
SQLResultRow
SessionResult
Session details
Creation
SQL Result Row nodes are created when the related Integration Result Node is created, as the result of running a SQL query.
Update
Removal
SQL Result Rows are removed when the related Integration Result Node is destroyed.
Creation
A Provider Access node is created when the first query is made to a database integration at that endpoint during the course of a Discovery Run. A Provider Access Node is always created and stored, whether individual Integration Result Node queries succeed or fail.
Update
The endpoint attribute on a Provider Access Node is never updated. The hard_failure attribute may become true if a newly performed query represents a hard failure.
Removal
Provider Access nodes are removed when the Discovery Run node they are linked to is destroyed. See Discovery Run Node for more details. When a Provider Access node is destroyed, its related Integration Result Nodes and their children are also destroyed.
Provider Access Key (key, string)
Provider Access Type (provider_type, string)
Endpoint (endpoint, string): The host that was accessed.
Dynamic Connections (is_dynamic, boolean): Whether the integration point is for dynamic (endpoint supplied) or static (endpoint fixed) connections.
Hard Failure (hard_failure, boolean): True if there have been hard failures for this Provider Access type.
Retrieved by scanning appliance (is_consolidation, boolean): If present, always has the value True, indicating that the data for this Provider Access originated from a scanning appliance before being consolidated on this one.
Discovery Run
Discovery Run.
Discovery Access
Discovery Access.
Session Results
Session details.
Rule Module Node describes the Rule Module Node kind.
Pattern Configuration Nodes describes the Pattern Configuration Node kind.
Pattern Definitions Node describes the Pattern Definitions Node kind.
Pattern Define Nodes describes the Pattern Define Node kind.
Pattern Node
The Pattern Node is the node that represents an individual pattern, defined in the Pattern Language. For each pattern in a TPL file there is a Pattern Node which contains meta-data about the pattern, such as its name, description and the products that it is responsible for identifying. Patterns have relationships to all of the nodes that they are responsible for maintaining so that the nodes can be related back to where they originated. See Configipedia for more information about BMC Atrium Discovery patterns.
cdm_software_server_type (list:int): Not displayed in UI.
URLs (urls, list:string): URLs relating to the pattern.
TKN Identity (tkn_identity, list:string): Identity in the Technology Knowledge Network.
Additional Attributes (additional_attributes, list:string): Additional attributes that may be set on nodes maintained by the pattern.
Maintained Detail
Not displayed in UI
Pattern: Pattern: Maintainer: Element: SupportDetail
Pattern: Pattern: Maintainer: Element: GenericElement
Pattern: Pattern: Maintainer: Element: Collection
Pattern: Pattern: Maintainer: Element: StorageCollection
Pattern: Pattern: Maintainer: Element: Storage
Pattern: Pattern: Maintainer: Element: File
Pattern: Dependant: Dependency: DependedUpon: Pattern
Pattern: DependedUpon: Dependency: Dependant: Pattern
Pattern: Overrider: Override: Overridden: Pattern
Pattern: Overridden: Override: Overrider: Pattern
Pattern: Contributor: Inference: InferredElement: SoftwareInstance
Pattern: Old: Deprecated: New: Pattern
Maintained Collection
Maintained Storage
Maintained File
Requires
Required patterns.
Required By
Overrides
Overridden patterns.
Overridden By
Overriding patterns.
Not displayed in UI
Software instance whose attributes have been partly or wholly determined from this pattern.
Deprecating pattern
Deprecated pattern
Pattern: New: Deprecated: Old: Pattern
Pattern: PatternWithError: Error: Error: ECAError
Pattern: Pattern: PatternExecution: PatternExecution: PatternExecution
Pattern: ResourceUser: ResourceUse: Resource: PatternDefine
Pattern Errors
Manual Runs
Pattern resource.
Pattern
PatternModule: PatternModule: PatternModuleContainment: Pattern: Pattern
PatternModule: PatternModule: PatternModuleContainment: PatternConfiguration: PatternConfiguration
PatternModule: PatternModule: PatternModuleContainment: PatternDefinitions: PatternDefinitions
PatternModule: PatternModule: PatternModule: PatternPackage: PatternPackage
PatternModule: Dependant: Dependency: DependedUpon: PatternModule
PatternModule: DependedUpon: Dependency: Dependant: PatternModule
Patterns.
Pattern Configuration
Pattern configurations.
Pattern Definitions
Pattern definitions.
Pattern Package
Packages.
Depends On
Depended Upon By
The relationships of a Pattern Package Module Node are described below.
UI name: Pattern Module. Relationship: PatternPackage: PatternPackage: PatternModule: PatternModule: PatternModule. Description: Pattern Modules in this package.
Creation
Pattern Configuration nodes are created as a result of activating a Pattern Module that contains a configuration block. As such the creation of Pattern Configuration nodes is driven by the management of Patterns. See Pattern management for details.
Update
The update of a Pattern Configuration node is driven by the User modifying the attributes in the UI.
Removal
Pattern Configuration nodes are removed when their associated Pattern Modules are destroyed.
UI name PatternModule
Relationship
PatternConfiguration: PatternConfiguration: PatternModuleContainment: PatternModule: PatternModule
PatternConfiguration: Old: Deprecated: New: PatternConfiguration
PatternConfiguration: New: Deprecated: Old: PatternConfiguration
Deprecating Configuration
Deprecated Configuration
Creation
Pattern Definitions nodes are created as a result of activating a Pattern Module that contains a definitions block. As such the creation of Pattern Definitions nodes is driven by the management of Patterns. See Pattern management for details.
Update
Pattern Definitions nodes are never updated as they are defined in Pattern Modules and the act of editing a Pattern Module creates a new instance.
Removal
Pattern Definitions nodes are removed when their associated Pattern Modules are destroyed.
The relationships of a Pattern Definitions Node are described below.
Destination: PatternModule. Relationship: PatternDefinitions: PatternDefinitions: PatternModuleContainment: PatternModule: PatternModule. Description: Pattern module.
PatternDefinitions: Old: Deprecated: New: PatternDefinitions
PatternDefinitions: New: Deprecated: Old: PatternDefinitions
PatternDefinitions: PatternDefinitions: PatternModuleContainment: PatternDefine: PatternDefine
PatternDefinitions: PatternDefinitions: IntegrationImplementation: IntegrationPoint: IntegrationPoint
PatternDefinitions
PatternDefinitions
PatternDefine
IntegrationPoint
Implements resource
Creation
Pattern Define Nodes are created as a result of activating a Pattern Module that contains a definitions block that has one or more define blocks. As such the creation of Pattern Define nodes is driven by the management of Patterns. See Pattern management for details.
Update
Pattern Define Nodes are never updated as they are defined in Pattern Modules and the act of editing a Pattern Module creates a new instance.
Removal
Pattern Define Nodes are removed when their associated Pattern Modules are destroyed.
Pattern Define Version (version, int): Version.
Pattern Define Description (description, string): Description.
Pattern
IntegrationQuery
Implements resource
Name. Version. Description. Date of compilation. Date of last modification. Rule package name.
Description Rule modules that are required. Rule modules that are dependants.
Conjecture Nodes
This section describes the BMC Atrium Discovery Conjecture Nodes. Conjectured information is derived from observed evidence using specialized algorithms. For example, Host Groups are Conjecture Nodes: there is no attribute on individual hosts stating that they belong to a particular group, but by using observed network communication between hosts, BMC Atrium Discovery can assign hosts to logical groups. AutomaticGroup Node describes the node kind that is used to model host groups.
AutomaticGroup Node
An Automatic Group is the term used for a number of hosts that are grouped according to observed network communications using the Automatic grouping feature. The sole purpose of an Automatic Group node is to build the visualizations that are part of the feature. Automatic Group nodes are only created or destroyed if the Enable Automatic Grouping setting is enabled on the Discovery Configuration page. Automatic Group nodes are stored in the Conjecture partition.
Creation
Automatic Group Nodes are created at the end of every discovery run, or when the Update button in the automatic grouping UI is clicked.
Update
Automatic Group nodes are not updated.
Removal
Automatic Group nodes are destroyed at the end of every discovery run, or when the Update button in the automatic grouping UI is clicked. New Automatic Group nodes are then created taking into account any changes to hosts and network communications.
Members
AutomaticGroup: List: List: Member: Host
AutomaticGroup: AutomaticGroup: ObservedCommunication: HostGroup: HostGroup
Communicating With
Legal Information
Copyright 2012 BMC Software, Inc. BMC, BMC Software, and the BMC Software logo are the exclusive properties of BMC Software, Inc., are registered with the U.S. Patent and Trademark Office, and may be registered or pending registration in other countries. All other BMC trademarks, service marks, and logos may be registered or pending registration in the U.S. or in other countries. All other trademarks or registered trademarks are the property of their respective owners. The information included in this documentation is the proprietary and confidential information of BMC Software, Inc., its affiliates, or licensors. Your use of this information is subject to the terms and conditions of the applicable End User License agreement for the product and to the proprietary and restricted rights notices included in the product documentation. BMC Software, Inc. 5th Floor 157-197 Buckingham Palace Road London SW1W 9SP United Kingdom Tel: +44 (0) 800 279 8925 Email: customer_support@bmc.com Web: http://www.bmc.com/support
Table of Contents
1 Table of Contents 2 The Pattern Language TPL 2.1 What's new in TPL 2.2 TPL recommendations 2.3 TPL file structure 2.4 Import 2.5 Metadata 2.6 Pattern Overview 2.6.1 Constants 2.6.2 Triggers 2.6.3 Body 2.6.4 User defined functions 2.6.5 Removal 2.7 Pattern Configuration 2.8 Static Tables 2.9 Enumerations 2.10 Identify 2.11 Definitions 2.11.1 SQL database integration
2.11.2 SQL database discovery 2.12 Node Names Types and Keys 2.13 Syncmapping block 2.14 TPL Keywords 2.15 TPL Grammar 2.16 String Escape Characters 2.17 Using TPL to enrich discovered data 2.18 Adding custom attributes 3 Legal Information
model.suppressRemovalGroup

A new void keyword is added. Assigning void to an attribute in the model deletes the attribute. The statement node['attribute_name'] := void; is the equivalent of del node['attribute_name'].

String and list slicing
TPL recommendations
The core purpose of TPL is to make it easy to describe the most common structures to be modeled, while remaining powerful enough to describe the vast majority of modeling situations. That power, however, does not prevent you from creating patterns that cause unwanted or unexpected behavior. This page details recommendations for using TPL; following them will help you avoid such behavior.
// This is a comment
The first non-blank, non-comment line must consist of a declaration of the form:
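A sketch of such a declaration follows; the module name is illustrative, and the TPL version shown is one valid example:

```tpl
tpl 1.7 module example.module.name;
```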
where module.name is a dot-separated name for the module in the module hierarchy, and 1.n specifies the version of TPL to which the module conforms. The current version of TPL is 1.7, corresponding to BMC Atrium Discovery 9.0. This document highlights the version of TPL in which each feature became available. Patterns may fail to activate if they use features that are not available in the version declared in the module. Following the module declaration, there can be any number of the following kinds of declaration, in any order that satisfies references between declarations:
- Import statements are used to import referenced declarations from other files.
- Pattern declarations define patterns, which are responsible for identifying entities and building the model.
- Table declarations are used to create look-up tables to be used in patterns.
- Identify table declarations define active look-up tables that automatically set identifying attributes on matching nodes.
- Configuration declarations define end-user editable configuration items.
- Definitions blocks are used to specify integration queries.
The statements and declarations are described in detail in the following sections.
Common declarations
A number of kinds of declarations can occur throughout pattern files.
Strings
Literal strings can be declared in four ways:
- Started with a single quote character, terminated by an unescaped single quote; may not contain newline characters.
- Started with a double quote character, terminated by an unescaped double quote; may not contain newline characters.
- Started with three single quote characters, terminated by three unescaped single quotes; may include newline characters.
- Started with three double quote characters, terminated by three unescaped double quotes; may include newline characters.
Triple-quoted strings are often used for multi-line string literals, especially for description strings in declarations. The escape character for strings is the backslash "\". A backslash preceding a quote prevents it from terminating a string. Strings may be "qualified" by prefixing them with a single identifier. The supported qualifiers are:
- No qualifier: characters prefixed with backslashes are converted to special characters using the same rules as Python unicode strings, as shown in String Escape Characters. String interpolation using % characters occurs, as described in the section on string interpolation.
- raw: backslashes do not create special characters, and string interpolation does not occur. Backslashes preceding a closing quote character still prevent closure of the string, but the backslash remains part of the string; that is, in the string raw 'test\'quote' both the backslash and the quote appear in the string value.
- regex: indicates that the string value is a regular expression. The contents are handled in the same way as raw strings.
- windows_cmd: the string contains the name of a Windows executable to be identified in a process command line. It turns into a suitable regular expression for identifying the process, formed by prefixing the given string with "(?i)\b" and suffixing it with "\.exe$".
- unix_cmd: the string contains the name of a Unix executable to be identified in a process command line. It turns into a suitable regular expression for identifying the process, formed by prefixing the given string with "\b" and suffixing it with "$".
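The quoting styles and qualifiers above can be illustrated with a sketch like the following; all the constant names and values are hypothetical, and a constants section is used only as a convenient context:

```tpl
constants
    single_q  := 'single-quoted string';
    double_q  := "double-quoted string";
    triple_q  := '''a multi-line
string''';
    raw_str   := raw 'C:\Program Files\Example'; // backslashes kept literally
    ver_regex := regex 'version (\d+\.\d+)';     // treated as a regular expression
    win_cmd   := windows_cmd 'example';          // becomes (?i)\bexample\.exe$
    unix_c    := unix_cmd 'exampled';            // becomes \bexampled$
end constants;
```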
Numbers
Literal numbers are integers. Integers are in base 10 unless prefixed with 0x in which case they are hexadecimal. Integers are signed 64 bit values.
Booleans
Literal booleans are true and false.
None
The value none represents the absence of any other value.
void
Assigning void to an attribute in the model deletes the attribute. The statement node['attribute_name'] := void; is the equivalent of del node['attribute_name'].
Identifiers
Identifiers used for names of variables, patterns, tags, and so on consist only of ASCII characters. They start with a letter or underscore character and contain letters, underscores and numbers. Names starting with two underscore characters are reserved for the system. If it is necessary to declare an identifier with a name that clashes with a keyword, the keyword can be used prefixed with a $ character.
Tags
Many declarations contain tags, used to classify them with arbitrary names. Tags are useful when searching for declarations. Tags have the format:
Tag values must be valid identifiers. No meaning is attached to tags by the pattern language.
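As seen later in the pattern overview section, a tags entry is a comma-separated list of identifiers; the tag names in this sketch are made up:

```tpl
overview
    tags example_product, web_server;
end overview;
```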
Data types
All data values in TPL have a type, such as text strings and integers. Patterns must be careful to be consistent in the use of types. It is an execution-time error to attempt to add an integer to a text string, for example. When attributes are set on datastore nodes and relationships, the attributes are set with the types of the supplied values, regardless of any type definitions in the taxonomy. The types supported in TPL are: text string, integer, time, list and table. Each list or table member has its own type; usually each member has the same type, but lists and tables are permitted to have mixed-type members. List is a valid list member type, so lists may be arbitrarily nested.
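As an illustration, list literals nest freely and tables map keys to values. The constant names here are hypothetical, and the table literal syntax shown is an assumption to be checked against the TPL grammar:

```tpl
constants
    numbers := [1, 2, 3];             // list of integers
    mixed   := ['one', 2, [3, 4]];    // mixed-type members; lists can nest
    lookup  := table(a := 1, b := 2); // table mapping keys to values (assumed syntax)
end constants;
```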
Import
Simple import statements take the following form:
This declaration makes the named declaration in the specified module available. Module names are dot-separated sequences of names. All imports are absolute, not relative to the importing module's name. For example, the declaration
finds a module named "a.b.c" and imports the item (pattern, for example) with the name "Example". Multiple named declarations can be accessed in one statement by separating them with commas:
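Both forms look like the following sketch; the module and declaration names are illustrative:

```tpl
// Import a single named declaration from module a.b.c.
from a.b.c import Example 1.0;

// Import several declarations in one statement.
from a.b.c import First 1.0, Second 2.3;
```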
Imported names must not clash with other names declared or imported in the same file. To avoid clashes, declarations can be renamed on import using an as clause:
from one import Example 1.0; from two import Example 2.3 as ExampleTwo;
When importing, it is not immediately an error to import a name that is not defined in the corresponding module, or to import from a non-existent module. It only becomes an error when the imported name is used in a context that requires it exists. In particular, when a pattern overrides another pattern (see the section on overrides), it is not an error if the overridden pattern does not exist. For all other uses of an imported name, it is an error if the import fails. Imported names are not available for further import. So, for example, if module B imports X from module A, module C cannot import X from B - it must import it directly from A.
requirements are still met. For example, if you import version 1.1, the imported pattern may be version 1.1, 1.2, and so forth, but NOT version 1.0, 2.0, 2.1, 3.0, and so forth. The major version must be the same and the minor version must be at least the version specified. If the version is not supported, the activation fails. Most of the time, you will not need to update the import version. It might be necessary only if there is an incompatible change in the imported pattern (represented by a major version change), or if you modify your pattern so that it relies on additional behaviour of the updated pattern. See Pattern overview for more information about how patterns are versioned.
Recommendation Patterns contain tags that have information about the most recent Technology Knowledge Update (TKU) that updated the pattern (for example, TKU_2010_04_01). Therefore, you can perform a search for a specific TKU version to find patterns that were last changed in a specific version of TKU. Typing that value in the search box displays a list of patterns (and the containing pattern modules).
Version numbers
All top-level declarations are specified with a version number in major.minor format. When importing declarations, the version number must be specified. It is an error if the declaration imported is not compatible with the version specified in the import statement. The versions are compatible if the major number is identical and the minor number is at least as large as the requested version. So, for example, in
the "Example" declaration from the "test" module is compatible if it has version 2.3 or 2.4, but not if it has version 2.2, 1.8 or 3.0.
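The import referred to in this example would read:

```tpl
from test import Example 2.3;
```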
Circular imports
Circular imports are not permitted. It is an error for file B to import from file A when file A is still being processed.
Metadata
Metadata can be provided for a number of declarations, describing the purpose of the declarations. Metadata attributes can be defined for the pattern module as a whole, and for individual patterns and identify tables in the module. Metadata declarations take the form:
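A metadata block is a sequence of attribute assignments between metadata and end metadata. A minimal sketch, with hypothetical attribute values:

```tpl
metadata
    products   := 'Example Product';   // hypothetical product name
    publishers := 'Example Publisher'; // hypothetical publisher name
end metadata;
```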
When pattern modules are loaded into BMC Atrium Discovery, the metadata attributes are stored along with the pattern modules in the data store, allowing them to be searched for and used in reports. There is no fixed set of metadata attributes; any attributes can be set as appropriate for the pattern. However, the following attributes are recommended for all patterns and pattern modules:
- products (expected): The names of the product(s) identified by the pattern.
- product_synonyms (optional): Other names for the product(s).
- known_versions (optional): Versions of the product(s) that are known to exist.
- publishers (expected): The publisher(s) of the product(s) at the time of pattern creation.
- publisher_synonyms (optional): Other names the publishers are known by.
- oem_publishers (optional): Any publishers that OEM the product(s).
- prior_publishers (optional): Publishers previously related to the product(s), in case the product or publisher has been bought.
- product_families (optional): Publisher-specific product families the product(s) belong to.
- product_editions (optional): If the product is available in different editions, which ones the pattern covers.
The software categories the product is in.
Web URLs of relevance.
For patterns from the Technology Knowledge Network, the identity in the TKN database.
Note that all of the attributes are lists of strings (in TPL 1.4, numbers are also permitted), since a single pattern or pattern module may be relevant to a number of different products. Attributes are tagged as "expected" if well-written patterns would be expected to have them. All patterns available through TKN will have all of the expected attributes.
Pattern Overview
Patterns are the main declarations in the pattern language. They are used to describe the structures of applications, products and other modeled entities, and encapsulate rules about how to identify them. The following sections provide descriptions of the various features of patterns: Overview Constants Triggers Body Removal
Pattern declarations
Pattern declarations have the form:

pattern name version description
    [metadata
        metadata_entries
    end metadata;]
    overview
        overview_entries
    end overview;
    [constants
        constant_definitions
    end constants;]
    triggers
        trigger_definitions
    end triggers;
    body
        body_details
    end body;
    [removal removal_condition
        removal_details
    end removal;]
end pattern;

The name, version and description are mandatory, for example:

pattern Xvnc 1.0
    """
    X VNC server pattern.

    Creates one SoftwareInstance per running VNC server.
    """
    ...
On minor pattern version updates, the new pattern becomes the maintainer for the nodes. On a major version update, nodes maintained by the previous version continue to be maintained by the previous pattern; when the older pattern is removed, the nodes it is maintaining are destroyed.
Overview section
The overview is required. It contains information about the pattern and the entities it creates. It must contain a tags entry, and can have optional overrides and requires entries.
Tags
tags tag [ , tag ... ];

Declares tags relating to the pattern. At least one tag must be present. See the section on tags.
Overrides
overrides pattern_name [ , pattern_name ... ]; Indicates that the pattern overrides one or more other patterns. In that case, the other patterns are disabled. The named pattern must be previously declared in the file, either with a pattern declaration, or via an import. It makes little sense to override a pattern in the same file, so usually the overridden pattern will be imported. This functionality is used to modify standard Tideway-produced patterns in the field. It is not an error to name a non-existent pattern in an overrides clause. Since other patterns may rely on the behaviour of the overridden pattern, the overriding pattern should take care to construct the same form of data model as the overridden pattern.
Requires
requires pattern_name [ , pattern_name ... ]; Declares that this pattern requires the effects of the other named pattern in some way. The named pattern must be previously declared in the file, either with a pattern declaration, or via an import. It is most useful to declare requirements for patterns in other files, so usually the required pattern will be imported.
Constants
Following the overview is an optional constants section. Each constant takes the form:
name := expression;
The name is a valid variable name; the expression is a literal or any expression that depends only on other constants previously declared in the pattern. Named constants are available in the subsequent trigger, body and removal sections. Constant values are not available outside the pattern in which they are declared. Constants should be used for values that are likely to be changed by pattern authors, such as regular expressions to extract versions from paths, and so on. They should not be used for values that are expected to be changed by normal users of the pattern; in those situations, a Pattern Configuration block should be used. For example, a constants section might define regular expressions to be used in the pattern body:
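A minimal sketch of such a section (the constant names and regular expressions here are illustrative, not taken from the product documentation):

```tpl
constants
  // Hypothetical regexes a pattern author might tune
  version_regex := regex "httpd[-/](\d+\.\d+)";
  path_regex    := regex "^(/[^ ]+)/bin/httpd";
end constants;
```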
From TPL version 1.5 (BMC Atrium Discovery 8.2) it is possible to reference constants defined in one pattern from other patterns. In the first example, the pattern Tomcat 1.0 defines the constant si_type, which other patterns can reference as Tomcat.si_type.
tpl 1.5 module ApacheFoundation.Tomcat;

pattern Tomcat 1.0
  '''Defines constants which we want to use elsewhere.'''
  [...]
  constants
    si_type := 'Apache Tomcat Application Server';
  end constants;
  [...]
end pattern;
In the second code sample, the Tomcat 1.0 pattern is imported, and the ExtendedTomcat 1.0 pattern references Tomcat.si_type.
tpl 1.5 module ApacheFoundation.ExtendedTomcat;

from ApacheFoundation.Tomcat import Tomcat 1.0;

pattern ExtendedTomcat 1.0
  '''Use the imported constant.'''
  [...]
  triggers
    on process := DiscoveredProcess where type = Tomcat.si_type;
  end triggers;
  [...]
end pattern;
Triggers
Triggers are where the active part of patterns begins. Triggers define the conditions in which the body of the pattern is evaluated. From the point of view of pattern authors, conditions are based on data events: creation or modification of nodes in the datastore. Triggers take the form:

triggers
  on name := node_kind [change_kinds] where conditions;
end triggers;
In a simple example:
triggers
  on process := DiscoveredProcess where cmd matches regex "Xvnc";
end triggers;
The name assigned to the trigger is made available as a variable in the pattern body.
Trigger conditions
Trigger conditions consist of a subset of the more general where clauses available in searches (described later in the section on search expressions). The only operations supported are simple attribute comparisons of the form:

attribute operator value
and combinations of such comparisons using and and or. The only operators available are:

=        Exact equality
matches  Regular expression match
exists   Attribute existence check; exists does not take an expression.
Note that, to avoid "greedy" triggers that fire on almost all data, not expressions are not permitted in trigger conditions. Comparison expressions must be literal values, constants, or expressions based entirely on literal values and constants.
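For illustration, a sketch of a compound trigger condition (the username attribute used here is an assumption, not taken from this section):

```tpl
triggers
  // Fires on processes whose command line matches "httpd"
  // AND whose owner is root.
  on process := DiscoveredProcess where
      cmd matches regex "httpd" and username = "root";
end triggers;
```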
Trigger kind
By default, patterns only trigger when a node is created. In some circumstances, it may be necessary to only trigger when a node is modified, not when it is created. It may also be necessary to trigger if a node is superseded / destroyed. Patterns can trigger when a node is confirmed to exist, even if it is not modified. Such a pattern triggers when a pattern confirms the existence of a node using one of the Model node assertion functions and, for Host nodes, when a Host is confirmed to exist during a scan. Triggers can specify a comma-separated set of "change kinds". The kinds are created, confirmed, modified, and destroyed. For example:
triggers
  on process := SoftwareInstance confirmed where type matches regex "Apache";
end triggers;
The pattern will only trigger when a matching SoftwareInstance is confirmed to exist, not when a matching SoftwareInstance is first created. More than one change kind can be selected by separating them with commas, e.g.:
triggers
  on process := DiscoveredProcess created, modified where cmd matches regex "Xvnc";
end triggers;
The default change kind is created. When triggering on a node being modified, great care must be taken if the triggering node is modified in the pattern, since the modification inside the pattern causes the pattern to trigger again.
Relationship triggers
To trigger on relationships rather than nodes, simply use a prefix of relationship to the kind in the trigger:
triggers
  // Trigger whenever a HostContainment relationship is created
  on rel := relationship HostContainment;
end triggers;
Existing data
Note that patterns normally only trigger when data in the datastore changes (or is confirmed by another pattern, for the confirmed trigger kind). If nodes matching a pattern's trigger conditions already exist at the time the pattern is activated, the pattern will not execute for those nodes. Patterns can be executed against existing nodes using Manual pattern execution.
Body
Once a pattern has triggered, its body is executed. The body consists of assignments, conditionals, loops, function calls, and termination statements. The statements in the pattern body are not guaranteed to be executed in strict sequence. The system guarantees that dependencies between statements are properly resolved, but statements with no dependencies can be executed out of order or concurrently. The following topics are covered: Assignments; Conditionals; Loops; Body termination; Value expressions; Logical expressions; Node scoped values; Lists; Functions; String interpolation; Slicing; Search expressions.
Assignments
Assignments take the form:

name := expression;

where the name must be a valid variable name and the expression must adhere to one of the valid expression kinds described in this section. It is an error to assign to a name that has been declared as a constant in the same pattern. Variables are dynamically typed according to the type of the expression. It is not an error to rebind a variable to a value with a different type from its previous value.
void
The void keyword was added in TPL 1.6 (BMC Atrium Discovery 8.3). Assigning void to an attribute in the model deletes the attribute. The statement:

node['attribute_name'] := void;

is the equivalent of:

del node['attribute_name'];
Conditionals
There is just one kind of conditional statement, the if:

if expression then
  ...
[elif expression then
  ...]
[else
  ...]
end if;

There may be any number of elif clauses; the else clause is optional. The expression may have any type: the values false, none, empty string, empty list, empty set, and numeric zero are considered false; all other values are considered true.
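As a sketch (the variable names are illustrative):

```tpl
if not version then
  // empty string and none are both falsy
  log.warn("No version found for %process.cmd%");
elif version matches regex "^2\." then
  supported := true;
else
  supported := false;
end if;
```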
Loops
There is only one looping construct, the for loop:

for variable in expression do
  ...
end for;

The expression must be a list or set; it is a runtime error if it is not. The loop body executes for each item in the list or set; in each iteration the variable name is set to the corresponding list / set member. Looping can be terminated early with a break statement, or the next iteration started before the end of the loop body with the continue statement, e.g.:

for item in collection do
  if item = 5 then
    break;
  elif item = 10 then
    continue;
  end if;
  doSomething(item);
end for;

On exit from a loop, the loop variable is no longer valid. The loop variable cannot be modified in the body of the loop.
Body termination
After triggering, it is common for a body to use some "existence criteria" to confirm that the entity modeled by the pattern does indeed exist. It can be useful to terminate the execution of the pattern body prematurely, using the stop statement, for example:

if not existence_condition then
  stop;
end if;

If at all possible, patterns should use trigger conditions to avoid triggering when not necessary, rather than using stop in a condition: it is much more efficient to avoid triggering a pattern than to trigger and then stop. If this cannot be avoided, the stop condition should be tested as early as possible, especially if additional discovery can be avoided by doing so; otherwise considerable load may be put on the system.
Value expressions
There are a number of different kinds of expressions used in assignments and other declarations. Simple value expressions consist of literal values (as described in the section on Common declarations) and variable names. Numeric values can be involved in arithmetic expressions. Unary expressions prefix a value:

-  Negates the value
+  Has no effect on numeric arguments
~  Bitwise inverse
Infix arithmetic operations are the following:

*   Multiplication
/   Division
%   Modulo
+   Addition
-   Subtraction
<<  Bitwise left shift
>>  Bitwise right shift
&   Bitwise and
^   Bitwise exclusive or
|   Bitwise or
The *, / and % operators take binding precedence over + and -, which take precedence over << and >>, which take precedence over the other operators. Parentheses can be used to group sets of operations to override the precedence rules. String values may be concatenated using the + infix operator.
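A brief sketch of how precedence and parentheses interact (values are illustrative):

```tpl
a := 2 + 3 * 4;          // multiplication binds tighter: 14
b := (2 + 3) * 4;        // parentheses override precedence: 20
s := "v" + "1" + ".2";   // string concatenation with +
```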
Logical expressions
Logical expressions are expressions that evaluate to true or false. In boolean operations, values are considered true or false according to the same rules as for if statements, as described in section Conditionals. Logical expressions can be combined with the following operators:

a < b, a <= b, a > b, a >= b, a = b, a <> b
    For numeric values, numeric comparisons. For strings, character (lexicographical) comparisons.

a has substring b, a has subword b
    Sub-string and sub-word tests. a and b must be strings, except that a can be none, in which case the test result is false.

a matches b
    True if string a matches regular expression b. If a is none, the test result is false.

not a
    True if a is considered false; false otherwise.

a and b
    True if both a and b are true; false otherwise.

a or b
    True if either a or b, or both, are true; false if they are both false.

Binding precedence for the operators follows the order they are given above. The operators before not are grouped together with the highest precedence, followed by not, then and, then or.
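For illustration, a combined logical expression (the username attribute is an assumption):

```tpl
// True when the command line mentions Apache and the process
// is not owned by the SYSTEM account.
is_candidate := process.cmd matches regex "apache"
                and not (process.username = "SYSTEM");
```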
In BMC Atrium Discovery the naming convention for attributes is to use only the lowercase letters a to z with underscores (_) to separate words. Attribute names must not start or end with a space. Attribute names in searches can only be specified using the same characters. If you use other characters, for example a hyphen (-), your search will cause an error.
Attribute names cannot be changed. Once an attribute has been created, you cannot change its name.
Lists
Literal lists are declared using [ and ] characters, with a comma-separated sequence of expressions inside, e.g.:

values := [ "one", "two", "three" ];

Lists can be dereferenced using the [] operator:

first := values[0];

List indexing starts from zero. Lists may be concatenated together with the + operator. The result of concatenating two lists is a new list; the original lists are left unchanged. From TPL 1.2, items can be appended to lists with the list.append function:

values := [ "one", "two" ];
list.append(values, "three");

You can also create a list using the list.list function:

values := list.list("one", "two", "three");
Functions
Functions are used to interact with discovery, the data store, and external systems. Functions can be used as statements or as expressions, depending on whether there is a return value. Functions that return a value can be used as statements, in which case the return value is discarded. Functions are defined outside of the pattern language. Function names are usually scoped, using the dot character to separate the scope from the function name. The scope is used to group related functions together. The currently defined scopes are: log, text, number, regex, xpath, table, discovery, model, email, time, and inference, plus the global scope. Functions are called as follows:

scope.name(arguments)

Function arguments are comma-separated sequences of expressions or assignments. Assignment-style arguments are used to provide named arguments. There are three different ways that functions can process their arguments.

Positional arguments
In functions with positional arguments, the order in which arguments are provided is significant. Arguments can be optional. Optional arguments are populated in the order they are specified: if a function takes two optional arguments, it is impossible to set the value of the second without also providing a value for the first.

Positional and named arguments
Functions with positional and named arguments take a number of positional arguments, followed by a number of optional named arguments. Since the named arguments are identified by their names, any of the optional arguments can be set. Arguments are given names using assignment (name := value) syntax.

Implicitly and explicitly named arguments
Functions whose arguments are all optional named arguments can use implicitly named arguments. In that case, arguments can be provided either with assignment syntax, or as plain variable names. When plain variable names are used, the argument name is taken from the variable name, and its value from the variable's value.
In a function with implicitly named arguments, it is an error to provide non-assignment arguments that are expressions, rather than plain variable names. The functions are grouped as follows: Global scope functions, Log functions, Text functions, Number functions, Regex functions, XPath functions,
Table functions, Discovery functions, Model functions, Email functions, Time functions, and Inference functions. All functions are listed on the Functions page along with a one-line overview of each.
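The three argument-passing styles can be sketched with functions described in this chapter (the exact values are illustrative):

```tpl
// Positional arguments: order matters.
parts := text.split("a,b,c", ",");

// Positional plus named arguments: no_match is named.
ver := regex.extract(cmd, regex "(\d+\.\d+)", raw "\1",
                     no_match := "unknown");

// Implicitly named arguments: the variable name supplies
// the argument name in expand().
instance := "PROD";
db_name := expand(raw "db_%instance%_1", instance);
```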
String interpolation
Unqualified literal strings take part in string interpolation. Values to be interpolated are surrounded by % characters. Values are accessed from the pattern's variables and constants, e.g. in:

foo := "world";
bar := "Hello %foo%!";

the variable bar is set to the string "Hello world!". Node scoped values can also be interpolated; for example, a process's command and arguments can be combined with an interpolation:

cmd_and_args := "%process.cmd% %process.args%";

To embed a literal percentage symbol in a string, use %%:

increase := 75;
report := "The size has increased by %increase% %%";

Values interpolated into strings must be strings or numbers. Sometimes it is useful to perform interpolation on a pre-existing string, particularly to specify patterns in constants. This can be achieved with an expand expression, e.g.:

template := raw "db_%instance%_1";
db_name := expand(template, instance := some_node.instance_name);

Note how the use of the raw string suppresses the interpolation that would normally occur in setting the template variable. The values to interpolate into the string in the expand are implicitly and explicitly named arguments, as described in the section on functions above.
Slicing
Slicing can be applied to lists and strings, and works in the same way as in Python. The first form is:

value[lowerbound:upperbound]

which takes items from index lowerbound (inclusive) to upperbound (exclusive). If a bound is not specified, it defaults to the start (lower) or end (upper) of the value. It is also possible to use negative values, which is equivalent to writing size(value) + bound; that is, value[:-1] is equivalent to value[:size(value)-1]. For example:

value := 'an example string';

value[3:]   is 'example string'
value[:3]   is 'an '
value[-3:]  is 'ing'
value[:-3]  is 'an example str'

The second form, value[lowerbound:upperbound:step], takes a step value (which, if not present, is equivalent to a step of 1):

value[::2]  is 'a xml tig'
value[::3]  is 'aemetn'
value[::-1] is 'gnirts elpmaxe na'
Search expressions
Searches using the BMC Atrium Discovery search service are valid expressions in pattern bodies, with a number of minor modifications.
Functions
This page lists all TPL functions, grouped by function type.
Log functions
All log messages that are output automatically include the name of the pattern performing the log action.

log.debug     Logs the given message at debug level.
log.info      Logs the given message at info level.
log.warn      Logs the given message at warn level.
log.error     Logs the given message at error level.
log.critical  Logs the given message at critical level.
Text functions
text.lower       Returns the lower-cased version of the string argument.
text.upper       Returns the upper-cased version of the string argument.
text.toNumber    Converts its string argument into a number.
text.replace     Returns a modified version of the string formed by replacing all occurrences of the string old with new.
text.split       Returns a list consisting of portions of the string split according to the separator string, where specified.
text.strip       Strips unwanted characters from the start and end of the given string.
text.leftStrip   Equivalent to text.strip, but only strips from the left side of the string.
text.rightStrip  Equivalent to text.strip, but only strips from the right side of the string.
text.hash        Returns a hashed form of the string, generated with the MD5 hash algorithm.
text.ordinal     Returns the ordinal value of the string argument.
text.join        Returns a string containing all items in a list of strings joined with the specified separator.
Number functions
number.toText  Converts the integer number to a text form.
number.range   Generates a list containing 0 to number - 1.
Regex functions
regex.extract     Returns the result of extracting the regular expression from the string, optionally with a substitution expression and a specified result if no match is found.
regex.extractAll  Returns a list containing all the non-overlapping matches of the pattern in the string.
XPath functions
xpath.evaluate       Returns the result of evaluating the xpath expression against the XML string.
xpath.openDocument   Returns the DOM object resulting from parsing the XML string.
xpath.closeDocument  Closes the DOM object resulting from xpath.openDocument.
Table functions
table         Creates a new table.
table.remove  Removes the specified key from the specified table.
Discovery functions
discovery.fileGet   Retrieves the specified file from the target.
discovery.fileInfo  Retrieves information about the specified file, but not the file content.
discovery.listDirectory  Retrieves the directory listing of the directory specified by the path on the specified target.
discovery.listRegistry   Returns a list of the registry entries of the registry key specified by the key_path.
discovery.registryKey    Retrieves a registry key from a Windows computer.
discovery.wmiQuery       Performs a WMI query on a Windows computer.
discovery.runCommand     Returns a DiscoveredCommandResult node containing the result of running the specified command.
discovery.snmpGet        Performs an SNMP query for a single value on the target.
discovery.snmpGetTable   Performs an SNMP query that returns a table on the target.
discovery.process        Returns the process node corresponding to the source node, which must be a ListeningPort or NetworkConnection node.
discovery.children       Returns a list of the child processes for the given DiscoveredProcess node.
discovery.descendents    Returns a list consisting of the children of the given DiscoveredProcess node, and recursively all of the children's children.
discovery.parent         Returns the parent process for the given DiscoveredProcess node.
discovery.allProcesses   Returns a list of all processes corresponding to the directly discovered data source node.
discovery.access         Returns the DiscoveryAccess node for the source node, which must be a directly discovered data node.
Model functions
model.addContainment        Adds the containees to the container by creating suitable relationships between the nodes.
model.setContainment        Equivalent to addContainment, except unconfirmed relationships are removed at the end of the pattern body.
model.destroy               Destroys the specified node in the model.
model.withdraw              Removes the named attribute from the node.
model.setRemovalGroup       Adds the specified node or nodes to a named removal group.
model.anchorRemovalGroup    Specifies an anchor node for a named removal group.
model.suppressRemovalGroup  Suppresses removal of the named removal group.
model.host                  Returns the Host node corresponding to the given node.
model.hosts                 Returns a list of all the Host nodes corresponding to the given node.
model.findPackages          Traverses from the node, which must be a Host or a directly discovered data node, and returns a set of all Package nodes that have names matching the provided list of regular expressions.
Related functions
related.detailContainer  Returns the SoftwareComponent, SoftwareInstance, or BusinessApplicationInstance node containing the given node.
related.host             Returns the Host node corresponding to the given node.
Email functions
mail.send Sends an email.
Time functions
time.current         Returns the current time.
time.delta           Creates a time delta that can be added to or subtracted from a time.
time.parseLocal      Parses a time string; the time is converted according to the appliance's time zone and daylight saving setting.
time.parseUTC        Parses a time string; the time is assumed to already be UTC, and is not adjusted for timezones or daylight saving time.
time.formatLocal     Formats a time into a human-readable string.
time.formatUTC       Formats a time into a human-readable string.
time.toTicks         Converts a time or time delta into the DCE ticks format.
time.fromTicks       Converts DCE ticks into a time.
time.deltaFromTicks  Converts DCE ticks into a time delta.
Inference functions
inference.associate    Creates associate inference relationship(s) from the specified node(s) to the inferred node.
inference.contributor  Creates contributor inference relationship(s) from the specified node(s) to the inferred node, for attribute names specified in the contributes list.
inference.primary      Creates primary inference relationship(s) from the specified node(s) to the inferred node.
inference.relation     Creates relation inference relationship(s) from the specified node(s) to the inferred relationship.
inference.withdrawal   Creates withdrawal inference relationship(s) from the specified node(s) to the inferred node, indicating the withdrawal of the withdrawn attribute name.
inference.destruction  When destroying a node, indicates that the source node was responsible for its destruction.
size
size(collection)
Returns the number of items in the collection, where the collection is a list, or a set of nodes retrieved with a search expression.
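A quick sketch:

```tpl
values := [ "one", "two", "three" ];
count := size(values);   // 3
```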
Log functions
The following functions are for logging. Each takes a single positional argument: log.debug log.info log.warn log.error log.critical
log.debug
log.debug(string)
Logs the given message at debug level. The log messages that are output automatically include the name of the pattern performing the log action.
log.info
log.info(string)
Logs the given message at info level. The log messages that are output automatically include the name of the pattern performing the log action.
log.warn
log.warn(string)
Logs the given message at warn level. The log messages that are output automatically include the name of the pattern performing the log action.
log.error
log.error(string)
Logs the given message at error level. The log messages that are output automatically include the name of the pattern performing the log action.
log.critical
log.critical(string)
Logs the given message at critical level. The log messages that are output automatically include the name of the pattern performing the log action.
Text functions
The following are text functions that operate on text strings. They take positional arguments: text.lower, text.upper, text.toNumber, text.replace, text.split, text.strip, text.leftStrip, text.rightStrip, text.hash, text.ordinal, text.join.
text.lower
text.lower(string)
Returns the lower-cased version of the string argument.
text.upper
text.upper(string)
Returns the upper-cased version of the string argument.
text.toNumber
text.toNumber(string [, base ] )
Converts its string argument into a number. By default, the number is treated as a base 10 value; if the optional base is provided, the number is treated as being in the specified base. Bases from 2 to 36 are supported. Bases larger than 10 use the letters a through z to represent digits.
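For example (a sketch; values illustrative):

```tpl
n := text.toNumber("42");        // 42
h := text.toNumber("ff", 16);    // 255 — hexadecimal input
```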
text.replace

text.replace(string, old, new)

Returns a modified version of the string formed by replacing all occurrences of the string old with new.
text.split
text.split(string, [separator])
Returns a list consisting of portions of the string split according to the separator string, where specified. The separator does not appear in the output string. For example, text.split("Split this string", " ") results in the list ["Split", "this", "string"]. If the separator is not specified, the split will occur on all whitespace characters. If the text is not split because the separator does not occur in the string, the function returns a list containing the whole string as its only member. The separator is optional in BMC Atrium Discovery 8.3 and later.
text.strip

Strips unwanted characters from the start and end of the given string.

text.leftStrip

Equivalent to text.strip, but only strips from the left side of the string.

text.rightStrip

Equivalent to text.strip, but only strips from the right side of the string.
text.hash
text.hash(string)
Returns a hashed form of the string, generated with the MD5 hash algorithm.
text.ordinal
text.ordinal(string)
Returns the ordinal value of the string argument. The string must be one character in length. For example, the ordinal value of A is 65.
Number functions
The following are number functions that operate on numeric arguments. They take positional or keyword arguments. number.toText number.range
number.toText

number.toText(number)

Converts the integer number to a text form.
number.range
number.range(number)
Generate a list containing 0 to number - 1. If number is less than 1 the list will be empty. For example, number.range(5) results in the list [0, 1, 2, 3, 4].
Regex functions
The following are regex functions. They take positional arguments. regex.extract regex.extractAll
regex.extract

Returns the result of extracting the regular expression from the string, optionally with a substitution expression and a specified result if no match is found. For example:
// Extract the version using group 1 of the regular expression.
version := regex.extract(process.cmd, path_regex, raw "\1", no_match := 'No match.');
A versioning string is of the form "Version 2.7.182 patch 69". To extract the patch number, the following function is used:
patch := regex.extract(cmdResult.result, regex 'patch (\d+)', raw '\1', no_match := 'Unsupported.');
The expression patch (\d+) matches patch 69 in the versioning string, and the first group, referenced by \1, contains 69. With a pattern containing two groups matching "patch 69 patch 70", the second group, referenced by \2, would contain 70. If the versioning string were "Version 2.7.182", the function would return "Unsupported."
regex.extractAll
regex.extractAll(string, pattern)
Returns a list containing all the non-overlapping matches of the pattern in the string. If the pattern contains more than one group, returns a list of lists containing the groups in order.
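A sketch of both behaviours (values illustrative):

```tpl
ports := regex.extractAll("port 80 port 443", regex "port (\d+)");
// one group: ports is a list of strings

pairs := regex.extractAll("a=1 b=2", regex "(\w)=(\d)");
// two groups: each match is itself a list of the groups in order
```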
XPath functions
The following are xpath functions. They take positional arguments. xpath.evaluate xpath.openDocument xpath.closeDocument
xpath.evaluate
xpath.evaluate(string, expression)
Returns the result of evaluating the xpath expression against the XML string. Returns a list of strings containing the selected values. The result is always a list, even if the expression is guaranteed to always return just one result. The expression must be one that returns tag attributes or body text - expressions returning DOM nodes will result in an error that terminates execution of the pattern.
xpath.openDocument
xpath.openDocument(string)
Returns the DOM object resulting from parsing the XML string. The DOM object returned is suitable for passing to xpath.evaluate. Parsing the XML once is considerably more efficient if you wish to call xpath.evaluate several times on the same XML. You should use xpath.closeDocument once xpath evaluation is complete, or the parsed DOM will be held in memory for some time.
xpath.closeDocument
xpath.closeDocument(DOM object)
Closes the DOM object resulting from xpath.openDocument. You should close an XML document opened with xpath.openDocument once xpath evaluation is complete or the parsed DOM will be held in memory for some time.
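The open / evaluate / close lifecycle can be sketched as follows (the XPath expression and variable names are illustrative):

```tpl
doc := xpath.openDocument(xml_string);
// Evaluate against the parsed DOM rather than re-parsing the XML
names := xpath.evaluate(doc, "//server/@name");
xpath.closeDocument(doc);
```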
Table functions
Patterns can create dynamic tables that provide a mapping from keys to values. Dynamic tables are new in TPL 1.2. Static tables are supported by all versions of TPL.
table table.remove
table
table( [ parameters ] )
Creates a new table. With no parameters, creates an empty table. If parameters are given, initializes the table with items where the keys are the parameter names and the values are the parameter values. Table items are accessed with square brackets, and a for loop iterates over the table keys. For example:
things := table();
things["one"] := 1;
things["two"] := 2;
for key in things do
  value := things[key];
  log.info("%key% -> %value%");
end for;
table.remove
table.remove(table, key)
Removes the specified key from the specified table.
Discovery functions
discovery.fileGet
Requires PRIV_CAT to be defined to retrieve files not readable by the current user.

64 bit Windows hosts have the following system directories for holding executables, DLLs, and so forth:

%windir%\system32  64 bit executables and DLLs
%windir%\SysWOW64  32 bit executables and DLLs

win64_redirect is an optional flag which enables you to specify whether the search for the file should take place in the 32 bit directory if it is not found in the 64 bit directory. TRUE means search the 32 bit directory; FALSE means do not search the 32 bit directory if the file is not found in the 64 bit directory. The default is TRUE. Filenames that do not start with %windir%\system32 or %systemroot%\system32 are never checked for redirection. On non-Windows and 32 bit Windows systems win64_redirect has no effect. The win64_redirect flag is new in 8.2.

Returns a DiscoveredFile node. The target node may be one of the following:

Host
DiscoveryAccess
DiscoveredDirectoryEntry
DiscoveredFQDN
DiscoveredHBA
DiscoveredListeningPort
DiscoveredNetworkConnection
DiscoveredNetworkInterface
DiscoveredProcess
DiscoveredSNMPRow
DiscoveredWMI
SQLResultRow
DirectoryListing
DiscoveredPackages
DiscoveredSNMPTable
DiscoveredWMIQuery
FQDNList
HBAInfoList
IntegrationResult
InterfaceList
NetworkConnectionList
ProcessList
DeviceInfo
DiscoveredCommandResult
DiscoveredFile
DiscoveredPatches
DiscoveredRegistryValue
DiscoveredSNMP
HostInfo
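A sketch of a call, assuming a signature of the form discovery.fileGet(target, path [, win64_redirect]) — the signature is not shown in this excerpt and is an assumption:

```tpl
// Hypothetical: fetch a file under the system directory without
// falling back to the 32 bit SysWOW64 directory. The raw string
// prevents %windir% being treated as TPL interpolation.
file := discovery.fileGet(host, raw "%windir%\system32\drivers\etc\hosts",
                          win64_redirect := false);
if file then
  content := file.content;
end if;
```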
discovery.fileInfo
Returns a DiscoveredFile node with no content attribute. The target node may be one of the following: Host, DiscoveryAccess, DiscoveredDirectoryEntry, DiscoveredFQDN, DiscoveredHBA, DiscoveredListeningPort, DiscoveredNetworkConnection, DiscoveredNetworkInterface, DiscoveredProcess, DiscoveredSNMPRow, DiscoveredWMI, SQLResultRow, DirectoryListing, DiscoveredPackages, DiscoveredSNMPTable, DiscoveredWMIQuery, FQDNList, HBAInfoList, IntegrationResult, InterfaceList, NetworkConnectionList, ProcessList, DeviceInfo, DiscoveredCommandResult, DiscoveredFile, DiscoveredPatches, DiscoveredRegistryValue, DiscoveredSNMP, HostInfo.
discovery.listDirectory
The target node may be one of the following: Host, DiscoveryAccess, DiscoveredDirectoryEntry, DiscoveredFQDN, DiscoveredHBA, DiscoveredListeningPort, DiscoveredNetworkConnection, DiscoveredNetworkInterface, DiscoveredProcess, DiscoveredSNMPRow, DiscoveredWMI, SQLResultRow, DirectoryListing, DiscoveredPackages, DiscoveredSNMPTable, DiscoveredWMIQuery, FQDNList, HBAInfoList, IntegrationResult, InterfaceList, NetworkConnectionList, ProcessList, DeviceInfo, DiscoveredCommandResult, DiscoveredFile, DiscoveredPatches, DiscoveredRegistryValue, DiscoveredSNMP, HostInfo.
Restrictions
1. On Windows systems, owner and group information is not recovered.
2. Group names which include spaces are not supported.
3. Config.sys is not listed as a system file.
4. DirectoryListing nodes are not created for empty Windows directories.
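A minimal sketch of discovery.listDirectory in use (the path is illustrative, and iterating the resulting DirectoryListing for its entries is an assumption):

```tpl
// Illustrative: list a directory on the triggering host.
listing := discovery.listDirectory(host, "/etc");
if listing then
    for entry in listing do
        log.debug("Directory entry: %entry.name%");
    end for;
end if;
```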
discovery.listRegistry
discovery.registryKey
The target node may be one of the following: Host, DiscoveryAccess, DiscoveredDirectoryEntry, DiscoveredFQDN, DiscoveredHBA, DiscoveredListeningPort, DiscoveredNetworkConnection, DiscoveredNetworkInterface, DiscoveredProcess, DiscoveredSNMPRow, DiscoveredWMI, SQLResultRow, DirectoryListing, DiscoveredPackages, DiscoveredSNMPTable, DiscoveredWMIQuery, FQDNList, HBAInfoList, IntegrationResult, InterfaceList, NetworkConnectionList, ProcessList, DeviceInfo, DiscoveredCommandResult, DiscoveredFile, DiscoveredPatches, DiscoveredRegistryValue, DiscoveredSNMP, HostInfo.
discovery.wmiQuery
discovery.runCommand
discovery.runCommand(target, command)
Returns a DiscoveredCommandResult node containing the result of running the specified command. The target node may be one of the following: Host, DiscoveryAccess, DiscoveredDirectoryEntry, DiscoveredFQDN, DiscoveredHBA, DiscoveredListeningPort, DiscoveredNetworkConnection, DiscoveredNetworkInterface, DiscoveredProcess, DiscoveredSNMPRow, DiscoveredWMI, SQLResultRow, DirectoryListing, DiscoveredPackages, DiscoveredSNMPTable, DiscoveredWMIQuery, FQDNList, HBAInfoList, IntegrationResult, InterfaceList, NetworkConnectionList, ProcessList, DeviceInfo, DiscoveredCommandResult, DiscoveredFile, DiscoveredPatches, DiscoveredRegistryValue, DiscoveredSNMP, HostInfo.

The following code example shows discovery.runCommand being used to determine the publisher of a Java virtual machine. This comes from the TKU Java VM pattern.
publisher_command := '%javacmd_path% -version';
// Run java -version and extract the publisher
ran_pub_cmd := discovery.runCommand(host, publisher_command);
...
...
if ran_pub_cmd then
    log.debug("Publisher command output: %ran_pub_cmd.result%");
    cmd_publisher := regex.extract(ran_pub_cmd.result, regex'(?i)(hotspot|gnu|ibm|bea|sun)', raw'\1');
end if;
Possible post-processing required on UNIX/Linux hosts
When a pattern executes a command on a UNIX/Linux host, the result returned may contain control characters used to display colours and other formatting. The pattern should not rely on being able to use the result without some additional processing such as regular expression matching and extraction.
discovery.snmpGet
table oids 1.0
    "1.3.6.1.2.1.1.1" -> "one";
    "1.3.6.1.2.3.4.5" -> "two";
end table;

DiscoveredSNMPnode := discovery.snmpGet(target, oids)
The call succeeds if any of the requested OIDs are available; the attributes corresponding to any OIDs that were not available are not set on the DiscoveredSNMP node. If none of the OIDs are available, the discovery request fails and the snmpGet function returns none.

The target node may be one of the following: DeviceInfo, DirectoryListing, DiscoveryAccess, FileSystemList, FQDNList, HBAInfoList, Host, HostInfo, IntegrationResult, InterfaceList, NetworkConnectionList, NetworkDevice, ProcessList, RegistryListing, ServiceList, SQLResultRow, DiscoveredAggregatedPorts, DiscoveredAggregatedPortsList, DiscoveredATMVirtualCircuit, DiscoveredATMVirtualCircuitList, DiscoveredATMVirtualPath, DiscoveredATMVirtualPathList, DiscoveredBridge, DiscoveredBridgeList, DiscoveredCard, DiscoveredCardList, DiscoveredChassis, DiscoveredChassisList, DiscoveredCommandResult, DiscoveredDevicePort, DiscoveredDevicePortList, DiscoveredDirectoryEntry, DiscoveredFile, DiscoveredFileSystem, DiscoveredFQDN, DiscoveredFrameRelayDLCI, DiscoveredFrameRelayDLCIList, DiscoveredFrameRelayLMI, DiscoveredFrameRelayLMIList, DiscoveredHBA, DiscoveredListeningPort, DiscoveredNeighbours, DiscoveredNetworkConnection, DiscoveredNetworkDevice, DiscoveredNetworkInterface, DiscoveredPackages, DiscoveredPatches, DiscoveredProcess, DiscoveredRegistryEntry, DiscoveredRegistryValue, DiscoveredService, DiscoveredSNMP, DiscoveredSNMPRow, DiscoveredSNMPTable, DiscoveredVLAN, DiscoveredVLANList, DiscoveredWMI, DiscoveredWMIQuery.
discovery.snmpGetTable
table storage_map 1.0
    "1" -> "index";
    "2" -> "type";
    "3" -> "descr";
    "4" -> "allocation_units";
end table;
pattern example 1.0
    ...
    body
        rows := discovery.snmpGetTable(host, "1.3.6.1.2.1.99.99", storage_map);
        ...
OID usage with snmpGetTable
When using snmpGetTable, if you specify the OID of a table object, a DiscoveredSNMPRow node is created for every requested value in every row of the table, rather than a single node for each row. To avoid this, use the OID of the entry object. For example, rather than .1.3.6.1.2.1.2.2 (ifTable), use .1.3.6.1.2.1.2.2.1 (ifEntry).
The target node may be one of the following: DeviceInfo, DirectoryListing, DiscoveryAccess, FileSystemList, FQDNList, HBAInfoList, Host, HostInfo, IntegrationResult, InterfaceList, NetworkConnectionList, NetworkDevice, ProcessList, RegistryListing, ServiceList, SQLResultRow, DiscoveredAggregatedPorts, DiscoveredAggregatedPortsList, DiscoveredATMVirtualCircuit, DiscoveredATMVirtualCircuitList, DiscoveredATMVirtualPath, DiscoveredATMVirtualPathList, DiscoveredBridge, DiscoveredBridgeList, DiscoveredCard, DiscoveredCardList, DiscoveredChassis, DiscoveredChassisList, DiscoveredCommandResult, DiscoveredDevicePort, DiscoveredDevicePortList, DiscoveredDirectoryEntry, DiscoveredFile, DiscoveredFileSystem, DiscoveredFQDN, DiscoveredFrameRelayDLCI, DiscoveredFrameRelayDLCIList, DiscoveredFrameRelayLMI, DiscoveredFrameRelayLMIList, DiscoveredHBA, DiscoveredListeningPort, DiscoveredNeighbours, DiscoveredNetworkConnection, DiscoveredNetworkDevice, DiscoveredNetworkInterface, DiscoveredPackages, DiscoveredPatches, DiscoveredProcess, DiscoveredRegistryEntry, DiscoveredRegistryValue, DiscoveredService, DiscoveredSNMP, DiscoveredSNMPRow, DiscoveredSNMPTable, DiscoveredVLAN, DiscoveredVLANList, DiscoveredWMI, DiscoveredWMIQuery.
discovery.process
discovery.process(source)
Returns the process node corresponding to the source node, which must be a ListeningPort or NetworkConnection node.
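For example, a pattern triggered on a listening port could navigate to the owning process. A minimal sketch (assuming the trigger binds the port node to port):

```tpl
// trigger: on port := DiscoveredListeningPort;
proc := discovery.process(port);
if proc then
    log.debug("Listening port is owned by command: %proc.cmd%");
end if;
```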
discovery.children
discovery.children(process)
Returns a list of the child processes for the given DiscoveredProcess node. Returns an empty list if there are no children or the parameter is not a DiscoveredProcess node.
discovery.descendents
discovery.descendents(process)
Returns a list consisting of the children of the given DiscoveredProcess node, and recursively all of the children's children.
discovery.parent
discovery.parent(process)
Returns the parent process for the given DiscoveredProcess node. Returns none if the process has no parent.
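The process navigation functions can be combined. A sketch for a pattern triggered on a DiscoveredProcess (bound to process):

```tpl
parent_proc := discovery.parent(process);
if parent_proc then
    log.debug("Parent command: %parent_proc.cmd%");
end if;
for child in discovery.children(process) do
    log.debug("Child command: %child.cmd%");
end for;
for descendent in discovery.descendents(process) do
    log.debug("Descendent command: %descendent.cmd%");
end for;
```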
discovery.allProcesses
discovery.allProcesses(source)
Returns a list of all processes corresponding to the directly discovered data source node. Returns an empty list if the source node is not a valid directly discovered data node.
discovery.access
discovery.access(source)
Returns the DiscoveryAccess node for the source node, which must be a directly discovered data node. Returns none if the source is not valid.
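A short sketch of discovery.access for a pattern triggered on a DiscoveredProcess; the endpoint attribute used here is an assumption for illustration:

```tpl
da := discovery.access(process);
if da then
    // Assumed attribute: the endpoint the access was made against.
    log.debug("Discovered via access endpoint: %da.endpoint%");
end if;
```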
Model functions
key := "My unique key";
name := "My instance number %number%";
special_value := 123;
model.SoftwareInstance(key, name, value := special_value + 1);
A SoftwareInstance node with the specified key is created or updated. It has attributes key, name and value. If a SoftwareInstance with the given key already exists, it is updated; otherwise a new one is created. Any attribute name may be set in the named arguments, regardless of whether the names are defined in the taxonomy. Note that the attributes are stored with the types of the assigned values, even if the taxonomy specifies that they should have different types.

Nodes created by patterns should have unique keys, both so they can be identified in subsequent executions of the same pattern, and to facilitate integrations that export the data. Keys should be unique across the entire datastore. In some cases, it is not possible to generate a key that is unique in any scope. In that case, a grouped entity is created, containing a count of the number of nodes in the group, with one group node per Host. If it is possible to separate the groups into a number of sub-groups, the key_group attribute can be set, in which case one grouping node will be created for each of the different key_group values. Node Names, Types and Keys contains some advice about choosing appropriate names, types, and keys for nodes created by patterns.

Each node created or modified by these node functions is automatically given a primary inference relationship to the node that triggered execution of the pattern, plus additional contributor inference relationships to represent the provenance of all attributes set on the node. SoftwareInstance and BusinessApplicationInstance nodes are also automatically related to the Host(s) on which they are running. Nodes created or modified by these functions are also given a maintainer relationship to the pattern. When a pattern is removed from the system, all the nodes it has maintainer relationships to are destroyed.
Accidental maintainers Never use the model.nodekind functions to update attributes on a node that the pattern is not maintaining. e.g. to set an attribute on a Host node, do not do this:
The function creates or updates a relationship between the source and destination nodes. The parameters are named after the relationship's roles, e.g.
si_node := // VMware SoftwareInstance
host_node := // Virtual Host
rel := model.rel.HostContainment(HostContainer := si_node, ContainedHost := host_node);
The relationship functions also accept lists of nodes in the roles, in which case, multiple relationships are created/updated between each pair of nodes and the function returns a list of the relationships.
Use of model.rel.Inference should be avoided
You should avoid using the model.rel.Inference function. The inference system expects specific attributes on the Inference relationship. If these attributes are missing there may be a traceback whenever the system attempts to update inference information. A warning is logged when you upload a pattern using the model.rel.Inference function.
If a relationship already exists between the source and destination, its attributes are updated. If a relationship does not exist from the source to the destination it is created. All other matching relationships from the source to other destinations are destroyed.
Parameter order
Unlike the model.rel functions, in model.uniquerel the order in which the roles are specified is important. The first listed role belongs to the node that has the unique relationship; the node with the second listed role is permitted to have more than one matching relationship. That is, a Host can be in only one Location, but a Location can have many Hosts in it.
A relationship matches for destruction if it has the same relationship kind, same source and destination roles, and has attributes that match the values of the specified attributes. e.g. to relate a Host to a location:
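A minimal sketch of such a call, assuming a Location node named place and the triggering Host bound to host:

```tpl
place := // a Location node for the host's location
model.uniquerel.Location(ElementInLocation := host, Location := place);
```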
The use of uniquerel ensures that the Host nodes are only ever related to one Location. As a more advanced example, attribute matching can be used to relate to both a city location and an office location:
// City pattern
place := // a Location node for the city
model.uniquerel.Location(ElementInLocation := host, Location := place, type := "city");
// Office pattern
place := // a Location node for the office
model.uniquerel.Location(ElementInLocation := host, Location := place, type := "office");
With those patterns, each Host will now be related to two Location nodes. The use of uniquerel in the first pattern will only destroy Location relationships with a type attribute of "city", and the second one will only destroy relationships with a type of "office". As with the model.rel functions, lists of nodes can be provided. A list in the source role simply has the same effect as calling the function multiple times, once with each source node. A list in the destination role, however, causes the source to be related to all of the destination nodes, and destroys any relationships to other nodes.
Use of model.uniquerel.Inference should be avoided
You should avoid using the model.uniquerel.Inference function. The inference system expects specific attributes on the Inference relationship. If these attributes are missing there may be a traceback whenever the system attempts to update inference information. A warning is logged when you upload a pattern using the model.uniquerel.Inference function.
model.addContainment
model.addContainment(container, containees)
Adds the containees to the container by creating suitable relationships between the nodes. containees can be a single node, or a list of nodes, for example: [node1, node2]. If the containees are SoftwareInstances or BusinessApplicationInstances, SoftwareContainment relationships are created; if the containees are Hosts, HostContainment relationships are created. For software containment, appropriate HostedSoftware relationships are also created to link the container to the correct Hosts according to the contained nodes.
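A sketch of model.addContainment with illustrative keys and names:

```tpl
// Illustrative keys and names only.
parent_si := model.SoftwareInstance(key := "example-parent-key", name := "My Application");
child_si := model.SoftwareInstance(key := "example-child-key", name := "My Application Component");
// SoftwareContainment (and HostedSoftware) relationships are created.
model.addContainment(parent_si, [child_si]);
```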
model.setContainment
model.setContainment(container, containees)
Equivalent to addContainment, except that at the end of the pattern body, any relationships to contained nodes that have not been confirmed by setContainment or addContainment calls are removed.
model.destroy
model.destroy(node)
Destroy the specified node in the model. (Not usually used in pattern bodies, but in removal sections.)
model.withdraw
model.withdraw(node, attribute)
Removes the named attribute from the node.
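For example, to drop an attribute a pattern previously set on a node it maintains (key, name, and attribute name are illustrative):

```tpl
si := model.SoftwareInstance(key := "example-key", name := "My SI");
// Remove an attribute set on an earlier run that no longer applies.
model.withdraw(si, "old_attribute");
```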
model.setRemovalGroup
model.setRemovalGroup(node, [name])
Add the specified node or nodes to a named removal group. The purpose of this function is to identify a group of nodes. If a group name is not specified, the default name "group" is used. Each group name is scoped to an individual pattern. The nodes must be created and managed solely by the pattern making the call; adding nodes managed by other patterns results in undefined behavior. The function may be called multiple times within the pattern.

On completion of the pattern, all nodes created or confirmed as belonging to a group in the previous execution of the pattern are examined. If a node does not appear in the same group in the new run, it is deleted.

By default the system associates the group nodes with the trigger node. This means that the trigger node must be an inferred node, or it will differ with every invocation of the pattern. This can be overcome by specifying an inferred anchor node for a named group. You can prevent a removal group being removed using the model.suppressRemovalGroup function.

For example, the following code creates a group named dbs. On the first run this contains all the discovered databases. On the second run, if any databases are not seen then they are automatically removed upon completion of the pattern.
for row in dbs do
    db := model.DatabaseDetail(key := row.key, name := row.name);
    model.setRemovalGroup(db, "dbs");
end for;
model.anchorRemovalGroup
model.anchorRemovalGroup(node, [name])
Specify an anchor node for a named removal group. If a group name is not specified, the default name "group" is used. By default the system associates the group nodes with the trigger node. This means that the trigger node must be an inferred node, or it will differ with every invocation of the pattern. If a pattern triggers on a DDD node, you can instead anchor the group to a specified inferred node. When you use an anchor in a pattern you must ensure that the same anchor is used on each subsequent invocation of that pattern; if you do not, no nodes in the removal group will be deleted. For example, a pattern triggers on a DiscoveredProcess and creates an SI called dbsi. The following code anchors the group named dbs to that inferred node, the SI called dbsi.
model.anchorRemovalGroup(dbsi, "dbs");
model.suppressRemovalGroup
model.suppressRemovalGroup([name])
Suppress removal of the named removal group. If a group name is not specified, the default name "group" is used. Removal groups aid removal of complex structures such as those created by deep discovery of databases (see model.setRemovalGroup for more information). In certain circumstances you may want to prevent removal of the removal group. For example, the following code sets a removal group.
for row in dbs do
    db := model.DatabaseDetail(key := row.key, name := row.name);
    model.setRemovalGroup(db, "dbs");
end for;
If the pattern fails to recover details from the database, all the database details would be removed. This can be prevented using the following code in the case, for example, where the credentials are locked out:
model.suppressRemovalGroup("dbs");
model.host
model.host(node)
Returns the Host node corresponding to the given node. The given node can be any directly discovered data node, or a SoftwareInstance or BusinessApplicationInstance. If more than one Host is related to the given node (for example a BusinessApplicationInstance spread across multiple Hosts), an arbitrary one of the Hosts is returned.
Name clash
The model.host function, with a lower-case "h", has an unfortunate clash with model.Host, with an upper-case "H", which creates a new Host node or updates an existing one. Be sure to use model.host with the correct capitalisation. Since patterns are not normally responsible for maintaining Host nodes, the system issues a warning if model.Host is used.
model.hosts
model.hosts(node)
Returns a list of all the Host nodes corresponding to the given node. As with model.host, the given node can be any directly discovered data node or a SoftwareInstance or BusinessApplicationInstance.
model.findPackages
model.findPackages(node, regexes)
Traverses from the node, which must be a Host or a directly discovered data node, and returns a set of all Package nodes that have names matching the provided list of regular expressions. This is commonly required when versioning products based on installed packages. Returns an empty set if the node was not of a suitable kind, or no matching packages were found.
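A sketch of the common versioning use, with illustrative package regexes and an assumed name attribute on the returned Package nodes:

```tpl
// Illustrative regexes: match MySQL server packages by name.
packages := model.findPackages(host, [regex'MySQL-server', regex'mysql-server']);
for package in packages do
    log.debug("Matched package: %package.name%");
end for;
```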
Related functions
The related scope contains the following functions, each of which takes a single positional argument:
related.host (8.3 and later)
related.detailContainer (8.3 and later)
related.detailContainer
related.detailContainer(node)
Returns the SoftwareComponent, SoftwareInstance, or BusinessApplicationInstance node containing the given node. The given node can be a Detail or DatabaseDetail. If no single container node can be found, none is returned.
related.host
related.host(node)
Returns the Host node corresponding to the given node. The given node can be any directly discovered data node, or a SoftwareInstance or BusinessApplicationInstance. If more than one Host is related to the given node (for example a BusinessApplicationInstance spread across multiple Hosts), an arbitrary one of the Hosts is returned.
Email functions
The mail scope contains a function for sending email: mail.send (7.1 and later)
mail.send
Time functions
The following functions manipulate dates and times. They were added in TPL 1.2.

There are two ways that dates and times are represented in BMC Atrium Discovery. Within TPL, dates and times are represented as a specific time type that can be manipulated and stored as times. Other parts of the system, including the time-related search functions, manage times as large integers representing the number of 100 nanosecond intervals since midnight UTC on 15 October 1582 (the date of the introduction of the Gregorian calendar). This is known as "ticks" or "DCE format"; there are 10 million ticks per second. In both cases, the times are stored in UTC.

TPL contains functions for converting between the two formats. Aside from the conversion functions, all TPL operations use the time type, not the integer ticks format.
Type mixing
Be careful to understand what kind of time value you are manipulating, either the internal TPL time format or the integer "ticks" format. Mixing the two can lead to errors.
time.current
time.current()
Returns the current time.
time.delta
Note that time deltas cannot be stored in the datastore. Attempts to do so will result in a run-time error.
time.parseLocal
time.parseLocal(string)
The time is converted according to the appliance's time zone and daylight saving setting.
time.parseUTC
time.parseUTC(string)
The time is assumed to already be UTC, and is not adjusted for timezones or daylight saving time.
time.formatLocal
time.formatLocal(datetime [, format ])
Formats a time into a human-readable string. With no format specification, the default BMC Atrium Discovery time representation is used. If given, the format specification is a string containing special tokens that expand to formatted parts of the date and time. The format is specified as in Python's time.strftime function. As with the parsing functions, formatLocal converts the time into the appliance's timezone and daylight savings setting; formatUTC performs no timezone conversion.
time.formatUTC
time.formatUTC(datetime [, format ])
Formats a time into a human-readable string. With no format specification, the default BMC Atrium Discovery time representation is used. If given, the format specification is a string containing special tokens that expand to formatted parts of the date and time. The format is specified as in Python's time.strftime function. As with the parsing functions, formatLocal converts the time into the appliance's timezone and daylight savings setting; formatUTC performs no timezone conversion.
time.toTicks
time.toTicks(datetime)
Converts a time or time delta into the DCE ticks format.
time.fromTicks
time.fromTicks(ticks)
Converts DCE ticks into a time.
time.deltaFromTicks
time.deltaFromTicks(ticks)
Converts DCE ticks into a time delta.
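A round-trip sketch using the conversion functions:

```tpl
now := time.current();
ticks := time.toTicks(now);          // integer ticks, e.g. for search comparisons
same_time := time.fromTicks(ticks);  // back to the TPL time type
stamp := time.formatUTC(same_time);  // default human-readable representation
log.info("Current time: %stamp%");
```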
Inference functions
The inference scope contains functions to permit explicit management of the inference relationships that represent data provenance in the model. The pattern language automatically maintains inferences in most situations, so these functions are only required in rare, complex circumstances. The functions are:
inference.associate
inference.contributor
inference.primary
inference.relation
inference.withdrawal
inference.destruction
inference.associate
inference.associate(inferred_node, associate)
Create associate inference relationship(s) from the specified node(s) to the inferred node.
inference.contributor
inference.primary
inference.primary(inferred_node, primary)
Create primary inference relationship(s) from the specified node(s) to the inferred node.
inference.relation
inference.relation(inferred_relationship, source)
Create relation inference relationship(s) from the specified node(s) to the inferred relationship.
inference.withdrawal
inference.destruction
inference.destruction(destroyed_node, source)
When destroying a node, indicate that the source node was responsible for its destruction.
Search expressions
The full search syntax is supported, with a small number of minor modifications for consistency with the rest of the pattern language. The modifications to the search syntax are as follows:
- Searches have syntax similar to a function call: the expression after the search keyword must be in parentheses.
- All search keywords must be in lower case.
- Regular expressions must be matched with the keyword matches rather than like.
- Identifiers clashing with keywords must be prefixed with $ instead of ~.
- Post-processing functions must be specified with the single keyword processwith, rather than the two separate words process with.

Outside of the pattern language, the search service is backwards-compatible: it supports like as a synonym for matches and process with as a synonym for processwith, and accepts keyword escaping with ~ as well as $.

The result of a search expression depends on what kind of show clause it has. If the search has no show clause, the result is a node set; if the search has a show clause with a single attribute shown, the result is a list of values; if the search has a show clause with multiple attributes shown, the result is a list of lists of values. In searches with a processwith clause, the result is either a node set or a list of lists of values, depending on the definition of the post-processing function used.

A search expression has an optional in clause which triggers a refine search, as opposed to a search of the whole datastore. The expression given to in must be a single node or a node set.
This example constructs a set of all processes discovered at the same time as the given process, then refines the set:

process := // acquire DiscoveredProcess node from somewhere
all_procs := search(in process
    traverse Member:List:List:ProcessList
    traverse List:List:Member:DiscoveredProcess);
required_procs := search(in all_procs where cmd matches "httpd");

Attribute names used in search expressions (except the in clause) are scoped to the search expression, not to the body of the pattern. To access variables in the pattern body, % symbols must be used as in string interpolation:

instance := // instance name from somewhere
db_nodes := search(SoftwareInstance where kind = "Sybase" and instance = %instance%);
Defining functions
User defined functions are specified in definitions blocks. To specify functions, the type parameter within the definitions block must be set to the value function, or omitted. Each define within the definitions block specifies a function. A function definition consists of:
A pair of parentheses containing an optional list of identifiers. The identifiers determine the number of parameters in a call to the function and the name used to reference each parameter.
An optional return list, specified by following the closing parenthesis with '->' and a list of identifiers, one for each return value.
A description string.
A list of statements.
The following shows a simple example function:
tpl 1.5 module example;

definitions functions 1.0
    'User defined functions'
    type := function; // Optional, default if no "type" specified.

    define square(x) -> y
        'functions.square is a function which takes a parameter x and returns a value.'
        return x * x;
    end define;

end definitions;
You can return a comma-separated list of values using the return statement. This list must have the same number of values as specified by the function definition. The return value identifiers are purely symbolic and are not referenced anywhere within the function call or definition. For example:
You can specify default values for function parameters. Once a parameter has a default value, every subsequent parameter must also have one. In the following example, calling add with a single parameter x returns x+2+3.
define add(x, y := 2, z := 3) -> a
    'add values, using defaults where parameters are not passed'
    return x + y + z;
end define;
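Calls to this add function might then look like the following (a sketch, assuming it is defined in a module named functions, as in the surrounding examples):

```tpl
a := functions.add(1);          // x=1, y=2, z=3 -> returns 6
b := functions.add(1, 10);      // x=1, y=10, z=3 -> returns 14
c := functions.add(1, 10, 20);  // x=1, y=10, z=20 -> returns 31
```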
define rnd() -> y
    'functions.rnd is a function which returns a value.'
    return 16;
end define;

define update(node, key, value)
    '''functions.update is a function which takes three parameters and does not return a value.'''
    node[key] := value;
end define;
Using functions
The following pattern contains calls to functions defined in the same TPL module. The functions are the examples given in the preceding section.
pattern function_caller 1.0
    'Pattern which calls functions.'
    overview
        tags example;
    end overview;
    triggers
        on process := DiscoveredProcess;
    end triggers;
    body
        a := functions.square(5);             // call square function, which returns 25
        b := functions.rnd();                 // call rnd function, which returns 16
        functions.update(process, 'key', 4);  // call update, equivalent to "process['key'] := 4;"
        m, n := functions.reverse(1, 2);      // call reverse, which returns two values: 2 and 1
    end body;
end pattern;
Recursion
Recursive and mutually recursive functions are not supported. Attempts to use recursion will be diagnosed, preventing the pattern module from being activated.
Removal
Patterns that are triggered on directly discovered data can specify specific rules for removing nodes and relationships previously created by the same pattern. If there are no removal specifications, removal for nodes created by such patterns is based on aging. Removal for nodes created by patterns triggered from other data (such as other inferred nodes) is based on existence of the originally triggering nodes. Removal blocks take the form:
removal
    on name := node_kind removal_condition [ where condition ];
    // removal body...
end removal;
An example is:
removal
    on si := SoftwareInstance aged;
    // This SoftwareInstance has a flag to indicate it should
    // not be removed.
    if not si.keep_me then
        model.destroy(si);
    end if;
end removal;
If a pattern creates multiple nodes, the removal block is executed for each of them when the removal condition is met. More than one removal block can be specified to support patterns that create more than one kind of node, and to support different blocks where the node kind is the same, but the conditions are different. Once a pattern has at least one removal block, the normal automatic aging of all the inferred nodes the pattern creates is disabled. Explicit
Removal conditions
Removal conditions specify when the removal block will be executed. Two removal conditions are currently supported: aged and unconfirmed.
aged
aged caters for the case where something is about to be aged out, but you want to perform additional checks to decide whether the system should actually delete it. In other words, you want to override the default aging behavior so that the system does not delete something that it otherwise would. An example is an SI modeled on a process that only runs rarely: at some point the SI is likely to be about to age out because that process has not been seen in a while, when what has actually happened is that the process has run, just not at the times the host was discovered. If aged is specified, the removal block is executed at the time the node in question would normally be removed due to aging; that is, when it has not been confirmed for a while according to the aging parameters.
Note
Because only root nodes and first-order SIs are removed through aging, it is important that you maintain your other nodes created in patterns. These nodes are not maintained automatically by BMC Atrium Discovery. For more information about data aging settings and the guidelines around modifying them, see Configuring model maintenance settings.
unconfirmed
unconfirmed caters for the case where you want to remove something as soon as possible after it disappears from your environment. In other words, you want to override the default aging behavior in order to remove something sooner than the system otherwise would. An example is a policy engine checking your CMDB to ensure that a given computer system is running a given piece of software. If that software is not seen when you discover the host, that indicates a policy violation, and you need to instantly delete the software instance from the datastore (and thus from the CMDB) so that your policy engine can notice its absence. If unconfirmed is specified, the removal block executes every time a scan of the relevant host is performed without the node in question being confirmed.
Pattern Configuration
It is sometimes useful to permit the end-user to modify certain characteristics of a pattern, such as whether to enable features that may be costly, or to change a set of default paths or package names. This is enabled with pattern configuration blocks. Pattern configuration is new in TPL 1.2.
Note Changes to a pattern configuration block's revision number may reset pattern configuration changes to their default values. If you install a Technology Knowledge Update (TKU) that contains an update to a pattern configuration block requiring its major revision number to be incremented, any end-user configuration changes to that pattern (made through the pattern configuration block) are lost and the default pattern values are restored.
configuration name version
    """ Description of configuration block to show in UI """
    "Description of first configuration item"
    config_name := config_value;
    ...
end configuration;
Configuration items are given a human-readable description to be shown in the user interface, an identifier to use in patterns, and a default value. The default value is the value used if the configuration item has not been changed by the user. It also determines the type of the configuration item: the user interface will only allow the user to change the setting to a value of the same type. The valid types for configuration items are text string, integer, boolean, list of strings, and list of integers. This example shows a number of configuration settings:
configuration MyConfig 1.0
    """ An example configuration block. """
    "A boolean flag"
    a_flag := true;
    "Command to run"
    command := "sudo rm -rf /";
    "Port for the service"
    port := 12345;
    "Paths to look in"
    paths := [ "/usr/bin", "/usr/local/bin" ];
end configuration;
Because an empty list does not itself reveal its element type, a sample value in parentheses specifies the type:

"This is a list of strings that's empty by default"
empty_string_list := [ ] ("");
"This is a list of integers that's empty by default"
empty_number_list := [ ] (0);
Static Tables
Tables provide simple look-up tables that can be used by functions in pattern bodies. They have no active behavior. Tables must be declared at module scope, not inside pattern declarations. Declarations have the following form:
table name version
    key1 -> value1;
    key2 -> value2;
    ...
end table;
Keys are literal values or expressions involving only literals; values are comma-separated sequences of literal values, or expressions involving only literals. Tables can have a default value, specified with a key of default:
table example_table 1.0
    "one" -> "two";
    "three" -> "four";
    default -> "five";
end table;
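In a pattern body, a value is then looked up by key with subscript syntax (the same form used by the subnet-location example later in this guide); keys with no matching row fall back to the default:

```tpl
// Look up values from example_table in a pattern body
v := example_table["one"];       // "two"
w := example_table["missing"];   // the default value, "five"
```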
For details of dynamic table use from TPL 1.2 onwards, see Table functions.
Enumerations
An enumeration maps a list of names to string or integer values. Within a module, an enumeration has a similar form to other top-level blocks. The following code defines an enumeration enum_name with three values: identifier_0, identifier_1, and identifier_2.
tpl 1.4 module example;

enumeration enum_name 1.0 'Enum description'
    identifier_0,
    identifier_1,
    identifier_2
end enumeration;
If no values are given to the names then the first value will be 0. Each subsequent value will be one larger than the previous value. So in this example identifier_0 is 0, identifier_1 is 1 and identifier_2 is 2. The default values can be overridden:
tpl 1.4 module example;

enumeration enum_name 1.0 'Enum description'
    car -> 5,
    boat,
    plane -> 8,
    tank
end enumeration;
Using the -> operator allows the default value to be replaced. In this example, car is 5, boat is 6, plane is 8, and tank is 9. Multiple names may map to the same value, though the names themselves must be unique. Alternatively, an enumeration may map to string values. This is done by giving one of the names a mapping to a string; any unmapped name takes the string representation of its own name. Given:
tpl 1.4 module example;

enumeration enum_name 1.0 'Enum description'
    car,
    boat -> "water",
    plane
end enumeration;
car is "car", boat is "water" and plane is "plane". Strings and numbers may not be mixed within a single enumeration.
Using enumerations
Enumeration values can be used in metadata or in an assignment within a pattern. Metadata has been extended so it now also supports numbers.
tpl 1.4 module example;

enumeration vehicle 1.0 'Enum description'
    car,
    boat -> "water",
    plane
end enumeration;

pattern example 1.0
    'Test pattern'
    overview
        tags example;
    end overview;
    triggers
        on process := DiscoveredProcess;
    end triggers;
    body
        a := vehicle.boat;
    end body;
end pattern;
The pattern assigns the variable a the value "water". Enumeration values cannot be used within expressions.
Identify
Identify tables are active tables used to annotate matching nodes with particular values. As with non-active Tables, they must be declared at module scope, not inside patterns. They take the form:
identify name version
    [metadata
        metadata_entries
    end metadata;]
    tags tag1, tag2, ... ;
    node_kind match_attribute [, ...] -> set_attribute [, ...];
    key1 -> value1;
    key2 -> value2;
    ...
end identify;

The identify table is triggered whenever a node is created with suitable attributes for the match attributes. Like patterns, identify tables must declare one or more tags. Upon triggering, it sets the set attributes on the triggering node. This is used for simple identification of processes, for example:

identify common_unix_commands 1.0
    tags example;
    DiscoveredProcess cmd -> simple_identity;
    unix_cmd "ls" -> "Unix directory listing command";
    unix_cmd "mv" -> "Unix move command";
    unix_cmd "cp" -> "Unix copy command";
    ...
end identify;

If more than one regular expression in an identify table matches a particular node, an arbitrary one "wins" and sets the corresponding value.
Definitions
Additional functions to call in patterns are described in definitions blocks. In TPL 1.5, definitions blocks are used for User defined functions and for data integration with SQL databases. The basic form of a data integration definitions block is:
definitions name version
    """Description of definitions block"""
    type := definitions_type;
    other_setting := setting_value;
    ...
    define function_name
        """Function description"""
        parameters := param1, param2, ...;
        other_setting := setting_value;
    end define;
    ...
end definitions;
All definitions blocks must have a type specification; other settings may be required by certain definitions types. The definitions block contains one or more define blocks describing the functions being defined. All functions that take parameters must have a parameters specification listing the names of the parameters to the function. (If a function takes no parameters, the parameters specification can be omitted.) Most definitions types also require other settings in the define blocks. Once defined, functions can be called in patterns with the usual function calling syntax. Definitions can be imported from one module into another, with the usual versioning scheme. The supported definitions types are: SQL database integration and SQL database discovery.
definitions AssetDatabaseDetails 1.0
    """Queries to obtain information from asset database"""
    type := sql_integration;
    name := "Example asset database";

    define getLocationOfHost
        """Return details of the location for the given hostname"""
        parameters := hostname;
        query := """select name, address
                    from hosts, locations
                    where hosts.id_location = locations.id
                      and hostname = %hostname%""";
    end define;
end definitions;
The defined function is used in a pattern as the function AssetDatabaseDetails.getLocationOfHost. It is called with a hostname parameter, followed by a connection parameter which refers to a connection created within the Integration Point. It returns a list of nodes, one for each row of output from the query. If the query fails in some way, the function returns none. Each row node has attributes named after the columns selected in the SQL query. So, for example, to use the query defined above:
location_rows := AssetDatabaseDetails.getLocationOfHost(
        hostname := host.hostname,
        connection := $$connection_name$$);
if location_rows = none then
    log.warn('Could not retrieve location for host %host.hostname%');
    stop;
end if;
for row in location_rows do
    // Create / update Location node for this location, with a key
    // made from its name and address.
    loc := model.Location(name := row.name,
                          address := row.address,
                          key := "%row.name%.%row.address%");
    // Relate the Location to the Host...
end for;
// Use MySQL provider to get database info
if mysql.bind_address then
    all_dbs := MySQLDetails.showDatabases(
        endpoint := host,
        address := mysql.bind_address,
        port := port);
else
    all_dbs := MySQLDetails.showDatabases(
        endpoint := host,
        port := port);
end if;
name
A string of the form "%type% version %version% called %instance% running on %host.name%", e.g. "Oracle Database version 10g ca…". As appropriate, the word "called" should be replaced with similar words such as "identified by" or "using". The version number used should be either the product version or the full version, whichever is most appropriate for the product.
key
A string of the form "%instance%/%type%/%host.key%"

name
A string of the form "%type% called %instance% running on %host.name%", e.g. "Oracle Database version 10g called TRADING_DB run…". As appropriate, the word "called" should be replaced with similar words such as "identified by" or "using".
key
A string of the form "%instance%/%type%/%host.key%"

In this case, the SI / BAI is composed of other SIs / BAIs. To group them together, there must be some identifying characteristic for the group. In cas… application, the instance would not be set.

name
A string of the form "%type% version %version% called %instance%". As appropriate, the word "called" should be replaced with similar words such as "identified by" or "using". If there is one global instance, the instance… The version number used should be either the product version or the full version, whichever is most appropriate for the product, or absent altogether.
key
A string of the form "%instance%/%type%"
Syncmapping block
The mappings performed during CMDB synchronization are specified in syncmapping blocks in TPL. A syncmapping is similar to a pattern, but it triggers from a queued synchronization action, rather than from data being updated in the data store. The form of a syncmapping is:

syncmapping name version
    description
    overview
        overview_entries
    end overview;
    [constants
        constant_definitions
    end constants;]
    mapping mapping_source as source_name
        mapping_definitions
    end mapping;
    body
        body_details
    end body;
end syncmapping;

As with pattern blocks, the name, version, and description are mandatory. Template patterns are provided to help you create your own syncmappings.
Overview section
The overview is required. It contains information about the pattern and the entities it creates. It must contain a tags entry, and can have an optional datamodel entry, as described in Data models.
Mapping section
The mapping section declares the starting point for the mapping, the structure of source data retrieved from the Discovery model, and the target CIs created in the CMDB model. It does not describe how the source data is transformed to the target model; that transformation is performed in the body section.
Mapping source
Each mapping is either a root mapping, meaning that it is invoked by the synchronization of a single root node with the corresponding kind, or an extension mapping, meaning that it extends another mapping at a suitable point. Root mappings have a mapping declaration using the on keyword:
For example, this specifies the root mapping for Host nodes:
e.g.
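Based on the syntax forms given in the Where clauses section and the Host_ComputerSystem.host reference used in the virtual machine example below, a root mapping declaration presumably looks like this (a sketch, not verbatim product documentation):

```tpl
// General form: mapping on node_kind [where condition] as source_name
// Assumed root mapping for Host nodes:
mapping on Host as host
    // target CI declarations and traversals go here
end mapping;
```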
The source_scoped_name is the name of a source mapping variable from another mapping block, either the source name specified in the mapping declaration, or a traversal name as described below.
Mapping traversals
The source subgraph is declared using traverse clauses inside the mapping with a syntax similar to traversals in search expressions:
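For instance, the virtual machine mapping shown later in this section declares a traversal like the following (the role, relationship, and node kind names are taken from that example):

```tpl
// General form: traverse traversal_specification [where condition] as name
traverse ContainedHost:HostContainment:HostContainer:SoftwareInstance as vm_si
    // nested target CI declarations and further traversals go here
end traverse;
```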
The name defined by the traversal may only be used in a for each expression; it may not be used in any other context.
Where clauses
The initial source node and the results of traversals can be filtered with where clauses, specified before the as token. where clauses in mapping blocks use the same subset of search where clauses as trigger conditions in pattern blocks.
mapping on node_kind where condition as name
mapping from source_scoped_name where condition as name
traverse traversal_specification where condition as name
Target CI declarations
As the subgraph is processed in the body, target CIs are specified. The mapping block contains declarations of the CIs that are mapped, in the form:
e.g.
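Judging by the virtual machine example that follows, a target CI declaration maps a name used in the body to a CMDB class:

```tpl
// General form (assumed): name -> CMDB_class;
vse -> BMC_VirtualSystemEnabler;
```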
Targets are specified within the traversal structure. For example, part of the mapping of virtual machines is as follows:
mapping from Host_ComputerSystem.host where virtual defined as host
    traverse ContainedHost:HostContainment:HostContainer:SoftwareInstance as vm_si
        vse -> BMC_VirtualSystemEnabler;
        traverse RunningSoftware:HostedSoftware:Host:Host as containing_host
            containing_cs -> BMC_ComputerSystem;
        end traverse;
    end traverse;
end mapping;
Grouping
In some circumstances, a number of nodes in the Discovery model must be grouped together to construct a single CI in the CMDB model. This is declared in the mapping with a group block. The form of a group block is
traverse traversal_specification as traversal_name
    group group_name
        group contents...
        expand group as expansion_name
            expansion contents...
        end expand;
    end group;
end traverse;
The declaration indicates that nodes from the containing traversal will be grouped together (according to rules specified in a group block in the body), and then the group will be expanded to the individual group members. The expand is not required if there is no need to process the individual nodes within the group.
Syncmapping body
The body of a syncmapping is responsible for implementing the mapping described in the mapping block. The majority of language features and functions available in pattern body blocks are permitted, except that functions in the following namespaces are not available, since they are only appropriate for patterns that perform discovery and construct the Discovery model:
discovery
inference
mail
model

Additionally, user-defined functions are not supported.
Body execution
The body of a syncmapping is executed at a time that depends upon the mapping source definition. The body of a root mapping (specified with mapping on) is executed at the time the root node is scheduled for synchronization. The body of an extension mapping (specified with mapping from) that extends the source node of another mapping is executed when the body of the extended mapping completes. The body of an extension mapping that extends a traversed-to node of another mapping is executed each time the associated for each loop (see below) completes.

When a node in the Discovery data store is marked as destroyed, only the root mapping's body is executed, and the target root CI is scheduled for deletion in the CMDB. When the delete is synchronized with the CMDB, the root CI and all the related CIs previously created by the mapping are deleted. For best performance during deletion, root mappings should not perform any traversals or other time-consuming activities.
CI definition
CIs and relationships in the CMDB are specified with functions in the sync namespace similar to those in the model namespace used within pattern blocks. Any CMDB class can be specified with a function call of the form
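A sketch of such a call, patterned on the sync.BMC_Thing example in the Data models section below (the class, attribute values, and variable names here are illustrative assumptions):

```tpl
// Create or update a CMDB CI; the result must be assigned to a
// target CI name declared in the mapping block.
host_ci := sync.BMC_ComputerSystem(
    key  := host.key,    // populates ADDMIntegrationID in the CMDB
    Name := host.name,
    ...);
```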
Any class name can be specified. Specifying a class that is not defined in the CMDB results in a run-time error. The key attribute must be set, and is used to populate the ADDMIntegrationID attribute in the CMDB. Any other attribute name can be set; attributes that are not defined in the CI class are ignored. The result of the function must be assigned to a target CI name specified in the mapping block. The class specified in the function must be the same as the one specified in the mapping or a subclass of it.
CI class namespaces
CMDB classes are assumed to be in the BMC.CORE namespace. To refer to a class in a different namespace, provide a namespace parameter to the function call:
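For example (the class name, namespace value, and key are illustrative assumptions):

```tpl
ci := sync.BMC_SomeClass(
    key       := my_key,
    namespace := "BMC.EXAMPLE",
    ...);
```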
Shared CIs
The subgraph of data in the Discovery model is transformed into a subgraph of CIs in the target CMDB model. Most of the CIs belong to a single target subgraph, but some are shared by more than one subgraph. An example is the BMC_IPConnectivitySubnet CI that is shared by all the computers on a particular subnet. For deletion to work correctly, the system must know that such CIs are shared. This is achieved by calling the function in the sync.shared namespace:
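A sketch, assuming the sync.shared functions mirror the sync call syntax and using the subnet example mentioned above (the attribute values are illustrative):

```tpl
subnet_ci := sync.shared.BMC_IPConnectivitySubnet(
    key  := subnet.key,
    Name := subnet.name);
```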
External CIs
Similarly, it is sometimes necessary to specify a relationship to a CI that is not part of the target subgraph. An example is to relate the BMC_ComputerSystem for a physical host to the one for a virtual host the two CIs belong to different subgraphs. External CIs are specified with a function in the sync.external namespace:
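A sketch, assuming sync.external mirrors the sync call syntax (the class and key variable are illustrative):

```tpl
// Refer to a CI outside the target subgraph; only key (and
// namespace, if required) may be specified.
physical_cs := sync.external.BMC_ComputerSystem(key := physical_host_key);
```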
The key must be specified, and namespace must be specified if required. No other attributes may be set. It is not an error if a CI with the specified key does not exist in the CMDB. In that situation, the CI and any relationships to it are simply ignored.
Cross-reference CIs
As the mapping is processed, the CIs are specified in a tree traversal across the graph. To refer to a CI specified in a different branch of the tree, it can be specified with a function in the sync.crossref namespace:
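A sketch, assuming sync.crossref mirrors the sync call syntax (the class and key variable are illustrative):

```tpl
// Refer to a CI fully specified in another branch of the mapping.
other_ci := sync.crossref.BMC_SoftwareServer(key := other_branch_key);
```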
The key must be specified, and namespace must be specified if required. No other attributes may be set. It is a runtime error to specify a cross-reference to a CI that is not fully specified elsewhere within the mapping.
Relationship definition
Relationships are specified with functions in the sync.rel namespace:
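Following the sync.rel.BMC_Impact call shown in the Data models section below, a relationship definition presumably looks like this (the relationship class, CI variables, and Name value are illustrative assumptions):

```tpl
sync.rel.BMC_Dependency(
    Source      := host_ci,
    Destination := si_ci,
    Name        := "Dependency");   // Name is conventional, not required
```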
The first two parameters must be Source and Destination. Any other attributes can also be set; Name is not required, but it is conventionally always set.
Traversal looping
One of the main activities performed in the body is to iterate over the nodes reached through the traversals specified in the mapping block. A for each loop is used to iterate over the named nodes:
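A sketch combining a traversal name from a mapping block with its body loop (the names and attributes are illustrative assumptions):

```tpl
// Iterate over nodes reached through the vm_si traversal
for each vm_si do
    vse := sync.BMC_VirtualSystemEnabler(
        key  := vm_si.key,
        Name := vm_si.name);
end for;
```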
The nesting structure of for each loops in the body must match the nesting structure of the traverse expressions in the mapping block. Note that a for each loop is required even if the corresponding traversal is expected to reach just one node. There is no other way to access the state of the traversed-to node.
Group block
When the mapping block specifies a group, there must be a corresponding group block in the body. The group block will always be inside a for each block, either directly within a single syncmapping body or in an extended source syncmapping. A group block takes the form:
for each traversed_to_node do
    ...
    ident := group_identifier;
    group group_name with ident do
        ...
        for each expand_name do
            ...
        end for;
    end group;
end for;
The grouping is evaluated in two phases. In the first phase, every iteration of the surrounding for each loop is executed. The nodes are grouped according to the identifier provided to the group expression. After all the iterations of the for each loop, the group block is executed once for each group. If the group declaration in the mapping block contains an expand declaration, there should be a corresponding for each loop in the body. The group content is executed in a context based on an arbitrary member of the group. Any local variables from the surrounding loop will therefore be valid for a member of the group, but there is no guarantee that it will be the same group member each time a particular group is processed.
Data models
Different versions of the CMDB have subtly different data models. Syncmappings can support multiple data models with datamodel declarations. CMDB data models are assigned simple integer values:

Datamodel number    CMDB version
0                   CMDB 2.1 without SIM extension
1                   CMDB 2.1 with SIM extension
2                   CMDB 7.5
3                   CMDB 7.6
4                   CMDB 7.6.03
5                   CMDB 7.6.03 or later with Impact as a Relationship
A syncmapping can limit itself to a particular set of data models with a datamodel declaration in the overview:
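The exact declaration is presumably a datamodel entry alongside the other overview entries, listing the supported data model numbers (the syntax below is an assumption, not verbatim product documentation):

```tpl
overview
    tags example;
    datamodel 3, 4, 5;   // assumed syntax: CMDB 7.6 and later only
end overview;
```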
The body of a syncmapping can further modify its behavior for different data models with a datamodel block. The datamodel block only executes if the data model in effect matches the declaration:
ci := sync.BMC_Thing(key := my_key, ...);

datamodel 2, 3 do
    // Create Impact relationship only for CMDB 7.5 and 7.6
    sync.rel.BMC_Impact(Source := ci, Destination := other_ci);
end datamodel;
The data model in effect is not chosen automatically; you select it when you configure CMDB synchronization, as described in Setting up the CMDB synchronization connection.
TPL Keywords
The following are TPL keywords: and as body break by configuration constants continue default define defined definitions desc do elif else end enumeration exists expand explode false flags for from has identify if import in is locale matches metadata module nodes nodecount none not on or order out overrides overview pattern processwith relationship removal requires search show step stop substring subword summary table tags taxonomy then tpl traverse triggers true
Module structure
<module> ::= tpl <version_number> module <ids> ';' <metadata>? ( <definition> ';' )+
<metadata> ::= metadata ( <id> ':=' <qualified_string>+ )+ end metadata ';'
<definition> ::= <import> | <pattern> | <table> | <identify>
<import> ::= from <module_name> import <id> <version_number> (as <id>)? (',' <id> <version_number> (as <id>)?)*
Tags
<tags> ::= tags <tag_name> (',' <tag_name>)*
Top-level declarations
<pattern> ::= pattern <pattern_name> <version_number> <description> <metadata>? <overview> <constants>? <triggers> <pattern_body> <removal>* end pattern
<table> ::= table <table_name> <version_number> <table_row>* end table
<identify> ::= identify <identify_name> <version_number> <tags> ';' <node_kind> <attribute_name> (',' <attribute_name>)+ '->' <attribute_name> (',' <attribute_name>)+ ';' <table_row>* end identify
<table_row> ::= <table_key> '->' <table_value> ';' | default '->' <table_value> ';'
<table_key> ::= <constant_expression>
<table_value> ::= <constant_expression> (',' <constant_expression>)*
Overview
<overview> ::= overview ( <overview_entry> ';' )* end overview
<overview_entry> ::= <implements> | <overrides> | <requires> | <tags>
<overrides> ::= overrides <pattern_name> ( ',' <pattern_name> )*
<requires> ::= requires <pattern_name> ( ',' <pattern_name> )*
Constants
<constants> ::= constants ( <pattern_constant> )+ end constants ';'
<pattern_constant> ::= <id> ':=' <constant_expression> ';'
Triggers
<triggers> ::= triggers (scope <node_kind>)? <trigger_condition>* end triggers ';'
<trigger_condition> ::= on <trigger_match_list> ';'
<trigger_match_list> ::= <trigger_match_kind>
<trigger_match_kind> ::= <id> ':=' ( relationship )? <id> <trigger_fires_list>? where <trigger_match>
<trigger_fires_list> ::= <trigger_fires> ( ',' <trigger_fires> )*
<trigger_fires> ::= created | modified | destroyed | confirmed
<trigger_match> ::= <trigger_match_and>
<trigger_match_and> ::= <trigger_match_attribute> | <trigger_match_and> and <trigger_match_attribute>
<trigger_match_attribute> ::= <id> '=' <constant_expression> | <id> exists | <id> in <constant_expression> | <id> matches <constant_expression> | '(' <trigger_match_and> ')'
Body
<pattern_body> ::= body <statements> end body ';'
Removal
<removal> ::= removal on <variable> ':=' <node_kind> <removal_kind> ( where <trigger_match> )?';' <statement_list> end removal ';' <removal_kind> ::= aged | unconfirmed
Statements
<statement_list> ::= ( <statement> ';' )*
<statement> ::= <empty_statement> | <assignment_statement> | <function_call> | <if_statement> | <for_statement> | <break_statement> | <continue_statement> | <stop_statement>
<empty_statement> ::=
Assignment
<assignment_statement> ::= <lvalue_list> ':=' <logical_expression>
<lvalue_list> ::= <lvalue> ( ',' <lvalue> )*
<lvalue> ::= <ids> | <lvalue> '[' <logical_expression> ']'
If
<if_statement> ::= if <logical_expression> then <statements> (elif <logical_expression> then <statements>)* (else <statements>)? end if
For
<for_statement> ::= for <variable> in <logical_expression> do <statements> end for
<break_statement> ::= break
<continue_statement> ::= continue
Stop
<stop_statement> ::= stop
Functions
<function_call> ::= <function> '(' <arg_list>? ')'
<arg_list> ::= <arg_spec> (',' <arg_spec>)*
<arg_spec> ::= <logical_expression> | <id> ':=' <logical_expression>
Expand expression
<expand_expression> ::= expand '(' <arg_list> ')'
Expressions
<logical_expression> ::= <and_test> | <logical_expression> or <and_test>
<and_test> ::= <not_test> | <and_test> and <not_test>
<not_test> ::= <comparison> | not <not_test>
<comparison> ::= <or_expression> | <or_expression> <relational> <or_expression> | '*' <relational> <or_expression> | <or_expression> is? not? defined
<relational> ::= '<' | '<=' | '>' | '>=' | '=' | '<>' | in | not in | has subword | has substring | matches
<or_expression> ::= <xor_expression> | <or_expression> '|' <xor_expression>
<xor_expression> ::= <and_expression> | <xor_expression> '^' <and_expression>
<and_expression> ::= <shift_expression> | <and_expression> '&' <shift_expression>
<shift_expression> ::= <add_expression> | <shift_expression> '<<' <add_expression> | <shift_expression> '>>' <add_expression>
<add_expression> ::= <multiply_expression> | <add_expression> '+' <multiply_expression> | <add_expression> '-' <multiply_expression>
<multiply_expression> ::= <unary_expression> | <multiply_expression> '*' <unary_expression> | <multiply_expression> '/' <unary_expression> | <multiply_expression> '%' <unary_expression>
<unary_expression> ::= <primary> | '-' <unary_expression> | '+' <unary_expression> | '~' <unary_expression>
<constant_expression> ::= <logical_expression>
<primary> ::= <atom> | <function_call> | <search_expression> | <expand_expression> | <subscript> | <list_expression> | <attribute> | <keyword_expression> | <group_expression>
<subscript> ::= <primary> '[' <logical_expression> ']'
<list_expression> ::= '[' ( <logical_expression> ',' )* <logical_expression>? ']'
<scoped_attribute> ::= <scope> '.' <attribute>
<attribute> ::= <ids> | <ids> '.' <key_expression>
<key_expression> ::= '#' | '##' | '#id' | '#' <traversal> | '#' <traversal> '.' <attribute> | '#' <string> '(' <attributes> ')'
<keyword_expression> ::= structure
<group_expression> ::= '(' <expression> ( ',' <expression> )* ','? ')'
<atom> ::= none | <bool> | <qualified_string> | <number>
Search expression
<search_expression> ::= search '(' <flags>? <search_in>? <kind_list> <with_funcs>? <where_clause>? <traversals>? <order_by_clause>? <locale>? <show>? <search_process_with>? ')'
<flags> ::= flags '(' <id> ( ',' <id> )* ')'
<search_in> ::= in <id> | in <string>
<kind_list> ::= '*' | <kind_comma_list>
<kind_comma_list> ::= <node_kind> ( ',' <node_kind> )*
with functions
<with_funcs> ::= with <with_func> ( ',' <with_func> )*
<with_func> ::= <id> '(' <search_expression_commalist> ')' as <id>
traversal
<traversals> ::= <traverse_or_expand>+
<traverse_or_expand> ::= <traverse_clause> | <expand_clause> | <step_in_clause> | <step_out_clause>
<traverse_clause> ::= traverse <traversal>
<expand_clause> ::= expand <traversal>
<step_in_clause> ::= step in <traversal>
<step_out_clause> ::= step out <traversal>
where
<where_clause> ::= where <search_logical_expression>
order by
<order_by_clause> ::= order by <order_by_commalist> | order by taxonomy
<order_by_commalist> ::= <order_by> ( ',' <order_by> )*
<order_by> ::= <search_value_expression> <direction>?
<direction> ::= desc
locale
<locale> ::= locale <string> | locale <id>
show
<show> ::= show <show_list>?
<show_list> ::= <show_clause> ( ',' <show_clause> )*
<show_clause> ::= <search_value_expression> <as>? | explode <search_value_expression> <as>? | summary | '*'
<as> ::= as <id> | as <string>
<search_logical_expression> ::= <search_and_expression> | <search_logical_expression> or <search_and_expression>
<search_and_expression> ::= <search_not_expression> | <search_and_expression> and <search_not_expression>
<search_not_expression> ::= <search_comparison> | not <search_not_expression>
<search_comparison> ::= <search_value_expression> '=' <search_value_expression> | <search_value_expression> '<>' <search_value_expression> | <search_value_expression> '>=' <search_value_expression> | <search_value_expression> '>' <search_value_expression> | <search_value_expression> '<=' <search_value_expression> | <search_value_expression> '<' <search_value_expression> | <search_value_expression> has subword <search_value_expression> | <search_value_expression> has substring <search_value_expression> | <search_value_expression> matches <search_value_expression> | <search_value_expression> not matches <search_value_expression> | <search_value_expression> in <search_value_expression> | <search_value_expression> not in <search_value_expression> | <search_value_expression> defined | <search_value_expression> is defined | <search_value_expression> not defined | <search_value_expression> is not defined | '*' '=' <search_value_expression> | '*' has subword <search_value_expression> | '*' has substring <search_value_expression> | '*' matches <search_value_expression> | <search_value_expression>
<search_value_expression> ::= <search_multiply_expression> | <search_value_expression> '+' <search_multiply_expression> | <search_value_expression> '-' <search_multiply_expression>
<search_multiply_expression> ::= <search_unary_expression> | <search_multiply_expression> '/' <search_unary_expression> | <search_multiply_expression> '*' <search_unary_expression>
<search_unary_expression> ::= <search_primary_expression> | '-' <search_unary_expression>
<search_primary_expression> ::= <search_value> | <id> '(' <search_expression_commalist>? ')' | '[' <search_expression_commalist>? ']' | '(' <search_logical_expression> ')'
<search_value> ::= nodecount '(' <traversals>? ')' | nodes '(' <traversals>? ')' | <number> | <string> | <search_attribute>
<search_attribute_commalist> ::= <search_attribute> ( ',' <search_attribute> )*
<search_attribute> ::= <id> | <search_key_expression> | <search_at_expression>
<search_key_expression> ::= '#' | '##' | '#id' | '#' <traversal> | '#' <traversal> '.' <search_attribute> | '#' <string> '(' <search_attribute_commalist> ')'
<search_expression_commalist> ::= <search_value_expression> ( ',' <search_value_expression> )*
<search_process_with> ::= processwith <search_process_with_func_list>
<search_process_with_func_list> ::= <search_process_with_func> | <search_process_with_func_list> ',' <search_process_with_func>
A common way of incorporating location information into a host is to store a file in the host's file system. The template_si_version_xml_file template pattern demonstrates taking information from an XML file to version an SI. The same approach can be used to extract location information from an XML file. If the information is stored in an XML file, use xpath as shown in the template pattern. The TPL xpath functions are described in the TPL Guide. Where the information is stored in a flat file, a simple regex is the most effective method of extracting the location. The TPL regex functions are described in the TPL Guide. The full specification of the regular expression syntax used in BMC Atrium Discovery can be found on the Python website. A simpler introduction is also available. Here is an example XML snippet for a file:
<?xml version="1.0"?>
<config>
    <physical location="Houston">
    ...
Here is the TPL which retrieves the file and uses xpath to obtain the location from the file:
// Get config file and check that it was retrieved successfully
cfgfile := discovery.fileGet(host, cfgfilePath);
if cfgfile then
    // Extract location from conf file
    location := xpath.evaluate(cfgfile.content, raw '/config/physical/@location');
Now you can create or update the relationship between the host and location node. The following code snippet uses the model.uniquerel function to create or update the relationship.
// Relate host to location. Using "uniquerel" rather than "rel" means that
// any existing Location relationships between this Host (the first
// parameter) and any Location nodes other than the one given (the second
// parameter) are removed. If a host has changed location, this keeps the
// model up-to-date.
log.info("Subnet_2_Locations: model.uniquerel.Location %host% %location%");
model.uniquerel.Location(
    ElementInLocation := host,
    Location := location
);
See the template_si_version_xml_file template pattern for an example of how these statements are used in the pattern.
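Where the location is stored in a flat file rather than XML, the regex approach mentioned above can be sketched in Python, whose regular expression syntax TPL's regex functions follow. The file format and the `location=` key here are illustrative assumptions, not a BMC convention:

```python
import re

# Hypothetical flat-file contents; the 'location=' key is an assumption
# made for this illustration only.
config_text = """\
datacenter=primary
location=Houston
"""

# MULTILINE makes ^ and $ match at each line boundary in the file
match = re.search(r'^location=(\S+)$', config_text, re.MULTILINE)
location = match.group(1) if match else None
assert location == "Houston"
```

The same expression, written as a TPL regex, would be applied to the retrieved file content in place of the xpath call shown above.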
table SubnetLocations 1.0
    "101.10.10.0/24" -> "London Victoria";
    "101.10.11.0/24" -> "London Egham";
    "137.72.94.0/24" -> "Houston";
    "137.72.95.0/24" -> "Dallas";
    default -> "unknown";
end table;
The following TPL snippet shows the search from the host to associated subnet or subnets.
subnet_list := search(in host
                      traverse DeviceWithAddress:DeviceAddress:IPv4Address:IPAddress
                      traverse DeviceOnSubnet:DeviceSubnet:Subnet:Subnet);
location_id := '';
list_size := size(subnet_list);
log.debug("Subnet_2_Locations: subnet_list size is %list_size%");
for subnet_node in subnet_list do
    log.debug("Subnet_2_Locations: Current subnet_node is %subnet_node.ip_address_range%");
    // if subnet_node is "None" get next node otherwise we can use this one
    if '%subnet_node.ip_address_range%' <> "None" then
        location_id := '%subnet_node.ip_address_range%';
        break;
    end if;
end for;
This TPL snippet uses the table to look up the location from the subnet.
// Look up the location name from the table
location_name := SubnetLocations[location_id];
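The TPL table lookup above behaves like a mapping with a fallback default. As a rough analogy, the same lookup in Python (purely illustrative, not TPL):

```python
# Python analogue of the SubnetLocations table: a dict whose lookup falls
# back to the table's 'default' entry when the key is absent.
subnet_locations = {
    "101.10.10.0/24": "London Victoria",
    "101.10.11.0/24": "London Egham",
    "137.72.94.0/24": "Houston",
    "137.72.95.0/24": "Dallas",
}

def lookup_location(subnet):
    return subnet_locations.get(subnet, "unknown")

assert lookup_location("137.72.94.0/24") == "Houston"
assert lookup_location("10.0.0.0/8") == "unknown"
```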
The BMC Atrium Discovery data model allows additional attributes to be added to nodes. Only ever add additional attributes to inferred nodes; you must never update DDD nodes. Attributes may be added by patterns at any time, but it is strongly recommended that you add attributes at creation time when building your own inferred nodes. Modifying system-created inferred nodes must be done after creation, using a separate pattern. Once you have modified the node, it is necessary to make the new attribute visible. There are two ways to do this: update the taxonomy by adding a custom taxonomy extension, or populate the _tw_meta_data_attrs attribute on the inferred node. Ensure that only one pattern modifies the _tw_meta_data_attrs value on a particular node; otherwise, the conflicting changes are likely to result in the loss of one of the updates.
Legal Information
Copyright 2012 BMC Software, Inc. BMC, BMC Software, and the BMC Software logo are the exclusive properties of BMC Software, Inc., are registered with the U.S. Patent and Trademark Office, and may be registered or pending registration in other countries. All other BMC trademarks, service marks, and logos may be registered or pending registration in the U.S. or in other countries. All other trademarks or registered trademarks are the property of their respective owners. The information included in this documentation is the proprietary and confidential information of BMC Software, Inc., its affiliates, or licensors. Your use of this information is subject to the terms and conditions of the applicable End User License agreement for the product and to the proprietary and restricted rights notices included in the product documentation.
BMC Software, Inc. 5th Floor 157-197 Buckingham Palace Road London SW1W 9SP United Kingdom Tel: +44 (0) 800 279 8925 Email: customer_support@bmc.com Web: http://www.bmc.com/support
Table of Contents
1 Table of Contents
2 Security
2.1 Security in BMC Atrium Discovery
2.2 An appliance-based solution
2.3 Appliance hardening
2.4 Understanding security audits
2.5 Information security
2.6 System communications
2.7 Discovery communications
2.8 Firewall Port Summary
2.9 DISA Security Technical Implementation Guides
2.10 Running in FIPS compliant mode
3 Legal Information
Security
This document provides a brief overview of the security aspects of BMC Atrium Discovery. It is intended to give network administrators the information required to run BMC Atrium Discovery in their environment, and to enable security teams to verify that BMC Atrium Discovery is secure and does not compromise the security of their network.
- Security in BMC Atrium Discovery
- An appliance-based solution
- Appliance hardening
- Understanding security audits
- Information security
- System communications
- Discovery communications
- Firewall Port Summary
- DISA Security Technical Implementation Guides
- Running in FIPS compliant mode
BMC Atrium Discovery is an appliance-based tool which automates discovery of business applications, maps them onto the underlying physical and virtual IT infrastructure, and determines the critical dependencies between them. BMC Atrium Discovery's model-driven, data center indexing techniques cut across previously disparate silos of configuration information, automatically populating and maintaining a data store of the discovered state of Configuration Items (CI) and dependency information. BMC Atrium Discovery automates many system administrator and application management team tasks and also stores customer-sensitive data. At its core, BMC Atrium Discovery ensures the confidentiality and integrity of the discovery processes as well as the indexed data itself. This document is intended to provide network administrators with the information required to get BMC Atrium Discovery working in their environment. It also provides the information required to enable security teams to verify that BMC Atrium Discovery is secure and does not compromise the security of their network.
An appliance-based solution
A frequently asked question in the technology industry is whether to favor appliance-based solutions (hardware or virtual) or software-based solutions that must be installed and configured; it is a valid question, as products in the same category often take these two different approaches. While the cost, performance, maintenance, and support characteristics of these approaches are similar, the differences in security are often a source of concern.
Appliance advantages
Anything running on a host can be considered a potential security risk. If a component is not actually required then it is safer not to install it. Vulnerabilities in the many tools and utilities installed and running in a default installation of an operating system are known and exploited. The appliance approach provides a tightly controlled system in which only the essential tools and utilities are installed. These tools and utilities, including the Red Hat Enterprise Linux 6 operating system, are hardened to allow only authorized access and ensure the integrity of the system. See Appliance hardening for more information.
Security updates
A considerable advantage of the appliance approach over a software solution is a known and understood system in which the interaction between components is designed and deliberately limited to that design. When patches to the Red Hat Enterprise Linux OS are released, BMC Software checks whether they are appropriate to the appliance. Many are not, owing to the subset of packages used in the appliance. Where a patch is appropriate, it is tested and rolled into the next available operating system upgrade or product release; urgent updates are released as a Hot Fix. BMC provides regular monthly upgrades to the BMC Atrium Discovery operating system; each upgraded package is checked to see whether it is appropriate to the appliance. See operating system upgrades for more information.
Do not download and apply Red Hat OS patches
It is most important that OS patches released by Red Hat are not downloaded and applied to the appliance; doing so may result in reduced rather than enhanced security. For example, a patch may reinitialize a service, modify security configurations, or change kernel parameters, any of which may cause unexpected behavior.
Patch versioning
Red Hat does not increment the base version of any package until the whole release is incremented. Instead, they continuously apply security patches to the existing base version. This means that simpler security scanners can report false positives, as they look only at the base version of the packages.
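To see why such scanners misreport, consider a sketch (illustrative Python, with a hypothetical helper name) of a naive check that looks only at the upstream base version in an RPM name and ignores the backported fix carried in the release field:

```python
import re

def base_version(rpm_name):
    """Extract the upstream base version from an RPM name in
    name-version-release form, e.g. 'openssl-0.9.8e-20.el5' -> '0.9.8e'."""
    m = re.match(r'(?P<name>.+)-(?P<version>[^-]+)-(?P<release>[^-]+)$', rpm_name)
    return m.group('version')

# A naive scanner compares only the base version against the upstream fix
# version, so anything older looks vulnerable...
installed = 'openssl-0.9.8e-20.el5'
assert base_version(installed) == '0.9.8e'
# ...even though the '-20.el5' release may already carry the backported fix,
# which is why each report must be checked against the Red Hat advisory.
```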
Software-based solutions
In contrast, software-based solutions are generally installed on servers supplied by customers. This approach has advantages, as it gives the customer full control over how to implement, configure, and support the solution. However, it brings several considerations that may impact the security of the system. Vendors often specify a minimum set of operating system packages required to support the software-based solution, placing customers in the difficult position of choosing what is needed and what is not. Not only does this open the door to potential security vulnerabilities, it also makes the task of hardening the system far more complicated. Finally, the more packages there are on a server, the more security patches a company must monitor to keep it secure. Since the customer generally provides the server, the burden of monitoring security patches falls on the customer.
Appliance hardening
The following measures are taken to harden the BMC Atrium Discovery appliance when it is built:
- The OS is built using only a small number of packages, all of which are required
- Only the required services are enabled
- The firewall is specifically tuned for the appliance
- Unnecessary user accounts are removed
- telnet and ftp are disabled (access is via ssh only)
- No remote logins as root
- Specific kernel parameters are set, such as ICMP echo broadcast
- Permissions on logging, cron, and configuration are set to require a privileged user
- Mount options are configured to permit only certain operations on specific partitions
- Password quality criteria are set
- SETUID privileges are removed from certain applications
The appliance is also equipped with its own baseline monitoring system (based on the open source Tripwire product), which can be configured to take action automatically in case of unauthorized changes, such as shutting down the appliance or disabling access.
The complete package list is included in the Release Notes rather than this document, as packages can change between minor releases.
User management
The BMC Atrium Discovery application's internal user management service offers all the features required to support ISO 17799 guidelines, specifically:
- Account management
- Password management policies (strength, reuse, lifecycle)
- Granular group permissions
- Account blocking after authentication failures
- Automatic account lockout (for example, an account not used for 60 consecutive days)
- Automatic session lockout (for example, a session left idle for more than 30 minutes)
Many firms have invested in identity and access management solutions to centralize the management of users and of the permissions to the applications they can access. BMC Atrium Discovery can also integrate with a corporate LDAP solution such as Active Directory, so that user accounts and group permissions can be managed directly from the LDAP server. LDAP groups can be mapped as desired to BMC Atrium Discovery groups to simplify overall administration.
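As a rough illustration of the two idle-lockout policies mentioned above (the thresholds come from the examples in the text; the logic is a sketch, not the product's implementation):

```python
from datetime import datetime, timedelta

# Thresholds taken from the examples in the text; sketch only.
ACCOUNT_IDLE_LIMIT = timedelta(days=60)     # account unused for 60 days
SESSION_IDLE_LIMIT = timedelta(minutes=30)  # session idle for 30 minutes

def account_locked(last_login, now):
    return now - last_login > ACCOUNT_IDLE_LIMIT

def session_locked(last_activity, now):
    return now - last_activity > SESSION_IDLE_LIMIT

now = datetime(2012, 10, 1, 12, 0)
assert account_locked(datetime(2012, 7, 1), now)             # 92 days idle
assert not session_locked(now - timedelta(minutes=10), now)  # still active
```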
Appliance firewall
The appliance firewall is pre-configured to ensure that only the following incoming traffic is allowed. Windows proxy communication is always initiated from the appliance, so it is not listed here. The open ports listed below are incoming TCP ports to the appliance.

Port Number  Description
22           For remote management of the appliance OS.
80           For accessing the appliance web user interface, if enabled.
443          For accessing the appliance secure user interface, if enabled.
25032        To enable discovery consolidation.
The appliance approach provides a known and understood system in which the interaction between components is designed; the firewall is one of those components. Consequently, the appliance is expected to have full control over the firewall. Local Linux system administrators should not make any changes to the appliance firewall, as this may compromise the appliance's security, and any changes will be lost when the appliance is upgraded. Where further monitoring or protection is required, the appliance should be placed behind an additional firewall.
Proxy port changes in 8.3 SP2 and later
Starting from BMC Atrium Discovery 8.3 SP2, proxies are not limited to the default ports, and it is possible to install multiple proxies of each type on a single host. Consequently, in BMC Atrium Discovery 8.3 SP2 and later you must check the proxy manager to determine which ports the proxies are using. The defaults are the same as in previous releases, but installations of additional proxies use incrementing ports. You can also use the proxy manager to modify the port that each proxy uses.
The ports given are incoming TCP ports to the Windows proxy host.

Port Number  Description
4321         Used to connect to an Active Directory Windows proxy from the BMC Atrium Discovery appliance.
4322         Used to connect to a Workgroup Windows proxy from the BMC Atrium Discovery appliance.
4323         Used to connect to a Credential Windows proxy from the BMC Atrium Discovery appliance.
Penetration testing
To ensure BMC Atrium Discovery data integrity and confidentiality, the BMC Quality Assurance group performs a thorough assessment on each major and minor release. UI penetration tests are made with IBM AppScan. System penetration tests are made with Tenable Nessus. Bastille Linux is used in assessment mode to confirm the security configuration of the system.
- LibXML issues that could cause crashes: there is no UI exposure of the XML system, and the exploit requires command line access as the tideway user.
- WLAN issue with the kernel: the exploit requires WLAN to be enabled and WLAN kernel extensions to be installed. Neither is installed on the appliance.
- OpenSSH X11 port forwarding hijack: X11 is not installed on the appliance.
- OpenSSL "record of death": not applicable to the version of OpenSSL installed on the appliance.
- Sudo RunAs Group: not applicable to the version of sudo installed on the appliance.
- SQL injection errors: the data store does not use SQL.
The next section describes ways in which you can identify similar false positives.
Example 1
This is an excerpt from the output of a vulnerability reporting tool:

Risk Level: High
Sev Code: Cat I
Category: Local Unix Security Audits
CVE: CVE-2009-0688
Description: The CMU Cyrus SASL (Simple Authentication and Security Layer) library contains a buffer overflow vulnerability when encoding strings into BASE64 format. Successful exploitation could allow attackers to execute arbitrary code or cause the application using the Cyrus SASL library to crash. Note: This audit is for versions of Cyrus SASL obtained from asg.web.cmu.edu/sasl/ and may report false findings with vendor-specific backports of Cyrus SASL.
Fix: Upgrade the Cyrus SASL Library to version 2.1.23 or newer.
The first step is to check the details on the CVE page. To find this page, check the MITRE CVE site and search for the CVE number. Entering the CVE number (CVE-2009-0688) in a search engine should also find the CVE. The CVE shows that Red Hat have released a statement, a Red Hat Security Advisory (RHSA) describing the vulnerability:
Note: The Red Hat Security Advisory is where Red Hat either releases update information or references the Red Hat Bugzilla issue number; in either case, the entries are generally preceded by REDHAT. Either click the link provided by the CVE and search for the RHSA number (RHSA-2009-1116), or search the Red Hat Bugzilla database for the issue number. The RHSA tells you in which version of the package the bug is fixed. The edition of Red Hat we are running is referred to as Red Hat Enterprise Linux Server; all other editions can be ignored (Workstation, AS, ES, EUS, Long Life, WS, and so forth). Search for the x86_64 version (Intel/AMD 64-bit). See the example below:
The details and descriptions provided in the RHSA and CVE contain much useful information. Often Red Hat will explain why a specific package is affected. Occasionally the CVE will indicate that the issue is with a specific package, as in this example, where the CVE states that it is the Cyrus SASL library that has the issue. Red Hat, however, have fixed the issue in the cyrus-imapd package rather than the cyrus-sasl package, as described in the RHSA details section. Now that we know the package version in which Red Hat fixed the vulnerability, we must confirm that the package is updated in the latest version of BMC Atrium Discovery. In the following example, we check whether or not the packages are installed. As all the affected packages listed on the RHSA page start with cyrus-imapd, it is simpler to search the entire RPM database and grep for the prefix than to search individually:
[tideway@disco01 ~]$ rpm -qa | grep cyrus-imapd
[tideway@disco01 ~]$
The package does not appear to be installed. Performing a wider search just for cyrus shows the cyrus packages installed:
[tideway@disco01 ~]$ rpm -qa | grep cyrus
cyrus-sasl-lib-2.1.22-5.el5_4.3
cyrus-sasl-lib-2.1.22-5.el5_4.3
cyrus-sasl-devel-2.1.22-5.el5_4.3
cyrus-sasl-plain-2.1.22-5.el5_4.3
cyrus-sasl-2.1.22-5.el5_4.3
cyrus-sasl-plain-2.1.22-5.el5_4.3
[tideway@disco01 ~]$
The cyrus-imapd package is definitely not installed, so the issue cannot affect BMC Atrium Discovery. This issue can be reported as a false positive.
Example 2
Sometimes it is not immediately obvious whether or not an issue affects Red Hat, and careful attention needs to be paid to the CVE detail. This is an excerpt from the output of a vulnerability reporting tool:

Risk Level: Medium
Sev Code: Cat II
Category: Web Servers
CVE: CVE-2010-0740
Description: OpenSSL contains a vulnerability (the "record of death") when handling malformed records during TLS connections. Successful exploitation could allow attackers to cause denial of service conditions (a daemon crash). Note: This audit is for versions of OpenSSL obtained from OpenSSL.org and may report false findings with vendor-specific backports.
Fix: Upgrade software that is using vulnerable versions of OpenSSL libraries and/or upgrade OpenSSL to version 0.9.8n or newer.

The first step is to check the details on the CVE page. Immediately it is evident that there is no REDHAT entry listed; the closest thing is Fedora, which is another Linux distribution sponsored by Red Hat.
The report states that we should upgrade to version 0.9.8n or later. The following example shows how to check the version currently installed.
[tideway@disco01 ~]$ rpm -q openssl
openssl-0.9.8e-20.el5
openssl-0.9.8e-20.el5
[tideway@disco01 ~]$
Note: In this example, the same version of the package is listed twice. This occurs when both 32 and 64 bit packages are installed. The version running is 0.9.8e, which is definitely earlier than 0.9.8n, the version the CVE recommends. Red Hat, however, have not provided a fix; possibly one was never required. Careful reading of the CVE description offers a clue:

The ssl3_get_record function in ssl/s3_pkt.c in OpenSSL 0.9.8f through 0.9.8m allows remote attackers to cause a denial of service (crash) via a malformed record in a TLS connection that triggers a NULL pointer dereference, related to the minor version number.

According to the CVE description, the vulnerability exists in versions 0.9.8f through 0.9.8m. The virtual appliance is running version 0.9.8e, so the vulnerability introduced in the changes from e to f does not affect our version. Again, the issue cannot affect BMC Atrium Discovery and can be reported as a false positive.
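The version-range check can be sketched as follows. For these single-letter OpenSSL suffixes, plain string comparison happens to order the versions correctly; that is a convenient property of this particular series, not a general version-comparison technique:

```python
# CVE-2010-0740 affects OpenSSL 0.9.8f through 0.9.8m. For single-letter
# suffixes in the same series, string comparison happens to sort correctly
# (NOT a general-purpose version comparison).
def in_affected_range(version, first='0.9.8f', last='0.9.8m'):
    return first <= version <= last

assert not in_affected_range('0.9.8e')   # the installed version predates the bug
assert in_affected_range('0.9.8k')
assert not in_affected_range('0.9.8n')   # the upstream fix version
```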
System communications
Wherever possible, communications between elements of the system use high-grade encryption. The core of the application manages the discovery and reasoning engines. It consistently interacts with the security engine to ensure user authentication and request authorization, so that each action taken by the application can only be triggered from the application itself or by a user through the application UI or command line. External communications between the user and the application can be configured to use HTTPS over 128-bit SSL. The encryption of communication between the discovery engine (appliance or Windows proxy) and the target depends on the discovery method used; for example, ssh is encrypted, but telnet and rlogin (which may both be disabled) are not. Discovery credentials can be configured to use a user-supplied SSH key per credential. These keys and their associated passphrases are stored in the credential vault. It is recommended that SSH keys are always protected with a strong passphrase.
Secure communications
Secure communications between elements of the system use CORBA over TLS (Transport Layer Security) with the following details:
- Protocol: TLSv1
- Encryption: AES_256_CBC
- Message hashing: SHA1
- Key Exchange: DHE_RSA (2048)
It is enabled using certificates in the following locations:
- Each appliance (scanning or consolidation)
- Each Windows proxy
- Certificate Authority on each appliance and proxy
This refers to communications between components of the BMC Atrium Discovery system, not communications between BMC Atrium Discovery and discovery targets, or the user's web browser.
Support has not been built into the product to enable you to replace the default certificates with your own. However, it is possible to replace certificates on a like-for-like basis, as described in the following section.
End-user authentication
End-user application authentication is critical to the security of the entire solution. BMC Atrium Discovery supports a number of Web authentication plug-ins and various levels of authentication strength, requiring one of several authentication factors:
- SSL Client Certificate Verification: strong authentication using a public key infrastructure certificate. The client's SSL certificate is verified by the Web server; the user name is extracted from the certificate and used for authorization via LDAP.
- SSL Certificate Lookup: the user is authenticated by looking up custom parts of the client's SSL certificate via LDAP. The certificate is not verified, but it must be valid.
- LDAP Authentication: the user is authenticated against an LDAP server by entering a username and password.
- Standard Web Authentication: the user is authenticated as a local user by entering a username and password.
Discovery communications
This section describes communication between the BMC Atrium Discovery appliance, Windows proxies, and discovery targets.
Port 4 (TCP and UDP) is required when using IP fingerprinting, as Discovery must observe the response from a guaranteed closed port on the endpoint. Port 4 must be closed on the discovery target, but must be open on any firewall between the appliance and the discovery target, so that the response comes from the target rather than the firewall. Where this is not the case, the heuristic receives responses from two different TCP/IP stacks, leading to unpredictable results, including the endpoint being classified as a firewall or an unrecognized device. This can lead BMC Atrium Discovery to skip devices (see UnsupportedDevice in the DiscoveryAccess page). The ports listed in the following table are used to determine what device is present.

Port Number  Port assignment
4            Closed Port
21           FTP
22           SSH
23           telnet
80           HTTP
135          Windows RPC
161          SNMP
513          rlogin
3940         Discovery for z/OS Agent
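The closed-port heuristic can be sketched with a plain TCP connect: a connection refusal (RST) proves the reply came from the target's own TCP/IP stack, while silence suggests a firewall silently dropping packets. This is a simplified illustration, not the fingerprinting logic Discovery actually uses:

```python
import socket

def probe(host, port, timeout=2.0):
    """Classify a TCP port: 'open' (connection accepted), 'closed' (RST
    received, so the reply came from the target itself), or 'filtered'
    (no reply at all, typically a firewall dropping packets)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "closed"
    except socket.timeout:
        return "filtered"

# Find a port on localhost that is currently free, then probe it:
s = socket.socket()
s.bind(("127.0.0.1", 0))
free_port = s.getsockname()[1]
s.close()
assert probe("127.0.0.1", free_port) == "closed"
```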
WMI
The ports used by WMI discovery methods and the corresponding port assignments are described in the following table.

Port Number  Port assignment
135          DCE RPC Endpoint Manager. DCOM Service Control
1024-1030    Restricted DCOM. One of these ports is used after initial negotiation.
1024-65535   Unrestricted DCOM. One of these ports is used after initial negotiation.
139          Netbios Session Service
445          Microsoft Directory Services SMB
All WMI communication from BMC Atrium Discovery is sent with Packet Privacy enabled. If the host being discovered does not support Packet Privacy, the flag is ignored and WMI returns the requested information (for example, if you run a version earlier than Windows Server 2003 with Service Pack 1 (SP1)).
By default, WMI (DCOM) uses a randomly selected TCP port between 1024 and 65535. To simplify configuration of the firewall, you should restrict this usage if you scan through firewalls. See To set the DCOM Port Range for more information.
These settings should be restricted on the target host, not the Windows proxy host.
1. Using a registry editor, create the key HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc\Internet.
2. Within that key, create a REG_MULTI_SZ (Multi-String Value) called Ports.
3. Enter the port(s) or port range you want to use. The Windows proxy uses only one port; however, if other DCOM applications are in use on that machine, you might need to enable a larger range.
4. Create a REG_SZ (String Value) called PortsInternetAvailable and give it the value Y.
5. Create a REG_SZ (String Value) called UseInternetPorts and give it the value Y.
6. Restart the computer.
You should also read the relevant Microsoft article about this issue: How to configure RPC dynamic port allocation to work with firewalls.
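The steps above can also be sketched as reg.exe commands run from an elevated command prompt. The port range 5000-5020 is an illustrative value only, not a recommendation; choose a range appropriate to your environment:

```
reg add "HKLM\Software\Microsoft\Rpc\Internet" /v Ports /t REG_MULTI_SZ /d "5000-5020" /f
reg add "HKLM\Software\Microsoft\Rpc\Internet" /v PortsInternetAvailable /t REG_SZ /d "Y" /f
reg add "HKLM\Software\Microsoft\Rpc\Internet" /v UseInternetPorts /t REG_SZ /d "Y" /f
```

A restart of the computer is still required afterwards, as in step 6.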
RemQuery
The ports used by RemQuery discovery and the corresponding port assignments are described in the following table.

Port Number  Port assignment
139          Netbios Session Service
445          Microsoft Directory Services SMB
Proxy port changes in 8.3 SP2
In BMC Atrium Discovery 8.3 SP2, proxies are not limited to the default ports. It is also possible to install multiple proxies of each type on a single host. Consequently, in BMC Atrium Discovery 8.3 SP2 you must check the proxy manager to determine which ports the proxies are using. The defaults are the same as previous releases, but installations of additional proxies use incremental ports. You can also use the proxy manager to modify the port that each proxy uses.
Workgroup Windows proxy support exists only for pre-8.2 Windows proxies, as Workgroup proxies no longer exist in the current release; all of their functionality has been moved into Active Directory Windows proxies.
J2EE Discovery
The port information used for J2EE discovery is determined in the patterns used to discover the particular J2EE Application Server. If no port information is discovered, the default port is used. In addition, for full extended discovery, the port of the database that the J2EE Application Server is using is also required; this depends on the way these servers are configured in your organization. The following table details the default port.

Port Number  Port Assignment  Use
7001         JMX              WebLogic
SQL discovery
The port information used for SQL discovery is derived in the patterns used to discover the particular database, and depends on the way databases are configured in your organization. The following table details the default ports.

Port Number  Port Assignment  Use
1521         SQL              Oracle
1433         SQL              MS SQL
4100         SQL              Sybase ASE
3306         SQL              MySQL
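The default ports can be thought of as a lookup that a discovered port, when the pattern finds one, overrides. A sketch in illustrative Python (not product code; the helper name is hypothetical):

```python
# Default SQL ports from the table above; a discovered port overrides the
# default. Illustrative only.
DEFAULT_SQL_PORTS = {
    "Oracle": 1521,
    "MS SQL": 1433,
    "Sybase ASE": 4100,
    "MySQL": 3306,
}

def sql_port(db_type, discovered_port=None):
    # use the port found during discovery when available, else the default
    return discovered_port if discovered_port is not None else DEFAULT_SQL_PORTS[db_type]

assert sql_port("MySQL") == 3306
assert sql_port("Oracle", discovered_port=1530) == 1530
```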
Discovery of vCenter
Discovery of vCenter uses standard host discovery, with the creation of a vCenter SI triggered on a discovered vCenter process.
Port Number  Port assignment                                 Direction  Use                                         Reference
22           SSH                                             -          UNIX Discovery                              Discovery communications
23           telnet                                          -          UNIX Discovery                              Discovery communications
25           SMTP                                            -          Email Relay                                 System communications
53           DNS                                             -          Domain Name Lookup                          System communications
80           HTTP                                            -          Main UI                                     System communications
80           HTTP                                            -          Standard Base Device Detection              Discovery communications
123          NTP                                             -          Time Synchronization                        System communications
135          DCE RPC Endpoint Manager. DCOM Service Control  -          Windows Discovery                           Discovery communications
513          rlogin                                          Outbound   UNIX Discovery                              Discovery communications
636          LDAPS                                           Outbound   LDAPS UI User Authentication                System communications
902          vSphere API                                     Outbound   VMware ESX/ESXi Discovery                   Discovery communications
1433         MS SQL                                          Outbound   MS SQL Extended Discovery                   Discovery communications
1521         Oracle SQL                                      Outbound   Oracle SQL Extended Discovery               Discovery communications
3306         MySQL SQL                                       Outbound   MySQL SQL Extended Discovery                Discovery communications
3940         Discovery for z/OS Agent                        Outbound   Mainframe Discovery                         Discovery communications
4100         Sybase SQL                                      Outbound   Sybase ASE SQL Extended Discovery           Discovery communications
4321         CORBA                                           Outbound   AD Windows proxy Windows Discovery          Discovery communications
4323         CORBA                                           Outbound   Credential Windows proxy Windows Discovery  Discovery communications
7001         JMX                                             Outbound   J2EE Extended Discovery                     Discovery communications
25032        CORBA                                           Outbound   Consolidation                               Discovery communications
ARTCPPORT value  AR System  Outbound  CMDB Sync  System communications
CMDB sync applies to a standalone appliance. Scanning appliances do not sync to the CMDB; this is done from the consolidating appliance.
Port assignment                                 Use                                 Reference
DCE RPC Endpoint Manager. DCOM Service Control  Windows Discovery                   Discovery communications
Netbios Session Service                         Windows Discovery                   Discovery communications
LDAP                                            AD User Authentication              System communications
Microsoft Directory Services SMB                Windows Discovery                   Discovery communications
Restricted DCOM                                 Windows Discovery                   Discovery communications
Unrestricted DCOM                               Windows Discovery                   Discovery communications
CORBA                                           AD Windows proxy Windows Discovery  Discovery communications
4323  CORBA  Inbound  Discovery communications
To determine whether your appliance is running on RHEL5, check the footer of any UI page. The last line displays the release and build number; if the appliance is running on RHEL5, this is stated between the release number and the build number.
You cannot mount a Windows share from a FIPS enabled appliance. The mount operation fails and an error message is written to syslog.
[root@appliance01 ~]# /usr/tideway/bin/tw_fips_control --enable
This script will enable or disable FIPS mode on your ADDM appliance.
The script must be run as the root user and FIPS mode is only supported on
Red Hat Enterprise Linux 6 based ADDM appliances.

Please note: To enable FIPS mode the script will modify the system's boot
configuration files (GRUB) and regenerate the boot-time kernel. Any manual
modifications made to these components may conflict with FIPS mode
configuration or have untoward effects.

A reboot is required if the current kernel mode needs to change. The script
will notify the user if this is the case.

Do you want to continue to enable FIPS mode (yes/no)? yes
Starting FIPS mode configuration.
Gathering current state of the system.
Enable FIPS mode in grub configuration file.
----------------------------------------------------------------------------
WARNING: The default SSL keys shipped with ADDM are NOT FIPS compliant.
         You MUST install your own FIPS compliant SSL keys if you have not
         already done so. Failure to install FIPS compliant keys will mean
         that ADDM services will not start.
----------------------------------------------------------------------------
Configuration complete. Please reboot to enable FIPS mode.
[root@appliance01 ~]#
Disabling FIPS mode on the appliance is accomplished by running the tw_fips_control script with the --disable option. The script modifies the boot configuration file and regenerates the boot-time kernel. This requires a reboot. You do not need to replace SSL keys after disabling FIPS mode.
For information on using Windows in FIPS mode, see this Microsoft knowledgebase article.
Legal Information
Copyright 2012 BMC Software, Inc. BMC, BMC Software, and the BMC Software logo are the exclusive properties of BMC Software, Inc., are registered with the U.S. Patent and Trademark Office, and may be registered or pending registration in other countries. All other BMC trademarks, service marks, and logos may be registered or pending registration in the U.S. or in other countries. All other trademarks or registered trademarks are the property of their respective owners. The information included in this documentation is the proprietary and confidential information of BMC Software, Inc., its affiliates, or licensors. Your use of this information is subject to the terms and conditions of the applicable End User License agreement for the product and to the proprietary and restricted rights notices included in the product documentation.
BMC Software, Inc. 5th Floor 157-197 Buckingham Palace Road London SW1W 9SP United Kingdom Tel: +44 (0) 800 279 8925 Email: customer_support@bmc.com Web: http://www.bmc.com/support
Troubleshooting
Release Date October 2012
Table of Contents
1 Table of Contents
2 Troubleshooting
  2.1 Writing efficient regular expressions
  2.2 Contacting Customer Support
  2.3 Troubleshooting Windows proxies
  2.4 If you encounter a problem
  2.5 Collecting additional data for support cases
    2.5.1 Getting record data
  2.6 Log files
  2.7 Troubleshooting Frequently Asked Questions
    2.7.1 Diagnosing Hostname Problems
  2.8 Cannot scan IPv6 addresses
  2.9 Unmounting partitions
3 Legal Information
Troubleshooting
This section provides details of possible problems and how to solve them.
Writing efficient regular expressions
Contacting Customer Support
Troubleshooting Windows proxies
If you encounter a problem
Collecting additional data for support cases
Log files
Troubleshooting Frequently Asked Questions
Cannot scan IPv6 addresses
Unmounting partitions
Appliance does not have an IP on eth0
PDF Download
You can download a PDF version of the Troubleshooting information here. Be aware that the PDF document is simply a snapshot and will be outdated if the wiki is updated after it is generated. The PDF file is generated from the wiki on each request, so it may take some time before the download dialog is displayed.
Regular expressions are used in tasks such as filtering sensitive information out of command lines. These regular expressions are often matched against large data sets, so it is important that they are written to be efficient.
\b.*xx.*foo
This was a regular expression that was compared against the command line of processes found on a host. It was failing when run against a half-megabyte string which included many repetitions of xx but did not contain foo. Let's break down what happens when it is matched against a string:
1. The engine starts scanning from the start of the string.
2. The engine scans forward until it finds a word boundary \b.
3. The engine scans forward from the word boundary until it finds a matching xx.
4. The engine scans forward from the xx until it finds the string foo or it reaches the end of the string.
5. If it has reached the end of the string without matching foo, it loops back to step 3 and scans forward to the next xx.
6. If it has tried every xx and still not found foo, it loops back to step 2 and scans forward to the next word boundary.
So the regular expression matching contains nested loops; the total processing time is determined by the length of the string (approximately 500,000 characters for the command line that was causing the problem) times the number of xx substrings (approximately 500) times the number of word boundaries (approximately 80,000). This was roughly equivalent to scanning a string twenty trillion characters long, and took more than a day to complete.
This was fixed in two ways. Firstly, the \b.* was removed from the start of the regular expression, since it served no purpose other than to slow the whole thing down. This change alone reduced the runtime from days to a few seconds. Secondly, we can use knowledge about the data we want to match; in this case we are only interested in the text from the start of the string up to the first space. To stop the regular expression scanning the entire string, we can anchor it to the start of the string with ^ and use the \S token to match non-whitespace characters. The final regular expression ^\S*xx\S*foo stops as soon as it reaches a whitespace character, and now takes a few microseconds when run against the same string.
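The difference can be demonstrated with a quick timing sketch in Python (the regular expression library BMC Atrium Discovery uses). The test string here is scaled down from the half-megabyte original so that the slow expression finishes quickly:

```python
import re
import time

# A scaled-down version of the problem string: many "xx" substrings and
# word boundaries, but no "foo" anywhere.
cmdline = "xx " * 100

slow = re.compile(r"\b.*xx.*foo")   # nested backtracking loops
fast = re.compile(r"^\S*xx\S*foo")  # anchored; gives up at the first space

t0 = time.perf_counter()
assert slow.search(cmdline) is None
slow_time = time.perf_counter() - t0

t0 = time.perf_counter()
assert fast.search(cmdline) is None
fast_time = time.perf_counter() - t0

# Both expressions still match when "foo" follows "xx" in the first token.
assert slow.search("axxbfoo --opt") is not None
assert fast.search("axxbfoo --opt") is not None

print("slow: %.6fs, fast: %.6fs" % (slow_time, fast_time))
```

Even at a few hundred characters the difference is measurable, and it grows rapidly with string length because the slow expression's runtime is the product of the three loop counts described above.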
-A(\D+)+-B(\D+)
This was used as a sensitive data filter. The intention was to scan command lines and remove options that start with -A and -B. However, not only did it fail to do what the writer intended, it performed in a manner that could take practically forever to process. Let's break it down:
1. Scan from the start of the string until we find -A.
2. Match all non-digit characters until we find -B.
3. If -B is not found, try matching every combination of groups between -A and the end of the string or the next digit.
For example, if the remainder of the string was abcd, then (\D+)+ would try each of the following groupings: (abcd), (abc)(d), (ab)(cd), (ab)(c)(d), (a)(b)(cd), (a)(bc)(d), (a)(bcd), (a)(b)(c)(d). The number of combinations doubles for each additional character in the remaining string. So where the command line contains -A not followed by -B, matching takes time proportional to 2^N, where N is the number of characters between the -A and the next digit or end of the string. To put that into perspective, on a standard PC a string of 22 characters takes about one second. A string of 40 characters would take about 3 days, and a string of 100 characters would take 9,583,696,565,945,500 years, though this has not been tested. This regular expression was fixed by removing the group repetition, since it served no purpose: -A(\D+)-B(\D+). The runtime went down from practically forever to a few microseconds.
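A short sketch (an illustration, not code from the product) showing both why the repeated group explodes and that the fixed expression behaves as intended. The 2^(N-1) count is simply the number of ways to split N non-digit characters into one or more non-empty groups:

```python
import re

# Ways that (\D+)+ can split a run of n non-digit characters into one or
# more non-empty groups: the compositions of n, which number 2**(n-1).
def group_splits(n):
    return 2 ** (n - 1)

assert group_splits(4) == 8         # the eight (abcd)...(a)(b)(c)(d) splits above
assert group_splits(22) == 2097152  # ~2 million attempts: roughly a second

# The fixed filter, with the pointless group repetition removed.
fixed = re.compile(r"-A(\D+)-B(\D+)")
match = fixed.search("run.sh -Asecret -Btoken more")
assert match.group(1) == "secret "
assert match.group(2) == "token more"
```

The command line passed to fixed.search above is hypothetical; the point is that the two capture groups now pick out the -A and -B option values directly, with no group repetition to backtrack over.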
a.*x*.*x - will also take N^2 matches, since x* can make a zero-length match and so can match anywhere in the string.
a.*y.*x - will take N*M matches, where M is the number of occurrences of y.
matched by the group in case it is needed later. This can slow the matching process down, in some cases by a factor of four or more. If you need to use parentheses, for example for a part of a regular expression that is repeated, but you do not need the contents of the group afterwards, you can use the non-capturing version of parentheses: (?:...). This is generally still slower than having no parentheses at all, but is faster than normal capturing parentheses. For example:
(abc|def) - slow. Only use if you need to use the matched text later.
(?:abc|def) - faster
abc|def - fastest
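A small sketch illustrating the three forms; only the capturing version records text, and for a repeated group only the final repetition is kept:

```python
import re

capturing     = re.compile(r"(abc|def)+")    # slowest: records each group match
non_capturing = re.compile(r"(?:abc|def)+")  # faster: groups without recording
bare          = re.compile(r"abc|def")       # fastest: no parentheses at all

m = capturing.match("defabc")
assert m.group(0) == "defabc"
assert m.group(1) == "abc"   # only the last repetition is retained

assert non_capturing.match("defabc").group(0) == "defabc"
assert bare.match("defabc").group(0) == "def"  # no repetition, so one token
```

The strings here are chosen only to show the matching behaviour; the speed ranking above comes from the bookkeeping each form requires, which you can confirm with the timeit technique shown in the Useful tools section.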
The index
Regular expressions are indexed by a property called a hint. A hint is a case-insensitive string that is a substring of every string that matches the expression. For example, the expression Hoist the (mizzen mast|main brace), ye (landlubbers|scurvy dogs)! has three obvious hints: "Hoist the ", ", ye " and "!". For simplicity, each expression is indexed only by its longest hint; in the example above, the hint would be "Hoist the ". Some regular expressions do not have any hints. For example, \d+[-/]\d+[-/]\d+ has no substring that must be part of every matching string. The hint calculator is also (currently) fairly naive, and misses some hints that could potentially be extracted. Expressions that have no hints must be treated separately. When trying to match a string, the index is queried for the set of regular expressions whose hint appears as a substring of the string being matched. Once built, the index can perform this query very quickly. This set, combined with the set of expressions that have no hints, forms the set of all the regular expressions that the string can potentially match. An attempt is made to match the string against each expression in turn until a match is found or there are no expressions left, meaning that the string does not match.
Further Reading
This section provides some suggested further reading.
Useful tools
There are a lot of useful tools for developing and testing regular expressions, and a few that have been found useful are listed below. Note that BMC Atrium Discovery uses the Python regular expression library. Other libraries may have different features, behavior and performance.
IPython

IPython is an enhanced interactive Python shell. It is useful for timing regular expression matches using the timeit magic command. For example:
In [1]: import re
In [2]: x = '-' * 1000000 + 'abc'
In [3]: timeit re.search('abc', x)
100 loops, best of 3: 2.28 ms per loop
In [4]:
Regex Coach
Regex Coach is a GUI regular expression debugger similar to Kodos. The big advantage of Regex Coach is that you can step through the regular expression as it attempts a match and watch the backtracking in action. This is very useful for getting a deeper understanding of how regular expressions work, and for debugging regular expressions that take much longer than expected. Regex Coach is written in Lisp but the regular expression library is similar to the Python library. The current version is available for Windows only but earlier versions are available for Linux and Mac OS X. It is free to use, but may not be distributed commercially without permission.
If you have installed a Windows proxy and it fails to connect, there are a number of checks that you should make before contacting Customer Support.
That rule tells omniORB that only the specified IP address can connect using SSL. The IP address should be that of the appliance. If multiple appliances are to use the same Windows proxy, add one serverTransportRule line for each.
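As a sketch, and assuming the standard omniORB configuration-rule syntax, two appliances sharing one Windows proxy might be allowed with lines like these (the IP addresses are placeholders; substitute your own appliances' addresses):

```
# Hypothetical appliance addresses - one rule per appliance, SSL only.
serverTransportRule = 192.168.10.21 ssl
serverTransportRule = 192.168.10.22 ssl
```

omniORB evaluates transport rules in order, so place the appliance-specific rules before any broader rules already present in the proxy's shipped configuration file.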
Name - The name of the appliance. By default this is Discovery_Appliance. This name is displayed at the top left of the BMC Atrium Discovery user interface and in the title bar of the browser.
ID - The appliance ID.
Admin Email - The email address of the appliance administrator.
Support Email - The email address of Customer Support.
Description - A free-text description of the appliance.
Serial Number - The serial number allocated to the appliance. This uniquely identifies the appliance.
Commission Date - The date that the appliance was commissioned.
Kickstart Version - Kickstart is a system configuration program. When an appliance is kickstarted, the version of the kickstart configuration file used during installation is displayed in this field.
Software Version - The BMC Atrium Discovery software version of the appliance.
Software Revision - The RPM (Red Hat Package Manager) version for the appliance.
To view logs
1. From the Appliance section of the Administration tab, select Logs.
2. Select the Log files tab (by default, the Log files tab is displayed).
3. A list of all the available logs is displayed on the Appliance Logs page.
4. Click the Watch link next to any log to view the log in detail.
The logs are described in Log files. The log level associated with each message is displayed. For information on log levels see Changing log levels at runtime.
1. To delete a log, click the Delete link next to it.
2. To download a log, click the Download link next to it. You can then choose to open or save the text document for that log.
3. To compress a larger file that you want to download, click the Compress link next to it. The log is given a .gz extension and is described as 'Compressed' in the Type column. The Watch link is also removed for that log.
You can also delete old logs by clicking the Delete Old Logs button on the main log display page. An old log is one that is no longer being written to because a replacement log file is currently being written.
4. Click the Session Logs link to show the list of session logs for that host. The Session Logs icon is not displayed if there are no logs associated with the selected host.
5. Click a session log to view it. The session log is displayed.
Save time! Send the right information when opening the support issue.
For a problem related to an SNMP device, collect the device capture first.
If you are not satisfied with the scan result of a host, collect:
the session logs
the record and pool data (see below)
the discovery and reasoning logs (see below)
To collect additional data that Customer Support requests, use the Appliance Support page, accessed by clicking the Administration tab, and then clicking the Appliance Support icon in the Appliance section of the tab. From the Appliance Support page you can do the following:
Collect BMC Atrium Discovery logs between specified dates.
Collect Target Host Data and session logs for specified IP addresses or ranges.
Collect miscellaneous files.
Create named archives of collected data.
Download and manage archives of collected data.
The following systems and aspects of the appliance create logs which may be useful to Customer Support when diagnosing problems with the appliance:
AppServer - used to diagnose UI errors.
Security - used to track user access and actions.
Performance - used to diagnose performance problems.
Discovery - used to diagnose discovery and data quality issues.
Model - used to diagnose queries or reporting problems.
Reasoning - used to diagnose discovery and data quality issues.
Integrations - used to diagnose problems with integrations.
Others - used to diagnose miscellaneous problems, for example, script problems.
Phurnace provider - used to diagnose web server discovery problems.
Mainframe provider - used to diagnose mainframe computer discovery problems.
AppServer Errors - used to diagnose UI errors.
Audit Logs - used for general diagnostics and user errors.
Windows proxies - used to diagnose Windows discovery problems.
The discovery data and logs are also stored on the appliance. These may help Customer Support diagnose discovery problems. Enter the IP ranges for which you wish to collect data:
Get session logs - raw discovery data used for diagnosing discovery and data quality issues.
Record data - used to reproduce discovery problems on test appliances.
Pool data - used to reproduce discovery problems on test appliances.
The following files and system logs may be useful to Customer Support when diagnosing problems with the appliance:
Configuration files - used to check the appliance configuration.
Customer extensions - used to check customer-configured extensions.
Export extensions - used to check customer-configured export extensions.
System messages - used to diagnose appliance-related problems.
omniNames files - used to check communications configuration.
Installed packages - used to ensure that the correct packages are installed.
bash history - used to check the command line history.
sar logs - used to diagnose performance problems.
Tripwire reports - used to view these reports.
Atrium Discovery output files - used to check the output files.
Generated rules - used to check the generated rules.
Report usage - provides information on the type of reports used and the number of times they were used.
Windows proxy configuration files - used to check the configuration of Windows proxies.
Note: If fully qualified domain names (FQDNs) are important when gathering data, *.*.*.* and ::/0 should both be used in Step 1 to ensure that all relevant record data is available in the archive. This is because getNames creates record data directories for the IP addresses it returns in FQDNList nodes, which may differ from the IP addresses in the injected range. For example, discovering an endpoint with its IPv6 address could result in the creation of both IPv4 and IPv6 directories.
To Create an Archive
To create an archive of the selected logs, data, and files, from the Create Archive section of the Appliance Support page:
1. Enter a name for the archive in the Name field.
2. Enter a description for the archive in the Description field.
3. Click the Gather button. The selected logs, data, and files are gathered into an archive and saved on the appliance. The archive and information relating to the archive are displayed in a new row of the Gathered Data Files section.
The following information is displayed for each archive:
Name - the name of the archive.
Description - the description entered when the archive was created.
Date - the date that the archive was created.
Size - the size (in bytes) of the archive.
User - the user who created the archive.
State - the state of the archive. This can be Creating or Complete.
Summary - a View button. Click this to see a list of the archive contents in a popup window.
tw_health_check
BMC Support may ask you to run the tw_health_check tool during an investigation. It gathers generic configuration, performance, and status information that is helpful to the support process.
5. When the archive is created, a row representing it is displayed in the Gathered Data Files section.
6. Click the name link to download the data file.
7. Send the data file to Customer Support as part of your case.
The Appliance Support page is described more fully in the appliance documentation.
Log files
As each BMC Atrium Discovery component and script runs, it outputs logging information. Logs are all stored in /usr/tideway/log. The files can be accessed directly from the appliance command line or in the log viewer in the user interface. Logs can be downloaded from the appliance through the Support Services administration page. Logs are critical to understanding problems that may be encountered in the system. This section describes the most important log files, and how to understand the information contained in them.
Log levels
Log messages are classified into the following levels:
DEBUG - fine-grained informational events that are most useful for debugging the application.
INFO - informational messages that highlight the progress of the application at a coarse-grained level.
WARNING - potentially harmful situations.
ERROR - error events that may still allow the application to continue running.
CRITICAL - severe error events that may cause the application to abort.
USEFUL - start-up messages.
Unless asked to do otherwise by Customer Support, it is recommended that you configure each service to log at INFO level.
Discovery log
tw_svc_discovery.log contains log messages from the Discovery subsystem, which is responsible for connecting to discovery targets and retrieving information from them. At INFO level, the log contains information about each discovery method that is executed; at DEBUG level, it contains a great deal of detail about how the discovery methods attempt different access mechanisms. tw_svc_integrations_sql.log contains the log messages related to database scans.
Reasoning logs
The Reasoning subsystem coordinates discovery and executes patterns, maintaining the data model. On multi-CPU machines, the Reasoning subsystem is split into multiple "ECA Engine" processes; the log files are split correspondingly, so the files are suffixed _nn.log, where nn is the engine number, such as 00 or 01. Reasoning files are split as follows:
tw_svc_reasoning.log - contains information from the top-level Reasoning control process. It is periodically updated with information about the states of the active scanning ranges. In the case of a pattern activation error, this file will help you to find the root cause.
tw_svc_eca_engine_nn.log - contains information about scanning activities, including when each IP address starts and completes scanning. Each separate ECA engine process has its own log. On single-CPU machines, the single ECA engine is part of the Reasoning control process, so this log output is in tw_svc_reasoning.log rather than a specific ECA engine log.
tw_svc_eca_patterns_nn.log - contains log messages from TPL patterns. If a pattern is misbehaving, its output will be here.
tw_svc_eca_trackers_nn.log - contains debug information about the event trackers in the ECA engines. These logs only contain useful output when the Reasoning subsystem is set to log at DEBUG level. The information is not intended for human consumption, but for processing by Customer Support diagnostic tools.
Model log
The "model" subsystem contains the data store, search, taxonomy and audit services. tw_svc_model.log contains log messages from the model subsystem. At INFO level, it contains details of all searches, and information about database checkpoints. At DEBUG level, it contains a great deal of detail about how data is stored and retrieved.
Security log
tw_svc_security.log contains output from the Security subsystem, which is responsible for authenticating users and authorizing user actions. It contains information about authentication failures. If LDAP-based authentication is used, it also contains information about the state of the connection to the LDAP server.
Performance logs
performance.log.* contain information about the appliance load, obtained by running the top command every 10 minutes. This is useful for diagnosing performance issues.
Script logs
In addition to the main subsystem logs, the logs from a number of scripts are often relevant: tw_upgrade.log contains a log of actions performed during a BMC Atrium Discovery version upgrade. tw_tax_import.log contains a log of the actions of the Taxonomy importer. If a Taxonomy extension fails to import, the details will be in this log. tw_injectip.log contains any messages output by the tw_injectip tool for creating scanning ranges. tw_imp_csv.log contains messages output by the CSV import tool, which can be run from the command line as tw_imp_csv, or via the web UI in Administration > CSV Data Import.
Log roll-over
To prevent the log files from growing without bound, they are rolled on a daily basis. The first time a log message is output after midnight (appliance time), the current .log file is renamed to correspond to the date, in the form .log.YYYY.MM.DD, and a new log file ending simply .log is opened. Note that since it is the output of a log entry after midnight that causes the roll-over, if a subsystem has not output any log messages since midnight, the .log file may contain entries from the previous day(s). This also means that unless execution of a script straddles midnight, script log files do not roll over, and the .log file will contain information about every run of the script. Logs are automatically compressed seven days after they are created. When a log is compressed it is no longer available in any of the log selector drop-down lists. Compressed logs are automatically deleted 30 days after the initial log file was created. Old logs can be manually compressed or deleted in the log viewer, or directly using the command line.
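The naming scheme can be illustrated with a short Python sketch (an illustration of the convention described above, not code from the product):

```python
import datetime

def rolled_name(logname, day):
    """Name given to a .log file when it is rolled on the given date."""
    return "%s.%s" % (logname, day.strftime("%Y.%m.%d"))

assert (rolled_name("tw_svc_discovery.log", datetime.date(2012, 10, 31))
        == "tw_svc_discovery.log.2012.10.31")
```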
UI Tracebacks
Whenever the appserver component experiences an unhandled error, you are presented with an error message. When this happens, the UI obscures a detailed traceback page which may contain useful information to assist with troubleshooting and debugging. The appserver saves the entire HTML page that is being hidden as a self-contained HTML file that can be opened in a browser. These files are stored in the following location on the appliance: /usr/tideway/python/ui/web/ErrorMsgs. You can manually delete these files from the appliance command line.
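Since these files accumulate until they are deleted, a small housekeeping sketch like the following could prune them. The keep-newest-N policy is purely illustrative, not a documented product behaviour, and the demonstration runs against a temporary directory rather than the live ErrorMsgs directory:

```python
import pathlib
import tempfile

def prune_error_pages(dirpath, keep=10):
    """Delete all but the newest `keep` saved traceback pages (*.html)."""
    pages = sorted(pathlib.Path(dirpath).glob("*.html"),
                   key=lambda p: p.stat().st_mtime, reverse=True)
    removed = pages[keep:]
    for page in removed:
        page.unlink()
    return len(removed)

# Demonstrate against a throwaway directory, not the appliance itself.
with tempfile.TemporaryDirectory() as d:
    for i in range(15):
        (pathlib.Path(d) / ("error%02d.html" % i)).write_text("<html></html>")
    assert prune_error_pages(d, keep=10) == 5
    assert len(list(pathlib.Path(d).glob("*.html"))) == 10
```

On an appliance you would point prune_error_pages at /usr/tideway/python/ui/web/ErrorMsgs, ideally after confirming the pages are no longer needed for a support case.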
Will any network security need to be disabled for the discovery process?
Obviously, the BMC Atrium Discovery appliance needs to be able to reach areas of the network in order to discover hosts. However, various methods of providing secure access are possible without disabling firewalls and access control policies, including the use of VPN tunnels and Windows proxies for BMC Atrium Discovery appliances. Some IDS systems may identify certain discovery activities (such as port scans) as suspicious.
What is the impact of my applications running on platforms that are not supported by BMC Atrium Discovery?
The discovery process will identify endpoints on such computers if they are visible from other hosts. You will need to complete details of the programs running on them manually, though it may also be possible to categorize some components of the applications running on the unsupported platform by the port that they, or their counterparts, are listening on.
Can the product introduce any risk into my network or application infrastructure?
By providing a clear picture of your total IT infrastructure, BMC Atrium Discovery can actually reduce risk in your network by allowing you to weed out rogue elements that do not meet corporate policy, are out of date, or present potential security holes. The BMC Atrium Discovery discovery process uses standard techniques that should not destabilize elements of the infrastructure. Since there are always risks in deploying new technology, BMC's implementation plan involves analyzing areas of potential risk and achieving the right balance of risk and reward. BMC's test plan is also aimed at minimizing risk, ideally including testing in the customer's test environment.
Changing passwords for command line users
The tw_passwd utility is for changing UI users' passwords. To change the passwords of command line users, use the Linux passwd command as the root user. This is described in Changing the root and user passwords.
If you have any other questions about BMC Atrium Discovery, please contact Customer Support.
where local_hostname is the hostname set on the computer. To resolve this problem, see the previous section, Setting the hostname locally.
localhost cannot be resolved. If this is the case, errors of the following form are displayed when restarting the tideway services:
[current date/time] [error] couldn't resolve WKServer address
[current date/time] [error] Couldn't resolve hostname for WebKit Server
Unmounting partitions
Unmounting partitions on used disks
To unmount a partition on a "Non ADDM Disk", log in to the appliance command line as the tideway user.
1. Check the mounted partitions. Enter:
[tideway@appliance01 ~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda7             965M  273M  643M  30% /
/dev/sda1              99M   17M   78M  18% /boot
/dev/sda6             1.5G   35M  1.4G   3% /home
/dev/sda5             1.9G   36M  1.8G   2% /tmp
/dev/sda8              38G   30G  6.7G  82% /usr
/dev/sda3             2.4G  617M  1.7G  27% /var
/dev/sdb2             1.9G   36M  1.8G   2% /mnt/old
/dev/sdc2             1.2G   37M  1.1G   4% /mnt/addm/data
tmpfs                 1.5G     0  1.5G   0% /dev/shm
[tideway@appliance01 ~]$
In this example, /dev/sdb2 is mounted on /mnt/old, and shows as a "Non ADDM Disk" in the disk configuration utility. It must be unmounted before it can be configured using the disk configuration utility, where it is then displayed as a "New Disk".
2. Unmount the /mnt/old partition. Enter:
[tideway@appliance01 ~]$ sudo umount /mnt/old [tideway@appliance01 ~]$
The /mnt/old partition is unmounted. 3. Check for enabled swap partitions. Enter:
[tideway@appliance01 ~]$ /sbin/swapon -s
Filename                        Type            Size     Used    Priority
/dev/sda2                       partition       4192956  0       -1
/dev/sdb1                       partition       2928700  0       -2
[tideway@appliance01 ~]$
In this example /dev/sdb1 is an enabled swap partition. 4. Disable the /dev/sdb1 swap partition. Enter:
[tideway@appliance01 ~]$ sudo /sbin/swapoff /dev/sdb1 [tideway@appliance01 ~]$